# day23: Introduction to autograd for gradient descent
# Objectives
We will practice:
* Using the **autograd** Python package to compute gradients of functions
* Using gradients from autograd to do a basic linear regression
# Outline
* [Part 1: Autograd for scalar input, scalar output functions](#part1)
* [Part 2: Autograd for vector input, scalar output functions](#part2)
* [Part 3: Using autograd with simple gradient descent procedure](#part3)
* [Part 4: Using autograd to solve linear regression](#part4)
* [Part 5: Using autograd with functions that take dicts and other data structures](#part5)
# Takeaways
* Automatic differentiation is a powerful idea that has made experimenting with different models and loss functions far easier than it was even 8 years ago.
* The Python package `autograd` is a wonderfully simple tool that makes this work with numpy/scipy.
* `autograd` works by a super-smartly implemented version of the backpropagation dynamic programming we've already discussed in Unit 3.
    * Basically, after doing a "forward" pass to evaluate the function, we do a "reverse" pass through the computation graph and compute gradients via the chain rule.
    * This general-purpose method is called [reverse-mode differentiation](https://github.com/HIPS/autograd/blob/master/docs/tutorial.md#reverse-mode-differentiation)
* `autograd` does NOT do symbolic math!
    * e.g. It does not simplify `ag_np.sqrt(ag_np.square(x))` to `x`. It will use the chain rule on all nested functions that the user specifies.
* `autograd` does NOT do numerical approximations to gradients.
    * e.g. It does not estimate gradients by perturbing inputs slightly (see the quick check after this list).
* In Part 5, we see how we can define losses in terms of dictionaries, which let us define complicated models with many different parameters. We'll exploit this in Project C for matrix factorization with many parameters.
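To make the last two points concrete, here is a small check (added here, not part of the original lab) comparing the gradient returned by `autograd` with a centered finite-difference approximation: the numbers agree closely, but autograd computes the exact chain-rule derivative rather than estimating it by perturbation.
```python
import autograd
import autograd.numpy as ag_np

def f(x):
    return ag_np.exp(-x / 10) * ag_np.cos(x)

g = autograd.grad(f)                 # exact reverse-mode derivative

def finite_difference(f, x, h=1e-6):
    # centered finite-difference approximation, shown only for comparison
    return (f(x + h) - f(x - h)) / (2 * h)

x = 1.234
print("autograd gradient       :", g(x))
print("finite-difference check :", finite_difference(f, x))
```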
# Limitations
FYI, there are some things that autograd *cannot* handle that you should be aware of.
Make sure any loss function you define that you want to differentiate does not do any of these things (a short illustration follows this list):
* Do not use assignment to elements of arrays, like `A[0] = x` or `A[1] = y`
    * Instead, compute entries individually and then stack them together.
    * Like this: `x = ...; y = ...; A = ag_np.hstack([x, y])`
* Do not rely on implicit casting of lists to arrays, like `A = ag_np.sum([x, y])`
    * Instead, use `A = ag_np.sum(ag_np.array([x, y]))`.
* Do not use `A.dot(B)` notation
    * Instead, use `ag_np.dot(A, B)`
* Avoid in-place operations (such as `a += b`)
    * Instead, use `a = a + b`
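Here is a minimal illustration (my own sketch, not from the lab) of the first rule: the first version breaks autograd because it assigns into array elements, while the second builds the array by stacking.
```python
import autograd
import autograd.numpy as ag_np
import numpy as np

def bad_loss(x):
    A = np.zeros(2)
    A[0] = x                         # element assignment: autograd cannot track this
    A[1] = 2.0 * x
    return ag_np.sum(ag_np.square(A))

def good_loss(x):
    A = ag_np.hstack([x, 2.0 * x])   # build the array by stacking instead
    return ag_np.sum(ag_np.square(A))

print(autograd.grad(good_loss)(3.0))   # 10 * x = 30.0
# autograd.grad(bad_loss)(3.0) raises an error (or silently returns a wrong gradient)
```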
# Further Reading
Check out these great resources
* Official tutorial for the autograd package: https://github.com/HIPS/autograd/blob/master/docs/tutorial.md
* Short list of what autograd *can* and *cannot* do: https://github.com/HIPS/autograd/blob/master/docs/tutorial.md#supported-and-unsupported-parts-of-numpyscipy
```python
## Import numpy
import numpy as np
import pandas as pd
import copy
```
```python
## Import autograd
import autograd.numpy as ag_np
import autograd
```
```python
# Import plotting libraries
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('seaborn') # pretty matplotlib plots
import seaborn as sns
sns.set('notebook', font_scale=1.25, style='whitegrid')
```
<a name="part1"></a>
# PART 1: Using autograd.grad for univariate functions
Suppose we have a mathematical function of interest $f(x)$.
For now, we'll assume this function has a scalar input and scalar output. This means:
* $x \in \mathbb{R}$
* $f(x) \in \mathbb{R}$
We can ask: what is the derivative (aka *gradient*) of this function:
$$
g(x) \triangleq \frac{\partial}{\partial x} f(x)
$$
Instead of computing this gradient by hand via calculus/algebra, we can use `autograd` to do it for us.
First, we need to implement the math function $f(x)$ as a **Python function** `f`.
The Python function `f` needs to satisfy the following requirements:
* INPUT 'x': scalar float
* OUTPUT 'f(x)': scalar float
* All internal operations are composed of calls to functions from `ag_np`, the `autograd` version of numpy
### From numpy to autograd's wrapper of numpy
You might be used to importing numpy as `import numpy as np`, and then using this shorthand for `np.cos(0.0)` or `np.square(5.0)` etc.
For autograd to work, you need to instead use **autograd's** provided numpy wrapper interface:
`import autograd.numpy as ag_np`
The `ag_np` module has the same API as `numpy`. So for example, you can call
* `ag_np.cos(0.0)`
* `ag_np.square(5.0)`
* `ag_np.sum(a_N)`
* `ag_np.mean(a_N)`
* `ag_np.dot(u_NK, v_KM)`
Or almost any other function you usually would use with `np`
**Summary:** Make sure your function `f` produces a scalar and only uses functions within the `ag_np` wrapper
### Example: f(x) = x^2
$$
f(x) = x^2
$$
```python
def f(x):
    return ag_np.square(x)
```
```python
f(0.0)
```
0.0
```python
f(1.0)
```
1.0
```python
f(2.0)
```
4.0
### Computing gradients with autograd
Given a Python function `f` that meets our requirements and evaluates $f(x)$, we want a Python function `g` that computes the gradient $g(x) \triangleq \frac{\partial}{\partial x} f(x)$.
We can use `autograd.grad` to create a Python function `g`
```
g = autograd.grad(f) # create function g that produces gradients of input function f
```
The symbol `g` is now a **Python function** that takes the same input as `f`, but produces the derivative at a given input.
```python
g = autograd.grad(f)
```
```python
# 'g' is just a function.
# You can call it as usual, by providing a scalar float input
g(0.0)
```
0.0
```python
g(1.0)
```
2.0
```python
g(2.0)
```
4.0
```python
g(3.0)
```
6.0
### Discussion 1a: Do you agree that the printed values above are correct? Why or why not?
```python
# TODO discuss
```
### Plot to demonstrate the gradient function side-by-side with original function
```python
# Input values evenly spaced between -5 and 5
x_grid_G = np.linspace(-5, 5, 100)
fig_h, subplot_grid = plt.subplots(nrows=1, ncols=2, sharex=True, sharey=True, squeeze=False)
subplot_grid[0,0].plot(x_grid_G, [f(x_g) for x_g in x_grid_G], 'k.-')
subplot_grid[0,0].set_title('f(x) = x^2')
subplot_grid[0,1].plot(x_grid_G, [g(x_g) for x_g in x_grid_G], 'b.-')
subplot_grid[0,1].set_title('gradient of f(x)');
```
### Exercise 1b:
Consider the decaying periodic function below. Can you compute its derivative using autograd and plot the result?
$$
f(x) = e^{-x/10} * cos(x)
$$
```python
def f(x):
    return ag_np.exp(-x/10) * ag_np.cos(x)
g = autograd.grad(f) # TODO define g as gradient of f, using autograd's `grad`
# TODO plot the result
x_grid_G = np.linspace(-10, 10, 500)
fig_h, subplot_grid = plt.subplots(nrows=1, ncols=2, sharex=True, sharey=True, squeeze=False)
subplot_grid[0,0].plot(x_grid_G, [f(x_g) for x_g in x_grid_G], 'k.-');
subplot_grid[0,0].set_title('f(x) = exp(-x/10) * cos(x)');
subplot_grid[0,1].plot(x_grid_G, [g(x_g) for x_g in x_grid_G], 'b.-');
subplot_grid[0,1].set_title('gradient of f(x)');
```
# PART 2: Using autograd.grad for functions with multivariate input
Now, imagine the input $x$ could be a vector of size D.
Our mathematical function $f(x)$ will map each input vector to a scalar.
We want the gradient function
\begin{align}
g(x) &\triangleq \nabla_x f(x)
\\
&= [
\frac{\partial}{\partial x_1} f(x)
\quad \frac{\partial}{\partial x_2} f(x)
\quad \ldots \quad \frac{\partial}{\partial x_D} f(x) ]
\end{align}
Instead of computing this gradient by hand via calculus/algebra, we can use autograd to do it for us.
First, we implement math function $f(x)$ as a **Python function** `f`.
The Python function `f` needs to satisfy the following requirements:
* INPUT 'x': numpy array of float
* OUTPUT 'f(x)': scalar float
* All internal operations are composed of calls to functions from `ag_np`, the `autograd` version of numpy
### Worked Example 2a
Let's set up a function that is defined as the inner product of the input vector x with some weights $w$
We assume both $x$ and $w$ are $D$ dimensional vectors
$$
f(x) = \sum_{d=1}^D x_d w_d
$$
Define the fixed weights
```python
D = 2
w_D = np.asarray([1., 2.,])
```
Define the function `f` using `ag_np` wrapper functions only
```python
def f(x_D):
    return ag_np.dot(x_D, w_D) # dot product is just inner product in this case
```
Use `autograd.grad` to get the gradient function `g`
```python
g = autograd.grad(f)
```
Try putting in the all-zero vector
```python
x_D = np.zeros(D)
print("x_D", x_D)
print("f(x_D) = %.3f" % (f(x_D)))
```
x_D [0. 0.]
f(x_D) = 0.000
Compute the gradient wrt that all-zero vector
```python
g(x_D)
```
array([1., 2.])
Try another input vector
```python
x_D = np.asarray([1., 2.])
print("x_D", x_D)
print("f(x_D) = %.3f" % (f(x_D)))
```
x_D [1. 2.]
f(x_D) = 5.000
Compute the gradient wrt the vector [1, 2]
```python
g(x_D)
```
array([1., 2.])
### Discussion 2b: Does this gradient computation agree with what you expect?
```python
# TODO discuss
```
### Exercise 2c:
Let's set up a function that is just the sum-of-squares penalty on the input vector x
$$
f(x) = \sum_{d=1}^D x_d^2
$$
Define the function `f` using `ag_np` wrapper functions only
```python
def f(x_D):
    return ag_np.sum(x_D * x_D) # TODO define sum-of-squares function f via calls to ag_np functions
```
Use `autograd.grad` to get the gradient function `g`
```python
g = autograd.grad(f)
```
Try out an input vector
```python
x_D = np.asarray([1., 2.])
print("x_D", x_D)
print("f(x_D) = %.3f" % (f(x_D)))
```
x_D [1. 2.]
f(x_D) = 5.000
Compute the gradient.
```python
g(x_D)
```
array([2., 4.])
### Discussion 2d: Mathematically, what should the gradient of the sum-of-squares function be when x is all zero? Does your function `g` agree?
TODO try out feeding `np.zeros(D)` into your `g` function and discuss if you get what you expect
```python
g(np.zeros(D))
```
array([0., 0.])
# Part 3: Using autograd gradients within gradient descent to solve multivariate optimization problems
### Helper function: basic gradient descent
Here's a very simple function that will perform many gradient descent steps to optimize a given function.
```python
def run_many_iters_of_gradient_descent(f, g, init_x_D=None, n_iters=100, step_size=0.001):
    ''' Run many iterations of GD

    Args
    ----
    f : python function (D,) to float
        Maps vector x_D to scalar loss
    g : python function, (D,) to (D,)
        Maps vector x_D to gradient g_D
    init_x_D : 1D array, shape (D,)
        Initial value for the input vector
    n_iters : int
        Number of gradient descent update steps to perform
    step_size : positive float
        Step size or learning rate for GD

    Returns
    -------
    x_D : 1D array, shape (D,)
        Best value of input vector for provided loss f found via this GD procedure
    history : dict
        Contains history of this GD run useful for plotting diagnostics
    '''
    # Copy the initial parameter vector
    x_D = copy.deepcopy(init_x_D)
    # Create data structs to track the per-iteration history of different quantities
    history = dict(
        iter=[],
        f=[],
        x_D=[],
        g_D=[])
    for iter_id in range(n_iters):
        if iter_id > 0:
            x_D = x_D - step_size * g(x_D)
        history['iter'].append(iter_id)
        history['f'].append(f(x_D))
        history['x_D'].append(x_D)
        history['g_D'].append(g(x_D))
    return x_D, history
```
### Worked Example 3a: Minimize f(x) = sum(square(x))
It's easy to figure out that the vector with smallest L2 norm (smallest sum of squares) is the all-zero vector.
Here's a quick example of showing that using gradient functions provided by autograd can help us solve the optimization problem:
$$
\min_x \sum_{d=1}^D x_d^2
$$
```python
def f(x_D):
    return ag_np.sum(ag_np.square(x_D))
g = autograd.grad(f)
# Initialize at x_D = [6, 4, -3, -5]
D = 4
init_x_D = np.asarray([6.0, 4.0, -3.0, -5.0])
```
```python
opt_x_D, history = run_many_iters_of_gradient_descent(f, g, init_x_D, n_iters=1000, step_size=0.01)
```
```python
# Make plots of how x parameter values evolve over iterations, and function values evolve over iterations
# Expected result: f goes to zero, and all x values go to zero.
fig_h, subplot_grid = plt.subplots(
nrows=1, ncols=2, sharex=True, sharey=False, figsize=(15,3), squeeze=False)
for d in range(D):
    subplot_grid[0,0].plot(history['iter'], np.vstack(history['x_D'])[:,d], label='x[%d]' % d);
subplot_grid[0,0].set_xlabel('iters')
subplot_grid[0,0].set_ylabel('x_d')
subplot_grid[0,0].legend(loc='upper right')
subplot_grid[0,1].plot(history['iter'], history['f'])
subplot_grid[0,1].set_xlabel('iters')
subplot_grid[0,1].set_ylabel('f(x)');
```
### Exercise 3b: Minimize the 'trid' function
Given a 2-dimensional vector $x = [x_1, x_2]$, the trid function is:
$$
f(x) = (x_1-1)^2 + (x_2-1)^2 - x_1 x_2
$$
Background and Picture: <https://www.sfu.ca/~ssurjano/trid.html>
Can you use autograd + gradient descent to find the optimal value $x^*$ that minimizes $f(x)$?
You can initialize your gradient descent at [+1.0, -1.0]
```python
def f(x_D):
    return ag_np.power(x_D[0] - 1, 2) + ag_np.power(x_D[1] - 1, 2) - (x_D[0] * x_D[1]) # TODO
g = autograd.grad(f) # TODO

# Initialize at x_D = [+1.0, -1.0] (the trid function here is 2-dimensional)
D = 2
init_x_D = np.asarray([1.0, -1.0])
```
```python
# TODO call run_many_iters_of_gradient_descent() with appropriate args
opt_x_D, history = run_many_iters_of_gradient_descent(f, g, init_x_D, n_iters=1000, step_size=0.01)
```
```python
# TRID example
# Make plots of how x parameter values evolve over iterations, and function values evolve over iterations
# Expected result: x converges to [2, 2], where f attains its minimum value of -2
fig_h, subplot_grid = plt.subplots(
nrows=1, ncols=2, sharex=True, sharey=False, figsize=(15,3), squeeze=False)
for d in range(D):
    subplot_grid[0,0].plot(history['iter'], np.vstack(history['x_D'])[:,d], label='x[%d]' % d);
subplot_grid[0,0].set_xlabel('iters')
subplot_grid[0,0].set_ylabel('x_d')
subplot_grid[0,0].legend(loc='upper right')
subplot_grid[0,1].plot(history['iter'], history['f'])
subplot_grid[0,1].set_xlabel('iters')
subplot_grid[0,1].set_ylabel('f(x)');
```
# Part 4: Solving linear regression with gradient descent + autograd
We observe $N$ examples $(x_n, y_n)$ consisting of D-dimensional 'input' vectors $x_n$ and scalar outputs $y_n$.
Consider the multivariate linear regression model for making a prediction given any input vector $x_i \in \mathbb{R}^D$:
\begin{align}
\hat{y}(x_i) = w^T x_i
\end{align}
One way to train weights would be to just compute the weights that minimize mean squared error
\begin{align}
\min_{w \in \mathbb{R}^D} \sum_{n=1}^N (y_n - x_n^T w )^2
\end{align}
### Toy Data for linear regression task
We'll generate data that comes from an idealized linear regression model.
Each example has D=2 dimensions for x.
* The first dimension is weighted by +4.2.
* The second dimension is weighted by -4.2
```python
N = 100
D = 2
sigma = 0.1
true_w_D = np.asarray([4.2, -4.2])
true_bias = 0.1
train_prng = np.random.RandomState(0)
x_ND = train_prng.uniform(low=-5, high=5, size=(N,D))
y_N = np.dot(x_ND, true_w_D) + true_bias + sigma * train_prng.randn(N)
```
### Toy Data Visualization: Pairplots for all possible (x_d, y) combinations
You can clearly see the slopes of the lines:
* x1 vs y plot: slope is around +4
* x2 vs y plot: slope is around -4
```python
sns.pairplot(
data=pd.DataFrame(np.hstack([x_ND, y_N[:,np.newaxis]]), columns=['x1', 'x2', 'y']));
```
```python
# Define the optimization problem as an AUTOGRAD-able function wrt the weights w_D
def calc_squared_error_loss(w_D):
    return ag_np.sum(ag_np.square(ag_np.dot(x_ND, w_D) - y_N))
```
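For reference, the gradient that autograd will compute for this loss can also be written down in closed form using standard least-squares algebra (with $X$ denoting the $N \times D$ matrix of inputs):
\begin{align}
\nabla_w \sum_{n=1}^N (x_n^T w - y_n)^2 = 2 \sum_{n=1}^N (x_n^T w - y_n)\, x_n = 2\, X^T (X w - y)
\end{align}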
```python
# Test the *loss function* at the known "ideal" initial point
calc_squared_error_loss(true_w_D)
```
1.6882674607128603
```python
# Create an all-zero weight array to use as our initial guess
init_w_D = np.zeros(2)
```
```python
# Test the *loss function* at that all-zero initial point
calc_squared_error_loss(init_w_D)
```
30431.701153286307
```python
# Use autograd.grad to build the gradient function
calc_grad_wrt_w = autograd.grad(calc_squared_error_loss)
```
```python
# Test the gradient function at that same initial point
calc_grad_wrt_w(init_w_D)
```
array([-7148.8368846 , 7344.46400842])
### Discussion 4a: Is the gradient pointing in the right direction? Why or why not?
TODO discuss
### Run gradient descent
Use the code below to run GD on our simple regression problem
```python
# Because the gradient's magnitude is very large, use very small step size
opt_w_D, history = run_many_iters_of_gradient_descent(
calc_squared_error_loss, calc_grad_wrt_w, init_w_D,
n_iters=400, step_size=0.00001,
)
```
```python
# LinReg worked example
# Make plots of how w_D parameter values evolve over iterations, and function values evolve over iterations
# Expected result: w values approach the true weights [4.2, -4.2] and the loss steadily decreases
fig_h, subplot_grid = plt.subplots(
nrows=1, ncols=2, sharex=True, sharey=False, figsize=(15,3), squeeze=False)
for d in range(D):
    subplot_grid[0,0].plot(history['iter'], np.vstack(history['x_D'])[:,d], label='w[%d]' % d);
subplot_grid[0,0].set_xlabel('iters')
subplot_grid[0,0].set_ylabel('w_d')
subplot_grid[0,0].legend(loc='upper right')
subplot_grid[0,1].plot(history['iter'], history['f'])
subplot_grid[0,1].set_xlabel('iters')
subplot_grid[0,1].set_ylabel('sum of squared errors');
```
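As an extra sanity check (not part of the original lab), this least-squares objective also has a closed-form minimiser via the normal equations, so we can compare the gradient-descent estimate against `numpy`'s least-squares solver:
```python
# Closed-form least-squares solution, for comparison with the GD estimate
w_lstsq_D, _, _, _ = np.linalg.lstsq(x_ND, y_N, rcond=None)
print("GD estimate    :", opt_w_D)
print("lstsq estimate :", w_lstsq_D)
print("true weights   :", true_w_D)
```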
### Discussion 4b: Do these trace plots indicate we have converged to good weight vector values? Why or why not?
TODO discuss
```python
```
# Part 5: Autograd for functions of data structures of arrays
#### Useful Fact: autograd can take derivatives with respect to DATA STRUCTURES of parameters
This can help us when it is natural to define models in terms of several parts (e.g. NN layers).
We don't need to turn our many model parameters into one giant weights-and-biases vector. We can express our thoughts more naturally.
### Demo 1: gradient of a LIST of parameters
```python
def f(w_list_of_arr):
    return ag_np.sum(ag_np.square(w_list_of_arr[0])) + ag_np.sum(ag_np.square(w_list_of_arr[1]))
g = autograd.grad(f)
```
```python
w_list_of_arr = [np.zeros(3), np.arange(5, dtype=np.float64)]
print("Type of the gradient is: ")
print(type(g(w_list_of_arr)))
print("Result of the gradient is: ")
g(w_list_of_arr)
```
### Demo 2: gradient of DICT of parameters
```python
def f(dict_of_arr):
    return ag_np.sum(ag_np.square(dict_of_arr['weights'])) + ag_np.sum(ag_np.square(dict_of_arr['bias']))
g = autograd.grad(f)
```
```python
dict_of_arr = dict(weights=np.arange(5, dtype=np.float64), bias=4.2)
print("Type of the gradient is: ")
print(type(g(dict_of_arr)))
print("Result of the gradient is: ")
g(dict_of_arr)
```
### Exercise 5a: Try to implement gradient descent for linear regression with weights and bias parameters
The above example only uses weights on the dimensions of each $x$ vector, and thus can only learn linear models that pass through the origin.
Can you instead optimize a model that includes a **bias** parameter $b>0$?
The predictions look like:
\begin{align}
\hat{y}(x_i) = w^T x_i + b
\end{align}
The training problem is:
\begin{align}
\min_{w \in \mathbb{R}^D,b \in \mathbb{R}} \sum_{n=1}^N (y_n - w^T x_n - b)^2
\end{align}
Use this format of parameter dictionary
```python
init_param_dict = dict(
w_D=np.zeros(D),
b=1.23)
```
Exercise 5a TODO: Define a function called `calc_loss` that computes our training loss given a parameter dictionary
```python
def calc_loss(param_dict):
    w_D = param_dict['w_D'] # Unpack weight array
    b = param_dict['b'] # Unpack bias scalar
    return 0.0 # TODO fix me
```
Exercise 5a TODO: build function `g` that can compute gradients
```python
g = None # TODO fix me
```
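One possible way to fill in these TODOs (a sketch of a solution, not the official one) mirrors the earlier squared-error loss while adding the bias inside the prediction; the gradient of a dict-input function is itself a dict with the same keys:
```python
def calc_loss(param_dict):
    w_D = param_dict['w_D']                    # Unpack weight array
    b = param_dict['b']                        # Unpack bias scalar
    yhat_N = ag_np.dot(x_ND, w_D) + b          # predictions now include the bias
    return ag_np.sum(ag_np.square(yhat_N - y_N))

g = autograd.grad(calc_loss)   # gradient is a dict with the same keys ('w_D', 'b')
```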
Given: use the implementation below of gradient descent on a parameter dictionary representation
```python
def run_many_iters_of_gradient_descent_for_param_dict(f, g, init_param_dict=None, n_iters=100, step_size=0.001):
    ''' Run many iterations of GD

    Args
    ----
    f : python function of dict to float
        Maps dict of arrays to scalar loss
    g : python function of dict to dict
        Maps dict of arrays to gradient dict of arrays
    init_param_dict : dict
        Initial values for the input parameters
    n_iters : int
        Number of gradient descent update steps to perform
    step_size : positive float
        Step size or learning rate for GD

    Returns
    -------
    opt_param_dict : dict
        Best value of parameter dict for provided loss f found via this GD procedure
    history : dict
        Contains history of this GD run useful for plotting diagnostics
    '''
    # Copy the initial parameter dict
    param_dict = copy.deepcopy(init_param_dict)
    # Create data structs to track the per-iteration history of different quantities
    history = dict(
        iter=[],
        f=[],
        param_dict=[],
        grad_dict=[])
    for iter_id in range(n_iters):
        if iter_id > 0:
            grad_dict = g(param_dict)
            for key in param_dict.keys():
                p_arr = param_dict[key] # current param array
                g_arr = grad_dict[key]  # current gradient array
                p_arr = p_arr - step_size * g_arr
                param_dict[key] = p_arr # store as latest value in dictionary
        history['iter'].append(iter_id)
        history['f'].append(f(param_dict))
        history['param_dict'].append(param_dict)
        history['grad_dict'].append(g(param_dict))
    return param_dict, history
```
TODO Run gradient descent to see what you get
```python
```
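A hedged example of what that run could look like, assuming `calc_loss` and `g` have been filled in as sketched above (the step size and iteration count are just illustrative choices, matching the earlier worked example):
```python
opt_param_dict, history = run_many_iters_of_gradient_descent_for_param_dict(
    calc_loss, g, init_param_dict,
    n_iters=400, step_size=0.00001,
)
print("Estimated weights:", opt_param_dict['w_D'])
print("Estimated bias   :", opt_param_dict['b'])
```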
*Source: `labs/day23_AutogradForGradientDescent.ipynb` from the `brawnerquan/comp135-20f-assignments` repository (MIT license).*
# Integrating out the noise in Gaussian likelihood problems
A property of Gaussian likelihoods and certain priors is that the joint posterior distribution $p(\theta,\sigma|X)$ can be analytically integrated with respect to $\sigma$ to yield the marginal distribution $p(\theta|X)$ up to a constant of proportionality,
$\begin{align} p(\theta|X) &= \int_{0}^{\infty} p(\theta, \sigma|X) \mathrm{d}\sigma\\ &\propto \int_{0}^{\infty} p(X|\theta, \sigma) p(\theta, \sigma) \mathrm{d}\sigma,\end{align}$
For example, for a Gaussian likelihood where we assume a prior $\sigma\sim U(a, b)$, the marginal posterior is given by
$$p(\theta|X) \propto \frac{\pi^{-n/2} \text{sse}(\theta)^{\frac{1}{2}-\frac{n}{2}} \left[\Gamma\left(\frac{n-1}{2},\frac{\text{sse}(\theta)}{2 b^2}\right)-\Gamma\left(\frac{n-1}{2},\frac{\text{sse}(\theta)}{2 a^2}\right)\right]}{2 \sqrt{2} (b - a)},$$
where $\text{sse}(\theta) = \sum_{i=1}^n(f_i(\theta) - y_i)^2$ is the sum of squared errors and $\Gamma(u,v)$ is the upper incomplete gamma function.
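As a rough, optional illustration (my own sketch, not from the original notebook), the unnormalised log of this marginal posterior can be evaluated directly with `scipy.special`, using the identity $\Gamma(u, v) = Q(u, v)\,\Gamma(u)$ where $Q$ is the regularised upper incomplete gamma function; in the notebook itself this is all handled internally by `pints.GaussianIntegratedUniformLogLikelihood`.
```python
import numpy as np
from scipy.special import gammaincc, gammaln

def integrated_uniform_log_posterior_shape(sse, n, a, b):
    """Unnormalised log of p(theta|X) with sigma ~ U(a, b) integrated out.

    sse : sum of squared errors at theta; n : number of observations.
    """
    s = 0.5 * (n - 1)
    # Gamma(s, x) = gammaincc(s, x) * Gamma(s); the difference of the two
    # upper incomplete gamma terms is positive because a < b.
    bracket = gammaincc(s, sse / (2 * b**2)) - gammaincc(s, sse / (2 * a**2))
    return (-0.5 * n * np.log(np.pi)
            + (0.5 - 0.5 * n) * np.log(sse)
            + gammaln(s) + np.log(bracket)
            - np.log(2 * np.sqrt(2) * (b - a)))
```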
In this notebook, we illustrate how using the posterior geometry where the noise parameter $\sigma$ is integrated out can lead to faster convergence of the posterior distribution than the circumstance where $\sigma$ is estimated.
An issue with using the integrated log-likelihood is that the estimated model is no longer generative, so posterior predictive checking is harder. Whether this is outweighed by the benefit of sampling from a posterior of lower dimensionality is problem specific, although the speed-up in sampling is unlikely to be considerable.
## Single output problem - logistic model
Here we illustrate how a posterior distribution for the logistic model growth rate and carrying capacity parameters can be estimated using both full posterior and the marginal posterior geometries.
```python
from __future__ import print_function
import pints
import pints.toy as toy
import pints.plot
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats
# Load a forward model
model = toy.LogisticModel()
# Create some toy data
real_parameters = [0.015, 500]
times = np.linspace(0, 1000, 100)
signal_values = model.simulate(real_parameters, times)
# Add noise
nu = 2
sigma = 10
observed_values_norm = signal_values + scipy.stats.norm.rvs(loc=0, scale=sigma, size=signal_values.shape)
plt.figure(figsize=(14, 6))
plt.xlabel('Time')
plt.ylabel('Values')
plt.plot(times, observed_values_norm)
plt.plot(times, signal_values)
plt.show()
# Create an object with links to the model and time series
problem = pints.SingleOutputProblem(model, times, observed_values_norm)
```
Fit the model using a Gaussian likelihood where we estimate the noise parameter $\sigma$.
```python
# Create a log-likelihood function (adds an extra parameter!)
log_likelihood = pints.GaussianLogLikelihood(problem)
# Create a uniform prior over both the parameters and the new noise variable
log_prior = pints.UniformLogPrior(
[0.01, 400, 1],
[0.02, 600, 100]
)
# Create a posterior log-likelihood (log(likelihood * prior))
log_posterior = pints.LogPosterior(log_likelihood, log_prior)
# Choose starting points for 3 mcmc chains
real_parameters1 = np.array(real_parameters + [sigma])
xs = [
real_parameters1 * 1.1,
real_parameters1 * 0.9,
real_parameters1 * 1.15,
]
# Create mcmc routine
mcmc = pints.MCMCController(log_posterior, 3, xs, method=pints.AdaptiveCovarianceMCMC)
# Add stopping criterion
mcmc.set_max_iterations(4000)
# Start adapting after 1000 iterations
mcmc.set_initial_phase_iterations(1000)
# Disable logging mode
mcmc.set_log_to_screen(False)
# Run!
print('Running...')
chains = mcmc.run()
print('Done!')
# Show traces and histograms
pints.plot.trace(chains)
# Discard warm up
chains = chains[:, 2000:, :]
chains_gaussian = chains
# Check convergence using rhat criterion
print('R-hat:')
print(pints.rhat_all_params(chains))
# Show graphs
plt.show()
```
Now fitting the same model using a log-likelihood where the noise parameter $\sigma$ has been integrated out. Here the convergence to the stationary distribution is quicker.
```python
# Create a log-likelihood function with lower and upper values on uniform prior for sigma
lower = 1
upper = 100
log_likelihood = pints.GaussianIntegratedUniformLogLikelihood(problem, lower, upper)
# Create a uniform prior over both the parameters and the new noise variable
log_prior = pints.UniformLogPrior(
[0.01, 400],
[0.02, 600]
)
# Create a posterior log-likelihood (log(likelihood * prior))
log_posterior = pints.LogPosterior(log_likelihood, log_prior)
# Choose starting points for 3 mcmc chains
real_parameters = np.array(real_parameters)
xs = [
real_parameters * 1.1,
real_parameters * 0.9,
real_parameters * 1.0,
]
# Create mcmc routine
mcmc = pints.MCMCController(log_posterior, 3, xs, method=pints.AdaptiveCovarianceMCMC)
# Add stopping criterion
mcmc.set_max_iterations(4000)
# Start adapting after 1000 iterations
mcmc.set_initial_phase_iterations(250)
# Disable logging mode
mcmc.set_log_to_screen(False)
# Run!
print('Running...')
chains = mcmc.run()
print('Done!')
# Show traces and histograms
pints.plot.trace(chains)
# Discard warm up
chains = chains[:, 2000:, :]
chains_integrated = chains
# Check convergence using rhat criterion
print('R-hat:')
print(pints.rhat_all_params(chains))
# Show graphs
plt.show()
```
Comparing the marginal distributions for $r$ and $\kappa$ for the two models, they look similar.
```python
chains_gaussian1 = np.vstack(chains_gaussian)
chains_integrated1 = np.vstack(chains_integrated)
# plot
plt.figure(figsize=(15, 6))
plt.subplot(1, 2, 1)
bins = np.linspace(0.0147, 0.0153, 20)
plt.hist(chains_gaussian1[:, 0], bins, alpha=0.5, color="blue", label='Estimate sigma')
plt.hist(chains_integrated1[:, 0], bins, alpha=0.5, color="orange", label='Integrate out sigma')
plt.title("Growth rate")
plt.subplot(1, 2, 2)
bins = np.linspace(490, 510, 20)
plt.hist(chains_gaussian1[:, 1], bins, alpha=0.5, color="blue", label='Estimate sigma')
plt.hist(chains_integrated1[:, 1], bins, alpha=0.5, color="orange", label='Integrate out sigma')
plt.title("Carrying capacity")
plt.rc('font', size=14)
plt.legend(loc="upper left", bbox_to_anchor=(0.5, -0.1))
plt.show()
```
## Multiple output problem - Fitzhugh-Nagumo model
Now we illustrate how to fit a Gaussian model to a problem with two outputs, where the noise has been integrated out of the likelihood.
```python
# Create a model
model = pints.toy.FitzhughNagumoModel()
# Run a simulation
parameters = [0.1, 0.5, 3]
times = np.linspace(0, 20, 200)
values = model.simulate(parameters, times)
# First add some noise
sigma = 0.5
noisy = values + np.random.normal(0, sigma, values.shape)
# Plot the results
plt.figure(figsize=(14, 6))
plt.xlabel('Time')
plt.ylabel('Noisy values')
plt.plot(times, noisy)
plt.show()
# Define a multiple output problem
problem = pints.MultiOutputProblem(model, times, noisy)
```
```python
# Create a log-likelihood function (adds an extra parameter!)
log_likelihood = pints.GaussianLogLikelihood(problem)
# Create a uniform prior over both the parameters and the new noise variable
log_prior = pints.UniformLogPrior(
[0, 0, 0, 0, 0],
[10, 10, 10, 20, 20]
)
# Create a posterior log-likelihood (log(likelihood * prior))
log_posterior = pints.LogPosterior(log_likelihood, log_prior)
# Choose starting points for 3 mcmc chains
real_parameters1 = np.array(parameters + [sigma, sigma])
xs = [
real_parameters1 * 1.1,
real_parameters1 * 0.9,
real_parameters1 * 1.15,
]
# Create mcmc routine
mcmc = pints.MCMCController(log_posterior, 3, xs, method=pints.AdaptiveCovarianceMCMC)
# Add stopping criterion
mcmc.set_max_iterations(4000)
# Start adapting after 1000 iterations
mcmc.set_initial_phase_iterations(1000)
# Disable logging mode
mcmc.set_log_to_screen(False)
# Run!
print('Running...')
chains = mcmc.run()
print('Done!')
# Show traces and histograms
pints.plot.trace(chains)
# Discard warm up
chains = chains[:, 2000:, :]
chains_gaussian = chains
# Check convergence using rhat criterion
print('R-hat:')
print(pints.rhat_all_params(chains))
chains_full = chains
# Show graphs
plt.show()
```
Now estimating the same model where the two noise parameters are integrated out of the posterior. Again, the convergence rate tends to be faster and we see better chain mixing although this varies from run to run.
```python
# Create a log-likelihood function (assumed U(0, 20) priors on both sigmas)
log_likelihood = pints.GaussianIntegratedUniformLogLikelihood(problem, 0, 20)
# Create a uniform prior over both the parameters and the new noise variable
log_prior = pints.UniformLogPrior(
[0, 0, 0],
[10, 10, 10]
)
# Create a posterior log-likelihood (log(likelihood * prior))
log_posterior = pints.LogPosterior(log_likelihood, log_prior)
# Choose starting points for 3 mcmc chains
real_parameters1 = np.array(parameters)
xs = [
real_parameters1 * 1.1,
real_parameters1 * 0.9,
real_parameters1 * 1.15,
]
# Create mcmc routine
mcmc = pints.MCMCController(log_posterior, 3, xs, method=pints.AdaptiveCovarianceMCMC)
# Add stopping criterion
mcmc.set_max_iterations(4000)
# Start adapting after 1000 iterations
mcmc.set_initial_phase_iterations(1000)
# Disable logging mode
mcmc.set_log_to_screen(False)
# Run!
print('Running...')
chains = mcmc.run()
print('Done!')
# Show traces and histograms
pints.plot.trace(chains)
# Discard warm up
chains = chains[:, 2000:, :]
chains_gaussian = chains
# Check convergence using rhat criterion
print('R-hat:')
print(pints.rhat_all_params(chains))
chains_marginal=chains
# Show graphs
plt.show()
```
Comparing the effective sample sizes obtained when sampling from the full posterior versus the marginal posterior, we see much larger effective sample sizes for the model parameters when using the marginalised density.
```python
full = pints.effective_sample_size(np.vstack(chains_full))
marginal = pints.effective_sample_size(np.vstack(chains_marginal))
print('Effective sample size of model parameters for full posterior: ' + str(full[0:2]))
print('Effective sample size of model parameters for marginal posterior: ' + str(marginal[0:2]))
```
Effective sample size of model parameters for full posterior: [347.53874375248461, 247.12936497020999]
Effective sample size of model parameters for marginal posterior: [488.53736925661275, 488.66809520505473]
*Source: `examples/sampling-integrated-gaussian-log-likelihood.ipynb` from the `iamleeg/pints` repository (BSD-3-Clause license).*
# Hamming Network
## Introduction
The Hamming network measures the distance between two vectors with the Hamming distance.
The Hamming distance counts the number of positions at which two vectors of equal length differ, so the Hamming network can only handle pattern-recognition problems whose input vectors take finitely many discrete values.
Through recursive competition, the Hamming network determines which memorized vector the input vector is closest to, and in this way performs pattern recognition on the input.
For a given input, once the Hamming network has converged only one neuron has a non-zero output; that output indicates that the input is most similar to the vector memorized by that neuron.
## Symbol Definitions
|Symbol|Meaning|
|:-:|:-:|
|$\bm{X}$|input vector|
|$n$|dimension of the input vector|
|$\bm{a}$|feedforward-layer output|
|$\bm{W}$|feedforward-layer weight matrix|
|$\bm{b}$|feedforward-layer bias|
|$f$|feedforward-layer activation function|
|$c$|number of feedforward-layer neurons (total number of patterns memorized by the network)|
|$\bm{V}$|recurrent-layer weight matrix|
|$\bm{\hat{y}}$|recurrent-layer output|
|$\bm{y}$|true output|
|$\bm{r_i}$|the $i$-th vector memorized by the network|
## Forward Computation in the Hamming Network
### Feedforward layer
$$
\begin{equation}
\bm{X} = [x_1, x_2, \cdots, x_n]^T, \ x_i\in\{-1, 1\}, i=1, 2, 3, \cdots, n
\end{equation}
$$
$$
\begin{equation}
\bm{a} = f(\bm{W}\bm{X}+\bm{b})
\end{equation}
$$
$$
\begin{equation}
f(x) =
\left\{
\begin{array}{cc}
x,&x\geq0 \\
0,&x<0
\end{array}
\right.
\end{equation}
$$
### Recurrent layer
$$
\begin{equation}
\left\{
\begin{array}{cc}
\bm{\hat{y}}(0) = \bm{a} \\
\bm{\hat{y}}(t+1) = f(\bm{V}\bm{\hat{y}}(t))
\end{array}
\right.
\end{equation}
$$
The recursion stops once only one entry of the recurrent layer's output $\bm{\hat{y}}(t+1)$ is non-zero.
## Weight Design and Learning for the Hamming Network
### Weight design
* Feedforward-layer weights
$$
\begin{equation}
\bm{W} =
\left[
\begin{array}{cc}
\bm{r_1}^T \\
\bm{r_2}^T \\
\vdots \\
\bm{r_c}^T \\
\end{array}
\right]
\end{equation}
$$
* Feedforward-layer bias
$$
\begin{equation}
\bm{b} = [n, n, \cdots, n]^T
\end{equation}
$$
* Recurrent-layer weights
$$
\begin{equation}
\bm{V} =
\left[
\begin{array}{cc}
1&-\epsilon&\cdots&-\epsilon \\
-\epsilon&1&\cdots&-\epsilon \\
\vdots&\ddots&\ddots&\vdots\\
-\epsilon&-\epsilon&\cdots&1
\end{array}
\right] , \
\epsilon \in (0, \frac{1}{c})
\end{equation}
$$
From this weight design we can see that each recurrent-layer neuron feeds back positively onto itself while inhibiting all other neurons. This embodies the central idea of the Hamming network: **competition**.
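To see why the feedforward layer ranks the stored patterns by Hamming distance, note the following short derivation (added here for clarity): since $\bm{X}, \bm{r_i} \in \{-1, +1\}^n$, each agreeing component contributes $+1$ and each disagreeing one $-1$ to the inner product, so
$$
\begin{equation}
\bm{r_i}^T\bm{X} = \big(n - d_H(\bm{X}, \bm{r_i})\big) - d_H(\bm{X}, \bm{r_i}) = n - 2\,d_H(\bm{X}, \bm{r_i})
\end{equation}
$$
and hence $a_i = \bm{r_i}^T\bm{X} + n = 2\big(n - d_H(\bm{X}, \bm{r_i})\big) \geq 0$, where $d_H$ is the Hamming distance. The feedforward output is therefore largest for the memorized vector closest to the input in Hamming distance.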
### Weight learning
The recurrent-layer weights never need to change; only the feedforward-layer weights are updated.
For a pattern vector $\bm{r_i}$ to be memorized, the desired output is $[0, 0, \cdots, 1, \cdots, 0]^T$, where the $i$-th entry is the non-zero one.
Therefore the $i$-th entry of the feedforward output should be as large as possible and all other entries as small as possible.
This leads to the following weight update for the pattern vector $\bm{r_i}$:
$$
\begin{equation}
\bm{W} = \bm{W} + [0, 0, \cdots, 1, \cdots, 0]^T\bm{r_i}^T
\end{equation}
$$
The update above gives the following pseudocode for parameter learning:
```
# define the number of iterations
def iteration
# define the pattern vectors to memorize
def r_1
def r_2
...
def r_c
# define the weight matrix
def W
# define the bias
def b
# start iterating
# loop over all pattern vectors to memorize
for r_i in [r_1, r_2, ..., r_c]
    while True
        if argmax(W@r_i+b) == i
            break
        else
            # build the mask vector
            mask_vector = zeros((c, 1))
            mask_vector[i] = 1
            W = W + mask_vector @ r_i.T
        iter += 1
        if iter == iteration:
            break
```
Weight learning is not particularly meaningful here: for the Hamming network, the designed weights can essentially be regarded as optimal (up to normalization).
```python
import random
import numpy as np
import matplotlib.pyplot as plt
```
```python
class HammingNN(object):
    def __init__(self, input_dim, output_dim, max_iters):
        """
        input_dim: input dimension
        output_dim: output dimension, which is also the number of stored pattern vectors
        max_iters: maximum number of recursions in the recurrent layer
        """
        self.input_dim = input_dim
        self.output_dim = output_dim
        self.max_iters = max_iters
        self.forward_weight = np.zeros((output_dim, input_dim+1))
        self.recursive_weight = np.ones((output_dim, output_dim))

    def forward(self, input_vector):
        # feedforward layer
        input_vector_expend = np.concatenate((input_vector, [[1]]), axis=0)
        self.forward_output = self.activate_func(np.matmul(self.forward_weight, input_vector_expend))
        # recurrent layer
        pred_vector = self.forward_output
        pred_vector_list = [pred_vector]
        for _ in range(self.max_iters):
            if np.sum(pred_vector > 0) == 1:
                break
            else:
                pred_vector = self.activate_func(np.matmul(self.recursive_weight, pred_vector))
                pred_vector_list.append(pred_vector)
        return pred_vector, np.array(pred_vector_list)

    def weight_design(self, standard_vectors):
        """
        standard_vectors : list of pattern vectors to memorize
        """
        assert len(standard_vectors) == self.output_dim
        # weight design for the feedforward layer
        for i, standard_vector in enumerate(standard_vectors):
            self.forward_weight[i] = np.append(np.array(standard_vector).reshape(-1), self.input_dim)
        # weight design for the recurrent layer
        self.recursive_weight = -np.ones((self.output_dim, self.output_dim)) / self.output_dim
        self.recursive_weight += np.diag(np.ones(self.output_dim) + 1 / self.output_dim)
        print("weight design done...")

    def weight_learning(self, standard_vectors):
        pass

    @staticmethod
    def activate_func(input_vector):
        mask_ = np.array(input_vector) > 0
        return input_vector * mask_.astype(np.int32)
```python
# 测试
# 假设标准向量为[-1, -1], [1, 1]
standard_vectors = [[-1, -1], [1, 1]]
# 定义模型
hammingNN = HammingNN(input_dim=2, output_dim=2, max_iters=10)
# 权重设计
hammingNN.weight_design(standard_vectors)
# 测试输入
# 测试输入由[-1, 1]随机取的100个点组成
test_data = list()
trace_list = list()
random.seed(1024)
for i in range(200):
test_data.append([random.random()*2-1, random.random()*2-1])
test_data.append([-1, -1])
test_data.append([1, 1])
plt.figure(figsize=(10, 10))
plt.title("test samples", fontsize=15)
plt.xticks(fontsize=15)
plt.yticks(fontsize=15)
plt.xlim(0, 4)
plt.ylim(0, 4)
for test_sample in test_data:
_, pred_list = hammingNN.forward(np.array(test_sample).reshape(-1, 1))
if test_sample == [-1, -1] or test_sample == [1, 1]:
plt.scatter(pred_list[0][0], pred_list[0][1], label="{}".format(test_sample), s=200)
plt.plot(pred_list[:, 0], pred_list[:, 1])
else:
plt.scatter(pred_list[0][0], pred_list[0][1])
plt.plot(pred_list[:, 0], pred_list[:, 1])
plt.legend(fontsize=15)
plt.show()
```
As the plot shows, every sample converges to either [c, 0] or [0, c]. There is a clear boundary between the two basins of convergence, and the inputs equal to the stored vectors land on the coordinate axes after the forward computation.
*Source: `Hamming.ipynb` from the `koolo233/NeuralNetworks` repository (MIT license).*
# A complete use case
In this section we present a complete use case, based on the meaning classification dataset introduced in [Lorenz et al. (2021)](https://arxiv.org/abs/2102.12846) QNLP paper. The goal is to classify simple sentences (such as "skillful programmer creates software" and "chef prepares delicious meal") into two categories, food or IT. The dataset consists of 130 sentences created using a simple context-free grammar.
We will use a [SpiderAnsatz](../lambeq.rst#lambeq.tensor.SpiderAnsatz) to split large tensors into chains of smaller ones. For differentiation we will use JAX, and we will apply simple gradient-descent optimisation to train the tensors.
## Preparation
We start with a few essential imports.
```python
import warnings
warnings.filterwarnings('ignore') # Ignore warnings
from discopy.tensor import Tensor
from jax import numpy as np
import numpy
np.random = numpy.random
Tensor.np = np
np.random.seed(123458) # Fix the seed
```
<div class="alert alert-info">
**Note**
Note the `Tensor.np = np` assignment in the above code. This is required to let `discopy` know that from now on we use JAX's version of `numpy`.
</div>
Let's read the datasets:
```python
# Read data
def read_data(fname):
    with open(fname, 'r') as f:
        lines = f.readlines()
        data, targets = [], []
        for ln in lines:
            t = int(ln[0])
            data.append(ln[1:].strip())
            targets.append(np.array([t, not(t)], dtype=np.float32))
    return data, np.array(targets)
train_data, train_targets = read_data('datasets/mc_train_data.txt')
test_data, test_targets = read_data('datasets/mc_test_data.txt')
```
The first few lines of the train dataset:
```python
train_data[:10]
```
['skillful man prepares sauce',
'skillful man bakes dinner',
'woman cooks tasty meal',
'man prepares meal',
'skillful woman debugs program',
'woman prepares tasty meal',
'person runs program',
'person runs useful application',
'woman prepares sauce',
'woman prepares dinner']
Targets are represented as 2-dimensional arrays:
```python
train_targets
```
DeviceArray([[1., 0.],
[1., 0.],
[1., 0.],
...,
[0., 1.],
[1., 0.],
[0., 1.]], dtype=float32)
## Creating and parameterising diagrams
First step is to convert sentences into string diagrams:
```python
# Parse sentences to diagrams
from lambeq.ccg2discocat import DepCCGParser
parser = DepCCGParser()
train_diagrams = parser.sentences2diagrams(train_data)
test_diagrams = parser.sentences2diagrams(test_data)
train_diagrams[0].draw(figsize=(8,4), fontsize=13)
```
The produced diagrams need to be parameterised by a specific ansatz. For this experiment we will use a [SpiderAnsatz](../lambeq.rst#lambeq.tensor.SpiderAnsatz).
```python
# Create ansatz and convert to tensor diagrams
from lambeq.tensor import SpiderAnsatz
from lambeq.core.types import AtomicType
from discopy import Dim
N = AtomicType.NOUN
S = AtomicType.SENTENCE
# Create an ansatz by assigning 2 dimensions to both
# noun and sentence spaces
ansatz = SpiderAnsatz({N: Dim(2), S: Dim(2)})
train_circuits = [ansatz(d) for d in train_diagrams]
test_circuits = [ansatz(d) for d in test_diagrams]
all_circuits = train_circuits + test_circuits
all_circuits[0].draw(figsize=(8,4), fontsize=13)
```
## Creating a vocabulary
We are now ready to create a vocabulary.
```python
# Create vocabulary
from sympy import default_sort_key
vocab = sorted(
{sym for circ in all_circuits for sym in circ.free_symbols},
key=default_sort_key
)
tensors = [np.random.rand(w.size) for w in vocab]
tensors[0]
```
array([0.35743395, 0.45764418])
## Defining a loss function
This is a binary classification task, so we will use binary cross entropy as the loss.
```python
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def loss(tensors):
    # Lambdify
    np_circuits = [c.lambdify(*vocab)(*tensors) for c in train_circuits]
    # Compute predictions
    predictions = sigmoid(np.array([c.eval().array for c in np_circuits]))
    # binary cross-entropy loss
    cost = -np.sum(train_targets * np.log2(predictions)) / len(train_targets)
    return cost
```
The loss function follows the steps below:
1. The symbols in the train diagrams are replaced with concrete ``numpy`` arrays.
2. The resulting tensor networks are evaluated and produce results.
3. Based on the predictions, an average loss is computed for the specific iteration.
We use JAX in order to get a gradient function on the loss, and "just-in-time" compile it to improve speed:
```python
from jax import jit, grad
training_loss = jit(loss)
gradient = jit(grad(loss))
```
## Training loop
We are now ready to start training. The following loop computes gradients and uses them to update the tensors associated with the symbols.
```python
training_losses = []
epochs = 90
for i in range(epochs):
    gr = gradient(tensors)
    for k in range(len(tensors)):
        tensors[k] = tensors[k] - gr[k] * 1.0
    training_losses.append(float(training_loss(tensors)))
    if (i + 1) % 10 == 0:
        print(f"Epoch {i + 1} - loss {training_losses[-1]}")
```
Epoch 10 - loss 0.07233709841966629
Epoch 20 - loss 0.015333528630435467
Epoch 30 - loss 0.00786149874329567
Epoch 40 - loss 0.00515687046572566
Epoch 50 - loss 0.0037753921933472157
Epoch 60 - loss 0.0029438300989568233
Epoch 70 - loss 0.002392344642430544
Epoch 80 - loss 0.0020021884702146053
Epoch 90 - loss 0.001713048666715622
## Testing
Finally, we use the trained model on the test dataset:
```python
# Testing
np_test_circuits = [c.lambdify(*vocab)(*tensors) for c in test_circuits]
test_predictions = sigmoid(np.array([c.eval().array for c in np_test_circuits]))
hits = 0
for i in range(len(np_test_circuits)):
    target = test_targets[i]
    pred = test_predictions[i]
    if np.argmax(target) == np.argmax(pred):
        hits += 1
print("Accuracy on test set:", hits / len(np_test_circuits))
```
Accuracy on test set: 0.9
## Working with quantum circuits
The process when working with quantum circuits is very similar, with two important differences:
1. The parameterisable part of the circuit is an array of parameters, as described in Section [Circuit Symbols](training-symbols.ipynb#Circuit-symbols), instead of tensors associated to words.
2. If optimisation takes place on quantum hardware, standard automatic differentiation cannot be used. An alternative is to use a gradient-approximation technique, such as [Simultaneous Perturbation Stochastic Approximation](https://en.wikipedia.org/wiki/Simultaneous_perturbation_stochastic_approximation) (SPSA); a minimal, library-agnostic sketch of an SPSA-style gradient estimate is shown after this list.
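The following sketch (my own, not lambeq's implementation) illustrates the core SPSA idea: all parameters are perturbed simultaneously with a random sign vector, so a gradient estimate costs only two loss evaluations regardless of the number of parameters. The function name and hyperparameter `c` are illustrative choices.
```python
import numpy as np

def spsa_gradient_estimate(loss_fn, params, c=0.1, rng=None):
    """One SPSA gradient estimate: perturb all parameters at once with a
    random +/-1 vector and use just two evaluations of the loss."""
    if rng is None:
        rng = np.random.default_rng()
    delta = rng.choice([-1.0, 1.0], size=params.shape)
    loss_plus = loss_fn(params + c * delta)
    loss_minus = loss_fn(params - c * delta)
    return (loss_plus - loss_minus) / (2 * c * delta)

# usage sketch:
#   params = params - learning_rate * spsa_gradient_estimate(loss, params)
```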
Complete examples in training quantum circuits can be found in the following notebooks:
- [Quantum pipeline with JAX](../examples/quantum_pipeline_jax.ipynb)
- [Quantum pipeline with tket](../examples/quantum_pipeline_tket.ipynb)
**See also:**
- [Classical pipeline with PyTorch](../examples/classical_pipeline.ipynb)
*Source: `docs/tutorials/training-usecase.ipynb` from the `Thommy257/lambeq-pub` repository (Apache-2.0 license).*
```python
import numpy as np
from scipy import linalg as la
import sympy as sp
```
Starting from Lecture 9.
Vectors x1 and x2 are independent if c1*x1 + c2*x2 != 0 for every choice of coefficients other than c1 = c2 = 0.
Vectors v1, v2, ..., vn are columns of A. They are independent if the nullspace of A is only the zero vector (r = n, no free variables). They are dependent if Ac = 0 for some nonzero c (r < n, there are free variables).
Vectors v1, ..., vl span a space means: the space consists of all linear combinations of those vectors.
A basis for a vector space is a sequence of vectors v1, v2, ..., vd with 2 properties: 1. They are independent; 2. They span the space.
I3 is a basis for R^3
```python
I3 = np.identity(3)
I3
```
array([[ 1., 0., 0.],
[ 0., 1., 0.],
[ 0., 0., 1.]])
```python
Z = np.zeros(3)
Z
```
array([ 0., 0., 0.])
The only vector that gives zeros:
```python
np.dot(I3,Z)
```
array([ 0., 0., 0.])
Another basis:
```python
A = np.array([[1,1,4],
[1,2,3],
[2,7,11]])
A
```
array([[ 1, 1, 4],
[ 1, 2, 3],
[ 2, 7, 11]])
n x n matrix that is invertible.
```python
np.linalg.inv(A)
```
array([[ 0.125, 2.125, -0.625],
[-0.625, 0.375, 0.125],
[ 0.375, -0.625, 0.125]])
```python
np.linalg.det(A)
```
8.0000000000000018
Every basis for the space has the same number of vectors and this number is the dimension of this space.
```python
np.linalg.matrix_rank(A)
```
3
Rank is the number of pivot columns and it is the dimension of the columnspace C(A).
The dimension of a nullspace is the number of free variables: total number of variables minus the number of pivot variables.
The 4 fundamental subspaces:
Columnspace C(A) in R^m
Nullspace N(A) in R^n
Rowspace C(A^T) in R^n
Nullspace of A^T = N(A^T) (left nullspace) in R^m
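Their dimensions are tied together by the rank: for an m x n matrix of rank r, dim C(A) = dim C(A^T) = r, dim N(A) = n - r and dim N(A^T) = m - r. A quick numerical check on the matrix A defined above (a sketch added here; A is invertible, so both nullspaces contain only the zero vector):
```python
# Dimension count of the four fundamental subspaces for the 3x3 matrix A above
r = np.linalg.matrix_rank(A)
m, n = A.shape
print("r =", r, " n - r =", n - r, " m - r =", m - r)
print("dim N(A)   =", len(sp.Matrix(A).nullspace()))
print("dim N(A^T) =", len(sp.Matrix(A.T).nullspace()))
```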
Just a lyrical digression: one can compute a PLU decomposition using scipy.linalg.lu.
```python
P,L,U = la.lu(A)
P,L,U
```
(array([[ 0., 1., 0.],
[ 0., 0., 1.],
[ 1., 0., 0.]]), array([[ 1. , 0. , 0. ],
[ 0.5, 1. , 0. ],
[ 0.5, 0.6, 1. ]]), array([[ 2. , 7. , 11. ],
[ 0. , -2.5, -1.5],
[ 0. , 0. , -1.6]]))
```python
A2 = np.array([[1,3,1,4],
[2,7,3,9],
[1,5,3,1],
[1,2,0,8]])
```
Using sympy it is possible to calculate the reduced row echelon form (RREF) of the matrix and thus find a basis for its column space, i.e. understand which columns are linearly independent.
```python
sp.Matrix(A2).rref()
```
(Matrix([
[1, 0, -2, 0],
[0, 1, 1, 0],
[0, 0, 0, 1],
[0, 0, 0, 0]]), (0, 1, 3))
In the matrix A2, columns 1, 2 and 4 are linearly independent and form a basis for the column space. Thus the rank of this matrix is 3.
```python
np.linalg.matrix_rank(A2)
```
3
```python
sp.Matrix(A).rref()
```
(Matrix([
[1, 0, 0],
[0, 1, 0],
[0, 0, 1]]), (0, 1, 2))
And here all the columns are linearly independent and form the basis.
```python
sp.Matrix(np.transpose(A2)).rref()
```
(Matrix([
[1, 0, 0, 0],
[0, 1, 0, 1],
[0, 0, 1, -1],
[0, 0, 0, 0]]), (0, 1, 2))
```python
sp.Matrix(A2).nullspace()
```
[Matrix([
[ 2],
[-1],
[ 1],
[ 0]])]
```python
sp.Matrix(A2).rowspace()
```
[Matrix([[1, 3, 1, 4]]), Matrix([[0, 1, 1, 1]]), Matrix([[0, 0, 0, -5]])]
```python
sp.Matrix(A2).columnspace()
```
[Matrix([
[1],
[2],
[1],
[1]]), Matrix([
[3],
[7],
[5],
[2]]), Matrix([
[4],
[9],
[1],
[8]])]
The dim of both column space and row space are rank of matrix - r.
```python
A3 = np.array ([[1,2,3],
[1,2,3],
[2,5,8]])
```
```python
np.linalg.matrix_rank(A3)
```
2
```python
R = np.array(sp.Matrix(A3).rref()[0])
R
```
array([[1, 0, -1],
[0, 1, 2],
[0, 0, 0]], dtype=object)
```python
A4 = np.array([[1,2,3,1],
[1,1,2,1],
[1,2,3,1]])
```
```python
R4 = np.array(sp.Matrix(A4).rref()[0])
R4
```
array([[1, 0, 1, 1],
[0, 1, 1, 0],
[0, 0, 0, 0]], dtype=object)
```python
sp.Matrix(A4).rowspace()
```
[Matrix([[1, 2, 3, 1]]), Matrix([[0, -1, -1, 0]])]
```python
sp.Matrix(A4).columnspace()
```
[Matrix([
[1],
[1],
[1]]), Matrix([
[2],
[1],
[2]])]
```python
N41,N42 = np.array(sp.Matrix(A4).nullspace())
np.dot(A4,N41), np.dot(A4,N42)
```
(array([0, 0, 0], dtype=object), array([0, 0, 0], dtype=object))
```python
lN4 = np.array(sp.Matrix(np.transpose(A4)).nullspace())
np.dot(lN4,A4)
```
array([[0, 0, 0, 0]], dtype=object)
The matrix E, such that EA = R
```python
preE4 = np.rint(np.array(sp.Matrix(np.c_[A4,np.eye(3)]).rref()[0]).astype(np.double))
```
```python
preE4
```
array([[ 1., 0., 1., 1., 0., 2., -1.],
[ 0., 1., 1., 0., 0., -1., 1.],
[ 0., 0., 0., 0., 1., 0., -1.]])
```python
A4
```
array([[1, 2, 3, 1],
[1, 1, 2, 1],
[1, 2, 3, 1]])
```python
E4 = preE4[:,4:7]
E4
```
array([[ 0., 2., -1.],
[ 0., -1., 1.],
[ 1., 0., -1.]])
```python
np.dot(E4,A4)
```
array([[ 1., 0., 1., 1.],
[ 0., 1., 1., 0.],
[ 0., 0., 0., 0.]])
Ending Lecture 10 here.
*Source: `LinAl_003.ipynb` from the `rtgshv/linal101` repository (MIT license).*
# Spectral Estimation of Random Signals
*This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing. Please direct questions and suggestions to [Sascha.Spors@uni-rostock.de](mailto:Sascha.Spors@uni-rostock.de).*
## The Welch Method
In the previous section it has been shown that the [periodogram](periodogram.ipynb) as a non-parametric estimator of the power spectral density (PSD) $\Phi_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ of a random signal $x[k]$ is not consistent. This is due to the fact that its variance does not converge towards zero even when the length of the random signal is increased towards infinity. In order to overcome this problem, the [Bartlett method](https://en.wikipedia.org/wiki/Bartlett's_method) and [Welch method](https://en.wikipedia.org/wiki/Welch's_method)
1. split the random signal into segments,
2. estimate the PSD for each segment, and
3. average over these local estimates.
The averaging reduces the variance of the estimated PSD. While Barlett's method uses non-overlapping segments, Welch's is a generalization using windowed overlapping segments. For the discussion of Welch's method we assume a wide-sense ergodic real-valued random process.
### Derivation
The random signal $x[k]$ is split into into $L$ overlapping segments of length $N$, starting at multiples of the step size $M \in {1,2, \dots, N}$. These segments are windowed by the window $w[k]$ of length $N$, resulting in a windowed $l$-th segment $x_l[k]$ with $0\leq l\leq L-1$. The discrete-time Fourier transformation (DTFT) $X_l(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ of the windowed $l$-th segment is then given as
\begin{equation}
X_l(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \sum_{k = 0}^{N-1} x[k + l \cdot M] \, w[k] \; \mathrm{e}^{\,-\mathrm{j}\,\Omega\,k}
\end{equation}
where the window $w[k]$ defined within $0\leq k\leq N-1$ should be normalized as $\frac{1}{N} \sum\limits_{k=0}^{N-1} | w[k] |^2 = 1$. The latter condition ensures that the power of the signal is maintained in the estimate. The stepsize $M$ determines the overlap between the segments. In general, $N-M$ number of samples overlap between adjacent segments. For $M = N$ no overlap occurs. The overlap is sometimes given as ratio $\frac{N-M}{N}\cdot 100\%$.
Introducing $X_l(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ into the definition of the periodogram yields the periodogram of the $l$-th segment
\begin{equation}
\hat{\Phi}_{xx,l}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \frac{1}{N} \,| X_l(\mathrm{e}^{\,\mathrm{j}\,\Omega}) |^2
\end{equation}
The estimated PSD is then given by averaging over the segment's periodograms $\hat{\Phi}_{xx,l}(\mathrm{e}^{\,\mathrm{j}\,\Omega})$
\begin{equation}
\hat{\Phi}_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \frac{1}{L} \sum_{l = 0}^{L-1} \hat{\Phi}_{xx,l}(\mathrm{e}^{\,\mathrm{j}\,\Omega})
\end{equation}
Note, that the total number $L$ of segments has to be chosen such that the last required sample $(L-1)\cdot M + N - 1$ does not exceed the total length of the random signal. Otherwise the last segment $x_{L-1}[k]$ may be zeropadded towards length $N$.
The Bartlett method uses a rectangular window $w[k] = \text{rect}_N[k]$ and non-overlapping segments $M=N$. The Welch method uses overlapping segments and a window that must be chosen according to the intended spectral analysis task.
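The following is a small, self-contained sketch (added here, not part of the original notebook) of the three steps above using plain `numpy`; `scipy.signal.welch`, used in the example below, implements the same idea with additional windowing and scaling conventions.
```python
import numpy as np

def welch_psd(x, N=128, M=64, window=None):
    """Estimate the PSD by averaging periodograms of windowed, overlapping segments."""
    if window is None:
        window = np.ones(N)                  # rectangular window; with M = N this is Bartlett's method
    # normalise the window so that (1/N) * sum(|w[k]|^2) = 1 (power is preserved)
    window = window / np.sqrt(np.mean(window**2))
    L = (len(x) - N) // M + 1                # number of full segments
    psd = np.zeros(N)
    for l in range(L):
        segment = x[l*M:l*M+N] * window      # windowed l-th segment
        X = np.fft.fft(segment)              # DTFT sampled at Omega = 2*pi*mu/N
        psd += np.abs(X)**2 / N              # periodogram of the l-th segment
    return psd / L                           # average over all segments
```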
### Example
The following example is equivalent to the previous [periodogram example](periodogram.ipynb#Example---Periodogram). We aim at estimating the PSD of a random process which draws samples from normally distributed white noise with zero-mean and unit variance. The true PSD is consequently given as $\Phi_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = 1$.
```python
import numpy as np
import matplotlib.pyplot as plt
import scipy.signal as sig
N = 128 # length of segment
M = 64 # stepsize
L = 100 # total number of segments
# generate random signal
np.random.seed(5)
x = np.random.normal(size=L*M)
# estimate PSD by Welch's method
nf, Pxx = sig.welch(x, window='hamming', nperseg=N, noverlap=(N-M))
Pxx = .5*Pxx # due to normalization in scipy.signal
Om = 2*np.pi*nf
# plot results
plt.figure(figsize=(10, 4))
plt.stem(Om, Pxx, 'C0',
label=r'$\hat{\Phi}_{xx}(e^{j \Omega})$', basefmt=' ', use_line_collection=True)
plt.plot(Om, np.ones_like(Pxx), 'C1', label=r'$\Phi_{xx}(e^{j \Omega})$')
plt.title('Estimated and true PSD')
plt.xlabel(r'$\Omega$')
plt.axis([0, np.pi, 0, 2])
plt.legend()
# compute bias/variance of the estimator
print('Bias of the Welch estimate: \t\t {0:1.4f}'.format(np.mean(Pxx-1)))
print('Variance of the Welch estimate: \t {0:1.4f}'.format(np.var(Pxx)))
```
Bias of the Welch estimate: -0.0114
Variance of the Welch estimate: 0.0255
**Exercise**
* Compare the results to the periodogram example. Is the variance of the estimator lower?
* Change the number of segments `L`. What changes?
* Change the segment length `N` and stepsize `M`. What changes?
Solution: When comparing the estimates of the PSD in the previous periodogram example and the above example, it is obvious that the variance of the Welch estimator is lower. Increasing the number of segments `L` lowers the variance further. Increasing the segment length `N` increases the total number of discrete frequencies in the estimated PSD. Since in the above example the total number of segments is kept constant, the variance increases. Lowering the step size `M` has the same effect, since the total number of samples is reduced for a fixed number of segments.
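A quick numerical check of the first two exercise items is sketched below; the chosen values of `L` are arbitrary and the exact numbers will vary, but the variance of the estimate should shrink as more segments are averaged.

```python
import numpy as np
import scipy.signal as sig

N, M = 128, 64
np.random.seed(1)
for L in (10, 100, 1000):
    x = np.random.normal(size=L*M)
    nf, Pxx = sig.welch(x, window='hamming', nperseg=N, noverlap=N-M)
    print('L = {0:4d}: variance of the estimate = {1:.4f}'.format(L, np.var(.5*Pxx)))
```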
### Evaluation
It is shown in [[Stoica et al.](../index.ipynb#Literature)] that Welch's method is asymptotically unbiased. Under the assumption of a wide-sense stationary (WSS) random process, the periodograms $\hat{\Phi}_{xx,l}(e^{j \Omega})$ of the segments can be assumed to be approximately uncorrelated. Hence, averaging over these reduces the overall variance of the estimator. It can be shown formally that in the limiting case of an infinite number of segments (infinitely long signal) the variance tends towards zero. As a result Welch's method is an asymptotically consistent estimator of the PSD.
Note that for a finite segment length $N$ the properties of the estimated PSD $\hat{\Phi}_{xx}(e^{j \Omega})$ depend on the length $N$ of the segments and the window function $w[k]$ due to the [leakage effect](../spectral_analysis_deterministic_signals/leakage_effect.ipynb).
**Copyright**
This notebook is provided as [Open Educational Resource](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Sascha Spors, Digital Signal Processing - Lecture notes featuring computational examples*.
# Conditional Probability and Conditional Expectation
## Intro
- given some partial information
- or just first "condition" on some appropriate $r.v.
\DeclareMathOperator*{\argmin}{argmin}
\newcommand{\ffrac}{\displaystyle \frac}
\newcommand{\Tran}[1]{{#1}^{\mathrm{T}}}
\newcommand{\d}[1]{\displaystyle{#1}}
\newcommand{\EE}[2][\,\!]{\mathbb{E}_{#1}\left[#2\right]}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\Var}[2][\,\!]{\mathrm{Var}_{#1}\left[#2\right]}
\newcommand{\Cov}[2][\,\!]{\mathrm{Cov}_{#1}\left(#2\right)}
\newcommand{\Corr}[2][\,\!]{\mathrm{Corr}_{#1}\left(#2\right)}
\newcommand{\using}[1]{\stackrel{\mathrm{#1}}{=}}
\newcommand{\I}[1]{\mathrm{I}\left( #1 \right)}
\newcommand{\N}[1]{\mathrm{N} \left( #1 \right)}
\newcommand{\space}{\text{ }}
\newcommand{\bspace}{\;\;\;\;}
\newcommand{\QQQ}{\boxed{?\:}}
\newcommand{\CB}[1]{\left\{ #1 \right\}}
\newcommand{\SB}[1]{\left[ #1 \right]}
\newcommand{\P}[1]{\left( #1 \right)}
\newcommand{\ow}{\text{otherwise}}$
## The Discrete Case
$\forall$ events $E$ and $F$, the **conditional probability** of $E$ *given* $F$ is defined, as long as $P(F) > 0$, by
$$P(E\mid F) = \ffrac{P(EF)} {P(F)}$$
Hence, if $X$ and $Y$ are discrete $r.v.$, then it's natural to define the **conditional probability mass function** of $X$ *given* that $Y=y$ and $P\CB{Y=y}>0$, by:
$$\begin{align}
p_{X\mid Y}(x\mid y) &= P\CB{X=x\mid Y=y} \\[0.5em]
&= \ffrac{P\CB{X=x,\, Y=y}} {P\CB{Y=y}} \\
&= \ffrac{p(x,y)} {p_{Y}(y)}
\end{align}$$
and the conditional cdf (cumulative distribution function) of $X$ *given* that $Y=y$ and $P\CB{Y=y}>0$ is
$$\begin{align}
F_{X\mid Y}(x\mid y) &= P\CB{X \leq x\mid Y=y} \\[0.5em]
&= \sum_{a \leq x} p_{X\mid Y}(a\mid y)
\end{align}$$
and finally, the conditional expectation of $X$ *given* that $Y=y$ and $P\CB{Y=y}>0$ is defined by
$$\begin{align}
\EE{X\mid Y=y} &= \sum_{x} x \cdot P\CB{X = x\mid Y=y} \\[0.5em]
&= \sum_{x} x \cdot p_{X\mid Y}(x\mid y)
\end{align}$$
$Remark$
If $X$ is independent of $Y$, then all the aforementioned definitions are identical to what we have learned before.
**e.g.**
If $X_1$ and $X_2$ are independent **binomial** $r.v.$ with respective parameters $(n_1,p)$ and $(n_2,p)$. Find the conditional pmf of $X_1$ given that $X_1 + X_2 = m$.
>$\begin{align}
P\CB{X_1 = k \mid X_1 + X_2 = m} &= \ffrac{P\CB{X_1 = k , X_1 + X_2 = m}} {P\CB{X_1 + X_2 = m}} \\[0.6em]
&= \ffrac{P\CB{X_1 = k}\cdot P\CB{ X_2 = m-k}} {P\CB{X_1 + X_2 = m}}\\
&= \ffrac{\d{\binom{n_1} {k} p^k q^{\d{n_1 - k}} \cdot \binom{n_2} {m-k} p^{m-k} q^{\d{n_2 - m + k}}}} {\d{\binom{n_1 + n_2}{m} p^m q^{\d{n_1 + n_2 -m}}}}
\end{align}$
>
>Here $q = 1-p$ and $X_1 + X_2$ is a **binomial** $r.v.$ as well with parameters $(n_1 + n_2 , p)$. Simplify that, we obtain:
>
> $$P\CB{X_1 = k \mid X_1 + X_2 = m} = \ffrac{\d{\binom{n_1} {k}\binom{n_2} {m-k}}} {\d{\binom{n_1 + n_2}{m}}}$$
>
>$Remark$
>
>This is a hypergeometric distribution.
***
**e.g.**
If $X$ and $Y$ are independent **poisson** $r.v.$ with respective parameters $\lambda_1$ and $\lambda_2$. Find the conditional pmf of $X$ given that $X+Y = n$.
>We follow the same fashion and can easily get that
>
>$\begin{align}
P\CB{X = k \mid X + Y = n} &= \ffrac{e^{-\lambda_1} \; \lambda_1^k} {k!}\cdot\ffrac{e^{-\lambda_2}\; \lambda_2^{n-k}} {(n-k)!} \left[\ffrac{e^{-\lambda_1 - \lambda_2} \left(\lambda_1 + \lambda_2\right)^{n}} {n!}\right]^{-1} \\
&= \ffrac{n!} {k!(n-k)!}\left(\ffrac{\lambda_1} {\lambda_1 + \lambda_2}\right)^k\left(\ffrac{\lambda_2} {\lambda_1 + \lambda_2}\right)^{n-k}
\end{align}$
>
>Given this, we can say that the conditional distribution of $X$ given that $X+Y=n$ is the binomial distribution with parameters $n$ and $\lambda_1/\left(\lambda_1 + \lambda_2\right)$. Hence, we also have
>
>$$\EE{X \mid X+Y = n} = n\ffrac{\lambda_1}{\lambda_1 + \lambda_2}$$
***
## Continuous Case
If $X$ and $Y$ have a joint probability density function $f\P{x,y}$, then the ***conditional pdf*** of $X$, given that $Y=y$, is defined for all values of $y$ such that $f_Y\P{y} > 0$, by
$$f_{X \mid Y} \P{x \mid y} = \ffrac{f\P{x,y}} {f_Y\P{y}}$$
Then the expectation: $\EE{X \mid Y = y} = \d{\int_{-\infty}^{\infty}} x \cdot f_{X \mid Y} \P{x \mid y} \;\dd{x}$
**e.g.**
Joint density of $X$ and $Y$: $f\P{x,y} = \begin{cases}
6xy(2-x-y), & 0 < x < 1, 0 < y < 1\\
0, & \text{otherwise}
\end{cases}$
Compute the conditional expectation of $X$ given that $Y=y$, where $0 <y <1$.
>We first need to compute the conditional density:
>
>$f_{X \mid Y} \P{x\mid y} = \ffrac{f\P{x,y}} {f_Y\P{y}} = \ffrac{6xy(2-x-y)} {\d{\int_{0}^{1} 6xy(2-x-y) \;\dd{x}}} = \ffrac{6x(2-x-y)} {4-3y}$
>
>Hence we can find the expectation
>
>$\EE{X \mid Y=y} = \d{\int_{0}^{1}} x \cdot \ffrac{6x(2-x-y)} {4-3y} \;\dd{x} = \ffrac{5-4y} {8-6y}$
***
**e.g.** The ***t-Distribution***
If $Z$ and $Y$ are independent, with $Z$ having a **standard normal** distribution and $Y$ having a **chi-squared** distribution with $n$ degrees of freedom, then the random variable $T$ defined by
$$T = \ffrac{Z} {\sqrt{Y/n}} = \sqrt{n} \ffrac{Z} {\sqrt{Y}}$$
is said to be a $\texttt{t-}r.v.$ with $n$ degrees of freedom. We now compute its density function:
>Here the strategy is to write the joint density as the conditional density of $T$ given $Y$ multiplied by the marginal density of $Y$, and then integrate out $y$.
>
>$$f_T(t) = \int_{0}^{\infty} f_{T,Y}\P{t,y} \;\dd{y} = \int_{0}^{\infty} f_{T\mid Y}\P{t \mid y} \cdot f_{Y}(y) \;\dd{y}$$
>
>Since we already have the pdf of a **chi-squared** $r.v.$, $f_Y(y) = \ffrac{e^{-y/2} y^{n/2-1}} {2^{n/2} \Gamma\P{n/2}}$ for $y > 0$. Moreover, $T$ conditioned on $Y=y$ has a **normal distribution** with mean $0$ and variance $\P{\sqrt{\ffrac{n} {y}}}^2 = \ffrac{n} {y}$, so
>
>$f_{T \mid Y} \P{t \mid y} = \ffrac{1} {\sqrt{2\pi n /y}} \exp\CB{-\ffrac{t^2 y} {2n}} = \ffrac{y^{1/2}} {\sqrt{2 \pi n}} \exp\CB{-\ffrac{t^2 y} {2n}}$ for $-\infty < t < \infty $. Then:
>
>$$\begin{align}
f_{T}(t) &= \int_{0}^{\infty} \ffrac{y^{1/2}} {\sqrt{2 \pi n}} \exp\CB{-\ffrac{t^2 y} {2n}} \cdot \ffrac{e^{-y/2} y^{n/2-1}} {2^{n/2} \Gamma\P{n/2}} \;\dd{y} \\[0.6em]
&= \ffrac{1} {\sqrt{\pi n} \; 2^{\P{n+1}/2} \; \Gamma\P{n/2}} \int_{0}^{\infty} \exp\CB{-\ffrac{1} {2} \P{1 + \ffrac{t^2} {n}}y} \cdot y^{\P{n-1}/2} \; \dd{y} \\[0.8em]
& \;\;\;\;\text{then we let } \ffrac{1} {2} \P{1 + \ffrac{t^2} {n}}y = x \\[0.8em]
&= \ffrac{\P{1 + \frac{t^2} {n}}^{-\P{n+1}/2} \P{1/2}^{-\P{n+1}/2}} {\sqrt{\pi n} \; 2^{\P{n+1}/2} \; \Gamma\P{n/2}} \int_{0}^{\infty} e^{-x} x^{\P{n-1}/2} \;\dd{x} \\[0.6em]
&= \ffrac{\P{1 + \frac{t^2} {n}}^{-\P{n+1}/2}} {\sqrt{\pi n} \; \Gamma\P{n/2}} \Gamma\P{\ffrac{n-1} {2} + 1} = \ffrac{\Gamma\P{\frac{n+1} {2}}} {\sqrt{\pi n} \; \Gamma\P{\frac{n} {2}}} \P{1 + \ffrac{t^2} {n}}^{-\P{n+1}/2}
\end{align}$$
$Remark$
The computation would be much easier if the joint distribution of $T$ and $Y$ were given directly.
***
**e.g.**
Joint density of $X$ and $Y$: $f\P{x,y} = \begin{cases}
\ffrac{1} {2} y e^{-xy}, & 0 < x < \infty, 0 < y < 2 \\[0.7em]
0, &\ow
\end{cases}$. So what is $\EE{e^{X/2} \mid Y=1}$?
> This time will be much easier, we first obtain the conditional density of $X$ given that $Y=1$:
>
>$$\begin{align}
f_{X\mid Y} \P{x \mid 1} &= \ffrac{f\P{x,1}} {f_Y\P{1}} \\
&= \ffrac{\ffrac{1} {2} e^{-x}} {\d{\int_{0}^{\infty}\frac{1} {2} e^{-x} \; \dd{x}}} = e^{-x}
\end{align}$$
>
>Hence, we have $\EE{e^{X/2} \mid Y=1} = \d{\int_{0}^{\infty}} e^{x/2} \cdot f_{X\mid Y} \P{x \mid 1} \; \dd{x} = 2$
***
**(V)e.g.**
Let $X_1$ and $X_2$ be independent **exponential** $r.v.$ with rates $\mu_1$ and $\mu_2$. Find the conditional density of $X_1$ given that $X_1 + X_2 = t$.
>We'll be using the formula: $f_{\d{X_1 \mid X_1 + X_2}}\P{x \mid t} = \ffrac{f_{\d{X_1, X_1 + X_2}}\P{x, t}} {f_{\d{ X_1 + X_2}}\P{t}}$. The denominator does not depend on $x$, so for fixed $t$ it is just a normalizing constant. As for the numerator, using the Jacobian determinant we can find that
>
>$$J = \begin{vmatrix}
\ffrac{\partial x} {\partial x} & \ffrac{\partial x} {\partial y}\\
\ffrac{\partial x+y} {\partial x} & \ffrac{\partial x+y} {\partial y}
\end{vmatrix} = 1$$
>
>So our conclusion is: $f_{\d{X_1, X_1 + X_2}}\P{x, t} = f_{\d{X_1, X_2}}\P{x_1,x_2} \cdot J^{-1} = f_{\d{X_1, X_2}}\P{x,t-x}$. Plug in, we have
>
>$$\begin{align}
f_{\d{X_1 \mid X_1 + X_2}}\P{x \mid t} &= \ffrac{f_{\d{X_1, X_2}}\P{x,t-x}} {f_{\d{X_1 + X_2}}\P{t}} \\
&= \ffrac{1} {f_{\d{X_1 + X_2}}\P{t}} \cdot \P{\:\mu_1 e^{\d{-\mu_1 x}}}\P{\mu_2 e^{\d{-\mu_2 \P{t-x}}}}, \;\;\;\; 0 \leq x \leq t \\[0.7em]
&= C \cdot \exp\CB{-\P{\mu_1 - \mu_2}x}, \;\;\;\; 0 \leq x \leq t
\end{align}$$
>
>Here $C = \ffrac{\mu_1 \mu_2 e^{\d{-\mu_2t}}} {f_{\d{X_1 + X_2}}\P{t}} $. The easier situation is when $\mu_1 = \mu_2 = \mu$, then $f_{\d{X_1 \mid X_1 + X_2}}\P{x \mid t} = C, 0 \leq x \leq t$, which is a uniform distribution, yielding that $C = 1/t$.
>
>And when they're not equal, we need to use:
>
>$$1 = \int_{0}^{t} f_{\d{X_1 \mid X_1 + X_2}}\P{x \mid t} \;\dd{x} = \ffrac{C} {\mu_1 - \mu_2} \P{1 - \exp\CB{-\P{\mu_1 - \mu_2}t}}\\[1.6em]
\Longrightarrow C = \ffrac{\mu_1 - \mu_2} {1 - \exp\CB{-\P{\mu_1 - \mu_2}t}}$$
>Now we can see the final answer and the byproduct:
>
>$f_{\d{X_1 \mid X_1 + X_2}}\P{x \mid t} = \begin{cases}
1/t, & \text{if }\mu_1 = \mu_2 = \mu\\[0.6em]
\ffrac{\P{\mu_1 - \mu_2}\exp\CB{-\P{\mu_1 - \mu_2}x}} {1 - \exp\CB{-\P{\mu_1 - \mu_2}t}}, &\text{if } \mu_1 \neq \mu_2
\end{cases}$
>
>$
f_{\d{X_1 + X_2}}\P{t} = \begin{cases}
\mu^2 t e^{-\mu t}, & \text{if }\mu_1 = \mu_2 = \mu\\[0.6em]
\ffrac{\mu_1\mu_2\P{\exp\CB{-\mu_2t} - \exp\CB{-\mu_1t}}} {\mu_1 - \mu_2}, &\text{if } \mu_1 \neq \mu_2
\end{cases}$
$Remark$
>When calculating $f_{\d{ X_1 + X_2}}\P{t}$, $t$ is no longer treated as a given constant.
***
## Computing Expectation by Conditioning
Denote $\EE{X \mid Y}$ as the *function* of the $r.v.$ $Y$ whose value at $Y = y$ is $\EE{X \mid Y = y}$. An extremely important property of conditional expectation is that for all $r.v.$ $X$ and $Y$,
$$\EE{X} = \EE{\EE{X \mid Y}} = \begin{cases}
\d{\sum_y} \EE{X \mid Y = y} \cdot P\CB{Y = y}, & \text{if } Y \text{ discrete}\\[0.5em]
\d{\int_{-\infty}^{\infty}} \EE{X \mid Y = y} \cdot f_Y\P{y}\;\dd{y}, & \text{if } Y \text{ continuous}
\end{cases}$$
We can interpret this as that when we calculate $\EE{X}$ we may take a *weighted average* of the conditional expected value of $X$ given $Y = y$, each of the terms $\EE{X \mid Y = y}$ being weighted by the probability of the event on which it is conditioned.
**e.g.** The Expectation of the Sum of a Random Number of Random Variables
The expected number of unqualified cookie products per week at an industrial plant is $n$, and the number of broken cookies in each product are independent $r.v.$ with a common mean $m$. Also assume that the number of broken cookies in each product is independent of the number of unqualified products. Then, what's the expected number of broken cookies during a week?
>Letting $N$ denote the number of unqualified cookie products and $X_i$ the number of broken cookies inside the $i\texttt{-th}$ product, the total number of broken cookies is $\sum X_i$. To bring the sum out of the expectation (expectation is linear, but the number of terms $N$ is itself a $r.v.$), we condition on $N$
>
>$\bspace \d{\EE{\sum\nolimits_{i=1}^{N} X_i} = \EE{\EE{\sum\nolimits_{i=1}^{N} X_i \mid N}}}$
>
>By the independence of $X_i$ and $N$, we can derive that:
>
>$\bspace \d{\EE{\sum_{i=1}^{N}X_i \mid N =n} = \EE{\sum_{i=1}^{n} X_i} = n\cdot\EE{X}}$,
>
>which yields the function for $N$: $\d{\EE{\sum\nolimits_{i=1}^{N}X_i\mid N} = N \cdot \EE{X}}$ and thus:
>
>$\bspace \d{\EE{\sum_{i=1}^{N}X_i} = \EE{N\cdot \EE{X}} = \EE{N} \cdot \EE{X} = mn}$
$Remark$
>A ***compound random variable*** is the sum of a random number (like the preceding $N$) of $i.i.d.$ $r.v.$ (like the preceding $X_i$) that are also independent of $N$.
**e.g.** The Mean of a Geometric Distribution
Before, we used derivatives or term-by-term subtraction of shifted series to obtain that $\EE{X} = \d{\sum_{n=1}^{\infty} n \cdot p(1-p)^{n-1}} = \ffrac{1} {p}$. Now a new method can be applied here:
>Let $N$ be the number of trials required and define $Y$ as
>
>$Y = \bspace \begin{cases}
1, &\text{if the first trial is a success} \\[0.6em]
0, &\text{if the first trial is a failure} \\
\end{cases}$
>
>Then $\EE{N} = \EE{\EE{N\mid Y}} = \EE{N\mid Y = 1}\cdot P\CB{Y = 1} + \EE{N \mid Y = 0}\cdot P\CB{Y=0}$. However, $\EE{N \mid Y = 1} = 1$ (no more trials needed) and $\EE{N \mid Y = 0} = 1 + \EE{N}$. Substitute these back we have the equation:
>
>$\bspace \EE{N} = 1\cdot p + \P{1 + \EE{N}}\cdot \P{1-p} \space \Longrightarrow \space \EE{N} = 1/p$
**e.g.** Multinomial Covariances
Consider $n$ independent trials, each of which results in one of the outcomes $1, 2, \dots, r$, with respective probabilities $p_1, p_2, \dots, p_r$, and $\sum p_i = 1$. If we let $N_i$ denote the number of trials that result in outcome $i$, then $\P{N_1, N_2, \dots, N_r}$ is said to have a ***multinomial distribution***. For $i \neq j$, let us compute $\Cov{N_i, N_j} = \EE{N_i N_j} - \EE{N_i} \EE{N_j}$
> Each trial independently results in outcome $i$ with probability $p_i$, and it follows that $N_i$ is binomial with parameters $\P{n, p_i}$, so that $\EE{N_i}\EE{N_j} = n^2 p_i p_j$. Then, to compute $\EE{N_i N_j}$, we condition on $N_i$ and obtain:
>
>$\bspace \begin{align}
\EE{N_i N_j}&=\sum_{k=0}^{n} \EE{N_i N_j \mid N_i = k}\cdot P\CB{N_i = k} \\
&= \sum\nolimits_{k=0}^{n} k\EE{N_j \mid N_i = k} \cdot P\CB{N_i = k}
\end{align}$
>
>Now given that only $k$ of the $n$ trials result in outcome $i$, each of the other $n-k$ trials independently results in outcome $j$ with probability: $P\P{j \mid \text{not }i} = \frac{p_j} {1-p_i}$, thus showing that the conditional distribution of $N_j$, given that $N_i = k$, is binomial with parameters $\P{n-k,\ffrac{p_j} {1-p_i}}$ and its expectation is: $\EE{N_j \mid N_i = k} = \P{n-k} \ffrac{p_j} {1-p_i}$.
>
>Using this yields:
>$\bspace \begin{align}
\EE{N_i N_j} &= \sum_{k=0}^{n} k\P{n-k} \ffrac{p_j} {1-p_i} P\CB{N_i = k} \\
&= \ffrac{p_j} {1-p_i} \P{n \sum_{k=0}^{n} kP\CB{N_i = k} - \sum_{k=0}^{n} k^2 P \CB{N_i = k}} \\
&= \ffrac{p_j} {1-p_i}\P{n\EE{N_i} - \EE{N_i^2}}
\end{align}$
>And $N_i$ is a binomial $r.v.$ with parameters $\P{n,p_i}$, thus, $\EE{N_i^2} = \Var{N_i} + \EE{N_i}^2 = np_i\P{1-p_i} + \P{np_i}^2$. Hence, $\EE{N_iN_j} = \ffrac{p_j} {1-p_i} \SB{n^2 p_i - np_i\P{1-p_i} - n^2p_i^2} = n\P{n-1}p_ip_j$, which yields the result:
>
>$\bspace \Cov{N_i, N_j} = n\P{n-1}p_ip_j - n^2p_ip_j = -np_ip_j$
**(V)e.g.** The Matching Rounds Problem, continuing the hat-matching example from the last chapter. Those who have already chosen their own hats leave the room; the wrongly matched hats are then mixed again and the remaining people reselect. This process continues until each individual has his own hat.
$\P{1}$ Let $R_n$ be the number of rounds that are necessary when $n$ individuals are initially present. Find $\EE{R_n}$.
>Following the results from the last example, each round produces on average only *one* match, no matter how many candidates remain. So intuitively, $\EE{R_n} = n$. Here's an induction proof. Firstly, it's obvious that $\EE{R_1} = 1$; then we assume that $\EE{R_k} = k$ for $k = 1,2,\dots,n-1$, and we find $\EE{R_n}$ by conditioning on $X_n$, which is the number of matches that occur in the first round:
>
>$\bspace \EE{R_n} = \d{\sum_{i=0}^{n} \EE{R_n \mid X_n = i}} \cdot P\CB{X_n = i}$. Now given that there're totally $i$ matches in the first round, then we have $\EE{R_n \mid X_n = i} = 1 + \EE{R_{n-i}}$.
>
>$\bspace \begin{align}
\EE{R_n} &= \sum_{i=0}^{n} \P{1 + \EE{R_{n-i}}} \cdot P\CB{X_n = i} \\
&= 1 + \EE{R_n} P\CB{X_n = 0} + \sum_{i=1}^{n} \EE{R_{n-i}} \cdot P\CB{X_n = i} \\
&= 1 + \EE{R_n} P\CB{X_n = 0} + \sum_{i=1}^{n} \P{n-i} \cdot P\CB{X_n = i}, \; \text{as the induction hypothesis}\\
&= 1 + \EE{R_n} P\CB{X_n = 0} + n\P{1-P\CB{X_n = 0}} - \EE{X_n}, \; \P{\EE{X_n} = 1 \text{ as the result before}}\\[0.6em]
&= \EE{R_n} P\CB{X_n = 0} + n\P{1-P\CB{X_n = 0}}\\[0.7em]
& \bspace \text{then we solve the equation,}\\[0.7em]
\Longrightarrow& \space\EE{R_n} = n
\end{align}$
$Remark$
>Assuming something happened in the first trial...
$\P{2}$ Let $S_n$ be the total number of selections made by the $n$ individuals for $n \geq 2$. Find $\EE{S_n}$
>Still we condition on $X_n$, which gives:
>
>$\bspace \begin{align}
\EE{S_n} &= \sum_{i=0}^{n} \EE{S_n \mid X_n = i} \cdot P\CB{X_n = i} \\
&= \sum_{i=0}^{n} \P{n + \EE{S_{n-i}}} \cdot P\CB{X_n = i} = n + \sum_{i=0}^{n} \EE{S_{n-i}} \cdot P\CB{X_n = i}
\end{align}$
>
>And since $\EE{S_0} = 0$ we rewrite it as $\EE{S_n} = n + \EE{S_{n-X_n}}$. To solve this, we first make a guess. What if there were exactly one match in each round?
>Then there would be $n+\P{n-1}+\cdots+1 = n\P{n+1}/2$ selections in total. So, for $n \geq 2$, we assume a quadratic form and require that
>
>$\bspace an + bn^2 = n + \EE{a\P{n-X_n} + b\P{n-X_n}^2} = n + a\P{n-\EE{X_n}} + b\P{n^2 - 2n\EE{X_n} + \EE{X_n^2}}$
>
>Using the earlier results that $\EE{X_n} = \Var{X_n} = 1$ (so $\EE{X_n^2} = 2$), we can solve for $a=1$ and $b = 1/2$. So the guess is $\EE{S_n} = n + n^2/2$. Now we prove it by induction on $n$.
>
> For $n=2$, the number of rounds is a geometric random variable with parameter $p = 1/2$, and the number of selections is twice the number of rounds. Thus, $\EE{S_2} = 4$, which agrees with the formula.
>
>Hence, upon assuming $\EE{S_0} = \EE{S_1} = 0$ and $\EE{S_k} = k+k^2/2$ for $k=2,3,\dots,n-1$:
>
>$\bspace \begin{align}
\EE{S_n} &= n + \EE{S_n} \cdot P\CB{X_n = 0} + \sum_{i=1}^{n} \SB{n-i+\P{n-i}^2/2}\cdot P\CB{X_n=i}\\
&= n + \EE{S_n}\cdot P\CB{X_n = 0} + \P{n+n^2/2}\P{1-P\CB{X_n = 0}} - \P{n+1}\EE{X_n} + \ffrac{\EE{X_n^2}} {2}\\[0.7em]
& \bspace \text{then we solve the equation by using }\EE{X_n} = 1 \text{ and } \EE{X_n^2} = 2\\[0.7em]
\Longrightarrow& \space\EE{S_n} = n+n^2/2
\end{align}$
$\P{3}$ Find the expected number of false selections made by one of the $n$ people for $n \geq 2$.
>Let $C_j$ denote the total number of hats chosen by person $j$; then $\sum C_j = S_n$. Taking expectations and using the fact that each $C_j$ has the same mean, $\EE{C_j} = \EE{S_n}/ n = 1 + n/2$. Thus, the expected number of false selections is $\EE{C_j - 1} = n/2$.
$\P{4}$ Suppose that in the first round the first person does not find a match. Given this, find the conditional expected number of matches.
>Let $Y$ equal $1$ if the first person has a match and $0$ otherwise. Let $X$ denote the number of matches. Then, with the result before we have
>
>$\bspace \begin{align}
1 = \EE{X} &= \EE{X\mid Y=1} P\CB{Y=1} + \EE{X\mid Y=0} P\CB{Y=0} \\
&= \ffrac{\EE{X\mid Y=1}} {n} + \ffrac{n-1} {n}\EE{X \mid Y = 0}
\end{align}$
>
>Given that $Y=1$, the remaining $n-1$ people choose among their own $n-1$ hats, so the expected number of matches among them is $1$ and $\EE{X\mid Y=1} = 1+1=2$. Solving the equation then gives $\EE{X \mid Y=0} = \P{n-2}/\P{n-1}$.
**(V)e.g.** Consecutive successes
Independent trials, each of which is a success with probability $p$, are performed until there are $k$ consecutive successes. What is the expected number of necessary trials?
>Let $N_k$ denote the number of trials necessary to obtain $k$ consecutive successes and let $M_k = \EE{N_k}$. We may write $N_k = N_{k-1} + \texttt{something}$, where that something is the number of additional trials needed to go from having $k-1$ successes in a row to having $k$ in a row. We denote it by $A_{k-1,k}$. Taking expectations we obtain:
>$\bspace M_k = M_{k-1} + \EE{A_{k-1,k}}$. To find $\EE{A_{k-1,k}}$, note that after reaching $k-1$ successes in a row there are only two possibilities: the next trial is a success, or it is not. Then:
>
>$\bspace \EE{A_{k-1,k}} = 1 \cdot p + \P{1+M_k}\cdot\P{1-p} = 1+\P{1-p}M_k \Rightarrow M_k = \ffrac{1} {p} + \ffrac{M_{k-1}} {p}$
>
>Well, what's $M_1$? Obviously $N_1$ is a geometric with parameter $p$, thus $M_1 = 1/p$ and recursively we have:
>
>$\bspace M_k = \ffrac{1} {p} + \ffrac{1} {p^2} + \cdots + \ffrac{1} {p^k}$
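As a sanity check of this closed form (not part of the original notes), a brute-force simulation with arbitrary values of $p$, $k$ and sample size can be compared against the sum:

```python
import numpy as np

def mean_trials_until_run(p, k, n_sim=100_000, seed=0):
    """Simulated mean number of trials until k consecutive successes."""
    rng = np.random.default_rng(seed)
    total = 0
    for _ in range(n_sim):
        run = trials = 0
        while run < k:
            trials += 1
            run = run + 1 if rng.random() < p else 0
        total += trials
    return total / n_sim

p, k = 0.5, 3
print(mean_trials_until_run(p, k))           # simulated mean, roughly 14
print(sum(1/p**j for j in range(1, k+1)))    # 1/p + 1/p^2 + 1/p^3 = 14
```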
**(V)e.g.** Quick-Sort Algorithm
Given a set of $n$ distinct values, sort them increasingly. The quick-sort algorithm is defined recursively as follows: when $n=2$, compare the two values and put them in the appropriate order; when $n>2$, randomly choose one of the $n$ values, say $x_i$, and then compare each of the other $n-1$ values with $x_i$, noting which are smaller and which are larger than $x_i$. Letting $S_i$ denote the set of elements smaller than $x_i$ and $\bar{S}_i$ the set of elements greater than $x_i$, we then sort the sets $S_i$ and $\bar{S}_i$ recursively.
One measure of the effectiveness of this algorithm is the expected number of comparisons that it makes. Denote by $M_n$ the expected number of comparisons needed by the quick-sort algorithm to sort a set of $n$ distinct values. To obtain a recursion for $M_n$ we condition on the rank of the initial value selected to obtain
$$M_n = \sum_{j=1}^{n} \EE{\text{number of comparisons}\mid \text{value selected is actually j}\texttt{-th}\text{ smallest}}\cdot \ffrac{1} {n}$$
So $M_n = \d{\sum_{j=1}^{n} \P{n-1 + M_{j-1} + M_{n-j}}\cdot\frac{1} {n}} = n-1 + \ffrac{2} {n} \sum_{k=1}^{n-1} M_k$ ($M_0 = 0$), or equivalently, $nM_n = n\P{n-1} + 2\d{\sum_{k=1}^{n-1}}M_k$.
We now use a little knowledge of recursive sequences:
$\bspace \P{n+1}M_{n+1} - nM_n = 2n+2M_n \Longrightarrow M_{n+1} = 2\P{n+2}\sum\limits_{k=0}^{n-1}\ffrac{n-k} {\P{n+1-k}\P{n+2-k}} = 2\P{n+2}\sum\limits_{i=1}^{n} \ffrac{i} {\P{i+1}\P{i+2}}$ for $n \geq 1$.
$$\begin{align}
M_{n+1} &= 2\P{n+2}\SB{\sum_{i=1}^{n}\ffrac{2} {i+2} - \sum_{i=1}^{n}\ffrac{1} {i+1}} \\
&\approx 2\P{n+2}\SB{\int_{3}^{n+2} \ffrac{2} {x} \;\dd{x} - \int_{2}^{n+1} \ffrac{1} {x} \;\dd{x}}\\
&= 2\P{n+2}\SB{\log\P{n+2} + \log\P{\ffrac{n+1} {n+2}} + \log 2 -2\log 3}\\
&\approx 2\P{n+2}\log\P{n+2}
\end{align}$$
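The recursion for $M_n$ is easy to evaluate exactly, which also shows how good the $2(n+2)\log(n+2)$ approximation is. The following sketch is purely illustrative:

```python
import numpy as np

def expected_comparisons(n_max):
    M = np.zeros(n_max + 1)                     # M[0] = M[1] = 0
    for n in range(2, n_max + 1):
        M[n] = n - 1 + 2.0 / n * M[:n].sum()    # M_n = n-1 + (2/n) * sum_{k<n} M_k
    return M

M = expected_comparisons(1000)
for n in (10, 100, 1000):
    print(n, M[n], 2*(n+2)*np.log(n+2))         # exact value vs. approximation
```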
### Computing Variances by Conditioning
Here we first use the identity $\Var{X} = \EE{X^2} - \EE{X}^2$, computing both $\EE{X^2}$ and $\EE{X}$ by conditioning.
**e.g.** Variance of the Geometric $r.v.$
Independent trials, each resulting in a success with probability $p$, are performed in sequence. Let $N$ be the trial number of the first success. Find $\Var{N}$.
>Still we condition on the first trial: let $Y=1$ if the first trial is a success, and $0$ otherwise.
>
>$\bspace\begin{align}
\EE{N^2} &= \EE{\EE{N^2\mid Y}} \\
&= \EE{N^2 \mid Y=1}\cdot P\P{Y=1} + \EE{N^2 \mid Y=0}\cdot P\P{Y=0}\\
&= 1 \cdot p + \EE{\P{1+N}^2} \cdot \P{1-p}\\
&= 1 + \EE{2N+N^2} \cdot \P{1-p} \\
&= 1 + 2 \P{1-p}\EE{N} + \EE{N^2}\cdot \P{1-p}
\end{align}$
>
>And from **e.g.** The Mean of a Geometric Distribution, we've acquired that $\EE{N} = 1/p$, thus we can substitute this back and then solve the equation and get: $\EE{N^2} = \ffrac{2-p} {p^2}$. Then
>
>$\bspace \Var{N} = \ffrac{2-p} {p^2} - \P{\ffrac{1} {p}}^2 = \ffrac{1-p} {p^2}$
***
Then how about (a drink?) $\Var{X \mid Y}$? Here's the proposition.
$Proposition$ ***conditional variance formula***
$\bspace \Var{X} = \EE{\Var{X \mid Y}} + \Var{\EE{X \mid Y}}$
$Proof$
>$\bspace \begin{align}\EE{\Var{X \mid Y}} &= \EE{\EE{X^2 \mid Y} - \P{\EE{X\mid Y}}^2} \\
&= \EE{\EE{X^2 \mid Y}} - \EE{\P{\EE{X\mid Y}}^2} \\
&= \EE{X^2} - \EE{\P{\EE{X\mid Y}}^2}
\end{align}$
>
>$\bspace \begin{align}
\Var{\EE{X \mid Y}} &= \EE{\P{\EE{X \mid Y}}^2} - \P{\EE{\EE{X \mid Y}}}^2\\
&= \EE{\P{\EE{X \mid Y}}^2} - \P{\EE{X}}^2
\end{align}$
>
>Therefore, $\EE{\Var{X \mid Y}} + \Var{\EE{X \mid Y}} = \EE{X^2} - \P{\EE{X}}^2 = \Var{X}$
**e.g.** The Variance of a Compound $r.v.$
Let $X_1,X_2,\dots$ be $i.i.d.$ $r.v.$ with distribution $F$ having mean $\mu$ and variance $\sigma^2$, and assume that they are independent of the nonnegative integer-valued $r.v.$ $N$. Here the **compound $r.v.$** is $S = \sum_{i=1}^{N}X_i$. Find its variance.
>One could condition on $N$ to obtain $\EE{S^2}$ directly; here we use the conditional variance formula instead.
>
>$\bspace \Var{S\mid N=n} = \Var{\sum\limits_{i=1}^{n} X_i} = n\sigma^2$
>
>and with the similar reasoning, we have $\EE{S \mid N=n} = n\mu$. Thus, $\Var{S\mid N} = N\sigma^2$ and $\EE{S \mid N} = N \mu$ and the conditional variance formula gives
>
>$\bspace \Var{S} = \EE{N\sigma^2} + \Var{N\mu} = \sigma^2 \EE{N} + \mu^2 \Var{N}$
$Remark$
>We cannot directly write $\Var{S\mid N} = \EE{\P{S\mid N}^2}-\cdots$, because that expression is not well defined. The correct way is to first find $\Var{S\mid N = n}$ and then regard it as a function of the random variable $N$, i.e., $\Var{S\mid N} = f\P{N}$.
***
**e.g.** The Variance in the Matching Rounds Problem
Following the previous definition, $R_n$ is the number of rounds that are necessary when $n$ individuals are initially present. Let $V_n$ be its variance. Show that $V_n = n$ for $n \geq 2$.
> Actually when you try $n=2$, as shown before, it's a geometric with parameter $p = 1/2$, thus $V_2 = \ffrac{1-p} {p^2} = 2$. So the induction assumption here is $V_j = j$ for $2\leq j \leq n-1$. When there are $n$ individuals, we again condition on the first round; more specifically, let $X$ be the number of matches in the first round. Thus $\EE{R_n \mid X} = 1 + \EE{R_{n-X}}$ and by the previous result, we have $\EE{R_n \mid X} = 1 + n - X$. Also with $V_0 = 0$, we have $\Var{R_n \mid X} = \Var{R_{n-X}} = V_{n-X}$. Hence by the **conditional variance formula**,
>
>$\bspace \begin{align}
V_n &= \EE{\Var{R_n \mid X}} + \Var{\EE{R_n \mid X}} \\[0.6em]
&= \EE{V_{n-X}} + \Var{1+n-X} \\[0.5em]
&= \sum_{j=0}^{n} V_{n-j}\cdot P\CB{X=j} + \Var{X} \\
&= V_n \cdot P \CB{X=0} + \sum_{j=1}^{n} \P{n-j}\cdot P\CB{X=j} + \Var{X} \\
&= V_n \cdot P \CB{X=0} + n \P{1-P\CB{X=0}} - \EE{X} + \Var{X}
\end{align}$
>
>Here $\EE{X} = 1$ is given in an example from Chapter 2, and as for $\Var{X}$, using a similar indicator-variable method, we have
>
>$\bspace \begin{align}
\Var{X} &= \sum\Var{X_i} + 2\sum_{i=1}^{N}\sum_{j>i} \Cov{X_i,X_j} \\
&= N\P{\EE{X_i^2} - \P{\EE{X_i}}^2} + 2\sum_{i=1}^{N}\sum_{j>i} \P{\EE{X_iX_j} - \EE{X_i} \EE{X_j}} \\
&= N\P{\ffrac{1} {N} - \ffrac{1} {N^2}} + 2 \ffrac{N\P{N-1}} {2} \P{\ffrac{1} {N\P{N-1}} - \ffrac{1} {N^2}} =1
\end{align}$
>
>Thus we substitute these values into the last equation and solve it, which completes the induction.
## Computing Probabilities by Conditioning
We already know that by conditioning we can conveniently calculate expectations and variances. Now we present how to use this method to find certain probabilities. First we define the ***indicator $r.v.$***:
$\bspace X = \begin{cases}
1, &\text{if } E \text{ occurs}\\
0, &\ow
\end{cases}$
Then $\EE{X} = P\P{E}$ and $\EE{X\mid Y=y} = P\P{E\mid Y=y}$ for any $r.v.$ $Y$. Therefore,
$\bspace P\P{E} = \begin{cases}
\d{\sum_y P\CB{E \mid Y=y} \cdot P\CB{Y=y}}, & \text{if }Y\text{ is discrete}\\
\d{\int_{-\infty}^{\infty} P\CB{E \mid Y=y} \cdot f_Y\P{y}\;\dd{y}}, & \text{if }Y\text{ is continuous}\\
\end{cases}$
**e.g.** The probability of a $r.v.$ is less than another one
Let $X$ and $Y$ be two independent continuous $r.v.$ with densities $f_X$ and $f_Y$ respectively. Compute $P\CB{X < Y}$
> $\bspace \begin{align}
P\CB{X < Y} &= \int_{-\infty}^{\infty} P\CB{X<Y \mid Y=y} \cdot f_Y\P{y} \;\dd{y} \\
&= \int_{-\infty}^{\infty} P\CB{X<y} \cdot f_Y\P{y} \;\dd{y} \\
&= \int_{-\infty}^{\infty} F_X\P{y} \cdot f_Y\P{y} \;\dd{y} \\
\end{align}$
**e.g.**
Events occur according to a Poisson $r.v.$ with parameter $\lambda$, and each event is independently classified as a success with probability $p$ or a failure with probability $1-p$. Find the joint probability of exactly $n$ successes and $m$ failures.
>Let $T$ be the total number of events. We can directly write that
>
>$\bspace \begin{align}
P\CB{S=n,F=m} &= P\CB{S=n,F=m,T=n+m}\\
&= P\CB{S=n,F=m \mid T = n+m} \cdot P\CB{T = n+m}
\end{align}$
>
>or by
>
>$\bspace \begin{align}
P\CB{S=n,F=m} &= \sum\limits_{t=0}^{\infty}P\CB{S=n,F=m \mid T = t} \cdot P\CB{T =t} \\
&= 0 + P\CB{S=n,F=m \mid T = n+m} \cdot P\CB{T = n+m}
\end{align}$
>
>Thus, since it's just a binomial probability of $n$ successes in a $n+m$ trials, we have
>
>$\bspace \begin{align}
P\CB{S=n,F=m} &= \binom{n+m} {n} p^n \P{1-p}^{m} e^{-\lambda} \ffrac{\lambda^{n+m}} {\P{n+m}!}\\
&= \ffrac{\P{n+m}!} {n!m!} p^n \P{1-p}^{m} \lambda^n \lambda^m \ffrac{e^{-\lambda p}e^{-\lambda\P{1-p}}} {\P{n+m}!}\\
&= e^{-\lambda p} \ffrac{\P{\lambda p}^{n}} {n!}\cdot e^{-\lambda \P{1-p}} \ffrac{\P{\lambda \P{1-p}}^{m}} {m!}
\end{align}$
>
>That's it.
$Remark$
>The result can also be regarded as the product of two terms that depend only on $n$ and $m$ respectively. It follows that $S$ and $F$ are independent. Moreover,
>
>$\bspace P\CB{S=n} = \sum\limits_{m=0}^{\infty}P\CB{S=n,F=m} = e^{-\lambda p} \ffrac{\P{\lambda p}^{n}} {n!}$
>
>And similar for $P\CB{F=m} = e^{-\lambda \P{1-p}} \ffrac{\P{\lambda \P{1-p}}^{m}} {m!}$
$Remark$
>We can also generalize the result to the case where each of a Poisson distributed number of events, $N$, with mean $\lambda$ is independently classified as being one of $k$ types with the probability that it is type $i$ being $p_i$. And $N_i$ is the number that are classified as type $i$, then:
>
> $N_i$ for $i = 1,2,\dots,k$ are independent Poisson $r.v.$ with respective means $\lambda p_1,\lambda p_2,\dots, \lambda p_k$, and this follows that:
>
>$\bspace \begin{align}
&P\CB{N_1 = n_1, N_2 = n_2,\dots, N_k = n_k} \\
=\;& P\CB{N_1 = n_1, N_2 = n_2,\dots, N_k = n_k \mid N = n} \cdot P\CB{N = n} \\[0.5em]
=\;& \binom{n} {n_1,n_2,\dots,n_k} p_1^{n_1} p_2^{n_2} \cdots p_k^{n_k} \cdot e^{-\lambda} \ffrac{\lambda^n} {n!}, \bspace n = \sum_{i=1}^{k} n_i \\
=\;& \prod_{i=1}^{k} e^{-\lambda p_i} \ffrac{\P{\lambda p_i}^{n_i}}{n_i!}
\end{align}$
**e.g.** The Distribution of the Sum of Independent Bernoulli $r.v.$
Let $X_i$ be independent Bernoulli $r.v.$ with $P\CB{X_i = 1} = p_i$. What's the pmf of $\sum X_i$? Here's a recursive way to obtain all these.
> First let $P_k\P{j} = P\CB{X_1 + X_2 + \cdots + X_k = j}$ and note that $P_k\P{0} = \prod\limits_{i=1}^{k} q_i$ (where $q_i = 1 - p_i$) and $P_k\P{k} = \prod\limits_{i=1}^{k} p_i$. Then we condition $P_k\P{j}$ on $X_k$. (We condition on the last trial rather than the first one because we are building a recursion in $k$.)
>
> $\bspace \begin{align}
P_k\P{j} &= P\CB{X_1 + X_2 + \cdots + X_k = j \mid X_k = 1} \cdot p_k + P\CB{X_1 + X_2 + \cdots + X_k = j \mid X_k = 0} \cdot q_k \\
&= P_{k-1}\P{j-1} \cdot p_k + P_{k-1}\P{j} \cdot q_k
\end{align}$
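The recursion translates directly into code. The sketch below (the function name and the probabilities are hypothetical) builds the whole pmf of the sum:

```python
import numpy as np

def pmf_sum_bernoulli(p):
    """P{X_1 + ... + X_k = j}, j = 0, ..., k, via the recursion P_k(j)."""
    P = np.array([1.0])                 # pmf of the empty sum
    for pk in p:
        new = np.zeros(len(P) + 1)
        new[1:] += pk * P               # last variable equals 1: shift by one
        new[:-1] += (1 - pk) * P        # last variable equals 0: stay
        P = new
    return P

print(pmf_sum_bernoulli([0.2, 0.5, 0.9]))   # entries sum to 1
```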
**e.g.** The Best Prize Problem
$n$ girls appear in my life in sequence. Upon meeting each one I must decide immediately whether to confess; if I pass, I never get another chance with her. The only information I have when deciding whether to confess is the relative rank of that girl compared with the ones already met. My wish is to maximize the probability of ending up with the best one. Assuming all $n!$ orderings of the girls are equally likely, what should I do?
> Strategy: fix a value $k$ and confess only to the first girl after position $k$ who is better than all of the first $k$ girls. Under this strategy, let $P_k\P{\text{best}}$ denote the probability that the best girl is the one I confess to.
>And to find that we condition that on $X$, the position of best girl. This gives:
>
>$\bspace \begin{align}
P_k\P{\text{best}} &= \sum_{x=1}^{n} P_k\P{\text{best}\mid X =x}\cdot P\CB{X = x} \\
&= \ffrac{1} {n}\P{\sum_{x=1}^{k} P_k\P{\text{best}\mid X =x} +\sum_{x=k+1}^{n} P_k\P{\text{best}\mid X =x}}\\
&= \ffrac{1} {n}\P{0 + \sum_{x=k+1}^{n}\ffrac{k} {x-1}}\\
&\approx \ffrac{k} {n}\int_{k}^{n-1} \ffrac{1} {x} \;\dd{x} \approx \ffrac{k} {n} \log\P{\ffrac{n} {k}}
\end{align}$
>To find its maximum, we differentiate $g\P{x} = \ffrac{x} {n} \log\P{\ffrac{n} {x}}$ and obtain the maximizer $x^{*} = \ffrac{n} {e}$, which gives a success probability of about $1/e$.
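A direct numerical evaluation (with an arbitrary $n$) confirms that the optimal cutoff is close to $n/e$ and that the success probability is about $1/e$; this check is not part of the original notes:

```python
import numpy as np

n = 100
k = np.arange(1, n)
# exact value: P_k(best) = (k/n) * sum_{x=k+1}^{n} 1/(x-1)
P = np.array([(kk / n) * np.sum(1.0 / np.arange(kk, n)) for kk in k])
print('best k:', k[P.argmax()], '  n/e:', n / np.e, '  max probability:', P.max())
```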
**e.g.** Hat match game
$n$ men have their hats mixed up, and each man randomly selects one. What is the probability of no matches? Or of exactly $k$ matches?
>We are finding this by conditioning on whether or not the first man selects his own hat, $M$ or $M^c$. Let $E$ denote the event that no matches occur. Then
>
>$\bspace P_n = P\P{E} = P\P{E\mid M} \cdot P\P{M} + P\P{E\mid M^c} \cdot P\P{M^c}=P\P{E\mid M^c}\ffrac{n-1} {n}$
>
>Following that, since it's given that the first man doesn't get a hat, then there're only $n-1$ men left and thus
>
>$\bspace P\P{E\mid M^c} = P_{n-1} + \ffrac{1} {n-1} P_{n-2}$
>
>which follows by conditioning on whether or not the man whose hat was taken by the first man selects the first man's hat in return.
>
>Then, since $P_1 = 0$ and $P_2 = 0.5$, we can find all of them recursively.
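The resulting recursion $P_n = \frac{n-1}{n}\left(P_{n-1} + \frac{P_{n-2}}{n-1}\right)$ converges very quickly to $e^{-1}$, as the short illustrative sketch below shows:

```python
import numpy as np

P = {1: 0.0, 2: 0.5}
for n in range(3, 11):
    P[n] = (n - 1) / n * (P[n - 1] + P[n - 2] / (n - 1))
print(P[10], np.exp(-1))    # both are about 0.3679
```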
**e.g.** The Ballot Problem
In an election, candidate $A$ has received $n$ votes, and $B$, $m$ votes with $n>m$. Assuming that all orderings are equally likely, show that the probability that $A$ is always ahead in the count of votes is $\P{n-m}/\P{n+m}$.
>Let $P_{n,m}$ denote the desired probability then:
>
>$\begin{align}
P_{n,m} &= P\CB{A \text{ always ahead}\mid A \text{ receive last vote}} \cdot \ffrac{n} {n+m} \\
& \bspace + P\CB{A \text{ always ahead}\mid B \text{ receive last vote}} \cdot \ffrac{m} {n+m} \\
&= \ffrac{n} {n+m} \cdot P_{n-1,m} + \ffrac{m} {n+m} \cdot P_{n,m-1}
\end{align}$
>
>Then by induction, done! $P_{n,m} = \ffrac{n} {n+m} \cdot \ffrac{n-1-m} {n-1+m} + \ffrac{m} {n+m} \cdot \ffrac{n-m+1} {n+m-1} = \ffrac{n-m} {n+m}$
***
**e.g.**
Let $U_1,U_2,\dots$ be a sequence of independent uniform $\P{0,1}$ $r.v.$, and let $N = \min\CB{n \geq 2: U_n > U_{n-1}}$ and $M = \min\CB{n \geq 1: U_1 + U_2 + \dots + U_n > 1}$. Surprisingly, $N$ and $M$ have the same probability distribution, and their common mean is $e$! Prove it!
> For $N$, since all the possible orderings of $U_1,\dots,U_n$ are equally likely, we have:
>
>$\bspace P\CB{U_1 > U_2 > \cdots > U_n} = \ffrac{1} {n!} = P\CB{N > n}$.
>
>For $M$, we use induction to prove that $P\CB{M\P{x} > n} = x^n/n!$, where $M\P{x} = \min\CB{n\geq 1: U_1 + U_2 + \cdots + U_n > x}$, for $0 < x \leq 1$. When $n=1$,
> $\bspace P\CB{M\P{x} > 1} = P\CB{U_1 \leq x} = x$
>
>Then, assuming this holds for $n$, to determine $P\CB{M\P{x} > n+1}$ we condition on $U_1$ to obtain:
>
>$\bspace \begin{align}
P\CB{M\P{x} > n+1} &= \int_{0}^{1} P\CB{M\P{x} > n+1\mid U_1 = y}\;\dd{y} \\
& \bspace\text{since }y \text{ cannot exceed }x\text{ , we change the upper limit of integral} \\
&= \int_{0}^{x} P\CB{M\P{x} > n+1\mid U_1 = y}\;\dd{y} \\
&= \int_{0}^{x} P\CB{M\P{x-y} > n}\;\dd{y} \\
&\bspace \text{induction hypothesis}\\
&= \int_{0}^{x} \ffrac{\P{x-y}^n} {n!}\;\dd{y} \\
&\using{u=x-y} \int_{0}^{x} \ffrac{u^n} {n!} \;\dd{u} \\
&= \ffrac{x^{n+1}} {\P{n+1}!}
\end{align}$
>
>Thus $P\CB{M\P{x} > n} = x^n/n!$; letting $x=1$ we conclude that $P\CB{M > n} = 1/n!$, so that $N$ and $M$ have the same distribution. Finally, we have:
>
>$$\EE{M} = \EE{N} = \sum_{n=0}^{\infty} P\CB{N > n} = \sum_{n=0}^{\infty} 1/n! = e$$
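Both means are easy to confirm by simulation; the sketch below (with an arbitrary number of replications) is only a numerical illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sim = 100_000
N_vals, M_vals = [], []
for _ in range(n_sim):
    # N: first index n >= 2 with U_n > U_{n-1}
    prev, n = rng.random(), 2
    while True:
        u = rng.random()
        if u > prev:
            break
        prev, n = u, n + 1
    N_vals.append(n)
    # M: first index whose partial sum exceeds 1
    s, m = 0.0, 0
    while s <= 1.0:
        s += rng.random()
        m += 1
    M_vals.append(m)
print(np.mean(N_vals), np.mean(M_vals), np.e)   # all close to 2.71828
```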
**e.g.**
Let $X_1, X_2,\dots,X_n$ be independent continuous $r.v.$ with a common distribution function $F$ and density $f = F'$, suppose that they are to be observed one at a time in sequence. Let $N = \min\CB{n\geq 2: X_n = \text{second largest of }X_1,X_2,\dots,X_n}$ and $M = \min\CB{n\geq 2: X_n = \text{second smallest of }X_1,X_2,\dots,X_n}$. Which one tends to be larger?
> To find the distribution of $X_N$ it's natural to condition on the value of $N$. We first let $A_i = \CB{X_i \neq \text{second largest of }X_1,X_2,\dots,X_i}, i\geq 2$. Thus
>
> $$\newcommand{\void}{\left.\right.}P\CB{N=n} = P\P{A_2A_3\cdots A_{n-1}A_n^c} = \ffrac{1} {2}\ffrac{2} {3}\cdots\ffrac{n-2} {n-1} \ffrac{1} {n} = \ffrac{1} {n\P{n-1}}$$
>
>Here the $A_i$ are independent because the $X_i$ are $i.i.d.$ Then, conditioning on $N$, we obtain:
>
>$$\begin{align}
f_{X_{\void_N}}\P{x} &= \sum_{n=2}^{\infty} \ffrac{1} {n\P{n-1}} f_{X_{\void_N}\mid N} \P{x\mid n} \\
&= \sum_{n=2}^{\infty} \ffrac{1} {n\P{n-1}} \ffrac{n!} {1!\P{n-2}!} \P{F\P{x}}^{n-2}\cdot f\P{x} \cdot \P{1-F\P{x}} \\
&= f\P{x} \cdot \P{1-F\P{x}} \sum_{i=0}^{\infty} \P{F\P{x}}^i \\
&= f\P{x}
\end{align}$$
>
>We are almost there. Now consider the negatives: let $W_i = -X_i$; then $M = \min\CB{n\geq 2: W_n = \text{second largest of }W_1,W_2,\dots,W_n}$. So, by the same argument, $W_M$ has the same distribution as $W_1$, just as $X_N$ has the same distribution as $X_1$. Dropping the minus sign, $X_M$ has the same distribution as $X_1$. So $X_N$, $X_M$, and $X_1$ all have the same distribution.
$Remark$
>This is a special case for ***Ignatov's Theorem***, where second could be $k\texttt{th}$ largest/smallest. Still the distribution is $F$, for all $k$!
***
**(V)e.g.**
A population consists of $m$ families. Let $X_j$ denote the size of family $j$ and suppose that $X_1,X_2,\dots,X_m$ are independent $r.v.$ having the common pmf $p_k = P\CB{X_j = k}$, $\sum_{k=1}^{\infty}p_k = 1$, with mean $\mu = \sum\limits_k k\cdot p_k$. Suppose an individual is chosen at random from the population, and let $S_i$ be the event that the selected individual is from a family of size $i$. Prove that $P\P{S_i} \to \ffrac{ip_i} {\mu}$ as $m \to \infty$.
> This time we need to condition on a vector of $r.v.$ Let $N_i$ be the number of families that are of size $i$: $N_i = \#\CB{k \in \CB{1,2,\dots,m}: X_k = i}$; then condition on $\mathbf{X} = \P{X_1,X_2,\dots,X_m}$:
>
>$$P\P{S_i\mid \mathbf{X}} = \ffrac{iN_i} {\sum_{k=1}^{m} X_k}$$
>
>$$P\P{S_i} = \EE{P\P{S_i\mid \mathbf{X}}}
= \EE{\ffrac{iN_i} {\sum_{k=1}^{m} X_k}}
= \EE{\ffrac{iN_i/m} {\sum_{k=1}^{m} X_k/m}}$$
>
>$\bspace$By the **strong law of large numbers**, $N_i/m$ converges to $p_i$ as $m \to \infty$, and $\sum_{k=1}^{m} X_k/m \to \EE{X} = \mu$. Thus $P\P{S_i} \to \EE{\ffrac{ip_i} {\mu}} = \ffrac{ip_i} {\mu}$ as $m \to \infty$.
***
**e.g.**
Consider $n$ independent trials in which each trial results in one of the outcomes $1,2,\dots,k$ with respective probabilities $p_i$ and $\sum p_i = 1$. Suppose further that $n > k$, and that we are interested in determining the probability that each outcome occurs at least once. If we let $A_i$ denote the event that outcome $i$ does not occur in any of the $n$ trials, then the desired probability is $1 - P\P{\bigcup_{i=1}^{k} A_i}$ and by the inclusion-exclusion theorem, we have
$$\begin{align}
P\P{\bigcup_{i=1}^{k} A_i} =& \sum_{i=1}^{k} P\P{A_i} - \sum_i\sum_{j>i} P\P{A_iA_j} \\
&\bspace + \sum_i\sum_{j>i}\sum_{k>j} P\P{A_iA_jA_k} - \cdots + \P{-1}^{k+1} P\P{A_1 \cdots A_k}
\end{align}$$
where
$$\begin{align}
P\P{A_i} &= \P{1-p_i}^{n} \\
P\P{A_iA_j} &= \P{1-p_i - p_j}^{n}, \bspace i < j\\
P\P{A_iA_jA_k} &= \P{1-p_i - p_j - p_k}^{n}, \bspace i < j < k\\
\end{align}$$
How to solve this by conditioning on whatever something?
> Note that if we start by conditioning on $N_k$, the number of times that outcome $k$ occurs, then when $N_k>0$ the resulting conditional probability will equal the probability that all of the outcomes $1,2,\dots,k-1$ occur at least once when $n-N_k$ trials are performed, and each results in outcome $i$ with probability $p_i/\sum_{j=1}^{k-1} p_j$, for $i = 1, 2,\dots,k-1$. Then we could use a similar conditioning step on these terms.
>
>Following this idea, we let $A_{m,r}$, for $m\leq n, r\leq k$, denote the event that each of the outcomes $1,2,\dots,r$ occurs at least once when $m$ independent trials are performed, where each trial results in one of the outcomes $1,2,\dots,r$ with respective probabilities $p_1/P_r, \dots,p_r/P_r$, where $P_r = \sum_{j=1}^{r}p_j$. Then let $P\P{m,r} = P\P{A_{m,r}}$ and note that $P\P{n,k}$ is the desired probability. To obtain an expression for $P\P{m,r}$, condition on the number of times that outcome $r$ occurs. This gives rise to:
>
>$\bspace\begin{align}
P_{m,r} &= \sum_{j=0}^{m} P\CB{A_{m,r} \mid r \text{ occurs }j \text{ times}} \binom{m}{j} \P{\ffrac{p_r} {P_r}}^j \P{1 - \ffrac{p_r} {P_r}}^{m-j} \\
&= \sum_{j=1}^{m-r+1} P_{m-j,r-1}\binom{m}{j} \P{\ffrac{p_r} {P_r}}^j \P{1 - \ffrac{p_r} {P_r}}^{m-j}
\end{align}$
>
>Starting with $P\P{m,1} = 1$ for $m \geq 1$ and $P\P{0,1} = 0$.
>
>And this is how the recursion works. We first find $P\P{m,2}$ for $m = 2,3,\dots,n-\P{k-2}$, then $P\P{m,3}$ for $m = 3,4,\dots,n-\P{k-3}$, and so on, up to $P\P{m,k-1}$ for $m = k-1,k,\dots,n-1$. Then we use the recursion once more to compute $P\P{n,k}$.
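The recursion is straightforward to implement; in the sketch below the function name and the example (all six faces of a fair die appearing in 15 rolls) are merely illustrative:

```python
from math import comb
from functools import lru_cache

def prob_all_outcomes(n, p):
    """Probability that each of the k outcomes occurs at least once in n trials."""
    k = len(p)
    Pcum = [sum(p[:r]) for r in range(k + 1)]          # P_r = p_1 + ... + p_r

    @lru_cache(maxsize=None)
    def P(m, r):
        if r == 1:
            return 1.0 if m >= 1 else 0.0
        pr = p[r - 1] / Pcum[r]                        # conditional prob. of outcome r
        return sum(P(m - j, r - 1) * comb(m, j) * pr**j * (1 - pr)**(m - j)
                   for j in range(1, m - r + 2))

    return P(n, k)

# e.g. probability that all six faces of a fair die show up in 15 rolls
print(prob_all_outcomes(15, [1/6] * 6))
```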
***
Now we extend our conclusion of how to calculate certain expectations using the conditioning fashion. Here's another formula for this:
$\bspace\EE{X \mid Y = y} = \begin{cases}
\d{\sum_{w} \EE{X \mid W = w, Y = y} \cdot P\CB{W = w \mid Y = y}}, & \text{if } W \text{ discrete} \\
\d{\int_{w} \EE{X \mid W = w, Y = y} \cdot f_{W \mid Y}\P{w \mid y} \;\dd{w}}, & \text{if } W \text{ continuous}
\end{cases}$
and we write this as $\EE{X \mid Y} = \EE{\EE{X \mid Y,W}\mid Y}$
**e.g.**
An automobile insurance company classifies its policyholders as one of the types $1, 2, \dots, k$. It supposes that the numbers of accidents that a type $i$ policyholder has in successive years are independent Poisson random variables with mean $\lambda_i$. For a new policyholder, the probability of being type $i$ is $p_i$.
Given that a policyholder had $n$ accidents in her first year, what is the expected number that she has in her second year? What is the conditional probability that she has $m$ accidents in her second year?
>Let $N_i$ denote the number of accidents the policyholder has in year $i$. To obtain $\EE{N_2 \mid N_1 = n}$, condition on her risk type $T$.
>$$\begin{align}
\EE{N_2 \mid N_1 = n} &= \sum_{j=1}^{k} \EE{N_2\mid T = j, N_1 = n} \cdot P\CB{T = j \mid N_1 = n} \\
&= \sum_{j=1}^{k} \EE{N_2\mid T = j} \cdot \ffrac{P\CB{T = j, N_1 = n}} {P\CB{N_1 = n}} \\
&= \ffrac{\sum\limits_{j=1}^{k} \lambda_j \cdot P\CB{N_1 = n \mid T = j} \cdot P\CB{T = j}} {\sum\limits_{j=1}^{k} P \CB{ N_1 = n \mid T = j}\cdot P\CB{T = j}} \\
&= \ffrac{\sum\limits_{j=1}^{k} e^{-\lambda_j} \lambda_j^{n+1}p_j} {\sum\limits_{j=1}^{k} e^{-\lambda_j} \lambda_j^{n} p_j}
\end{align}$$
>
>$$\begin{align}
P\CB{N_2 = m \mid N_1 = n} &= \sum_{j=1}^{k} P\CB{N_2 = m \mid T = j, N_1 = n} \cdot P\CB{T = j \mid N_1 = n} \\
&= \sum_{j=1}^{k} e^{-\lambda_j} \ffrac{\lambda_j^m} {m!} \cdot P\CB{T = j \mid N_1 = n} \\
&= \ffrac{\sum\limits_{j=1}^{k} e^{-2\lambda_j} \lambda_j^{m+n} p_j} {m! \sum\limits_{j=1}^{k} e^{-\lambda_j} \lambda_j^n p_j}
\end{align}$$
$Remark$
$\bspace P\P{A \mid BC} = \ffrac{P\P{AB \mid C}} {P\P{B \mid C}}$
***
## Some Applications
See the extended version later if possible.😥
## An Identity for Compound Random Variables
Let $X_1,X_2,\dots$ be a sequence of $i.i.d.$ $r.v.$ and let $S_n = \sum_{i=1}^{n} X_i$ be the sum of the first $n$ of them, $n \geq 0$, where $S_0 = 0$. Then the **compound random variable** is defined as $S_N = \sum\limits_{i=1}^{N} X_i$, with the distribution of $N$ called the **compounding distribution**.
To study $S_N$, first define $M$ as a $r.v.$ that is independent of the sequence $X_1,X_2,\dots$, and which is such that $P\CB{M = n} = \ffrac{nP\CB{N = n}} {\EE{N}}$.
$Proposition.5$ The Compound $r.v.$ Identity
$\bspace$For any function $h$, $\EE{S_N h\P{S_N}} = \EE{N} \cdot \EE{X_1 h\P{S_M}}$
$Proof$
$$\begin{align}
\EE{S_N h\P{S_N}} &= \EE{\sum_{i=1}^{N} X_i h\P{S_N}} \\
&= \sum_{n=0}^{\infty} \EE{\sum_{i=1}^{N} X_i h\P{S_N}\mid N = n} \cdot P\CB{N = n} \\
&= \sum_{n=0}^{\infty} \EE{\sum_{i=1}^{n} X_i h\P{S_n}\mid N = n} \cdot P\CB{N = n} \\
& \bspace\text{by the independence of }N \text{ and }X_1,X_2,\dots \\
&= \sum_{n=0}^{\infty} \EE{\sum_{i=1}^{n} X_i h\P{S_n}} \cdot P\CB{N = n} \\
&= \sum_{n=0}^{\infty} \sum_{i=1}^{n} \EE{X_i h\P{S_n}} \cdot P\CB{N = n} \\
& \bspace\text{since by symmetry the terms } \EE{X_ih\P{X_1 + X_2 + \cdots + X_n}} \text{ are the same for all } i \\
&= \sum_{n=0}^{\infty} n \EE{X_1 h\P{S_n}} \cdot P\CB{N = n} \\
&= \EE{N} \sum_{n=0}^{\infty} \EE{X_1 h\P{S_n}} \cdot P\CB{M = n} \\
&= \EE{N} \sum_{n=0}^{\infty} \EE{X_1 h\P{S_n} \mid M = n} \cdot P\CB{M = n} \\
& \bspace\text{independence of }M \text{ and }X_1,X_2,\dots, X_n \\
&= \EE{N} \sum_{n=0}^{\infty} \EE{X_1 h\P{S_M} \mid M=n} \cdot P\CB{M = n} \\
&= \EE{N} \EE{X_1 h\P{S_M}}
\end{align}$$
$Corollary.6$
Suppose $X_i$ are positive integer valued $r.v.$, and let $\alpha_j = P\CB{X_1 = j}$, for $j > 0$. Then:
$\bspace\begin{align}
P\CB{S_N = 0} &= P\CB{N = 0} \\
P\CB{S_N = k} &= \ffrac{1}{k} \EE{N} \sum_{j=1}^{k} j \alpha_j P\CB{S_{M-1} = k-j}, k > 0
\end{align}$
$Proof$
>For $k$ fixed, let
>
>$\bspace h\P{x} = \begin{cases}
1, & \text{if } x = k \\
0, & \text{if } x \neq k
\end{cases}$
>
>and then $S_N h\P{S_N}$ is either equal to $k$ if $S_N = k$ or is equal to $0$ otherwise. Therefore,
>
>$$\EE{S_Nh\P{S_N}} = k P\CB{S_N = k}$$
>
>and the compound identity yields:
>
>$$\begin{align}
kP\CB{S_N = k} &= \EE{N} \EE{X_1 h\P{S_M}} \\
&= \EE{N} \sum_{j=1}^{\infty} \EE{X_1 h\P{S_M} \mid X_1 = j} \alpha_j \\
&= \EE{N} \sum_{j=1}^{\infty} j \EE{h\P{S_M} \mid X_1 = j} \alpha_j \\
&= \EE{N} \sum_{j=1}^{\infty} j P\CB{S_M = k \mid X_1 = j} \alpha_j \\
\end{align}$$
>And now,
>
>$$\begin{align}
P\CB{S_M = k \mid X_1 = j} &= P\CB{\sum_{i=1}^{M} X_i = k \mid X_1 = j} \\
&= P\CB{j + \sum_{i=2}^{M} X_i = k \mid X_1 = j} \\
&= P\CB{j + \sum_{i=1}^{M-1} X_i = k} \\
&= P\CB{S_{M-1} = k-j}
\end{align}$$
$Remark$
That's almost the end. Later we will show how to use this relationship to solve the problem using recursion.
### Poisson Compounding Distribution
Using $Proposition.5$, if $N$ is the Poisson distribution with mean $\lambda$, then
$\bspace\begin{align}
P\CB{M-1 = n} &= P\CB{M = n+1} \\
&= \ffrac{\P{n+1}P\CB{N = n+1}} {\EE{N}}\\
&= \ffrac{1} {\lambda} \P{n+1} e^{-\lambda} \ffrac{\lambda^{n+1}} {\P{n+1}!} \\
&= e^{-\lambda} \ffrac{\lambda^n} {n!}
\end{align}$
Thus with $P_n = P\CB{S_N = n}$, the recursion given by $Corollary.6$ can be written as
$\bspace\begin{align}
P_0 = P\CB{S_N = 0} &= P\CB{N = 0} = e^{-\lambda} \\
P_k = P\CB{S_N = k} &= \ffrac{1}{k} \EE{N} \sum_{j=1}^{k} j \alpha_j P\CB{S_{M-1} = k-j} = \ffrac{\lambda} {k} \sum_{j=1}^{k} j \alpha_j P_{k-j}, \bspace k > 0
\end{align}$
$Remark$
When the $X_i$ are identically equal to $1$ (so that $S_N = N$), the preceding expressions reduce to the pmf of a **Poisson** $r.v.$
$\bspace\begin{align}
P_0 &= e^{-\lambda} \\
P_n &= \ffrac{\lambda} {n} P\CB{N = n-1}, \bspace n > 0
\end{align}$
***
**e.g.**
Let $S$ be a compound **Poisson** $r.v.$ with $\lambda = 4$ and $\alpha_j = P\CB{X_i = j} = 0.25$ for $j=1,2,3,4$. Find $P_k = P\CB{S = k}$ for $k = 0,1,\dots,5$.
>$\bspace\begin{align}
P_0 &= e^{-\lambda} = e^{-4} \\
P_1 &= \lambda\alpha_1 P_0 = e^{-4} \\
P_2 &= \ffrac{\lambda} {2} \P{\alpha_1 P_1 + 2 \alpha_2 P_0} = \ffrac{3} {2} e^{-4} \\
P_3 &= \ffrac{\lambda} {3} \P{\alpha_1 P_2 + 2 \alpha_2 P_1 + 3 \alpha_3 P_0} = \ffrac{13} {6} e^{-4} \\
P_4 &= \ffrac{\lambda} {4} \P{\alpha_1 P_3 + 2 \alpha_2 P_2 + 3 \alpha_3 P_1 + 4 \alpha_4 P_0} = \ffrac{73} {24} e^{-4} \\
P_5 &= \ffrac{\lambda} {5} \P{\alpha_1 P_4 + 2 \alpha_2 P_3 + 3 \alpha_3 P_2 + 4 \alpha_4 P_1 + 5 \alpha_5 P_0} = \ffrac{381} {120} e^{-4} \\
\end{align}$
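The recursion is easily automated, which also double-checks the hand computation above (the printed ratios should be $1, 1, 3/2, 13/6, 73/24, 381/120$); the snippet is only an illustrative sketch:

```python
import numpy as np

lam = 4.0
alpha = {1: 0.25, 2: 0.25, 3: 0.25, 4: 0.25}
P = [np.exp(-lam)]                                       # P_0 = e^{-lambda}
for k in range(1, 6):
    P.append(lam / k * sum(j * a * P[k - j] for j, a in alpha.items() if j <= k))
print([p / np.exp(-lam) for p in P])                     # [1, 1, 1.5, 2.1667, 3.0417, 3.175]
```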
### Binomial Compounding Distribution
Suppose $N$ is a binomial $r.v.$ with parameters $r$ and $p$; then
$\bspace\begin{align}
P\CB{M-1 = n} &= \ffrac{\P{n+1}P\CB{N = n+1}} {\EE{N}}\\
&= \ffrac{n+1} {rp} \binom{r} {n+1} p^{n+1} \P{1-p}^{r-n-1} \\
&= \ffrac{\P{r-1}!} {\P{r-1-n}!n!} p^n \P{1-p}^{r-1-n}
\end{align}$
Thus $M-1$ is also a binomial $r.v.$ with parameters $r-1$, $p$.
***
The missing part of this chapter might be included in future bonus content... I need to carry on to the next chapter, the Markov Chain.
## Define a nutrient profile that can be systematically manipulated
I want to build a profile that can be used to explore the parameter space of the non-dimensional number $\tau_v=-Z\delta_v^2C/\delta_vC$. An arctangent could be a good candidate.
```python
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import seaborn as sns
import sympy as sym
sym.init_printing() # enable fancy printing
```
```python
# Set appearance options seaborn
sns.set_style('white')
sns.set_context('notebook')
```
```python
# Constants and scales from canyon bathy
#L = 6400.0 # canyon length
#R = 5000.0 # Upstream radius of curvature
#g = 9.81 # accel. gravity
#Wsb = 13000 # Width at shelf break
#Hs = 147.5 # Shelf break depth
#Hh = 97.5 #
#Hr = 130.0 # rim depth at dn station
# NOTE: The default values of all functions correspond to the base case
```
```python
a,b,c,z,Csb,Hs = sym.symbols('a,b,c,z,Csb,Hs')
func = -1.5*sym.atan(a*(z+Hs)) + Csb
ndfunc = func/Csb
```
```python
func
```
```python
ndfunc
```
```python
func.diff(z), (func.diff(z)).diff(z)
```
```python
ndfunc.diff(z), ((ndfunc.diff(z)).diff(z))
```
```python
func = func.subs({Hs:147.5,Csb:32.6})
hand =sym.plot(func.subs(a,0.1),
func.subs(a,0.3),
func.subs(a,0.6),
func.subs(a,0.8),(z, -300, 0),
xlabel='Depth (m)',
ylabel='Concentration',
title='%s' %func,
show=False)
hand[1].line_color='r'
hand[2].line_color='g'
hand[3].line_color='purple'
hand.show()
```
```python
func = func.subs({Hs:147.5,Csb:32.6})
hand =sym.plot(func.subs(a,0.1),
func.subs(a,0.3),
func.subs(a,0.6),
func.subs(a,0.8),(z, -1200, 0),
xlabel='Depth (m)',
ylabel='Concentration',
title='%s' %func,
show=False)
hand[1].line_color='r'
hand[2].line_color='g'
hand[3].line_color='purple'
hand.show()
```
### Non-dim by C(Hs) = 32.6 $\mu$M
```python
ndfunc = ndfunc.subs({Hs:147.5,Csb:32.6})
hand =sym.plot(ndfunc.subs(a,0.1),
ndfunc.subs(a,0.3),
ndfunc.subs(a,0.6),
ndfunc.subs(a,0.8),(z, -300, 0),
xlabel='Depth (m)',
ylabel='Concentration',
title='%s' %ndfunc,
show=False)
hand[1].line_color='r'
hand[2].line_color='g'
hand[3].line_color='purple'
hand.show()
```
```python
hand =sym.plot((func.subs(a,0.1)).diff(z),
(func.subs(a,0.3)).diff(z),
(func.subs(a,0.6)).diff(z),
(func.subs(a,0.8)).diff(z),(z, -300, 0),
xlabel='Depth (m)',
ylabel='$dC/dz$',
title='%s' %func.diff(z),
show=False)
hand[1].line_color='r'
hand[2].line_color='g'
hand[3].line_color='purple'
hand.show()
```
```python
hand =sym.plot(((func.subs(a,0.1)).diff(z)).diff(z),
((func.subs(a,0.3)).diff(z)).diff(z),
((func.subs(a,0.6)).diff(z)).diff(z),
((func.subs(a,0.8)).diff(z)).diff(z),(z, -300, 0),
xlabel='Depth (m)',
ylabel='$d^2C/dz^2$',
title='%s' %((func).diff(z)).diff(z),
show=False)
hand[1].line_color='r'
hand[2].line_color='g'
hand[3].line_color='purple'
hand.show()
```
```python
Z = 50 # m approx value of upwelling depth
hand = sym.plot((-Z*(func.subs(a,0.1).diff(z)).diff(z))/func.subs(a,0.1).diff(z),
(-Z*(func.subs(a,0.3).diff(z)).diff(z))/func.subs(a,0.1).diff(z),
(-Z*(func.subs(a,0.6).diff(z)).diff(z))/func.subs(a,0.1).diff(z),
(-Z*(func.subs(a,0.8).diff(z)).diff(z))/func.subs(a,0.1).diff(z),
(z, -300, 0),
xlabel='Depth (m)',
ylabel='$Z(d^2C/dz^2)/(dC/dz)$',
title=r'$\tau_v$',
show = False)
hand[1].line_color='r'
hand[2].line_color='g'
hand[3].line_color='purple'
hand.show()
```
* I don't know if I like that the second derivative is zero at the shelf break, because then the second derivative, and thus $\tau_v$, is always zero there.
* It is fairly easy to change $dC/dz$ at the shelf break by changing the factor multiplying $(z+Hs)$: for example, going from 0.3 to 0.8 takes the value of the first derivative at the shelf break (its maximum) from -0.045 to -0.09. The function also gets smoother over a larger depth range, and that can be a problem.
```python
```
# Technique Guide
The pyndynpd package is able to estimate dynamic panel models of the following form:
$$y_{it}=\sum_{j=1}^{p}\alpha_{j}y_{i,t-j}+\sum_{k=1}^{m}\sum_{j=0}^{q_{k}}\beta_{jk}r_{i,t-j}^{(k)}+\boldsymbol{\delta}\boldsymbol{d_{i,t}}+\boldsymbol{\gamma}\boldsymbol{s_{i,t}}+u_{i}+\epsilon_{it} $$
In the model above, $y_{i,t-j}$ ($j=1,2,\ldots,p$) denotes a group of $p$ lagged dependent variables. $r_{i,t-j}^{(k)}$ represents a group of $m$ endogenous variables other than lagged $y$. $\boldsymbol{d_{it}}$ is a vector of predetermined variables which may potentially correlate with past errors, $\boldsymbol{s_{it}}$ is a vector of exogenous variables, and $u_{i}$ represents the fixed effect. For illustration purposes, let's consider a basic form of the dynamic panel model:
$$
\begin{align}
y_{it}=\alpha_{1}y_{i,t-1}+\delta d_{i,t}+u_{i}+\epsilon_{it} \label{basic}\tag{1}
\end{align}
$$
As the lagged dependent variable $y_{i,t-1}$ is included as a regressor, the popular techniques for static panel models, such as the fixed-effect and first-difference estimators, no longer produce consistent results. Researchers have developed many methods to estimate dynamic panel models. Essentially there are two types of GMM estimators: difference GMM and system GMM.
## Difference GMM
The first step in the process is to eliminate the fixed-effect term $u_{i}$. First differencing Eq ($\ref{basic}$) yields:
$$
\begin{align}
\Delta y_{it}=\alpha_{1}\Delta y_{i,t-1}+\delta\Delta d_{i,t}+\Delta\epsilon_{it}\label{fd}\tag{2}
\end{align}$$
In the model above, $\Delta y_{i,t-1}$ correlates with $\Delta\epsilon_{i,t}$ because $\Delta y_{i,t-1}=y_{i,t-1}-y_{i,t-2}$, $\Delta\epsilon_{i,t}=\epsilon_{i,t}-\epsilon_{i,t-1}$, and $y_{i,t-1}$ is affected by $\epsilon_{i,t-1}$. As a result, estimating Eq ($\ref{fd}$) directly produces inconsistent results. Instrumental variables are used to solve the issue. Arellano and Bond (1991) suggest using all lagged $y$ dated $t-2$ and earlier (i.e., $y_{i,1}$, $y_{i,2}$, $\ldots$, $y_{i,t-2}$) as instruments for $\Delta y_{i,t-1}$. Similarly, the instruments for the predetermined variable $\Delta d_{it}$ include $d_{i,1}$, $d_{i,2}$, $\ldots$, $d_{i,t-1}$. Let $z_{i}$ be the instrument matrix for individual $i$:
$$
z_{i}=\left[\begin{array}{ccccccccccccccccccc}
y_{i1} & 0 & 0 & \ldots & \ldots & 0 & \ldots & 0 & d_{i1} & d_{i2} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \ldots & 0\\
0 & y_{i1} & y_{i2} & 0 & 0 & 0 & \ldots & 0 & 0 & 0 & d_{i1} & d_{i2} & d_{i3} & 0 & 0 & 0 & 0 & \ldots & 0\\
\vdots & 0 & 0 & \vdots & & & \ldots & & & & & & & \vdots & & & & \ldots & 0\\
0 & 0 & 0 & 0 & 0 & 0 & \ldots & y_{i,T-2} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \ldots & d_{i,T-1}
\end{array}\right]$$
Difference GMM is based on the moment condition $E(z_{i}^{\prime}\Delta\epsilon_{i})=0$ where $\Delta \epsilon_{i}=(\Delta \epsilon_{i2}, \Delta\epsilon_{i3}, \ldots, \Delta\epsilon_{iT})^{\prime}$ and $z_{i}$ is the instrument matrix. Applying this moment condition to sample data, we have $(1/N)\sum_{i=1}^{N}z_{i}^{\prime}(\Delta y_{i}-\theta\Delta x_{i})=0$ where $\theta=(\alpha_{1},\delta)'$ and $\Delta x_{i}=(\Delta y_{i,t-1},\Delta d_{it})$ for $t=3,\ldots,T$. When the number of instruments is greater than the number of independent variables, the moment condition is overidentified and in general no $\theta$ satisfies it exactly. Instead, we look for the $\theta$ that minimizes a weighted quadratic form of the sample moments. That is:
$$\hat{\theta}_{gmm}=\arg\min_{\theta}\left(\frac{1}{N}\sum_{i=1}^{N}(\Delta y_{i}-\theta\Delta x_{i})^{\prime}z_{i}\right)W\left(\frac{1}{N}\sum_{i=1}^{N}z^{\prime}{}_{i}(\Delta y_{i}-\theta\Delta x_{i})\right)$$
where $W$ is the weighting matrix of the moments. There are two commonly used weighting matrices. In a one-step GMM estimate, the weighting matrix is
$$W_{1}=\left(\frac{1}{N}Z^{\prime}H_{1}Z\right)^{-1}$$
where the matrix $H_{1}$ has twos on the main diagonal, minus ones on the first sub- and superdiagonals, and zeros elsewhere:
$$H_{1}=\left[\begin{array}{cccccc}
2 & -1 & 0 & 0 & \ldots & 0\\
-1 & 2 & -1 & 0 & \ldots & 0\\
0 & \ddots & \ddots & \ddots & \ddots & \vdots\\
\vdots & \ddots & -1 & 2 & -1 & 0\\
0 & \ddots & 0 & -1 & 2 & -1\\
0 & \ldots & 0 & 0 & -1 & 2
\end{array}\right]$$
On the other hand, in a two-step GMM estimate, the weighting matrix is
$$W_{2}=\left(\frac{1}{N}Z^{\prime}H_{2}Z\right)^{-1}$$
where $H_{2}=\Delta\hat{\epsilon}\Delta\hat{\epsilon}^{\prime}$ and $\Delta\hat{\epsilon}$ is the residual from one-step GMM.
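As a concrete illustration (a sketch, not pydynpd's internal code), the one-step matrix $H_{1}$ and the implied weighting matrix $W_{1}$ could be built with NumPy as follows; `Z_list` is assumed to hold each individual's instrument matrix $z_{i}$.
```python
import numpy as np

def make_H1(T):
    """Tridiagonal matrix: 2 on the main diagonal, -1 on the first off-diagonals."""
    return 2.0 * np.eye(T) - np.eye(T, k=1) - np.eye(T, k=-1)

def one_step_weighting(Z_list):
    """W1 = ((1/N) * sum_i z_i' H1 z_i)^{-1}, assuming every z_i has the same number of rows."""
    N = len(Z_list)
    T = Z_list[0].shape[0]              # rows of z_i = number of differenced periods
    H1 = make_H1(T)
    A = sum(z.T @ H1 @ z for z in Z_list) / N
    return np.linalg.inv(A)
```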
## System GMM
Compared with difference GMM, system GMM adds additional moment conditions, resulting in more instruments:
$$
\begin{align}
z_{i}=\left[\begin{array}{cccccccccccccccc|cccccccc}
y_{i1} & 0 & 0 & 0 & 0 & 0 & \ldots & 0 & d_{i1} & d_{i2} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \ldots & 0 & 0 & 0 & 0 & \ldots & 0\\
0 & y_{i1} & y_{i2} & 0 & 0 & 0 & \ldots & 0 & 0 & 0 & d_{i1} & d_{i2} & d_{i3} & 0 & 0 & 0 & 0 & \ldots & 0 & 0 & 0 & 0 & \ldots & 0\\
& & & \vdots & & & \ldots & & \vdots & & & & & \vdots & \ldots & \vdots & & \ldots & 0 & & & & \ddots & 0\\
0 & 0 & 0 & 0 & 0 & 0 & \ldots & y_{i,T-2} & 0 & 0 & 0 & 0 & 0 & 0 & \ldots & d_{i,T-1} & 0 & \ldots & 0 & 0 & 0 & 0 & 0 & 0\\
\hline 0 & \ldots & 0 & & & & & & & & & & & & 0 & 0 & \Delta y_{i2} & \ldots & 0 & 0 & \Delta d_{i3} & & 0\\
\vdots & & & & & & & & & & & & & & & 0 & 0 & \Delta y_{i3} & \ldots & 0 & 0 & \Delta d_{i4} & & 0\\
\vdots & & & & & & & & & & & & & & & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & & \ddots\\
0 & \ldots & & & & & & & & & & & & & \ldots & 0 & 0 & 0 & \ldots & \Delta y_{i,T-1} & 0 & 0 & \ldots & \Delta y_{i,T}
\end{array}\right] \label{z_sys}\tag{3}
\end{align}$$
$$\hat{\theta}_{gmm}=\arg\min_{\theta}\left(\frac{1}{N}\sum_{i=1}^{N}(\widetilde{y}_{i}-\theta\widetilde{x_{i}})^{\prime}z_{i}\right)W\left(\frac{1}{N}\sum_{i=1}^{N}z^{\prime}{}_{i}(\widetilde{y}-\theta\widetilde{x_{i}})\right)$$
where $$\widetilde{y}=\left(\begin{array}{c}
\Delta y_{i}\\
\hline y_{i}
\end{array}\right)\textrm{ and }\widetilde{x_{i}}=\left(\begin{array}{c|c}
\Delta x_{i} & 0\\
\hline x_{i} & 1
\end{array}\right)$$
## Robust estimation of coefficients' covariance
pydynpd reports robust standard errors for the one-step and two-step estimators. For details, please refer to [Windmeijer 2005](https://doi.org/10.1016/j.jeconom.2004.02.005).
## Specification Test
### Error serial correlation test
Second-order serial correlation test: if $\epsilon_{it}$ in Eq ($\ref{basic}$) is serially correlated, GMM estimates are no longer consistent. In a first-differenced model (e.g., Eq ($\ref{fd}$)), to test whether $\epsilon_{i,t-1}$ is correlated with $\epsilon_{i,t-2}$, the second-order autocovariance of the residuals, $\textrm{AR(2)}$, is calculated as:
$$AR(2)=\frac{b_{0}}{\sqrt{b_{1}+b_{2}+b_{3}}}\textrm{ where}$$
$$b_{0}=\sum_{i=1}^{N}\Delta\hat{\hat{\epsilon}}_{i}^{\prime}L_{\Delta\hat{\hat{\epsilon}}}^{2}$$
$$b_{1}=\sum_{i=1}^{N}L_{\Delta\hat{\hat{\epsilon}}_{i}^{\prime}}^{2}H_{2}L_{\Delta\hat{\hat{\epsilon}}_{i}}^{2}$$
$$b_{2}=\textrm{-}2\left(\sum_{i=1}^{N}L_{\Delta\hat{\hat{\epsilon}}_{i}^{\prime}}^{2}x_{i}\right)\left[\left(\sum_{i=1}^{N}x_{i}^{\prime}z_{i}\right)W_{2}\left(\sum_{i=1}^{N}z_{i}^{\prime}x_{i}\right)\right]^{-1}\left(\sum_{i=1}^{N}x_{i}^{\prime}z_{i}\right)W_{2}\left(\sum_{i=1}^{N}z_{i}^{\prime}H_{2}L_{\Delta\hat{\hat{\epsilon}}_{i}}^{2}\right)$$
$$b_{3}=\left(\sum_{i=1}^{N}L_{\Delta\hat{\hat{\epsilon}}_{i}^{\prime}}^{2}x_{i}\right)\hat{V}_{\hat{\hat{\theta}}}\left(\sum_{i=1}^{N}x_{i}^{\prime}L_{\Delta\hat{\hat{\epsilon}}_{i}}^{2}\right)$$
### Hansen overidentification test
The Hansen overidentification test is used to check whether the instruments are exogenous. Under the null hypothesis that the instruments are valid, the test statistic $S$ should be close to zero:
$$S=\left(\sum_{i=1}^{N}\Delta\hat{\hat{\epsilon}}_{i}^{\prime}z_{i}\right)W_{2}\left(\sum_{i=1}^{N}z_{i}^{\prime}\Delta\hat{\hat{\epsilon}}_{i}\right)$$
# Handling instrument proliferation issue
Difference GMM and system GMM may generate too many instruments, which causes several problems (citation). Package pydynpd allows users to reduce the number of instruments in two ways. First, users can control the number of lags used as instruments in the command string. For example, $\textrm{gmm(w, 2:3)}$ states that only $w_{i,t-2}$ and $w_{i,t-3}$ are used as instruments, rather than all lags of $w$ dated $t-2$ and earlier. Second, users can choose to collapse the instrumental variable matrix. For example, if collapsed, the matrix in Eq ($\ref{z_sys}$) is changed to:
$$z_{i}=\left[\begin{array}{cccccccccc|cc}
y_{i1} & 0 & 0 & \ldots & 0 & d_{i1} & d_{i2} & 0 & \ldots & 0\\
y_{i1} & y_{i2} & 0 & \ldots & 0 & d_{i1} & d_{i2} & d_{i3} & \ldots & 0\\
\vdots & \vdots & \ddots & \ldots & \vdots & & & & \ddots & \vdots\\
y_{i1} & y_{i2} & y_{i3} & \ldots & y_{i,T-2} & d_{i1} & d_{i2} & d_{id} & \ldots & d_{i,T-1}\\
\hline 0 & & & \ldots & 0 & 0 & 0 & 0 & \ldots & 0 & \Delta y_{i2} & \Delta d_{i3}\\
0 & 0 & & & 0 & 0 & 0 & 0 & \ldots & 0 & \Delta y_{i3} & \Delta d_{i4}\\
\vdots & & \ddots & & \vdots & & & \vdots & & \vdots & \vdots & \vdots\\
0 & 0 & 0 & \ldots & 0 & 0 & & 0 & \ldots & 0 & \Delta y_{i,T-1} & \Delta d_{iT}
\end{array}\right]$$
This change dramatically reduces the number of instruments. Intuitively, the number of instruments is positively associated with the width of the matrix above.
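For example, both options can be combined in the command string passed to pydynpd. The snippet below is a sketch based on the package's documented interface; the data file and the variable names (`id`, `year`, `n`, `w`, `k`) are placeholders for your own panel.
```python
import pandas as pd
from pydynpd import regression

# Assumed panel data with columns: id, year, n, w, k
df = pd.read_csv("data.csv")

# gmm(n, 2:3): use only lags 2-3 of n as instruments;
# "collapse": collapse the instrument matrix as illustrated above
command_str = "n L(1).n w k | gmm(n, 2:3) gmm(w, 1:2) iv(k) | collapse"
mydpd = regression.abond(command_str, df, ["id", "year"])
```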
```python
```
```python
```
```python
```
| a4922bcbdfa32e1b69e8c8171500dab3a974aea8 | 12,141 | ipynb | Jupyter Notebook | vignettes/Guide.ipynb | dazhwu/pydynpd | 910563c28000e6200c11beddd6aa80b9602c5315 | [
"MIT"
] | 3 | 2022-03-15T16:51:02.000Z | 2022-03-27T15:22:44.000Z | vignettes/Guide.ipynb | dazhwu/pydynpd | 910563c28000e6200c11beddd6aa80b9602c5315 | [
"MIT"
] | null | null | null | vignettes/Guide.ipynb | dazhwu/pydynpd | 910563c28000e6200c11beddd6aa80b9602c5315 | [
"MIT"
] | null | null | null | 64.579787 | 791 | 0.540647 | true | 3,851 | Qwen/Qwen-72B | 1. YES
2. YES
| 0.817574 | 0.709019 | 0.579676 | __label__eng_Latn | 0.730242 | 0.185111 |
#Linear Kalman Filter With Control Input Example
###Introduction
This notebook is designed to demonstrate how to use the StateSpace.jl package to execute the Kalman filter for a linear State Space model with control input. The example used here closely follows the one given on "Greg Czerniak's Website", namely the cannonball example on [this page.](http://greg.czerniak.info/guides/kalman1/)
For those of you that do not need/want the explanation of the model and the code, you can skip right to the end of this notebook where the entire section of code required to run this example is given.
###The Problem
The problem considered here is that of firing a ball from a cannon at a given elevation angle and muzzle velocity. We will assume that measurements of the ball's position are recorded with a camera at a given (constant) interval. The camera has a significant error in its measurement. We also measure the ball's velocity with relatively precise detectors inside the ball.
#####Process Model
The kinematic equations for the system are:
$$
\begin{align}
x(t) &= x_0 + V_0^x t, \\
V^x(t) &= V_0^x, \\
y(t) &= y_0 + V_0^y t - \frac{1}{2}gt^2, \\
V^y(t) &= V_0^y - gt,
\end{align}
$$
where $x$ is the position of the ball in the x (horizontal) direction, $y$ is the position of the ball in the y (vertical) direction, $V^x$ is the velocity of the ball in the x (horizontal) direction, $V^y$ is the velocity of the ball in the y (vertical) direction, $t$ is time, $x_0$, $y_0$, $V_0^x$ and $V_0^y$ are the initial x and y position and velocity of the ball. $g$ is the acceleration due to gravity, 9.81 m/s$^2$.
Since the filter is discrete we need to discretize our equations so that we get the value of the current state of the ball in terms of the previous. This leads to the following equations:
$$
\begin{align}
x_n &= x_{n-1} + V_{n-1}^x \Delta t, \\
V^x_n &= V_{n-1}^x, \\
y_n &= y_{n-1} + V_{n-1}^y \Delta t - \frac{1}{2}g\Delta t^2, \\
V^y_n &= V_{n-1}^y - g\Delta t.
\end{align}
$$
These equations can be written in matrix form as:
$$
\begin{bmatrix}
x_n \\[0.3em]
V^x_n \\[0.3em]
y_n \\[0.3em]
V^y_n
\end{bmatrix}
=
\begin{bmatrix}
1 & \Delta t & 0 & 0 \\[0.3em]
0 & 1 & 0 & 0 \\[0.3em]
0 & 0 & 1 & \Delta t \\[0.3em]
0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
x_{n-1} \\[0.3em]
V^x_{n-1} \\[0.3em]
y_{n-1} \\[0.3em]
V^y_{n-1}
\end{bmatrix}
+
\begin{bmatrix}
0 & 0 & 0 & 0 \\[0.3em]
0 & 0 & 0 & 0 \\[0.3em]
0 & 0 & 1 & 0 \\[0.3em]
0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
0 \\[0.3em]
0 \\[0.3em]
-\frac{1}{2}g\Delta t^2 \\[0.3em]
-g\Delta t
\end{bmatrix},
$$
Which is in the form:
$$
\mathbf{x}_n = \mathbf{A}\mathbf{x}_{n-1} + \mathbf{B}\mathbf{u}_n,
$$
where $\mathbf{x}_n$ is the state vector at the current time step, $\mathbf{A}$ is the process matrix, $\mathbf{B}$ is the control matrix and $\mathbf{u}_n$ is the control input vector.
#####Observation Model
We assume that we measure the position and velocity of the cannonball directly, and hence the observation (emission) matrix is the identity matrix, namely:
$$
\begin{bmatrix}
1 & 0 & 0 & 0 \\[0.3em]
0 & 1 & 0 & 0 \\[0.3em]
0 & 0 & 1 & 0 \\[0.3em]
0 & 0 & 0 & 1
\end{bmatrix}
$$
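Before turning to StateSpace.jl, here is a minimal sketch (not the package's code) of the predict/correct recursion a linear Kalman filter performs at each time step, written for generic matrices $\mathbf{A}$, $\mathbf{B}$, $\mathbf{H}$, with the process and observation covariances written as $\mathbf{Q}$ and $\mathbf{R}$ and control input $\mathbf{u}$.
```julia
# One step of the linear Kalman filter with control input (illustrative sketch).
# x, P: prior state mean and covariance; y: current measurement.
function kalman_step(x, P, y, A, B, u, Q, H, R)
    # Predict: push the state through the process model
    x_pred = A*x + B*u
    P_pred = A*P*A' + Q
    # Correct: weigh the prediction against the measurement
    S = H*P_pred*H' + R            # innovation covariance
    K = P_pred*H'*inv(S)           # Kalman gain
    x_new = x_pred + K*(y - H*x_pred)
    P_new = P_pred - K*H*P_pred
    return x_new, P_new
end
```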
###Setting up the problem
First we'll import the required modules
```julia
using StateSpace
using Distributions
using Gadfly
using Colors
```
#####Generate noisy observations
In this section we will generate the noisy observations using the kinematic equations defined above in their continuous form.
The first thing to do is to set the parameters of the model
```julia
elevation_angle = 45.0 #Angle above the (horizontal) ground
muzzle_speed = 100.0 #Speed at which the canonball leaves the muzzle
initial_velocity = [muzzle_speed*cos(deg2rad(elevation_angle)), muzzle_speed*sin(deg2rad(elevation_angle))] #initial x and y components of the velocity
gravAcc = 9.81 #gravitational acceleration
initial_location = [0.0, 0.0] # initial position of the canonball
Δt = 0.1 #time between each measurement
```
0.1
Next we'll define the kinematic equations of the model as functions in Julia (we don't care about the horizontal velocity component because it's constant).
```julia
x_pos(x0::Float64, Vx::Float64, t::Float64) = x0 + Vx*t
y_pos(y0::Float64, Vy::Float64, t::Float64, g::Float64) = y0 + Vy*t - (g * t^2)/2
velocityY(Vy::Float64, t::Float64, g::Float64) = Vy - g * t
```
velocityY (generic function with 1 method)
Let's now set the variances of the noise for the position and velocity observations. We'll make the positional noise quite big.
```julia
x_pos_var = 200.0
y_pos_var = 200.0
Vx_var = 1.0
Vy_var = 1.0
```
1.0
Now we will preallocate the arrays to store the true values and the noisy measurements. Then we will create the measurements in a `for` loop
```julia
#Set the number of observations and preallocate vectors to store true and noisy measurement values
numObs = 145
x_pos_true = Vector{Float64}(numObs)
x_pos_obs = Vector{Float64}(numObs)
y_pos_true = Vector{Float64}(numObs)
y_pos_obs = Vector{Float64}(numObs)
Vx_true = Vector{Float64}(numObs)
Vx_obs = Vector{Float64}(numObs)
Vy_true = Vector{Float64}(numObs)
Vy_obs = Vector{Float64}(numObs)
#Generate the data (true values and noisy observations)
for i in 1:numObs
x_pos_true[i] = x_pos(initial_location[1], initial_velocity[1], (i-1)*Δt)
y_pos_true[i] = y_pos(initial_location[2], initial_velocity[2], (i-1)*Δt, gravAcc)
Vx_true[i] = initial_velocity[1]
Vy_true[i] = velocityY(initial_velocity[2], (i-1)*Δt, gravAcc)
x_pos_obs[i] = x_pos_true[i] + randn() * sqrt(x_pos_var)
y_pos_obs[i] = y_pos_true[i] + randn() * sqrt(y_pos_var)
Vx_obs[i] = Vx_true[i] + randn() * sqrt(Vx_var)
Vy_obs[i] = Vy_true[i] + randn() * sqrt(Vy_var)
end
#Create the observations vector for the Kalman filter
observations = [x_pos_obs Vx_obs y_pos_obs Vy_obs]'
```
4x145 Array{Float64,2}:
5.26679 28.426 26.8539 26.6246 … 1000.9 1008.6 1024.48
70.0963 69.7485 70.6661 71.6596 69.3015 71.0265 72.1319
18.6518 7.20288 -10.3564 28.2894 19.9131 1.48237 -9.53701
71.1312 70.1222 69.3649 67.0026 -68.644 -68.4365 -70.083
The final step in the code block above just puts all of the observations in a single array. Notice that we transpose the array to make sure that the dimensions are consistent with the StateSpace.jl convention. (Each observation is represented by a single column).
#####Define Kalman Filter Parameters
Now we can set the parameters for the process and observation model as defined above. We also set values for the corresponding covariance matrices. Because we're very sure about the process model, we set the process covariance to be very small. The observations can be set to have a higher variance but you can play about with these parameters.
NOTE: Be careful about setting diagonal values to exactly zero; this can cause numerical errors downstream in the matrix calculations. Instead, set such values to be very small.
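For example, if you wanted (effectively) zero process noise, something like the following can stand in for an exactly-zero covariance (the magnitude is only a suggestion):
```julia
# Tiny positive variance keeps the matrices well conditioned
tiny_covariance = 1e-8 * eye(4)   # use this instead of zeros(4, 4)
```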
```julia
process_matrix = [[1.0, Δt, 0.0, 0.0] [0.0, 1.0, 0.0, 0.0] [0.0, 0.0, 1.0, Δt] [0.0, 0.0, 0.0, 1.0]]'
process_covariance = 0.01*eye(4)
observation_matrix = eye(4)
observation_covariance = 0.2*eye(4)
control_matrix = [[0.0, 0.0, 0.0, 0.0] [0.0, 0.0, 0.0, 0.0] [0.0, 0.0, 1.0, 0.0] [0.0, 0.0, 0.0, 1.0]]
control_input = [0.0, 0.0, -(gravAcc * Δt^2)/2, -(gravAcc * Δt)]
#Create an instance of the LKF with the control inputs
linCISMM = LinearGaussianCISSM(process_matrix, process_covariance, observation_matrix, observation_covariance, control_matrix, control_input)
```
StateSpace.LinearGaussianCISSM{Float64}(4x4 Array{Float64,2}:
1.0 0.1 0.0 0.0
0.0 1.0 0.0 0.0
0.0 0.0 1.0 0.1
0.0 0.0 0.0 1.0,4x4 Array{Float64,2}:
0.01 0.0 0.0 0.0
0.0 0.01 0.0 0.0
0.0 0.0 0.01 0.0
0.0 0.0 0.0 0.01,4x4 Array{Float64,2}:
1.0 0.0 0.0 0.0
0.0 1.0 0.0 0.0
0.0 0.0 1.0 0.0
0.0 0.0 0.0 1.0,4x4 Array{Float64,2}:
0.2 0.0 0.0 0.0
0.0 0.2 0.0 0.0
0.0 0.0 0.2 0.0
0.0 0.0 0.0 0.2,4x4 Array{Float64,2}:
0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0
0.0 0.0 1.0 0.0
0.0 0.0 0.0 1.0,[0.0,0.0,-0.04905000000000001,-0.9810000000000001])
#####Initial Guess
Now we can make an initial guess for the state of the system (position and velocity). We've purposely set the y coordinate of the initial value way off. This is to show how quickly the Kalman filter can converge to the correct solution. Such fast convergence isn't guaranteed in general; it really depends on how good your model is and on the covariance matrices that you assign to the process/observation models. In this case, if you increase the observation variance (e.g. `observation_covariance = 10*eye(4)`) then you'll see that it takes the Kalman filter longer to converge to the correct value.
```julia
initial_guess_state = [0.0, initial_velocity[1], 500.0, initial_velocity[2]]
initial_guess_covariance = eye(4)
initial_guess = MvNormal(initial_guess_state, initial_guess_covariance)
```
FullNormal(
dim: 4
μ: [0.0,70.71067811865476,500.0,70.71067811865474]
Σ: 4x4 Array{Float64,2}:
1.0 0.0 0.0 0.0
0.0 1.0 0.0 0.0
0.0 0.0 1.0 0.0
0.0 0.0 0.0 1.0
)
#####Perform Linear Kalman Filter Algorithm
Now we have all of the parameters:
1. noisy observations
2. process (transition) and observation (emission) model parameters
3. initial guess of state
We can use the Kalman filter to estimate the true underlying state (the position and velocity of the cannonball).
```julia
filtered_state = filter(linCISMM, observations, initial_guess)
```
SmoothedState{Float64}
145 estimates of 4-D process from 4-D observations
Log-likelihood: -619371.3025340745
###Plot Results
Now we can plot the results to see how well the Kalman Filter predicts the true position of the ball.
```julia
x_filt = Vector{Float64}(numObs)
y_filt = Vector{Float64}(numObs)
for i in 1:numObs
current_state = filtered_state.state[i]
x_filt[i] = current_state.μ[1]
y_filt[i] = current_state.μ[3]
end
n = 3
getColors = distinguishable_colors(n, Color[LCHab(70, 60, 240)],
transform=c -> deuteranopic(c, 0.5),
lchoices=Float64[65, 70, 75, 80],
cchoices=Float64[0, 50, 60, 70],
hchoices=linspace(0, 330, 24))
cannonball_plot = plot(
layer(x=x_pos_true, y=y_pos_true, Geom.line, Theme(default_color=getColors[3])),
layer(x=[initial_guess_state[1]; x_filt], y=[initial_guess_state[3]; y_filt], Geom.line, Theme(default_color=getColors[1])),
layer(x=x_pos_obs, y=y_pos_obs, Geom.point, Theme(default_color=getColors[2])),
Guide.xlabel("X position"), Guide.ylabel("Y position"),
Guide.manual_color_key("Colour Key",["Filtered Estimate", "Measurements","True Value "],[getColors[1],getColors[2],getColors[3]]),
Guide.title("Measurement of a Canonball in Flight")
)
```
Looking for the full code without having to read through the entire document? Well, here you go :)
```julia
using StateSpace
using Distributions
using Gadfly
using Colors
#Set the Parameters
elevation_angle = 45.0
muzzle_speed = 100.0
initial_velocity = [muzzle_speed*cos(deg2rad(elevation_angle)), muzzle_speed*sin(deg2rad(elevation_angle))]
gravAcc = 9.81
initial_location = [0.0, 0.0]
Δt = 0.1
#Functions describing the position of canonball
x_pos(x0::Float64, Vx::Float64, t::Float64) = x0 + Vx*t
y_pos(y0::Float64, Vy::Float64, t::Float64, g::Float64) = y0 + Vy*t - (g * t^2)/2
#Function to describe the evolution of the velocity in the vertical direction
velocityY(Vy::Float64, t::Float64, g::Float64) = Vy - g * t
#Give variances of the observation noise for the position and velocity
x_pos_var = 200.0
y_pos_var = 200.0
Vx_var = 1.0
Vy_var = 1.0
#Set the number of observations and preallocate vectors to store true and noisy measurement values
numObs = 145
x_pos_true = Vector{Float64}(numObs)
x_pos_obs = Vector{Float64}(numObs)
y_pos_true = Vector{Float64}(numObs)
y_pos_obs = Vector{Float64}(numObs)
Vx_true = Vector{Float64}(numObs)
Vx_obs = Vector{Float64}(numObs)
Vy_true = Vector{Float64}(numObs)
Vy_obs = Vector{Float64}(numObs)
#Generate the data (true values and noisy observations)
for i in 1:numObs
x_pos_true[i] = x_pos(initial_location[1], initial_velocity[1], (i-1)*Δt)
y_pos_true[i] = y_pos(initial_location[2], initial_velocity[2], (i-1)*Δt, gravAcc)
Vx_true[i] = initial_velocity[1]
Vy_true[i] = velocityY(initial_velocity[2], (i-1)*Δt, gravAcc)
x_pos_obs[i] = x_pos_true[i] + randn() * sqrt(x_pos_var)
y_pos_obs[i] = y_pos_true[i] + randn() * sqrt(y_pos_var)
Vx_obs[i] = Vx_true[i] + randn() * sqrt(Vx_var)
Vy_obs[i] = Vy_true[i] + randn() * sqrt(Vy_var)
end
#Create the observations vector for the Kalman filter
observations = [x_pos_obs Vx_obs y_pos_obs Vy_obs]'
#Describe the system parameters
process_matrix = [[1.0, Δt, 0.0, 0.0] [0.0, 1.0, 0.0, 0.0] [0.0, 0.0, 1.0, Δt] [0.0, 0.0, 0.0, 1.0]]'
process_covariance = 0.01*eye(4)
observation_matrix = eye(4)
observation_covariance = 0.2*eye(4)
control_matrix = [[0.0, 0.0, 0.0, 0.0] [0.0, 0.0, 0.0, 0.0] [0.0, 0.0, 1.0, 0.0] [0.0, 0.0, 0.0, 1.0]]
control_input = [0.0, 0.0, -(gravAcc * Δt^2)/2, -(gravAcc * Δt)]
#Create an instance of the LKF with the control inputs
linCISMM = LinearGaussianCISSM(process_matrix, process_covariance, observation_matrix, observation_covariance, control_matrix, control_input)
#Set Initial Guess
initial_guess_state = [0.0, initial_velocity[1], 500.0, initial_velocity[2]]
initial_guess_covariance = eye(4)
initial_guess = MvNormal(initial_guess_state, initial_guess_covariance)
#Execute Kalman Filter
filtered_state = filter(linCISMM, observations, initial_guess)
#Plot Filtered results
x_filt = Vector{Float64}(numObs)
y_filt = Vector{Float64}(numObs)
for i in 1:numObs
current_state = filtered_state.state[i]
x_filt[i] = current_state.μ[1]
y_filt[i] = current_state.μ[3]
end
n = 3
getColors = distinguishable_colors(n, Color[LCHab(70, 60, 240)],
transform=c -> deuteranopic(c, 0.5),
lchoices=Float64[65, 70, 75, 80],
cchoices=Float64[0, 50, 60, 70],
hchoices=linspace(0, 330, 24))
cannonball_plot = plot(
layer(x=x_pos_true, y=y_pos_true, Geom.line, Theme(default_color=getColors[3])),
layer(x=[initial_guess_state[1]; x_filt], y=[initial_guess_state[3]; y_filt], Geom.line, Theme(default_color=getColors[1])),
layer(x=x_pos_obs, y=y_pos_obs, Geom.point, Theme(default_color=getColors[2])),
Guide.xlabel("X position"), Guide.ylabel("Y position"),
Guide.manual_color_key("Colour Key",["Filtered Estimate", "Measurements","True Value "],[getColors[1],getColors[2],getColors[3]]),
Guide.title("Measurement of a Canonball in Flight")
)
```
| a7950ba7dadd6d216265b5898495009547cdf715 | 626,909 | ipynb | Jupyter Notebook | examples/LinearKalmanFilterControlInput_CanonBallExample.ipynb | npsmc/StateSpace.jl | 2175c85b23dfbf3178d508a5c749627594e719e7 | [
"MIT"
] | 33 | 2015-04-30T13:11:59.000Z | 2022-01-25T12:04:59.000Z | examples/LinearKalmanFilterControlInput_CanonBallExample.ipynb | npsmc/StateSpace.jl | 2175c85b23dfbf3178d508a5c749627594e719e7 | [
"MIT"
] | 12 | 2015-08-12T04:04:37.000Z | 2019-10-01T02:35:35.000Z | examples/LinearKalmanFilterControlInput_CanonBallExample.ipynb | npsmc/StateSpace.jl | 2175c85b23dfbf3178d508a5c749627594e719e7 | [
"MIT"
] | 12 | 2015-02-24T23:33:14.000Z | 2020-11-18T18:55:35.000Z | 122.22831 | 61,654 | 0.609286 | true | 5,244 | Qwen/Qwen-72B | 1. YES
2. YES | 0.914901 | 0.877477 | 0.802804 | __label__eng_Latn | 0.859722 | 0.703516 |
# 7 Solution Methods to Solve the Growth Model with Julia
This notebook is part of a computational appendix that accompanies the paper
> MATLAB, Python, Julia: What to Choose in Economics?
> > Coleman, Lyon, Maliar, and Maliar (2017)
In order to run the codes in this notebook you will need to install and configure a few Julia packages. We recommend following the instructions on [quantecon.org](https://lectures.quantecon.org/jl/getting_started.html).
Once your Julia installation is up and running, there are a few additional packages you will need in order to run the code here. To install them, run the cell below:
```julia
using Pkg
pkg"add InstantiateFromURL"
```
     Updating registry at `~/.julia/registries/General`
     Updating git-repo `https://github.com/JuliaRegistries/General.git`
    Resolving package versions...
     Updating `~/.julia/environments/v1.1/Project.toml`
    [no changes]
     Updating `~/.julia/environments/v1.1/Manifest.toml`
    [no changes]
```julia
using InstantiateFromURL: activate_github_path
activate_github_path("sglyon/CLMMJuliaPythonMatlab", path="Growth/julia", activate=true, force=true)
```
┌ Info: Recompiling stale cache file /Users/sglyon/.julia/compiled/v1.1/InstantiateFromURL/vAXbt.ji for InstantiateFromURL [43edad99-fa64-5e4f-9937-1c09a410b73f]
└ @ Base loading.jl:1184
     Updating registry at `~/.julia/registries/General`
     Updating git-repo `https://github.com/JuliaRegistries/General.git`
    Precompiling project...
```julia
using Printf, Random, LinearAlgebra
using BasisMatrices, Optim, QuantEcon, Parameters
using BasisMatrices: Degree, Derivative
```
┌ Info: Recompiling stale cache file /Users/sglyon/.julia/compiled/v1.1/BasisMatrices/65PSC.ji for BasisMatrices [08854c51-b66b-5062-a90d-8e7ae4547a49]
└ @ Base loading.jl:1184
## Model
This section gives a short description of the commonly used stochastic Neoclassical growth model.
There is a single infinitely-lived representative agent who consumes and saves using capital. The consumer discounts the future with factor $\beta$ and derives utility only from consumption. Saved capital depreciates at rate $\delta$.
The consumer has access to a Cobb-Douglas technology which uses capital saved from the previous period to produce and is subject to stochastic productivity shocks.
Productivity shocks follow an AR(1) in logs.
The agent's problem can be written recursively using the following Bellman equation
\begin{align}
V(k_t, z_t) &= \max_{k_{t+1}} u(c_t) + \beta E \left[ V(k_{t+1}, z_{t+1}) \right] \\
&\text{subject to } \\
c_t &= z_t f(k_t) + (1 - \delta) k_t - k_{t+1} \\
\log z_{t+1} &= \rho \log z_t + \sigma \varepsilon
\end{align}
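For concreteness, the functional forms assumed in the code below are
$$u(c) = \frac{c^{1-\gamma}-1}{1-\gamma}, \qquad z f(k) = z A k^{\alpha}, \qquad A = \frac{1/\beta - (1-\delta)}{\alpha},$$
where this choice of $A$ normalizes the deterministic steady-state capital stock to one (which is why the grids below are centered on $k = z = 1$).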
## Julia Code
We begin by defining a type that describes our model. It will hold three things:
1. Parameters of the growth model
2. Grids used for approximating the solution
3. Nodes and weights used to approximate integration
Note that `@with_kw` comes from the `Parameters` package -- it allows one to specify default values for the parameters when building a type (for more information refer to the [documentation](http://parametersjl.readthedocs.io/en/latest/)). One of the benefits of the `Parameters` package is that it lets us write things like `@unpack a, b, c = Params`, which takes elements from inside the type instance `Params` and "unpacks" them, i.e. it automates code of the form `a, b, c = Params.a, Params.b, Params.c`.
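As a tiny standalone illustration of these two macros (the struct here is made up purely for demonstration):
```julia
using Parameters

@with_kw struct Toy
    a::Float64 = 1.0
    b::Float64 = 2.0
end

t = Toy(b=10.0)        # a keeps its default, b is overridden
@unpack a, b = t       # equivalent to a, b = t.a, t.b
```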
```julia
"""
The stochastic Neoclassical growth model type contains parameters
which define the model
* α: Capital share in output
* β: Discount factor
* δ: Depreciation rate
* γ: Risk aversion
* ρ: Persistence of the log of the productivity level
* σ: Standard deviation of shocks to log productivity level
* A: Coefficient on C-D production function
* kgrid: Grid over capital
* zgrid: Grid over productivity
* grid: Grid of (k, z) pairs
* eps_nodes: Nodes used to integrate
* weights: Weights used to integrate
* z1: A grid of the possible z1s tomorrow given eps_nodes and zgrid
"""
@with_kw struct NeoclassicalGrowth
# Parameters
α::Float64 = 0.36
β::Float64 = 0.99
δ::Float64 = 0.02
γ::Float64 = 2.0
ρ::Float64 = 0.95
σ::Float64 = 0.01
A::Float64 = (1.0/β - (1 - δ)) / α
# Grids
kgrid::Vector{Float64} = collect(range(0.9, stop=1.1, length=10))
zgrid::Vector{Float64} = collect(range(0.9, stop=1.1, length=10))
grid::Matrix{Float64} = gridmake(kgrid, zgrid)
eps_nodes::Vector{Float64} = qnwnorm(5, 0.0, σ^2)[1]
weights::Vector{Float64} = qnwnorm(5, 0.0, σ^2)[2]
z1::Matrix{Float64} = (zgrid.^(ρ))' .* exp.(eps_nodes)
end
```
NeoclassicalGrowth
We also define some useful functions so that we [don't repeat ourselves](https://lectures.quantecon.org/py/writing_good_code.html#don-t-repeat-yourself) later in the code.
```julia
# Helper functions
f(ncgm::NeoclassicalGrowth, k, z) = @. z * (ncgm.A * k^ncgm.α)
df(ncgm::NeoclassicalGrowth, k, z) = @. ncgm.α * z * (ncgm.A * k^(ncgm.α - 1.0))
u(ncgm::NeoclassicalGrowth, c) = c > 1e-10 ? @.(c^(1-ncgm.γ)-1)/(1-ncgm.γ) : -1e10
du(ncgm::NeoclassicalGrowth, c) = c > 1e-10 ? c.^(-ncgm.γ) : 1e10
duinv(ncgm::NeoclassicalGrowth, u) = u .^ (-1 / ncgm.γ)
expendables_t(ncgm::NeoclassicalGrowth, k, z) = (1-ncgm.δ)*k + f(ncgm, k, z)
```
expendables_t (generic function with 1 method)
## Solution Methods
In this notebook, we describe the following solution methods:
* Conventional Value Function Iteration
* Envelope Condition Value Function Iteration
* Envelope Condition Derivative Value Function Iteration
* Endogenous Grid Value Function Iteration
* Conventional Policy Function Iteration
* Envelope Condition Policy Function Iteration
* Euler Equation Method
Each of these solution methods will have a very similar structure that follows a few basic steps:
1. Guess a function (either value function or policy function).
2. Using this function, update our guess of both the value and policy functions.
3. Check whether the function we guessed and what it was updated to are similar enough. If so, proceed. If not, return to step 2 using the newly updated functions.
4. Output the policy and value functions.
In order to reduce the amount of repeated code and keep the exposition as clean as possible (the notebook is plenty long as is...), we will define multiple solution types that share a more general (abstract) type called `SolutionMethod`. A solution can then be characterized by a concrete type `ValueCoeffs` (parameterized by the solution method) which consists of an approximation degree, coefficients for the value function, and coefficients for the policy function. The remaining functions below are just helper methods. We will then define a general `solve` method that applies steps 1, 3, and 4 from the algorithm above. Finally, we will implement a specialized method that performs step 2 for each of the algorithms.
This implementation may seem a bit confusing at first (though hopefully the idea itself feels intuitive) -- it takes advantage of Julia's powerful type system and multiple dispatch.
```julia
# Types for solution methods
abstract type SolutionMethod end
struct IterateOnPolicy <: SolutionMethod end
struct VFI_ECM <: SolutionMethod end
struct VFI_EGM <: SolutionMethod end
struct VFI <: SolutionMethod end
struct PFI_ECM <: SolutionMethod end
struct PFI <: SolutionMethod end
struct dVFI_ECM <: SolutionMethod end
struct EulEq <: SolutionMethod end
#
# Type for Approximating Value and Policy
#
mutable struct ValueCoeffs{T <: SolutionMethod,D <: Degree}
d::D
v_coeffs::Vector{Float64}
k_coeffs::Vector{Float64}
end
function ValueCoeffs(::Type{Val{d}}, method::T) where T <: SolutionMethod where d
# Initialize two vectors of zeros
deg = Degree{d}()
n = n_complete(2, deg)
v_coeffs = zeros(n)
k_coeffs = zeros(n)
return ValueCoeffs{T,Degree{d}}(deg, v_coeffs, k_coeffs)
end
function ValueCoeffs(
ncgm::NeoclassicalGrowth, ::Type{Val{d}}, method::T
) where T <: SolutionMethod where d
# Initialize with vector of zeros
deg = Degree{d}()
n = n_complete(2, deg)
v_coeffs = zeros(n)
# Policy guesses based on k and z
k, z = ncgm.grid[:, 1], ncgm.grid[:, 2]
css = ncgm.A - ncgm.δ
yss = ncgm.A
c_pol = f(ncgm, k, z) * (css/yss)
# Figure out what kp is
k_pol = expendables_t(ncgm, k, z) - c_pol
k_coeffs = complete_polynomial(ncgm.grid, d) \ k_pol
return ValueCoeffs{T,Degree{d}}(deg, v_coeffs, k_coeffs)
end
solutionmethod(::ValueCoeffs{T}) where T <:SolutionMethod = T
# A few copy methods to make life easier
Base.copy(vp::ValueCoeffs{T,D}) where T where D =
ValueCoeffs{T,D}(vp.d, vp.v_coeffs, vp.k_coeffs)
function Base.copy(vp::ValueCoeffs{T1,D}, ::T2) where T1 where D where T2 <: SolutionMethod
ValueCoeffs{T2,D}(vp.d, vp.v_coeffs, vp.k_coeffs)
end
function Base.copy(
ncgm::NeoclassicalGrowth, vp::ValueCoeffs{T}, ::Type{Val{new_degree}}
) where T where new_degree
# Build Value and policy matrix
deg = Degree{new_degree}()
V = build_V(ncgm, vp)
k = build_k(ncgm, vp)
# Build new Phi
Phi = complete_polynomial(ncgm.grid, deg)
v_coeffs = Phi \ V
k_coeffs = Phi \ k
return ValueCoeffs{T,Degree{new_degree}}(deg, v_coeffs, k_coeffs)
end
```
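A quick illustrative sketch (not needed for the results below) of how these pieces fit together: the same coefficients can be tagged with different solution methods, and that tag is what selects the `update!` method defined further below.
```julia
# Construct a model and two solution objects that share coefficients but
# dispatch to different update! methods.
ncgm_demo = NeoclassicalGrowth()
vp_vfi = ValueCoeffs(ncgm_demo, Val{2}, VFI())
vp_ecm = copy(vp_vfi, VFI_ECM())                  # same coefficients, different method tag
solutionmethod(vp_vfi), solutionmethod(vp_ecm)    # -> (VFI, VFI_ECM)
```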
We will need to repeatedly update coefficients, build $V$ (or $dV$ depending on the solution method), and be able to compute expected values, so we define some additional helper functions below.
```julia
"""
Updates the coefficients for the value function inplace in `vp`
"""
function update_v!(vp::ValueCoeffs, new_coeffs::Vector{Float64}, dampen::Float64)
vp.v_coeffs = (1-dampen)*vp.v_coeffs + dampen*new_coeffs
end
"""
Updates the coefficients for the policy function inplace in `vp`
"""
function update_k!(vp::ValueCoeffs, new_coeffs::Vector{Float64}, dampen::Float64)
vp.k_coeffs = (1-dampen)*vp.k_coeffs + dampen*new_coeffs
end
"""
Builds either V or dV depending on the solution method that is given. If it
is a solution method that iterates on the derivative of the value function
then it will return derivative of the value function, otherwise the
value function itself
"""
build_V_or_dV(ncgm::NeoclassicalGrowth, vp::ValueCoeffs) =
build_V_or_dV(ncgm, vp, solutionmethod(vp)())
build_V_or_dV(ncgm, vp::ValueCoeffs, ::SolutionMethod) = build_V(ncgm, vp)
build_V_or_dV(ncgm, vp::ValueCoeffs, T::dVFI_ECM) = build_dV(ncgm, vp)
function build_dV(ncgm::NeoclassicalGrowth, vp::ValueCoeffs)
Φ = complete_polynomial(ncgm.grid, vp.d, Derivative{1}())
Φ*vp.v_coeffs
end
function build_V(ncgm::NeoclassicalGrowth, vp::ValueCoeffs)
Φ = complete_polynomial(ncgm.grid, vp.d)
Φ*vp.v_coeffs
end
function build_k(ncgm::NeoclassicalGrowth, vp::ValueCoeffs)
Φ = complete_polynomial(ncgm.grid, vp.d)
Φ*vp.k_coeffs
end
```
build_k (generic function with 1 method)
Additionally, in order to evaluate the value function, we will need to be able to take expectations.
These functions evaluate expectations by taking the policy $k_{t+1}$ and the current productivity state $z_t$ as inputs. They then integrate over the possible values of $z_{t+1}$.
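Concretely, given quadrature nodes and weights $(\varepsilon_j, \omega_j)$ from `qnwnorm`, the conditional expectation is approximated as
$$E\left[V(k_{t+1}, z_{t+1}) \mid z_t\right] \approx \sum_{j} \omega_j \, V\left(k_{t+1}, z_t^{\rho} e^{\varepsilon_j}\right),$$
which is exactly the weighted sum computed in `compute_EV!` below (the possible $z_{t+1}$ values are precomputed in the `z1` field of the model type).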
```julia
function compute_EV!(cp_kpzp::Vector{Float64}, ncgm::NeoclassicalGrowth,
vp::ValueCoeffs, kp, iz)
# Pull out information from types
z1, weightsz = ncgm.z1, ncgm.weights
# Get number nodes
nzp = length(weightsz)
EV = 0.0
for izp in 1:nzp
zp = z1[izp, iz]
complete_polynomial!(cp_kpzp, [kp, zp], vp.d)
EV += weightsz[izp] * dot(vp.v_coeffs, cp_kpzp)
end
return EV
end
function compute_EV(ncgm::NeoclassicalGrowth, vp::ValueCoeffs, kp, iz)
cp_kpzp = Array{Float64}(undef, n_complete(2, vp.d))
return compute_EV!(cp_kpzp, ncgm, vp, kp, iz)
end
function compute_EV(ncgm::NeoclassicalGrowth, vp::ValueCoeffs)
# Get length of k and z grids
kgrid, zgrid = ncgm.kgrid, ncgm.zgrid
nk, nz = length(kgrid), length(zgrid)
temp = Array{Float64}(undef, n_complete(2, vp.d))
# Allocate space to store EV
EV = Array{Float64}(undef, nk*nz)
_inds = LinearIndices((nk, nz))
for ik in 1:nk, iz in 1:nz
# Pull out states
k = kgrid[ik]
z = zgrid[iz]
ikiz_index = _inds[ik, iz]
# Pass to scalar EV
complete_polynomial!(temp, [k, z], vp.d)
kp = dot(vp.k_coeffs, temp)
EV[ikiz_index] = compute_EV!(temp, ncgm, vp, kp, iz)
end
return EV
end
function compute_dEV!(cp_dkpzp::Vector, ncgm::NeoclassicalGrowth,
vp::ValueCoeffs, kp, iz)
# Pull out information from types
z1, weightsz = ncgm.z1, ncgm.weights
# Get number nodes
nzp = length(weightsz)
dEV = 0.0
for izp in 1:nzp
zp = z1[izp, iz]
complete_polynomial!(cp_dkpzp, [kp, zp], vp.d, Derivative{1}())
dEV += weightsz[izp] * dot(vp.v_coeffs, cp_dkpzp)
end
return dEV
end
function compute_dEV(ncgm::NeoclassicalGrowth, vp::ValueCoeffs, kp, iz)
compute_dEV!(Array{Float64}(undef, n_complete(2, vp.d)), ncgm, vp, kp, iz)
end
```
compute_dEV (generic function with 1 method)
### General Solution Method
As promised, below is some code that "generally" applies the algorithm that we described -- notice that it works for any `ValueCoeffs`, whatever concrete subtype of our abstract `SolutionMethod` it carries. We will define a specialized version of `update!` for each solution method, so we only need this one `solve` method and won't repeat the more tedious portions of our code.
```julia
function solve(
ncgm::NeoclassicalGrowth, vp::ValueCoeffs;
tol::Float64=1e-6, maxiter::Int=5000, dampen::Float64=1.0,
nskipprint::Int=1, verbose::Bool=true
)
# Get number of k and z on grid
nk, nz = length(ncgm.kgrid), length(ncgm.zgrid)
# Build basis matrix and value function
dPhi = complete_polynomial(ncgm.grid, vp.d, Derivative{1}())
Phi = complete_polynomial(ncgm.grid, vp.d)
V = build_V_or_dV(ncgm, vp)
k = build_k(ncgm, vp)
Vnew = copy(V)
knew = copy(k)
# Print column names
if verbose
@printf("| Iteration | Distance V | Distance K |\n")
end
# Iterate to convergence
dist, iter = 10.0, 0
while (tol < dist) & (iter < maxiter)
# Update the value function using appropriate update methods
update!(Vnew, knew, ncgm, vp, Phi, dPhi)
# Compute distance and update all relevant elements
iter += 1
dist_v = maximum(abs, 1.0 .- Vnew./V)
dist_k = maximum(abs, 1.0 .- knew./k)
copy!(V, Vnew)
copy!(k, knew)
# If we are iterating on a policy, use the difference of values
# otherwise use the distance on policy
dist = ifelse(solutionmethod(vp) == IterateOnPolicy, dist_v, dist_k)
# Print status update
if verbose && (iter%nskipprint == 0)
@printf("|%-11d|%-12e|%-12e|\n", iter, dist_v, dist_k)
end
end
# Update value and policy functions one last time as long as the
# solution method isn't IterateOnPolicy
if ~(solutionmethod(vp) == IterateOnPolicy)
# Update capital policy after finished
kp = env_condition_kp(ncgm, vp)
update_k!(vp, complete_polynomial(ncgm.grid, vp.d) \ kp, 1.0)
# Update value function according to specified policy
vp_igp = copy(vp, IterateOnPolicy())
solve(ncgm, vp_igp; tol=1e-10, maxiter=5000, verbose=false)
update_v!(vp, vp_igp.v_coeffs, 1.0)
end
return vp
end
```
solve (generic function with 1 method)
### Iterating to Convergence (given policy)
This isn't one of the methods described above, but it is used as an element of a few of our methods (and also as a way to get a first guess at the value function). This method takes an initial policy function, $\bar{k}(k_t, z_t)$, as given, and then, without changing the policy, iterates until the value function has converged.
Thus the "update section" of the algorithm in this instance would be:
* Leave policy function unchanged
* At each grid point $(k_t, z_t)$, compute $\hat{V}(k_t, z_t) = u\big((1-\delta)k_t + z_t f(k_t) - \bar{k}(k_t, z_t)\big) + \beta E \left[ V(\bar{k}(k_t, z_t), z_{t+1}) \right]$
```julia
function update!(V::Vector{Float64}, kpol::Vector{Float64},
ncgm::NeoclassicalGrowth, vp::ValueCoeffs{IterateOnPolicy},
Φ::Matrix{Float64}, dΦ::Matrix{Float64})
# Get sizes and allocate for complete_polynomial
kgrid = ncgm.kgrid; zgrid = ncgm.zgrid;
nk, nz = length(kgrid), length(zgrid)
_inds = LinearIndices((nk, nz))
# Iterate over all states
for ik in 1:nk, iz in 1:nz
# Pull out states
k = kgrid[ik]
z = zgrid[iz]
# Pull out policy and evaluate consumption
ikiz_index = _inds[ik, iz]
k1 = kpol[ikiz_index]
c = expendables_t(ncgm, k, z) - k1
# New value
EV = compute_EV(ncgm, vp, k1, iz)
V[ikiz_index] = u(ncgm, c) + ncgm.β*EV
end
# Update coefficients
update_v!(vp, Φ \ V, 1.0)
update_k!(vp, Φ \ kpol, 1.0)
return V
end
```
update! (generic function with 1 method)
### Conventional Value Function Iteration
This is one of the first solution methods a graduate student in macroeconomics typically learns.
In this solution method, one takes as given a value function, $V(k_t, z_t)$, and then solves for the optimal policy given the value function.
The update section takes the form:
* For each point, $(k_t, z_t)$, numerically solve for $c^*(k_t, z_t)$ to satisfy the first order condition $u'(c^*) = \beta E \left[ V_1((1 - \delta) k_t + z_t f(k_t) - c^*, z_{t+1}) \right]$
* Define $k^*(k_t, z_t) = (1 - \delta) k_t + z_t f(k_t) - c^*(k_t, z_t)$
* Update value function according to $\hat{V}(k_t, z_t) = u(c^*(k_t, z_t)) + \beta E \left[ V(k^*(k_t, z_t), z_{t+1}) \right]$
```julia
function update!(V::Vector{Float64}, kpol::Vector{Float64},
ncgm::NeoclassicalGrowth, vp::ValueCoeffs{VFI},
Φ::Matrix{Float64}, dΦ::Matrix{Float64})
# Get sizes and allocate for complete_polynomial
kgrid = ncgm.kgrid; zgrid = ncgm.zgrid
nk, nz = length(kgrid), length(zgrid)
# Iterate over all states
temp = Array{Float64}(undef, n_complete(2, vp.d))
_inds = LinearIndices((nk, nz))
for iz=1:nz, ik=1:nk
k = kgrid[ik]; z = zgrid[iz]
# Define an objective function (negative for minimization)
y = expendables_t(ncgm, k, z)
solme(kp) = du(ncgm, y - kp) - ncgm.β*compute_dEV!(temp, ncgm, vp, kp, iz)
# Find sol to foc
kp = brent(solme, 1e-8, y-1e-8; rtol=1e-12)
c = expendables_t(ncgm, k, z) - kp
# New value
ikiz_index = _inds[ik, iz]
EV = compute_EV!(temp, ncgm, vp, kp, iz)
V[ikiz_index] = u(ncgm, c) + ncgm.β*EV
kpol[ikiz_index] = kp
end
# Update coefficients
update_v!(vp, Φ \ V, 1.0)
update_k!(vp, Φ \ kpol, 1.0)
return V
end
```
update! (generic function with 2 methods)
### Endogenous Grid Value Function Iteration
This method was introduced by Chris Carroll. The key is that the grid of points used for the approximation is over $(k_{t+1}, z_{t})$ instead of $(k_t, z_t)$. The insightful piece of this algorithm is that this transformation yields a closed form for the consumption function, $c^*(k_{t+1}, z_t) = u'^{-1} \left( \beta E \left[ V_1(k_{t+1}, z_{t+1}) \right] \right)$.
Then for a given $(k_{t+1}, z_{t})$ the update section would be
* Define $c^*(k_{t+1}, z_t) = u'^{-1} \left( \beta E \left[ V_1(k_{t+1}, z_{t+1}) \right] \right)$
* Find the $k_t$ such that $(1 - \delta) k_t + z_t f(k_t) - c^*(k_{t+1}, z_t) = k_{t+1}$
* Update value function according to $\hat{V}(k_t, z_t) = u(c^*(k_{t+1}, z_t)) + \beta E \left[ V(k_{t+1}, z_{t+1}) \right]$
```julia
function update!(V::Vector{Float64}, kpol::Vector{Float64},
ncgm::NeoclassicalGrowth, vp::ValueCoeffs{VFI_EGM},
Φ::Matrix{Float64}, dΦ::Matrix{Float64})
# Get sizes and allocate for complete_polynomial
kgrid = ncgm.kgrid; zgrid = ncgm.zgrid; grid = ncgm.grid;
nk, nz = length(kgrid), length(zgrid)
# Iterate
temp = Array{Float64}(undef, n_complete(2, vp.d))
_inds = LinearIndices((nk, nz))
for iz=1:nz, ik=1:nk
# In EGM we use the grid points as if they were our
# policy for yesterday and find implied kt
ikiz_index = _inds[ik, iz]
k1 = kgrid[ik];z = zgrid[iz];
# Compute the derivative of expected values
dEV = compute_dEV!(temp, ncgm, vp, k1, iz)
# Compute optimal consumption
c = duinv(ncgm, ncgm.β*dEV)
# Need to find corresponding kt for optimal c
obj(kt) = expendables_t(ncgm, kt, z) - c - k1
kt_star = brent(obj, 0.0, 2.0, xtol=1e-10)
# New value
EV = compute_EV!(temp, ncgm, vp, k1, iz)
V[ikiz_index] = u(ncgm, c) + ncgm.β*EV
kpol[ikiz_index] = kt_star
end
# New Φ (has our new "kt_star" and z points)
Φ_egm = complete_polynomial([kpol grid[:, 2]], vp.d)
# Update coefficients
update_v!(vp, Φ_egm \ V, 1.0)
update_k!(vp, Φ_egm \ grid[:, 1], 1.0)
# Update V and kpol to be value and policy corresponding
# to our grid again
copy!(V, Φ*vp.v_coeffs)
copy!(kpol, Φ*vp.k_coeffs)
return V
end
```
update! (generic function with 3 methods)
### Envelope Condition Value Function Iteration
Very similar to the previous method. The insight of this algorithm is that since we are already approximating the value function and can evaluate its derivative, we can skip the numerical optimization piece of the update method and compute directly the policy using the envelope condition (hence the name).
The envelope condition says:
$$c^*(k_t, z_t) = u'^{-1} \left( \frac{\partial V(k_t, z_t)}{\partial k_t} \left(1 - \delta + z_t f'(k_t)\right)^{-1} \right)$$
so
$$k^*(k_t, z_t) = z_t f(k_t) + (1-\delta)k_t - c^*(k_t, z_t)$$
The functions below compute the policy using the envelope condition.
```julia
function env_condition_kp!(cp_out::Vector{Float64}, ncgm::NeoclassicalGrowth,
vp::ValueCoeffs, k::Float64, z::Float64)
# Compute derivative of VF
dV = dot(vp.v_coeffs, complete_polynomial!(cp_out, [k, z], vp.d, Derivative{1}()))
# Consumption is then computed as
c = duinv(ncgm, dV / (1 - ncgm.δ .+ df(ncgm, k, z)))
return expendables_t(ncgm, k, z) - c
end
function env_condition_kp(ncgm::NeoclassicalGrowth, vp::ValueCoeffs,
k::Float64, z::Float64)
cp_out = Array{Float64}(undef, n_complete(2, vp.d))
env_condition_kp!(cp_out, ncgm, vp, k, z)
end
function env_condition_kp(ncgm::NeoclassicalGrowth, vp::ValueCoeffs)
# Pull out k and z from grid
k = ncgm.grid[:, 1]
z = ncgm.grid[:, 2]
# Create basis matrix for entire grid
dPhi = complete_polynomial(ncgm.grid, vp.d, Derivative{1}())
# Compute consumption
c = duinv(ncgm, (dPhi*vp.v_coeffs) ./ (1-ncgm.δ.+df(ncgm, k, z)))
return expendables_t(ncgm, k, z) .- c
end
```
env_condition_kp (generic function with 2 methods)
The update method is then very similar to other value iteration style methods, but avoids the numerical solver.
* For each point, $(k_t, z_t)$ get $c^*(k_t, z_t)$ from the envelope condition
* Define $k^*(k_t, z_t) = (1 - \delta) k_t + z_t f(k_t) - c^*(k_t, z_t)$
* Update value function according to $\hat{V}(k_t, z_t) = u(c^*(k_t, z_t)) + \beta E \left[ V(k^*(k_t, z_t), z_{t+1}) \right]$
```julia
function update!(V::Vector{Float64}, kpol::Vector{Float64},
ncgm::NeoclassicalGrowth, vp::ValueCoeffs{VFI_ECM},
Φ::Matrix{Float64}, dΦ::Matrix{Float64})
# Get sizes and allocate for complete_polynomial
kgrid = ncgm.kgrid; zgrid = ncgm.zgrid;
nk, nz = length(kgrid), length(zgrid)
# Iterate over all states
temp = Array{Float64}(undef, n_complete(2, vp.d))
_inds = LinearIndices((nk, nz))
for ik in 1:nk, iz in 1:nz
ikiz_index = _inds[ik, iz]
k = kgrid[ik]
z = zgrid[iz]
# Policy from envelope condition
kp = env_condition_kp!(temp, ncgm, vp, k, z)
c = expendables_t(ncgm, k, z) - kp
kpol[ikiz_index] = kp
# New value
EV = compute_EV!(temp, ncgm, vp, kp, iz)
V[ikiz_index] = u(ncgm, c) + ncgm.β*EV
end
# Update coefficients
update_v!(vp, Φ \ V, 1.0)
update_k!(vp, Φ \ kpol, 1.0)
return V
end
```
update! (generic function with 4 methods)
### Conventional Policy Function Iteration
Policy function iteration differs from value function iteration in that it starts with a policy function, then updates the value function, and finally finds the new optimal policy function. Given a policy $c(k_t, z_t)$, for each pair $(k_t, z_t)$:
* Define $k(k_t, z_t) = (1 - \delta) k_t + z_t f(k_t) - c(k_t, z_t)$
* Find the fixed point of $V(k_t, z_t) = u(c(k_t, z_t)) + \beta E \left[ V(k(k_t, z_t), z_{t+1}) \right]$ (iterate to convergence given the policy)
* Given $V(k_t, z_t)$, numerically solve for new policy $c^*(k_t, z_t)$ -- Stop when $c(k_t, z_t) \approx c^*(k_t, z_t)$
```julia
function update!(V::Vector{Float64}, kpol::Vector{Float64},
ncgm::NeoclassicalGrowth, vp::ValueCoeffs{PFI},
Φ::Matrix{Float64}, dΦ::Matrix{Float64})
# Get sizes and allocate for complete_polynomial
kgrid = ncgm.kgrid; zgrid = ncgm.zgrid; grid = ncgm.grid;
nk, nz = length(kgrid), length(zgrid)
# Copy valuecoeffs object and use to iterate to
# convergence given a policy
vp_igp = copy(vp, IterateOnPolicy())
solve(ncgm, vp_igp; nskipprint=1000, maxiter=5000, verbose=false)
# Update the policy and values
temp = Array{Float64}(undef, n_complete(2, vp.d))
_inds = LinearIndices((nk, nz))
for ik in 1:nk, iz in 1:nz
k = kgrid[ik]; z = zgrid[iz];
# Define an objective function (negative for minimization)
y = expendables_t(ncgm, k, z)
solme(kp) = du(ncgm, y - kp) - ncgm.β*compute_dEV!(temp, ncgm, vp, kp, iz)
# Find minimum of objective
kp = brent(solme, 1e-8, y-1e-8; rtol=1e-12)
# Update policy function
ikiz_index = _inds[ik, iz]
kpol[ikiz_index] = kp
end
# Get new coeffs
update_k!(vp, Φ \ kpol, 1.0)
update_v!(vp, vp_igp.v_coeffs, 1.0)
# Update all elements of value
copy!(V, Φ*vp.v_coeffs)
return V
end
```
update! (generic function with 5 methods)
### Envelope Condition Policy Function Iteration
Similar to policy function iteration, but, rather than numerically solve for new policies, it uses the envelope condition to directly compute them. Given a starting policy $c(k_t, z_t)$ and for each pair $(k_t, z_t)$
* Define $k(k_t, z_t) = (1 - \delta) k_t + z_t f(k_t) - c(k_t, z_t)$
* Find the fixed point of $V(k_t, z_t) = u(c(k_t, z_t)) + \beta E \left[ V(k(k_t, z_t), z_{t+1}) \right]$ (iterate to convergence given the policy)
* Given $V(k_t, z_t)$ find $c^*(k_t, z_t)$ using envelope condition -- Stop when $c(k_t, z_t) \approx c^*(k_t, z_t)$
```julia
function update!(V::Vector{Float64}, kpol::Vector{Float64},
ncgm::NeoclassicalGrowth, vp::ValueCoeffs{PFI_ECM},
Φ::Matrix{Float64}, dΦ::Matrix{Float64})
# Copy valuecoeffs object and use to iterate to
# convergence given a policy
vp_igp = copy(vp, IterateOnPolicy())
solve(ncgm, vp_igp; nskipprint=1000, maxiter=5000, verbose=false)
# Update the policy and values
kp = env_condition_kp(ncgm, vp)
update_k!(vp, Φ \ kp, 1.0)
update_v!(vp, vp_igp.v_coeffs, 1.0)
# Update all elements of value
copy!(V, Φ*vp.v_coeffs)
copy!(kpol, kp)
return V
end
```
update! (generic function with 6 methods)
### Euler Equation Method
Euler equation methods operate directly on the Euler equation: $u'(c_t) = \beta E \left[ u'(c_{t+1}) (1 - \delta + z_{t+1} f'(k_{t+1})) \right]$.
Given an initial policy $c(k_t, z_t)$ for each grid point $(k_t, z_t)$
* Find $k(k_t, z_t) = (1-\delta)k_t + z_t f(k_t) - c(k_t, z_t)$
* Let $c_{t+1} = c(k(k_t, z_t), z_{t+1})$ for each possible realization of $z_{t+1}$
* Recover the $c^*$ that satisfies the Euler equation, i.e. $u'(c^*) = \beta E \left[ u'(c_{t+1}) (1 - \delta + z_{t+1} f'(k_{t+1})) \right]$ (since the right-hand side does not depend on $c^*$, this is just an inversion of $u'$)
* Stop when $c^*(k_t, z_t) \approx c(k_t, z_t)$
```julia
# Conventional Euler equation method
function update!(V::Vector{Float64}, kpol::Vector{Float64},
ncgm::NeoclassicalGrowth, vp::ValueCoeffs{EulEq},
Φ::Matrix{Float64}, dΦ::Matrix{Float64})
# Get sizes and allocate for complete_polynomial
@unpack kgrid, zgrid, weights, z1 = ncgm
nz1, nz = size(z1)
nk = length(kgrid)
# Iterate over all states
temp = Array{Float64}(undef, n_complete(2, vp.d))
_inds = LinearIndices((nk, nz))
for iz in 1:nz, ik in 1:nk
k = kgrid[ik]; z = zgrid[iz];
# Create current polynomial
complete_polynomial!(temp, [k, z], vp.d)
# Compute what capital will be tomorrow according to policy
kp = dot(temp, vp.k_coeffs)
# Compute RHS of EE
rhs_ee = 0.0
for iz1 in 1:nz1
# Possible z in t+1
zp = z1[iz1, iz]
# Policy for k_{t+2}
complete_polynomial!(temp, [kp, zp], vp.d)
kpp = dot(temp, vp.k_coeffs)
# Implied t+1 consumption
cp = expendables_t(ncgm, kp, zp) - kpp
# Add to running expectation
rhs_ee += ncgm.β*weights[iz1]*du(ncgm, cp)*(1-ncgm.δ+df(ncgm, kp, zp))
end
# The rhs of EE implies consumption and investment in t
c = duinv(ncgm, rhs_ee)
kp_star = expendables_t(ncgm, k, z) - c
# New value
ikiz_index = _inds[ik, iz]
EV = compute_EV!(temp, ncgm, vp, kp_star, iz)
V[ikiz_index] = u(ncgm, c) + ncgm.β*EV
kpol[ikiz_index] = kp_star
end
# Update coefficients
update_v!(vp, Φ \ V, 1.0)
update_k!(vp, Φ \ kpol, 1.0)
return V
end
```
update! (generic function with 7 methods)
### Envelope Condition Derivative Value Function Iteration
This method uses the same insight as "Envelope Condition Value Function Iteration," but, rather than iterating directly on the value function, it iterates on the derivative of the value function. The update steps are
* For each point, $(k_t, z_t)$ get $c^*(k_t, z_t)$ from the envelope condition (which only depends on the derivative of the value function!)
* Define $k^*(k_t, z_t) = (1 - \delta) k_t + z_t f(k_t) - c^*(k_t, z_t)$
* Update value function according to $\hat{V}_1(k_t, z_t) = \beta (1 - \delta + z_t f'(k_t)) E \left[ V_1(k^*(k_t, z_t), z_{t+1}) \right]$
Once it has converged, you use the implied policy rule and iterate to convergence using the "iterate to convergence (given policy)" method.
```julia
function update!(dV::Vector{Float64}, kpol::Vector{Float64},
ncgm::NeoclassicalGrowth, vp::ValueCoeffs{dVFI_ECM},
Φ::Matrix{Float64}, dΦ::Matrix{Float64})
# Get sizes and allocate for complete_polynomial
kgrid = ncgm.kgrid; zgrid = ncgm.zgrid; grid = ncgm.grid;
nk, nz, ns = length(kgrid), length(zgrid), size(grid, 1)
# Iterate over all states
temp = Array{Float64}(undef, n_complete(2, vp.d))
_inds = LinearIndices((nk, nz))
for iz=1:nz, ik=1:nk
k = kgrid[ik]; z = zgrid[iz];
# Envelope condition implies optimal kp
kp = env_condition_kp!(temp, ncgm, vp, k, z)
c = expendables_t(ncgm, k, z) - kp
# New value
ikiz_index = _inds[ik, iz]
dEV = compute_dEV!(temp, ncgm, vp, kp, iz)
dV[ikiz_index] = (1-ncgm.δ+df(ncgm, k, z))*ncgm.β*dEV
kpol[ikiz_index] = kp
end
# Get new coeffs
update_k!(vp, Φ \ kpol, 1.0)
update_v!(vp, dΦ \ dV, 1.0)
return dV
end
```
update! (generic function with 8 methods)
### Simulation and Euler Error Methods
The following functions, which simulate the model and compute Euler equation errors, are easily defined given our model type and the solution type.
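For reference, the Euler equation residual evaluated at a simulated point $(k_t, z_t)$ is
$$EE(k_t, z_t) = \left| \frac{\beta E\left[u'(c_{t+1})\left(1-\delta+z_{t+1} f'(k_{t+1})\right)\right]}{u'(c_t)} - 1\right|,$$
and the results below report the mean and maximum of these residuals over a long simulation in $\log_{10}$ units.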
```julia
"""
Simulates the neoclassical growth model for a given set of solution
coefficients. It simulates for `capT` periods and discards first
`nburn` observations.
"""
function simulate(ncgm::NeoclassicalGrowth, vp::ValueCoeffs,
shocks::Vector{Float64}; capT::Int=10_000,
nburn::Int=200)
# Unpack parameters
kp = 0.0 # Policy holder
temp = Array{Float64}(undef, n_complete(2, vp.d))
# Allocate space for k and z
ksim = Array{Float64}(undef, capT+nburn)
zsim = Array{Float64}(undef, capT+nburn)
# Initialize both k and z at 1
ksim[1] = 1.0
zsim[1] = 1.0
# Simulate
temp = Array{Float64}(undef, n_complete(2, vp.d))
for t in 2:capT+nburn
# Evaluate k_t given yesterday's (k_{t-1}, z_{t-1})
kp = env_condition_kp!(temp, ncgm, vp, ksim[t-1], zsim[t-1])
# Draw new z and update k using policy above
zsim[t] = zsim[t-1]^ncgm.ρ * exp(ncgm.σ*shocks[t])
ksim[t] = kp
end
return ksim[nburn+1:end], zsim[nburn+1:end]
end
function simulate(ncgm::NeoclassicalGrowth, vp::ValueCoeffs;
capT::Int=10_000, nburn::Int=200, seed=42)
Random.seed!(seed) # Set specific seed
shocks = randn(capT + nburn)
return simulate(ncgm, vp, shocks; capT=capT, nburn=nburn)
end
"""
This function evaluates the Euler Equation residual for a single point (k, z)
"""
function EulerEquation!(out::Vector{Float64}, ncgm::NeoclassicalGrowth,
vp::ValueCoeffs, k::Float64, z::Float64,
nodes::Vector{Float64}, weights::Vector{Float64})
# Evaluate consumption today
k1 = env_condition_kp!(out, ncgm, vp, k, z)
c = expendables_t(ncgm, k, z) - k1
LHS = du(ncgm, c)
# For each of realizations tomorrow, evaluate expectation on RHS
RHS = 0.0
for (eps, w) in zip(nodes, weights)
# Compute ztp1
z1 = z^ncgm.ρ * exp(eps)
# Evaluate the ktp2
ktp2 = env_condition_kp!(out, ncgm, vp, k1, z1)
# Get c1
c1 = expendables_t(ncgm, k1, z1) - ktp2
# Update RHS of equation
RHS = RHS + w*du(ncgm, c1)*(1 - ncgm.δ + df(ncgm, k1, z1))
end
return abs(ncgm.β*RHS/LHS - 1.0)
end
"""
Given simulations for k and z, it computes the euler equation residuals
along the entire simulation. It reports the mean and max values in
log10.
"""
function ee_residuals(ncgm::NeoclassicalGrowth, vp::ValueCoeffs,
ksim::Vector{Float64}, zsim::Vector{Float64}; Qn::Int=10)
# Figure out how many periods we simulated for and make sure k and z
# are same length
capT = length(ksim)
@assert length(zsim) == capT
# Finer integration nodes
eps_nodes, weight_nodes = qnwnorm(Qn, 0.0, ncgm.σ^2)
temp = Array{Float64}(undef, n_complete(2, vp.d))
# Compute EE for each period
EE_resid = Array{Float64}(undef, capT)
for t=1:capT
# Pull out current state
k, z = ksim[t], zsim[t]
# Compute residual of Euler Equation
EE_resid[t] = EulerEquation!(temp, ncgm, vp, k, z, eps_nodes, weight_nodes)
end
return EE_resid
end
function ee_residuals(ncgm::NeoclassicalGrowth, vp::ValueCoeffs; Qn::Int=10)
# Simulate and then call other ee_residuals method
ksim, zsim = simulate(ncgm, vp)
return ee_residuals(ncgm, vp, ksim, zsim; Qn=Qn)
end
```
ee_residuals (generic function with 2 methods)
## A Horse Race
We can now run a horse race to compare the methods in terms of both accuracy and speed.
```julia
function main(sm::SolutionMethod, nd::Int=5, shocks=randn(capT+nburn);
capT=10_000, nburn=200, tol=1e-9, maxiter=2500,
nskipprint=25, verbose=true)
# Create model
ncgm = NeoclassicalGrowth()
# Create initial quadratic guess
vp = ValueCoeffs(ncgm, Val{2}, IterateOnPolicy())
solve(ncgm, vp; tol=1e-6, verbose=false)
# Allocate memory for timings
times = Array{Float64}(undef, nd-1)
sols = Array{ValueCoeffs}(undef, nd-1)
mean_ees = Array{Float64}(undef, nd-1)
max_ees = Array{Float64}(undef, nd-1)
# Solve using the solution method for degree 2 to 5
vp = copy(vp, sm)
for d in 2:nd
# Change degree of solution method
vp = copy(ncgm, vp, Val{d})
# Time the current method
start_time = time()
solve(ncgm, vp; tol=tol, maxiter=maxiter, nskipprint=nskipprint,
verbose=verbose)
end_time = time()
# Save the time and solution
times[d-1] = end_time - start_time
sols[d-1] = vp
# Simulate and compute EE
ks, zs = simulate(ncgm, vp, shocks; capT=capT, nburn=nburn)
resids = ee_residuals(ncgm, vp, ks, zs; Qn=10)
mean_ees[d-1] = log10.(mean(abs.(resids)))
max_ees[d-1] = log10.(maximum(abs, resids))
end
return sols, times, mean_ees, max_ees
end
```
main (generic function with 3 methods)
```julia
Random.seed!(52)
shocks = randn(10200)
for sol_method in [VFI(), VFI_EGM(), VFI_ECM(), dVFI_ECM(),
PFI(), PFI_ECM(), EulEq()]
# Make sure everything is compiled
main(sol_method, 5, shocks; maxiter=2, verbose=false)
# Run for real
s_sm, t_sm, mean_eem, max_eem = main(sol_method, 5, shocks;
tol=1e-8, verbose=false)
println("Solution Method: $sol_method")
for (d, t) in zip([2, 3, 4, 5], t_sm)
println("\tDegree $d took time $t")
println("\tMean & Max EE are" *
"$(round(mean_eem[d-1], digits=3)) & $(round(max_eem[d-1], digits=3))")
end
end
```
Solution Method: VFI()
Degree 2 took time 0.8547611236572266
Mean & Max EE are-3.803 & -2.875
Degree 3 took time 0.7150120735168457
Mean & Max EE are-4.914 & -3.487
Degree 4 took time 0.8598308563232422
Mean & Max EE are-5.978 & -4.226
Degree 5 took time 0.7123589515686035
Mean & Max EE are-6.916 & -4.942
Solution Method: VFI_EGM()
Degree 2 took time 0.3311178684234619
Mean & Max EE are-3.803 & -2.876
Degree 3 took time 0.24843502044677734
Mean & Max EE are-4.914 & -3.487
Degree 4 took time 0.30118298530578613
Mean & Max EE are-5.978 & -4.226
Degree 5 took time 0.18770384788513184
Mean & Max EE are-6.916 & -4.942
Solution Method: VFI_ECM()
Degree 2 took time 0.2970089912414551
Mean & Max EE are-3.803 & -2.875
Degree 3 took time 0.2128901481628418
Mean & Max EE are-4.914 & -3.487
Degree 4 took time 0.25684499740600586
Mean & Max EE are-5.978 & -4.226
Degree 5 took time 0.1400759220123291
Mean & Max EE are-6.916 & -4.942
Solution Method: dVFI_ECM()
Degree 2 took time 0.517967939376831
Mean & Max EE are-3.803 & -2.876
Degree 3 took time 0.6294620037078857
Mean & Max EE are-4.914 & -3.487
Degree 4 took time 0.8079090118408203
Mean & Max EE are-5.978 & -4.226
Degree 5 took time 0.8993039131164551
Mean & Max EE are-6.916 & -4.942
Solution Method: PFI()
Degree 2 took time 0.30332493782043457
Mean & Max EE are-3.803 & -2.876
Degree 3 took time 0.6854491233825684
Mean & Max EE are-4.914 & -3.487
Degree 4 took time 0.9637911319732666
Mean & Max EE are-5.978 & -4.226
Degree 5 took time 0.7570579051971436
Mean & Max EE are-6.916 & -4.942
Solution Method: PFI_ECM()
Degree 2 took time 0.24885082244873047
Mean & Max EE are-3.803 & -2.876
Degree 3 took time 0.19704389572143555
Mean & Max EE are-4.914 & -3.487
Degree 4 took time 0.23227882385253906
Mean & Max EE are-5.978 & -4.226
Degree 5 took time 0.1265571117401123
Mean & Max EE are-6.916 & -4.942
Solution Method: EulEq()
Degree 2 took time 0.34078478813171387
Mean & Max EE are-3.803 & -2.876
Degree 3 took time 0.24483489990234375
Mean & Max EE are-4.914 & -3.487
Degree 4 took time 0.27704310417175293
Mean & Max EE are-5.978 & -4.226
Degree 5 took time 0.14841604232788086
Mean & Max EE are-6.916 & -4.942
```julia
```
| b670314b9cb582bb29454b44984a383ec23b7007 | 57,035 | ipynb | Jupyter Notebook | Growth/notebooks/GrowthModelSolutionMethods_jl.ipynb | sglyon/CLMMJuliaPythonMatlab | 4a80c0099f980c9880c6199dd02380e6bc293aa7 | [
"BSD-3-Clause"
] | 19 | 2019-07-09T04:23:14.000Z | 2022-01-24T17:29:07.000Z | Growth/notebooks/GrowthModelSolutionMethods_jl.ipynb | sglyon/CLMMJuliaPythonMatlab | 4a80c0099f980c9880c6199dd02380e6bc293aa7 | [
"BSD-3-Clause"
] | 1 | 2020-03-08T22:30:15.000Z | 2020-03-08T22:30:15.000Z | Growth/notebooks/GrowthModelSolutionMethods_jl.ipynb | sglyon/CLMMJuliaPythonMatlab | 4a80c0099f980c9880c6199dd02380e6bc293aa7 | [
"BSD-3-Clause"
] | 18 | 2019-07-09T07:22:32.000Z | 2021-07-11T18:12:31.000Z | 37.059779 | 736 | 0.542106 | true | 12,899 | Qwen/Qwen-72B | 1. YES
2. YES | 0.817574 | 0.835484 | 0.68307 | __label__eng_Latn | 0.833362 | 0.425332 |
```python
# Header starts here.
from sympy.physics.units import *
from sympy import *
# Rounding:
import decimal
from decimal import Decimal as DX
from copy import deepcopy
def iso_round(obj, pv, rounding=decimal.ROUND_HALF_EVEN):
import sympy
"""
Rounding acc. to DIN EN ISO 80000-1:2013-08
place value = Rundestellenwert
"""
assert pv in set([
# place value # round to:
1, # 1
0.1, # 1st digit after decimal
0.01, # 2nd
0.001, # 3rd
0.0001, # 4th
0.00001, # 5th
0.000001, # 6th
0.0000001, # 7th
0.00000001, # 8th
0.000000001, # 9th
0.0000000001, # 10th
])
objc = deepcopy(obj)
try:
tmp = DX(str(float(objc)))
objc = tmp.quantize(DX(str(pv)), rounding=rounding)
except:
for i in range(len(objc)):
tmp = DX(str(float(objc[i])))
objc[i] = tmp.quantize(DX(str(pv)), rounding=rounding)
return objc
# LateX:
kwargs = {}
kwargs["mat_str"] = "bmatrix"
kwargs["mat_delim"] = ""
# kwargs["symbol_names"] = {FB: "F^{\mathsf B}", }
# Units:
(k, M, G ) = ( 10**3, 10**6, 10**9 )
(mm, cm) = ( m/1000, m/100 )
Newton = kg*m/s**2
Pa = Newton/m**2
MPa = M*Pa
GPa = G*Pa
kN = k*Newton
deg = pi/180
half = S(1)/2
# Header ends here.
#
# https://colab.research.google.com/github/kassbohm/wb-snippets/blob/master/ipynb/TEM_10/ESA/a1_cc.ipynb
F,l = var("F,l")
R = 3*F/2
lu = l/sqrt(3)
Ah,Av,Bh,Bv,Ch,Cv = var("Ah,Av,Bh,Bv,Ch,Cv")
e1 = Eq(Ah + Bh + F)
e2 = Eq(Av + Bv - R)
e3 = Eq(Bv*l - Bh*l - F*l/2 - R*7/18*l)
e4 = Eq(Ch - Bh)
e5 = Eq(Cv - F - Bv)
e6 = Eq(F*lu/2 + Bv*lu + Bh*l)
eqs = [e1,e2,e3,e4,e5,e6]
unknowns = [Ah,Av,Bh,Bv,Ch,Cv]
pprint("\nEquations:")
for e in eqs:
pprint(e)
pprint("\n")
# Alternative Solution (also correct):
# Ah,Av,Bh,Bv,Gh,Gv = var("Ah,Av,Bh,Bv,Gh,Gv")
#
# e1 = Eq(Av + Gv - R)
# e2 = Eq(Ah + F - Gh)
# e3 = Eq(F/2 + 7*R/18 - Gv - Gh)
# e4 = Eq(-Gv -F + Bv)
# e5 = Eq(Gh - Bh)
# e6 = Eq(Gh - sqrt(3)*F/6 - Gv/sqrt(3))
#
# eqs = [e1,e2,e3,e4,e5,e6]
# unknowns = [Ah,Av,Bh,Bv,Gh,Gv]
sol = solve(eqs,unknowns)
pprint("\nReactions:")
pprint(sol)
pprint("\nReactions / F (rounded to 0.01):")
for v in sorted(sol,key=default_sort_key):
pprint("\n\n")
s = sol[v]
tmp = (s/F)
tmp = tmp.simplify()
# pprint(tmp)
pprint([v, tmp, iso_round(tmp,0.01)])
# Reactions / F:
#
# ⎡ 43 19⋅√3 ⎤
# ⎢Ah, - ── + ─────, -0.42⎥
# ⎣ 24 24 ⎦
#
#
# ⎡ 3 19⋅√3 ⎤
# ⎢Av, - ─ + ─────, 1.0⎥
# ⎣ 8 24 ⎦
#
#
# ⎡ 19⋅√3 19 ⎤
# ⎢Bh, - ───── + ──, -0.58⎥
# ⎣ 24 24 ⎦
#
#
# ⎡ 19⋅√3 15 ⎤
# ⎢Bv, - ───── + ──, 0.5⎥
# ⎣ 24 8 ⎦
#
#
# ⎡ 19⋅√3 19 ⎤
# ⎢Ch, - ───── + ──, -0.58⎥
# ⎣ 24 24 ⎦
#
#
# ⎡ 19⋅√3 23 ⎤
# ⎢Cv, - ───── + ──, 1.5⎥
# ⎣ 24 8 ⎦
```
| d066ef5f3aede2adddeaeefc4c81496b01931be3 | 6,008 | ipynb | Jupyter Notebook | ipynb/TEM_10/ESA/a1_cc.ipynb | kassbohm/wb-snippets | f1ac5194e9f60a9260d096ba5ed1ce40b844a3fe | [
"MIT"
] | null | null | null | ipynb/TEM_10/ESA/a1_cc.ipynb | kassbohm/wb-snippets | f1ac5194e9f60a9260d096ba5ed1ce40b844a3fe | [
"MIT"
] | null | null | null | ipynb/TEM_10/ESA/a1_cc.ipynb | kassbohm/wb-snippets | f1ac5194e9f60a9260d096ba5ed1ce40b844a3fe | [
"MIT"
] | null | null | null | 34.136364 | 117 | 0.385486 | true | 1,351 | Qwen/Qwen-72B | 1. YES
2. YES | 0.833325 | 0.76908 | 0.640893 | __label__eng_Latn | 0.179473 | 0.327341 |
# Laplace 2D
This example shows how to declare a bilinear form using sympde, then create and evaluate a GLT symbol.
We first start by importing what is needed from sympde and gelato:
```python
# imports from sympde, to write bilinear forms
from sympde.core import Constant
from sympde.calculus import grad, dot
from sympde.topology import ScalarFunctionSpace
from sympde.topology import Domain
from sympde.topology import elements_of
from sympde.expr import BilinearForm
from sympde.expr import integral
```
```python
# imports from gelato
from gelato import gelatize, GltExpr
```
A domain is created as follows
```python
domain = Domain('Omega', dim=2)
```
Then we declare a space of functions over our domain
```python
V = ScalarFunctionSpace('V', domain)
```
and define dummy test functions living in the space $V$
```python
u,v = elements_of(V, names='u,v')
```
Finally, we declare a bilinear form, as a lambda expression
```python
# declaring a constant from sympde
c = Constant('c')
expr = dot(grad(v), grad(u)) + c*v*u
a = BilinearForm((u,v), integral(domain, expr))
```
Now we can create the associated GLT expression to the bilinear form $a$
```python
glt = GltExpr(a)
```
The following instruction inspects what a GltExpr is: it is a lambda expression that takes two tuples as inputs: the Fourier space variables, denoted by $(t_x, t_y)$, and the space variables, which are empty in this case.
```python
print(glt)
```
GltExpr([tx, ty], [], BilinearForm(((u,), (v,)), DomainIntegral(Dot(Grad(u), Grad(v)), Omega) + DomainIntegral(c*u*v, Omega)))
We can partially evaluate the GLT expression by providing the spline degrees
```python
print(glt(degrees=[2,2]))
```
c*(13*cos(tx)/30 + cos(2*tx)/60 + 11/20)*(13*cos(ty)/30 + cos(2*ty)/60 + 11/20)/(nx*ny) + nx*(-2*cos(tx)/3 - cos(2*tx)/3 + 1)*(13*cos(ty)/30 + cos(2*ty)/60 + 11/20)/ny + ny*(13*cos(tx)/30 + cos(2*tx)/60 + 11/20)*(-2*cos(ty)/3 - cos(2*ty)/3 + 1)/nx
Or we can evaluate it numerically, although we are relying on sympy to perform the evaluation, which is not the right way to proceed. One may use the *lambdify* function from sympy, or rely on *PsyDac* to generate the discrete GltExpr automatically when dealing with more complicated expressions (involving a mapping or fields).
```python
print(glt(tx=0.1, ty=0.2, degrees=[2,2], n_elements=[16,16]))
```
0.00385771212059162*c + 0.0493788050561308
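As a minimal sketch of the *lambdify* route mentioned above (assuming that `glt(degrees=[2,2])` returns a plain SymPy expression, as the printed output suggests):
```python
# Sketch only: turn the partially evaluated GLT symbol into a fast numerical function.
from sympy import lambdify

expr = glt(degrees=[2, 2])

# collect the free symbols directly from the expression to avoid guessing their assumptions
syms = sorted(expr.free_symbols, key=lambda s: s.name)
symbol_fn = lambdify(syms, expr, 'numpy')

print(syms)                              # argument order, e.g. [c, nx, ny, tx, ty]
print(symbol_fn(0.0, 16, 16, 0.1, 0.2))  # values in the same order as the printed symbols
```
For more complicated expressions (involving a mapping or fields), the PsyDac route mentioned above is the more robust option.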
```python
# css style
from IPython.core.display import HTML
def css_styling():
styles = open("../styles/custom.css", "r").read()
return HTML(styles)
css_styling()
```
```python
```
| fc5c61855380ada7860f43870434c8a8d3edf9f8 | 9,259 | ipynb | Jupyter Notebook | notebooks/Laplace_2d.ipynb | ratnania/glt | f77d9e2930119f2a95f7a2361c1a4bb2d949d8c4 | [
"MIT"
] | 4 | 2017-08-31T14:11:49.000Z | 2018-04-07T20:44:12.000Z | notebooks/Laplace_2d.ipynb | ratnania/glt | f77d9e2930119f2a95f7a2361c1a4bb2d949d8c4 | [
"MIT"
] | 5 | 2020-09-07T14:38:50.000Z | 2021-11-05T11:16:21.000Z | notebooks/Laplace_2d.ipynb | ratnania/glt | f77d9e2930119f2a95f7a2361c1a4bb2d949d8c4 | [
"MIT"
] | 1 | 2018-09-26T16:20:17.000Z | 2018-09-26T16:20:17.000Z | 27.152493 | 321 | 0.480505 | true | 1,302 | Qwen/Qwen-72B | 1. YES
2. YES | 0.863392 | 0.785309 | 0.678029 | __label__eng_Latn | 0.712187 | 0.413619 |
# pyGemPick Tutorial 2: Dual High-Contrast Filtering with Detection
## How To Effectively Pick A Lot of Inmmunogold Particles, Fast & Accurate!
{ Insert Video Here Once Completed!}
## Step 1: Micrograph Filtering
After the images are compressed, the resulting jpg files can be filtered and then detected simultaneously! __*(Note: I like working with compressed images in a separate folder so they can be reused in subsequent processing steps!)*__
I have successfully derived high-contrast versions of common edge-detecting filters! The kernels below show the modifications of the Laplace edge-detection kernel and the Laplacian of Gaussian kernel, respectively. Firstly, the kernels were negated and the anchoring value at the center of the kernel was removed and replaced with a scaling factor! Each pass of the filter produces a binary image where the gold particles are isolated and can be easily detected with an optimized [OpenCV Simple Blob Detector](https://www.learnopencv.com/blob-detection-using-opencv-python-c/). Note that for the HCLAP kernel, anchor values of 6+ can be chosen, whereas for the HLOG kernel anchor values of 18+ are recommended! These kernels are applied to the image by a matrix convolution operation made possible by [OpenCV's **filter2D( )** function](https://docs.opencv.org/3.0-beta/modules/imgproc/doc/filtering.html)!
### High Contrast Laplace Kernel (HCLAP)
$$\begin{align} \begin{bmatrix}0 &-1 & 0\\-1 & p & -1\\0 & -1 & 0 \end{bmatrix} \end{align}$$
### High Contrast Laplace of Gaussian Kernel (HLOG)
$$\begin{align} \begin{bmatrix}0 & 0 &-1 & 0 & 0\\0 & -1 & -2 & -1 & 0\\-1 & -2 & p & -2 & -1\\0 & -1 & -2 & -1 & 0\\0 & 0 &-1 & 0 & 0 \end{bmatrix} \end{align}$$
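To make the convolution step concrete, here is a small illustrative sketch (not the pyGemPick implementation itself) of building the 3x3 HCLAP kernel by hand and applying it with `cv2.filter2D`; the thresholding that produces the final binary image is omitted here, and the actual pipeline below uses `py.hclap_filt` and `py.hlog_filt` directly.
```python
# Illustrative sketch only: a hand-built HCLAP kernel applied with OpenCV's filter2D.
# The anchor value p plays the role of the scaling factor at the center of the kernel.
import cv2
import numpy as np

def hclap_sketch(p, img):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)       # work on a single channel
    kernel = np.array([[ 0, -1,  0],
                       [-1,  p, -1],
                       [ 0, -1,  0]], dtype=np.float32)
    return cv2.filter2D(gray, -1, kernel)              # one matrix convolution pass
```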
## Step 2: Gold Particle Detection
After successful filtering, the binary images are then passed into the py.pick() function. This function takes the binary images from one of the filters provided plus 5 additional picking values. __*(Note: For best results, it is recommended that images taken at different magnifications are processed separately in different image sets!)*__
1. __minArea:__ allows you to set the minimum number of pixels that will be present in a gold particle in the images you're trying to detect.
2. __minCirc:__ allows you to filter the detected gold particles based on how close to a circle each particle appears. Due to the magnification, excessive counter staining and the properties of the filters themselves, this value will be less than the ideal value, which is one!
$$minCirc = \frac{4*\pi*Area}{(perimeter)^2}$$
3. __minConv__: As the article above suggests, "Convexity is defined as the (Area of the Blob / Area of its convex hull). Convex Hull of a shape is the tightest convex shape that completely encloses the shape."
4. __minIner__: The inertia ratio of an object allows you to filter the gold particles by how elongated that shape looks. minIner=1 defines a complete circle. Ideally, this parameter uses a type of Hessian matrix analysis to gain the required result! Due to the magnification, excessive counter staining and the properties of the filters themselves, this value will be less than the ideal!
5. __minThresh__: applies simple binary thresholding to the image. Usually, when other filtering techniques are used and a binary image has already been obtained, this value is set to zero!
```python
import glob
import cv2
import numpy as np
import pygempick.core as py
import pygempick.modeling as mod
import pygempick.spatialstats as spa
import matplotlib.pyplot as plt
%matplotlib inline
```
```python
#input your folder location with compressed jpg images
images = glob.glob('/home/joseph/Documents/pygempick/samples/compressed/*.jpg')
```
```python
#difine filtering paramaters
pclap = 25 #HCLAP anchor value
plog = 20 #HLOG anchor value
i = 0 #image counter
```
```python
image_number = [] #list for image number counter
detected = [] #list for detected numbr of keypoints
```
```python
for image in images:
orig_img = cv2.imread(image) ##reads specific jpg image from folder with compressed images
output1 = py.hclap_filt(pclap, orig_img, 'no')
output2 = py.hlog_filt(plog, orig_img, 'no')
#output1 = py.dog_filt(p,orig_img)
#output2 = py.bin_filt(p, orig_img)
#write each binary images to the binary folder of your choice!
cv2.imwrite('/home/joseph/Documents/pygempick/samples/binary/{}_hclap_{}.jpg'.format(i, pclap), output1)
cv2.imwrite('/home/joseph/Documents/pygempick/samples/binary/{}_hlog_{}.jpg'.format(i, plog),output2)
#image, minArea, minCirc, minConv, minIner, minThres
keypoints1 = py.pick(output1, 37, .71, .5 , .5, 0)
keypoints2 = py.pick(output2, 37, .71, .5 , .5, 0)
#this function removes duplicated detections
keypoints1, dup1 = py.key_filt(keypoints1, keypoints2)
keypoints = keypoints1 + keypoints2
# Draws detected blobs on image with green circles using opencv's draw keypoints
imd = cv2.drawKeypoints(orig_img, keypoints, np.array([]), (0,255,0),\
cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
#write the image with the picked blobs to the picked folder - change name!
cv2.imwrite('/home/joseph/Documents/pygempick/samples/picked/{}_picked.jpg'.format(i), imd)
image_number.append(i)
i+=1
if len(keypoints) > 0:
detected.append(len(keypoints1))
else:
detected.append(0)
```
```python
print('Total gold particles detected:', sum(detected))
```
Total gold particles detected: 97
```python
plt.figure()
plt.title('Aggregate Count in 6E10-Anti AB images.')
plt.plot(image_number, np.array(detected) , '.b-', label = "Dual High-Contrast Filtering")
#plt.plot(image_number, detected2, '.r-', label = "Filtered w/ DOG")
plt.xlabel("Image Number")
plt.ylabel("Particles Detected")
plt.legend(loc='best')
plt.show()
```
```python
plt.imshow(imd, cmap='gray')
```
```python
plt.imshow(output1, cmap='gray') #binary image of HCLAP filter
```
```python
plt.imshow(output2, cmap='gray') #binary image of HLOG filter
```
```python
```
| 616add7835bf0c328d2675286e6a491e7295d2cc | 326,948 | ipynb | Jupyter Notebook | supdocs/2_pygempick-tutorial-dual-high-contrast-detection.ipynb | jmarsil/pygempick | 9e65b0ec76c81dd0851c80f8fd7b36c75317ff51 | [
"MIT"
] | 1 | 2018-05-10T20:12:21.000Z | 2018-05-10T20:12:21.000Z | supdocs/2_pygempick-tutorial-dual-high-contrast-detection.ipynb | jmarsil/pygempick | 9e65b0ec76c81dd0851c80f8fd7b36c75317ff51 | [
"MIT"
] | null | null | null | supdocs/2_pygempick-tutorial-dual-high-contrast-detection.ipynb | jmarsil/pygempick | 9e65b0ec76c81dd0851c80f8fd7b36c75317ff51 | [
"MIT"
] | null | null | null | 1,058.084142 | 206,792 | 0.953901 | true | 1,617 | Qwen/Qwen-72B | 1. YES
2. YES | 0.853913 | 0.73412 | 0.626874 | __label__eng_Latn | 0.963398 | 0.294769 |
# 15.5. A bit of number theory with SymPy
```
from sympy import *
import sympy.ntheory as nt
init_printing()
```
```
nt.isprime(2017)
```
True
```
nt.nextprime(2017)
```
```
nt.prime(1000)
```
```
nt.primepi(2017)
```
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
x = np.arange(2, 10000)
fig, ax = plt.subplots(1, 1, figsize=(6, 4))
ax.plot(x, list(map(nt.primepi, x)), '-k',
label='$\pi(x)$')
ax.plot(x, x / np.log(x), '--k',
label='$x/\log(x)$')
ax.legend(loc=2)
```
```
nt.factorint(1998)
```
```
2 * 3**3 * 37
```
```
from sympy.ntheory.modular import solve_congruence
solve_congruence((1, 3), (2, 4), (3, 5))
```
| bec5e8a8820cc76f30f9cbdee2743076f2328494 | 62,727 | ipynb | Jupyter Notebook | chapter15_symbolic/05_number_theory.ipynb | guoci/cookbook-2nd-code | 1e6d8b1b66fcffa6362b13893bbd0e43f2829cb6 | [
"MIT"
] | 645 | 2018-02-01T09:16:45.000Z | 2022-03-03T17:47:59.000Z | chapter15_symbolic/05_number_theory.ipynb | dnzengou/cookbook-2nd-code | 85128694cdb9206b4325f95fe0060e01cf72e7f5 | [
"MIT"
] | 3 | 2019-03-11T09:47:21.000Z | 2022-01-11T06:32:00.000Z | chapter15_symbolic/05_number_theory.ipynb | dnzengou/cookbook-2nd-code | 85128694cdb9206b4325f95fe0060e01cf72e7f5 | [
"MIT"
] | 418 | 2018-02-13T03:17:05.000Z | 2022-03-18T21:04:45.000Z | 280.03125 | 45,144 | 0.923382 | true | 251 | Qwen/Qwen-72B | 1. YES
2. YES | 0.964321 | 0.896251 | 0.864274 | __label__eng_Latn | 0.264024 | 0.846333 |
<a href="https://colab.research.google.com/github/FoleyLab/FoleyLab.github.io/blob/master/notebooks/Linear_Variational_Method.ipynb" target="_parent"></a>
# Linear Variational Principle
This notebook applies the Linear Variational Method to the particle in a box of length $L = 10$ atomic units
with a delta function potential centered at $x_0=5$ atomic units. It will attempt to show graphically why the inclusion of excited states of the ordinary particle in a box system can improve the energy of the particle in a box with a delta potential.
# The approach
We will optimize
the trial wavefunction given by
\begin{equation}
\Phi(x) = \sum_{n=1}^N c_n \psi_n(x)
\end{equation}
where the coefficients $c_n$ are real numbers
and $\psi_n(x)$ are the energy eigenfunctions of the particle in a box with no potential:
\begin{equation}
\psi_n(x) = \sqrt{\frac{2}{L} } {\rm sin}\left(\frac{n \pi x}{L} \right).
\end{equation}
We will seek to minimize the energy functional through the expansion coefficients, where the
energy functional can be written as
\begin{equation}
E[\Phi(x)] = \frac{\int_0^{L} \Phi^* (x) \: \hat{H} \: \Phi(x) dx }{\int_0^{L} \Phi^* (x) \: \Phi(x) dx }.
\end{equation}
The Hamiltonian operator in the box is given by
\begin{equation}
\hat{H} = -\frac{\hbar^2}{2M} \frac{d^2}{dx^2} + \delta(x-x_0);
\end{equation}
in natural units, $\hbar$ and the electron mass $M$ are equal to 1.
$E[\Phi(x)]$ can be expanded as
\begin{equation}
E[\Phi(x)] \sum_{n=1}^N \sum_{m=1}^N c_n c_m S_{nm} = \sum_{n=1}^N \sum_{m=1}^N c_n c_m H_{nm}
\end{equation}
where
\begin{equation}
S_{nm} = \int_0^L \psi_n(x) \psi_m(x) dx = \delta_{nm}
\end{equation}
and
\begin{equation}
H_{nm} = \int_0^L \psi_n(x) \hat{H} \psi_m(x) dx.
\end{equation}
Solving this equation can be seen to be identical to diagonalizing the matrix ${\bf H}$, whose elements are $H_{nm}$ to obtain energy eigenvalues $E$ and eigenvectors ${\bf c}$. The lowest eigenvalue corresponds to the variational ground state energy, and the corresponding eigenvector can be
used to expand the variational ground state wavefunction through Equation 1.
# Computing elements of the matrix
We can work out a general expression for the integrals $H_{nm}$:
\begin{equation}
H_{nm} = \frac{\hbar^2 \pi^2 n^2}{2 M L^2} \delta_{nm} + \frac{2}{L} {\rm sin}\left( \frac{n \pi x_0}{L} \right) {\rm sin}\left( \frac{m\pi x_0}{L} \right).
\end{equation}
Import NumPy and PyPlot libraries
```python
import numpy as np
from matplotlib import pyplot as plt
from scipy import signal
```
Write a function that computes the matrix elements $H_{nm}$ given quantum numbers $n$ and $m$, length of the box $L$, and location of the delta function $x_0$. We will assume atomic units where $\hbar = 1$ and $M = 1$.
```python
### Function to return integrals involving Hamiltonian and basis functions
def H_nm(n, m, L, x0):
''' We will use the expression for Hnm along with given values of L and x_0
to compute the elements of the Hamiltonian matrix '''
if n==m:
ham_int = np.pi**2 * m**2/(2 * L**2) + (2/L) * np.sin(n*np.pi*x0/L) * np.sin(m*np.pi*x0/L)
else:
ham_int = (2/L) * np.sin(n*np.pi*x0/L) * np.sin(m*np.pi*x0/L)
return ham_int
def psi_n(n, L, x):
return np.sqrt(2/L) * np.sin(n * np.pi * x/L)
```
Next we will create a numpy array called $H_{mat}$ that can be used to store the Hamiltonian matrix elements. We can start by considering a trial wavefunction that is an expansion of the first 3 PIB energy eigenfunctions, so our Hamiltonian in this case should be a 3x3 numpy array.
```python
dim = 3
H_mat = np.zeros((dim,dim))
```
You can use two nested $for$ loops along with your $H_{ij}$ function to fill out the values of this matrix. Note that the indices for numpy arrays start from zero while the quantum numbers for our system start from 1, so we must offset our quantum numbers by +1 relative to our numpy array indices.
```python
### define L to be 10 and x0 to be 5
L = 10
x0 = 5
### loop over indices of the basis you are expanding in
### and compute and store the corresponding Hamiltonian matrix elements
for n in range(0,dim):
for m in range(0,dim):
H_mat[n,m] = H_nm(n+1, m+1, L, x0)
### Print the resulting Hamiltonian matrix
print(H_mat)
```
[[ 2.49348022e-01 2.44929360e-17 -2.00000000e-01]
[ 2.44929360e-17 1.97392088e-01 -2.44929360e-17]
[-2.00000000e-01 -2.44929360e-17 6.44132198e-01]]
```python
### compute eigenvalues and eigenvectors of H_mat
### store eigenvalues to E_vals and eigenvectors to c
E_vals, c = np.linalg.eig(H_mat)
### The eigenvalues will not necessarily be sorted from lowest-to-highest; this step will sort them!
idx = E_vals.argsort()[::1]
E_vals = E_vals[idx]
c = c[:,idx]
### print lowest eigenvalues corresponding to the
### variational estimate of the ground state energy
print("Ground state energy with potential is approximately ",E_vals[0])
print("Ground state energy of PIB is ",np.pi**2/(200))
```
Ground state energy with potential is approximately 0.16573541893898724
Ground state energy of PIB is 0.04934802200544679
Let's plot the first few eigenstates of the ordinary PIB against the potential:
```python
### array of x-values
x = np.linspace(0,L,100)
### first 3 energy eigenstates of ordinary PIB
psi_1 = psi_n(1, L, x)
psi_2 = psi_n(2, L, x)
psi_3 = psi_n(3, L, x)
Vx = signal.unit_impulse(100,50)
plt.plot(x, psi_1, 'orange', label='$\psi_1$')
plt.plot(x, psi_2, 'green', label='$\psi_2$')
plt.plot(x, psi_3, 'blue', label='$\psi_3$')
plt.plot(x, Vx, 'purple', label='V(x)')
plt.legend()
plt.show()
```
Now let's plot the probability density associated with the first three eigenstates along with the probability density of the variational solution against the potential:
```python
Phi = c[0,0]*psi_1 + c[1,0]*psi_2 + c[2,0]*psi_3
plt.plot(x, psi_1*psi_1, 'orange', label='$|\psi_1|^2$')
plt.plot(x, psi_2*psi_2, 'green', label='$|\psi_2|^2$')
plt.plot(x, psi_3*psi_3, 'blue', label='$|\psi_3|^2$')
plt.plot(x, Phi*Phi, 'red', label='$|\Phi|^2$')
plt.plot(x, Vx, 'purple', label='V(x)')
plt.legend()
plt.show()
```
### Questions To Think About!
1. Is the energy you calculated above higher or lower than the ground state energy of the ordinary particle in a box system (without the delta function potential)?
Answer: The energy calculated to approximate the ground state energy of the PIB + Potential using the linear variational method is higher than the true PIB ground state energy (0.165 atomic units for the PIB + Potential compared to 0.0493 atomic units for the ordinary PIB). The addition of the potential should increase the ground state energy because it is repulsive.
2. Why do you think mixing in functions that correspond to excited states in the ordinary particle in a box system actually helped to improve (i.e. lower) your energy in the system with the delta function potential?
Answer: Certain excited states (all states with even $n$) go to zero at the center of the box, and the repulsive potential is localized to the center of the box. Therefore, all excited states with even $n$ will move electron density away from the repulsive potential, which can potentially lower the energy.
3. Increase the number of basis functions to 6 (so that ${\bf H}$ is a 6x6 matrix and ${\bf c}$ is a vector with 6 entries) and repeat your calculation of the variational estimate of the ground state energy. Does the energy improve (lower) compared to what it was when 3 basis functions were used?
Answer: Yes, the energy improves. With 3 basis functions, the ground state energy is approximated to be 0.165 atomic units and with 6 basis functions, the ground state energy is approximated to be 0.155 atomic units. The added flexibility of these additional basis functions (specifically more basis functions with $n$ even) makes it easier to optimize a wavefunction that describes an electron effectively avoiding the repulsive potential in the center of the box.
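Below is a minimal sketch of that 6-basis-function calculation, reusing the `H_nm` function, `L`, and `x0` defined earlier in this notebook:
```python
# Sketch of the 6-basis-function variational calculation from question 3.
dim6 = 6
H_mat6 = np.zeros((dim6, dim6))
for n in range(dim6):
    for m in range(dim6):
        H_mat6[n, m] = H_nm(n+1, m+1, L, x0)

# diagonalize and sort the eigenvalues from lowest to highest
E_vals6, c6 = np.linalg.eig(H_mat6)
idx6 = E_vals6.argsort()
E_vals6 = E_vals6[idx6]
c6 = c6[:, idx6]
print("Variational ground state energy with 6 basis functions: ", E_vals6[0])
```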
```python
```
| 72eb0a0dbc114a8c2d4ae6f8ac1f8ecc89d88c09 | 63,955 | ipynb | Jupyter Notebook | notebooks/Linear_Variational_Method.ipynb | FoleyLab/FoleyLab.github.io | 1f84e4dc2f87286dbd4e07e483ac1e48943cb493 | [
"CC-BY-3.0"
] | null | null | null | notebooks/Linear_Variational_Method.ipynb | FoleyLab/FoleyLab.github.io | 1f84e4dc2f87286dbd4e07e483ac1e48943cb493 | [
"CC-BY-3.0"
] | 2 | 2020-02-25T08:45:19.000Z | 2021-05-19T04:28:30.000Z | notebooks/Linear_Variational_Method.ipynb | FoleyLab/FoleyLab.github.io | 1f84e4dc2f87286dbd4e07e483ac1e48943cb493 | [
"CC-BY-3.0"
] | null | null | null | 159.488778 | 26,534 | 0.860668 | true | 2,364 | Qwen/Qwen-72B | 1. YES
2. YES | 0.91611 | 0.921922 | 0.844581 | __label__eng_Latn | 0.990542 | 0.800579 |
# Determinant formula from Cavalieri's principle
```python
# setup SymPy
from sympy import *
init_printing()
Vector = Matrix
# setup plotting
%matplotlib inline
import matplotlib.pyplot as mpl
from util.plot_helpers import plot_vec, plot_vecs, plot_line, plot_plane, autoscale_arrows
```
## Two dimentions
```python
a, b, c, d = symbols('a b c d')
# Consider the area of the parallelogram with sides:
u1 = Vector([a,b])
u2 = Vector([c,d])
# We can compute the area of the parallelogram by computing the determinant of
A = Matrix([[a,b],
[c,d]])
A.det()
```
### Cavalieri's principle
Mathematically, we have
$$
D(\vec{u}_1, \ \vec{u}_2)
=
D(\vec{u}_1 - \alpha \vec{u}_2, \ \vec{u}_2).
$$
```python
# choose alpha so A's top right entry will be zero
alpha = symbols('alpha')
alpha = b/d
A[0,:] = A[0,:] - alpha*A[1,:]
A
```
Computing the area is the same as computing the product of the entries on the diagonal:
```python
A[0,0]*A[1,1]
```
```python
simplify( A[0,0]*A[1,1] )
```
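As a quick sanity check (an addition, not in the original notebook), we can confirm symbolically that this diagonal product matches the determinant of the original matrix:
```python
# the row operation preserves the area, so the diagonal product should equal the 2x2 determinant
simplify( A[0,0]*A[1,1] - Matrix([[a, b], [c, d]]).det() )
```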
**Intuition:** the coefficient $\alpha$ encodes something very important about the "alternating multi-linear" structure that determinants and cross products embody. In words, the choice of $\alpha$ and the resulting expression $a-\frac{bc}{d}$ corresponds to what happens to the first component of $\vec{u}_1$ when we make its second component zero.
(Yeah, I know, handwavy like crazy, but better than nothing.)
## Three dimentions
```python
a,b,c, d,e,f, g,h,i = symbols('a b c d e f g h i')
# Consider the volume of the parallelepiped with sides:
u1 = Vector([a,b,c])
u2 = Vector([d,e,f])
u3 = Vector([g,h,i])
# We can compute the volume of the parallelepiped by computing the determinant of
A = Matrix([[a,b,c],
[d,e,f],
[g,h,i]])
A.det()
```
### Cavalieri's principle
This principle leads to the following property of the determinant
$$
D(\vec{u}_1, \ \vec{u}_2, \ \vec{u}_3)
=
D(\vec{u}_1, \ \vec{u}_2- k \vec{u}_3, \ \vec{u}_3)
=
D(\vec{u}_1 -\ell\vec{u}_2 - m\vec{u}_3 , \ \vec{u}_2-k \vec{u}_3, \ \vec{u}_3).
$$
In particular we make the following choice for the coefficients
$$
D(\vec{u}_1, \vec{u}_2, \vec{u}_3)
=
D(\vec{u}_1 - \beta \vec{u}_3 - \gamma(\vec{u}_2 - \alpha\vec{u}_3), \ \vec{u}_2 - \alpha\vec{u}_3, \ \vec{u}_3).
$$
Choosing the right coefficients $\alpha$, $\beta$, and $\gamma$ can transform the matrix $A$ into a lower triangular form, which will make computing the determinant easier.
```python
alpha, beta, gamma = symbols('alpha beta gamma')
A
```
```python
# first get rid of f by subtracting third row from second row
alpha = f/i
A[1,:]= A[1,:] - alpha*A[2,:]
A
```
```python
# second get rid of c by subtracting third row from first row
beta = c/i
A[0,:]= A[0,:] - beta*A[2,:]
A
```
```python
# third get rid of b-ch/i by subtracting second row from first row
gamma = A[0,1]/A[1,1]
A[0,:]= A[0,:] - gamma*A[1,:]
A
```
Starting from the first row, the volume of the parallelepiped is proportional to the coefficient `A[0,0]`. The area of the parallelogram formed by the first two rows is `A[0,0]*A[1,1]` (we can ignore `A[1,0]`), and the overall volume is
```python
A[0,0]*A[1,1]*A[2,2]
```
```python
simplify( A[0,0]*A[1,1]*A[2,2] )
```
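Again as a sanity check (an addition, not part of the original derivation), the diagonal product of the triangularized matrix can be compared against the determinant of the original matrix:
```python
# row operations of the form row_i -> row_i - coeff*row_j preserve the determinant,
# so the diagonal product should equal the determinant of the original matrix
simplify( A[0,0]*A[1,1]*A[2,2] - Matrix([[a,b,c],[d,e,f],[g,h,i]]).det() )
```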
```python
# I still don't know how to motivate the recursive formula except to say it turns out that way...
# I tried to think about decomposing the problem into subparts,
# but I cannot motivate det(Ai + Aj + Ak) != det(Ai) + det(Aj) + det(Ak)
# where Ai is the same as A but with A[0,1] and A[0,2] set to zero.
```
```python
u1 = Vector([3,0,0])
u2 = Vector([2,2,0])
u3 = Vector([3,3,3])
plot_vecs(u1,u2,u3)
plot_vec(u1, at=u2, color='k')
plot_vec(u2, at=u1, color='b')
plot_vec(u1, at=u2+u3, color='k')
plot_vec(u2, at=u1+u3, color='b')
plot_vec(u1, at=u3, color='k')
plot_vec(u2, at=u3, color='b')
plot_vec(u3, at=u1, color='g')
plot_vec(u3, at=u2, color='g')
plot_vec(u3, at=u1+u2, color='g')
autoscale_arrows()
```
```python
```
| 804fc54da739d651cc4d5ea5add5f1a92a075778 | 87,907 | ipynb | Jupyter Notebook | extra/Determinants.ipynb | minireference/noBSLAnotebooks | 3d6acb134266a5e304cb2d51c5ac4dc3eb3949b4 | [
"MIT"
] | 116 | 2016-04-20T13:56:02.000Z | 2022-03-30T08:55:08.000Z | extra/Determinants.ipynb | minireference/noBSLAnotebooks | 3d6acb134266a5e304cb2d51c5ac4dc3eb3949b4 | [
"MIT"
] | 2 | 2021-07-01T17:00:38.000Z | 2021-07-01T19:34:09.000Z | extra/Determinants.ipynb | minireference/noBSLAnotebooks | 3d6acb134266a5e304cb2d51c5ac4dc3eb3949b4 | [
"MIT"
] | 29 | 2017-02-04T05:22:23.000Z | 2021-12-28T00:06:50.000Z | 161.891344 | 34,540 | 0.882353 | true | 1,393 | Qwen/Qwen-72B | 1. YES
2. YES | 0.909907 | 0.859664 | 0.782214 | __label__eng_Latn | 0.954286 | 0.655678 |
# Stochastic calculus
> Itô and Stratonovich_interpretations
- toc: true
- badges: true
- comments: true
- categories: [jupyter]
- image: images/chart-preview.png
```javascript
%%javascript
MathJax.Hub.Config({
TeX: { equationNumbers: { autoNumber: "AMS" } }
});
MathJax.Hub.Queue(
["resetEquationNumbers", MathJax.InputJax.TeX],
["PreProcess", MathJax.Hub],
["Reprocess", MathJax.Hub]
);
```
Consider a stochastic differential equation of the form
\begin{equation}
\dot{x} = A(x) + B(x) \,\Lambda \label{langevin}
\end{equation}
$\Lambda$ is a zero-mean, unit-variance Gaussian white noise, such that the whole drift of the system is contained in $A(x)$.
A naive integration of the equation gives
\begin{equation}
{x}(t) = x(0) + \int_{0}^t A(x)\, dt + \int_{0}^t B(x) \,\Lambda\, dt
\end{equation}
The last term is problematic: a continuous-time white noise has infinite power, since its correlation function in real space is a delta function (its spectrum is flat in Fourier space). Thus, the point at which the integrand is evaluated makes a difference to the integral in the Riemann sense.
Thus, we write, for a finite $dt$,
\begin{equation}
d{x} = A(x) dt + B(x) dW \label{langevinw}
\end{equation}
Here $dW$ is an increment of a Wiener process $W$ such that
\begin{equation}
{P}(W(t)) = \frac{1}{\sqrt{2\pi t}}e^{-\frac{W(t)^2}{2t}}
\end{equation}
The Wiener process is the integral of the white noise, and thus it is not differentiable. Doob's theorem implies that Brownian motion $B$ is the only such Gaussian process. Below we plot a few random-walk approximations of it:
```python
import numpy as np
import matplotlib.pyplot as plt
# cumulative sums of uniform(-1, 1) increments: crude random-walk approximations of Brownian motion
plt.plot(np.cumsum(1-2*np.random.random(4096)))
plt.plot(np.cumsum(1-2*np.random.random(4096)))
plt.plot(np.cumsum(1-2*np.random.random(4096)))
```
### Stochastic calculus
To emphasize the subtleties of the integral, we now consider
\begin{equation}
\int_0^t W\, dW = \lim_{n\rightarrow \infty}\sum_{i=1}^n W(t_i^*) \, [W(t_i)-W(t_{i-1})]
\end{equation}
We can choose $t_i^*=t_{i-1}+\alpha\,(t_i-t_{i-1})$ with $\alpha \in [0,1]$. Since $\langle W(t)W(s)\rangle=\min(t,s)$, the expected value of the discretized integral is
$$
\Big\langle \lim_{n\rightarrow \infty}\sum_{i=1}^n W(t_i^*) \, [W(t_i)-W(t_{i-1})] \Big\rangle = \alpha t
$$
so the result of the integration depends on the choice of $\alpha$. We will now discuss two common choices.
#### Itô formulation
In this formulation, we set $\alpha=0$. Thus, we have
\begin{equation}
\int_0^t B\, dB = \lim_{n\rightarrow \infty}\sum_{i=1}^n B(t_{i-1}) \, [B(t_i)-B(t_{i-1})]
\end{equation}
By definition, the Brownian increment $B(t_i)-B(t_{i-1})$ is independent of $B(t_{i-1})$, and increments over disjoint intervals are uncorrelated.
Thus, we can rewrite it as
\begin{equation}
\int_0^t B\, dB = \lim_{n\rightarrow \infty}\sum_{i=1}^n \Bigg( -\frac12 [B(t_i)-B(t_{i-1})]^2 + \frac12 [B(t_i)^2-B(t_{i-1})^2] \Bigg)
\end{equation}
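A small numerical illustration of this $\alpha$-dependence (a sketch added here, not part of the original derivation): approximate the discretized integral on $[0,1]$ with the left-point (Itô, $\alpha=0$) and right-point ($\alpha=1$) evaluations and average over many sampled paths; the two averages should come out near $0$ and near $t=1$, respectively.
```python
# Numerical sketch: the discretized stochastic integral depends on where W is evaluated.
import numpy as np

rng = np.random.default_rng(0)
T, N, paths = 1.0, 1000, 5000
dt = T / N

left_sum = np.zeros(paths)    # alpha = 0 (Ito choice)
right_sum = np.zeros(paths)   # alpha = 1
for p in range(paths):
    dW = np.sqrt(dt) * rng.standard_normal(N)    # Brownian increments
    W = np.concatenate(([0.0], np.cumsum(dW)))   # Brownian path on the grid
    left_sum[p] = np.sum(W[:-1] * dW)
    right_sum[p] = np.sum(W[1:] * dW)

# expected values are alpha*T: about 0.0 and about 1.0 here
print(left_sum.mean(), right_sum.mean())
```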
```python
```
```python
```
| 96b1f4105f4b7a5f2e1a50d63dc6f83e52bf0d74 | 40,609 | ipynb | Jupyter Notebook | _notebooks/2020-08-05_Ito_Strato.ipynb | tubho/blog | 15aa6122ea06e0f999f8716f6ed9b8fee0975c19 | [
"Apache-2.0"
] | null | null | null | _notebooks/2020-08-05_Ito_Strato.ipynb | tubho/blog | 15aa6122ea06e0f999f8716f6ed9b8fee0975c19 | [
"Apache-2.0"
] | 3 | 2021-03-30T12:32:00.000Z | 2022-02-26T09:48:21.000Z | _notebooks/2020-08-05_Ito_Strato.ipynb | tubho/blog | 15aa6122ea06e0f999f8716f6ed9b8fee0975c19 | [
"Apache-2.0"
] | null | null | null | 224.359116 | 35,572 | 0.909552 | true | 924 | Qwen/Qwen-72B | 1. YES
2. YES | 0.894789 | 0.812867 | 0.727345 | __label__eng_Latn | 0.876369 | 0.528198 |
# Worksheet 4
```
%matplotlib inline
```
## Question 1
Convert the ODE
\begin{equation}
y''' + x y'' + 3 y' + y = e^{−x}
\end{equation}
into a first order system of ODEs.
### Answer Question 1
Step by step we introduce
\begin{align}
u &= y' \\
v &= u' \\
&= y''.
\end{align}
We can therefore rewrite the ODE as a first order system of ODEs. The first order ODEs for $y$ and $u$ are given by the definitions above. The ODE for $v$ follows from the original equation, substituting in the definitions of $u$ and $v$ where appropriate, to get
\begin{align}
\begin{pmatrix} y \\ u \\ v \end{pmatrix}' & = \begin{pmatrix} u \\ v \\ e^{-x} - x y'' - 3 y' - y \end{pmatrix} \\
& = \begin{pmatrix} u \\ v \\ e^{-x} - x v - 3 u - y \end{pmatrix}.
\end{align}
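As a small illustration (not required for the question), this system can be written directly as a right-hand-side function in the same form used by the solvers later on this sheet; the name `fn_q1_system` is just an illustrative choice.
```
def fn_q1_system(x, state):
    """Right hand side of the first order system equivalent to y''' + x y'' + 3 y' + y = exp(-x).
    Here state = (y, u, v) with u = y' and v = y''."""
    import numpy as np
    y, u, v = state
    return np.array([u, v, np.exp(-x) - x * v - 3.0 * u - y])
```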
## Question 2
Show by Taylor expansion that the backwards differencing estimate of $f(x)$,
\begin{equation}
f(x) = \frac{f(x) − f(x − h)}{h}
\end{equation}
is first order accurate.
### Answer Question 2
We have the Taylor series expansion of $f(x − h)$ about $x$ is
\begin{equation}
f(x − h) = f(x) − h f'(x) + \frac{h^2}{2!} f''(x) + {\mathcal O} (h^3).
\end{equation}
Substituting this in to the backwards difference formula we find
\begin{align}
\frac{f(x) - f(x - h)}{h} & = \frac{f(x) - f(x) + h f'(x) - \frac{h^2}{2!} f''(x) + {\mathcal O} (h^3)}{h} \\
& = f'(x) + {\mathcal O} (h)
\end{align}
Therefore the difference between the exact derivative $f'$ and the backwards difference estimate is $\propto h$ and hence the finite difference estimate is first order accurate.
## Question 3
Use Taylor expansion to derive a symmetric or central difference estimate of $f^{(4)}(x)$ on a grid with spacing $h$.
### Answer Question 3
For this we need the Taylor expansions
\begin{align*}
f(x + h) & = f(x) + h f^{(1)}(x) + \frac{h^2}{2!} f^{(2)}(x) +
\frac{h^3}{3!} f^{(3)}(x) + \frac{h^4}{4!} f^{(4)}(x) +
\frac{h^5}{5!} f^{(5)}(x) + \dots \\
f(x - h) & = f(x) - h f^{(1)}(x) + \frac{h^2}{2!} f^{(2)}(x) -
\frac{h^3}{3!} f^{(3)}(x) + \frac{h^4}{4!} f^{(4)}(x) -
\frac{h^5}{5!} f^{(5)}(x) + \dots \\
f(x + 2 h) & = f(x) + 2 h f^{(1)}(x) + \frac{4 h^2}{2!} f^{(2)}(x) +
\frac{8 h^3}{3!} f^{(3)}(x) + \frac{16 h^4}{4!} f^{(4)}(x) +
\frac{32 h^5}{5!} f^{(5)}(x) + \dots \\
f(x - 2 h) & = f(x) - 2 h f^{(1)}(x) + \frac{4 h^2}{2!} f^{(2)}(x) -
\frac{8 h^3}{3!} f^{(3)}(x) + \frac{16 h^4}{4!} f^{(4)}(x) -
\frac{32 h^5}{5!} f^{(5)}(x) + \dots
\end{align*}
By a central or symmetric difference estimate we mean that the coefficient of $f(x \pm n h)$ should have the same magnitude. By comparison with central difference estimates for first and second derivatives we see that for odd order derivatives the coefficients should have opposite signs and for even order the same sign.
So we write our estimate as
\begin{equation*}
f^{(4)}(x) \simeq A f(x) + B \left( f(x + h) + f(x - h) \right)
+ C \left( f(x + 2 h) + f(x - 2 h) \right)
\end{equation*}
and we then need to constrain the coefficients $A, B, C$. By looking at terms proportional to $h^s$ we see
\begin{align*}
h^0: && 0 & = A + 2 B + 2 C \\
h^1: && 0 & = 0 \\
h^2: && 0 & = B + 4 C \\
h^3: && 0 & = 0 \\
h^4: && \frac{1}{h^4} & = \frac{B}{12} + \frac{16 C}{12}.
\end{align*}
This gives three constraints on our three unknowns so we cannot go to higher order. Solving the equations gives
\begin{equation*}
A = \frac{6}{h^4}, \qquad B = -\frac{4}{h^4}, \qquad C =
\frac{1}{h^4}.
\end{equation*}
Writing it out in obvious notation we have
\begin{equation*}
f_i^{(4)} = \frac{1}{h^4} \left( 6 f_i - 4 (f_{i+1} + f_{i-1}) +
(f_{i+2} + f_{i-2}) \right).
\end{equation*}
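As a short optional check of this stencil (an addition, using sympy rather than redoing the algebra by hand):
```
# Symbolic check of the fourth derivative stencil using sympy
import sympy as sp

x, h = sp.symbols('x h')
f = sp.Function('f')
combo = 6*f(x) - 4*(f(x + h) + f(x - h)) + (f(x + 2*h) + f(x - 2*h))

# Taylor expand the combination in h: all terms below h**4 cancel and the h**4
# coefficient is exactly the fourth derivative, confirming the formula above.
expansion = sp.series(combo, h, 0, 6).removeO()
print(sp.simplify(expansion / h**4))
```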
## Question 4
State the convergence rate of Euler's method and the Euler predictor-corrector method.
### Answer Question 4
Euler's method converges as $h$ and the predictor-corrector method as $h^2$.
## Question 5
Explain when multistage methods such as Runge-Kutta methods are useful.
### Answer Question 5
Multistage methods require only one vector of initial data, which must be provided to completely specify the IVP; that is, the method is self-starting. It is also easy to adapt a multistage method to use variable step sizes; that is, to make the algorithm adaptive depending on local error estimates in order to keep the global error within some tolerance. Finally, it is relatively easy to theoretically show convergence. Combining this we see that multistage methods are useful as generic workhorse algorithms and in cases where the function defining the IVP may vary widely in behaviour, so that adaptive algorithms are required.
## Question 6
Explain the power method for finding the largest eigenvalue of a matrix. In particular, explain why it is simpler to find the absolute value, and how to find the phase information.
### Answer Question 6
The idea behind the power method is that most easily seen by writing out a generic vector ${\bf x}$ in terms of the eigenvectors of the matrix $A$ whose eigenvalues we wish to find,
\begin{equation}
{\bf x} = \sum_{i=1}^N a_i {\bf e}_i,
\end{equation}
where we assume that the eigenvectors are ordered such that the associated eigenvalues have the order $|\lambda_1 | > |\lambda_2 | \ge |\lambda_3 | \ge \dots \ge |\lambda_N |$. Note that we always assume that there is a unique eigenvalue $\lambda_1$ with largest magnitude.
We then note that multiplying this generic vector by the matrix $A$ a number of times gives
\begin{equation}
A^k {\bf x} = \lambda_1^k \sum_{i=1}^N a_i \left( \frac{\lambda_i}{\lambda_1} \right)^k {\bf e}_i.
\end{equation}
We then note that, for $i \neq 1$, the ratio of the eigenvalues $(\lambda_i / \lambda_1)^k$ must tend to zero as $k \to \infty$. Therefore in the limit we will "pick out" $\lambda_1$.
Of course, to actually get the eigenvalue itself we have to essentially divide two vectors. That is, we define a sequence $x^{(k)}$ where the initial value $x^{(0)}$ is arbitrary and at each step we multiply by $A$, so that
\begin{equation}
x^{(k)} = A^k x^{(0)}.
\end{equation}
It follows that we can straightforwardly get $\lambda_1$ by looking at "the ratio of successive iterations". E.g.,
\begin{equation}
\lim_{k \to \infty} \frac{ \| {\bf x}_{k+1} \| }{ \| {\bf x}_k \| } = | \lambda_1 |.
\end{equation}
This only gives information about the magnitude as we have used the simplest way of getting from a vector to a real number, the absolute value. To retain information about the phase we need to replace the absolute value of the vectors with some linear functional such as the sum of the coefficients.
## Coding Question 1
Apply Euler's method to the ODE
\begin{equation}
y' + 2y = 2 − e^{−4 x}, \qquad y(0) = 1.
\end{equation}
Find the value of $y(1)$ (analytic answer is $1 − (e^{−2} − e^{−4})/2$) and see how your method converges with resolution.
### Answer Coding Question 1
```
def Euler(f, y0, interval, N = 100):
"""Solve the IVP y' = f(x, y) on the given interval using N+1 points (counting the initial point) with initial data y0."""
import numpy as np
h = (interval[1] - interval[0]) / N
x = np.linspace(interval[0], interval[1], N+1)
y = np.zeros((len(y0), N+1))
y[:, 0] = y0
for i in range(N):
y[:, i+1] = y[:, i] + h * f(x[i], y[:, i])
return x, y
def fn_q1(x, y):
"""Function defining the IVP in question 1."""
import numpy as np
return 2.0 - np.exp(-4.0*x) - 2.0*y
# Now do the test
import numpy as np
exact_y_end = 1.0 - (np.exp(-2.0) - np.exp(-4.0)) / 2.0
# Test at default resolution
x, y = Euler(fn_q1, np.array([1.0]), [0.0, 1.0])
print "Error at the end point is ", y[:, -1] - exact_y_end
import matplotlib.pyplot as plt
fig = plt.figure(figsize = (12, 8), dpi = 50)
plt.plot(x, y[0, :], 'b-+')
plt.xlabel('$x$', size = 16)
plt.ylabel('$y$', size = 16)
# Now do the convergence test
levels = np.array(range(4, 10))
Npoints = 2**levels
abs_err = np.zeros(len(Npoints))
for i in range(len(Npoints)):
x, y = Euler(fn_q1, np.array([1.0]), [0.0, 1.0], Npoints[i])
abs_err[i] = abs(y[0, -1] - exact_y_end)
# Best fit to the errors
h = 1.0 / Npoints
p = np.polyfit(np.log(h), np.log(abs_err), 1)
fig = plt.figure(figsize = (12, 8), dpi = 50)
plt.loglog(h, abs_err, 'kx')
plt.loglog(h, np.exp(p[1]) * h**(p[0]), 'b-')
plt.xlabel('$h$', size = 16)
plt.ylabel('$|$Error$|$', size = 16)
plt.legend(('Euler Errors', "Best fit line slope {:.3}".format(p[0])), loc = "upper left")
plt.show()
```
## Coding Question 2
Apply the standard RK4 method to the above system, again checking that it converges with resolution.
### Answer Coding Question 2
```
def RK4(f, y0, interval, N = 100):
"""Solve the IVP y' = f(x, y) on the given interval using N+1 points (counting the initial point) with initial data y0."""
import numpy as np
h = (interval[1] - interval[0]) / N
x = np.linspace(interval[0], interval[1], N+1)
y = np.zeros((len(y0), N+1))
y[:, 0] = y0
for i in range(N):
k1 = h * f(x[i] , y[:, i])
k2 = h * f(x[i] + h / 2.0, y[:, i] + k1 / 2.0)
k3 = h * f(x[i] + h / 2.0, y[:, i] + k2 / 2.0)
k4 = h * f(x[i] + h , y[:, i] + k3)
y[:, i+1] = y[:, i] + (k1 + k4 + 2.0 * (k2 + k3)) / 6.0
return x, y
def fn_q2(x, y):
"""Function defining the IVP in question 2."""
import numpy as np
return 2.0 - np.exp(-4.0*x) - 2.0*y
# Now do the test
import numpy as np
exact_y_end = 1.0 - (np.exp(-2.0) - np.exp(-4.0)) / 2.0
# Test at default resolution
x, y = RK4(fn_q1, np.array([1.0]), [0.0, 1.0])
print "Error at the end point is ", y[:, -1] - exact_y_end
import matplotlib.pyplot as plt
fig = plt.figure(figsize = (12, 8), dpi = 50)
plt.plot(x, y[0, :], 'b-+')
plt.xlabel('$x$', size = 16)
plt.ylabel('$y$', size = 16)
# Now do the convergence test
levels = np.array(range(4, 10))
Npoints = 2**levels
abs_err = np.zeros(len(Npoints))
for i in range(len(Npoints)):
x, y = RK4(fn_q1, np.array([1.0]), [0.0, 1.0], Npoints[i])
abs_err[i] = abs(y[0, -1] - exact_y_end)
# Best fit to the errors
h = 1.0 / Npoints
p = np.polyfit(np.log(h), np.log(abs_err), 1)
fig = plt.figure(figsize = (12, 8), dpi = 50)
plt.loglog(h, abs_err, 'kx')
plt.loglog(h, np.exp(p[1]) * h**(p[0]), 'b-')
plt.xlabel('$h$', size = 16)
plt.ylabel('$|$Error$|$', size = 16)
plt.legend(('RK4 Errors', "Best fit line slope {0:.3}".format(p[0])), loc = "upper left")
plt.show()
```
## Coding Question 3
Write a code using the power method and inverse power method to compute the largest and smallest eigenvalues of an arbitrary matrix. Apply it to a random $n = 3$ matrix, checking that the correct answer is found. How does the number of iterations required for convergence to a given level vary with the size of the matrix?
### Answer Coding Question 3
```
def PowerMethod(A, tolerance = 1e-10, MaxSteps = 100):
"""Apply the power method to the matrix A to find the largest eigenvalue in magnitude."""
import numpy as np
import numpy.linalg as la
n = np.size(A, 0)
# Simple initial value
x = np.ones(n)
x /= la.norm(x)
ratio = 1.0
for k in range(MaxSteps):
ratio_old = ratio
x_old = x.copy()
x = np.dot(A, x)
ratio = np.sum(x) / np.sum(x_old)
x /= la.norm(x)
if (abs(ratio - ratio_old) < tolerance):
break
return [ratio, k]
def InversePowerMethod(A, tolerance = 1e-10, MaxSteps = 100):
"""Apply the inverse power method to the matrix A to find the smallest eigenvalue in magnitude."""
import numpy as np
import scipy.linalg as la
n = np.size(A, 0)
# Simple initial value
x = np.ones(n)
x /= la.norm(x)
ratio = 1.0
for k in range(MaxSteps):
ratio_old = ratio
x_old = x.copy()
x = la.solve(A, x)
ratio = np.sum(x) / np.sum(x_old)
x /= la.norm(x)
if (abs(ratio - ratio_old) < tolerance):
break
return [1.0/ratio, k]
# Test on a random 3x3 matrix
import numpy as np
import scipy.linalg as la
A = np.random.rand(3,3)
max_lambda, iterations_max = PowerMethod(A)
min_lambda, iterations_min = InversePowerMethod(A)
eigenvalues, eigenvectors = la.eig(A)
print "Computed maximum and minimum eigenvalues are", max_lambda, min_lambda
print "True eigenvalues are", eigenvalues
# Now check how the number of iterations depends on the matrix size.
# As we are computing random matrices, do average of 10 attempts
MinMatrixSize = 3
MaxMatrixSize = 50
Attempts = 10
iterations = np.zeros((MaxMatrixSize - MinMatrixSize + 1, Attempts))
for n in range(MinMatrixSize, MaxMatrixSize+1):
for a in range(Attempts):
A = np.random.rand(n, n)
ratio, iterations[n - MinMatrixSize, a] = PowerMethod(A)
import matplotlib.pyplot as plt
ii = np.mean(iterations, 1)
nn = np.array(range(MinMatrixSize, MaxMatrixSize))
fig = plt.figure(figsize = (12, 8), dpi = 50)
plt.plot(range(MinMatrixSize, MaxMatrixSize+1), np.mean(iterations, 1), 'kx')
plt.xlabel('Matrix Size')
plt.ylabel('Mean number of iterations')
plt.show()
```
We see that the number of iterations is practically unchanged with the size of the matrix.
```
from IPython.core.display import HTML
def css_styling():
styles = open("../../IPythonNotebookStyles/custom.css", "r").read()
return HTML(styles)
css_styling()
```
> (The cell above executes the style for this notebook. It closely follows the style used in the [12 Steps to Navier Stokes](http://lorenabarba.com/blog/cfd-python-12-steps-to-navier-stokes/) course.)
| 0497f055451dfd2ed8228bda697a6b9a00dac896 | 146,927 | ipynb | Jupyter Notebook | Worksheets/Worksheet4_Notebook.ipynb | srinivasvl81/NumericalMethods | 7bc25d882f287a21e9646c676c95672414c980ff | [
"CC-BY-3.0"
] | 1 | 2021-12-01T09:15:04.000Z | 2021-12-01T09:15:04.000Z | Worksheets/Worksheet4_Notebook.ipynb | indranilsinharoy/NumericalMethods | 989e0205565131057c9807ed9d55b6c1a5a38d42 | [
"MIT"
] | null | null | null | Worksheets/Worksheet4_Notebook.ipynb | indranilsinharoy/NumericalMethods | 989e0205565131057c9807ed9d55b6c1a5a38d42 | [
"MIT"
] | 1 | 2019-11-08T06:52:24.000Z | 2019-11-08T06:52:24.000Z | 186.455584 | 27,511 | 0.866634 | true | 4,787 | Qwen/Qwen-72B | 1. YES
2. YES | 0.787931 | 0.921922 | 0.726411 | __label__eng_Latn | 0.937654 | 0.526028 |
# NUME
Instructor: Keny **ORDAZ**
# Final Exam
$\newcommand{\vb}[1]{\boldsymbol{#1}}\newcommand{\RR}[1]{\mathbb{R}^{#1}}
\newcommand{\vt}[1]{\vb{#1}^\top}$
$\newcommand{\x}{\vb{x}}
\newcommand{\xT}{\vt{x}}
\newcommand{\b}{\vb{b}}
\newcommand{\A}{\vb{A}}
\newcommand{\bT}{\vt{b}}
\newcommand{\AT}{\vt{A}}$
## Instructions
- You have exactly 24 hours to complete and submit your exam
- The results and the solving process are to be evaluated
- Clarity of code and good programming practices must be observed
- It must be presented as a **self-contained Jupyter Python Notebook** by e-mail.
- Take all the appropriate precautions to deliver the exam on time.
- Out-of-time deliveries will not be graded.
- Python libraries accepted: `numpy`, `scipy`, `sympy`, `matplotlib`.
- You MUST prove your conclusions or show a counter-example for all problems unless otherwise noted.
- Submit solutions to four (and no more) of the following six problems.
```python
import numpy as np
import scipy
import sympy as sp
import matplotlib.pyplot as plt
from IPython.display import SVG
```
# Problem 1
Let $\A \in \RR{n \times n}$ and $\x \in \RR{k}$.
1) Find the first column of $\vb{M} = (\A - x_1 \vb{I})(\A - x_2 \vb{I}) \cdots (\A - x_k \vb{I})$ using a sequence of `GAXPY` operations ($\vb{y} = \A\x + \vb{y}$).
2) Test for $n = 6$ and $k = 5$.
3) Does it simplify if: (a) $k < n$, (b) $k = n$, (c) $k > n$, (d) or under some other condition?
4) Provide 6 numerical examples and their verification.
5) Report observations on numerical errors in the examples from the previous item.
# Problem 2
Let $\vb{p}, \vb{q} \in \RR{n}, n \in \{ 2, 3\}$, be vertices of a triangle in a Euclidean space, and let $\vec{\vb{r}}, \vec{\vb{s}} \in \mathcal{V}$ be vectors in the vector space associated with the Euclidean space, isomorphic to $\RR{n}$.
1) Find 3 ways to obtain the third vertex $\vb{o}$ given the following preconditions:
- The two vectors are not parallel and their dot product is $< 0$.
- The vector $\vec{\vb{t}} = \vb{q} - \vb{p}$ and the vectors $\vec{\vb{r}}, \vec{\vb{s}}$ are coplanar.
2) Provide functions that determine $\vb{o}$ and another function that verifies the preconditions.
3) Review:
- Law of sines, law of cosines, dot product, cross product, systems of linear equations, scalar triple product, determinant of $\vb{A} \in \RR{2 \times 2}$
4) Find the most efficient method / algorithm (fewest operations) that also produces the smallest numerical error.
5) Provide test cases and random cases, together with their corresponding verifications.
6) All items for $n=2$ and for $n=3$. Generalize to arbitrary $n$ if possible.
```python
display(SVG('p2.svg'))
```
# Problem 3
Consider the least-squares problem, with 16-bit precision (half precision: `numpy.half`):
$$\min_{\x \in \RR{n}} \|\A\x - \b \|\quad \text{where } \A = \begin{pmatrix}%
1 & -1 \\
0 & 10^{-5}\\
0 & 0
\end{pmatrix} \text{ and } \b = \begin{pmatrix}%
0 \\
10^{-5}\\
1
\end{pmatrix}
$$
1) Provide the approximation to the solution in the least-squares sense, using:
- Cholesky
- QR factorization
2) For this case, discuss the differences between Cholesky and QR in terms of numerical stability.
# Problem 4
Let $\A \in \RR{n \times n}$ and $\b, \x \in \RR{n}$, with $\A(t), t \in \RR{}$ and $a_{0, n-1}(t) = a_{n-1, 0}(t) = f(t)$.
$n = 20, \; t \in [0, 10], \epsilon = 2^{-52}, \epsilon_\text{neg} = 2^{-53}$
1) Provide a system with $f(t) = -1^{\lceil t \rceil}$, and find the most time-efficient method to solve the system at all of the indicated times $t$, with $\Delta t = 0.125$
2) Provide a system with $f(t) = 1 - \frac{\epsilon_\text{neg}}{2}$, and find the most time-efficient method to solve the system at all of the indicated times $t$, with $\Delta t = 0.125$
3) Provide a system with $f(t) = 1 + \frac{\epsilon}{2}$, and find the most time-efficient method to solve the system at all of the indicated times $t$, with $\Delta t = 0.125$
4) Provide a system with $f(t) = \sin^2(t) + 5t$, and find the most time-efficient method to solve the system at all of the indicated times $t$, with $\Delta t = 0.125$
```python
```
| acc81374723a6bfe9ffb710de35a5b0704176872 | 17,598 | ipynb | Jupyter Notebook | src/nume-2021-24h-exam.ipynb | krontzo/nume.py | 9d1e576fb3474333a8e2cf4f26f4236ee4f9deea | [
"MIT"
] | null | null | null | src/nume-2021-24h-exam.ipynb | krontzo/nume.py | 9d1e576fb3474333a8e2cf4f26f4236ee4f9deea | [
"MIT"
] | null | null | null | src/nume-2021-24h-exam.ipynb | krontzo/nume.py | 9d1e576fb3474333a8e2cf4f26f4236ee4f9deea | [
"MIT"
] | null | null | null | 63.530686 | 607 | 0.602568 | true | 1,458 | Qwen/Qwen-72B | 1. YES
2. YES | 0.73412 | 0.831143 | 0.610158 | __label__spa_Latn | 0.860056 | 0.255933 |
Osnabrück University - Machine Learning (Summer Term 2018) - Prof. Dr.-Ing. G. Heidemann, Ulf Krumnack
# Exercise Sheet 05
## Introduction
This week's sheet should be solved and handed in before the end of **Sunday, May 13, 2018**. If you need help (and Google and other resources were not enough), feel free to contact your groups designated tutor or whomever of us you run into first. Please upload your results to your group's studip folder.
## Assignment 0: Math recap (Derivatives in higher dimensions) [2 Bonus Points]
This exercise is supposed to be very easy, does not give any points, and is voluntary. There will be a similar exercise on every sheet. It is intended to revise some basic mathematical notions that are assumed throughout this class and to allow you to check if you are comfortable with them. Usually you should have no problem to answer these questions offhand, but if you feel unsure, this is a good time to look them up again. You are always welcome to discuss questions with the tutors or in the practice session. Also, if you have a (math) topic you would like to recap, please let us know.
**a)** What is a partial derivative? What is a directional derivative? How are these computed?
Let $f$ be a function $f: \mathbb{R}^n \rightarrow \mathbb{R}, \vec{x}=(x_1, ..., x_n) \rightarrow f(x_1, ..., x_n)$.
A **partial derivative** is then a derivative of $f$ with respect to only one $x_j$ : $$\frac{\partial f}{\partial x_j}$$.
This means you treat all other $x_i$ as constants while differentiating $f$.
The **gradient** now is just a vector with all partial derivatives:
$$ \nabla f = (\frac{\partial f}{\partial x_1}, ..., \frac{\partial f}{\partial x_n}) $$
Now we can write the **partial derivative** as $$\frac{\partial f}{\partial x_j} = \vec{e_j} \cdot \nabla f$$ where $e_j = (0,..,1,..,0)$ is the jth vector of the standard basis.
The directional derivative is now the generalization of the equation above, allowing us to compute the derivative in every direction (not only in the directions of the vectors of the standard basis): $$\nabla_v f = v \cdot \nabla f$$
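As a quick numerical illustration (an arbitrary function, point and direction, not part of the exercise), the directional derivative computed from the gradient matches a central finite difference:
```python
import numpy as np

def f(x):
    return x[0]**2 * x[1] + np.sin(x[1])

def grad_f(x):
    return np.array([2 * x[0] * x[1], x[0]**2 + np.cos(x[1])])

x0 = np.array([1.0, 2.0])
v = np.array([3.0, 4.0]) / 5.0                        # unit direction

analytic = v @ grad_f(x0)                             # v . grad f
h = 1e-6
numeric = (f(x0 + h * v) - f(x0 - h * v)) / (2 * h)   # central difference
print(analytic, numeric)                              # both approximately 2.867
```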
**b)** What is the gradient, the Jacobian matrix, and the Hessian matrix? How are they computed?
**Gradient** is already explained above.
The **Jacobian matrix** generalizes the concept of the gradient to the case of having a function $$f: \mathbb{R}^n \rightarrow \mathbb{R}^m, \vec{x}=(x_1, ..., x_n) \rightarrow \begin{pmatrix} f_1(x_1,..,x_n) \\ : \\ f_m(x_1, ..,x_n) \end{pmatrix}$$ It is defined as
$$ J = \begin{pmatrix} \frac{\partial f_1}{\partial x_1} & ... & \frac{\partial f_1}{\partial x_n} \\
: & \ & : \\
\frac{\partial f_m}{\partial x_1} & ... & \frac{\partial f_m}{\partial x_n} \end{pmatrix} $$
For the **Hessian matrix** we are now back at a function $f: \mathbb{R}^n \rightarrow \mathbb{R}, \vec{x}=(x_1, ..., x_n) \rightarrow f(x_1, ..., x_n)$. The Hessian matrix contains all second-order partial derivatives. So the $(i,j)$-th entry of the Hessian is $$ H_{ij} = \frac{\partial^2 f}{\partial x_i \partial x_j},$$ meaning you first differentiate w.r.t. $x_j$ and then w.r.t. $x_i$.
**c)** What is the chain rule (in calculus)? How does it look in the higher-dimensional case?
The chain rule is a rule for differentiating a composition of functions.
In its simplest form it states that if $h(x) = f(g(x))$, then $$ h'(x) = f'(g(x)) \cdot g'(x)$$
In higher dimensions this becomes
$$ J_h(x) = J_f(g(x)) \cdot J_g(x)$$
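A small numeric sanity check of the multivariate chain rule (again with arbitrary illustrative functions): the product of the analytic Jacobians matches a finite-difference Jacobian of the composition.
```python
import numpy as np

def g(x):  return np.array([x[0] * x[1], x[0] + x[1]])
def f(u):  return np.array([np.sin(u[0]), u[0] * u[1]])
def h(x):  return f(g(x))

def J_g(x): return np.array([[x[1], x[0]], [1.0, 1.0]])
def J_f(u): return np.array([[np.cos(u[0]), 0.0], [u[1], u[0]]])

x0 = np.array([0.5, -1.2])
chain = J_f(g(x0)) @ J_g(x0)                          # J_f(g(x)) . J_g(x)

eps = 1e-6                                            # finite-difference Jacobian of h
J_num = np.column_stack([(h(x0 + eps * e) - h(x0 - eps * e)) / (2 * eps)
                         for e in np.eye(2)])
print(np.max(np.abs(chain - J_num)))                  # difference at truncation/round-off level
```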
## Assignment 1: Curse of Dimensionality [6 Points]
For the following exercise, be detailed in your answers and provide some examples. Think about keywords like: random vectors in high dimensional space, manifolds and Bertillonage.
**a)** What are the curse of dimensionality and its implication for pattern classification?
The curse of dimensionality describes the phenomenon that in high-dimensional vector spaces two randomly drawn vectors will almost always be close to orthogonal to each other. This is a real problem in data mining, where with a growing number of features the number of possible combinations, and therefore the volume of the resulting feature space, increases exponentially.
In such a high dimensional space, data vectors from real data sets lie far away from each other (which means dense sampling becomes impossible, as there aren't enough samples close to each other). This also leads to the problem that pairs of data vectors have a high probability of having similar distances and to be close to orthogonal to each other. The result is that clustering becomes really difficult, as the vectors more or less stand on their own and distance measures cannot be applied easily.
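A quick simulation (illustrative, not part of the sheet) of the near-orthogonality claim: the cosine of the angle between two random vectors concentrates around zero as the dimension grows.
```python
import numpy as np

rng = np.random.default_rng(0)
for d in (2, 10, 1000):
    u = rng.normal(size=(10000, d))
    v = rng.normal(size=(10000, d))
    cos = np.sum(u * v, axis=1) / (np.linalg.norm(u, axis=1) * np.linalg.norm(v, axis=1))
    print(d, np.mean(np.abs(cos)))   # mean |cos(angle)| shrinks towards 0 as d grows
```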
**b)** Explain how this phenomenon could be used to one's advantage.
This is actually an advantage if you want to discriminate between a high number of individuals (see Bertillonage, where using only 11 features results in a feature space big enough to discriminate humans), but if you want to get usable information out of data, such a 'singling out' of samples is a great disadvantage.
**c)** Explain in your own words the concepts of descriptive and intrinsic dimensionality.
Intrinsic dimensionality exists in contrast to the descriptive dimensionality of data, which is defined by the numbers of parameters used to produce or represent the raw data (i.e. the number of pixels in an unprocessed image).
In addition to this descriptive dimensionality, there is also a (usually smaller) number of independent parameters necessary to describe the data, always with regard to a specific problem we want to use the data for.
For example: a data set might consist of a number of portraits, all with size 1920x1080 pixels, which constitutes their descriptive dimensionality. To do some facial recognition on these portraits however, we do not need the complete descriptive dimension space (which would be way too big anyway), but only a few independent parameters (which we can get by doing PCA and looking at the eigenfaces).
This is possible because the data never fill out the entire high dimensional vector space but instead concentrate along a manifold of a much lower dimensionality.
## Assignment 2: Implement and Apply PCA [8 Points]
In this assignment you will implement PCA from the ground up and apply it to the `cars` dataset (simplified from the JSE [2004 New Car and Truck Data](http://www.amstat.org/publications/jse/jse_data_archive.htm)). This dataset consists of measurements taken on 97 different cars. The eleven features measured are: Suggested retail price (USD), Price to dealer (USD), Engine size (liters), Number of engine cylinders, Engine horsepower, City gas mileage, Highway gas mileage, Weight (pounds), Wheelbase (inches), Length (inches) and Width (inches).
We would like to visualize these high dimensional features to get a feeling for how the cars relate to each other so we need to find a subspace of dimension two or three into which we can project the data.
```python
import numpy as np
# TODO: Load the cars dataset in cars.csv .
### BEGIN SOLUTION
cars = np.loadtxt('cars.csv', delimiter=',')
### END SOLUTION
assert cars.shape == (97, 11), "Shape is not (97, 11), was {}".format(cars.shape)
```
Execute the following code, which will create a scatter plot matrix (it might take some time to run). This should give you an idea about trends and correlations in the dataset.
```python
%matplotlib inline
import pandas as pd
import seaborn as sns
sns.set()
cols = ['Suggested retail price (USD)', 'Price to dealer (USD)',
'Engine size (liters)', 'Number of engine cylinders',
'Engine horsepower', 'City gas mileage' ,
'Highway gas mileage', 'Weight (pounds)',
'Wheelbase (inches)', 'Length (inches)', 'Width (inches)']
df = pd.DataFrame(cars, columns=cols)
sns.pairplot(df)
```
As a first step we need to normalize the data such that they have a zero mean and a unit standard deviation. Use the standard score for this:
$$\frac{X - \mu}{\sigma}$$
```python
import numpy as np
# TODO: Normalize the data and store them in a variable called cars_norm.
### BEGIN SOLUTION
cars_norm = (cars - np.mean(cars, axis=0)) / np.std(cars, axis=0)
# Alternatively one could use:
# import sklearn.preprocessing
# cars_norm = sklearn.preprocessing.scale(cars)
### END SOLUTION
assert cars_norm.shape == (97, 11), "Shape is not (97, 11), was {}".format(cars.shape)
assert np.abs(np.sum(cars_norm)) < 1e-10, "Absolute sum was {} but should be close to 0".format(np.abs(np.sum(cars_norm)))
assert np.abs(np.sum(cars_norm ** 2) / cars_norm.size - 1) < 1e-10, "The data is not normalized, sum/N was {} not 1".format(np.sum(cars_norm ** 2) / cars_norm.size)
```
PCA finds a subspace that maximizes the variance by determining the eigenvectors of the covariance matrix. So we need to calculate the autocovariance matrix and afterwards the eigenvalues. When the data is normalized the autocovariance is calculated as
$$C = X^T\cdot X$$
with $X$ being an $m \times n$ matrix with $n$ features and $m$ samples.
The entry $c_{i,j}$ in $C$ tells you how much feature $i$ correlates with feature $j$.
(Note: sometimes the formula $C=X\cdot X^T$ can be found, i.e., with rows and columns swapped. This depends on whether you put the individual data points as rows or columns in your matrix $X$. However, in the end you want to know how the individual features correlate, i.e., in our example you want an $11\times11$ matrix).
```python
import numpy as np
# TODO: Compute the autocovariance matrix and store it into autocovar
### BEGIN SOLUTION
autocovar = cars_norm.T @ cars_norm
### END SOLUTION
assert autocovar.shape == (11, 11)
# TODO: Compute the eigenvalues und eigenvectors and store them into eigenval and eigenvec
# (Figure out a function to do this for you)
### BEGIN SOLUTION
eigenval, eigenvec = np.linalg.eig(autocovar)
# Alternatively, np.linalg.eigh could be used: eig solves the eigenvector problem for general
# matrices, while eigh is specialized to symmetric/hermitian matrices. (The auto-covariance matrix is always symmetric.)
# eigenval, eigenvec = np.linalg.eigh(autocovar)
# If you wanted to use the singular value decomposition (SVD), you would have to use the centered
# version of cars:
# u, s, v = np.linalg.svd(cars-np.mean(cars, 0))
# eigenval, eigenvec = s ** 2, v
# However, this method usually leads to different results (there are some numerical issues).
# For a more detailed overview over different methods of PCA check the file "PCA Comparison.ipynb".
# In most cases these difference don't matter much, but it is always wise to handle your results
# with a certain critical eye.
### END SOLUTION
assert eigenval.shape == (11,)
assert eigenvec.shape == (11, 11)
```
Plot the spectrum of the eigenvalues to make sure that they are sorted by their magnitude. How many principal components should you include based on the spectrum plot?
```python
import matplotlib.pyplot as plt
%matplotlib inline
### BEGIN SOLUTION
# Sort eigenvectors (and -values) by descending order of eigenvalues.
sort = np.argsort(-eigenval)
eigenval = eigenval[sort]
eigenvec = eigenvec[:,sort]
# To get an idea of the eigenvalues we plot them.
figure = plt.figure('Eigenvalue comparison')
plt.bar(np.arange(len(eigenval)), eigenval)
### END SOLUTION
```
Now you should have a matrix full of eigenvectors. We can now do two things: project the data down onto the two-dimensional subspace to visualize it, and also plot the first two principal component vectors as eleven two-dimensional points to get a feeling for how the features are projected into the subspace. Execute the cells below and describe what you see. Is PCA a good method for this problem? Was it justifiable that we only considered the first two principal components? What kinds of cars are in the four quadrants of the first plot? (**put your answer in the cell below this code cell**)
```python
%matplotlib inline
import matplotlib.pyplot as plt
# Project the data down into the two dimensional subspace
proj = cars_norm @ eigenvec[:,0:2]
# Plot projected data
fig = plt.figure('Data projected onto first two Principal Components')
fig.gca().set_xlim(-8, 8)
fig.gca().set_ylim(-4, 7)
plt.scatter(proj[:,0], proj[:,1])
# Divide plot into quadrants
plt.axhline(0, color='green')
plt.axvline(0, color='green')
# force drawing on 'run all'
fig.canvas.draw()
# Plot eigenvectors
eig_fig = plt.figure('Eigenvector plot')
plt.scatter(eigenvec[:,0], eigenvec[:,1])
# add labels
labels = ['Suggested retail price (USD)', 'Price to dealer (USD)',
'Engine size (liters)', 'Number of engine cylinders',
'Engine horsepower', 'City gas mileage' ,
'Highway gas mileage', 'Weight (pounds)',
'Wheelbase (inches)', 'Length (inches)', 'Width (inches)']
for label, x, y in zip(labels, eigenvec[:,0], eigenvec[:,1]):
plt.annotate(
label, xy = (x, y), xytext = (-20, 20),
textcoords = 'offset points', ha = 'left', va = 'bottom',
bbox = dict(boxstyle = 'round,pad=0.5', fc = 'blue', alpha = 0.5),
arrowprops = dict(arrowstyle = '->', connectionstyle = 'arc3,rad=0'))
# force drawing on 'run all'
eig_fig.canvas.draw()
```
#### PCA is good
The first plot shows the complete dataset projected down onto the two first principle components. Only few points overlap and the points are generally spread out well in the subspace. There is not much trend in the plot which is what we desired, i.e. the axes are not redundant. No clusters can be recognized.
#### The first two PCs are good
It is admissible to pick two dimensions, although only the first eigenvalue has a very high magnitude in comparison.
A 1D plot, which might seem preferable if we only took the eigenvalue magnitudes into account, yields little visual information about the data. However, it gives a better idea of how the original features would be distributed in the space.
The 3D plot is already very hard to grasp by just taking a look at it. So 2D seems to be a good choice.
In general there are several different strategies for deciding the number of dimensions onto which to project your data. Here is a short overview of just a few common choices (a small explained-variance sketch follows the list):
- Eigenvalue magnitudes: Find the cut-off depth. This is useful for classification problems, especially
for problems to be solved by computers.
- Visualization: Choose the number of dimensions which is useful to visualize the data in a meaningful way. This
choice depends a lot on your problem definition. For printing 2D is usually a good choice - but maybe your data
is just very nice for 1D already. Or maybe you are using a glyph plot (see sheet 06) which can represent high
dimensional data.
- Classification results: In the Eigenfaces assignment below we figured out that the number of principal
components (and thus the number of dimensions) can have a crucial impact on classification rates. It is thus
an option to fine tune the number of dimensions for a good classification result on some training set.
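As a small illustration of the first strategy, the eigenvalues computed above can be turned into explained-variance ratios; the 95% threshold below is an arbitrary illustrative choice, not part of the assignment.
```python
explained = eigenval / np.sum(eigenval)          # fraction of variance per component
cumulative = np.cumsum(explained)
print('explained variance ratios:', np.round(explained, 3))
print('components needed for 95% of the variance:', np.argmax(cumulative >= 0.95) + 1)
```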
#### Interpretation of the plot is very subjective
Let's first take a look at the second plot. Each of the eleven points denotes one of the original basis vectors. If you drew arrows from the origin to each of the eleven points, you would obtain the projections of the original axes.
To give it a little bit more meaning, let's investigate this further. Let's take a car with six cylinders and all other values on average. We can change the number of cylinders and see how it moves along the cylinder axis (blue arrow).
As you can see, the six cylinder car is close to the average (i.e. close to the center), while the eight cylinder car is further away in the positive cylinder direction and the two cylinder car is further away in the negative direction.
If we now increase (or decrease) the city gas mileage of the eight cylinder car by 30%, the car will move along the mileage axis (orange arrow).
The dashed arrows indicate the linear combination of the two features for the eight cylinder car with higher city gas mileage. You can see how they are parallel to the respective axes.
Taking all that into account we can say:
- **Top right**: Cars with high gas mileages. This might be limousines.
- **Bottom right**: Cars with low prices and low power, but average sizes and higher gas mileages. This might also be
limousines, but smaller ones.
- **Bottom left**: Cars with big measurements and average pricing. This might be family cars.
- **Top left**: Cars with considerably high power and prices which are still light and small.
This might be sports cars.
Note that this interpretation is just describing the general trend. Due to the nature of linear combinations, it is easily possible to come up with a car which has some exceptional values which lead to cancellation of others.
Note also that depending on the method used to calculate the eigenvectors, your axes and thus your interpretation might slightly differ.
## Assignment 3: PCA [6 Points]
In this exercise we investigate the statement from the lecture that PCA finds the subspace that captures most of the data variance. To be more precise, we show that the orthonormal projection onto an $m$-dimensional subspace that maximizes the variance of the projected data is defined by the principal components, i.e. by the $m$ eigenvectors of the autocorrelation matrix $C$ corresponding to the $m$ largest eigenvalues. We proceed in two steps:
### a)
First consider a one dimensional subspace: determine a (unit) vector $\vec{p}$, such that the variance of the data, when projected onto the subspace determined by that vector, is maximal.
The autocorrelation matrix $C$ allows to compute the variance of the projected data as $\vec{p}^{T}C\vec{p}$. We want to maximize this expression. To avoid $\|\vec{p}\|\to\infty$ we will only consider unit vectors, i.e. we constrain $\vec{p}$ to be normalized: $\vec{p}^T\vec{p}=1$. Maximize the expression with this constraint (which can be done using a Lagrangian multiplier). Conclude that a suitable $\vec{p}$ has to be an eigenvector of $C$ and describe which of the eigenvectors is optimal.
**Solution:**
We want to maximize the expression
$$\vec{p}^T C\vec{p} + \lambda(1-\vec{p}^T\vec{p})$$
with respect to $\vec{p}$, i.e. we have to find solutions for
$$\frac{\partial}{\partial\vec{p}}\left[ \vec{p}^T C\vec{p} + \lambda(1-\vec{p}^T\vec{p})\right] = 0$$
This leads to the equation
$$C\vec{p} = \lambda\vec{p}$$
in other words: for a vector $\vec{p}$ to maximize our expression, it has to be an eigenvector of $C$, and $\lambda$ has to be the corresponding eigenvalue.
By left-multiplying with $\vec{p}^T$ and using the fact that $\vec{p}^T\vec{p}=1$, we obtain
$$\vec{p}^TC\vec{p}=\lambda$$
i.e. the projected variance will correspond to the eigenvalue $\lambda$ and hence is maximized when choosing the largest eigenvalue.
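A quick numerical illustration of this result (with an arbitrary random dataset, not the cars data): among unit vectors, none beats the leading eigenvector of $C$ in projected variance.
```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
C = X.T @ X / len(X)                        # an autocorrelation-type matrix

eigval, eigvec = np.linalg.eigh(C)          # eigh: C is symmetric
p_best = eigvec[:, -1]                      # eigenvector of the largest eigenvalue

P = rng.normal(size=(4, 10000))             # many random unit vectors
P /= np.linalg.norm(P, axis=0)
random_var = np.einsum('ij,ik,kj->j', P, C, P)

# the eigenvector attains the largest eigenvalue; random directions stay below it
print(p_best @ C @ p_best, random_var.max())
```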
### b)
Now proof the statement for the general case of an $m$-dimensional projection space.
Use an inductive argument: assume the statement has been shown for the $(m-1)$-dimensional projection space, spanned by the $m-1$ (orthonormal) eigenvectors $\vec{p}_1,\ldots,\vec{p}_{m-1}$ corresponding to the $(m-1)$ largest eigenvalues $\lambda_1,\ldots,\lambda_{m-1}$. Now find a (unit) vector $\vec{p}_m$, orthogonal to the existing vectors $\vec{p}_1,\ldots,\vec{p}_{m-1}$, that maximizes the projected variance $\vec{p}_m^TC\vec{p}_m$. Proceed similar to case (a), but with additional Lagrangian multipliers to enforce the orthogonality constraint. Show that the new vector $\vec{p}_m$ is an eigenvector of $C$. Finally show that the variance is maximized for the eigenvector corresponding to the $m$-th largest eigenvalue $\lambda_m$.
**Solution:** Assume that the result holds for projection spaces of dimensionality $m-1$. We will now show that it then also holds for dimensionality $m$: we consider a subspace spanned by the $m-1$ (orthonormal) eigenvectors $\vec{p}_1,\ldots,\vec{p}_{m-1}$ corresponding to the $(m-1)$ largest eigenvalues $\lambda_1,\ldots,\lambda_{m-1}$, and a new vector $\vec{p}_{m}$ whose properties we will now examine. First, this vector should be linearly independent from $\vec{p}_1,\ldots,\vec{p}_{m-1}$, as it should define the new $m$-th dimension. This property can be enforced by the (stronger) requirement that $\vec{p}_{m}$ should be orthogonal to $\vec{p}_1,\ldots,\vec{p}_{m-1}$, i.e.
$$\vec{p}_m^T\vec{p}_{i}=0 \text{ for } i=1,\ldots,m-1,$$
which can be expressed using Lagrange multipliers $\eta_1,\ldots,\eta_{m-1}$. As argued in part (a), the variance in direction $\vec{p}_m$ is given by
$$\vec{p}_{m}^TC\vec{p}_{m}.$$
We want to maximize this value, again with the additional constraint that $\vec{p}_{m}$ is normalized, i.e.
$$\vec{p}_{m}^T\vec{p}_m=1,$$
which will be expressed by an additional Lagrange multiplier $\lambda_m$. So in total we want to maximize the function
$$\vec{p}_{m}^TC\vec{p}_{m} + \sum_{i=1}^{m-1}\eta_i\vec{p}_m^T\vec{p}_{i} + \lambda_m(1-\vec{p}_{m}^T\vec{p}_{m})$$
with respect to $\vec{p}_m$, i.e. we have to find solutions for
\begin{align}
0
& = \frac{\partial}{\partial\vec{p}_m}\left[\vec{p}_{m}^TC\vec{p}_{m}
+ \sum_{i=1}^{m-1}\eta_i\vec{p}_m^T\vec{p}_{i}
+ \lambda_m(1-\vec{p}_{m}^T\vec{p}_m)\right] \\
& = 2C\vec{p}_m + \sum_{i=1}^{m-1}\eta_i\vec{p}_{i} - 2\lambda_m\vec{p}_{m}
\end{align}
Multiplying this equation with $\vec{p}_{j}^T$ from the left yields (due to the orthogonality constraint)
\begin{align}
0 = \vec{p}_{j}^T 0
& = \vec{p}_{j}^T 2C\vec{p}_m +
\vec{p}_{j}^T \sum_{i=1}^{m-1}\eta_i\vec{p}_{i} -
\vec{p}_{j}^T 2\lambda_m\vec{p}_{m} \\
&= 0 + \eta_j\vec{p}_{j}^T \vec{p}_{j}- 0 \\
& = \eta_j
\end{align}
for $j=1,\ldots,m-1$. So the problem simplifies to
$$0 = 2C\vec{p}_m - 2\lambda_m\vec{p}_{m}$$
from which we see that a critical point of the Lagrange equation has to fulfill
$$C\vec{p}_m =\lambda_m\vec{p}_{m}$$
which just means it has to be an eigenvector of the matrix $C$ with eigenvalue $\lambda_m$. There may be multiple eigenvectors for $C$, so we have to select $\vec{p}_m$ in a way that it maximizes the variance in direction $\vec{p}_m$, i.e. the value
$$\vec{p}_{m}^TC\vec{p}_{m} = \vec{p}_{m}^T\lambda_m\vec{p}_{m} = \lambda_m.$$
This just means that we have to choose $\vec{p}_m$ to be the eigenvector with the largest eigenvalue (amongst those not previously selected). This completes the inductive step.
| 0c989aed56ea2bccaaca80e8ac9db42ee70dcc36 | 35,061 | ipynb | Jupyter Notebook | sheet_05/sheet_05_machine-learning_solution.ipynb | ArielMant0/ml2018 | MIT |
# Characterization of Discrete Systems in the Spectral Domain
*This Jupyter notebook is part of a [collection of notebooks](../index.ipynb) in the bachelors module Signals and Systems, Communications Engineering, Universität Rostock. Please direct questions and suggestions to [Sascha.Spors@uni-rostock.de](mailto:Sascha.Spors@uni-rostock.de).*
## Measurement of an Electroacoustic Transfer Function
The propagation of sound from a loudspeaker placed at one position in a room to a microphone placed at another position can be interpreted as a system. The system can be regarded as a linear time-invariant (LTI) system under the assumption that the [sound pressure level](https://en.wikipedia.org/wiki/Sound_pressure#Sound_pressure_level) is kept reasonably low (e.g. $L_p < 100$ dB, **pay attention to your ears**) and neither the loudspeaker nor the microphone are 'overdriven'. Consequently, its impulse response $h(t)$ characterizes the propagation of sound between these two positions. Room impulse responses (RIRs) have a wide range of applications in acoustics, e.g. for the characterization of electroacoustic equipment, room acoustics or auralization. Therefore their measurement and modeling by discrete systems have received a lot of attention in digital signal processing.
Similarly, electromagnetic wave radiation between a transmitter and a receiver could be modeled as a linear transmission channel.
The following example demonstrates how an electroacoustic transfer function can be measured and [modeled by an finite impulse response (FIR) system](transfer_function.ipynb#Modeling-a-Continuous-System-by-a-Discrete-System) using the soundcard of a computer. The module [`sounddevice`](http://python-sounddevice.readthedocs.org/) provides access to the soundcard via [`portaudio`](http://www.portaudio.com/). The basic procedure involves the following three steps
1. generation of the measurement signal $x[k]$,
2. playback of measurement signal $x(t)$ and synchronous recording of system response $y(t)$ (digitalization to $y[k]$), and
3. computation of transfer function $H[\mu]$ and impulse response $h[k]$.
```python
import numpy as np
import matplotlib.pyplot as plt
import scipy.signal as signal
import sounddevice as sd
%matplotlib inline
fs = 44100 # sampling rate
T = 10 # length of measurement signal in s
Tr = 2 # length of system impulse response in s
t = np.linspace(0, T, T*fs)
```
### Generation of the Measurement Signal
The measurement signal has to fulfill $H[\mu] \neq 0$, as outlined [before](transfer_function.ipynb#Modeling-a-Continuous-System-by-a-Discrete-System). A Dirac impulse $x[k] = \delta[k]$ cannot be used for electroacoustic equipment due to its high [Crest factor](https://en.wikipedia.org/wiki/Crest_factor). Chirp signals show a variety of favourable properties for the measurement of electroacoustic systems, e.g. a low Crest factor and a magnitude spectrum which can be freely adjusted, for example to the background noise.
A sampled linear chirp signal is used as the measurement signal
\begin{equation}
x[k] = A \sin \left( \frac{\omega_\text{stop} - \omega_\text{start}}{2 T} \cdot (k T_\text{s})^2 + \omega_\text{start} \cdot k T_\text{s} \right)
\end{equation}
where $\omega_\text{start}$ and $\omega_\text{stop}$ denote its start and stop frequency, $T$ the total duration of the sweep and $T_\text{s}$ the sampling interval. The measurement signal $x[k]$ is generated and normalized
```python
x = signal.chirp(t, 20, T, 20000, 'linear', phi=90)
x = 0.9 * x / np.max(np.abs(x))
```
and its magnitude spectrum $|X[\mu]|$ is plotted for illustration
```python
X = np.fft.rfft(x)
f = np.fft.rfftfreq(len(x))*fs
plt.plot(f, 20*np.log10(np.abs(X)))
plt.xlabel(r'$f$ in Hz')
plt.ylabel(r'$|X(f)|$ in dB')
plt.grid()
plt.axis([0, fs/2, 0, 55])
```
[0, 22050.0, 0, 55]
### Playback of Measurement Signal and Recording of Room Response
The measurement signal $x[k]$ is played through the output of the soundcard and the response $y(t)$ is captured synchronously by its input. The played and captured signals have to be of equal length when using the soundcard. The measurement signal $x[k]$ is therefore zero-padded so that the captured signal $y[k]$ includes the complete system response.
Be sure not to overdrive the speaker and the microphone by keeping the input level well below 0 dBFS.
```python
x = np.concatenate((x, np.zeros(Tr*fs)))
y = sd.playrec(x, fs, channels=1)
sd.wait()
y = np.squeeze(y)
print('Playback level: ', 20*np.log10(max(x)), ' dB')
print('Input level: ', 20*np.log10(max(y)), ' dB')
```
Playback level: -0.915149811328209 dB
Input level: -2.4185984431271472 dB
### Computation of the Electroacoustic Transfer Function
The acoustic transfer function is computed by dividing the spectrum of the system output $Y[\mu] = \text{DFT} \{ y[k] \}$ by the spectrum of the measurement signal $X[\mu] = \text{DFT} \{ x[k] \}$. Since both signals are real-valued, a real-valued fast Fourier transform (FFT) is used.
```python
H = np.fft.rfft(y) / np.fft.rfft(x)
```
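As a side note (not part of the measurement procedure above), when the recording is noisy the transfer function is often estimated from averaged cross- and auto-power spectral densities instead, $H_1 = S_{xy} / S_{xx}$. A minimal sketch with `scipy.signal`, where the window length is an arbitrary choice:
```python
f_w, Sxx = signal.welch(x, fs, nperseg=2**14)   # averaged auto-power spectrum of x
f_w, Sxy = signal.csd(x, y, fs, nperseg=2**14)  # averaged cross-power spectrum
H1 = Sxy / Sxx                                  # H1 estimate, coarser frequency resolution
```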
The magnitude of the transfer function is plotted for illustration
```python
f = np.fft.rfftfreq(len(x))*fs
plt.plot(f, 20*np.log10(np.abs(H)))
plt.xlabel(r'$f$ in Hz')
plt.ylabel(r'$|H(f)|$ in dB')
plt.grid()
plt.xlim([0, fs/2])
```
(0, 22050.0)
### Computation of the Electroacoustic Impulse Response
The impulse response is computed by inverse DFT $h[k] = \text{IDFT} \{ H[\mu] \}$ and truncation to its assumed length
```python
h = np.fft.irfft(H)
h = h[0:Tr*fs]
```
It is plotted for illustration
```python
t = 1/fs * np.arange(len(h))
plt.plot(t, h)
plt.xlim([0.1, .2])
plt.xlabel(r'$t$ in s')
plt.ylabel(r'$h[k]$')
plt.grid()
```
**Copyright**
This notebook is provided as [Open Educational Resource](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Sascha Spors, Continuous- and Discrete-Time Signals and Systems - Theory and Computational Examples*.
| 3326ba42adaac54cddf852d12bcc565687dcf784 | 869,660 | ipynb | Jupyter Notebook | discrete_systems_spectral_domain/measurement_acoustic_transfer_function.ipynb | spatialaudio/signals-and-systems-lecture | MIT |
Maxwell's equations are
\begin{align}
\frac{\partial \vec{B}}{\partial t} &= - \nabla \times \vec{E} \hspace{2cm} \text{Faraday's law}\\
\frac{\partial \vec{D}}{\partial t} &= - \nabla \times \vec{H} - \vec{J} - \vec{s} \hspace{2cm} \text{Ampere's law.}
\end{align}
where $\vec{E}$ is the electric field, $\vec{H}$ is the magnetic field, $\vec{J}$ is the current density, $\vec{D}$ is the displacement flux, $\vec{B}$ is the magnetic flux and $\vec{s}$ is the source term.
The constitutive relations between the fields and fluxes are as follows.
\begin{align}
\vec{J} &= \sigma \vec{E}\\
\vec{B} &= \mu \vec{H}\\
\vec{D} &= \epsilon \vec{E}
\end{align}
Using the constitutive relations, Maxwell's equations can be written in terms of the electric field and the magnetic flux:
\begin{align}
\frac{\partial \vec{B}}{\partial t} &= - \nabla \times \vec{E} \\
\epsilon \frac{\partial \vec{E}}{\partial t} &= - \nabla \times \frac{1}{\mu} \vec{B} - \sigma \vec{E} - \vec{s}
\end{align}
Further, by dropping the $\epsilon$ term (the quasi-static approximation) and using an $e^{-i\omega t}$ Fourier time convention, Maxwell's equations can be written in the frequency domain as
\begin{align}
-i \omega \hat{\vec{B}} &= - \nabla \times \hat{\vec{E}} \\
- \vec{s} &= - \nabla \times \frac{1}{\mu} \hat{\vec{B}} - \sigma \hat{\vec{E}}
\end{align}
The weak form is
\begin{align}
-i \omega (\hat{\vec{B}},\hat{\vec{F}}) + (\nabla \times \hat{\vec{E}},\hat{\vec{F}}) &= 0 \\
(\nabla \times \mu^{-1} \hat{\vec{B}},\hat{\vec{W}}) + (\sigma \hat{\vec{E}},\hat{\vec{W}}) &= (\hat{\vec{s}},\hat{\vec{W}})
\end{align}
Integrating the curl term in the second equation by parts moves the derivative onto the test function and introduces a boundary term (written here for the 1D case on $[0, z_\text{end}]$):
$$ (\mu^{-1} \hat{\vec{B}}, \nabla \times \hat{\vec{W}}) + (\sigma \hat{\vec{E}}, \hat{\vec{W}}) = (\hat{\vec{s}}, \hat{\vec{W}}) + \left. \left( \mu^{-1} \hat{B} \, \hat{W} \right) \right|_0^{z_\text{end}}, $$
while Faraday's equation keeps its form,
$$ -i \omega (\hat{\vec{B}}, \hat{\vec{F}}) + (\nabla \times \hat{\vec{E}}, \hat{\vec{F}}) = 0 . $$
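To make the system concrete, here is a minimal 1D finite-difference sketch (a uniform half-space with $E = E_x(z)$, $B = B_y(z)$, constant $\sigma$ and $\mu$, the standard quasi-static 1D reduction and the $e^{-i\omega t}$ convention). It is only an illustration, not the mimetic finite-volume discretization used in SimPEG, and all parameter values are arbitrary.
```python
import numpy as np
import scipy.sparse as sparse
import scipy.sparse.linalg as spla

mu = 4e-7 * np.pi         # permeability [H/m]
sigma = 0.01              # conductivity [S/m]
omega = 2 * np.pi * 1.0   # angular frequency (1 Hz)
n, depth = 2000, 2e5      # number of nodes, depth extent [m]
z = np.linspace(0.0, depth, n)
dz = z[1] - z[0]

# Eliminating B gives d^2 E / dz^2 = -i omega mu sigma E; discretize with a
# 3-point stencil and Dirichlet conditions E(0) = 1 (unit source), E(depth) = 0.
k2 = -1j * omega * mu * sigma
main = np.full(n, -2.0 / dz**2 - k2)
off = np.full(n - 1, 1.0 / dz**2, dtype=complex)
A = sparse.diags([off, main, off], [-1, 0, 1], format='lil')
rhs = np.zeros(n, dtype=complex)
A[0, 0], A[0, 1], rhs[0] = 1.0, 0.0, 1.0        # surface boundary condition
A[-1, -1], A[-1, -2], rhs[-1] = 1.0, 0.0, 0.0   # field has decayed away at depth
E = spla.spsolve(A.tocsc(), rhs)

B = np.gradient(E, dz) / (1j * omega)           # recover B from dE/dz = i omega B

# compare with the analytic half-space decay E = exp(-sqrt(-i omega mu sigma) z)
E_exact = np.exp(-np.sqrt(k2) * z)
print(np.max(np.abs(E - E_exact)))              # O(dz^2) discretization error
```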
| d94031f6afaeb388c8c988a51b1cbad5332b5cbf | 3,168 | ipynb | Jupyter Notebook | notebooks/3D theroy overview-Copy0.ipynb | hanbo1735/simpegmt | MIT |
$\newcommand{\mb}[1]{\mathbf{ #1 }}$
$\newcommand{\bs}[1]{\boldsymbol{ #1 }}$
$\newcommand{\bb}[1]{\mathbb{ #1 }}$
$\newcommand{\R}{\bb{R}}$
$\newcommand{\ip}[2]{\left\langle #1, #2 \right\rangle}$
$\newcommand{\norm}[1]{\left\Vert #1 \right\Vert}$
$\newcommand{\der}[2]{\frac{\mathrm{d} #1 }{\mathrm{d} #2 }}$
$\newcommand{\derp}[2]{\frac{\partial #1 }{\partial #2 }}$
# Cart Pole
Consider a cart on a frictionless track. Suppose a pendulum is attached to the cart by a frictionless joint. The cart is modeled as a point mass $m_c$ and the pendulum is modeled as a massless rigid link with point mass $m_p$ a distance $l$ away from the cart.
Let $\mathcal{I} = (\mb{i}^1, \mb{i}^2, \mb{i}^3)$ denote an inertial frame. Suppose the position of the cart is resolved in the inertial frame as $\mb{r}_{co}^{\mathcal{I}} = (x, 0, 0)$. Additionally, suppose the gravitational force acting on the pendulum is resolved in the inertial frame as $\mb{f}_g^{\mathcal{I}} = (0, 0, -m_p g)$.
Let $\mathcal{B} = (\mb{b}^1, \mb{b}^2, \mb{b}^3)$ denote a body reference frame, with $\mb{b}^2 = \mb{i}^2$. The position of the pendulum mass relative to the cart is resolved in the body frame as $\mb{r}_{pc}^\mathcal{B} = (0, 0, l)$.
The kinetic energy of the system is:
\begin{equation}
\frac{1}{2} m_c \norm{\dot{\mb{r}}_{co}^\mathcal{I}}_2^2 + \frac{1}{2} m_p \norm{\dot{\mb{r}}_{po}^\mathcal{I}}_2^2
\end{equation}
First, note that $\dot{\mb{r}}_{co}^{\mathcal{I}} = (\dot{x}, 0, 0)$.
Next, note that $\mb{r}_{po}^\mathcal{I} = \mb{r}_{pc}^\mathcal{I} + \mb{r}_{co}^\mathcal{I} = \mb{C}_{\mathcal{I}\mathcal{B}}\mb{r}_{pc}^\mathcal{B} + \mb{r}_{co}^\mathcal{I}$, where $\mb{C}_{\mathcal{I}\mathcal{B}}$ is the direction cosine matrix (DCM) satisfying:
\begin{equation}
\mb{C}_{\mathcal{I}\mathcal{B}} = \begin{bmatrix} \ip{\mb{i}_1}{\mb{b}_1} & \ip{\mb{i}_1}{\mb{b}_2} & \ip{\mb{i}_1}{\mb{b}_3} \\ \ip{\mb{i}_2}{\mb{b}_1} & \ip{\mb{i}_2}{\mb{b}_2} & \ip{\mb{i}_2}{\mb{b}_3} \\ \ip{\mb{i}_3}{\mb{b}_1} & \ip{\mb{i}_3}{\mb{b}_2} & \ip{\mb{i}_3}{\mb{b}_3} \end{bmatrix}.
\end{equation}
We parameterize the DCM using $\theta$, measuring the clockwise angle of the pendulum from upright in radians. In this case, the DCM is:
\begin{equation}
\mb{C}_{\mathcal{I}\mathcal{B}} = \begin{bmatrix} \cos{\theta} & 0 & \sin{\theta} \\ 0 & 1 & 0 \\ -\sin{\theta} & 0 & \cos{\theta} \end{bmatrix},
\end{equation}
following from $\cos{\left( \frac{\pi}{2} - \theta \right)} = \sin{\theta}$. Therefore:
\begin{equation}
\mb{r}_{po}^\mathcal{I} = \begin{bmatrix} x + l\sin{\theta} \\ 0 \\ l\cos{\theta} \end{bmatrix}
\end{equation}
We have $\dot{\mb{r}}_{po}^\mathcal{I} = \dot{\mb{C}}_{\mathcal{I}\mathcal{B}} \mb{r}_{pc}^\mathcal{B} + \dot{\mb{r}}_{co}^\mathcal{I}$, following from $\dot{\mb{r}}_{pc}^\mathcal{B} = \mb{0}_3$ since the pendulum is rigid. The derivative of the DCM is:
\begin{equation}
\der{{\mb{C}}_{\mathcal{I}\mathcal{B}}}{\theta} = \begin{bmatrix} -\sin{\theta} & 0 & \cos{\theta} \\ 0 & 0 & 0 \\ -\cos{\theta} & 0 & -\sin{\theta} \end{bmatrix},
\end{equation}
finally yielding:
\begin{equation}
\dot{\mb{r}}_{po}^\mathcal{I} = \dot{\theta} \der{\mb{C}_{\mathcal{I}\mathcal{B}}}{\theta} \mb{r}^{\mathcal{B}}_{pc} + \dot{\mb{r}}_{co}^\mathcal{I} = \begin{bmatrix} l\dot{\theta}\cos{\theta} + \dot{x} \\ 0 \\ -l\dot{\theta}\sin{\theta} \end{bmatrix}
\end{equation}
Define generalized coordinates $\mb{q} = (x, \theta)$ with configuration space $\mathcal{Q} = \R \times \bb{S}^1$, where $\bb{S}^1$ denotes the $1$-sphere. The kinetic energy can then be expressed as:
\begin{align}
T(\mb{q}, \dot{\mb{q}}) &= \frac{1}{2} m_c \begin{bmatrix} \dot{x} \\ \dot{\theta} \end{bmatrix}^\top \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} \dot{x} \\ \dot{\theta} \end{bmatrix} + \frac{1}{2} m_p \begin{bmatrix} \dot{x} \\ \dot{\theta} \end{bmatrix}^\top \begin{bmatrix} 1 & l \cos{\theta} \\ l \cos{\theta} & l^2 \end{bmatrix} \begin{bmatrix} \dot{x} \\ \dot{\theta} \end{bmatrix}\\
&= \frac{1}{2} \dot{\mb{q}}^\top\mb{D}(\mb{q})\dot{\mb{q}},
\end{align}
where inertia matrix function $\mb{D}: \mathcal{Q} \to \bb{S}^2_{++}$ is defined as:
\begin{equation}
\mb{D}(\mb{q}) = \begin{bmatrix} m_c + m_p & m_p l \cos{\theta} \\ m_p l \cos{\theta} & m_pl^2 \end{bmatrix}.
\end{equation}
Note that:
\begin{equation}
\derp{\mb{D}}{x} = \mb{0}_{2 \times 2},
\end{equation}
and:
\begin{equation}
\derp{\mb{D}}{\theta} = \begin{bmatrix} 0 & -m_p l \sin{\theta} \\ -m_p l \sin{\theta} & 0 \end{bmatrix},
\end{equation}
so we can express:
\begin{equation}
\derp{\mb{D}}{\mb{q}} = -m_p l \sin{\theta} (\mb{e}_1 \otimes \mb{e}_2 \otimes \mb{e}_2 + \mb{e}_2 \otimes \mb{e}_1 \otimes \mb{e}_2).
\end{equation}
The potential energy of the system is $U: \mathcal{Q} \to \R$ defined as:
\begin{equation}
U(\mb{q}) = -\ip{\mb{f}_g^\mathcal{I}}{\mb{r}^{\mathcal{I}}_{po}} = m_p g l \cos{\theta}.
\end{equation}
Define $\mb{G}: \mathcal{Q} \to \R^2$ as:
\begin{equation}
\mb{G}(\mb{q}) = \left(\derp{U}{\mb{q}}\right)^\top = \begin{bmatrix} 0 \\ -m_p g l \sin{\theta} \end{bmatrix}.
\end{equation}
Assume a force $(F, 0, 0)$ (resolved in the inertial frame) can be applied to the cart. The Euler-Lagrange equation yields:
\begin{align}
\der{}{t} \left( \derp{T}{\dot{\mb{q}}} \right)^\top - \left( \derp{T}{\mb{q}} - \derp{U}{\mb{q}} \right)^\top &= \der{}{t} \left( \mb{D}(\mb{q})\dot{\mb{q}} \right) - \frac{1}{2}\derp{\mb{D}}{\mb{q}}(\dot{\mb{q}}, \dot{\mb{q}}, \cdot) + \mb{G}(\mb{q})\\
&= \mb{D}(\mb{q})\ddot{\mb{q}} + \derp{\mb{D}}{\mb{q}}(\cdot, \dot{\mb{q}}, \dot{\mb{q}}) - \frac{1}{2}\derp{\mb{D}}{\mb{q}}(\dot{\mb{q}}, \dot{\mb{q}}, \cdot) + \mb{G}(\mb{q})\\
&= \mb{B} F,
\end{align}
with static actuation matrix:
\begin{equation}
\mb{B} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}.
\end{equation}
Note that:
\begin{align}
\derp{\mb{D}}{\mb{q}}(\cdot, \dot{\mb{q}}, \dot{\mb{q}}) - \frac{1}{2}\derp{\mb{D}}{\mb{q}}(\dot{\mb{q}}, \dot{\mb{q}}, \cdot) &= -m_p l \sin{\theta} (\mb{e}_1 \dot{\theta}\dot{\theta} + \mb{e}_2\dot{x}\dot{\theta}) + \frac{1}{2} m_p l \sin{\theta} (\dot{x}\dot{\theta} \mb{e}_2 + \dot{\theta}\dot{x} \mb{e}_2)\\
&= \begin{bmatrix} -m_p l \dot{\theta}^2 \sin{\theta} \\ 0 \end{bmatrix}\\
&= \mb{C}(\mb{q}, \dot{\mb{q}})\dot{\mb{q}},
\end{align}
with Coriolis terms defined as:
\begin{equation}
\mb{C}(\mb{q}, \dot{\mb{q}}) = \begin{bmatrix} 0 & -m_p l \dot{\theta} \sin{\theta} \\ 0 & 0 \end{bmatrix}.
\end{equation}
Finally, we have:
\begin{equation}
\mb{D}(\mb{q})\ddot{\mb{q}} + \mb{C}(\mb{q}, \dot{\mb{q}})\dot{\mb{q}} + \mb{G}(\mb{q}) = \mb{B}F
\end{equation}
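As an optional cross-check of the hand derivation (a sketch assuming SymPy is available; the symbols carry an `_s` suffix so they do not clobber the numeric variables defined below), the two Euler-Lagrange equations can be generated symbolically and compared against $\mb{D}$, $\mb{C}$ and $\mb{G}$ above.
```python
import sympy as sp

t_s = sp.symbols('t')
m_c_s, m_p_s, l_s, g_s = sp.symbols('m_c m_p l g', positive=True)
x_s = sp.Function('x')(t_s)
th_s = sp.Function('theta')(t_s)

# positions of the pendulum point mass
xp = x_s + l_s * sp.sin(th_s)
zp = l_s * sp.cos(th_s)

# Lagrangian L = T - U
T = sp.Rational(1, 2) * m_c_s * sp.diff(x_s, t_s)**2 \
    + sp.Rational(1, 2) * m_p_s * (sp.diff(xp, t_s)**2 + sp.diff(zp, t_s)**2)
U = m_p_s * g_s * l_s * sp.cos(th_s)
Lag = sp.simplify(T - U)

F_s = sp.symbols('F')
for q_i, rhs in [(x_s, F_s), (th_s, 0)]:
    lhs = sp.diff(sp.diff(Lag, sp.diff(q_i, t_s)), t_s) - sp.diff(Lag, q_i)
    sp.pprint(sp.Eq(sp.simplify(sp.expand(lhs)), rhs))
```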
```python
from numpy import array, concatenate, cos, dot, reshape, sin, zeros
from core.dynamics import RoboticDynamics
class CartPole(RoboticDynamics):
def __init__(self, m_c, m_p, l, g=9.81):
RoboticDynamics.__init__(self, 2, 1)
self.params = m_c, m_p, l, g
def D(self, q):
m_c, m_p, l, _ = self.params
_, theta = q
return array([[m_c + m_p, m_p * l * cos(theta)], [m_p * l * cos(theta), m_p * (l ** 2)]])
def C(self, q, q_dot):
_, m_p, l, _ = self.params
_, theta = q
_, theta_dot = q_dot
return array([[0, -m_p * l * theta_dot * sin(theta)], [0, 0]])
def U(self, q):
_, m_p, l, g = self.params
_, theta = q
return m_p * g * l * cos(theta)
def G(self, q):
_, m_p, l, g = self.params
_, theta = q
return array([0, -m_p * g * l * sin(theta)])
def B(self, q):
return array([[1], [0]])
m_c = 0.5
m_p = 0.25
l = 0.5
cart_pole = CartPole(m_c, m_p, l)
```
We attempt to stabilize the pendulum upright, that is, drive $\theta$ to $0$. We'll use the normal form transformation:
\begin{equation}
\bs{\Phi}(\mb{q}, \dot{\mb{q}}) = \begin{bmatrix} \bs{\eta}(\mb{q}, \dot{\mb{q}}) \\ \mb{z}(\mb{q}, \dot{\mb{q}}) \end{bmatrix} = \begin{bmatrix} \theta \\ \dot{\theta} \\ x \\ m_p l \dot{x} \cos{\theta} + m_p l^2 \dot{\theta} \end{bmatrix}.
\end{equation}
```python
from core.dynamics import ConfigurationDynamics
class CartPoleOutput(ConfigurationDynamics):
def __init__(self, cart_pole):
ConfigurationDynamics.__init__(self, cart_pole, 1)
self.cart_pole = cart_pole
def y(self, q):
return q[1:]
def dydq(self, q):
return array([[0, 1]])
def d2ydq2(self, q):
return zeros((1, 2, 2))
output = CartPoleOutput(cart_pole)
```
```python
from numpy import identity
from core.controllers import FBLinController, LQRController
Q = 10 * identity(2)
R = identity(1)
lqr = LQRController.build(output, Q, R)
fb_lin = FBLinController(output, lqr)
```
```python
from numpy import linspace, pi
x_0 = array([0, pi / 4, 0, 0])
ts = linspace(0, 10, 1000 + 1)
xs, us = cart_pole.simulate(x_0, fb_lin, ts)
```
```python
from matplotlib.pyplot import subplots, show, tight_layout
```
```python
_, axs = subplots(2, 2, figsize=(8, 8))
ylabels = ['$x$ (m)', '$\\theta$ (rad)', '$\\dot{x}$ (m / sec)', '$\\dot{\\theta}$ (rad / sec)']
for ax, data, ylabel in zip(axs.flatten(), xs.T, ylabels):
ax.plot(ts, data, linewidth=3)
ax.set_ylabel(ylabel, fontsize=16)
ax.grid()
for ax in axs[-1]:
ax.set_xlabel('$t$ (sec)', fontsize=16)
tight_layout()
show()
```
```python
_, ax = subplots(figsize=(4, 4))
ax.plot(ts[:-1], us, linewidth=3)
ax.grid()
ax.set_xlabel('$t$ (sec)', fontsize=16)
ax.set_ylabel('$F$ (N)', fontsize=16)
show()
```
| beb6ab6f74291eb4159985d1337d242f1c134130 | 71,882 | ipynb | Jupyter Notebook | cart-pole.ipynb | ivandariojr/core | MIT |
<p align="center">
</p>
## Data Analytics
### Basic Univariate Statistics in Python
#### Michael Pyrcz, Associate Professor, University of Texas at Austin
##### [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1)
### Data Analytics: Basic Univariate Statistics
Here's a demonstration of calculation of univariate statistics in Python. This demonstration is part of the resources that I include for my courses in Spatial / Subsurface Data Analytics and Geostatistics at the Cockrell School of Engineering and Jackson School of Goesciences at the University of Texas at Austin.
We will cover the following statistics:
#### Measures of Centrality
* Arithmetic Average / Mean
* Median
* Mode (most frequent binned)
* Geometric Mean
* Harmonic Mean
* Power Law Average
#### Measures of Dispersion
* Population Variance
* Sample Variance
* Population Standard Deviation
* Sample Standard Deviation
* Range
* Percentile w. Tail Assumptions
* Interquartile Range
#### Tukey Outlier Test
* Lower Quartile/P25
* Upper Quartile/P75
* Interquartile Range
* Lower Fence
* Upper Fence
* Calculating Outliers
#### Measures of Shape
* Skew
* Excess Kurtosis
* Pearson's Mode Skewness
* Quartile Skew Coefficient
#### Nonparametric Cumulative Distribution Functions (CDFs)
* plotting a nonparametric CDF
* fitting a parametric distribution and plotting
I have a lecture on these univariate statistics available on [YouTube](https://www.youtube.com/watch?v=wAcbA2cIqec&list=PLG19vXLQHvSB-D4XKYieEku9GQMQyAzjJ&index=11&t=0s).
#### Getting Started
Here's the steps to get setup in Python with the GeostatsPy package:
1. Install Anaconda 3 on your machine (https://www.anaconda.com/download/).
2. From Anaconda Navigator (within Anaconda3 group), go to the environment tab, click on base (root) green arrow and open a terminal.
3. In the terminal type: pip install geostatspy.
4. Open Jupyter and in the top block get started by copy and pasting the code block below from this Jupyter Notebook to start using the geostatspy functionality.
You will need to copy the data file to your working directory. The dataset is available on my GitHub account in my GeoDataSets repository at:
* Tabular data - [2D_MV_200wells.csv](https://github.com/GeostatsGuy/GeoDataSets/blob/master/2D_MV_200wells.csv)
#### Importing Packages
We will need some standard packages. These should have been installed with Anaconda 3.
```python
import numpy as np # ndarrys for gridded data
import pandas as pd # DataFrames for tabular data
import os # set working directory, run executables
import matplotlib.pyplot as plt # plotting
import scipy # statistics
import statistics as stats # statistics like the mode
from scipy.stats import norm # fitting a Gaussian distribution
```
#### Set the Working Directory
I always like to do this so I don't lose files and to simplify subsequent read and writes (avoid including the full address each time). Set this to your working directory, with the above mentioned data file.
```python
os.chdir("c:/PGE383") # set the working directory
```
#### Loading Data
Let's load the provided multivariate, spatial dataset. '2D_MV_200wells.csv' is available at https://github.com/GeostatsGuy/GeoDataSets. It is a comma delimited file with X and Y coordinates,facies 1 and 2 (1 is sandstone and 2 interbedded sand and mudstone), porosity (fraction), permeability (mDarcy) and acoustic impedance (kg/m2s*10^6). We load it with the pandas 'read_csv' function into a data frame we called 'df' and then preview it by printing a slice and by utilizing the 'head' DataFrame member function (with a nice and clean format, see below).
```python
df = pd.read_csv("2D_MV_200wells.csv") # read a .csv file in as a DataFrame
#print(df.iloc[0:5,:]) # display first 4 samples in the table as a preview
df.head() # we could also use this command for a table preview
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>X</th>
<th>Y</th>
<th>facies_threshold_0.3</th>
<th>porosity</th>
<th>permeability</th>
<th>acoustic_impedance</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>565</td>
<td>1485</td>
<td>1</td>
<td>0.1184</td>
<td>6.170</td>
<td>2.009</td>
</tr>
<tr>
<th>1</th>
<td>2585</td>
<td>1185</td>
<td>1</td>
<td>0.1566</td>
<td>6.275</td>
<td>2.864</td>
</tr>
<tr>
<th>2</th>
<td>2065</td>
<td>2865</td>
<td>2</td>
<td>0.1920</td>
<td>92.297</td>
<td>3.524</td>
</tr>
<tr>
<th>3</th>
<td>3575</td>
<td>2655</td>
<td>1</td>
<td>0.1621</td>
<td>9.048</td>
<td>2.157</td>
</tr>
<tr>
<th>4</th>
<td>1835</td>
<td>35</td>
<td>1</td>
<td>0.1766</td>
<td>7.123</td>
<td>3.979</td>
</tr>
</tbody>
</table>
</div>
Let's extract one of the features, porosity, into a 1D ndarray and do our statistics on porosity.
* then we can use NumPy's statistics methods
```python
por = df['porosity'].values
```
Now let's go through all the univariate statistics listed above one-by-one.
#### Measures of Central Tendency
Let's start with measures of central tendency.
##### The Arithmetic Average / Mean
\begin{equation}
\overline{x} = \frac{1}{n}\sum^n_{i=1} x_i
\end{equation}
```python
por_average = np.average(por)
print('Porosity average is ' + str(round(por_average,2)) + '.')
```
Porosity average is 0.15.
##### Median
\begin{equation}
P50_x = F^{-1}_{x}(0.50)
\end{equation}
```python
por_median = np.median(por)
print('Porosity median is ' + str(round(por_median,2)) + '.')
```
Porosity median is 0.15.
##### Mode
The mode is the most common value. Since porosity is continuous, we first bin the data, as with histogram bins/bars; here we round the values to the 2nd decimal place, which assumes bins centered at $0.01, 0.02,\ldots, 0.30$.
```python
por_mode = stats.mode(np.round(por,2))
print('Porosity mode is ' + str(round(por_mode,2)) + '.')
```
Porosity mode is 0.14.
##### Geometric Mean
\begin{equation}
\overline{x}_G = ( \prod^n_{i=1} x_i )^{\frac{1}{n}}
\end{equation}
```python
por_geometric = scipy.stats.mstats.gmean(por)
print('Porosity geometric mean is ' + str(round(por_geometric,2)) + '.')
```
Porosity geometric mean is 0.15.
##### Harmonic Mean
\begin{equation}
\overline{x}_H = \frac{n}{\sum^n_{i=1} \frac{1}{x_i}}
\end{equation}
```python
por_hmean = scipy.stats.mstats.hmean(por)
print('Porosity harmonic mean is ' + str(round(por_hmean,2)) + '.')
```
Porosity harmonic mean is 0.14.
##### Power Law Average
\begin{equation}
\overline{x}_p = (\frac{1}{n}\sum^n_{i=1}{x_i^{p}})^\frac{1}{p}
\end{equation}
```python
power = 1.0
por_power = np.average(np.power(por,power))**(1/power)
print('Porosity law mean for p = ' + str(power) + ' is ' + str(round(por_power,2)) + '.')
```
Porosity law mean for p = 1.0 is 0.15.
#### Measures of Dispersion
##### Population Variance
\begin{equation}
\sigma^2_{x} = \frac{1}{n}\sum^n_{i=1}(x_i - \mu)^2
\end{equation}
```python
por_varp = stats.pvariance(por)
print('Porosity population variance is ' + str(round(por_varp,4)) + '.')
```
Porosity population variance is 0.0011.
##### Sample Variance
\begin{equation}
\sigma^2_{x} = \frac{1}{n-1}\sum^n_{i=1}(x_i - \overline{x})^2
\end{equation}
```python
por_var = stats.variance(por)
print('Porosity sample variance is ' + str(round(por_var,4)) + '.')
```
Porosity sample variance is 0.0011.
##### Population Standard Deviation
\begin{equation}
\sigma_{x} = \sqrt{ \frac{1}{n}\sum^n_{i=1}(x_i - \mu)^2 }
\end{equation}
```python
por_stdp = stats.pstdev(por)
print('Porosity population standard deviation is ' + str(round(por_stdp,4)) + '.')
```
Porosity population standard deviation is 0.0329.
##### Sample Standard Deviation
\begin{equation}
\sigma_{x} = \sqrt{ \frac{1}{n-1}\sum^n_{i=1}(x_i - \mu)^2 }
\end{equation}
```python
por_std = stats.stdev(por)
print('Porosity sample standard deviation is ' + str(round(por_std,4)) + '.')
```
Porosity sample standard deviation is 0.0329.
##### Range
\begin{equation}
range_x = P100_x - P00_x
\end{equation}
```python
por_range = np.max(por) - np.min(por)
print('Porosity range is ' + str(round(por_range,2)) + '.')
```
Porosity range is 0.17.
##### Percentile
\begin{equation}
P(p)_x = F^{-1}_{x}(p)
\end{equation}
```python
p_value = 13
por_percentile = np.percentile(por,p_value)
print('Porosity ' + str(int(p_value)) + 'th percentile is ' + str(round(por_percentile,2)) + '.')
```
Porosity 13th percentile is 0.11.
##### Inter Quartile Range
\begin{equation}
IQR = P(0.75)_x - P(0.25)_x
\end{equation}
```python
por_iqr = scipy.stats.iqr(por)
print('Porosity interquartile range is ' + str(round(por_iqr,2)) + '.')
```
Porosity interquartile range is 0.04.
#### Tukey Test for Outliers
Let's demonstrate the Tukey test for outliers based on the lower and upper fences.
\begin{equation}
fence_{lower} = P_x(0.25) - 1.5 \times [P_x(0.75) - P_x(0.25)]
\end{equation}
\begin{equation}
fence_{upper} = P_x(0.75) + 1.5 \times [P_x(0.75) - P_x(0.25)]
\end{equation}
Then we declare samples values above the upper fence or below the lower fence as outliers.
```python
p25, p75 = np.percentile(por, [25, 75])
lower_fence = p25 - por_iqr * 1.5
upper_fence = p75 + por_iqr * 1.5
por_outliers = por[np.where((por > upper_fence) | (por < lower_fence))[0]]
print('Porosity outliers by Tukey test include ' + str(por_outliers) + '.')
por_outliers_indices = np.where((por > upper_fence) | (por < lower_fence))[0]
print('Porosity outlier indices by Tukey test are ' + str(por_outliers_indices) + '.')
```
Porosity outliers by Tukey test include [0.06726 0.05 0.06092].
Porosity outlier indices by Tukey test are [110 152 198].
#### Measures of Shape
##### Pearson's Mode Skewness
\begin{equation}
skew = \frac{3 (\overline{x} - P50_x)}{\sigma_x}
\end{equation}
```python
por_skew = (por_average - por_median)/por_std
print('Porosity skew is ' + str(round(por_skew,2)) + '.')
```
Porosity skew is -0.03.
##### Population Skew, 3rd Central Moment
\begin{equation}
\gamma_{x} = \frac{1}{n}\sum^n_{i=1}(x_i - \mu)^3
\end{equation}
```python
por_cm = scipy.stats.moment(por,moment=3)
print('Porosity 3rd central moment is ' + str(round(por_cm,7)) + '.')
```
Porosity 3rd central moment is -1.22e-05.
##### Quartile Skew Coefficient
\begin{equation}
QS = \frac{(P75_x - P50_x) - (P50_x - P25_x)}{(P75_x - P25_x)}
\end{equation}
```python
por_qs = ((np.percentile(por,75)-np.percentile(por,50))
-(np.percentile(por,50)-np.percentile(por,25))) /((np.percentile(por,75))-np.percentile(por,25))
print('Porosity quartile skew coefficient is ' + str(round(por_qs,2)) + '.')
```
Porosity quartile skew coefficient is 0.14.
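The outline above also lists excess kurtosis, which is not computed in the walk-through; here is a minimal sketch using `scipy.stats.kurtosis` (Fisher's definition, so a Gaussian distribution has excess kurtosis 0):
```python
por_kurt = scipy.stats.kurtosis(por, fisher=True, bias=True)
print('Porosity excess kurtosis is ' + str(round(por_kurt,2)) + '.')
```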
#### Plot the Nonparametric CDF
Let's demonstrate plotting a nonparametric cumulative distribution function (CDF) in Python
```python
# sort the data:
por_sort = np.sort(por)
# calculate the cumulative probabilities assuming known tails
p = np.arange(len(por)) / (len(por) - 1)
# plot the cumulative probabilities vs. the sorted porosity values
plt.subplot(122)
plt.scatter(por_sort, p, c = 'red', edgecolors = 'black', s = 10, alpha = 0.7)
plt.xlabel('Porosity (fraction)'); plt.ylabel('Cumulative Probability'); plt.grid();
plt.title('Nonparametric Porosity CDF')
plt.ylim([0,1]); plt.xlim([0,0.25])
plt.subplots_adjust(left=0.0, bottom=0.0, right=2.0, top=1.2, wspace=0.2, hspace=0.3)
```
#### Fit a Gaussian Distribution
Let's fit a Gaussian distribution
* we use maximum likelihood estimation (MLE) to fit the mean and standard deviation of the Gaussian parametric distribution
```python
por_values = np.linspace(0.0,0.25,100)
fit_mean, fit_stdev = norm.fit(por,loc = por_average, scale = por_std) # fit MLE of the distribution
cumul_p = norm.cdf(por_values, loc = fit_mean, scale = fit_stdev)
# plot the cumulative probabilities vs. the sorted porosity values
plt.subplot(122)
plt.scatter(por_sort, p, c = 'red', edgecolors = 'black', s = 10, alpha = 0.7)
plt.plot(por_values,cumul_p, c = 'black')
plt.xlabel('Porosity (fraction)'); plt.ylabel('Cumulative Probability'); plt.grid();
plt.title('Nonparametric Porosity CDF')
plt.ylim([0,1]); plt.xlim([0,0.25])
plt.subplots_adjust(left=0.0, bottom=0.0, right=2.0, top=1.2, wspace=0.2, hspace=0.3)
```
#### Comments
This was a basic demonstration of univariate statistics in Python.
I have other demonstrations on the basics of working with DataFrames, ndarrays, univariate statistics, plotting data, declustering, data transformations, trend modeling and many other workflows available at [Python Demos](https://github.com/GeostatsGuy/PythonNumericalDemos) and a Python package for data analytics and geostatistics at [GeostatsPy](https://github.com/GeostatsGuy/GeostatsPy).
I hope this was helpful,
*Michael*
#### The Author:
### Michael Pyrcz, Associate Professor, University of Texas at Austin
*Novel Data Analytics, Geostatistics and Machine Learning Subsurface Solutions*
With over 17 years of experience in subsurface consulting, research and development, Michael has returned to academia driven by his passion for teaching and enthusiasm for enhancing engineers' and geoscientists' impact in subsurface resource development.
For more about Michael check out these links:
#### [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1)
#### Want to Work Together?
I hope this content is helpful to those that want to learn more about subsurface modeling, data analytics and machine learning. Students and working professionals are welcome to participate.
* Want to invite me to visit your company for training, mentoring, project review, workflow design and / or consulting? I'd be happy to drop by and work with you!
* Interested in partnering, supporting my graduate student research or my Subsurface Data Analytics and Machine Learning consortium (co-PIs including Profs. Foster, Torres-Verdin and van Oort)? My research combines data analytics, stochastic modeling and machine learning theory with practice to develop novel methods and workflows to add value. We are solving challenging subsurface problems!
* I can be reached at mpyrcz@austin.utexas.edu.
I'm always happy to discuss,
*Michael*
Michael Pyrcz, Ph.D., P.Eng., Associate Professor, The Hildebrand Department of Petroleum and Geosystems Engineering, Bureau of Economic Geology, The Jackson School of Geosciences, The University of Texas at Austin
#### More Resources Available at: [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1)
```python
```
| 32eb0695fdc6bfaaa30432fbb12676b116492ee4 | 75,497 | ipynb | Jupyter Notebook | PythonDataBasics_Univariate_Statistics.ipynb | caf3676/PythonNumericalDemos | MIT |
# Coupled ODE Integration
This notebook demonstrates integration of coupled ODEs and numerical integration of arrays of values in Python using a physically motivated example.
## A Ballistic Trajectory with Drag
The force on an object (a golfball or cannonball) moving in a uniform gravitation field with air resistance is given by
$$
\vec{F} = -mg\hat{z}-\frac{1}{2}\rho C_d Av^2 \hat{v},
$$
where gravity acts in the $\hat{z}$ direction, $\rho$ is the density of air, $C_d$ is the drag coefficient of the object in air, and $A$ is its cross-sectional area. $C_d$ depends heavily on whether or not the air flow in the wake of the object is laminar or turbulent, and thus depends on both the speed and shape of the object.
Writing this expression in terms of the object's position vector $\vec{r}$ we have
$$
\begin{align}
m\ddot{\vec{r}} &= -mg\hat{z} -\frac{1}{2}\rho C_d A|\dot{\vec{r}}|^2 \frac{\dot{\vec{r}}}{|\dot{\vec{r}}|} \\
\frac{d}{dt}\dot{\vec{r}} &= -g\hat{z} - \alpha|\dot{\vec{r}}|\dot{\vec{r}},
\end{align}
$$
where $\alpha=1/(2m)~\rho C_d A$. Without loss of generality, restrict the motion to the $xz$ plane. Then the expression above can be written as two coupled first-order ODEs in velocity:
$$
\begin{align}
\frac{d}{dt}\dot{x} &= -\alpha\dot{x}\sqrt{\dot{x}^2 + \dot{z}^2},
&
\frac{d}{dt}\dot{z} &= -g -\alpha\dot{z}\sqrt{\dot{x}^2 + \dot{z}^2}.
\end{align}
$$
## Numerical Solution
In this example, we use the `odeint` function from `scipy.integrate` to solve for the components of the velocity $\dot{x}$ and $\dot{z}$. Then we can use the trapezoidal rule to estimate the trajectory $x(t)$ and $z(t)$.
```python
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from scipy.integrate import odeint, cumtrapz
mpl.rc('font', size=14)
```
```python
g = 9.8
def ballistic(V, t, m, A, rho, Cd):
"""Velocity equations for ballistic motion with drag.
Assume SI units for all inputs.
Parameters
----------
V : list
Velocity in x and z.
t : float
Time point used to solve for v.
m : float
Object mass [kg].
A : float
Object area [m2].
rho : float
Air density [kg/m3].
Cd : float
Drag coefficient.
Returns
-------
dVdt : list
Velocity time derivative along x and z axes.
"""
vx, vz = V
alpha = rho*Cd*A/(2*m)
vmag = np.sqrt(vx**2 + vz**2)
dVdt = [ -alpha*vx*vmag,
-g -alpha*vz*vmag]
return dVdt
```
## Initial Conditions
Choose a $45^\circ$ initial trajectory and an initial velocity of 40 m/s or about 90 mph. Then integrate the motion for 6 seconds.
The object properties and drag coefficients are roughly those of a baseball.
```python
v0 = 40.
angle = 45*np.pi/180.
V0 = [v0*np.cos(angle), v0*np.sin(angle)]
t = np.linspace(0, 6., 1001)
# Object properties and drag coefficients.
m = 0.15 # kg
rho = 1.225 # kg/m3
Cd = 0.5
r = 0.0366
A = np.pi * r**2 # m2
```
## Solve for the Motion
Consider several cases:
1. No drag (vacuum).
2. Drag, using sea-level air density.
3. Drag, using air density at 1.4 km.
4. Drag, using air density at 4 km.
```python
# Solve for vx, vz with no drag.
# Then numerically integrate vx, vz to get x(t) and z(t).
v = odeint(ballistic, V0, t, args=(m, A, rho, 0.)).T
x_vac, z_vac = [cumtrapz(v[i], t) for i in range(2)]
# Solve with drag.
# Numerically integrate vx, vz to get x(t) and z(t).
v = odeint(ballistic, V0, t, args=(m, A, rho, Cd)).T
x_air, z_air = [cumtrapz(v[i], t) for i in range(2)]
# Solve with drag, using high-altitude air density (~1.4 km up).
# Numerically integrate vx, vz to get x(t) and z(t).
v = odeint(ballistic, V0, t, args=(m, A, 1.069, Cd)).T
x_alt, z_alt = [cumtrapz(v[i], t) for i in range(2)]
# Solve with drag, using very high-altitude air density (~4 km).
# Numerically integrate vx, vz to get x(t) and z(t).
v = odeint(ballistic, V0, t, args=(m, A, 0.819, Cd)).T
x_ha, z_ha = [cumtrapz(v[i], t) for i in range(2)]
```
```python
fig, ax = plt.subplots(1,1, figsize=(10,4), tight_layout=True)
ax.plot(x_vac, z_vac, 'k:', label='vacuum')
ax.plot(x_air, z_air, 'k', label=r'$\rho=1.225$ kg m$^{-3}$')
ax.plot(x_alt, z_alt, 'k--', label=r'$\rho=1.069$ kg m$^{-3}$')
ax.plot(x_ha, z_ha, 'k-.', label=r'$\rho=0.819$ kg m$^{-3}$')
ax.set(aspect='equal',
xlabel='$x$ [m]',
ylim=(0,1.3*np.max(z_vac)),
ylabel='$z$ [m]')
leg = ax.legend(fontsize=11)
```
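As a quick sanity check on the vacuum curve, the drag-free trajectory has a closed form, so the flight time, range, and apex height can be compared against the integrated solution. A small check, reusing `v0`, `angle`, `g`, and `z_vac` from the cells above:
```python
# Analytic drag-free results, for comparison with the x_vac, z_vac curve above.
t_flight = 2*v0*np.sin(angle)/g       # time to return to z = 0, ~5.77 s
x_range = v0**2*np.sin(2*angle)/g     # horizontal range, ~163 m
z_apex = (v0*np.sin(angle))**2/(2*g)  # apex height, ~41 m
print(t_flight, x_range, z_apex)
print(np.max(z_vac))                  # should be close to z_apex
```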
| dd37d9a69dd6f00d28a2da10a79df64366d557da | 42,273 | ipynb | Jupyter Notebook | nb/ode-integration.ipynb | sybenzvi/PHY403 | 159f1203b5fc92ffc1f2b7dc9fef3c2f78207dd7 | [
"BSD-3-Clause"
] | 3 | 2020-05-27T23:51:39.000Z | 2021-02-03T03:34:53.000Z | nb/ode-integration.ipynb | sybenzvi/PHY403 | 159f1203b5fc92ffc1f2b7dc9fef3c2f78207dd7 | [
"BSD-3-Clause"
] | null | null | null | nb/ode-integration.ipynb | sybenzvi/PHY403 | 159f1203b5fc92ffc1f2b7dc9fef3c2f78207dd7 | [
"BSD-3-Clause"
] | 7 | 2020-05-06T16:01:09.000Z | 2022-02-04T18:47:26.000Z | 187.048673 | 35,232 | 0.887067 | true | 1,475 | Qwen/Qwen-72B | 1. YES
2. YES | 0.891811 | 0.870597 | 0.776408 | __label__eng_Latn | 0.936087 | 0.642189 |
Determine the function $v\left(x\right)$ describing the deflection of the centroidal axis of the beam shown in the figure below, using the finite element method with planar straight beam elements.
Examine the error of the bending moment obtained from the finite element solution on each segment.
Determine the magnitude of the bending moment at the cross-section $x = a/2$ using 2 and then 3 planar straight beam elements.
The structure is assembled from members of circular cross-section with two different diameters ($d_1 = 2d$ and $d_2 = d$).
The material of the members is linearly elastic, homogeneous, and isotropic. The elastic modulus of the part with diameter $d_1$ is $E$, while that of the part with diameter $d_2$ is $4E$.
```python
import sympy as sp
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
sp.init_printing()
```
```python
ro, xi, L, A, I, E = sp.symbols("rho, xi L A I E")
```
```python
Nv = 1/8*sp.Matrix([2*(1-xi)**2*(2+xi),
L*(1-xi)**2*(1+xi),
2*(1+xi)**2*(2-xi),
L*(1+xi)**2*(xi-1)])
Nv
```
```python
Nfi=sp.diff(Nv,xi)*(2/L)
sp.simplify((1/L)**(-1)*Nfi)
```
```python
Bv=sp.diff(Nv,xi,2)*(2/L)**2
sp.simplify((1/L**2)**(-1)*Bv)
```
```python
KeBEAM1D = I*E*sp.integrate(Bv*(Bv.T),(xi,-1,1))*L/2
(I*E/L**3)**(-1)*KeBEAM1D
```
```python
MeBEAM1D = A*ro*sp.integrate(Nv*(Nv.T),(xi,-1,1))*L/2
print((A*ro*L/420)**(-1)*MeBEAM1D)
```
Matrix([[156.000000000000, 22.0*L, 54.0000000000000, -13.0*L], [22.0*L, 4.0*L**2, 13.0*L, -3.0*L**2], [54.0000000000000, 13.0*L, 156.000000000000, -22.0*L], [-13.0*L, -3.0*L**2, -22.0*L, 4.0*L**2]])
```python
def Nvfgv(xi,L):
return np.array([1/4*(1-xi)**2*(2+xi),
L/8*(1-xi)**2*(1+xi),
1/4*(1+xi)**2*(2-xi),
L/8*(1+xi)**2*(xi-1)])
```
```python
def Nfifgv(xi,L):
return np.multiply(1/L,np.array([1.5*xi**2-1.5,
L*(0.75*xi+0.25)*(xi-1),
-1.5*xi**2+1.5,
L*(0.75*xi-0.25)*(xi+1)]))
```
```python
def KeBEAM1D(I,E,L):
return np.multiply(I*E/L**3, np.array([[12, 6*L, -12, 6*L],
[6*L, 4*L**2, -6*L, 2*L**2],
[-12, -6*L, 12, -6*L],
[6*L, 2*L**2, -6*L, 4*L**2]]))
```
```python
def MeBEAM1D(A,ro,L):
return np.multiply(A*ro*L/420, np.array([
[156, 22*L, 54, -13*L],
[22*L, 4*L**2, 13*L, -3*L**2],
[54, 13*L, 156, -22*L],
[-13*L, -3*L**2, -22*L, 4*L**2]]))
```
```python
L1 = 200e-3
L2 = 600e-3
L3 = 300e-3
d1 = 20e-3
d2 = 30e-3
d3 = 20e-3
d4 = 300e-3
s1 = 10e3
s2 = 1000e3
s3 = 2500e3
m = 0.5
E = 200e9
ro = 7850
```
```python
I1 = (d1)**4*np.pi/64
I2 = (d2)**4*np.pi/64
I3 = (d3)**4*np.pi/64
A1 = (d1)**2*np.pi/4
A2 = (d2)**2*np.pi/4
A3 = (d3)**2*np.pi/4
Ke1 = KeBEAM1D(I1,E,L1)
Ke2 = KeBEAM1D(I2,E,L2)
Ke3 = KeBEAM1D(I3,E,L3)
Me1 = MeBEAM1D(A1,ro,L1)
Me2 = MeBEAM1D(A2,ro,L2)
Me3 = MeBEAM1D(A3,ro,L3)
thetaz = 1/4*m*(d4/2)**2
```
```python
elemSZF = np.array([[1,2,3,4],[3,4,5,6],[5,6,7,8]]) - 1
```
```python
KG=np.zeros((8,8))
MG=np.zeros((8,8))
```
```python
eind=0
KG[np.ix_(elemSZF[eind],elemSZF[eind])]+=Ke1
MG[np.ix_(elemSZF[eind],elemSZF[eind])]+=Me1
```
```python
eind=1
KG[np.ix_(elemSZF[eind],elemSZF[eind])]+=Ke2
MG[np.ix_(elemSZF[eind],elemSZF[eind])]+=Me2
```
```python
eind=2
KG[np.ix_(elemSZF[eind],elemSZF[eind])]+=Ke3
MG[np.ix_(elemSZF[eind],elemSZF[eind])]+=Me3
```
```python
KG[elemSZF[0][0],elemSZF[0][0]]+=s1
KG[elemSZF[0][2],elemSZF[0][2]]+=s2
KG[elemSZF[1][2],elemSZF[1][2]]+=s3
```
```python
MG[elemSZF[2][2],elemSZF[2][2]] += m
MG[elemSZF[2][3],elemSZF[2][3]] += thetaz
```
```python
fixSZF=np.array([])
```
```python
szabadSZF=[i for i in range(0,8) if i not in fixSZF]
szabadSZF
```
```python
KK = KG[np.ix_(szabadSZF,szabadSZF)]
MK = MG[np.ix_(szabadSZF,szabadSZF)]
```
```python
sajatErtek, sajatVektor = np.linalg.eig(np.linalg.solve(MK,KK))
np.sqrt(sajatErtek)
```
array([ 26458.01845952, 7429.77280567, 4367.3802812 , 2525.46582768,
1682.73657981, 1012.43565175, 715.60241866, 374.06415577])
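These are the undamped natural angular frequencies in rad/s (note that `np.linalg.eig` returns them in no particular order). A short follow-up, reusing `sajatErtek` from above, sorts them and converts them to Hz:
```python
# Sort the natural angular frequencies (rad/s) and convert them to Hz.
omega = np.sort(np.sqrt(sajatErtek))
f_Hz = omega/(2*np.pi)
print(f_Hz)   # the lowest frequency, ~374 rad/s, is about 59.5 Hz
```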
```python
UG1 = np.zeros(8)
UG1[np.ix_(szabadSZF)] = sajatVektor[:,-1]
UG2 = np.zeros(8)
UG2[np.ix_(szabadSZF)] = sajatVektor[:,-2]
UG3 = np.zeros(8)
UG3[np.ix_(szabadSZF)] = sajatVektor[:,-3]
```
# Plotting the results
```python
xiLista = np.linspace(-1,1,num = 10)
cspxKRD=[[0,L1],[L1,L1+L2],[L1+L2,L1+L2+L3]]
cspxKRD
```
```python
x1Lista=[(cspxKRD[0][1]-cspxKRD[0][0])/2*(xi+1) + cspxKRD[0][0] for xi in xiLista]
x2Lista=[(cspxKRD[1][1]-cspxKRD[1][0])/2*(xi+1) + cspxKRD[1][0] for xi in xiLista]
x3Lista=[(cspxKRD[2][1]-cspxKRD[2][0])/2*(xi+1) + cspxKRD[2][0] for xi in xiLista]
xLista = np.concatenate((x1Lista,x2Lista,x3Lista))
```
```python
v1Lista = [np.dot(Nvfgv(xi,L1),UG1[elemSZF[0]]) for xi in xiLista]
v2Lista = [np.dot(Nvfgv(xi,L2),UG1[elemSZF[1]]) for xi in xiLista]
v3Lista = [np.dot(Nvfgv(xi,L3),UG1[elemSZF[2]]) for xi in xiLista]
vLista = np.concatenate((v1Lista,v2Lista,v3Lista))
```
```python
figv = plt.figure(num = 1, figsize=(16/2.54,10/2.54))
axv = figv.add_subplot(111)
axv.plot(xLista,vLista)
plt.xlabel(r"$x \, \left[\mathrm{m}\right]$")
plt.ylabel(r"$v \, \left[\mathrm{m}\right]$")
plt.grid()
plt.legend()
plt.show()
```
```python
v1Lista = [np.dot(Nvfgv(xi,L1),UG2[elemSZF[0]]) for xi in xiLista]
v2Lista = [np.dot(Nvfgv(xi,L2),UG2[elemSZF[1]]) for xi in xiLista]
v3Lista = [np.dot(Nvfgv(xi,L3),UG2[elemSZF[2]]) for xi in xiLista]
vLista = np.concatenate((v1Lista,v2Lista,v3Lista))
```
```python
figv = plt.figure(num = 2, figsize=(16/2.54,10/2.54))
axv = figv.add_subplot(111)
axv.plot(xLista,vLista)
plt.xlabel(r"$x \, \left[\mathrm{m}\right]$")
plt.ylabel(r"$v \, \left[\mathrm{m}\right]$")
plt.grid()
plt.legend()
plt.show()
```
```python
v1Lista = [np.dot(Nvfgv(xi,L1),UG3[elemSZF[0]]) for xi in xiLista]
v2Lista = [np.dot(Nvfgv(xi,L2),UG3[elemSZF[1]]) for xi in xiLista]
v3Lista = [np.dot(Nvfgv(xi,L3),UG3[elemSZF[2]]) for xi in xiLista]
vLista = np.concatenate((v1Lista,v2Lista,v3Lista))
```
```python
figv = plt.figure(num = 3, figsize=(16/2.54,10/2.54))
axv = figv.add_subplot(111)
axv.plot(xLista,vLista)
plt.xlabel(r"$x \, \left[\mathrm{m}\right]$")
plt.ylabel(r"$v \, \left[\mathrm{m}\right]$")
plt.grid()
plt.legend()
plt.show()
```
| eb5200f337df6289fdb5c95e377412063b2b36ed | 75,483 | ipynb | Jupyter Notebook | 11/11_2_elem_2_csp.ipynb | TamasPoloskei/BME-VEMA | 542725bf78e9ad0962018c1cf9ff40c860f8e1f0 | [
"MIT"
] | null | null | null | 11/11_2_elem_2_csp.ipynb | TamasPoloskei/BME-VEMA | 542725bf78e9ad0962018c1cf9ff40c860f8e1f0 | [
"MIT"
] | 1 | 2018-11-20T14:17:52.000Z | 2018-11-20T14:17:52.000Z | 11/11_2_elem_2_csp.ipynb | TamasPoloskei/BME-VEMA | 542725bf78e9ad0962018c1cf9ff40c860f8e1f0 | [
"MIT"
] | null | null | null | 110.033528 | 15,418 | 0.841646 | true | 2,862 | Qwen/Qwen-72B | 1. YES
2. YES | 0.877477 | 0.66888 | 0.586927 | __label__hun_Latn | 0.487912 | 0.201958 |
```python
import numpy as np
import sympy as sp
import matplotlib.pyplot as plt
import math
import plot_ball as p1
%matplotlib inline
```
## Andrew Malfavon Exercise 5.9
Plot a formula for the trajectory of a ball
$$y(t) = v_0 t - \frac{1}{2} g t^2$$
```python
p1.plot_here()
```
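The `plot_ball` helper module is not shown in this notebook. A minimal standalone sketch of the same plot, with assumed values $v_0 = 5\,\mathrm{m/s}$ and $g = 9.81\,\mathrm{m/s^2}$ (the exercise does not fix them here), might look like:
```python
# Hypothetical stand-in for p1.plot_here(): plot y(t) = v0*t - 0.5*g*t**2.
v0, g = 5.0, 9.81                 # assumed values
t = np.linspace(0, 2*v0/g, 100)   # from launch until the ball returns to y = 0
plt.plot(t, v0*t - 0.5*g*t**2)
plt.xlabel('t (s)')
plt.ylabel('y (m)')
plt.show()
```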
| dcf7b2befd13bb2b15561e57a3234cc9fd7c119f | 15,666 | ipynb | Jupyter Notebook | plot_ballNB.ipynb | chapman-phys227-2016s/cw-2-classwork-team | 24ab8e661ec7b842fcb4688f25605bef573180a7 | [
"MIT"
] | null | null | null | plot_ballNB.ipynb | chapman-phys227-2016s/cw-2-classwork-team | 24ab8e661ec7b842fcb4688f25605bef573180a7 | [
"MIT"
] | null | null | null | plot_ballNB.ipynb | chapman-phys227-2016s/cw-2-classwork-team | 24ab8e661ec7b842fcb4688f25605bef573180a7 | [
"MIT"
] | null | null | null | 203.454545 | 14,280 | 0.916699 | true | 88 | Qwen/Qwen-72B | 1. YES
2. YES
| 0.899121 | 0.763484 | 0.686465 | __label__eng_Latn | 0.837432 | 0.433218 |
# Emission of Thermal Bremsstrahlung by a Maxwellian Plasma
[thermal_bremsstrahlung]: ../../api/plasmapy.formulary.radiation.thermal_bremsstrahlung.rst#plasmapy.formulary.radiation.thermal_bremsstrahlung
The [radiation.thermal_bremsstrahlung][thermal_bremsstrahlung] function calculates the bremsstrahlung spectrum emitted by the collision of electrons and ions in a thermal (Maxwellian) plasma. This function calculates this quantity in the Rayleigh-Jeans limit where $\hbar\omega \ll k_B T_e$. In this regime, the power spectrum of the emitted radiation is
\begin{equation}
\frac{dP}{d\omega} = \frac{8 \sqrt{2}}{3\sqrt{\pi}} \bigg ( \frac{e^2}{4 \pi \epsilon_0} \bigg )^3 \bigg ( m_e c^2 \bigg )^{-\frac{3}{2}} \bigg ( 1 - \frac{\omega_{pe}^2}{\omega^2} \bigg )^\frac{1}{2} \frac{Z_i^2 n_i n_e}{\sqrt{k_B T_e}} E_1(y)
\end{equation}
where $\omega_{pe}$ is the electron plasma frequency and $E_1$ is the exponential integral
\begin{equation}
E_1 (y) = \int_{y}^\infty \frac{e^{-t}}{t}\,dt
\end{equation}
and y is the dimensionless argument
\begin{equation}
y = \frac{1}{2} \frac{\omega^2 m_e}{k_{max}^2 k_B T_e}
\end{equation}
where $k_{max}$ is a maximum wavenumber arising from binary collisions approximated here as
\begin{equation}
k_{max} = \frac{1}{\lambda_B} = \frac{\sqrt{m_e k_B T_e}}{\hbar}
\end{equation}
where $\lambda_B$ is the electron de Broglie wavelength. In some regimes other values for $k_{max}$ may be appropriate, so its value may be set using a keyword. Bremsstrahlung emission is greatly reduced below the electron plasma frequency (where the plasma is opaque to EM radiation), so these expressions are only valid in the regime $\omega > \omega_{pe}$.
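As a quick check of this condition, the electron plasma frequency for the density used in this example can be computed directly from physical constants. This is a small sketch using `astropy` only (PlasmaPy also provides formulary helpers for the same quantity):
```python
import astropy.constants as const
import astropy.units as u
import numpy as np

ne = 1e22 * u.cm ** -3
w_pe = np.sqrt(ne * const.e.si ** 2 / (const.eps0 * const.m_e)).to(1 / u.s)
print(w_pe)  # electron plasma frequency (rad/s) for this density, ~5.6e15
```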
```python
%matplotlib inline
import astropy.constants as const
import astropy.units as u
import matplotlib.pyplot as plt
import numpy as np
from plasmapy.formulary.radiation import thermal_bremsstrahlung
```
Create an array of frequencies over which to calculate the bremsstrahlung spectrum and convert these frequencies to photon energies for the purpose of plotting the results. Set the plasma density, temperature, and ion species.
```python
frequencies = np.arange(15, 16, 0.01)
frequencies = (10 ** frequencies) / u.s
energies = (frequencies * const.h.si).to(u.eV)
ne = 1e22 * u.cm ** -3
Te = 1e2 * u.eV
ion_species = "C-12 4+"
```
Calculate the spectrum, then plot it.
```python
spectrum = thermal_bremsstrahlung(frequencies, ne, Te, ion_species=ion_species)
print(spectrum.unit)
lbl = "$T_e$ = {:.1e} eV,\n".format(Te.value) + "$n_e$ = {:.1e} 1/cm^3".format(ne.value)
plt.plot(energies, spectrum, label=lbl)
plt.title(f"Thermal Bremsstrahlung Spectrum")
plt.xlabel("Energy (eV)")
plt.ylabel("Power Spectral Density (W s/m^3)")
plt.legend()
plt.show()
```
The power spectrum is the power per angular frequency per volume integrated over $4\pi$ sr of solid angle, and therefore has units of watts / (rad/s) / m$^3$ * $4\pi$ rad = W s/m$^3$.
```python
spectrum = spectrum.to(u.W * u.s / u.m ** 3)
spectrum.unit
```
This means that, for a given volume and time period, the total energy emitted can be determined by integrating the power spectrum
```python
t = 5 * u.ns
vol = 0.5 * u.cm ** 3
dw = 2 * np.pi * np.gradient(frequencies) # Frequency step size
total_energy = (np.sum(spectrum * dw) * t * vol).to(u.J)
print("Total Energy: {:.2e} J".format(total_energy.value))
```
| c1b7204ac4789f198dc9a70d33b7eb4a639bc25f | 5,477 | ipynb | Jupyter Notebook | docs/notebooks/formulary/thermal_bremsstrahlung.ipynb | seanjunheng2/PlasmaPy | 7b4e4aaf8b03d88b654456bca881329ade09e377 | [
"BSD-2-Clause",
"MIT",
"BSD-2-Clause-Patent",
"BSD-1-Clause",
"BSD-3-Clause"
] | 429 | 2016-10-31T19:40:32.000Z | 2022-03-25T12:27:11.000Z | docs/notebooks/formulary/thermal_bremsstrahlung.ipynb | RAJAGOPALAN-GANGADHARAN/PlasmaPy | 6df9583cc47375687a07300c0aa11ba31634d770 | [
"BSD-2-Clause",
"MIT",
"BSD-2-Clause-Patent",
"BSD-1-Clause",
"BSD-3-Clause"
] | 1,400 | 2015-11-24T23:00:44.000Z | 2022-03-30T21:03:25.000Z | docs/notebooks/formulary/thermal_bremsstrahlung.ipynb | RAJAGOPALAN-GANGADHARAN/PlasmaPy | 6df9583cc47375687a07300c0aa11ba31634d770 | [
"BSD-2-Clause",
"MIT",
"BSD-2-Clause-Patent",
"BSD-1-Clause",
"BSD-3-Clause"
] | 289 | 2015-11-24T18:54:57.000Z | 2022-03-18T17:26:59.000Z | 31.843023 | 366 | 0.585174 | true | 1,052 | Qwen/Qwen-72B | 1. YES
2. YES | 0.882428 | 0.757794 | 0.668699 | __label__eng_Latn | 0.868201 | 0.391943 |
---
# Section 5.3: The Power Method and Some Simple Extensions
---
Let $A \in \mathbb{C}^{n \times n}$ be a matrix with _linearly independent_ eigenvectors
$$
v_1, \ldots, v_n
$$
and corresponding eigenvalues
$$
\lambda_1, \ldots, \lambda_n
$$
(i.e., $A v_i = \lambda_i v_i$, for $i=1,\ldots,n$) ordered such that
$$
|\lambda_1| \ge |\lambda_2| \ge \cdots \ge |\lambda_n|.
$$
We say that $A$ has a **dominant eigenvalue** if
$$
|\lambda_1| > |\lambda_2|.
$$
---
## The Power Method
The basic idea of the **power method** is to pick a vector $q \in \mathbb{C}^n$ and compute the sequence
$$
q,\ A q,\ A^2 q,\ A^3 q,\ \ldots.
$$
Since the eigenvectors $v_1,\ldots,v_n$ form a basis for $\mathbb{C}^n$, we have that
$$
q = c_1 v_1 + \cdots + c_n v_n.
$$
For a random $q$, we expect $c_1 \ne 0$.
Then
$$
\begin{align}
A q
&= c_1 A v_1 + \cdots + c_n A v_n \\
&= c_1 \lambda_1 v_1 + \cdots + c_n \lambda_n v_n
\end{align}
$$
and
$$
\begin{align}
A^2 q
&= c_1 \lambda_1 A v_1 + \cdots + c_n \lambda_n A v_n \\
&= c_1 \lambda_1^2 v_1 + \cdots + c_n \lambda_n^2 v_n.
\end{align}
$$
In general, we have
$$
A^j q = c_1 \lambda_1^j v_1 + \cdots + c_n \lambda_n^j v_n
$$
and
$$
\frac{A^j q}{\lambda_1^j} = c_1 v_1 + c_2 \left(\frac{\lambda_2}{\lambda_1}\right)^j v_2 + \cdots + c_n \left(\frac{\lambda_n}{\lambda_1}\right)^j v_n.
$$
Letting
$$
q_j = \frac{A^j q}{\lambda_1^j},
$$
we have
$$
\begin{align}
\| q_j - c_1 v_1 \|
&= \left\| c_2 \left(\frac{\lambda_2}{\lambda_1}\right)^j v_2 + \cdots + c_n \left(\frac{\lambda_n}{\lambda_1}\right)^j v_n \right\| \\
&\le |c_2| \left|\frac{\lambda_2}{\lambda_1}\right|^j \|v_2\| + \cdots + |c_n| \left|\frac{\lambda_n}{\lambda_1}\right|^j \|v_n\| \\
&\le \left|\frac{\lambda_2}{\lambda_1}\right|^j \big(|c_2| \|v_2\| + \cdots + |c_n| \|v_n\|\big).
\end{align}
$$
Now suppose $|\lambda_1| > |\lambda_2|$. Then
$$
\left|\frac{\lambda_2}{\lambda_1}\right| < 1.
$$
Therefore,
$$
\left|\frac{\lambda_2}{\lambda_1}\right|^j \to 0 \quad \text{as} \ j \to \infty.
$$
Thus, $\| q_j - c_1 v_1 \| \to 0$ as $j \to \infty$, so we conclude that
$$
q_j \to c_1 v_1 \quad \text{as $j \to \infty$.}
$$
The rate of the convergence of the power method is generally linear ($\|q_{j+1} - c_1 v_1\| \approx r \|q_j - c_1 v_1\|$ for all $j$ sufficiently large) with convergence ratio
$$
r = \left|\frac{\lambda_2}{\lambda_1}\right|.
$$
Thus, the larger the gap between $|\lambda_1|$ and $|\lambda_2|$, the smaller the convergence ratio and the faster the convergence.
---
## Scaling
Since we usually do not know $\lambda_1$ while running the power method, we will not be able to compute $q_j = A^j q/\lambda_1^j$. However, it is important that we scale $A^j q$ since $\|A^j q\| \to \infty$ if $|\lambda_1| > 1$ and $\|A^j q\| \to 0$ if $|\lambda_1| < 1$.
A simple choice is to scale $A^j q$ so that its largest entry is equal to one. Thus, we let
$$
q_{j+1} = \frac{A q_j}{s_{j+1}},
$$
where $s_{j+1}$ is the component of $A q_j$ which has the largest absolute value.
---
## Algorithm
Given $q_0 \in \mathbb{C}^n$, we iterate
1. $\hat{q} = A q_j$
2. $s_{j+1} =$ entry of $\hat{q}$ with largest absolute value
3. $q_{j+1} \gets \hat{q}/s_{j+1}$
for $j = 0, 1, 2, \ldots$.
Then $q_j$ approaches a multiple of $v_1$ and $s_j$ approaches the eigenvalue $\lambda_1$.
If $A$ is a dense $n \times n$ matrix, then each iteration of this algorithm will require $2n^2 + O(n)$ flops. However, if $A$ is sparse and has at most $k$ nonzeros on each row, then each iteration will require approximately $2 k n$ flops. Therefore, the power method is very well suited for computing the dominant eigenvalue and associated eigenvector of large sparse matrices.
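For instance, for the sparse test matrix used below ($n = 1000$ with about $k = 10$ nonzeros per row), one iteration costs roughly $2kn = 2\cdot 10\cdot 1000 = 2\times 10^{4}$ flops, versus $2n^2 = 2\times 10^{6}$ flops if the same matrix were treated as dense.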
---
## `power_method`
```julia
using LinearAlgebra, SparseArrays
```
```julia
function scale!(q)
maxval, idx = maximum((abs(q[i]),i) for i=1:length(q))
s = q[idx]
q ./= s
return s
end
```
```julia
function power_method(A; tol=sqrt(eps())/2, maxiter=100_000)
m, n = size(A)
n == m || error("Matrix must be square.")
q = randn(n)
s = scale!(q)
lam = s
qold = similar(q)
k = 0
done = false
while !done && k < maxiter
k += 1
copy!(qold, q) # qold = q
mul!(q, A, qold) # q = A*qold
s = scale!(q) # q = q/s
lam = s
done = norm(A*q - lam*q)/norm(q) <= tol
end
if done
println("Converged after $k iterations.")
else
println("Failed to converge.")
end
return lam, q
end
```
```julia
n = 1_000
k = 10
density = (k - 1)/n # density = (k*n - n)/n^2
A = triu(sprand(n, n, density), 1)
A = A + A' + I
# Expect nnz(A) ≈ k*n
@show nnz(A)
if n <= 1000
λ = eigvals(Matrix(A))
abseig = abs.(λ) |> sort
r = abseig[end-1]/abseig[end]
@show r
end
println()
@time lam, q = power_method(A)
@show lam
@show norm(A*q - lam*q)/norm(q);
```
---
## Google PageRank Algorithm
Google uses its [PageRank](https://en.wikipedia.org/wiki/PageRank) algorithm to determine its ranking of webpages in search results.
The [Google matrix](https://en.wikipedia.org/wiki/Google_matrix) represents how webpages on the Internet link to one another.
PageRank uses the power method to compute the dominant eigenvector of the Google matrix, and this dominant eigenvector is then used to rank the importance of webpages.
By design, the convergence ratio of the Google matrix is
$$
\left|\frac{\lambda_2}{\lambda_1}\right| = 0.85,
$$
so the number of power method iterations is reasonable.
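For example, since the error shrinks by a factor of about $0.85$ per iteration, driving it down by a factor of $10^{-6}$ takes roughly $\ln(10^{-6})/\ln(0.85) \approx 85$ iterations, independent of the enormous size of the matrix.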
---
## The Inverse Power Method
Let $A \in \mathbb{C}^{n \times n}$ be nonsingular. Since $A$ is nonsingular, all of its eigenvalues are nonzero.
Since
$$
A v = \lambda v \quad \implies \quad A^{-1} v = \lambda^{-1} v,
$$
the eigenvalues of $A^{-1}$ are $\lambda_1^{-1},\ldots,\lambda_n^{-1}$ and the corresponding eigenvectors are $v_1,\ldots,v_n$.
Since
$$
|\lambda_1| \ge |\lambda_2| \ge \cdots \ge |\lambda_n|,
$$
we have that
$$
\left|\lambda_1^{-1}\right| \le \left|\lambda_2^{-1}\right| \le \cdots \le \left|\lambda_n^{-1}\right|.
$$
If $|\lambda_{n-1}| > |\lambda_n|$, then $\left|\lambda_n^{-1}\right| > \left|\lambda_{n-1}^{-1}\right|$, so the **inverse power method**,
$$
q,\ A^{-1} q,\ A^{-2} q,\ A^{-3} q,\ \ldots,
$$
will generate a sequence $q_j$ that converges to a multiple of $v_n$ (i.e., the eigenvector corresponding to the _smallest_ eigenvalue of $A$).
---
## `inverse_power_method`
```julia
function inverse_power_method(A; tol=sqrt(eps())/2, maxiter=100_000)
m, n = size(A)
n == m || error("Matrix must be square.")
F = lu(A)
q = randn(n)
s = scale!(q)
lam = 1/s
qold = similar(q)
k = 0
done = false
while !done && k < maxiter
k += 1
copy!(qold, q) # qold = q
ldiv!(q, F, qold) # q = F\qold
s = scale!(q) # q = q/s
lam = 1/s
done = norm(A*q - lam*q)/norm(q) <= tol
end
if done
println("Converged after $k iterations.")
else
println("Failed to converge.")
end
return lam, q
end
```
```julia
n = 1000
k = 5
density = (k - 1)/n # density = (k*n - n)/n^2
A = triu(sprand(n, n, density), 1)
A = A + A' + I
# Expect nnz(A) ≈ k*n
@show nnz(A)
if n <= 1000
λ = eigvals(Matrix(A))
abseig = abs.(λ) |> sort
r = abseig[1]/abseig[2]
@show r
end
println()
@time lam, q = inverse_power_method(A)
@show lam
@show norm(A*q - lam*q)/norm(q);
```
---
## The Shift-and-Invert Method
If $A v = \lambda v$, then
$$
\big( A - \rho I \big) v = \big( \lambda - \rho \big) v.
$$
Therefore, using the inverse power method on $A - \rho I$, we can compute an eigenvector with eigenvalue closest to the shift $\rho$.
That is, if
$$
|\lambda_i - \rho| \ll |\lambda_j - \rho|, \quad \forall j \ne i,
$$
then the **shift-and-invert method**,
$$
q,\ (A - \rho I)^{-1} q,\ (A - \rho I)^{-2} q,\ (A - \rho I)^{-3} q,\ \ldots,
$$
will generate a sequence $q_j$ that converges to a multiple of $v_i$.
The rate of convergence is
$$
\left| \frac{\lambda_i - \rho}{\lambda_k - \rho} \right|,
$$
where $\lambda_k - \rho$ is the second smallest eigenvalue of $A - \rho I$ in absolute value.
Once we have an $LU$ decomposition of $A - \rho I$ (which requires $\frac{2}{3}n^3 + O(n^2)$ flops), we can compute
$$
q \gets (A - \rho I)^{-1} q
$$
each iteration in $2 n^2$ flops.
---
## `inverse_power_method` with shift $\rho$
```julia
function inverse_power_method(A; ρ=0.0, tol=sqrt(eps())/2, maxiter=100_000)
m, n = size(A)
n == m || error("Matrix must be square.")
F = lu(A - ρ*I)
q = randn(n)
s = scale!(q)
lam = 1/s + ρ
qold = similar(q)
k = 0
done = false
while !done && k < maxiter
k += 1
copy!(qold, q) # qold = q
ldiv!(q, F, qold) # q = F\qold
s = scale!(q) # q = q/s
lam = 1/s + ρ
done = norm(A*q - lam*q)/norm(q) <= tol
end
if done
println("Converged after $k iterations.")
else
println("Failed to converge.")
end
return lam, q
end
```
```julia
n = 1000
k = 5
density = (k - 1)/n # density = (k*n - n)/n^2
A = triu(sprand(n, n, density), 1)
A = A + A' + I
ρ = 2.0
# Expect nnz(A) ≈ k*n
@show nnz(A)
if n <= 1000
λ = eigvals(Matrix(A))
abseig = abs.(λ .- ρ) |> sort
r = abseig[1]/abseig[2]
@show r
end
println()
@time lam, q = inverse_power_method(A, ρ=ρ)
@show ρ, lam
@show norm(A*q - lam*q)/norm(q);
```
---
## Rayleigh Quotient Iteration
Suppose $q \in \mathbb{C}^n$ approximates an eigenvector of $A$. If $A q = \rho q$, then $\rho$ is an eigenvalue of $A$. Otherwise, we want to find the value of $\rho$ that minimizes
$$
\| A q - \rho q \|_2.
$$
The _normal equations_ for this least squares problem are
$$
(q^* q) \rho = q^* A q,
$$
where $q^*$ is the **conjugate transpose** of $q$.
For example, if
$$
q = \begin{bmatrix} 1 + 3 i \\ 4 - 2 i \end{bmatrix},
$$
then
$$
q^* = \begin{bmatrix} 1 - 3 i & 4 + 2 i \end{bmatrix}.
$$
Note that $q^* q = \|q\|_2^2$.
The solution of the normal equations is
$$
\rho = \frac{q^* A q}{q^* q}
$$
and is called the **Rayleigh quotient**.
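For a real vector $q$ that has been normalized so that $q^* q = 1$ (as in the `rayleigh` implementation below), this reduces to $\rho = q^T A q$, computed as `dot(q, A*q)`.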
The **Rayleigh quotient iteration** uses
$$
\rho_j = \frac{q_j^* A q_j}{q_j^* q_j}
$$
as the _shift_ in each iteration of the inverse power method.
Since the shift changes each iteration, we need to compute an $LU$ decomposition _each iteration_. This can be very expensive since each iteration will now cost $\frac{2}{3}n^3 + O(n^2)$ flops.
To make the Rayleigh quotient iteration practical, we can first compute a "simple" matrix $H$ that is _similar_ to $A$, such as an **upper Hessenberg** matrix
$$
H =
\begin{bmatrix}
* & * & * & * & * \\
* & * & * & * & * \\
& * & * & * & * \\
& & * & * & * \\
& & & * & * \\
\end{bmatrix}
$$
or a **tridiagonal** matrix
$$
H =
\begin{bmatrix}
* & * & & & \\
* & * & * & & \\
& * & * & * & \\
& & * & * & * \\
& & & * & * \\
\end{bmatrix}.
$$
Computing $LU$ decomposition of an upper Hessenberg matrix only needs $O(n^2)$ flops, and the same for a tridiagonal matrix only needs $O(n)$ flops. We will return to this topic in Section 5.5.
---
## `rayleigh`
```julia
using SuiteSparse
function rayleigh(A; ρ0=0.0, tol=sqrt(eps())/2, maxiter=100)
m, n = size(A)
n == m || error("Matrix must be square.")
q = randn(n)
normalize!(q)
ρ = ρ0
lam = dot(q, A*q)
qold = similar(q)
F = SuiteSparse.UMFPACK.UmfpackLU(
Ptr{Nothing}(), Ptr{Nothing}(), 0, 0,
Int[], Int[], Float64[], 0)
k = 0
done = false
while !done && k < maxiter
k += 1
copy!(qold, q) # qold = q
if k == 1
F = lu(A - ρ*I) # Creates symbolic factorization F
else
lu!(F, A - ρ*I) # Overwrites F with new factorization
end
ldiv!(q, F, qold) # q = (A - ρ*I)\qold
normalize!(q)
lam = dot(q, A*q)
if k > 1
ρ = lam
end
done = norm(A*q - lam*q) <= tol
end
if done
println("Converged after $k iterations.")
else
println("Failed to converge.")
end
return lam, q
end
```
```julia
n = 4000
k = 5
density = (k - 1)/n # density = (k*n - n)/n^2
A = triu(sprand(n, n, density), 1)
A = A + A' + I
ρ = 2.0
# Expect nnz(A) ≈ k*n
@show nnz(A)
if n <= 1000
λ = eigvals(Matrix(A))
abseig = abs.(λ .- ρ) |> sort
r = abseig[1]/abseig[2]
@show r
end
println()
println("Inverse power method:")
@time lam, q = inverse_power_method(A, ρ=ρ)
@show ρ, lam
@show norm(A*q - lam*q)/norm(q);
println()
println("Rayleigh quotient method:")
@time lam, q = rayleigh(A, ρ0=ρ)
@show ρ, lam
@show norm(A*q - lam*q);
```
---
## Quadratic convergence of the Rayleigh Quotient Iteration
> ### Theorem: (Rayleigh Quotient Approximates Eigenvalue)
>
> Let $A \in \mathbb{C}^{n \times n}$. Let $v$ be an eigenvector of $A$ with eigenvalue $\lambda$, and $\|v\|_2 = 1$.
>
> Let $q \in \mathbb{C}^n$ with $\|q\|_2 = 1$ and
>
> $$ \rho = q^* A q $$
>
> be the Rayleigh quotient of $q$. Then
>
> $$ |\lambda - \rho| \le 2 \|A\|_2 \|v - q\|_2. $$
Therefore, if $\|v - q\|_2 = O(\varepsilon)$, then $|\lambda - \rho| = O(\varepsilon)$.
Let $q_0 \in \mathbb{C}^n$ such that $\|q_0\|_2 = 1$, and let $q_j$, for $j=1,2,\ldots$, be defined by
$$
\rho_j = q_j^* A q_j,
\qquad
(A - \rho_j I) \hat{q}_{j+1} = q_j,
\qquad
q_{j+1} = \hat{q}_{j+1}/\|\hat{q}_{j+1}\|_2.
$$
Then $\|q_j\|_2 = 1$, for all $j$.
1. Suppose that $q_j \to v_i$ as $j \to \infty$. Then $\|v_i\|_2 = 1$ and $\rho_j \to \lambda_i$ as $j \to \infty$.
2. Let $\lambda_k$ be the closest eigenvalue to $\lambda_i$.
3. Suppose that $\rho_j \approx \lambda_i$.
Then
$$
\begin{align}
\|v_i - q_{j+1}\|_2
&\approx \left| \frac{(\lambda_k - \rho_j)^{-1}}{(\lambda_i - \rho_j)^{-1}} \right| \|v_i - q_j\|_2 \\
&= \left| \frac{\lambda_i - \rho_j}{\lambda_k - \rho_j} \right| \|v_i - q_j\|_2 \\
&\le \frac{2 \|A\|_2 \|v_i - q_j\|_2}{|\lambda_k - \rho_j|} \|v_i - q_j\|_2 \\
&\approx \frac{2 \|A\|_2}{|\lambda_k - \lambda_i|} \|v_i - q_j\|_2^2. \\
\end{align}
$$
Thus, we obtain the estimate
$$ \|v_i - q_{j+1}\|_2 \approx C \|v_i - q_j\|_2^2, $$
where $C = 2 \|A\|_2 / |\lambda_k - \lambda_i|$. This indicates that the Rayleigh quotient iteration typically converges _quadratically_ when it does converge.
Moreover, if $A$ is a symmetric matrix, then $\|v - q\|_2 = O(\varepsilon)$ implies that $|\lambda - \rho| = O(\varepsilon^2)$, which indicates _cubic_ convergence:
$$ \|v_i - q_{j+1}\|_2 \approx C \|v_i - q_j\|_2^3. $$
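As a concrete illustration of the symmetric case, take
$$
A = \begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix},
\qquad
q = \frac{1}{\sqrt{1+\varepsilon^2}} \begin{bmatrix} 1 \\ \varepsilon \end{bmatrix},
$$
so that $q$ differs from the eigenvector $v = \begin{bmatrix} 1 & 0 \end{bmatrix}^T$ by roughly $\varepsilon$. The Rayleigh quotient is
$$
\rho = q^* A q = \frac{2 + \varepsilon^2}{1 + \varepsilon^2} = 2 - \frac{\varepsilon^2}{1+\varepsilon^2},
$$
so the eigenvalue error is $O(\varepsilon^2)$ even though the eigenvector error is only $O(\varepsilon)$.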
---
| a2e7dec56a1e9d28831c2f9702f53adf0292aa00 | 25,548 | ipynb | Jupyter Notebook | Section 5.3 - The Power Method and Some Simple Extensions.ipynb | math434/fall2021math434 | 6317ce76de1eb7dbfdc3ea37a21dc5e1e3228316 | [
"MIT"
] | 1 | 2021-08-31T21:01:22.000Z | 2021-08-31T21:01:22.000Z | Section 5.3 - The Power Method and Some Simple Extensions.ipynb | math434/fall2021math434 | 6317ce76de1eb7dbfdc3ea37a21dc5e1e3228316 | [
"MIT"
] | null | null | null | Section 5.3 - The Power Method and Some Simple Extensions.ipynb | math434/fall2021math434 | 6317ce76de1eb7dbfdc3ea37a21dc5e1e3228316 | [
"MIT"
] | 1 | 2021-11-16T19:28:56.000Z | 2021-11-16T19:28:56.000Z | 27.034921 | 386 | 0.449663 | true | 5,458 | Qwen/Qwen-72B | 1. YES
2. YES | 0.845942 | 0.859664 | 0.727226 | __label__eng_Latn | 0.814838 | 0.527922 |
```
# default_exp system_response
```
```
#hide
%load_ext autoreload
%autoreload 2
```
```
#hide
# Makes it possible to do symbolic maths and use a control system lib
import sympy
sympy.init_printing()
# Let's also ignore some warnings here due to sympy using an old matplotlib function to render Latex equations.
import warnings
warnings.filterwarnings('ignore')
```
```
#export
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
```
```
#hide
%matplotlib inline
```
# System Response
This notebook analyses how different systems might respond to different classes of inputs.
- Dominant pole approximation
- First and second order systems
- System response and performance requirements
-----------------------
## First and second order systems, and the dominant pole approximation
Lower order (1st and 2nd) systems are well understood and easy to characterize (speed of the system, oscillations, damping, ...), but this is much more difficult for higher order systems.
One way to make many such systems easier to think about is to approximate the system by a lower order system using a technique called the dominant pole approximation. This approximation assumes that the slowest part of the system dominates the response, and that the faster part(s) of the system can be ignored.
**Notes:**
- In a transfer function representation, the *order* is the highest exponent in the transfer function. In a proper system, the system order is defined as the degree of the denominator polynomial.
- A *proper system* is a system where the degree of the denominator is larger than or equal to the degree of the numerator polynomial.
- A *strictly proper* system is a system where the degree of the denominator polynomial is larger than the degree of the numerator polynomial.
Let's take some examples and verify how a few systems behave.
We will use Sympy to do it, so we need to define some variables first.
```
t, K, tau = sympy.symbols('t, K, tau',real=True, positive=True)
s = sympy.Symbol('s')
u = sympy.Heaviside(t) # step function
def L(f):
return sympy.laplace_transform(f, t, s, noconds=True)
def invL(F):
return sympy.inverse_laplace_transform(F, s, t)
```
```
def evaluate(f, times):
res = []
for time in times:
res.append(f.evalf(subs={t:time}).n(chop=1e-5))
return res
```
**Side note:**
`chop` discards numerically negligible parts of an evaluated number (such as tiny spurious imaginary components) below the given tolerance. Below is an example
```
q1 = (-1.53283653303955 + 6.08703605256546e-17*sympy.I)
q2 = (-1.53283653303955 + 6.08703605256546e-5*sympy.I)
print('q1:', q1.n(chop=1e-5))
print('q2:', q2.n(chop=1e-5))
```
q1: -1.53283653303955
q2: -1.53283653303955 + 6.08703605256546e-5*I
### Dominant Poles
- When we have a BIBO stable system, every mode of the system is an exponentially damped signal.
- Beyond the initial transient, the main effect is driven by the slowest modes
For example let's consider these poles:
$$
p_1 = -0.1
$$
$$
p_{2,3} = -5 \pm 8.66j
$$
$$
p_4 = -15.5
$$
$$
p_{5,6} = -20 \pm 34.64j
$$
```
fig, axs = plt.subplots(1,1,figsize=(15,7))
plt.plot(-0.1, 0, marker='.', markersize=25, color='blue')
plt.plot([-5, -5], [-8.66, 8.66], marker='.', markersize=25, linestyle='', color='orange')
plt.plot(-15.5, 0, marker='.', markersize=25, color='green')
plt.plot([-20, -20], [-34.64, 34.64], marker='.', markersize=25, linestyle='', color='red')
axs.set_xlim([-24, 1])
axs.set_ylim([-36, 36])
axs.set_xlabel('Re')
axs.set_ylabel('Img')
plt.legend(['$p_1$', '$p_{2,3}$', '$p_4$', '$p_{5,6}$'])
plt.grid()
```
We can now verify what output is associated to each of these poles.
To do this we define four systems that have poles at the positions depicted above:
$$ G_1(s) = \frac{1}{(s + 0.1)} $$
$$ G_2(s) = \frac{100}{(s^2 + 10s + 100)} $$
$$ G_3(s) = \frac{15.5}{(s + 15.5)} $$
$$ G_4(s) = \frac{1600}{(s^2 + 40s + 1600)} $$
```
fig, axs = plt.subplots(1,4,figsize=(15,7))
time = np.linspace(0,10,100)
# Define our systems
G1 = 1/(s + 0.1)
zeta, w_n = 0.5, 10
G2 = w_n**2/(s**2 + 2*zeta*w_n*s + w_n**2)
G3 = 15.5/(s + 15.5)
zeta, w_n = 0.5, 40
G4 = w_n**2/(s**2 + 2*zeta*w_n*s + w_n**2)
# plot them
axs[0].plot(time, evaluate(invL(G1), time), color='blue', linewidth=3), axs[0].set_ylim(-.5, 1), axs[0].set_title('G1'), axs[0].set_xlabel('time (s)')
axs[1].plot(time, evaluate(invL(G2), time), color='orange', linewidth=3), axs[1].set_ylim(-1, 1), axs[1].set_title('G2'), axs[1].set_xlabel('time (s)')
axs[2].plot(time, evaluate(invL(G3), time), color='green', linewidth=3), axs[2].set_ylim(-.5, 1), axs[2].set_title('G3'), axs[2].set_xlabel('time (s)')
axs[3].plot(time, evaluate(invL(G4), time), color='red', linewidth=3), axs[3].set_ylim(-1, 1), axs[3].set_title('G4'), axs[3].set_xlabel('time (s)')
fig.tight_layout()
```
```
invL(G2)
```
- Depending on how far the poles are from the imaginary axis of the $s$ plane, their influence in time is different.
- **Beyond the initial transient, the slower modes are those that matter**
### Reduction of a second order system to first order
Consider an overdamped second order system:
$$
G(s) = K \frac{\alpha\beta}{(s+\alpha)(s+\beta)}
$$
whose step response is:
$$
Y(s) = K \frac{\alpha\beta}{(s+\alpha)(s+\beta)}\frac{1}{s}
$$
and hence, applying partial fraction expansion:
$$
y(t) = K \bigg ( 1 - \frac{\beta e^{-\alpha t} - \alpha e^{-\beta t}}{\beta - \alpha}\bigg )
$$
If the magnitude of $\beta$ is large compared to $\alpha$ (typically if $\beta/\alpha > 5$), and assuming $s$ is sufficiently small compared to $\beta$, we can write the following approximation for the transfer function (and as well as an approximation for the step response):
$$
G(s) \approx K \frac{\alpha\beta}{(s+\alpha)(\beta)}
$$
which corresponds to the following step response:
$$
y(t) \approx K \bigg ( 1 - \frac{\beta e^{-\alpha t}}{\beta} \bigg ) = K(1-e^{-\alpha t})
$$
Note that $G(0)$ is unchanged and this is necessary to ensure that the steady state value remains the same.
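As a quick symbolic check of this reduction, the exact step response can be computed with the `invL` helper defined earlier (here `a` and `b` play the roles of $\alpha$ and $\beta$; this is only a sketch, and the inverse transform may take a moment):
```
a, b = sympy.symbols('a b', positive=True)
y_exact = invL(K*a*b/((s + a)*(s + b))/s)  # step response of the exact 2nd-order system
sympy.simplify(y_exact)
```
The result contains a term proportional to $e^{-b t}$ whose coefficient scales like $a/(b-a)$, so for $b \gg a$ it vanishes quickly and the response reduces to $K(1-e^{-a t})$, exactly the first-order approximation above.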
### Example 1, second order system
Let's now consider when we have two cascaded systems:
$$ G_1 = \frac{0.1}{(s+0.1)} $$
$$ G_2 = \frac{1}{(s+1)} $$
```
G1 = 0.1/(s+0.1)
G2 = 1/(s+1)
```
In this case, one pole is 10 times bigger than the other.
The entire system is then
$$ G(s) = G_1(s)G_2(s) = \frac{0.1}{(s+0.1)(s+1)} $$
```
G = G1*G2
print(G)
```
0.1/((s + 0.1)*(s + 1))
And now we can plot their output:
```
fig, ax = plt.subplots(1,1,figsize=(8,5))
time = np.linspace(0,40,100)
ax.plot(time, evaluate(invL(G), time), linewidth=3)
ax.plot(time, evaluate(invL(G1), time), linewidth=3)
ax.plot(time, evaluate(invL(G2), time), linewidth=3)
ax.set_ylim(-.1, 0.3)
ax.set_xlabel('time (s)')
ax.grid()
fig.legend(['y_g(t)', 'y_g1(t)', 'y_g2(t)']);
```
And we can also calculate the step response:
$$
Y(s)= G(s) U(s) = G(s)\frac{1}{s}
$$
And this is the output of our system when we have a step input:
*(figure: step responses of $G$, $G_1$, and $G_2$; it can be reproduced with the commented-out cell below)*
```
# We can plot the figure above running this cell. Unfortunately, github CI throws an error and I still need to figure out why.
# fig, ax = plt.subplots(1,1,figsize=(8,5))
# time = np.linspace(0,40,100)
# ax.plot(time, evaluate(invL(G*1/s), time), linewidth=3)
# ax.plot(time, evaluate(invL(G1*1/s), time), linewidth=3)
# ax.plot(time, evaluate(invL(G2*1/s), time), linewidth=3)
# ax.set_ylim(-.1, 1.1)
# ax.set_xlabel('time (s)')
# ax.grid()
# fig.legend(['y_g(t)', 'y_g1(t)', 'y_g2(t)']);
```
- The step response plot shows three curves: the blue curve is the exact response; the orange curve is the approximation assuming the pole at -0.1 dominates; the green curve is the approximation assuming that the pole at -1 dominates (which clearly it doesn't, because that curve is nowhere near the exact response)
- Note that the blue and orange plots are very close to each other, so the dominant pole approximation is a good one.
- The exact response has two exponentials: a fast one with a relatively short time constant of $1$ s (from the pole at $-1$) and a much slower one with time constant $10$ s (from the pole at $-0.1$).
- If we look at the overall resonse, the fast exponential comes to equilibrium much more quickly than the slow explonential. From the perspective of the overall response, the faster exponential comes to equilibrium (i.e., has decayed to zero) instantaneously compared to the slower exponential.
- Therefore, the slower response (due to the pole closer to the origin — at s=-0.1) _dominates_.
The second order system $G(s)$ behaves approximately like $G_1(s)$ which is the one with the slower dynamics: $$G(s) = \frac{0.1}{(s+0.1)(s+1)} \approx \frac{0.1}{(s+0.1)}$$
### Example 2, Second order, Pole dominates but not as strongly
Let's now consider a different case and build an overall system composing the following two systems in cascade:
$$
G_1(s) = \frac{0.2}{s+0.2}
$$
$$
G_2(s) = \frac{1}{s+1}
$$
In this case, one pole is only 5 times bigger than the other.
The entire system is then
$$ G(s) = G_1(s)G_2(s) = \frac{0.2}{(s+0.2)(s+1)} $$
```
G1 = 0.2/(s+0.2)
G2 = 1/(s+1)
G=G1*G2
print('G:', G)
```
G: 0.2/((s + 0.2)*(s + 1))
If we now evaluate the step response for this system:
```
fig, ax = plt.subplots(1,1,figsize=(8,5))
time = np.linspace(0,40,100)
ax.plot(time, evaluate(invL(G*1/s), time), linewidth=3)
ax.plot(time, evaluate(invL(G1*1/s), time), linewidth=3)
ax.plot(time, evaluate(invL(G2*1/s), time), linewidth=3)
ax.set_ylim(-.1, 1.1)
ax.set_xlabel('time (s)')
ax.grid()
fig.legend(['y_g(t)', 'y_g1(t)', 'y_g2(t)']);
```
The approximation is still fairly good, but not quite as good as in the previous example, where the pole ratio was 10 rather than 5, as we would expect.
## Example 3, Second Order, neither pole dominates
Now consider the case where the two poles are quite close to each other (note that it is their relative location, i.e. the ratio of the pole locations, that determines whether the dominant pole approximation is applicable).
Let's consider:
$$
G_1(s) = \frac{1.25}{s+1.25}
$$
$$
G_2(s) = \frac{1}{s+1}
$$
and the final system is:
$$
G(s) = G_1(s)G_2(s) = \frac{1.25}{(s+1)(s+1.25)}
$$
```
G1 = 1.25/(s+1.25)
G2 = 1/(s+1)
G = G1*G2
print('G:', G)
```
G: 1.25/((s + 1)*(s + 1.25))
If we now calculate and plot the step response:
```
fig, ax = plt.subplots(1,1,figsize=(8,5))
time = np.linspace(0,10,100)
ax.plot(time, evaluate(invL(G*1/s), time), linewidth=3)
ax.plot(time, evaluate(invL(G1*1/s), time), linewidth=3)
ax.plot(time, evaluate(invL(G2*1/s), time), linewidth=3)
ax.set_ylim(-.1, 1.1)
ax.set_xlabel('time (s)')
ax.grid()
fig.legend(['y_g(t)', 'y_g1(t)', 'y_g2(t)']);
```
In this case, the two poles are very close to each other and the dominant pole approximation cannot be applied.
The three plots above are different: the blue (exact) response is not at all close to either approximation in which a single first-order pole is assumed to dominate (orange or green).
Note that the step response for this case has a different time scale than the other two.
### Simplifying Higher Order Systems
The dominant pole approximation can also be applied to higher order systems. Here we consider a third order system with one real root, and a pair of complex conjugate roots.
- In this case the test for the dominant pole compares $\alpha$ against $\zeta \omega_0$
- Note: sometimes $\zeta \omega_0$ is also written $\xi \omega_n$ - they are the same thing
- This is because $\zeta \omega_0$ is the real part of the complex conjugate root
- We only compare the real parts of the roots when determining dominance because it is the real part that determines how fast the response decreases.
- Note, that as with the previous case, the steady state gain $H(0)$ of the exact system and the two approximate systems are equal. This is necessary to ensure that the final value of the step response (which is determined by $H(0)$ is unchanged).
## Example 5, Third order, Real Pole Dominates
Let's now consider the following system
$$G(s) = \frac{\alpha \omega_n^2}{(s+\alpha)(s^2+2\xi\omega_n s + \omega_n^2)} = \frac{17}{(s+0.1)(s^2+2s+17)}$$
$$ \approx G_{dp}(s) = \frac{\alpha}{s+\alpha} = \frac{0.1}{s+0.1}$$
The second order poles are at $s=-1 \pm j4$ ($\xi$=0.24 and $\omega_n=\sqrt{17}$=4.1) and the real pole is at $s = -\alpha = -0.1$.
As before we write the entire system as a cascade of two systems
$$
G_1(s)=\frac{\alpha}{s + \alpha}
$$
and
$$
G_2(s)=\frac{\omega_n^2}{s^2+2\xi\omega_n s + \omega_n^2)}
$$
with
$$
G(s) = G_1(s)G_2(s)
$$
```
alpha = 0.1 # real pole that dominates
```
```
G1 = 0.1/(s+0.1)
G2 = 17/(s**2+2*s+17)
G = G1*G2
print('G:', G)
```
G: 1.7/((s + 0.1)*(s**2 + 2*s + 17))
```
fig, ax = plt.subplots(1,1,figsize=(8,5))
time = np.arange(0,40,0.2)
#ax.plot(time, evaluate(invL(G*1/s), time), linewidth=5) # this is commented out only because github CI failes. The response of G matches up the reponse of G1.
ax.plot(time, evaluate(invL(G1*1/s), time), linewidth=3)
ax.plot(time, evaluate(invL(G2*1/s), time), linewidth=3)
ax.set_ylim(-.1, 1.6)
ax.set_xlabel('time (s)')
ax.grid()
fig.legend(['y_g(t)', 'y_g1(t)', 'y_g2(t)']);
```
- The blue (exact) and orange (due to pole at $s=-\alpha=-0.1$) lines are very close since that is the pole that dominates in this example.
- The green line (corresponding to $G_2(s)$, and that would correspond to the case where the second order poles are assumed to dominate) is obviously a bad approximation and not useful, as expected.
**Dominant pole approximation can simplify systems analysis**
The dominant pole approximation is a method for approximating a (more complicated) high order system with a (simpler) system of lower order when the real parts of some of the system poles are sufficiently close to the origin compared to those of the other poles.
Let's consider another example:
$$
G1 = \frac{150.5}{s+150.5}
$$
with pole at -150.5
$$
G2 = \frac{100}{s^2 + 10s +100}
$$
with poles at: $p = -5 \pm 8.66j$
```
G1 = 150.5/(s + 150.5)
print('G1: ', G1)
zeta, w_n = 0.5, 10
G2 = w_n**2/(s**2 + 2*zeta*w_n*s + w_n**2)
print('G2: ', G2)
G = G1*G2
print('G: ', G)
```
G1: 150.5/(s + 150.5)
G2: 100/(s**2 + 10.0*s + 100)
G: 15050.0/((s + 150.5)*(s**2 + 10.0*s + 100))
The step response is:
*(figure: step responses of $G$, $G_1$, and $G_2$)*
We can obtain the figure above by uncommenting and running the cell below.
```
# fig, ax = plt.subplots(1,1,figsize=(8,5))
# time = np.arange(0, 4, 0.05)
# ax.plot(time, evaluate(invL(G*1/s), time), linewidth=7, color = 'blue')
# ax.plot(time, evaluate(invL(G1*1/s), time), linewidth=4, color = 'orange')
# ax.plot(time, evaluate(invL(G2*1/s), time), linewidth=3, color = 'green')
# ax.set_ylim(-.1, 1.6)
# ax.set_xlabel('time (s)')
# ax.grid()
# fig.legend(['y_g(t)', 'y_g1(t)', 'y_g2(t)']);
```
--------------------
## First-order and second-order systems
- Since the main effect is driven by the slowest modes, the dynamics of the system can be approximated by the modes associated with the dominant poles
- Typically this ends up being a first-order system (real dominant pole) or a second-order system (complex-conjugate poles), possibly with a constant delay
- It is hence important to understand the response of first-order and second-order systems, as they are representative of a much broader class.
### Step response of first order systems
Given a system:
$$G(s) = \frac{1}{s+p} = \frac{\tau}{1+\tau s}$$
with pole in $s = -p = \frac{-1}{\tau} $
and given a input: $$u(t) = 1(t)$$
the output of the system is:
$$ y(t) = 1 - e^{-\frac{t}{\tau}}$$
This is obtained:
$$\frac{1}{1+\tau s}\frac{1}{s} = \frac{1/\tau}{s + 1/\tau} \frac{1}{s} = \frac{A}{s + 1/\tau} + \frac{B}{s}$$
where $A = -1$, $B = 1$
We can also verify it with Sympy:
```
F = 1/(K*s + 1)*1/s
invL(F)
```
We can then plot the output $y(t)$ for a particular value of the time constant $\tau=2$:
```
time = np.linspace(0,20,100)
tau = 2 # define the time constant
y_t = 1 - np.exp(-time/tau)
```
```
fig = plt.figure(figsize=(10,5))
plt.plot(time, y_t, linewidth=3)
plt.title('Step response')
plt.xlabel('time (s)')
plt.grid()
```
We call:
- $\tau$: time constant - Characterises completely the response of a first order system
- Settling time: the time it takes to get to 95% (or other times 90%, 98%) of the steady state value $y(t)=1$:
- $t_s = -\tau ln(0.05)$
- This is because:
$$
y(t) = 1 - e^{-\frac{t}{\tau}} \Rightarrow 1 - y(t) = e^{-\frac{t}{\tau}} \Rightarrow \ln(1-y(t)) = -\frac{t}{\tau}, \;\; \text{when} \;\; y(t)=0.95 \Rightarrow t = -\tau ln(0.05)
$$
```
#y_t = -1 # Desired steady state value
final_value_pc = 0.95 # percentage of y_t
# final_value = 1 - np.exp(-t/tau)
# np.log(1-final_value) = -t/tau
t_s = -tau*np.log(1-final_value_pc)
print('Time to get to {:.0f}% of final value {:.1f}: {:.2f}s'.format(final_value_pc*100, y_t[-1], t_s))
```
Time to get to 95% of final value 1.0: 5.99s
Note that, after $\tau$ seconds, the system reaches about $63\%$ of the final value, since $1 - e^{-1} \approx 0.632$
### To recap
- We have defined two times in the response of a first order system:
- Settling time $t_s$: time to get to 95\% of the final value
- Time constant $\tau$, which is the time to get to about 63\% of the final value.
We can plot them, together with the step response of the system:
$$ y(t) = 1 - e^{-\frac{t}{\tau}}$$
```
fig = plt.figure(figsize=(10,5))
# step response
plt.plot(time, y_t, linewidth=3)
# tau - time constant
plt.plot(tau, 1 - np.exp(-tau/tau), marker='.', markersize=15)
# settling time
plt.plot(t_s, final_value_pc*y_t[-1], marker='.', markersize=15)
fig.legend(['Step response', 'Tau seconds', '$t_s$ seconds'], loc='upper left')
plt.title('Step response')
plt.xlabel('time (s)')
plt.grid()
```
------------------------------------------
### Step response of second order systems
Given a system:
$$G(s) = \frac{1}{\frac{s^2}{w_n^2} + \frac{2\xi}{w_n}s + 1}$$
with $ 0 < \xi < 1 $
- $\xi$ is called damping ratio
- $\omega_n$ (sometimes also called $\omega_0$) is the natural frequency of the system.
The system has two conjugate-complex poles:
$$s = -\xi w_n \pm j w_n \sqrt{1-\xi^2} $$
As before, we can calculate the step reponse:
- input $$u(t)=1(t)$$
- output $$y(t) = 1 - \frac{1}{\sqrt{1-\xi^2}} e^{-\xi w_n t} \sin\Big( w_n\sqrt{1-\xi^2}t + arccos(\xi)\Big)$$
_Note: Proving this is a useful exercise_
As before we can plot it.
We will do it for a set of damping ratios: $\xi = [0.1, 0.25, 0.5, 0.7]$ and for a fixed value of $\omega_n=1$.
```
xis = [0.1, 0.25, 0.5, 0.7] # damping ratios
wn = 1 # natural frequency.
```
```
fig = plt.figure(figsize=(15,5))
legend_strs = []
for xi in xis:
y_t = 1 - (1/np.sqrt(1-xi**2)) * np.exp(-xi*wn*time) * np.sin(wn*np.sqrt(1-xi**2)*time+np.arccos(xi))
plt.plot(time, y_t, linewidth=4)
# this part is to have a legend
for xi in xis:
legend_strs.append('xi: ' + str(xi))
fig.legend(legend_strs, loc='upper left')
plt.xlim(0, 20)
plt.title('Step response')
plt.xlabel('time (s)')
plt.grid()
```
There are a few notable points that we can identify for the response of a second order system:
- Rise time (to 90%): $$t_r \approx \frac{1.8}{w_n} $$
- Maximum overshoot: $$ S \% = 100 e^{\Large -\frac{\xi\pi}{\sqrt{1-\xi^2}}} $$
- Time of Maximum overshoot: $$t_{max} = \frac{\pi}{w_n\sqrt{1-\xi^2}} $$
- Settling time (within a desired interval, e.g., 5%): $$t_{s} \approx -\frac{1}{\xi w_n}ln(0.05) $$
- Oscillation period: $$T_{P} = \frac{2\pi}{w_n\sqrt{1-\xi^2}} $$
We can connect $\zeta$ and $w_n$ to $S, T_p, t_{max}, t_s$ with two important caveats:
- some of the relationships are approximate
- additional poles and zeros will change the results, so all of the results should be viewed as guidelines.
We can obtain the previous relationships using the _step response_ of $G(s)$ (see above)
Let's see where they are on the plot for one specific choice of $\xi=0.2$ and $\omega_n=1$.
```
xi = 0.2
wn = 1
# Maximum overshoot
S = np.exp(-xi*3.14/(np.sqrt(1-xi**2)))
# Time of maximum overshoot
t_max = 3.14/(wn*np.sqrt(1-xi**2))
# Settling time within 5%
t_s = -1/(xi*wn)*np.log(0.05)
# Period of the oscillations
Tp = 2*3.14/(wn*np.sqrt(1-xi**2))
# time vector
time = np.linspace(0,30,100)
# create the figure
fig = plt.figure(figsize=(15,5))
# overall response y(t)
y_t = 1 - (1/np.sqrt(1-xi**2)) * np.exp(-xi*wn*time) * np.sin(wn*np.sqrt(1-xi**2)*time+np.arccos(xi))
plt.plot(time, y_t, linewidth=4)
# maximum overshoot
plt.plot([t_max, t_max], [y_t[-1], y_t[-1]+S], marker='.', markersize=12)
plt.text(t_max+0.2, y_t[-1]+S/2, 'S', fontsize=15) # text box
# Time of maximum overshoot
plt.plot([t_max, t_max], [0, 1], markersize=12, linestyle='--')
plt.text(t_max+0.2, 0.0, 't_max', fontsize=15) # text box
# Let's also plot +-0.05 boundary lines around y(t)
plt.plot([0, time[-1]], [1-0.05, 1-0.05], linestyle='--', color='k')
plt.plot([0, time[-1]], [1+0.05, 1+0.05], linestyle='--', color='k')
# Settling time within a desired +-0.05 interval
plt.plot([t_s, t_s], [0, 1.1], linestyle='--')
plt.text(t_s+0.2, 0.0, 't_s', fontsize=15) # textbox
# Oscillation period (this is only to show it on the plot)
plt.plot([t_max, t_max+Tp], [y_t[-1]+S+0.1, y_t[-1]+S+0.1], marker='.', linestyle='--', color='k', markersize=10)
plt.plot([t_max+Tp, t_max+Tp], [y_t[-1], y_t[-1]+S+0.1], linestyle='--', color='k')
plt.text((t_max+t_max+Tp)/2, y_t[-1]+S, 'Tp', fontsize=15)
fig.legend(['xi:{:.1f}'.format(xi)], loc='upper left')
plt.title('Step response')
plt.xlabel('time (s)')
plt.grid()
```
---------------------
| 48b56465261be15939a3b16020163feaf09516dc | 339,300 | ipynb | Jupyter Notebook | 05_System_response.ipynb | andreamunafo/classical_control_theory | 5e1bef562e32fb9efcde83891cb19ce5825a6a7f | [
"Apache-2.0"
] | null | null | null | 05_System_response.ipynb | andreamunafo/classical_control_theory | 5e1bef562e32fb9efcde83891cb19ce5825a6a7f | [
"Apache-2.0"
] | null | null | null | 05_System_response.ipynb | andreamunafo/classical_control_theory | 5e1bef562e32fb9efcde83891cb19ce5825a6a7f | [
"Apache-2.0"
] | null | null | null | 215.291878 | 60,904 | 0.908205 | true | 7,248 | Qwen/Qwen-72B | 1. YES
2. YES | 0.861538 | 0.867036 | 0.746984 | __label__eng_Latn | 0.970974 | 0.573827 |
```python
%matplotlib inline
```
# Bayesian optimization with `skopt`
Gilles Louppe, Manoj Kumar July 2016.
Reformatted by Holger Nahrstaedt 2020
.. currentmodule:: skopt
## Problem statement
We are interested in solving
\begin{align}x^* = arg \min_x f(x)\end{align}
under the constraints that
- $f$ is a black box for which no closed form is known
(nor its gradients);
- $f$ is expensive to evaluate;
- and evaluations of $y = f(x)$ may be noisy.
**Disclaimer.** If you do not have these constraints, then there
is certainly a better optimization algorithm than Bayesian optimization.
This example uses :class:`plots.plot_gaussian_process` which is available
since version 0.8.
## Bayesian optimization loop
For $t=1:T$:
1. Given observations $(x_i, y_i=f(x_i))$ for $i=1:t$, build a
probabilistic model for the objective $f$. Integrate out all
possible true functions, using Gaussian process regression.
2. optimize a cheap acquisition/utility function $u$ based on the
posterior distribution for sampling the next point.
$x_{t+1} = arg \min_x u(x)$
Exploit uncertainty to balance exploration against exploitation.
3. Sample the next observation $y_{t+1}$ at $x_{t+1}$.
## Acquisition functions
Acquisition functions $u(x)$ specify which sample $x$ should be
tried next:
- Expected improvement (default):
$-EI(x) = -\mathbb{E} [f(x) - f(x_t^+)]$
- Lower confidence bound: $LCB(x) = \mu_{GP}(x) + \kappa \sigma_{GP}(x)$
- Probability of improvement: $-PI(x) = -P(f(x) \geq f(x_t^+) + \kappa)$
where $x_t^+$ is the best point observed so far.
In most cases, acquisition functions provide knobs (e.g., $\kappa$) for
controlling the exploration-exploitation trade-off; an example of setting these arguments is shown after the first optimization run below.
- Search in regions where $\mu_{GP}(x)$ is high (exploitation)
- Probe regions where uncertainty $\sigma_{GP}(x)$ is high (exploration)
```python
print(__doc__)
import numpy as np
np.random.seed(237)
import matplotlib.pyplot as plt
from skopt.plots import plot_gaussian_process
```
## Toy example
Let us assume the following noisy function $f$:
```python
noise_level = 0.1
def f(x, noise_level=noise_level):
return np.sin(5 * x[0]) * (1 - np.tanh(x[0] ** 2))\
+ np.random.randn() * noise_level
```
**Note.** In `skopt`, functions $f$ are assumed to take as input a 1D
vector $x$, represented as an array-like, and to return a scalar
$f(x)$.
```python
# Plot f(x) + contours
x = np.linspace(-2, 2, 400).reshape(-1, 1)
fx = [f(x_i, noise_level=0.0) for x_i in x]
plt.plot(x, fx, "r--", label="True (unknown)")
plt.fill(np.concatenate([x, x[::-1]]),
np.concatenate(([fx_i - 1.9600 * noise_level for fx_i in fx],
[fx_i + 1.9600 * noise_level for fx_i in fx[::-1]])),
alpha=.2, fc="r", ec="None")
plt.legend()
plt.grid()
plt.show()
```
Bayesian optimization based on gaussian process regression is implemented in
:class:`gp_minimize` and can be carried out as follows:
```python
from skopt import gp_minimize
res = gp_minimize(f, # the function to minimize
[(-2.0, 2.0)], # the bounds on each dimension of x
acq_func="EI", # the acquisition function
n_calls=15, # the number of evaluations of f
n_random_starts=5, # the number of random initialization points
noise=0.1**2, # the noise level (optional)
random_state=1234) # the random seed
```
Accordingly, the approximated minimum is found to be:
```python
"x^*=%.4f, f(x^*)=%.4f" % (res.x[0], res.fun)
```
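Returning to the acquisition-function knobs mentioned earlier: the acquisition function and its exploration parameter can be passed to :class:`gp_minimize` directly. The call below is only a sketch, reusing the same `f` and bounds; see the `skopt` API documentation for the exact meaning of each argument.
```python
res_lcb = gp_minimize(f,                  # the same objective as above
                      [(-2.0, 2.0)],      # the same bounds
                      acq_func="LCB",     # lower confidence bound acquisition
                      kappa=1.96,         # larger kappa favours exploration
                      n_calls=15,
                      n_random_starts=5,
                      noise=0.1**2,
                      random_state=1234)

"x^*=%.4f, f(x^*)=%.4f" % (res_lcb.x[0], res_lcb.fun)
```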
For further inspection of the results, attributes of the `res` named tuple
provide the following information:
- `x` [float]: location of the minimum.
- `fun` [float]: function value at the minimum.
- `models`: surrogate models used for each iteration.
- `x_iters` [array]:
location of function evaluation for each iteration.
- `func_vals` [array]: function value for each iteration.
- `space` [Space]: the optimization space.
- `specs` [dict]: parameters passed to the function.
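For example, `func_vals` and `x_iters` can be combined to rank the evaluated points; a small usage sketch (not part of the original example) is:
```python
import numpy as np

best = np.argsort(res.func_vals)[:3]   # indices of the three lowest function values
for i in best:
    print(res.x_iters[i], res.func_vals[i])
```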
```python
print(res)
```
Together these attributes can be used to visually inspect the results of the
minimization, such as the convergence trace or the acquisition function at
the last iteration:
```python
from skopt.plots import plot_convergence
plot_convergence(res);
```
Let us now visually examine
1. The approximation of the fitted GP model to the original function.
2. The acquisition values that determine the next point to be queried.
```python
plt.rcParams["figure.figsize"] = (8, 14)
def f_wo_noise(x):
return f(x, noise_level=0)
```
Plot the 5 iterations following the 5 random points
```python
for n_iter in range(5):
# Plot true function.
plt.subplot(5, 2, 2*n_iter+1)
if n_iter == 0:
show_legend = True
else:
show_legend = False
ax = plot_gaussian_process(res, n_calls=n_iter,
objective=f_wo_noise,
noise_level=noise_level,
show_legend=show_legend, show_title=False,
show_next_point=False, show_acq_func=False)
ax.set_ylabel("")
ax.set_xlabel("")
# Plot EI(x)
plt.subplot(5, 2, 2*n_iter+2)
ax = plot_gaussian_process(res, n_calls=n_iter,
show_legend=show_legend, show_title=False,
show_mu=False, show_acq_func=True,
show_observations=False,
show_next_point=True)
ax.set_ylabel("")
ax.set_xlabel("")
plt.show()
```
The first column shows the following:
1. The true function.
2. The approximation to the original function by the gaussian process model
3. How sure the GP is about the function.
The second column shows the acquisition function values after every
surrogate model is fit. It is possible that we do not choose the global
minimum but a local minimum depending on the minimizer used to minimize
the acquisition function.
At points close to previously evaluated points, the variance
dips to zero.
Finally, as we increase the number of points, the GP model approaches
the actual function. The final few points are clustered around the minimum
because the GP does not gain anything more by further exploration:
```python
plt.rcParams["figure.figsize"] = (6, 4)
# Plot f(x) + contours
_ = plot_gaussian_process(res, objective=f_wo_noise,
noise_level=noise_level)
plt.show()
```
| 8046a6ea24ff5cd2e56c6454cb893cb500c7edf1 | 10,133 | ipynb | Jupyter Notebook | dev/notebooks/auto_examples/bayesian-optimization.ipynb | scikit-optimize/scikit-optimize.github.io | 209d20f8603b7b6663f27f058560f3e15a546d76 | [
"BSD-3-Clause"
] | 15 | 2016-07-27T13:17:06.000Z | 2021-08-31T14:18:07.000Z | 0.9/_downloads/1c9bc01d15cf0a1e95b499b64cae5679/bayesian-optimization.ipynb | scikit-optimize/scikit-optimize.github.io | 209d20f8603b7b6663f27f058560f3e15a546d76 | [
"BSD-3-Clause"
] | 2 | 2018-05-09T15:01:09.000Z | 2020-10-22T00:56:21.000Z | dev/_downloads/1c9bc01d15cf0a1e95b499b64cae5679/bayesian-optimization.ipynb | scikit-optimize/scikit-optimize.github.io | 209d20f8603b7b6663f27f058560f3e15a546d76 | [
"BSD-3-Clause"
] | 6 | 2017-08-19T12:05:57.000Z | 2021-02-16T20:54:58.000Z | 46.912037 | 1,893 | 0.569525 | true | 1,656 | Qwen/Qwen-72B | 1. YES
2. YES | 0.798187 | 0.793106 | 0.633047 | __label__eng_Latn | 0.984839 | 0.30911 |
```python
# Geometric Properties of Hollow Rectangle
# E.Durham 5-Jul-2019
```
```python
from sympy import *
# from sympy import symbols
```
```python
# define symbols
b, d, t = symbols('b d t')
```
```python
A = 2*t*(d+b-2*t) # Area
I_x = (b*d**3 - (b-2*t) * (d-2*t)**3) / 12 # Second Moment of Inertia for Major Axis
I_y = (d*b**3 - (d-2*t) * (b-2*t)**3) / 12 # Second Moment of Inertia for Minor Axis
S_x = (b*d**3 - (b-2*t) * (d-2*t)**3) / (6*d) # Elastic Section Modulus for Major Axis
S_y = (d*b**3 - (d-2*t) * (b-2*t)**3) / (6*b) # Elastic Section Modulus for Minor Axis
r_x = sqrt((b*d**3 - (b-2*t) * (d-2*t)**3) / (24*t * (d+b-2*t))) # Radius of Gyration for Major Axis
r_y = sqrt((d*b**3 - (d-2*t) * (b-2*t)**3) / (24*t * (b+d-2*t))) # Radius of Gyration for Minor Axis
Z_x = b*t*(d-t) + 2*t * ((d/2)-t)**2 # Plastic Section Modulus for Major Axis
Z_y = d*t*(b-t) + 2*t * ((b/2)-t)**2 # Plastic Section Modulus for Minor Axis
J = (2*t*(d-t)**2 * (b-t)**2) / (d+b-2*t) # St. Venant's Torsional Constant
```
```python
# Specify Depth, d, Width, b, AND Wall Thickness, t in identical units
# Example:
# if case values are: d = 3.00, b = 2.00 and t = 0.250
# then use case_values = [(d, 3.00), (b, 2.00), (t, 0.250)]
# Enter actual case values below per above
case_values = [(d, 3.00), (b, 2.00), (t, 0.250)]
```
```python
print('Geometric Properties:')
print('Given:')
print('Depth, ', case_values[0], 'unit')
print('Width, ', case_values[1], 'unit')
print('Wall, ', case_values[2], 'unit')
print('A =', A.subs(case_values).evalf(3), 'unit**2')
print('I_x =', I_x.subs(case_values).evalf(3), 'unit**4')
print('S_x =', S_x.subs(case_values).evalf(3), 'unit**3')
print('Z_x =', Z_x.subs(case_values).evalf(3), 'unit**3')
print('r_x =', r_x.subs(case_values).evalf(3), 'unit')
print('I_y =', I_y.subs(case_values).evalf(3), 'unit**4')
print('S_y =', S_y.subs(case_values).evalf(3), 'unit**3')
print('Z_y =', Z_y.subs(case_values).evalf(3), 'unit**3')
print('r_y =', r_y.subs(case_values).evalf(3), 'unit')
print('J =', J.subs(case_values).evalf(3), 'unit**4')
```
Geometric Properties:
Given:
Depth, (d, 3.0) unit
Width, (b, 2.0) unit
Wall, (t, 0.25) unit
A = 2.25 unit**2
I_x = 2.55 unit**4
S_x = 1.70 unit**3
Z_x = 2.16 unit**3
r_x = 1.06 unit
I_y = 1.30 unit**4
S_y = 1.30 unit**3
Z_y = 1.59 unit**3
r_y = 0.759 unit
J = 2.57 unit**4
```python
# Display formulas in proper math formatting
init_printing()
print('Area , A ='); A
```
```python
print('Second Moment of Inertia, I ='); I_x
```
```python
print('Elastic Section Modulus, S ='); S_x
```
```python
print('Plastic Section Modulus, Z ='); Z_x
```
```python
print('Radius of Gyration, r ='); r_x
```
```python
print("St. Venant's Torsional Constant, J ="); J
```
## Test Values and Expected Results
### RT 3 x 2 x 1/4 from Aluminum Design Manual, 2015 page V-39
d = 3
b = 2
t = 0.25
A = 2.25
I_x = 2.55
S_x = 1.70
Z_x = 2.16
r_x = 1.06
I_y = 1.30
S_y = 1.30
Z_y = 1.59
r_y = 0.759
J = 2.57
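As a quick check, the symbolic expressions defined earlier can be compared against these published values. The sketch below reuses `A`, `I_x`, `S_x`, `Z_x`, `r_x`, `I_y`, `S_y`, `Z_y`, `r_y`, `J` and `case_values` from the cells above; the 1% tolerance is an arbitrary choice.
```python
expected = {A: 2.25, I_x: 2.55, S_x: 1.70, Z_x: 2.16, r_x: 1.06,
            I_y: 1.30, S_y: 1.30, Z_y: 1.59, r_y: 0.759, J: 2.57}
for expr, ref in expected.items():
    calc = float(expr.subs(case_values))
    assert abs(calc - ref) / ref < 0.01, (calc, ref)
print('All section properties agree with the handbook values within 1%')
```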
## Glossary of Torsional Terms [1]
### HSS Shear Constant
The shear constant, $C_{RT}$, is used for calculating the maximum shear stress due to an applied shear force.
For hollow structural section, the maximum shear stress in the cross section is given by:
$\tau_{max} = \frac{V Q}{2 t I}$
where $V$ is the applied shear force, $Q$ is the statical moment of the portion of the section lying outside the neutral axis taken about the neutral axis, $I$ is the moment of inertia, and $t$ is the wall thickness.
The shear constant is expressed as the ratio of the applied shear force to the maximum shear stress [3]:
$C_{RT} = \frac{V}{\tau_{max}} = \frac{2 t I}{Q}$
### HSS Torsional Constant
The torsional constant, $C$, is used for calculating the shear stress due to an applied torque. It is expressed as the ratio of the applied torque, $T$, to the shear stress in the cross section, $\tau$:
$C = \frac{T}{\tau}$
### St. Venant Torsional Constant
The St. Venant torsional constant, $J$, measures the resistance of a structural member to *pure* or *uniform* torsion. It is used in calculating the buckling moment resistance of laterally unsupported beams and torsional-flexural buckling of compression members in accordance with CSA S16.
For open cross sections, the general formula is given by Galambos (1968):
$J = \sum(\frac{b't^3}{3})$
where $b'$ are the plate lengths between points of intersection on their axes, and $t$ are the plate thicknesses. Summation includes all component plates. It is noted that the tabulated values in the Handbook of Steel Construction are based on net plate lengths instead of lengths between intersection points, a mostly conservative approach.
The expressions for $J$ given herein do not take into account the flange-to-web fillets. Formulas which account for this effect are given by El Darwish and Johnston (1965).
For thin-walled closed sections, the general formula is given by Salmon and Johnson (1980):
$J = \frac{4 A_o^2}{\int_s ds/t}$
where $A_o$ is the enclosed area by the walls, $t$ is the wall thickness, $ds$ is a length element along the perimeter. Integration is performed over the entire perimeter $S$.
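For the hollow rectangle treated above, this thin-walled formula reproduces the expression for $J$ used in the calculation. With the mid-line enclosed area $A_o = (b-t)(d-t)$ and $\int_s ds/t = 2\left[(b-t)+(d-t)\right]/t$, a short symbolic check (reusing the `sympy` symbols and `J` defined earlier) is:
```python
A_o = (b - t)*(d - t)                      # area enclosed by the wall mid-line
perimeter_integral = 2*((b - t) + (d - t))/t
J_thin_wall = 4*A_o**2 / perimeter_integral
print(simplify(J_thin_wall - J))           # 0 -> identical to the expression used above
```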
### Warping Torsional Constant
The warping torsional constant, $C_w$, measures the resistance of a structural member to *nonuniform* or *warping* torsion. It is used in calculating the buckling moment resistance of laterally unsupported beams and torsional-flexural buckling of compression members in accordance with CSA Standard S16.
For open sections, a general calculation method is given by Galambos (1968). For sections in which all component plates meet at a single point, such as angles and T-sections, a calculation method is given by Bleich (1952). For hollow structural sections (HSS), warping deformations are small, and the warping torsional constant is generally taken as zero.
### References for Torsional Constants
[1] CISC, 2002. Torsional Section Properties of Steel Shapes.
Canadian Institute of Steel Construction, 2002
[2] Seaburg, P.A. and Carter, C.J. 1997. Torsional Analysis of Structural Steel Members.
American Institute of Steel Construction, Chicago, Ill.
[3] Stelco. 1981. Hollow Structural Sections - Sizes and Section Properties, 6th Edition.
Stelco Inc., Hamilton, Ont.
[4] CISC 2016. Handbook of Steel Construction, 11th Edition, page 7-84
www.cisc-icca.ca
### Revision History
0.1 - 2019-07-08 - E.Durham - added graphic image
0.0 - 2019-07-05 - E.Durham - Created initial notebook
| 9986924094bb5e5eb990849713f8120fa9d80c20 | 35,177 | ipynb | Jupyter Notebook | Properties_of_Geometric_Sections/Hollow_Rectangle/Hollow_Rectangle.ipynb | Ewdy/Structural-Design | b4bf079c2153e7fcac9f09e8b8aafd8400e411c1 | [
"MIT"
] | 3 | 2019-07-19T11:43:26.000Z | 2022-02-24T22:26:03.000Z | Properties_of_Geometric_Sections/Hollow_Rectangle/Hollow_Rectangle.ipynb | Ewdy/Structural-Design | b4bf079c2153e7fcac9f09e8b8aafd8400e411c1 | [
"MIT"
] | 5 | 2019-07-19T11:31:47.000Z | 2019-07-26T12:11:39.000Z | Properties_of_Geometric_Sections/Hollow_Rectangle/Hollow_Rectangle.ipynb | Ewdy/Structural-Design | b4bf079c2153e7fcac9f09e8b8aafd8400e411c1 | [
"MIT"
] | 1 | 2021-12-03T21:07:44.000Z | 2021-12-03T21:07:44.000Z | 72.380658 | 4,674 | 0.781562 | true | 2,131 | Qwen/Qwen-72B | 1. YES
2. YES | 0.904651 | 0.917303 | 0.829838 | __label__eng_Latn | 0.955279 | 0.766325 |
# Practice 5 - 1-DoF damped rocker arm
2021.03.08
## Task:
```python
from IPython.display import Image
Image(filename='gyak_5_1.png',width=500)
```
The attached figure shows a rocker arm consisting of two rods of different mass and length and a disc of radius $R$ attached to them. The two rods are connected to the surroundings through two springs of stiffness $k_1$ and $k_2$ and a damping element with damping coefficient $c_1$. The rocker arm can only rotate about the joint $A$. The generalized coordinate describing the motion is the angle $\varphi$, measured from the horizontal. The rocker arm is placed in the Earth's gravitational field; its equilibrium position is at $\varphi=0$, where the spring of stiffness $k_2$ is unstretched.
### Data:
|||
|-------------------|------------------|
| $l$ = 0.2 m | $k_1$ = 300 N/m |
| $R$ = 0.1 m | $k_2$ = 10 N/m |
| $m$ = 0.12 kg | $c_1$ = 2 Ns/m |
### Subtasks:
1. Derive the equation of motion and compute the undamped ($\omega_n$) and damped ($\omega_d$) natural frequencies as well as the damping ratio ($\zeta$)!
2. Compute the critical damping coefficient ($c_{cr}$)!
3. Compute the maximum force in the spring of stiffness $k_1$ for the following initial conditions: ($\varphi(t=0)=\varphi_0=0.01$ [rad]; $\dot{\varphi}(t=0)=0$ [rad/s])!
## Solution:
```python
from IPython.display import Image
Image(filename='gyak_5_2.png',width=500)
```
### Task 1:
The figure above shows the free-body diagram of the rocker arm displaced from its equilibrium position. A previous example (Practice 3) already demonstrated which simplifications can be applied in order to linearize such an oscillating system. Briefly summarizing the simplifications:
- the gravitational force acting on the horizontal rod element has no effect on the natural angular frequency (although it may affect the maximum spring forces);
- the spring deformations are well approximated by the arc lengths measured from the equilibrium position.
The equation of motion, based on Newton's second law:
\begin{equation}
\dot{\mathbf{I}}=\mathbf{F}.
\end{equation}
From this, using the free-body diagram (and applying the approximations cos$\varphi\approx 1$ and sin$\varphi\approx \varphi$), we obtain:
\begin{equation}
\Theta_A\ddot{\varphi}=-F_{r,1}3l-F_{r,2}l-F_{cs,1}3l-3mg\frac{3}{2}l-2mgl\varphi-mg(2l+R)\varphi,
\end{equation}
where the damping force and the spring forces are
\begin{equation}
F_{cs,1}\cong c_13l\dot{\varphi}, \hspace{10pt} F_{r,1}\cong F_{r,1st}+k_13l\varphi, \hspace{10pt} F_{r,2}\cong k_2l\varphi
\end{equation}
(note: the $-3mg\frac{3}{2}l$ term in the equation of motion and the force due to the static deformation of the spring of stiffness $k_1$ ($F_{r,1st}$) balance each other, so they drop out of the equation). The equation of motion is therefore:
\begin{equation}
\Theta_A\ddot{\varphi}=-9l^2k_1\varphi-l^2k_2\varphi-9l^2c_1\dot{\varphi}-2mgl\varphi-mg(2l+R)\varphi.
\end{equation}
The moment of inertia of the structure about point $A$ can be determined with the parallel axis (Steiner) theorem:
\begin{equation}
\Theta_A=\frac{1}{3}3ml(3l)^2+\frac{1}{3}2ml(2l)^2+\frac{1}{2}mR^2+m(2l+R)^2.
\end{equation}
```python
import sympy as sp
from IPython.display import display, Math
sp.init_printing()
```
```python
l, R, m, k1, k2, c1, Θ_A, g = sp.symbols("l, R, m, k1, k2, c1, Θ_A, g", real=True)
# Készítsünk behelyettesítési listát az adatok alapján, SI-ben
adatok = [(l, 0.2), (R, 0.1), (m, 0.12), (k1, 300), (k2, 10), (c1, 2), (g, 9.81)]
# Az általános koordináta definiálása az idő függvényeként
t = sp.symbols("t",real=True, positive=True)
φ_t = sp.Function('φ')(t)
# A z tengelyre számított perdület derivált az A pontban
dΠ_Az = Θ_A*φ_t.diff(t,2)
# A z tengelyre számított nyomaték az A pontban
M_Az = -9*l**2*k1*φ_t-l**2*k2*φ_t-9*l**2*c1*φ_t.diff(t)-2*m*g*l*φ_t-m*g*(2*l+R)*φ_t
# A dinamika alapegyenlete
# (nullára rendezve)
mozgegy = dΠ_Az-M_Az
mozgegy
```
```python
# Osszunk le a főegyütthatóval:
foegyutthato = mozgegy.coeff(sp.diff(φ_t,t,2))
mozgegy = (mozgegy/foegyutthato).expand().apart(φ_t)
mozgegy
```
```python
# Írjuk be a tehetetlenségi nyomatékot
# a rúdhosszakkal és tömegekkel kifejezve
mozgegy = mozgegy.subs(Θ_A, 1/3*3*m*(3*l)**2+1/3*2*m*(2*l)**2+1/2*m*R**2+m*(2*l+R)**2)
```
```python
# A mozgásegyenletnek ebben az alakjában
# d/dt(φ(t)) együtthatója 2ζω_n -nel
# a φ(t) együtthatója pedig ω_n^2-tel egyezik meg,
# tehát mind a három kérdéses paraméter megadható.
ω_n_num = sp.sqrt((mozgegy.coeff(φ_t)).subs(adatok)).evalf(6)
ζ_num = ((mozgegy.coeff(φ_t.diff(t))).subs(adatok)/(2*ω_n_num)).evalf(4)
ω_d_num = (ω_n_num*sp.sqrt(1-ζ_num**2)).evalf(6)
display(Math('\omega_n = {}'.format(sp.latex(ω_n_num))),
Math('\zeta = {}'.format(sp.latex(ζ_num))),
Math('\omega_d = {}'.format(sp.latex(ω_d_num))))
# [rad/s]
# [1]
# [rad/s]
```
$\displaystyle \omega_n = 35.5523$
$\displaystyle \zeta = 0.1169$
$\displaystyle \omega_d = 35.3085$
```python
# Később még szükség lesz a csillapított frekvenciára és periódusidőre is:
T_d_num = (2*sp.pi/ω_d_num).evalf(4)
f_d_num = (ω_d_num/(2*sp.pi)).evalf(5)
display(Math('T_d = {}'.format(sp.latex(T_d_num))),
Math('f_d = {}'.format(sp.latex(f_d_num))))
# [s]
# [1/s]
```
$\displaystyle T_d = 0.178$
$\displaystyle f_d = 5.6195$
## Task 2
We speak of critical damping when the damping ratio is exactly 1.
```python
# A mozgásegyenletnek d/dt(φ(t)) együtthatóját kell vizsgálni
mozgegy.coeff(φ_t.diff(t)).evalf(5)
```
```python
# Ez az együttható pont 2ζω_n -nel egyenlő.
# Az így adódó egyenlet megoldásásval kapjuk
# a kritikus csillapítshoz tartozó c1 értéket:
ζ_cr = 1
# itt még nem helyettesítünk be, hanem csak kifejezzük a kritikus csillapítási együtthatót
c1_cr = sp.solve(mozgegy.coeff(φ_t.diff(t))-2*ζ_cr*ω_n_num,c1)[0]
# most már be lehet helyettesíteni
c1_cr_num = c1_cr.subs(adatok).evalf(5)
display(Math('c_{{1,cr}} = {}'.format(sp.latex(c1_cr_num))))
# [1]
```
$\displaystyle c_{1,cr} = 17.105$
## Task 3
The force arising in the spring of stiffness $k_1$ can be written in the following form:
\begin{equation}
F_{r,1}(t) = F_{r,1st}+k_13l\varphi(t).
\end{equation}
In the equilibrium position the spring is pre-stressed. The static deformation can therefore be determined from the equilibrium equation:
\begin{equation}
\sum M_A=0:\hspace{20pt} -F_{r,1st}3l-3mg\frac{3}{2}l=0.
\end{equation}
```python
Fr_1st = sp.symbols("Fr_1st")
Fr_1st_num = (sp.solve(-Fr_1st*3*l-3*m*g*3/2*l,Fr_1st)[0]).subs(adatok)
display(Math('F_{{r,1st}} = {}'.format(sp.latex(Fr_1st_num))))
# [N]
```
$\displaystyle F_{r,1st} = -1.7658$
The dynamic spring force reaches its maximum where the displacement is largest. So first the motion $\varphi(t)$ has to be determined.
```python
kezdeti_ert = {φ_t.subs(t,0): 0.01, φ_t.diff(t).subs(t,0): 0}
display(kezdeti_ert)
mozg_torv = (sp.dsolve(mozgegy.subs(adatok),φ_t,ics=kezdeti_ert)).evalf(6)
mozg_torv
```
Let us find the maximum of the displacement using numerical methods. To do this, it is useful to plot the function first. (For an analytical solution, see the similar example solved in Practice 4!)
```python
import numpy as np
import matplotlib.pyplot as plt
```
```python
# A matplotlib plottere az általunk megadott pontokat fogja egyenes vonalakkal összekötni.
# Elegendően kis lépésközt választva az így kapott görbe simának fog tűnni.
# Állítsuk elő az (x,y) koordinátákat!
t_val = np.linspace(0,0.5,1001) # lista létrehozása a [0 ; 0,5] intervallum 1001 részre való bontásával
φ_val = np.zeros(len(t_val)) # nulla lista létrehozása (ugyanannyi elemszámmal)
# for ciklus segítségével írjuk felül a nulla listában szerplő elemelet az adott x értékhez tartozó y értékekkel
for i in range(len(t_val)):
φ_val[i] = mozg_torv.rhs.subs(t,t_val[i]) # Ezt lehetne list comprehension-nel is.
# rajzterület létrehozása
plt.figure(figsize=(40/2.54,30/2.54))
# függvény kirajzolása az x és y kordináta értékeket tartalmazó listák megadásásval
plt.plot(t_val,φ_val,color='b',label=r'num_sim')
# tengelyek
axes = plt.gca()
axes.set_xlim([0,t_val[-1]])
axes.set_ylim([-0.01, 0.01])
# rácsozás
plt.grid()
# tengely feliratozás
plt.xlabel(r'$ t [s] $',fontsize=30)
plt.ylabel(r'$ \varphi(t) [rad] $',fontsize=30)
plt.show()
```
The static spring force has a negative sign, so two extremum locations must be examined: the first local maximum and the minimum. These values can easily be extracted from the list computed earlier.
```python
lok_max = max(φ_val)
lok_min = min(φ_val)
# Rugóerők meghatározása
Fr_11 = (Fr_1st_num+k1*3*l*φ_t).subs(adatok).subs(φ_t,lok_max).evalf(5)
Fr_12 = (Fr_1st_num+k1*3*l*φ_t).subs(adatok).subs(φ_t,lok_min).evalf(5)
display(Math('F_{{r,1}} = {}'.format(sp.latex(Fr_11))),Math('F_{{r,2}} = {}'.format(sp.latex(Fr_12))))
# [N]
```
$\displaystyle F_{r,1} = 0.0342$
$\displaystyle F_{r,2} = -3.0093$
The maximum force arising in spring 1 is therefore 3.009 N.
Prepared by:
Juhos-Kiss Álmos (Applied Mechanics Section),
based on the figures of Takács Dénes (BME MM) and the worked solution of Vörös Illés (BME MM).
Errors, suggestions:
amsz.bme@gmail.com
csuzdi02@gmail.com
almosjuhoskiss@gmail.com
2021.03.08
| 068e68d16c4d190f99e3621bb64abad1c44e3aa7 | 837,927 | ipynb | Jupyter Notebook | otodik_het/gyak_5.ipynb | barnabaspiri/RezgestanPython | 3fcc4374c90d041436c816d26ded63af95b44103 | [
"MIT"
] | null | null | null | otodik_het/gyak_5.ipynb | barnabaspiri/RezgestanPython | 3fcc4374c90d041436c816d26ded63af95b44103 | [
"MIT"
] | 12 | 2021-03-29T19:12:39.000Z | 2021-04-26T18:06:02.000Z | otodik_het/gyak_5.ipynb | barnabaspiri/RezgestanPython | 3fcc4374c90d041436c816d26ded63af95b44103 | [
"MIT"
] | 3 | 2021-03-29T19:29:08.000Z | 2021-04-10T20:58:06.000Z | 1,241.373333 | 618,676 | 0.945668 | true | 4,030 | Qwen/Qwen-72B | 1. YES
2. YES | 0.798187 | 0.72487 | 0.578582 | __label__hun_Latn | 0.999898 | 0.18257 |
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W1D5-DimensionalityReduction/student/W1D5_Tutorial1.ipynb" target="_parent"></a>
# Neuromatch Academy: Week 1, Day 5, Tutorial 1
# Dimensionality Reduction: Geometric view of data
---
Tutorial objectives
In this notebook we'll explore how multivariate data can be represented in different orthonormal bases. This will help us build intuition that will be helpful in understanding PCA in the following tutorial.
Steps:
1. Generate correlated multivariate data.
2. Define an arbitrary orthonormal basis.
3. Project data onto new basis.
---
```python
#@title Video: Geometric view of data
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="emLW0F-VUag", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```
Video available at https://youtube.com/watch?v=emLW0F-VUag
# Setup
Run these cells to get the tutorial started.
```python
#library imports
import time # import time
import numpy as np # import numpy
import scipy as sp # import scipy
import math # import basic math functions
import random # import basic random number generator functions
import matplotlib.pyplot as plt # import matplotlib
from IPython import display
```
```python
#@title Figure Settings
%matplotlib inline
fig_w, fig_h = (8, 8)
plt.rcParams.update({'figure.figsize': (fig_w, fig_h)})
plt.style.use('ggplot')
%config InlineBackend.figure_format = 'retina'
```
```python
#@title Helper functions
def get_data(cov_matrix):
"""
Returns a matrix of 1000 samples from a bivariate, zero-mean Gaussian
Note that samples are sorted in ascending order for the first random variable.
Args:
cov_matrix (numpy array of floats): desired covariance matrix
Returns:
(numpy array of floats) : samples from the bivariate Gaussian, with
each column corresponding to a different random variable
"""
mean = np.array([0,0])
X = np.random.multivariate_normal(mean,cov_matrix,size = 1000)
indices_for_sorting = np.argsort(X[:,0])
X = X[indices_for_sorting,:]
return X
def plot_data(X):
"""
Plots bivariate data. Includes a plot of each random variable, and a scatter
plot of their joint activity. The title indicates the sample correlation
calculated from the data.
Args:
X (numpy array of floats): Data matrix
each column corresponds to a different random variable
Returns:
Nothing.
"""
fig = plt.figure(figsize=[8,4])
gs = fig.add_gridspec(2,2)
ax1 = fig.add_subplot(gs[0,0])
ax1.plot(X[:,0],color='k')
plt.ylabel('Neuron 1')
plt.title('Sample var 1: {:.1f}'.format(np.var(X[:,0])))
ax1.set_xticklabels([])
ax2 = fig.add_subplot(gs[1,0])
ax2.plot(X[:,1],color='k')
plt.xlabel('Sample Number')
plt.ylabel('Neuron 2')
plt.title('Sample var 2: {:.1f}'.format(np.var(X[:,1])))
ax3 = fig.add_subplot(gs[:, 1])
ax3.plot(X[:,0],X[:,1],'.',markerfacecolor=[.5,.5,.5], markeredgewidth=0)
ax3.axis('equal')
plt.xlabel('Neuron 1 activity')
plt.ylabel('Neuron 2 activity')
plt.title('Sample corr: {:.1f}'.format(np.corrcoef(X[:,0],X[:,1])[0,1]))
def plot_basis_vectors(X,W):
"""
Plots bivariate data as well as new basis vectors.
Args:
X (numpy array of floats): Data matrix
each column corresponds to a different random variable
W (numpy array of floats): Square matrix representing new orthonormal basis
each column represents a basis vector
Returns:
Nothing.
"""
plt.figure(figsize=[4,4])
plt.plot(X[:,0],X[:,1],'.',color=[.5,.5,.5],label='Data')
plt.axis('equal')
plt.xlabel('Neuron 1 activity')
plt.ylabel('Neuron 2 activity')
plt.plot([0,W[0,0]],[0,W[1,0]],color='r',linewidth=3,label = 'Basis vector 1')
plt.plot([0,W[0,1]],[0,W[1,1]],color='b',linewidth=3,label = 'Basis vector 2')
plt.legend()
def plot_data_new_basis(Y):
"""
Plots bivariate data after transformation to new bases. Similar to plot_data but
with colors corresponding to projections onto basis 1 (red) and basis 2 (blue).
The title indicates the sample correlation calculated from the data.
Note that samples are re-sorted in ascending order for the first random variable.
Args:
Y (numpy array of floats): Data matrix in new basis
each column corresponds to a different random variable
Returns:
Nothing.
"""
fig = plt.figure(figsize=[8,4])
gs = fig.add_gridspec(2,2)
ax1 = fig.add_subplot(gs[0,0])
ax1.plot(Y[:,0],'r')
plt.xlabel
plt.ylabel('Projection \n basis vector 1')
plt.title('Sample var 1: {:.1f}'.format(np.var(Y[:,0])))
ax1.set_xticklabels([])
ax2 = fig.add_subplot(gs[1,0])
ax2.plot(Y[:,1],'b')
plt.xlabel('Sample number')
plt.ylabel('Projection \n basis vector 2')
plt.title('Sample var 2: {:.1f}'.format(np.var(Y[:,1])))
ax3 = fig.add_subplot(gs[:, 1])
ax3.plot(Y[:,0],Y[:,1],'.',color=[.5,.5,.5])
ax3.axis('equal')
plt.xlabel('Projection basis vector 1')
plt.ylabel('Projection basis vector 2')
plt.title('Sample corr: {:.1f}'.format(np.corrcoef(Y[:,0],Y[:,1])[0,1]))
```
# Generate correlated multivariate data
```python
#@title Video: Multivariate data
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="YOan2BQVzTQ", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```
Video available at https://youtube.com/watch?v=YOan2BQVzTQ
To study multivariate data, first we generate it. In this exercise we generate data from a *bivariate normal distribution*. This is an extension of the one-dimensional normal distribution to two dimensions, in which each $x_i$ is marginally normal with mean $\mu_i$ and variance $\sigma_i^2$:
\begin{align}
x_i \sim \mathcal{N}(\mu_i,\sigma_i^2)
\end{align}
Additionally, the joint distribution for $x_1$ and $x_2$ has a specified correlation coefficient $\rho$. Recall that the correlation coefficient is a normalized version of the covariance, and ranges between -1 and +1.
\begin{align}
\rho = \frac{\text{cov}(x_1,x_2)}{\sqrt{\sigma_1^2 \sigma_2^2}}
\end{align}
For simplicity, we will assume that the mean of each variable has already been subtracted, so that $\mu_i=0$. The remaining parameters can be summarized in the covariance matrix:
\begin{equation*}
{\bf \Sigma} =
\begin{pmatrix}
\text{var}(x_1) & \text{cov}(x_1,x_2) \\
\text{cov}(x_1,x_2) &\text{var}(x_2)
\end{pmatrix}
\end{equation*}
Note that this is a symmetric matrix with the variances $\text{var}(x_i) = \sigma_i^2$ on the diagonal, and the covariance on the off-diagonal.
### Exercise
We have provided code to draw random samples from a zero-mean bivariate normal distribution. These samples could be used to simulate changes in firing rates for two neurons. Fill in the function below to calculate the covariance matrix given the desired variances and correlation coefficient. The covariance can be found by rearranging the equation above:
\begin{align}
\text{cov}(x_1,x_2) = \rho \sqrt{\sigma_1^2 \sigma_2^2}
\end{align}
Use these functions to generate and plot data while varying the parameters. You should get a feel for how changing the correlation coefficient affects the geometry of the simulated data.
**Suggestions**
* Fill in the function `calculate_cov_matrix` to calculate the covariance.
* Generate and plot the data for $\sigma_1^2 =1$, $\sigma_2^2 =1$, and $\rho = .8$. Try plotting the data for different values of the correlation coefficient: $\rho = -1, -.5, 0, .5, 1$.
```python
help(plot_data)
help(get_data)
```
Help on function plot_data in module __main__:
plot_data(X)
Plots bivariate data. Includes a plot of each random variable, and a scatter
plot of their joint activity. The title indicates the sample correlation
calculated from the data.
Args:
X (numpy array of floats): Data matrix
each column corresponds to a different random variable
Returns:
Nothing.
Help on function get_data in module __main__:
get_data(cov_matrix)
Returns a matrix of 1000 samples from a bivariate, zero-mean Gaussian
Note that samples are sorted in ascending order for the first random variable.
Args:
cov_matrix (numpy array of floats): desired covariance matrix
Returns:
(numpy array of floats) : samples from the bivariate Gaussian, with
each column corresponding to a different random variable
```python
def calculate_cov_matrix(var_1, var_2, corr_coef):
    """
    Calculates the covariance matrix based on the variances and correlation coefficient.
    Args:
      var_1 (scalar): variance of the first random variable
      var_2 (scalar): variance of the second random variable
      corr_coef (scalar): correlation coefficient
    Returns:
      (numpy array of floats) : covariance matrix
    """
    ###################################################################
    ## Insert your code here to:
    ## calculate the covariance from the variances and correlation
    # cov = ...
    # cov_matrix = np.array([[var_1, cov], [cov, var_2]])
    #comment this once you've filled in the function
    raise NotImplementedError("Student exercise: calculate the covariance matrix!")
    ###################################################################
    return cov_matrix
###################################################################
## Insert your code here to:
## generate and plot bivariate Gaussian data with variances of 1
## and a correlation coefficient of: 0.8
## repeat while varying the correlation coefficient from -1 to 1
###################################################################
variance_1 = 1
variance_2 = 1
corr_coef = 0.8
#uncomment to test your code and plot
#cov_matrix = calculate_cov_matrix(variance_1, variance_2, corr_coef)
#X = get_data(cov_matrix)
#plot_data(X)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D5-DimensionalityReduction/solutions/W1D5_Tutorial1_Solution_62df7ae6.py)
*Example output:*
# Define a new orthonormal basis
```python
#@title Video: Orthonormal bases
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="dK526Nbn2Xo", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```
Video available at https://youtube.com/watch?v=dK526Nbn2Xo
Next, we will define a new orthonormal basis of vectors ${\bf u} = [u_1,u_2]$ and ${\bf w} = [w_1,w_2]$. As we learned in the video, two vectors are orthonormal if:
1. They are orthogonal (i.e., their dot product is zero):
\begin{equation}
{\bf u\cdot w} = u_1 w_1 + u_2 w_2 = 0
\end{equation}
2. They have unit length:
\begin{equation}
||{\bf u} || = ||{\bf w} || = 1
\end{equation}
In two dimensions, it is easy to make an arbitrary orthonormal basis. All we need is a random vector ${\bf u}$, which we have normalized. If we now define the second basis vector to be ${\bf w} = [-u_2,u_1]$, we can check that both conditions are satisfied:
\begin{equation}
{\bf u\cdot w} = - u_1 u_2 + u_2 u_1 = 0
\end{equation}
and
\begin{equation}
{|| {\bf w} ||} = \sqrt{(-u_2)^2 + u_1^2} = \sqrt{u_1^2 + u_2^2} = 1,
\end{equation}
where we used the fact that ${\bf u}$ is normalized. So, with an arbitrary input vector, we can define an orthonormal basis, which we will write in matrix by stacking the basis vectors horizontally:
\begin{equation}
{{\bf W} } =
\begin{pmatrix}
u_1 & w_1 \\
u_2 & w_2
\end{pmatrix}.
\end{equation}
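As a quick numeric illustration of these two conditions (separate from the exercise below), we can construct such a matrix for an example vector and verify that ${\bf W}^\top {\bf W} = {\bf I}$:
```python
u_example = np.array([3.0, 1.0])
u_example = u_example / np.linalg.norm(u_example)       # unit length
w_example = np.array([-u_example[1], u_example[0]])     # orthogonal by construction
W_example = np.column_stack([u_example, w_example])
print(np.allclose(W_example.T @ W_example, np.eye(2)))  # True -> orthonormal basis
```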
### Exercise
In this exercise you will fill in the function below to define an orthonormal basis, given a single arbitrary 2-dimensional vector as an input.
**Suggestions**
* Modify the function `define_orthonormal_basis` to first normalize the first basis vector $\bf u$.
* Then complete the function by finding a basis vector $\bf w$ that is orthogonal to $\bf u$.
* Test the function using initial basis vector ${\bf u} = [3,1]$. Plot the resulting basis vectors on top of the data scatter plot using the function `plot_basis_vectors`. (For the data, use $\sigma_1^2 =1$, $\sigma_2^2 =1$, and $\rho = .8$).
```python
help(plot_basis_vectors)
```
Help on function plot_basis_vectors in module __main__:
plot_basis_vectors(X, W)
Plots bivariate data as well as new basis vectors.
Args:
X (numpy array of floats): Data matrix
each column corresponds to a different random variable
W (numpy array of floats): Square matrix representing new orthonormal basis
each column represents a basis vector
Returns:
Nothing.
```python
def define_orthonormal_basis(u):
    """
    Calculates an orthonormal basis given an arbitrary vector u.
    Args:
      u (numpy array of floats): arbitrary 2-dimensional vector used for new basis
    Returns:
      (numpy array of floats) : new orthonormal basis
      columns correspond to basis vectors
    """
    ###################################################################
    ## Insert your code here to:
    ## normalize vector u
    ## calculate vector w that is orthogonal to u
    #u = ....
    #w = ...
    #W = np.column_stack((u,w))
    #comment this once you've filled the function
    raise NotImplementedError("Student exercise: implement the orthonormal basis function")
    ###################################################################
    return W
variance_1 = 1
variance_2 = 1
corr_coef = 0.8
cov_matrix = calculate_cov_matrix(variance_1, variance_2, corr_coef)
X = get_data(cov_matrix)
u = np.array([3, 1])
#uncomment and run below to plot the basis vectors
#W = define_orthonormal_basis(u)
#plot_basis_vectors(X, W)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D5-DimensionalityReduction/solutions/W1D5_Tutorial1_Solution_c9ca4afa.py)
*Example output:*
# Project data onto new basis
```python
#@title Video: Change of basis
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="5MWSUtpbSt0", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```
Video available at https://youtube.com/watch?v=5MWSUtpbSt0
Finally, we will express our data in the new basis that we have just found. Since $\bf W$ is orthonormal, we can project the data into our new basis using simple matrix multiplication :
\begin{equation}
{\bf Y = X W}.
\end{equation}
We will explore the geometry of the transformed data $\bf Y$ as we vary the choice of basis.
#### Exercise
In this exercise you will fill in the function below to project the data onto the new basis ${\bf W}$.
**Suggestions**
* Complete the function `change_of_basis` to project the data onto the new basis.
* Plot the projected data using the function `plot_data_new_basis`.
* What happens to the correlation coefficient in the new basis? Does it increase or decrease?
* What happens to variance?
```python
def change_of_basis(X,W):
"""
Projects data onto new basis W.
Args:
X (numpy array of floats) : Data matrix
each column corresponding to a different random variable
W (numpy array of floats): new orthonormal basis
columns correspond to basis vectors
Returns:
(numpy array of floats) : Data matrix expressed in new basis
"""
###################################################################
## Insert your code here to:
## project data onto new basis described by W
#Y = ...
#comment this once you've filled the function
raise NotImplementedError("Student excercise: implement change of basis")
###################################################################
return Y
## Uncomment below to transform the data by projecting it into the new basis
## Plot the projected data
# Y = change_of_basis(X,W)
# plot_data_new_basis(Y)
# disp(...)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D5-DimensionalityReduction/solutions/W1D5_Tutorial1_Solution_b434bc0d.py)
*Example output:*
#### Exercise
To see what happens to the correlation as we change the basis vectors, run the cell below. The parameter $\theta$ controls the angle of $\bf u$ in degrees. Use the slider to rotate the basis vectors.
**Questions**
* What happens to the projected data as you rotate the basis?
* How does the correlation coefficient change? How does the variance of the projection onto each basis vector change?
* Are you able to find a basis in which the projected data is uncorrelated?
```python
###### MAKE SURE TO RUN THIS CELL VIA THE PLAY BUTTON TO ENABLE SLIDERS ########
import ipywidgets as widgets
def refresh(theta = 0):
u = [1,np.tan(theta * np.pi/180.)]
W = define_orthonormal_basis(u)
Y = change_of_basis(X,W)
plot_basis_vectors(X,W)
plot_data_new_basis(Y)
_ = widgets.interact(refresh,
theta = (0, 90, 5))
```
| a6bd99534fc08ceeed4c3f27a9d5ceb21f402cc1 | 358,588 | ipynb | Jupyter Notebook | tutorials/W1D5_DimensionalityReduction/student/W1D5_Tutorial1.ipynb | hyosubkim/course-content | 30370131c42fd3bf4f84c50e9c4eaf19f3193165 | [
"CC-BY-4.0"
] | null | null | null | tutorials/W1D5_DimensionalityReduction/student/W1D5_Tutorial1.ipynb | hyosubkim/course-content | 30370131c42fd3bf4f84c50e9c4eaf19f3193165 | [
"CC-BY-4.0"
] | null | null | null | tutorials/W1D5_DimensionalityReduction/student/W1D5_Tutorial1.ipynb | hyosubkim/course-content | 30370131c42fd3bf4f84c50e9c4eaf19f3193165 | [
"CC-BY-4.0"
] | null | null | null | 289.417272 | 131,460 | 0.91961 | true | 4,467 | Qwen/Qwen-72B | 1. YES
2. YES | 0.746139 | 0.815232 | 0.608277 | __label__eng_Latn | 0.934901 | 0.251561 |
# Linear Gaussian filtering and smoothing (discrete)
Provided is an example of discrete, linear state-space models on which one can perform Bayesian filtering and smoothing in order to obtain
a posterior distribution over a latent state trajectory based on noisy observations.
In order to understand the theory behind these methods in detail we refer to [1] and [2].
**References**:
> [1] Särkkä, Simo, and Solin, Arno. Applied Stochastic Differential Equations. Cambridge University Press, 2019.
>
> [2] Särkkä, Simo. Bayesian Filtering and Smoothing. Cambridge University Press, 2013.
```python
import numpy as np
import probnum as pn
from probnum import filtsmooth, randvars, randprocs
from probnum.problems import TimeSeriesRegressionProblem
```
```python
rng = np.random.default_rng(seed=123)
```
```python
# Make inline plots vector graphics instead of raster graphics
%matplotlib inline
from IPython.display import set_matplotlib_formats
set_matplotlib_formats("pdf", "svg")
# Plotting
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
plt.style.use("../../probnum.mplstyle")
```
/tmp/ipykernel_125474/236124620.py:5: DeprecationWarning: `set_matplotlib_formats` is deprecated since IPython 7.23, directly use `matplotlib_inline.backend_inline.set_matplotlib_formats()`
set_matplotlib_formats("pdf", "svg")
## **Linear Discrete** State-Space Model: Car Tracking
We showcase arguably the simplest case, in which we consider the following state-space model. Consider matrices $A \in \mathbb{R}^{d \times d}$ and $H \in \mathbb{R}^{m \times d}$ where $d$ is the state dimension and $m$ is the dimension of the measurements. Then we define the dynamics and the measurement model as follows:
For $k = 1, \dots, K$ and $x_0 \sim \mathcal{N}(\mu_0, \Sigma_0)$:
$$
\begin{align}
\boldsymbol{x}_k &\sim \mathcal{N}(\boldsymbol{A} \, \boldsymbol{x}_{k-1}, \boldsymbol{Q}) \\
\boldsymbol{y}_k &\sim \mathcal{N}(\boldsymbol{H} \, \boldsymbol{x}_k, \boldsymbol{R})
\end{align}
$$
This defines a dynamics model that assumes a state $\boldsymbol{x}_k$ in a **discrete** sequence of states arising from a linear projection of the previous state $x_{k-1}$ corrupted with additive Gaussian noise under a **process noise** covariance matrix $Q$.
Similarly, the measurements $\boldsymbol{y}_k$ are assumed to be linear projections of the latent state under additive Gaussian noise according to a **measurement noise** covariance $R$.
In the following example we consider projections and covariances that are constant over the state and measurement trajectories (linear time invariant, or **LTI**). Note that this can be generalized to a linear time-varying state-space model, as well. Then $A$ is a function $A: \mathbb{T} \rightarrow \mathbb{R}^{d \times d}$ and $H$ is a function $H: \mathbb{T} \rightarrow \mathbb{R}^{m \times d}$ where $\mathbb{T}$ is the "time dimension".
In other words, here, every relationship is linear and every distribution is a Gaussian distribution.
Under these simplifying assumptions it is possible to obtain a filtering posterior distribution over the state trajectory $(\boldsymbol{x}_k)_{k=1}^{K}$ by using a
**Kalman Filter**. The example is taken from Example 3.6 in [2].
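For reference, the recursion the Kalman filter implements consists of a prediction step through the dynamics and an update step that conditions on the new observation. The function below is a minimal, self-contained NumPy sketch of one such step (the textbook form from [2]), shown only for intuition and independent of how `probnum` organizes the computation internally.
```python
import numpy as np

def kalman_step(m, P, y, A, Q, H, R):
    """One predict/update step of the Kalman filter."""
    # Predict: propagate the Gaussian through the linear dynamics
    m_pred = A @ m
    P_pred = A @ P @ A.T + Q
    # Update: condition on the observation y
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    m_new = m_pred + K @ (y - H @ m_pred)
    P_new = P_pred - K @ S @ K.T
    return m_new, P_new
```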
### Define State-Space Model
#### I. Discrete Dynamics Model: Linear, Time-Invariant, Gaussian Transitions
```python
state_dim = 4
observation_dim = 2
```
```python
delta_t = 0.2
# Define linear transition operator
dynamics_transition_matrix = np.eye(state_dim) + delta_t * np.diag(np.ones(2), 2)
# Define process noise (covariance) matrix
noise_matrix = (
np.diag(np.array([delta_t ** 3 / 3, delta_t ** 3 / 3, delta_t, delta_t]))
+ np.diag(np.array([delta_t ** 2 / 2, delta_t ** 2 / 2]), 2)
+ np.diag(np.array([delta_t ** 2 / 2, delta_t ** 2 / 2]), -2)
)
```
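Written out for this constant-velocity car-tracking model (the state stacks the two positions on top of the two velocities), the matrices defined above are
$$
\boldsymbol{A} = \begin{pmatrix} 1 & 0 & \Delta t & 0 \\ 0 & 1 & 0 & \Delta t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \qquad
\boldsymbol{Q} = \begin{pmatrix} \tfrac{\Delta t^3}{3} & 0 & \tfrac{\Delta t^2}{2} & 0 \\ 0 & \tfrac{\Delta t^3}{3} & 0 & \tfrac{\Delta t^2}{2} \\ \tfrac{\Delta t^2}{2} & 0 & \Delta t & 0 \\ 0 & \tfrac{\Delta t^2}{2} & 0 & \Delta t \end{pmatrix},
$$
with $\Delta t = 0.2$.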
To create a discrete, LTI Gaussian dynamics model, `probnum` provides the `LTIGaussian` class.
```python
# Create discrete, Linear Time-Invariant Gaussian dynamics model
noise = randvars.Normal(mean=np.zeros(state_dim), cov=noise_matrix)
dynamics_model = randprocs.markov.discrete.LTIGaussian(
transition_matrix=dynamics_transition_matrix,
noise=noise,
)
```
#### II. Discrete Measurement Model: Linear, Time-Invariant, Gaussian Measurements
```python
measurement_marginal_variance = 0.5
measurement_matrix = np.eye(observation_dim, state_dim)
measurement_noise_matrix = measurement_marginal_variance * np.eye(observation_dim)
```
```python
noise = randvars.Normal(mean=np.zeros(observation_dim), cov=measurement_noise_matrix)
measurement_model = randprocs.markov.discrete.LTIGaussian(
transition_matrix=measurement_matrix,
noise=noise,
)
```
#### III. Initial State Random Variable
```python
mu_0 = np.zeros(state_dim)
sigma_0 = 0.5 * measurement_marginal_variance * np.eye(state_dim)
initial_state_rv = randvars.Normal(mean=mu_0, cov=sigma_0)
```
```python
prior_process = randprocs.markov.MarkovSequence(
transition=dynamics_model, initrv=initial_state_rv, initarg=0.0
)
```
### Generate Data for the State-Space Model
Next, sample both latent states and noisy observations from the specified state-space model.
```python
time_grid = np.arange(0.0, 10.0, step=delta_t)
```
```python
latent_states, observations = randprocs.markov.utils.generate_artificial_measurements(
rng=rng,
prior_process=prior_process,
measmod=measurement_model,
times=time_grid,
)
```
```python
regression_problem = TimeSeriesRegressionProblem(
observations=observations,
locations=time_grid,
measurement_models=[measurement_model] * len(time_grid),
)
```
### Kalman Filtering
#### I. Kalman Filter
```python
kalman_filter = filtsmooth.gaussian.Kalman(prior_process)
```
#### II. Perform Kalman Filtering + Rauch-Tung-Striebel Smoothing
```python
state_posterior, _ = kalman_filter.filtsmooth(regression_problem)
```
The method `filtsmooth` returns a `KalmanPosterior` object which provides convenience functions for e.g. sampling and interpolation.
We can also extract the just computed posterior smoothing state variables.
This yields a list of Gaussian random variables from which we can extract the statistics in order to visualize them.
```python
grid = state_posterior.locations
posterior_state_rvs = (
state_posterior.states
) # List of <num_time_points> Normal Random Variables
posterior_state_means = posterior_state_rvs.mean # Shape: (num_time_points, state_dim)
posterior_state_covs = (
posterior_state_rvs.cov
) # Shape: (num_time_points, state_dim, state_dim)
```
### Visualize Results
```python
state_fig = plt.figure()
state_fig_gs = gridspec.GridSpec(ncols=2, nrows=2, figure=state_fig)
ax_00 = state_fig.add_subplot(state_fig_gs[0, 0])
ax_01 = state_fig.add_subplot(state_fig_gs[0, 1])
ax_10 = state_fig.add_subplot(state_fig_gs[1, 0])
ax_11 = state_fig.add_subplot(state_fig_gs[1, 1])
# Plot means
mu_x_1, mu_x_2, mu_x_3, mu_x_4 = [posterior_state_means[:, i] for i in range(state_dim)]
ax_00.plot(grid, mu_x_1, label="posterior mean")
ax_01.plot(grid, mu_x_2)
ax_10.plot(grid, mu_x_3)
ax_11.plot(grid, mu_x_4)
# Plot marginal standard deviations
std_x_1, std_x_2, std_x_3, std_x_4 = [
np.sqrt(posterior_state_covs[:, i, i]) for i in range(state_dim)
]
ax_00.fill_between(
grid,
mu_x_1 - 1.96 * std_x_1,
mu_x_1 + 1.96 * std_x_1,
alpha=0.2,
label="1.96 marginal stddev",
)
ax_01.fill_between(grid, mu_x_2 - 1.96 * std_x_2, mu_x_2 + 1.96 * std_x_2, alpha=0.2)
ax_10.fill_between(grid, mu_x_3 - 1.96 * std_x_3, mu_x_3 + 1.96 * std_x_3, alpha=0.2)
ax_11.fill_between(grid, mu_x_4 - 1.96 * std_x_4, mu_x_4 + 1.96 * std_x_4, alpha=0.2)
# Plot noisy observations
obs_x_1, obs_x_2 = [observations[:, i] for i in range(observation_dim)]
ax_00.scatter(time_grid, obs_x_1, marker=".", label="measurements")
ax_01.scatter(time_grid, obs_x_2, marker=".")
# Add labels etc.
ax_00.set_xlabel("t")
ax_01.set_xlabel("t")
ax_10.set_xlabel("t")
ax_11.set_xlabel("t")
ax_00.set_title(r"$x_1$")
ax_01.set_title(r"$x_2$")
ax_10.set_title(r"$x_3$")
ax_11.set_title(r"$x_4$")
handles, labels = ax_00.get_legend_handles_labels()
state_fig.legend(handles, labels, loc="center left", bbox_to_anchor=(1, 0.5))
state_fig.tight_layout()
```
| 5d0b3ac5acf543075f27d21eebee94dcb857f616 | 222,441 | ipynb | Jupyter Notebook | docs/source/tutorials/filtsmooth/discrete_linear_gaussian_filtering_smoothing.ipynb | probabilistic-numerics/probabilistic-numerics | 14591396b1b7deae17ed8a40fb302919eb471805 | [
"MIT"
] | null | null | null | docs/source/tutorials/filtsmooth/discrete_linear_gaussian_filtering_smoothing.ipynb | probabilistic-numerics/probabilistic-numerics | 14591396b1b7deae17ed8a40fb302919eb471805 | [
"MIT"
] | null | null | null | docs/source/tutorials/filtsmooth/discrete_linear_gaussian_filtering_smoothing.ipynb | probabilistic-numerics/probabilistic-numerics | 14591396b1b7deae17ed8a40fb302919eb471805 | [
"MIT"
] | null | null | null | 90.755202 | 115,446 | 0.742772 | true | 2,306 | Qwen/Qwen-72B | 1. YES
2. YES | 0.822189 | 0.73412 | 0.603585 | __label__eng_Latn | 0.733415 | 0.240661 |
```python
from sympy import *
from sympy import init_printing; init_printing(use_latex='mathjax')
```
```python
var('W k')
variables = input('Ingrese las variables a usar: ')
var(variables)
potencial = sympify(input('Ingrese el potencial del sistema: '))
lim = (input('Ingrese los limites de integración separados por un espacio (si desea ingresar infinito escriba "oo"): ')).split()
a = sympify(lim[0])
b = sympify(lim[1])
n = int(input('Ingrese el numero de funciones: '))
funcion = []
for i in range(n):
ff = input('Ingrese la funcion: ')
funcion.append(ff)
funciones = sympify(funcion)
def Hamiltoniano(fun, potencial):
    # Apply the Hamiltonian operator: kinetic term + potential term
    K = (-hbar**2/(2*m))*diff(fun,x,2)
    P = potencial*fun
    return K + P
# H: Hamiltonian matrix, S: overlap matrix, C: matrix of variational coefficients c_ij
H = zeros(n,n)
S = zeros(n,n)
C = ones(n,n)
for i in range(n):
    for j in range(n):
        c = sympify('c%d%d'%(j+1,i+1))
        C[i,j] = C[i,j]*c
        H[i,j] = integrate(funciones[i]*Hamiltoniano(funciones[j],potencial),(x,a,b))
        S[i,j] = integrate(funciones[i]*funciones[j],(x,a,b))
# Secular equation: det(H - W*S) = 0 gives the variational energies W
Sols = solve((H-S*W).det(),W)
```
Ingrese las variables a usar: x m l hbar
Ingrese el potencial del sistema: 0
Ingrese los limites de integración separados por un espacio (si desea ingresar infinito escriba "oo"): 0 l
Ingrese el numero de funciones: 4
Ingrese la funcion: x*(l-x)
Ingrese la funcion: x**2*(l-x)**2
Ingrese la funcion: x*(l-x)*(l/2-x)
Ingrese la funcion: x**2*(l-x)**2*(l/2-x)
```python
H
```
$$\left[\begin{matrix}\frac{\hbar^{2} l^{3}}{6 m} & \frac{\hbar^{2} l^{5}}{30 m} & 0 & 0\\\frac{\hbar^{2} l^{5}}{30 m} & \frac{\hbar^{2} l^{7}}{105 m} & 0 & 0\\0 & 0 & \frac{\hbar^{2} l^{5}}{40 m} & \frac{\hbar^{2} l^{7}}{280 m}\\0 & 0 & \frac{\hbar^{2} l^{7}}{280 m} & \frac{\hbar^{2} l^{9}}{1260 m}\end{matrix}\right]$$
```python
S
```
$$\left[\begin{matrix}\frac{l^{5}}{30} & \frac{l^{7}}{140} & 0 & 0\\\frac{l^{7}}{140} & \frac{l^{9}}{630} & 0 & 0\\0 & 0 & \frac{l^{7}}{840} & \frac{l^{9}}{5040}\\0 & 0 & \frac{l^{9}}{5040} & \frac{l^{11}}{27720}\end{matrix}\right]$$
```python
for i in range(n):
Sols[i] = Sols[i]*m*l**2/(hbar**2)
Sols.sort()
for i in range(n):
Sols[i] = Sols[i]*hbar**2/(m*l**2)
Sols
```
$$\left [ \frac{\hbar^{2}}{l^{2} m} \left(- 2 \sqrt{133} + 28\right), \quad \frac{\hbar^{2}}{l^{2} m} \left(- 18 \sqrt{5} + 60\right), \quad \frac{\hbar^{2}}{l^{2} m} \left(2 \sqrt{133} + 28\right), \quad \frac{\hbar^{2}}{l^{2} m} \left(18 \sqrt{5} + 60\right)\right ]$$
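These variational roots can be compared with the exact particle-in-a-box levels $E_n = n^2\pi^2\hbar^2/(2 m l^2)$: in units of $\hbar^2/(m l^2)$ the two lowest roots, $28-2\sqrt{133}\approx 4.935$ and $60-18\sqrt{5}\approx 19.751$, are upper bounds very close to $\pi^2/2\approx 4.935$ and $2\pi^2\approx 19.739$, while the higher roots overestimate more, as expected for a small basis. A short numeric comparison (reusing `Sols`, `m`, `l` and `hbar` from the cells above) might look like:
```python
exact = [n_**2*pi**2/2 for n_ in range(1, 5)]          # exact levels in units of hbar**2/(m*l**2)
approx = [simplify(s*m*l**2/hbar**2) for s in Sols]    # variational roots in the same units
for e_var, e_ex in zip(approx, exact):
    print(N(e_var, 6), '>=', N(e_ex, 6))
```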
```python
for q in range(len(Sols)):
sistema = (H-S*Sols[q])*C.col(q)
solucion = solve(sistema,C)
list_key_value = Matrix([[k,v] for k, v in solucion.items()])
t = len(list_key_value.col(0))
for i in range(n):
for j in range(1,t+1):
if C[i,q] == list_key_value.col(0)[t-j]:
C[i,q] = list_key_value.col(1)[t-j]
if (sympify('c%d%d'%(q+1,i+1)) in solucion) == False:
cc = sympify('c%d%d'%(q+1,i+1))
C = C.subs(cc,k)
C
```
$$\left[\begin{matrix}\frac{k l^{2}}{3} + \frac{\sqrt{133} k}{21} l^{2} & 0 & - \frac{\sqrt{133} k}{21} l^{2} + \frac{k l^{2}}{3} & 0\\k & 0 & k & 0\\0 & \frac{k l^{2}}{33} + \frac{\sqrt{5} k}{11} l^{2} & 0 & - \frac{\sqrt{5} k}{11} l^{2} + \frac{k l^{2}}{33}\\0 & k & 0 & k\end{matrix}\right]$$
```python
func = Matrix([funciones])
Phis = zeros(n,1)
Real_Phis = zeros(n,1)
for i in range(n):
Phis[i] = func*C.col(i)
cons_normal = N(solve(integrate(Phis[i]**2,(x,a,b))-1,k)[1])
Real_Phis[i] = N(Phis[i].subs(k,cons_normal))
Real_Phis
```
$$\left[\begin{matrix}4.40399751133633 l^{2} x \left(l - x\right) \left(\frac{1}{l^{9}}\right)^{0.5} + 4.99034859672658 x^{2} \left(l - x\right)^{2} \left(\frac{1}{l^{9}}\right)^{0.5}\\16.7823521557751 l^{2} x \left(0.5 l - x\right) \left(l - x\right) \left(\frac{1}{l^{11}}\right)^{0.5} + 71.8478164291383 x^{2} \left(0.5 l - x\right) \left(l - x\right)^{2} \left(\frac{1}{l^{11}}\right)^{0.5}\\- 28.6462005494649 l^{2} x \left(l - x\right) \left(\frac{1}{l^{9}}\right)^{0.5} + 132.721876195613 x^{2} \left(l - x\right)^{2} \left(\frac{1}{l^{9}}\right)^{0.5}\\- 98.9866286733697 l^{2} x \left(0.5 l - x\right) \left(l - x\right) \left(\frac{1}{l^{11}}\right)^{0.5} + 572.256840303692 x^{2} \left(0.5 l - x\right) \left(l - x\right)^{2} \left(\frac{1}{l^{11}}\right)^{0.5}\end{matrix}\right]$$
```python
from matplotlib import style
style.use('bmh')
for i in range(n):
p = plot(Real_Phis[i].subs(l,1),(x,0,1), grid = True)
```
```python
```
| 2d1ad3e43b658361765026a17ac2fb9d2710f82b | 100,932 | ipynb | Jupyter Notebook | Huckel_M0/Metodo_Variacional.ipynb | lazarusA/Density-functional-theory | c74fd44a66f857de570dc50471b24391e3fa901f | [
"MIT"
] | null | null | null | Huckel_M0/Metodo_Variacional.ipynb | lazarusA/Density-functional-theory | c74fd44a66f857de570dc50471b24391e3fa901f | [
"MIT"
] | null | null | null | Huckel_M0/Metodo_Variacional.ipynb | lazarusA/Density-functional-theory | c74fd44a66f857de570dc50471b24391e3fa901f | [
"MIT"
] | null | null | null | 248.600985 | 22,660 | 0.861114 | true | 1,819 | Qwen/Qwen-72B | 1. YES
2. YES | 0.897695 | 0.692642 | 0.621781 | __label__yue_Hant | 0.101925 | 0.282937 |
```python
from matplotlib import pyplot as plt
import numpy as np
from scipy.integrate import quad
from qiskit.quantum_info.states import Statevector, DensityMatrix
from qiskit.quantum_info.operators import Operator
from qiskit.quantum_info.operators import SuperOp, Choi, Kraus, PTM
from qiskit_ode import solve_ode, solve_lmde
from qiskit_ode.signals import Signal
from qiskit_ode.models import HamiltonianModel, LindbladModel
def gaussian(amp, sig, t0, t):
return amp * np.exp( -(t - t0)**2 / (2 * sig**2) )
```
# In this demo
This notebook demonstrates some of the core model building and differential equation solving elements:
- Hamiltonian and signal construction
- Model transformations: entering a frame, making a rotating wave approximation
- Defining and solving differential equations
Sections
1. `Signal`s
2. Constructing a `HamiltonianModel`
3. Setting `frame` and `cutoff_freq` in `HamiltonianModel`
4. Integrating the Schrodinger equation
5. Adding dissipative dynamics with a `LindbladModel` and simulating density matrix evolution
6. Simulate the Lindbladian to get a `SuperOp` representation of the quantum channel
# 1. `Signal`
A `Signal` object represents a complex mixed signal, i.e. a function of the form:
\begin{equation}
s(t) = f(t)e^{i2 \pi \nu t},
\end{equation}
where $f(t)$ is the *envelope* and $\nu$ is the *carrier frequency*.
Here we define a signal with a Gaussian envelope:
```python
amp = 1. # amplitude
sig = 2. # sigma
t0 = 3.5*sig # center of Gaussian
T = 7*sig # end of signal
gaussian_envelope = lambda t: gaussian(amp, sig, t0, t)
gauss_signal = Signal(envelope=gaussian_envelope, carrier_freq=0.5)
```
```python
print(gauss_signal.envelope(0.25))
print(gauss_signal(0.25))
```
0.0033616864879322562
0.0023770713118400873
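These numbers can be checked by hand: the envelope at $t=0.25$ is $e^{-(0.25-7)^2/(2\cdot 2^2)} \approx 0.00336$, and including the carrier gives $0.00336\,\cos(2\pi \cdot 0.5 \cdot 0.25) \approx 0.00238$, matching the printed values (i.e. the evaluated signal corresponds to the real part of $f(t)e^{i2\pi\nu t}$).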
```python
gauss_signal.draw(0, T, 100, function='envelope')
```
```python
gauss_signal.draw(0, T, 200)
```
# 2. The `HamiltonianModel` class
A `HamiltonianModel` is specified as a list of Hermitian operators with `Signal` coefficients. Here, we use a classic qubit model:
\begin{equation}
H(t) = 2 \pi \nu \frac{Z}{2} + 2 \pi r s(t) \frac{X}{2}.
\end{equation}
Generally, a `HamiltonianModel` represents a linear combination:
\begin{equation}
H(t) = \sum_j s_j(t) H_j,
\end{equation}
where:
- $H_j$ are Hermitian operators given as `terra.quantum_info.Operator` objects, and
- $s_j(t) = Re[f_j(t)e^{i2 \pi \nu_j t}]$, where the complex functions $f_j(t)e^{i2 \pi \nu_j t}$ are specified as `Signal` objects.
Constructing a `HamiltonianModel` requires specifying lists of the operators and the signals.
```python
#####################
# construct operators
#####################
r = 0.5
w = 1.
X = Operator.from_label('X')
Y = Operator.from_label('Y')
Z = Operator.from_label('Z')
operators = [2 * np.pi * w * Z/2,
2 * np.pi * r * X/2]
###################
# construct signals
###################
# Define gaussian envelope function to have max amp and area approx 2
amp = 1.
sig = 0.399128/r
t0 = 3.5*sig
T = 7*sig
gaussian_envelope = lambda t: gaussian(amp, sig, t0, t)
signals = [1.,
Signal(envelope=gaussian_envelope, carrier_freq=w)]
#################
# construct model
#################
hamiltonian = HamiltonianModel(operators=operators, signals=signals)
```
## 2.1 Evaluation and drift
Evaluate at a given time.
```python
print(hamiltonian.evaluate(0.12))
```
[[ 3.14159265+0.j 0.00419151+0.j]
[ 0.00419151+0.j -3.14159265+0.j]]
Get the drift (terms corresponding to constant coefficients).
```python
hamiltonian.drift
```
Array([[ 3.14159265+0.j, 0. +0.j],
[ 0. +0.j, -3.14159265+0.j]], backend='numpy')
```python
def plot_qubit_hamiltonian_components(hamiltonian, t0, tf, N=200):
t_vals = np.linspace(t0, tf, N)
model_vals = np.array([hamiltonian.evaluate(t) for t in t_vals])
x_coeff = model_vals[:, 0, 1].real
y_coeff = -model_vals[:, 0, 1].imag
z_coeff = model_vals[:, 0, 0].real
plt.plot(t_vals, x_coeff, label='X component')
plt.plot(t_vals, y_coeff, label='Y component')
plt.plot(t_vals, z_coeff, label='Z component')
plt.legend()
plot_qubit_hamiltonian_components(hamiltonian, 0., T)
```
## 2.2 Enter a rotating frame
We can specify a frame to enter the Hamiltonian in. Given a Hermitian operator $H_0$, *entering the frame* of $H_0$ means transforming a Hamiltonian $H(t)$:
\begin{equation}
H(t) \mapsto \tilde{H}(t) = e^{i H_0 t}H(t)e^{-iH_0 t} - H_0
\end{equation}
Here, we will enter the frame of the drift Hamiltonian, resulting in:
\begin{equation}
\begin{aligned}
\tilde{H}(t) &= e^{i2 \pi \nu \frac{Z}{2} t}\left(2 \pi \nu \frac{Z}{2} + 2 \pi r s(t) \frac{X}{2}\right)e^{-i2 \pi \nu \frac{Z}{2} t} - 2 \pi \nu \frac{Z}{2} \\
&= 2 \pi r s(t) e^{i2 \pi \nu \frac{Z}{2} t}\frac{X}{2}e^{-i2 \pi \nu \frac{Z}{2} t}\\
&= 2 \pi r s(t) \left[\cos(2 \pi \nu t) \frac{X}{2} - \sin(2 \pi \nu t) \frac{Y}{2} \right]
\end{aligned}
\end{equation}
```python
hamiltonian.frame = hamiltonian.drift
```
Evaluate again.
```python
print(hamiltonian.evaluate(0.12))
```
[[0. +0.j 0.00305548+0.00286929j]
[0.00305548-0.00286929j 0. +0.j ]]
```python
# validate with independent computation
t = 0.12
2 * np.pi * r * np.real(signals[1](t)) * (np.cos(2*np.pi * w * t) * X / 2
- np.sin(2*np.pi * w * t) * Y / 2 )
```
Operator([[0. +0.j , 0.00305548+0.00286929j],
[0.00305548-0.00286929j, 0. +0.j ]],
input_dims=(2,), output_dims=(2,))
Replot the coefficients of the model in the Pauli basis over time.
```python
plot_qubit_hamiltonian_components(hamiltonian, 0., T)
```
# 3. Set cutoff frequency (rotating wave approximation)
A common technique to simplify the dynamics of a quantum system is to perform the *rotating wave approximation* (RWA), in which terms with high frequency are averaged to $0$.
The RWA can be applied on any `HamiltonianModel` (in the given `frame`) by setting the `cutoff_freq` attribute, which sets any fast oscillating terms to $0$, effectively performing a moving average on terms with carrier frequencies above `cutoff_freq`.
For our model, the classic `cutoff_freq` is $2 \nu$ (twice the qubit frequency). This approximates the Hamiltonian $\tilde{H}(t)$ as:
\begin{equation}
\tilde{H}(t) \approx 2 \pi \frac{r}{2} \left[Re[f(t)] \frac{X}{2} + Im[f(t)] \frac{Y}{2} \right],
\end{equation}
where $f(t)$ is the envelope of the on-resonance drive. In our case $f(t) = Re[f(t)]$, and so we simply have
\begin{equation}
\tilde{H}(t) \approx 2 \pi \frac{r}{2} Re[f(t)] \frac{X}{2},
\end{equation}
```python
# set the cutoff frequency
hamiltonian.cutoff_freq = 2*w
```
```python
# evaluate again
print(hamiltonian.evaluate(0.12))
```
[[0. +0.00000000e+00j 0.00287496-2.16840434e-19j]
[0.00287496+2.16840434e-19j 0. +0.00000000e+00j]]
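As a rough cross-check, with the real Gaussian envelope used here the off-diagonal entry should be close to $\pi \frac{r}{2} Re[f(t)]$ (a one-line sketch, reusing `gaussian_envelope` from above):
```python
# expected (0,1) entry under the RWA at t = 0.12
np.pi * (r / 2) * np.real(gaussian_envelope(0.12))
```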
We also plot the coefficients of the model in the frame of the drift with the RWA applied. We now expect to see simply a plot of $\pi \frac{r}{2} f(t)$ for the $X$ coefficient.
```python
plot_qubit_hamiltonian_components(hamiltonian, 0., T)
```
# 4. Solve the Schrodinger equation with a `HamiltonianModel`
To solve the Schrodinger Equation for the given Hamiltonian, we construct a `SchrodingerProblem` object, which specifies the desired simulation.
```python
# reset the frame and cutoff_freq properties
hamiltonian.frame = None
hamiltonian.cutoff_freq = None
```
```python
# solve the problem, with some options specified
y0 = Statevector([0., 1.])
%time sol = solve_lmde(hamiltonian, t_span=[0., T], y0=y0, atol=1e-10, rtol=1e-10)
y = sol.y[-1]
print("\nFinal state:")
print("----------------------------")
print(y)
print("\nPopulation in excited state:")
print("----------------------------")
print(np.abs(y.data[0])**2)
```
CPU times: user 406 ms, sys: 2.6 ms, total: 409 ms
Wall time: 407 ms
Final state:
----------------------------
Statevector([0.96087419-0.27193942j 0.0511707 +0.01230027j])
Population in excited state:
----------------------------
0.9972302627366899
When specifying a problem, we can specify which frame to solve in, and a cutoff frequency to solve with.
```python
%time sol = solve_lmde(hamiltonian, t_span=[0., T], y0=y0, solver_cutoff_freq=2*w, atol=1e-10, rtol=1e-10)
y = sol.y[-1]
print("\nFinal state:")
print("----------------------------")
print(y)
print("\nPopulation in excited state:")
print("----------------------------")
print(np.abs(y.data[0])**2)
```
CPU times: user 159 ms, sys: 9.9 ms, total: 169 ms
Wall time: 162 ms
Final state:
----------------------------
Statevector([ 9.62205827e-01-2.72323239e-01j
-2.36278329e-08+8.34847474e-08j])
Population in excited state:
----------------------------
0.9999999999756191
## 4.1 Technical solver notes
- Behind the scenes, the `SchrodingerProblem` constructs an `OperatorModel` from the `HamiltonianModel`, representing the generator in the Schrodinger equation:
\begin{equation}
G(t) = \sum_j s_j(t)\left[-iH_j\right]
\end{equation}
which is then used in a generalized routine for solving DEs of the form $y'(t) = G(t)y(t)$ (a quick numerical sketch of this generator is given below)
- The generalized solver routine will automatically solve the DE in the drift frame, as well as in the basis in which the drift is diagonal (relevant for non-diagonal drift operators, to save on exponentiations for the frame operator).
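For instance, with the frame and cutoff reset to `None` as above, the generator at a given time is just $-i$ times the evaluated Hamiltonian (a quick sketch):
```python
t = 0.12
print(-1j * hamiltonian.evaluate(t))  # G(t) = -i H(t) in the lab frame
```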
# 5. Solving with dissipative dynamics
To simulate with noise operators, we define a `LindbladModel`, containing:
- a model of a Hamiltonian (specified with either a `HamiltonianModel` object, or in the standard decomposition of operators and signals)
- an optional list of noise operators
- an optional list of time-dependent coefficients for the noise operators
Such a system is simulated in terms of the Lindblad master equation:
\begin{equation}
\dot{\rho}(t) = -i[H(t), \rho(t)] + \sum_j g_j(t) \left(L_j \rho L_j^\dagger - \frac{1}{2}\{L_j^\dagger L_j, \rho(t)\}\right),
\end{equation}
where
- $H(t)$ is the Hamiltonian,
- $L_j$ are the noise operators, and
- $g_j(t)$ are the noise coefficients (a direct `numpy` sketch of this right-hand side is given below)
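For orientation, here is a minimal plain-`numpy` sketch of that right-hand side; the helper name is our own illustration, not part of the library API:
```python
def lindblad_rhs(H, rho, noise_ops, noise_coeffs):
    # drho/dt = -i[H, rho] + sum_j g_j ( L_j rho L_j^dag - 1/2 {L_j^dag L_j, rho} )
    out = -1j * (H @ rho - rho @ H)
    for g, L in zip(noise_coeffs, noise_ops):
        LdL = L.conj().T @ L
        out = out + g * (L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL))
    return out
```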
Here we will construct such a model using the above `HamiltonianModel`, along with a noise operator that drives the state to the ground state.
```python
# construct quantum model with noise operators
noise_ops = [np.array([[0., 0.],
[1., 0.]])]
noise_signals = [0.001]
lindblad_model = LindbladModel.from_hamiltonian(hamiltonian=hamiltonian,
noise_operators=noise_ops,
noise_signals=noise_signals)
# density matrix
y0 = DensityMatrix([[0., 0.], [0., 1.]])
%time sol = solve_lmde(lindblad_model, t_span=[0., T], y0=y0, atol=1e-10, rtol=1e-10)
sol.y[-1]
```
CPU times: user 497 ms, sys: 9.5 ms, total: 506 ms
Wall time: 500 ms
[[0.99473642+4.25007252e-17j 0.04620048-2.49934328e-02j]
[0.04620048+2.49934328e-02j 0.00526358+1.11022302e-16j]]
We may also simulate the Lindblad equation with a cutoff frequency.
```python
%time sol = solve_lmde(lindblad_model, t_span=[0., T], y0=y0, solver_cutoff_freq=2*w, atol=1e-10, rtol=1e-10)
sol.y[-1]
```
CPU times: user 480 ms, sys: 6.35 ms, total: 486 ms
Wall time: 482 ms
[[0.99614333-1.01047642e-16j 0.01049406-3.50564736e-02j]
[0.01049406+3.50564736e-02j 0.00252446-1.11022302e-16j]]
## 5.1 Technical notes
- Similarly to the flow of `SchrodingerProblem`, `LindbladProblem` constructs an `OperatorModel` representing the *vectorized* Lindblad equation, which is then used to simulate the Lindblad equation on the vectorized density matrix.
- Frame handling and cutoff frequency handling are handled at the `OperatorModel` level, and hence can be used here as well.
## 5.2 Simulate the Lindbladian/SuperOp
```python
# identity quantum channel in superop representation
y0 = SuperOp(np.eye(4))
%time sol = solve_lmde(lindblad_model, t_span=[0., T], y0=y0, atol=1e-10, rtol=1e-10)
print(sol.y[-1])
```
CPU times: user 546 ms, sys: 2.95 ms, total: 549 ms
Wall time: 547 ms
SuperOp([[ 0.00523988+1.59454572e-17j, 0.05230901-2.08651875e-03j,
0.05230901+2.08651875e-03j, 0.99473642-1.15445593e-16j],
[-0.04508833-2.62771380e-02j, 0.00223689+1.07542007e-03j,
-0.84671078-5.20988650e-01j, 0.04620048+2.49934329e-02j],
[-0.04508833+2.62771380e-02j, -0.84671078+5.20988650e-01j,
0.00223689-1.07542007e-03j, 0.04620048-2.49934329e-02j],
[ 0.99476012+6.24401295e-17j, -0.05230901+2.08651875e-03j,
-0.05230901-2.08651875e-03j, 0.00526358-9.89206549e-18j]],
input_dims=(2,), output_dims=(2,))
```python
print(PTM(y))
```
PTM([[ 5.00000000e-01+0.j, -4.54696754e-08+0.j, 7.38951024e-08+0.j,
5.00000000e-01+0.j],
[-4.54696754e-08+0.j, 4.13498276e-15+0.j, -6.71997263e-15+0.j,
-4.54696754e-08+0.j],
[ 7.38951024e-08+0.j, -6.71997263e-15+0.j, 1.09209723e-14+0.j,
7.38951024e-08+0.j],
[ 5.00000000e-01+0.j, -4.54696754e-08+0.j, 7.38951024e-08+0.j,
5.00000000e-01+0.j]],
input_dims=(2,), output_dims=(2,))
```python
```
| a87ca7853b1813f59092116e220cbe980a86736d | 135,749 | ipynb | Jupyter Notebook | example_notebooks/general_demo.ipynb | divshacker/qiskit-ode | 3b5d7afb1a80faea9b489f1d79b09c1e52580107 | [
"Apache-2.0"
] | null | null | null | example_notebooks/general_demo.ipynb | divshacker/qiskit-ode | 3b5d7afb1a80faea9b489f1d79b09c1e52580107 | [
"Apache-2.0"
] | null | null | null | example_notebooks/general_demo.ipynb | divshacker/qiskit-ode | 3b5d7afb1a80faea9b489f1d79b09c1e52580107 | [
"Apache-2.0"
] | null | null | null | 172.270305 | 31,928 | 0.893495 | true | 4,399 | Qwen/Qwen-72B | 1. YES
2. YES | 0.763484 | 0.757794 | 0.578564 | __label__eng_Latn | 0.785629 | 0.182527 |
# Rootfinding, Newton's Method, and Dynamical Systems
In the Continued Fractions unit, we met with the sequence
\begin{equation}
1, \frac{3}{2}, \frac{17}{12}, \frac{577}{408}, \frac{665857}{470832}, \ldots
\end{equation}
which was generated by $x_{n+1} = \frac12{\left(x_{n} + \frac{2}{x_n}\right)}$; in words, the average of the number and two divided by the number. This unit explores where that sequence came from, and its relationship to $\sqrt{2}$. We'll approach this algebraically, as Newton did. Consider the equation
\begin{equation}
x^2 - 2 = 0.
\end{equation}
Clearly the solutions to this equation are $x = \sqrt{2}$ and $x = -\sqrt{2}$. Let us _shift the origin_ by putting $x = 1 + s$; so $s = 0$ corresponds to $x = 1$.
We draw the vertical part of the new axis that we will shift to, in red. Notice that we use the labels and tickmarks from the old axis, in black.
```python
import numpy as np
from matplotlib import pyplot as plt
fig = plt.figure(figsize=(6, 6))
ax = fig.add_axes([0,0,1,1])
n = 501
x = np.linspace(-1,3,n)
y = x*x-2;
plt.plot(x,y,'b') # x^2-2 is in blue
ax.grid(True, which='both')
ax.axhline(y=0, color='k')
ax.axvline(x=0, color='k')
ax.axvline(x=1, color='r') # The new axis is in red
plt.show()
```
Then
\begin{equation}
\left(1 + s\right)^2 - 2 = 1 + 2s + s^2 - 2 = -1 + 2s + s^2 = 0.
\end{equation}
We now make the surprising assumption that $s$ is so small that we may ignore $s^2$ in comparison to $2s$. If it turned out that $s = 10^{-6}$, then $s^2 = 10^{-12}$, very much smaller than $2s = 2\cdot10^{-6}$; so there are small numbers $s$ for which this is true; but we don't know that this is true, here. We just hope.
```python
fig = plt.figure(figsize=(6, 6))
ax = fig.add_axes([0,0,1,1])
n = 501
s = np.linspace(-1,1,n)
y = -1+2*s+s*s; # Newton would have written s^2 as s*s, too
plt.plot(s,y,'b') # equation in blue again
ax.grid(True, which='both')
ax.axhline(y=0, color='r')
ax.axvline(x=0, color='r')
ax.axvline(x=1/2, color='k') # The new axis is in black
plt.show()
```
Then if $s^2$ can be ignored, our equation becomes
\begin{equation}
-1 + 2s = 0
\end{equation}
or $s = \frac{1}{2}$. This means $x = 1 + s = 1 + \frac{1}{2} = \frac{3}{2}$.
We now repeat the process: shift the origin to $\frac{3}{2}$, not $1$: put now
\begin{equation}
x = \frac{3}{2} +t
\end{equation}
which is equivalent to $s = 1/2 + t$, so
\begin{equation}
\left(\frac{3}{2} + t\right)^2 - 2 = \frac{9}{4} + 3t + t^2 - 2 = \frac{1}{4} + 3t + t^2 = 0.
\end{equation}
```python
fig = plt.figure(figsize=(6, 6))
ax = fig.add_axes([0,0,1,1])
n = 501
t = np.linspace(-0.2,0.2,n)
y = 1/4+3*t+t*t; # Newton would have written t^2 as t*t, too
plt.plot(t,y,'b') # equation in blue again
ax.grid(True, which='both')
ax.axhline(y=0, color='k')
ax.axvline(x=0, color='k')
ax.axvline(x=-1/12, color='r') # The new axis will be red again
plt.show()
```
This gives $3t + t^2 + \frac{1}{4} = 0$ and again we ignore $t^2$ and hope it's smaller than $3t$. This gives
\begin{equation}
3t + \frac{1}{4} = 0
\end{equation}
or $t = -\frac{1}{12}$. This means $x = \frac{3}{2} - \frac{1}{12}$ or $x = \frac{17}{12}$. Now we see the process.
Again, shift the origin: $x = \frac{17}{12} + u$. Now
\begin{equation}
\left(\dfrac{17}{12} + u\right)^2 - 2 = \dfrac{289}{144} + \dfrac{17}{6}u + u^2 - 2 = 0.
\end{equation}
Ignoring $u^2$,
\begin{equation}
\dfrac{17}{6}u + \dfrac{1}{144} = 0
\end{equation}
or
\begin{equation}
u = \dfrac{-6}{17\cdot144} = \dfrac{-1}{17\cdot24} = \dfrac{-1}{408}.
\end{equation}
Thus,
\begin{equation}
x = \dfrac{17}{12} - \dfrac{1}{408} = \dfrac{577}{408}.
\end{equation}
As we saw in the Continued Fractions unit, these are the exact square roots of numbers ever more close to 2. For instance,
\begin{equation}
\dfrac{577}{408} = \sqrt{2 + \dfrac{1}{408^2}}.
\end{equation}
## Euler again
It was Euler who took Newton's "shift the origin" strategy and made a general method—which we call Newton's method—out of it. In modern notation, Euler considered solving $f(x) = 0$ for a differentiable function $f(x)$, and used the tangent line approximation near an initial approximation $x_0$: if $x = x_0 + s$ then, using $f'(x_0)$ to denote the slope at $x_0$, $0 = f(x) = f(x_0 + s) \approx f(x_0) + f'(x_0)s$ ignoring terms of order $s^2$ or higher. Then
\begin{equation}
s = -\dfrac{f(x_0)}{f'(x_0)}
\end{equation}
so
\begin{equation}
x \approx x_0 + s = x_0 - \dfrac{f(x_0)}{f'(x_0)}.
\end{equation}
The fundamental idea of Newton's method is that, if it worked once, we can do it again: pass the parcel! Put
$$
\begin{align}
x_1 &= x_0 - \dfrac{f(x_0)}{f'(x_0)} \\
x_2 &= x_1 - \dfrac{f(x_1)}{f'(x_1)} \\
x_3 &= x_2 - \dfrac{f(x_2)}{f'(x_2)}
\end{align}
$$
and keep going, until $f(x_k)$ is so small that you're happy to stop.
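In code, the whole procedure fits in a few lines. Here is a minimal sketch (the function name and the stopping rule are ours):
```python
def newton(f, fprime, x0, tol=1e-12, maxiter=20):
    x = x0
    for _ in range(maxiter):
        fx = f(x)
        if abs(fx) <= tol:   # happy to stop: the residual is tiny
            break
        x = x - fx / fprime(x)
    return x

# the square-root iteration again: f(x) = x^2 - 2
newton(lambda x: x*x - 2, lambda x: 2*x, 1.0)
```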
Notice that each $x_k$ solves
\begin{equation}
f(x) - f(x_k) = 0
\end{equation}
not $f(x) = 0$. But if $f(x_k)$ is really small, you've solved "almost as good" an equation, like finding $\sqrt{2 + \frac{1}{408^2}}$ instead of $\sqrt{2}$. So where did $\frac12{\left(x_n + \frac{2}{x_n}\right)}$ come from?
\begin{equation}
x_{n+1} = x_n - \dfrac{f(x_n)}{f'(x_n)} = x_n - \dfrac{\left(x_n^2 - 2\right)}{2x_n}
\end{equation}
because if $f(x) = x^2 - 2$, $f'(x) = 2x - 0 = 2x$. Therefore,
$$
\begin{align}
x_{n+1} &= x_n - \dfrac{\left(x_n^2 - 2\right)}{2x_n} \\
&= \dfrac{2x_n^2 - x_n^2 + 2}{2x_n} \\
&= \dfrac{x_n^2 + 2}{2x_n} \\
&= \dfrac{1}{2}\left(x_n + \dfrac{2}{x_n}\right)
\end{align}
$$
as claimed. (For more complicated functions one _shouldn't_ simplify. But for $x^2 - a$, it's okay.)
Executing this process in decimals, using a calculator (our handy HP48G+ again), with $x_0 = 1$, we get
$$
\begin{align}
x_0 &= 1 \nonumber \\
x_1 &= \underline{1}.5 \nonumber \\
x_2 &= \underline{1.4}1666\ldots \nonumber \\
x_3 &= \underline{1.41421}568628 \nonumber \\
x_4 &= \underline{1.41421356238} \nonumber \\
x_5 &= x_4 \text{ to all 11 places in the calculator}
\end{align}
$$
Now $\sqrt{2} = 1.41421356237$ on this calculator. We see (approximately) 1, 2, 5 then 10 correct digits. The convergence behaviour is clearer in the continued fraction representation:
\begin{equation}
1, 1 + \left[2\right], 1 + \left[2, 2, 2\right], 1 + \left[2, 2, 2, 2, 2, 2, 2 \right],
1 + \left[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2\right]
\end{equation}
with 0, 1, 3, 7, 15 twos in the fraction part: each time doubling the previous plus 1, giving $2^0 - 1$, $2^1 - 1$, $2^2 - 1$, $2^3 - 1$, $2^4 - 1$ correct entries. This "almost doubling the number of correct digits with each iteration" is quite characteristic of Newton's method.
## Newton's Method
In the [Continued Fractions](continued-fractions.ipynb) unit, we saw Newton's method to extract square roots: $x_{n+1} = (x_n + a/x_n)/2$. That is, we simply took the average of our previous approximation with the result of dividing our approximation into what we wanted to extract the square root of. We saw rapid convergence, in that the number of correct entries in the continued fraction for $\sqrt{a}$ roughly doubled with each iteration. This simple iteration comes from a more general technique for finding zeros of nonlinear equations, such as $f(x) = x^5 + 3x + 2 = 0$. As we saw above, for general $f(x)$, Newton's method is
\begin{equation}
x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}\>,
\end{equation}
where the notation $f'(x_n)$ means the _derivative_ of $f(x)$ evaluated at $x=x_n$. In the special case $f(x) = x^2-a$, whose roots are $\pm \sqrt{a}$, the general Newton iteration above reduces to the simple formula we used before, because $f'(x) = 2x$.
### Derivatives
We're not assuming that you know so much calculus that you can take derivatives of general functions, although if you do, great. We _could_ show a Python method to get the computer to do it, and we might (or we might leave it to the exercises; it's kind of fun). But we do assume that you know that derivatives of polynomials are easy: for instance, if $f(x) = x^3$ then $f'(x) = 3x^2$, and similarly if $f(x) = x^n$ for some integer $n$ then $f'(x) = n x^{n-1}$ (which holds true even if $n$ is zero or a negative integer---or, for that matter, if $n$ is any constant). Then by the _linearity_ of the derivative, we can compute the derivative of a polynomial expressed in the monomial basis. For instance, if $f(x) = x^5 + 3x + 2$ then $f'(x) = 5x^4 + 3$. It turns out that the Python implementation of polynomials knows how to do this as well (technically, polynomials _know how to differentiate themselves_), as we will see. For this example we can use this known derivative in the Newton formula to get the iteration
\begin{equation}
x_{n+1} = x_n - \frac{x^5 + 3x + 2}{5x^4+3}\>.
\end{equation}
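As a quick sketch of the remark that polynomials know how to differentiate themselves, NumPy's `Polynomial` class can supply the derivative for us:
```python
from numpy.polynomial import Polynomial

p = Polynomial([2, 3, 0, 0, 0, 1])   # 2 + 3x + x^5
dp = p.deriv()                       # 3 + 5x^4
x = 0.0
for k in range(8):
    x = x - p(x) / dp(x)
print(x, p(x))
```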
#### Exercises
Write Newton's method iterations for the following functions, and estimate the desired root(s).
1. $f(x) = x^2 - 3$ with $x_0 = 1$
2. $f(x) = x^3 - 2$ with $x_0 = 1$
3. Newton's original example, $f(x) = x^3 - 2x - 5 = 0$. Since $f(0) = -5$, $f(1) = -6$, $f(2) = -1$, and $f(3) = 16$, we see there is a root between $x=2$ and $x=3$. Use that knowledge to choose your initial estimate.
### The Railway Pranksters Problem
Late one night, some pranksters weld a $2$cm piece of steel into a train track $2$km long, sealing the gaps meant to allow for heat expansion. In the morning as the temperature rises, the train track expands and bows up into a perfectly circular arc. How high is the arc in the middle?
The fun part of this problem is setting it up and drawing it. We should let you do it, so we will hide our drawing and set-up. The solution, using Newton's method, is below. Our solution won't make sense until you do your own drawing. We used a symbol $R$ for the radius of the circular arc, and $s$ for the small increment in the length of the track.
```python
s = 1.0e-5
f = lambda R: np.sin((1+s)/R) - 1/R
df = lambda R: -(1 + s) / R ** 2 * np.cos((1 + s) / R) + 1 / R ** 2
n = 5
Ar = [1000.0/9.0]
for k in range(n):
nxt = Ar[k] - f(Ar[k])/df(Ar[k])
Ar.append(nxt)
print( Ar, [ f(rho) for rho in Ar] )
```
[111.11111111111111, 123.86244904780746, 128.59835439622927, 129.09631810259108, 129.10118725925943, 129.10118771897493] [-3.1503152937012446e-08, -6.973718736161261e-09, -6.092985586556021e-10, -5.843914761827218e-12, -5.516420653606247e-16, 8.673617379884035e-19]
The convergence there was a _bit_ slow; the residual was already pretty small with our initial estimate (we explain below where we got it) but Newton's method got us there in the end. The difficulty is that the root is _nearly_ a multiple root. We talk a little about that, below.
```python
height = 1/(Ar[-1] + np.sqrt(Ar[-1]**2-1))
print( "Height in is {} km or {} feet".format(height,3280.84*height))
```
Height is 0.0038729891556916357 km or 12.706657741559347 feet
That might seem ridiculous. Consider the related problem, where instead of bowing up in a perfect circular arc, the track bows up into two straight lines meeting in the middle, making an isosceles triangle with base $2$km long and sides $1+s$. The height satisfies $h^2 +1 = (1+s)^2$ by Pythagoras; since $s=10^{-5}$km, we find pretty rapidly that $h = \sqrt{(1+s)^2-1} = \sqrt{2s + s^2}$ or $0.00447$km or about $14.7$ feet.
Okay, then. Inserting one extra centimeter in a kilometer gives a height of 4.47m or 14.7 feet if we make a triangle; or 3.9 meters (12.7 feet) if it makes a semicircular arc. This is kind of wild, but would be true if our crazed assumptions were true. By the way, we used the height estimate from the triangle problem to give us an $R$ estimate for the circular arc problem; that $1000/9$ didn't come out of nowhere!
### A more serious application
[Kepler's equation](https://en.wikipedia.org/wiki/Kepler's_equation) is
\begin{equation}
M = E - e\sin E
\end{equation}
where the _mean anomaly_ $M = n(t-t_0)$ (with $t$ the time, $t_0$ the starting time, and $n = 2\pi/T$ the sweep speed for a circular orbit with period $T$) and the _eccentricity_ $e$ of the (elliptical) orbit are known, and one wants to compute the _eccentric anomaly_ $E$ by solving that nonlinear equation for $E$. Since we typically want to do this for a succession of times $t$, initial estimates are usually handy for the purpose (just use the $E$ from the previous instant of time), and typically only one Newton correction is needed.
```python
T = 1 # Period of one Jupiter year, say (equal to 11.8618 earth years)
n = 2*np.pi/T
# At t=t_0, which we may as well take to be zero, M=0 and so E = 0 as well.
nsamples = 12 # Let's take 13 sample positions including 0.
e = 0.0487 # Jupiter has a slightly eccentric orbit; not really visible to the eye here
E = [0.0]
f = lambda ee: ee-e*np.sin(ee)-M
df = lambda ee: 1 - e*np.cos(ee)
newtonmax = 5
for k in range(nsamples):
M = n*(k+1)/nsamples # measuring time in Jupiter years
EE = E[k]
newtoniters = 0
for j in range(newtonmax):
residual = f(EE);
if abs(residual) <= 1.0e-12:
break
EE = EE - residual/df(EE)
E.append(EE)
x = [np.cos(ee)-e for ee in E]
y = [np.sqrt(1-e*e)*np.sin(ee) for ee in E]
fig = plt.figure(figsize=(6, 6))
ax = fig.add_axes([0,0,1,1])
plt.scatter(x,y,s=200,color='orange')
ax.set_aspect('equal')
ax.grid(True, which='both')
plt.show()
```
### A useful interpretation
When we were looking at square roots, we noticed that each iteration (say $17/12$) could be considered the _exact_ square root of something near to what we wanted to extract the root of; this kind of interpretation is possible for general functions as well. Each iterate $x_n$, when we are trying to solve $f(x) = 0$, can be considered to be the _exact_ zero of $g(x) = f(x)-r_n$, where $r_n = f(x_n)$ is the so-called _residual_. The equation is so simple that it's slippery: $x_n$ solves $f(x) - f(x_n) = 0$. Of course it does. No matter what you put in for $x_n$, so long as $f(x)$ was defined there, you would have a zero. The key of course is the question of whether or not the residual is small enough to ignore. There are two issues with that idea.
1. The function $f(x)$ might be sensitive to changes (think of $f(x) = (x-\pi)^2$, for instance; adding just a bit to it makes both roots complex, and the change is quite sudden).
2. The change might not make sense in the physical/chemical/biological context in which $f(x)$ arose.
This second question is not mathematical in nature: it depends on the context of the problem. Nevertheless, as we saw for the square roots, this is _often_ a very useful look at our computations.
To answer the question of sensitivity, we need calculus, and the general notion of derivative:
$f(x)+s = 0$ makes some kind of change to the zero of $f(x)=0$, say $x=r$. If the new root is $x=r+t$ where $t$ is tiny (just like $s$ is small) then calculus says $0 = f(r+t) + s = f(r)+f'(r)t+s + O(t^2)$ where we have used the tangent-line approximation (which is, in fact, where Newton iteration comes from). Solving for this we have that $t = -s/f'(r)$, because of course $f(r)=0$.
This is nothing more (or less!) than the calculus derivation of Newton's method, but that's not what we are using it for here. We are using it to _estimate_ the sensitivity of the root to changes in the function. We see that if $f'(r)$ is small, then $t$ will be much larger than $s$; in this case we say that the zero of $f(x)$ is _sensitive_. If $f'(r)=0$ we are kind of out of luck with Newton's method, as it is; it needs to be fixed up.
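A quick numeric check of this estimate, using the familiar $f(x)=x^2-2$, where $r=\sqrt2$ and $f'(r)=2\sqrt2$:
```python
s = 1.0e-6
r = np.sqrt(2.0)
exact_shift = np.sqrt(2.0 - s) - r   # the root of f(x)+s = 0 is sqrt(2-s)
estimate = -s / (2.0 * r)            # t = -s/f'(r)
print(exact_shift, estimate)
```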
### Things that can go wrong
Newton's method is a workhorse of science, there's no question. There are multidimensional versions, approximate versions, faster versions, slower versions, applications to just about everything, especially optimization. But it is not a perfect method. Solving nonlinear equations is _hard_. Here are some kinds of things that can go wrong.
1. The method can converge to the wrong root. If there's more than one root, then this can happen.
2. The method can divide by zero (if you happen to hit a place where the derivative is zero). Even just with $f(x)=x^2-2$ this can happen if you choose your initial guess as $x_0 = 0$. (See if you can either find other $x_0$ that would lead to zero, or else prove it couldn't happen for this function).
3. If the derivative is _small_ but not zero the convergence can be very, very slow. This is related to the sensitivity issue.
4. You might get caught in a _cycle_ (see the animated GIF as the header of this chapter). That is, like with the game of pass the parcel with the Gauss map, we may eventually hit the initial point we started with.
5. The iteration may take a very long time to settle down before converging.
6. The iteration may go off to infinity.
Let's take Cleve Moler's "perverse example", from section 4.3 of [Numerical Computing with Matlab](https://www.mathworks.com/moler/chapters.html), namely
\begin{align}
f(x) &= \sqrt{x-r} \mathrm{\ if\ } x \ge r \nonumber\\
&= -\sqrt{r-x} \mathrm{\ if\ } x \le r
\end{align}
We will draw this in the next cell, for some harmless $r$. This peculiar function was chosen so that _every_ initial estimate (that wasn't exactly right, i.e. $x=r$) would create a two-cycle: pretty mean.
Let's check. The derivative is
\begin{align}
f(x) &= \frac12\left(x-r\right)^{-1/2} \mathrm{\ if\ } x > r \nonumber\\
&= -\frac12\left(r-x\right)^{-1/2} \mathrm{\ if\ } x < r
\end{align}
So the Newton iteration is, if $x_n > r$,
\begin{equation}
x_{n+1} = x_n - \frac{ (x_n-r)^{1/2} }{ (x_n-r)^{-1/2}/2 } = x_n - 2(x_n-r) = 2r - x_n
\end{equation}
which can be rewritten to be $x_{n+1}-r = r -x_n$, so the distance to $r$ is exactly the same but now we are on the other side. If instead $x_n < r$,
\begin{equation}
x_{n+1} = x_n - \frac{ -(r-x_n)^{1/2} }{ +(r-x_n)^{-1/2}/2 } = x_n + 2(r-x_n) = 2r - x_n
\end{equation}
which again says $x_{n+1}-r = r -x_n$.
In particular, take $x_0 = 0$. Then $x_1 = 2r$. Then $x_2 = 0$ again.
Obviously this example was contrived to show peculiar behaviour; but these things really can happen.
```python
r = 0.73 # Some more or less random number
n = 501
x = np.linspace(-1+r,1+r,n)
y = np.sign(x-r)*np.sqrt(np.abs(x-r))
fig = plt.figure(figsize=(6, 6))
ax = fig.add_axes([0,0,1,1])
plt.plot(x,y,'k')
ax.grid(True, which='both')
ax.axhline(y=0, color='k')
ax.axvline(x=0, color='k')
plt.title("Cleve Moler's really mean example")
plt.show()
```
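To see the two-cycle numerically, here is a small sketch of the Newton iteration for this function (reusing `r` from the cell above):
```python
f = lambda x: np.sign(x - r) * np.sqrt(np.abs(x - r))
df = lambda x: 0.5 / np.sqrt(np.abs(x - r))   # valid away from x = r
x = 0.0
for k in range(6):
    print(k, x)
    x = x - f(x) / df(x)
```
The iterates bounce back and forth between $0$ and $2r$, exactly as predicted.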
## A _Really Useful_ trick to find initial guesses
One of our "generic good questions" was "can you find an easier question to solve?" and "does answering that easier question help you to solve this one?" This gives an amazing method, called _continuation_ or _homotopy continuation_, for solving many equations, such as polynomial equations. The trick is easily stated. Suppose you want to solve $H(x) = 0$ (H for "Hard") and you have no idea where the roots are. You might instead be able to solve $E(x) = 0$, a similar equation but one you know the roots of already (E for "Easy", of course). Then consider blending them: $F(x,t) = (1-t)E(x) + tH(x)$ for some parameter $t$, which we take to be a real number between $0$ and $1$.
You start by solving the Easy problem, at $t=0$, because $F(x,0) = E(x)$. Then increase $t$ a little bit, say to $t=0.1$; use the roots of $E(x)$ as your initial estimate of the roots of $F(x,0.1) = 0.9E(x) + 0.1H(x)$. If it works, great; increase $t$ again, and use those recently-computed roots of $F(x,0.1)$ as the initial estimates for (say) $F(x,0.2)$. Continue on until either the method breaks down (it can, the root paths can cross, which is annoying, or go off to infinity, which is worse) or you reach the end.
As an example, suppose you want to solve $h(x) = x e^x - 2.0 = 0$. (This isn't a very hard problem at all, but it's a nice example). If we were trying to solve $e(x) = e^x - 2.0 = 0$, then we already know the answer: $x=\ln(2.0) = 0.69314718056\ldots$. So $f(x,t) = (1-t)(e^x - 2) + t(xe^x -2) = (1-t+tx)e^x - 2 = (1+t(x-1))e^x - 2$. Newton's method for this is straightforward (this is the first non-polynomial function we've used, but we bet that you know that the derivative of $e^x$ is $e^x$, and that you know the product rule as well). So our iteration is (for a fixed $t$)
\begin{equation}
x_{n+1} = x_n - \frac{ (1+t(x_n-1))e^{x_n} - 2 }{(1-t+t(x_n+1))e^{x_n}}
\end{equation}
```python
nt = 10
newtonmax = 3
t = np.linspace(0,1,nt)
x = np.zeros(nt+1)
x[0] = np.log(2.0) # Like all snobs, Python insists on log rather than ln
for k in range(1,nt):
xi = x[k-1]
for i in range(newtonmax):
y = np.exp(xi)
ft = (1+t[k]*(xi-1))*y-2;
dft = (1-t[k] + t[k]*(xi+1))*y
xi = xi - ft/dft
x[k] = xi
print( 'The solution is {} and the residual is {}'.format(xi, xi*np.exp(xi)-2.0) )
```
The solution is 0.8526055020137254 and the residual is -2.220446049250313e-16
```python
from scipy import special
reference = special.lambertw(2.0)
print("The reference value is {}".format(reference))
```
The reference value is (0.8526055020137254+0j)
You can learn more about the [Lambert W function here](https://orcca.on.ca/LambertW/) or at the [Wikipedia link](https://en.wikipedia.org/wiki/Lambert_W_function).
### The problem with multiple roots
Newton's iteration divides by $f'(x_n)$, and if the root $r$ we are trying to find happens to be a _multiple_ root, that is, both $f(r) = 0$ and $f'(r)=0$, then the $f'(x_n)$ we are dividing by will get smaller and smaller the closer we get to $r$. This slows Newton iteration to a crawl. Consider $f(x) = W(x^3)$ where $W(s)$ is the [Lambert W function](https://orcca.on.ca/LambertW/). Then since $W(0)=0$ we have $f'(x) = 3x^2 W'(x^3)$ so $f'(0)=0$ as well. Let's look at how this slows us down. Notice that $W(s)$ is itself _defined_ to be the root of the equation $y\exp(y)-s = 0$, and it itself is usually evaluated by an iterative method (in Maple, Halley's method is used because the derivatives are cheap). But let's just let python evaluate it for us here. We can evaluate $W'(x)$ by implicit differentiation: $W(x)\exp W(x) = x$ so $W'(x)\exp W(x) + W(x) W'(x) \exp W(x) = 1$ and therefore $W'(x) = 1/(\exp W(x)(1+ W(x)))$.
```python
f = lambda x: special.lambertw( x**3 )
df = lambda x: 3*x**2/(np.exp(f(x))*(1+ f(x)))
SirIsaac = lambda x: x - f(x)/df(x)
hex = [0.1] # We pretend that we don't know the root is 0
n = 10
for k in range(n):
nxt = SirIsaac( hex[k] )
hex.append(nxt)
print( hex )
```
[0.1, (0.0666333666167554+0j), (0.044415675138000696+0j), (0.02960915295524054+0j), (0.019739179108047577+0j), (0.013159402133893345+0j), (0.008772924760017682+0j), (0.005848614532183306+0j), (0.0038990759647654794+0j), (0.002599383899468682+0j), (0.001732922584427688+0j)]
We can see that the iterates are _decreasing_ but the turtle imitation is absurd.
There is an _almost useless_ trick to speed this up. If we "just happen to know" the multiplicity $\mu$ of the root, then we can speed things up. Here the multiplicity is $\mu=3$. Then we can _modify_ Newton's iteration to speed it up, like so:
\begin{equation}
x_{n+1} = x_n - \mu\frac{f(x_n)}{f'(x_n)}
\end{equation}
Let's try it out.
```python
mu = 3
ModSirIsaac = lambda x: x - mu*f(x)/df(x)
n = 3
hex = [0.2]
for k in range(n):
nxt = ModSirIsaac( hex[k] )
hex.append(nxt)
print( hex )
```
[0.2, (-0.0015873514490433727+0j), (-6.348810149478523e-12+0j), (8.077935669463161e-28+0j)]
So, it _does_ work; instead of hardly getting anywhere in ten iterations, it got the root in three. __NB: The first time we tried this, we had a bug in our derivative and the iterates did not approach zero with the theoretical speed: so we deduced that there must have been a bug in the derivative, and indeed there was.__
But this trick is (as we said) almost useless; because if we don't know the root, how do we know its multiplicity?
### What happens if we get the derivative wrong?
Let's suppose that we get the derivative wrong—maybe deliberately, to save some computation, and we only use the original estimate $f'(x_0)$. This can be useful in some circumstances. In this case, we don't get such rapid approach to the zero, but we can get the iterates to approach the root, if not very fast. Let's try. Take $f(x) = x^2-2$, and our initial estimate $x_0 = 1$ so $f'(x_0) = 2$. This "approximate Newton" iteration then becomes
\begin{equation}
x_{n+1} = x_n - \frac{1}{2}f(x_n)
\end{equation}
or $x_{n+1} = x_n - (x_n^2-2)/2$.
```python
n = 10
hex = [1.0]
f = lambda x: x*x-2
df = 2
quasi = lambda x: x - f(x)/df;
for k in range(n):
nxt = quasi( hex[k] )
hex.append(nxt)
print( hex )
```
[1.0, 1.5, 1.375, 1.4296875, 1.407684326171875, 1.4168967450968921, 1.4130985519638084, 1.4146747931827024, 1.4140224079494415, 1.4142927228578732, 1.4141807698935047]
We see that the iterates are getting closer to the root, but quite slowly.
To be honest, this happens more often when someone codes the derivative incorrectly than it does by deliberate choice to save the effort of computing the derivative. It has to be one ridiculously costly derivative before this slow approach is considered worthwhile (it looks like we get about one more digit of accuracy after two iterations).
### What if there is more than one root?
A polynomial of degree $n$ has $n$ roots (some or all of which may be complex, and some of which may be multiple). It turns out to be important in practice to solve polynomial equations. John McNamee wrote a bibliography in the late nineties that had about _ten thousand_ entries in it—that is, the bibliography listed ten thousand published works on methods for the solution of polynomials. It was later published as a book, but would be best as a computer resource. Unfortunately, we can't find this online anywhere now, which is a great pity. But in any case ten thousand papers is a bit too much to expect anyone to digest. So we will content ourselves with a short discussion.
First, Newton's method is not very satisfactory for solving polynomials. It only finds one root at a time; you need to supply an initial estimate; and then you need to "deflate" each root as you find it, so you don't find it again by accident. This turns out to introduce numerical instability (sometimes). This all _can_ be done but it's not so simple. We will see better methods in the Mandelbrot unit.
But we really don't have to do anything: we can use Maple's `fsolve`, which is robust and fast enough for most purposes. In Python, we can use the similarly-named routine `fsolve` from SciPy, if we only want one root: there are other facilities in NumPy for polynomial rootfinding, which we will meet in a later unit.
We do point out that the "World Champion" polynomial solver is a program called MPSolve, written by Dario Bini and Leonardo Robol. It is freely available at [this GitHub link](https://github.com/robol/MPSolve). The paper describing it is [here](https://www.sciencedirect.com/science/article/pii/S037704271300232X).
```python
from scipy.optimize import fsolve
f = lambda x: x**2+x*np.exp(x)-2
oneroot = fsolve( f, 0.5 )
print( oneroot, oneroot**2 + oneroot*np.exp(oneroot)-2 )
```
[0.72048399] [-2.44249065e-15]
### Complex initial approximations, fractal boundaries, Julia sets, and chaos
Using Newton's method to extract square roots (when you have a calculator, or Google) is like growing your own wheat, grinding it to flour by hand, and then baking bread, when you live a block away from a good bakery. It's kind of fun, but faster to do it the modern way. But even for _cube_ roots, the story gets more interesting when complex numbers enter the picture.
Consider finding all $z$ with
\begin{equation}
z^3 - 8 = 0 .
\end{equation}
The results are $z = 2$, $z = 2\cdot e^{\frac{i\cdot2\pi}{3}}$, and $z = 2\cdot e^{\frac{-i2\pi}{3}}$.
See our [Appendix on complex numbers](../Appendix/complex-numbers.ipynb).
## Exercises
1. Write down as many questions as you can, about this section.
2. Sometimes Newton iteration is "too expensive"; a cheaper alternative is the so-called _secant iteration_, which goes as follows: $z_{n+1} = z_n - f(z_n)(z_{n}-z_{n-1})/(f(z_n) - f(z_{n-1}))$. You need not one, but _two_ initial approximations for this. Put $f(z) = z^2-2$ and start with the two initial approximations $z_0 = 1$, $z_1 = 3/2$. Carry out several steps of this (in exact arithmetic is better). Convert each rational $z_n$ to continued fraction form. Discuss what you find.
3. Try Newton and secant iteration on some functions of your own choosing. You should see that Newton iteration usually takes fewer iterations to converge, but since it needs a derivative evaluation while the secant method does not, each iteration is "cheaper" in terms of computational cost (if $f(z)$ is at all expensive to evaluate, $f'(z)$ usually is too; there are exceptions, of course). The consensus seems to be that the secant method is a bit more practical; but in some sense it is just a variation on Newton's method.
4. Both the Newton iteration and the secant iteration applied to $f(z) = z^2-a^2$ can be _solved analytically_ by the transformation $z = a\coth \theta$. [Hyperbolic functions](https://en.wikipedia.org/wiki/Hyperbolic_functions) The iteration $z_{n+1} = (z_n + a^2/z_n)/2$ becomes (you can check this) $\coth \theta_{n+1} = \cosh 2\theta_n/\sinh 2\theta_n = \coth 2\theta_n$, and so we may take $\theta_{n+1} = 2\theta_n$. This can be solved to get $\theta_n = 2^n\theta_0$ and so we have an analytical formula for each $z_n = a \coth( 2^n \theta_0 )$. Try this on $a^2=2$; you should find that $\theta_0 = \mathrm{invcoth}(1/\sqrt{2})$. By "invcoth" we mean the functional inverse of coth, i.e.: $\coth\theta_0 = 1/\sqrt{2}$. It may surprise you that that number is complex. Nevertheless, you will find that all subsequent iterates are real, and $\coth 2^n\theta_0$ goes to $1$ very quickly.
NB This was inadvertently difficult. Neither numpy nor scipy has an invcoth (or arccoth) function. The Digital Library of Mathematical Functions says (equation 4.37.6) that arccoth(z) = arctanh(1/z). Indeed we had to go to Maple to find out that invcoth$(1/\sqrt{2}) = \ln(1+\sqrt{2}) - i\pi/2$.
5. Try the above with $a^2=-1$. NB the initial estimate $z_0 = 1$ fails! Try $z_0 = e = \exp(1) = 2.71828...$ instead. For this, the $\theta_0 = 1j\arctan(e^{-1})$. Then you might enjoy reading Gil Strang's lovely article [A Chaotic Search for $i$](https://www.jstor.org/stable/2686733).
6. Try to solve the _secant_ iteration for $z^2-a^2$ analytically. You should eventually find a connection to Fibonacci numbers.
7. People keep inventing new rootfinding iterations. Usually they are reinventions of methods that others have invented before, such as so-called _Schroeder_ iteration and _Householder_ iteration. One step along the way is the method known as _Halley iteration_, which looks like this:
\begin{equation*}
z_{n+1} = z_n - \frac{f(z_n)}{f'(z_n) - \frac{f(z_n)f''(z_n)}{2f'(z_n)}}
\end{equation*}
which, as you can see, also involves the _second_ derivative of $f$. When it works, it works quickly, typically converging in fewer iterations than Newton (although, typically, each step is more expensive computationally). Try the method out on some examples. It may help you to reuse your code (or Maple's code) if you are told that Newton iteration on $F(z) = f(z)/\sqrt{f'(z)}$ turns out to be identical to Halley iteration on $f(z)$. __NB: this trick helps you to "re-use" code, but it doesn't generate a particularly efficient iteration. In particular, the square roots muck up the formula for the derivatives, and simplification beforehand makes a big difference to program speed. So if you want speed, you should program Halley's method directly.__
8. Try to solve Halley's iteration for $x^2-a$ analytically. Then you might enjoy reading [Revisiting Gilbert Strang's "A Chaotic Search for i"](https://doi.org/10.1145/3363520.3363521) by Ao Li and Rob Corless; Ao was a (graduate) student in the first iteration of this course at Western, and she solved—in class!—what was then an _open_ problem (this problem!).
9. Let's revisit question 4. It turns out that we don't need to use hyperbolic functions. In the OEIS when searching for the numerators of our original sequence $1$, $3/2$, $17/12$ and so on, and also in the paper [What Newton Might Have Known](https://doi.org/10.1080/00029890.2021.1964274), we find the formulas $x_n = r_n/s_n$ where
\begin{align*}
r_n &= \frac{1}{2}\left( (1+\sqrt2)^{2^n} + (1-\sqrt2)^{2^n}\right) \\
s_n &= \frac{1}{2\sqrt2}\left( (1+\sqrt2)^{2^n} - (1-\sqrt2)^{2^n}\right)
\end{align*}
Verify that this formula gives the same answers (when $a=2$) as the formula in question 4. Try to generalize this formula for other integers $a$. Discuss the growth of $r_n$ and $s_n$: it is termed _doubly exponential_. Show that the error $x_n - \sqrt2$ goes to zero like $1/(3+2\sqrt2)^{2^n}$. How many iterations would you need to get ten thousand digits of accuracy? Do you need to calculate the $(1-\sqrt2)^{2^n}$ part?
10. Do the other results of [Revisiting Gilbert Strang's "A Chaotic Search for i"](https://doi.org/10.1145/3363520.3363521) on secant iteration, Halley's iteration, Householder iteration, and so on, translate to a form like that of question 9? (We have not tried this ourselves yet).
11. Solve the Schroeder iteration problem of the paper [Revisiting Gilbert Strang's "A Chaotic Search for i"](https://doi.org/10.1145/3363520.3363521). This iteration generates the image of the "infinite number of infinity symbols" used in the Preamble, by the way. We don't know how to solve this problem (we mean, analytically, the way Newton, Secant, Halley, and Householder iterations were solved). We'd be interested in your solution.
12. A farmer has a goat, a rope, a circular field, and a pole fixed firmly at the _edge_ of the field. How much rope should the farmer allow so that the goat, tied to the pole, can eat the grass on exactly half the field? Discuss your assumptions a bit, and indicate how much precision in your solution is reasonable.
| 294d05ce6ce0491f6abee59ff6894035566fe282 | 134,928 | ipynb | Jupyter Notebook | book/Contents/rootfinding.ipynb | jameshughes89/Computational-Discovery-on-Jupyter | 614eaaae126082106e1573675599e6895d09d96d | [
"MIT"
] | 14 | 2022-02-21T23:50:22.000Z | 2022-03-23T22:21:55.000Z | book/Contents/rootfinding.ipynb | jameshughes89/Computational-Discovery-on-Jupyter | 614eaaae126082106e1573675599e6895d09d96d | [
"MIT"
] | null | null | null | book/Contents/rootfinding.ipynb | jameshughes89/Computational-Discovery-on-Jupyter | 614eaaae126082106e1573675599e6895d09d96d | [
"MIT"
] | 2 | 2022-02-22T02:43:44.000Z | 2022-02-23T14:27:31.000Z | 157.995316 | 24,416 | 0.848534 | true | 11,062 | Qwen/Qwen-72B | 1. YES
2. YES | 0.779993 | 0.888759 | 0.693226 | __label__eng_Latn | 0.995648 | 0.448927 |
```
%matplotlib inline
```
# Problem 1
Find an analytic expression for the symbol P(f) of the B‐L wavelet filter. Plot |P(f)| for –1/2 < f < +1/2
$$
\begin{aligned}
N_{1}(f)&=30 \sin^{2}{\left (\pi f \right )} \cos^{2}{\left (\pi f \right )} + 30 \cos^{2}{\left (\pi f \right )} + 5\\
N_{2}(f)&=2 \sin^{4}{\left (\pi f \right )} \cos^{2}{\left (\pi f \right )} + 70 \cos^{4}{\left (\pi f \right )}\\
S(f)&=\frac{1}{105 \sin^{8}{\left (\pi f \right )}} \left(2 \sin^{4}{\left (\pi f \right )} \cos^{2}{\left (\pi f \right )} + 30 \sin^{2}{\left (\pi f \right )} \cos^{2}{\left (\pi f \right )} + 70 \cos^{4}{\left (\pi f \right )} + 30 \cos^{2}{\left (\pi f \right )} + 5\right)\\
\hat{\phi}(f)&=\frac{\sqrt{105}}{\pi^{4} f^{4} \sqrt{\frac{1}{\sin^{8}{\left (\pi f \right )}} \left(2 \sin^{4}{\left (\pi f \right )} \cos^{2}{\left (\pi f \right )} + 30 \sin^{2}{\left (\pi f \right )} \cos^{2}{\left (\pi f \right )} + 70 \cos^{4}{\left (\pi f \right )} + 30 \cos^{2}{\left (\pi f \right )} + 5\right)}}\\
\end{aligned}$$
$$P(f)=\frac{\hat{\phi}(2f)}{\hat{\phi}(f)}=\frac{\sqrt{\frac{1}{\sin^{8}{\left (\pi f \right )}} \left(2 \sin^{4}{\left (\pi f \right )} \cos^{2}{\left (\pi f \right )} + 30 \sin^{2}{\left (\pi f \right )} \cos^{2}{\left (\pi f \right )} + 70 \cos^{4}{\left (\pi f \right )} + 30 \cos^{2}{\left (\pi f \right )} + 5\right)}}{16 \sqrt{\frac{1}{\sin^{8}{\left (2 \pi f \right )}} \left(2 \sin^{4}{\left (2 \pi f \right )} \cos^{2}{\left (2 \pi f \right )} + 30 \sin^{2}{\left (2 \pi f \right )} \cos^{2}{\left (2 \pi f \right )} + 70 \cos^{4}{\left (2 \pi f \right )} + 30 \cos^{2}{\left (2 \pi f \right )} + 5\right)}}$$
```
import matplotlib.pyplot as plt
import numpy as np
import sympy as sy
import utilities.wavelet as wv
f = sy.symbols('f')
symbol = wv.battle_lemarie_symbol()
# plot symbol
symbol_lambda = sy.lambdify(f, symbol, 'numpy') # lambdify for plotting
freq = np.linspace(-0.5,0.5)
plt.figure()
plt.title('B-L Symbol')
plt.xlabel('frequency')
plt.plot(freq, [np.abs(symbol_lambda(v)) for v in freq])
plt.show()
```
# Problem 2
Calculate the B-L wavelet filter coefficients $h_k$ for -11 < k < 11.
$$
\begin{aligned}
P(f)&=1/\sqrt{2}\sum_{-\infty}^{\infty}h_{k}e^{-i2\pi kf}\\
\sqrt{2}P(f)&=\sum_{-\infty}^{\infty}h_{k}e^{-i2\pi kf}\\
\end{aligned}
$$
The filter coefficients are the Fourier coefficients of $\sqrt{2}P(f)$
## Wavelet Filter Coefficients
```
import matplotlib.pyplot as plt
import numpy as np
import sympy as sy
from scipy.signal.wavelets import cascade
import utilities.wavelet as wv
def f_coef(exp, period, n):
"""
Find fourier coefficients of expression.
Args:
exp (function): symbolic function
period (float): period of symbolic function
n (int): number of coefficients to compute
Returns:
dc (float): dc coefficient
hp (float): positive coefficients
hn (float): negative coefficients
"""
period = float(period)
# compute delta time
dt = period/n
x_sample = np.zeros([n])
for k in range(n):
if k == 0:
x_sample[k] = exp(0 + np.finfo(float).eps)
else:
x_sample[k] = exp(k*dt)
coef = np.fft.fft(x_sample)/n
dc = coef[0]
hp = coef[1:n/2+1]
hn = coef[-n/2+1:]
return dc, hp, hn
f = sy.symbols('f')
symbol = wv.battle_lemarie_symbol()
symbol_lambda = sy.lambdify(f, symbol)
exp = lambda f: np.sqrt(2)*symbol_lambda(f)
n_coef = 23
dc, hp, hn = f_coef(exp=exp, period=1, n=n_coef)
print 'dc:'
print np.real(dc)
print 'hp:'
for v in hp:
print np.real(v)
print 'hn:'
for v in hn:
print np.real(v)
```
dc:
0.775971513219
hp:
0.4349719571
-0.0590042181291
-0.112420960149
0.0384344633754
0.0443591447983
-0.020884809629
-0.0192461669425
0.0100985636138
0.00799378004201
-0.00351493706705
-0.00166579243544
hn:
-0.00166579243544
-0.00351493706705
0.00799378004201
0.0100985636138
-0.0192461669425
-0.020884809629
0.0443591447983
0.0384344633754
-0.112420960149
-0.0590042181291
0.4349719571
# Problem 3
Find an analytic expression for the Fourier Transform of the B‐L wavelet. Plot its absolute value.
$$
\begin{align}
\phi(t) &\rightarrow\hat{\phi}(\xi)\\
\phi(t-k) &\rightarrow e^{-i2\pi k\xi}\hat{\phi}(\xi)\\
\phi(2t-k) &\rightarrow\frac{1}{2}e^{-i2\pi k\xi}\hat{\phi}(\frac{\xi}{2})\\
\hat{\psi}(\xi) &=\left[\sum(-1)^{k}\bar{h}_{1-k}\frac{\sqrt{2}}{2}e^{-i\pi k\xi}\right]\hat{\phi}(\frac{\xi}{2})\\
l &=1-k\\
k &=1-l\\
\hat{\phi}(\xi) &=\frac{-e^{-i\pi\xi}}{\sqrt{2}}\sum_{l}(-1)^{-l}\hat{h}_{l}e^{i\pi l\xi}\\
\hat{\phi}(\xi) &=\frac{-e^{-i\pi\xi}}{\sqrt{2}}\sum_{l}(-1)^{l}\hat{h}_{l}e^{i\pi l\xi}\\
\hat{\phi}(\xi) &=\frac{-e^{-i\pi\xi}}{\sqrt{2}}\sum_{l}e^{i\pi l}\hat{h}_{l}e^{i\pi l\xi}\\
\hat{\phi}(\xi) &=\frac{-e^{-i\pi\xi}}{\sqrt{2}}\sum_{l}\hat{h}_{l}e^{i\pi l(1+\xi)}\\
\hat{\phi}(\xi) &=-e^{-i\pi\xi}\frac{1}{\sqrt{2}}\sum h_{l}e^{-2\pi il\left(\frac{1+f}{2}\right)}\\
\hat{\phi}(\xi) &=-e^{-i\pi\xi}P\left(\frac{1+\xi}{2}\right)\\
\hat{\psi}(\xi) &=-e^{-i\pi\xi}\overline{P\left(\frac{1+\xi}{2}\right)}\hat{\phi}\left(\frac{\xi}{2}\right)\\
\end{align}
$$
```
import sympy as sy
import numpy as np
import matplotlib.pyplot as plt
import utilities.wavelet as wv
f = sy.symbols('f')
psi_hat = wv.battle_lemarie_wavelet_transform()
psi_hat_lambda = sy.lambdify(f, psi_hat, 'numpy')
freq = np.linspace(-2,2, 100)
freq[0] = np.finfo(float).eps
psi_hat_signal = np.array(np.abs([psi_hat_lambda(v) for v in freq]))
plt.figure()
plt.title('Fourier Transform of Mother Wavelet')
plt.plot(freq, psi_hat_signal)
plt.show()
```
# Problem 4
Find and plot the B‐L scaling function and mother wave.
```
import copy
import matplotlib.pyplot as plt
import sympy as sy
import numpy as np
import utilities.wavelet as wv
def f_trans(x, F, N, M):
"""
Discrete Fourier Transform (frequency to time)
x (symbolic): symbolic function of f
F (float): frequency range
N (int): number of points
M (int): number of aliases
"""
f = sy.symbols('f')
dt = 1/F
df = F/N
T = N/F
xp = copy.deepcopy(x)
for k in range(1, M+1):
xp = xp+x.subs(f,f-k*F)+x.subs(f,f+k*F)
# lambdify symbolic function
xp = sy.lambdify(f, xp, 'numpy')
xps = np.zeros([N])
ts = np.zeros([N])
for n in range(N):
if n == 0:
xps[n] = xp(np.finfo(float).eps)
ts[n] = -T/2
else:
xps[n] = xp(n*df)
ts[n] = n*dt - T/2
Xs = np.fft.fft(xps)*df
Xs = np.fft.fftshift(Xs)
return Xs,ts
# Convert scaling function to time domain
phi_hat = wv.battle_lemarie_scaling_transform()
Xs, ts = f_trans(phi_hat, 8.1, 128, 0)
# Plot scaling function
plt.figure()
plt.subplot(1,2,1)
plt.title('Scaling Function')
plt.plot(ts, Xs)
# Convert wavelet to time domain
psi_hat = wv.battle_lemarie_wavelet_transform()
Xs, ts = f_trans(psi_hat, 8.1, 128, 0)
# Plot wavelet
plt.subplot(1,2,2)
plt.title('Mother Wavelet')
plt.plot(ts, Xs)
plt.show()
```
# Problem 5
Compress your image using the CDF2.4 analysis and synthesis filters given in the Matlab script “qmfilter.m”. Create corresponding high‐pass filters. Encode 4 levels. Set cutoff to .98, quantize with 8 bits, gzip, calculate compression ratio, reverse procedure, and obtain recreated image. Display original and recreated images.
```
import copy
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import numpy as np
import os
import subprocess
import utilities.image as im
import utilities.wavelet as wv
import utilities.quantization as quant
img = im.read_gecko_image()
plt.figure()
plt.subplot(1,3,1)
plt.title('Original Image')
plt.imshow(img, cm.Greys_r)
# Forward Transform
img_fwd = copy.deepcopy(img)
dim = max(img.shape)
while dim >= 8:
P = wv.permutation_matrix(dim)
T_a = wv.cdf_24_encoding_transform(dim)
img_fwd[:dim,:dim] = P.dot(T_a).dot(img_fwd[:dim,:dim]).dot(T_a.T).dot(P.T)
dim = dim / 2
plt.subplot(1,3,2)
plt.title('Transformed Image')
plt.imshow(img_fwd, cm.Greys_r)
# Threshold + Encode
t, ltmax = quant.log_thresh(img_fwd, cutoff=0.98)
img_encode = quant.encode(img_fwd, t, ltmax)
# Store to file
filename = 'encoded_image'
img_encode.tofile(filename)
file_size = os.stat(filename).st_size
# Compress File
subprocess.call(['gzip', filename])
c_file_size = os.stat(filename + '.gz').st_size
# Decompress File
subprocess.call(['gunzip', filename + '.gz'])
# # Read from file
img_encode = np.fromfile(filename).reshape(img.shape)
# Decode Image
img_decode = quant.decode(img_encode, t, ltmax)
# Inverse Transform
img_inv = copy.deepcopy(img_decode)
dim = 8
while dim <= max(img.shape):
P = wv.permutation_matrix(dim)
T_b = wv.cdf_24_decoding_transform(dim)
img_inv[:dim,:dim] = T_b.T.dot(P.T).dot(img_inv[:dim,:dim]).dot(P).dot(T_b)
dim = dim * 2
plt.subplot(1,3,3)
plt.title('Recreated Image')
plt.imshow(img_inv, cm.Greys_r)
plt.show()
print "Compression Level: %s" % (1 - float(c_file_size) / float(file_size))
```
| 95f08fe9d68fa20e3463803bc81b1a2938bc8b0e | 108,498 | ipynb | Jupyter Notebook | final_project.ipynb | cschultz123/theory_of_wavelets | 30b8b7290c5113c404ae56b92906f454055dea5e | [
"MIT"
] | 1 | 2020-03-18T09:23:59.000Z | 2020-03-18T09:23:59.000Z | final_project.ipynb | cschultz123/theory_of_wavelets | 30b8b7290c5113c404ae56b92906f454055dea5e | [
"MIT"
] | null | null | null | final_project.ipynb | cschultz123/theory_of_wavelets | 30b8b7290c5113c404ae56b92906f454055dea5e | [
"MIT"
] | null | null | null | 228.416842 | 44,157 | 0.883178 | true | 3,327 | Qwen/Qwen-72B | 1. YES
2. YES | 0.877477 | 0.793106 | 0.695932 | __label__eng_Latn | 0.339085 | 0.455215 |
# Sympy - Symbolic algebra in Python
J.R. Johansson (jrjohansson at gmail.com), updated by M. V. dos Santos (marcelo.santos at df.ufcg.edu.br)
The latest version of
this [Jupyter notebook](https://jupyter.org/) lecture is available at [https://github.com/mvsantosdev/scientific-python-lectures.git](https://github.com/mvsantosdev/scientific-python-lectures.git).
The other notebooks in this lecture series are indexed at [http://jrjohansson.github.io](http://jrjohansson.github.io).
```python
%matplotlib inline
import matplotlib.pyplot as plt
```
## Introduction
There are two notable Computer Algebra Systems (CAS) for Python:
* [SymPy](http://sympy.org/en/index.html) - A python module that can be used in any Python program, or in an IPython session, that provides powerful CAS features.
* [Sage](http://www.sagemath.org/) - Sage is a full-featured and very powerful CAS enviroment that aims to provide an open source system that competes with Mathematica and Maple. Sage is not a regular Python module, but rather a CAS environment that uses Python as its programming language.
Sage is in some aspects more powerful than SymPy, but both offer very comprehensive CAS functionality. The advantage of SymPy is that it is a regular Python module and integrates well with the IPython notebook.
In this lecture we will therefore look at how to use SymPy with IPython notebooks. If you are interested in an open source CAS environment I also recommend to read more about Sage.
To get started using SymPy in a Python program or notebook, import the module `sympy`:
```python
import sympy as sp
```
To get nice-looking $\LaTeX$ formatted output run:
```python
sp.init_printing()
```
## Symbolic variables
In SymPy we need to create symbols for the variables we want to work with. We can create a new symbol using the `Symbol` class:
```python
x = sp.Symbol('x')
x
```
```python
(sp.pi + x)**2
```
```python
# alternative way of defining symbols
a, b, c, sig = sp.symbols("a b c \hat{\sigma}")
a, b, c, sig
```
```python
type(a)
```
sympy.core.symbol.Symbol
We can add assumptions to symbols when we create them:
```python
x = sp.Symbol('x', real=True)
```
```python
x.is_imaginary
```
False
```python
x = sp.Symbol('x', positive=True)
```
```python
x > 0
```
### Complex numbers
The imaginary unit is denoted `I` in Sympy.
```python
1+sp.I
```
```python
sp.I**2
```
```python
(x * sp.I + 1)**2
```
### Rational numbers
There are three different numerical types in SymPy: `Real`, `Rational`, `Integer`:
```python
r1 = sp.Rational(4,5)
r2 = sp.Rational(5,4)
```
```python
r1
```
```python
r1+r2
```
```python
r1/r2
```
## Numerical evaluation
SymPy uses a library for arbitrary precision arithmetic as its numerical backend, and has predefined SymPy expressions for a number of mathematical constants, such as: `pi`, `E`, `oo` for infinity.
To evaluate an expression numerically we can use the `evalf` function (or `N`). It takes an argument `n` which specifies the number of significant digits.
```python
sp.pi.evalf(n=50)
```
```python
y = (x + sp.pi)**2
y
```
```python
sp.N(y, 5) # same as evalf
```
When we numerically evaluate algebraic expressions we often want to substitute a symbol with a numerical value. In SymPy we do that using the `subs` function:
```python
y.subs(x, 1.5)
```
```python
y.subs(x, 1.5).evalf()
```
The `subs` function can of course also be used to substitute Symbols and expressions:
```python
y.subs(x, a+sp.pi)
```
We can also combine numerical evaluation of expressions with NumPy arrays:
```python
import numpy as np
```
```python
x_vec = np.arange(0, 10, 0.1)
x_vec
```
array([0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1. , 1.1, 1.2,
1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2. , 2.1, 2.2, 2.3, 2.4, 2.5,
2.6, 2.7, 2.8, 2.9, 3. , 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8,
3.9, 4. , 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, 5. , 5.1,
5.2, 5.3, 5.4, 5.5, 5.6, 5.7, 5.8, 5.9, 6. , 6.1, 6.2, 6.3, 6.4,
6.5, 6.6, 6.7, 6.8, 6.9, 7. , 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7,
7.8, 7.9, 8. , 8.1, 8.2, 8.3, 8.4, 8.5, 8.6, 8.7, 8.8, 8.9, 9. ,
9.1, 9.2, 9.3, 9.4, 9.5, 9.6, 9.7, 9.8, 9.9])
```python
y_vec = np.array([
sp.N(y.subs(x, xx))
for xx in x_vec
])
y_vec
```
array([9.86960440108936, 10.5079229318073, 11.1662414625253,
11.8445599932432, 12.5428785239612, 13.2611970546792,
13.9995155853971, 14.7578341161151, 15.5361526468330,
16.3344711775510, 17.1527897082689, 17.9911082389869,
18.8494267697049, 19.7277453004228, 20.6260638311408,
21.5443823618587, 22.4827008925767, 23.4410194232947,
24.4193379540126, 25.4176564847306, 26.4359750154485,
27.4742935461665, 28.5326120768845, 29.6109306076024,
30.7092491383204, 31.8275676690383, 32.9658861997563,
34.1242047304742, 35.3025232611922, 36.5008417919102,
37.7191603226281, 38.9574788533461, 40.2157973840640,
41.4941159147820, 42.7924344455000, 44.1107529762179,
45.4490715069359, 46.8073900376538, 48.1857085683718,
49.5840270990898, 51.0023456298077, 52.4406641605257,
53.8989826912436, 55.3773012219616, 56.8756197526795,
58.3939382833975, 59.9322568141155, 61.4905753448334,
63.0688938755514, 64.6672124062693, 66.2855309369873,
67.9238494677053, 69.5821679984232, 71.2604865291412,
72.9588050598591, 74.6771235905771, 76.4154421212951,
78.1737606520130, 79.9520791827310, 81.7503977134489,
83.5687162441669, 85.4070347748848, 87.2653533056028,
89.1436718363208, 91.0419903670387, 92.9603088977567,
94.8986274284746, 96.8569459591926, 98.8352644899106,
100.833583020629, 102.851901551346, 104.890220082064,
106.948538612782, 109.026857143500, 111.125175674218,
113.243494204936, 115.381812735654, 117.540131266372,
119.718449797090, 121.916768327808, 124.135086858526,
126.373405389244, 128.631723919962, 130.910042450680,
133.208360981398, 135.526679512116, 137.864998042834,
140.223316573552, 142.601635104270, 144.999953634988,
147.418272165706, 149.856590696424, 152.314909227142,
154.793227757860, 157.291546288577, 159.809864819295,
162.348183350013, 164.906501880731, 167.484820411449,
170.083138942167], dtype=object)
```python
fig, ax = plt.subplots()
ax.plot(x_vec, y_vec);
```
However, this kind of numerical evaluation can be very slow, and there is a much more efficient way to do it: use the function `lambdify` to "compile" a SymPy expression into a function that is much more efficient to evaluate numerically:
```python
sp.lambdify?
```
```python
f = sp.lambdify([x], (x + sp.pi)**2, 'numpy')
# the first argument is a list of variables that
# f will be a function of: in this case only x -> f(x)
```
```python
y_vec = f(x_vec) # now we can directly pass a numpy array and f(x) is efficiently evaluated
```
The speedup when using "lambdified" functions instead of direct numerical evaluation can be significant, often several orders of magnitude. Even in this simple example we get a significant speed up:
```python
%%timeit
y_vec = np.array([sp.N(((x + sp.pi)**2).subs(x, xx)) for xx in x_vec])
```
16.8 ms ± 217 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
```python
%%timeit
y_vec = f(x_vec)
```
1.62 µs ± 24.9 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
## Algebraic manipulations
One of the main uses of a CAS is to perform algebraic manipulations of expressions. For example, we might want to expand a product, factor an expression, or simplify an expression. The functions for doing these basic operations in SymPy are demonstrated in this section.
### Expand and factor
Expanding and factoring are among the first steps in an algebraic manipulation:
```python
(x+1)*(x+2)*(x+3)
```
```python
sp.expand((x+1)*(x+2)*(x+3))
```
The `expand` function takes a number of keyword arguments with which we can tell the function what kind of expansions we want performed. For example, to expand trigonometric expressions, use the `trig=True` keyword argument:
```python
sp.sin(a+b)
```
```python
sp.expand(sp.sin(a+b), trig=True)
```
```python
```
See `help(sp.expand)` for a detailed explanation of the various types of expansions the `expand` function can perform.
The opposite of product expansion is of course factoring. To factor an expression in SymPy, use the `factor` function:
```python
sp.factor(x**3 + 6 * x**2 + 11*x + 6)
```
```python
sp.factor(x**2 + 2*x + 1, gaussian=True)
```
### Simplify
The `simplify` function tries to simplify an expression into a nice-looking expression, using various techniques. More specific alternatives to the `simplify` function also exist: `trigsimp`, `powsimp`, `logcombine`, etc.
The basic usages of these functions are as follows:
```python
# simplify expands a product
sp.simplify((x+1)*(x+2)*(x+3))
```
```python
sp.sin(a)**2 + sp.cos(a)**2
```
```python
# simplify uses trigonometric identities
sp.simplify(sp.sin(a)**2 + sp.cos(a)**2)
```
```python
sp.simplify(sp.cos(x)/sp.sin(x))
```
### apart and together
To manipulate symbolic expressions of fractions, we can use the `apart` and `together` functions:
```python
f1 = 1/((a+1)*(a+2))
```
```python
f1
```
```python
sp.apart(f1)
```
```python
f2 = 1/(a+2) + 1/(a+3)
```
```python
f2
```
```python
sp.together(f2)
```
Simplify usually combines fractions but does not factor:
```python
sp.simplify(f2)
```
## Calculus
In addition to algebraic manipulations, the other main use of CAS is to do calculus, like derivatives and integrals of algebraic expressions.
### Differentiation
Differentiation is usually simple. Use the `diff` function. The first argument is the expression to take the derivative of, and the second argument is the symbol by which to take the derivative:
```python
y = (x + a + sp.pi)**3
y
```
```python
sp.diff(y**2, x, a, x)
```
```python
y.diff(x, x)
```
For higher order derivatives we can do:
```python
sp.diff(y**2, x, x)
```
```python
sp.diff(y**2, x, 2) # same as above
```
```python
(y**2).diff(x, 2)
```
To calculate the derivative of a multivariate expression, we can do:
```python
x, y, z = sp.symbols("x y z")
```
```python
f = sp.sin(x*y) + sp.cos(y*z)
```
$\frac{d^3f}{dxdy^2}$
```python
f.diff(x, 1, y, 2)
```
## Integration
Integration is done in a similar fashion:
```python
f = sp.cos(y*z)
f
```
```python
sp.integrate(f, x, y, z)
```
By providing limits for the integration variable we can evaluate definite integrals:
```python
sp.integrate(f, (x, -1, 1))
```
and also improper integrals
```python
sigma = sp.symbols('\sigma', real=True)
sigma.is_imaginary
```
False
```python
sp.integrate(sp.exp(-x**2/(2*sigma**2)), (x, -sp.oo, sp.oo))
```
Remember, `oo` is the SymPy notation for infinity.
### Sums and products
We can evaluate sums and products using the `Sum` and `Product` functions:
```python
n = sp.Symbol("n")
```
```python
sp.Sum(1/n**2, (n, 1, 10))
```
```python
sp.Sum(1/n**2, (n,1, 10)).evalf()
```
```python
sp.Sum(1/n**2, (n, 1, sp.oo)).evalf()
```
Products work much the same way:
```python
sp.Product(n, (n, 1, 10)) # 10!
```
```python
sp.Product(n, (n, 1, 10)).evalf()
```
## Limits
Limits can be evaluated using the `limit` function. For example,
```python
sp.limit(sp.sin(x)/x, x, 0)
```
We can use `limit` to check the result of differentiation done with the `diff` function:
```python
f
```
```python
f.diff(x)
```
$\displaystyle \frac{\mathrm{d}f(x,y)}{\mathrm{d}x} = \frac{f(x+h,y)-f(x,y)}{h}$
```python
h = sp.Symbol("h")
```
```python
sp.limit((f.subs(x, x+h) - f)/h, h, 0)
```
OK!
We can change the direction from which we approach the limiting point using the `dir` keyword argument:
```python
sp.limit(1/x, x, 0, dir="+")
```
```python
sp.limit(1/x, x, 0, dir="-")
```
## Series
Series expansion is also one of the most useful features of a CAS. In SymPy we can perform a series expansion of an expression using the `series` function:
```python
sp.series(sp.exp(x), x)
```
By default it expands the expression around $x=0$, but we can expand around any value of $x$ by explicitly including a value in the function call:
```python
sp.series(sp.exp(x), x, 1)
```
And we can explicitly define to which order the series expansion should be carried out:
```python
sp.series(sp.exp(x), x, 1, 10)
```
The series expansion includes the order of the approximation, which is very useful for keeping track of the order of validity when we do calculations with series expansions of different order:
```python
s1 = sp.cos(x).series(x, 0, 5)
s1
```
```python
s2 = sp.sin(x).series(x, 0, 2)
s2
```
```python
s1.removeO()
```
If we want to get rid of the order information we can use the `removeO` method:
```python
sp.expand(s1.removeO() * s2.removeO())
```
But note that this is not the correct expansion of $\cos(x)\sin(x)$ to $5$th order:
```python
(sp.cos(x)*sp.sin(x)).series(x, 0, 6)
```
## Linear algebra
### Matrices
Matrices are defined using the `Matrix` class:
```python
m11, m12, m21, m22 = sp.symbols("m11, m12, m21, m22")
b1, b2 = sp.symbols("b1, b2")
```
```python
A = sp.Matrix([[m11, m12],[m21, m22]])
A
```
```python
b = sp.Matrix([[b1], [b2]])
b
```
With `Matrix` class instances we can do the usual matrix algebra operations:
```python
A**2
```
```python
A * b
```
And calculate determinants and inverses, and the like:
```python
A.det()
```
```python
A.inv()
```
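As a quick follow-up (a minimal sketch using the `A` and `b` defined above), the inverse can be used to solve the linear system $Ax = b$ symbolically:
```python
# Solve A x = b symbolically via the inverse (fine for small symbolic matrices)
x_sol = A.inv() * b
sp.simplify(x_sol)
```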
## Solving equations
For solving equations and systems of equations we can use the `solve` function:
```python
sp.solve(x**2 - 1, x) #the expression is equal to zero
```
```python
sp.solve(x**4 - x**2 - 1, x)
```
System of equations:
```python
sp.solve([x + y - 1, x - y - 1], [x,y])
```
In terms of other symbolic expressions:
```python
sp.solve([x + y - a, x - y - c], [x,y])
```
## Solving differential equations
```python
f = sp.Function('f')
```
```python
f(x).diff(x, 2) + a**2 * f(x)
```
```python
sp.dsolve(f(x).diff(x, 2) + a**2 * f(x), f(x))
```
```python
ics={
f(0): 1,
f(x).diff(x).subs(x, 0): 0
}
ics
```
```python
Y = sp.dsolve(f(x).diff(x, 2) + a**2 * f(x), f(x), ics=ics)
Y
```
```python
Y.simplify()
```
```python
sp.Ynm(0, 0, x, y)
```
## Further reading
* http://sympy.org/en/index.html - The SymPy projects web page.
* https://github.com/sympy/sympy - The source code of SymPy.
* http://live.sympy.org - Online version of SymPy for testing and demonstrations.
## Versions
```python
%reload_ext version_information
%version_information numpy, matplotlib, sympy
```
<table><tr><th>Software</th><th>Version</th></tr><tr><td>Python</td><td>2.7.10 64bit [GCC 4.2.1 (Apple Inc. build 5577)]</td></tr><tr><td>IPython</td><td>3.2.1</td></tr><tr><td>OS</td><td>Darwin 14.1.0 x86_64 i386 64bit</td></tr><tr><td>numpy</td><td>1.9.2</td></tr><tr><td>matplotlib</td><td>1.4.3</td></tr><tr><td>sympy</td><td>0.7.6</td></tr><tr><td colspan='2'>Sat Aug 15 11:37:37 2015 JST</td></tr></table>
| 86c354e024d747e5e85e8139f458825848b09233 | 246,686 | ipynb | Jupyter Notebook | Lecture-3-Sympy.ipynb | mvsantosdev/scientific-python-lectures | eaf791b3d51d20fefa137090fd83c618e9fe7ed8 | [
"CC-BY-3.0"
] | 6 | 2020-02-18T23:33:47.000Z | 2021-03-16T18:46:06.000Z | Lecture-3-Sympy.ipynb | mvsantosdev/scientific-python-lectures | eaf791b3d51d20fefa137090fd83c618e9fe7ed8 | [
"CC-BY-3.0"
] | null | null | null | Lecture-3-Sympy.ipynb | mvsantosdev/scientific-python-lectures | eaf791b3d51d20fefa137090fd83c618e9fe7ed8 | [
"CC-BY-3.0"
] | 2 | 2020-03-10T12:54:28.000Z | 2022-02-12T23:24:08.000Z | 82.228667 | 14,612 | 0.827445 | true | 5,168 | Qwen/Qwen-72B | 1. YES
2. YES | 0.913677 | 0.867036 | 0.79219 | __label__eng_Latn | 0.889762 | 0.678856 |
# The Black-Scholes Model
The Black-Scholes model was proposed by Fischer Black and Myron Scholes in their 1973 paper "The Pricing of Options and Corporate Liabilities". Since its publication, the model has become a widely used tool among investors, and to this day it remains one of the best ways to determine the fair price of an option.
## Assumptions
* European options can only be exercised at the expiration date
* No dividends are paid during the life of the option
* Market movements cannot be predicted
* The risk-free rate and the volatility are constant
* The returns of the underlying asset follow a log-normal distribution
## The Black-Scholes formula without dividends
In the Black-Scholes formula, define the following parameters.
* $S$, the asset price at time $t$
* $T$, the expiration time of the option. The remaining time to maturity is $T - t$
* $K$, the strike price of the option
* $r$, the risk-free rate, assumed constant between times $t$ and $T$
* $\sigma$, the volatility of the underlying asset, i.e. the standard deviation of the asset's returns
### $N(d)$ is the cumulative distribution function of a normally distributed random variable $Z$
$$N(d) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^d e^{-\frac{1}{2}x^2} dx$$
$C(S,t)$ is the value of the call option at time $t$; $P(S,t)$ is the value of the put option at time $t$.
The Black-Scholes formula for a call option is:
$$C(S,t) = SN(d_1) - Ke^{-r(T - t)} N(d_2)$$
The formula for a put option is:
$$P(S,t) = Ke^{-r(T - t)}N(-d_2) - SN(-d_1)$$
where
$$d_1 = \frac{\ln \left(\frac{S}{K} \right) + \left(r + \frac{\sigma^2}{2} \right)(T - t)}{\sigma \sqrt{T - t}}$$
$$d_2 = d_1 - \sigma \sqrt{T - t} = \frac{\ln \left(\frac{S}{K} \right) + \left(r - \frac{\sigma^2}{2}\right)(T - t)}{\sigma \sqrt{T - t}}$$
## Python implementation of the Black-Scholes formula without dividends
```python
import numpy as np
import scipy.stats as si
import sympy as sy
import sympy.stats as systats
sy.init_printing()
```
```python
def euro_call(S, K, T, r, sigma):
#S: spot price
#K: strike price
#T: time to maturity
#r: interest rate
#sigma: volatility of underlying asset
N = systats.Normal("x", 0.0, 1.0)
d1 = (sy.ln(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sy.sqrt(T))
d2 = (sy.ln(S / K) + (r - 0.5 * sigma ** 2) * T) / (sigma * sy.sqrt(T))
call = (S * systats.cdf(N)(d1) - K * sy.exp(-r * T) * systats.cdf(N)(d2))
return call
```
```python
def euro_put(S, K, T, r, sigma):
#S: spot price
#K: strike price
#T: time to maturity
#r: interest rate
#sigma: volatility of underlying asset
N = systats.Normal("x", 0.0, 1.0)
d1 = (sy.ln(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sy.sqrt(T))
d2 = (sy.ln(S / K) + (r - 0.5 * sigma ** 2) * T) / (sigma * sy.sqrt(T))
put = (K * sy.exp(-r * T) * systats.cdf(N)(-d2) - S * systats.cdf(N)(-d1))
return put
```
Let's test it:
```python
euro_call(50, 100, 1, 0.05, 0.25)
```
```python
euro_call(50, 100, 1, 0.05, 0.25).evalf()
```
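As an optional numerical cross-check (not part of the original post; it reuses the `np` and `si` imports above), the same price can be computed with the closed-form formula and `scipy.stats.norm`:
```python
def euro_call_np(S, K, T, r, sigma):
    # Closed-form Black-Scholes call price using the numerical normal CDF
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * si.norm.cdf(d1) - K * np.exp(-r * T) * si.norm.cdf(d2)

euro_call_np(50, 100, 1, 0.05, 0.25)  # should agree with euro_call(...).evalf() above
```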
### Symbolic computation extensions
```python
S, K, T, t, r, sigma = sy.symbols("S K T t r sigma")
call_price = euro_call(S, K, T-t, r, sigma)
```
#### Delta
$$\Delta = \frac{\partial V}{\partial S}$$
```python
delta = sy.diff(call_price, S)
```
#### Gamma
$$\Gamma = \frac{\partial^2 V}{\partial S^2}$$
```python
gamma = sy.diff(call_price, S, 2)
```
#### Theta
$$\Theta = \frac{\partial V}{\partial t}$$
```python
theta = sy.diff(call_price, t)
```
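As a quick usage example (a sketch; the substitution values below are arbitrary), the symbolic Greeks can be evaluated at a concrete point with `evalf`:
```python
# Evaluate the symbolic Greeks at one (arbitrary) point
point = {S: 50, K: 100, T: 1, t: 0.5, r: 0.05, sigma: 0.25}
delta.evalf(subs=point), gamma.evalf(subs=point), theta.evalf(subs=point)
```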
## Plots
```python
import matplotlib
import matplotlib.pyplot as plt
```
```python
# Data for plotting
s = np.arange(40.0, 200.0, 1.0)
c = []
for s_i in s:
c.append(call_price.evalf(subs={
S: s_i,
K: 100,
T: 1,
t: 0.5,
r: 0.05,
sigma: 0.25},n=2))
fig, ax = plt.subplots()
ax.plot(s, c)
ax.set(xlabel='S', ylabel='C',
title='Call Price Value of S')
ax.grid()
plt.show()
```
```python
## Delta Plot
s = np.arange(40.0, 200.0, 1.0)
c = []
for s_i in s:
c.append(delta.evalf(subs={
S: s_i,
K: 100,
T: 1,
t: 0.5,
r: 0.05,
sigma: 0.25},n=2))
fig, ax = plt.subplots()
ax.plot(s, c)
ax.set(xlabel='S', ylabel='Delta',
title='Call Delta Value of S')
ax.grid()
plt.show()
```
```python
## Gamma Plot
s = np.arange(40.0, 200.0, 1.0)
c = []
for s_i in s:
c.append(gamma.evalf(subs={
S: s_i,
K: 100,
T: 1,
t: 0.5,
r: 0.05,
sigma: 0.25},n=2))
fig, ax = plt.subplots()
ax.plot(s, c)
ax.set(xlabel='S', ylabel='Gamma',
title='Call Gamma Value of S')
ax.grid()
plt.show()
```
## Content mainly adapted from
[Black-Scholes Formula and Python Implementation](https://aaronschlegel.me/black-scholes-formula-python.html)
| 28d89b643a3386bac019dcc559d175811f981e2f | 8,374 | ipynb | Jupyter Notebook | .ipynb_checkpoints/black-scholes-model-checkpoint.ipynb | chenxin1-5/option-study | 722f425e4a3cad05ec8fbb3fc980fbdb9e85b655 | [
"Unlicense"
] | 3 | 2021-04-05T14:50:01.000Z | 2021-11-12T11:27:02.000Z | .ipynb_checkpoints/black-scholes-model-checkpoint.ipynb | chenxin1-5/option-study | 722f425e4a3cad05ec8fbb3fc980fbdb9e85b655 | [
"Unlicense"
] | null | null | null | .ipynb_checkpoints/black-scholes-model-checkpoint.ipynb | chenxin1-5/option-study | 722f425e4a3cad05ec8fbb3fc980fbdb9e85b655 | [
"Unlicense"
] | null | null | null | 21.638243 | 166 | 0.444471 | true | 1,751 | Qwen/Qwen-72B | 1. YES
2. YES | 0.879147 | 0.76908 | 0.676134 | __label__eng_Latn | 0.153936 | 0.409218 |
# Mean-variance optimization
**Portfolio theory** is one of the most important advances in modern finance and investments.
- It first appeared in a [short article](https://www.math.ust.hk/~maykwok/courses/ma362/07F/markowitz_JF.pdf) called "Portfolio Selection" in the March 1952 issue of the Journal of Finance.
- It was written by an unknown University of Chicago student named Harry Markowitz.
- A short paper (only 14 pages), little text, easy to understand, many figures, and a handful of references.
- It did not receive much attention until the 1960s.
Eventually, this work became one of the greatest ideas in finance, and earned Markowitz the Nobel Prize almost 40 years later.
- Markowitz was only incidentally interested in stock markets and investments.
- He was really interested in understanding how people make their best decisions when facing trade-offs.
- The conservation-of-misery principle. Or, as gym instructors would say: "no pain, no gain".
- If we want more of something, we have to give something up somewhere else.
- Studying this phenomenon is what attracted Markowitz.
So nobody gets rich by putting all their money in a savings account. The only way to expect high returns is to take on substantial risk. However, risk also means the possibility of losing as well as winning.
But how much risk is necessary? And is there a way to minimize risk while maximizing gains?
- Markowitz fundamentally changed the way we investors think about those questions.
- He completely altered the practice of investment management.
- Even the title of his article was innovative. Portfolio: a collection of assets rather than individual assets held in isolation.
- At that time, a "portfolio" referred to a leather briefcase.
- In the rest of this module we will deal with the analytical part of portfolio theory, which can be summarized in two phrases:
 - No pain, no gain.
 - Don't put all your eggs in one basket.
**Objectives:**
- What is the capital allocation line?
- What is the Sharpe ratio?
- How should we allocate our capital between a risky asset and a risk-free asset?
*Reference:*
- Notes from the course "Portfolio Selection and Risk Management", Rice University, available on Coursera.
___
## 1. Capital allocation line
### 1.1. Motivation
The portfolio construction process then has the following two steps:
1. Choose a portfolio of risky assets (the best one).
2. Decide how much of your wealth to invest in that portfolio and how much to invest in risk-free assets.
We call step 2 the **asset allocation decision**.
Important questions:
1. What is the optimal portfolio of risky assets?
 - What is the best portfolio of risky assets?
 - It is a mean-variance efficient portfolio.
2. What is the optimal asset allocation?
 - How should we split our wealth between the optimal risky portfolio and the risk-free asset?
 - Concept of the **capital allocation line**.
 - Concept of the **Sharpe ratio**.
Two important assumptions:
- Mean-variance utility functions.
- Risk-averse investor.
The surprising idea that will come out of this analysis is that, whatever the investor's attitude toward risk, the best portfolio of risky assets is identical for all investors.
What will matter to each of us individually is simply the optimal asset allocation decision.
___
### 1.2. Capital allocation line
Let:
- $r_s$ be the return of the risky asset,
- $r_f$ the risk-free return, and
- $w$ the fraction invested in the risky asset.
<font color=blue> Derivation of the capital allocation line on the board.</font>
**Three Doritos later...**
#### Capital allocation line (CAL):
$E[r_p]$ is related to $\sigma_p$ in an affine way, that is, through the equation of a straight line:
$$E[r_p]=r_f+\frac{E[r_s-r_f]}{\sigma_s}\sigma_p.$$
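Filling in the step done on the board (a brief sketch): with a fraction $w$ invested in the risky asset,
$$E[r_p] = w\,E[r_s] + (1-w)\,r_f = r_f + w\,E[r_s-r_f], \qquad \sigma_p = w\,\sigma_s,$$
so $w=\sigma_p/\sigma_s$, and substituting into the first equation gives the line above.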
- The slope of the CAL is the Sharpe ratio $\frac{E[r_s-r_f]}{\sigma_s}=\frac{E[r_s]-r_f}{\sigma_s}$,
- which tells us how much return we get per unit of risk taken by holding the risky asset (portfolio).
Now, the question is, where on this line do we want to be?
___
### 1.3. Solving for the optimal capital allocation
Recapping from last class, we have the indifference curves: **we want to be on the highest possible indifference curve that is tangent to the CAL**.
<font color=blue> See on the board.</font>
Analytically, the problem is
$$\max_{w} \quad E[U(r_p)]\equiv\max_{w} \quad E[r_p]-\frac{1}{2}\gamma\sigma_p^2,$$
where the points $(\sigma_p,E[r_p])$ are restricted to lie on the CAL, that is, $E[r_p]=r_f+\frac{E[r_s-r_f]}{\sigma_s}\sigma_p$ and $\sigma_p=w\sigma_s$. The problem above can then be written as follows:
$$\max_{w} \quad r_f+wE[r_s-r_f]-\frac{1}{2}\gamma w^2\sigma_s^2.$$
<font color=blue> Find the $w$ that maximizes the above expression on the board.</font>
**Three Doritos later...**
The solution is then:
$$w^\ast=\frac{E[r_s-r_f]}{\gamma\sigma_s^2}.$$
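For reference, this follows from the first-order condition (filling in the step done on the board):
$$\frac{d}{dw}\left(r_f+wE[r_s-r_f]-\frac{1}{2}\gamma w^2\sigma_s^2\right)=E[r_s-r_f]-\gamma w\sigma_s^2=0,$$
which yields the $w^\ast$ shown above.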
Intuitively:
- $w^\ast\propto E[r_s-r_f]$: the higher the excess return of the risky asset, the more we want to invest in it.
- $w^\ast\propto \frac{1}{\gamma}$: the more risk-averse you are, the less you want to invest in the risky asset.
- $w^\ast\propto \frac{1}{\sigma_s^2}$: the riskier the asset, the less you want to invest in it.
___
## 2. Example of optimal capital allocation: U.S. stocks and T-bills
Let's put in some numbers with real data, to illustrate the derivation we just did.
In this case, we will consider:
- **Risky portfolio**: the U.S. stock market (represented by a market index such as the S&P 500).
- **Risk-free asset**: U.S. Treasury bills (T-bills).
We have the following data:
$$E[r_{US}]=11.9\%,\quad \sigma_{US}=19.15\%, \quad r_f=1\%.$$
Recall that we can write the expression of the CAL as:
\begin{align}
E[r_p]&=r_f+\left[\frac{E[r_{US}-r_f]}{\sigma_{US}}\right]\sigma_p\\
&=0.01+\text{S.R.}\sigma_p,
\end{align}
where $\text{S.R}=\frac{0.119-0.01}{0.1915}\approx0.569$ is the Sharpe ratio (what does this quantity represent?).
Let's plot the CAL with these real data:
```python
# Import the libraries we are going to use
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
```
```python
# Data
Ers = 0.119
ss = 0.1915
rf = 0.01
# Sharpe ratio for this asset
RS = (Ers - rf) / ss
# Vector of portfolio volatilities (suggested: 0% to 50%)
sp = np.linspace(0, 0.5, 100)
# CAL
Erp = rf + RS * sp
```
```python
# Plot
plt.figure(figsize=(8,6))
plt.plot(sp, Erp, lw=2, label='CAL')
plt.plot(0, rf, 'or', ms=10, label='Risk-free asset')
plt.plot(ss, Ers, 'ob', ms=10, label='Risky portfolio')
plt.xlabel('Volatility $\sigma_p$')
plt.ylabel('Expected return $E[r_p]$')
plt.grid()
plt.legend(loc='best')
plt.show()
```
Well, at which point on this line would we want to be?
- We already saw that it depends on your preferences.
- In particular, on your attitude toward risk, measured by your risk-aversion coefficient.
Solution to the optimal capital allocation problem:
$$\max_{w} \quad E[U(r_p)]$$
$$w^\ast=\frac{E[r_s-r_f]}{\gamma\sigma_s^2}$$
Since we already have data, we can try several risk-aversion coefficients:
```python
# import pandas
import pandas as pd
```
```python
# Create a DataFrame with the weights, expected return
# and volatility of the optimal portfolio between the
# risky and risk-free assets, indexed by the
# risk-aversion coefficients from 1 to 10 (integers)
gamma=np.linspace(1,10,10)
tabla=pd.DataFrame(data={'$\gamma$': gamma,
'$w^*$':(Ers-rf)/(gamma*ss**2)})
tabla
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>$\gamma$</th>
<th>$w^*$</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>1.0</td>
<td>2.972275</td>
</tr>
<tr>
<th>1</th>
<td>2.0</td>
<td>1.486137</td>
</tr>
<tr>
<th>2</th>
<td>3.0</td>
<td>0.990758</td>
</tr>
<tr>
<th>3</th>
<td>4.0</td>
<td>0.743069</td>
</tr>
<tr>
<th>4</th>
<td>5.0</td>
<td>0.594455</td>
</tr>
<tr>
<th>5</th>
<td>6.0</td>
<td>0.495379</td>
</tr>
<tr>
<th>6</th>
<td>7.0</td>
<td>0.424611</td>
</tr>
<tr>
<th>7</th>
<td>8.0</td>
<td>0.371534</td>
</tr>
<tr>
<th>8</th>
<td>9.0</td>
<td>0.330253</td>
</tr>
<tr>
<th>9</th>
<td>10.0</td>
<td>0.297227</td>
</tr>
</tbody>
</table>
</div>
**Interpretation**
The more risk-averse the investor, the less will be invested in the risky portfolio.
How do we interpret $w^\ast>1$?
- When $0<w^\ast<1$, then $0<1-w^\ast<1$. This implies long positions in the stock market and in the risk-free asset.
- On the other hand, when $w^\ast>1$, we have $1-w^\ast<0$. This implies a short position in the risk-free asset (assuming that is possible) and a long position (of more than 100%) in the stock market: leverage.
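To make this concrete (a small sketch using the quantities already defined above), we can add the expected return and volatility of the optimal complete portfolio for each risk-aversion coefficient:
```python
# Expected return and volatility of the optimal complete portfolio,
# using the CAL relations E[r_p] = r_f + w*(E[r_s] - r_f) and sigma_p = w*sigma_s
tabla[r'$E[r_p]$'] = rf + tabla['$w^*$'] * (Ers - rf)
tabla[r'$\sigma_p$'] = tabla['$w^*$'] * ss
tabla
```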
# Announcements.
## 1. Quiz next class.
## 2. Homework 5, second submission due Friday.
<footer id="attribution" style="float:right; color:#808080; background:#fff;">
Created with Jupyter by Esteban Jiménez Rodríguez.
</footer>
| 871e4ffbadf23145a99070da3ef3b91715177d44 | 42,837 | ipynb | Jupyter Notebook | Modulo3/Clase13_OptimizacionMediaVarianza.ipynb | ariadnagalindom/portafolios | 5dd192773b22cbb53b291030afc248e482de97b1 | [
"MIT"
] | null | null | null | Modulo3/Clase13_OptimizacionMediaVarianza.ipynb | ariadnagalindom/portafolios | 5dd192773b22cbb53b291030afc248e482de97b1 | [
"MIT"
] | null | null | null | Modulo3/Clase13_OptimizacionMediaVarianza.ipynb | ariadnagalindom/portafolios | 5dd192773b22cbb53b291030afc248e482de97b1 | [
"MIT"
] | null | null | null | 85.163022 | 26,344 | 0.799426 | true | 3,195 | Qwen/Qwen-72B | 1. YES
2. YES | 0.592667 | 0.847968 | 0.502562 | __label__spa_Latn | 0.986431 | 0.005949 |
# "행렬분해 추천시스템"
> "d2l.ai 16장의 행렬분해 추천시스템을 구현하고 ndcg 평가 방법을 이해한다."
- toc: true
- badges: true
- author: 단호진
- categories: [recommender]
The matrix factorization algorithm became widely known after showing outstanding performance in the 2006 Netflix competition. In this post, I implement it on top of PyTorch, based on the matrix factorization material and code introduced in [d2l.ai](https://d2l.ai/chapter_recommender-systems/mf.html). Note up front that much of the code in this post is improvised and far from exemplary. The code was written with the following points in mind:
- Port the d2l MXNet code to the PyTorch style
- Use pytorch_lightning
- Use lenskit's data loading and evaluation features
- Understand NDCG
## The MovieLens dataset
The training and validation sets show similar statistical characteristics. A recommender system should also reflect information along the time axis in its recommendations, but in this post the time information is ignored.
```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from lenskit.datasets import ML100K
from lenskit.crossfold import sample_rows
%matplotlib inline
ml100k = ML100K('input/ml-100k')
ratings = ml100k.ratings
ratings['user'] = ratings['user'] - 1
ratings['item'] = ratings['item'] - 1
train = ratings.iloc[:80000]
test = ratings.iloc[80000:]
len(train), len(test)
```
(80000, 20000)
```python
fig, ax = plt.subplots(1, 2, figsize=(12, 6))
sns.countplot(x='rating', data=train, ax=ax[0])
sns.countplot(x='rating', data=test, ax=ax[1]);
```
```python
fig, ax = plt.subplots(1, 2, figsize=(12, 6))
sns.histplot(x='timestamp', data=train, ax=ax[0])
sns.histplot(x='timestamp', data=test, ax=ax[1]);
```
## Preparing a Dataset for PyTorch training
```python
from torch.utils.data import Dataset, DataLoader
class MLDataset(Dataset):
def __init__(self, df):
super().__init__()
self.df = df.drop(columns=['timestamp'])
self.df = self.df.astype({'user': int, 'item': int})
def __len__(self):
return len(self.df)
def __getitem__(self, index):
user = self.df.iloc[index, 0]
item = self.df.iloc[index, 1]
rating = self.df.iloc[index, 2]
return {
'user': user,
'item': item,
'rating': rating,
}
```
```python
ds_tr = MLDataset(train)
ds_va = MLDataset(test)
```
## 16.3. Matrix Factorization
- It is a collaborative filtering model.
- The user-item matrix is factored into two low-rank matrices, $R=PQ^T$, $P \in \mathbb{R}^{m \times k}$, $Q \in \mathbb{R}^{n \times k}$, and bias terms are added.
- $L_2$ regularization is added to prevent overfitting.
$R \in \mathbb{R}^{m \times n}$
$\hat R_{ui} = p_u q_i^T + b_u + b_i$
\begin{equation}
\text{argmin}_{P, Q, b} \sum ||R_{ui} - \hat R_{ui}||^2 + \lambda (||P||_F^2 + ||Q||_F^2 + b_u^2 + b_i^2)
\end{equation}
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import pytorch_lightning as pl
import d2l.torch as d2l
```
### 16.3.2. Model implementation
```python
class MF(pl.LightningModule):
def __init__(self, num_factors, num_users, num_items, **kwargs):
super().__init__(**kwargs)
self.P = nn.Embedding(num_users, num_factors)
self.Q = nn.Embedding(num_items, num_factors)
self.user_bias = nn.Embedding(num_users, 1)
self.item_bias = nn.Embedding(num_items, 1)
def forward(self, user_id, item_id):
# in lightning, forward defines the prediction/inference actions
P_u = self.P(user_id)
Q_i = self.Q(item_id)
b_u = self.user_bias(user_id)
b_i = self.item_bias(item_id)
hat_R = (P_u * Q_i).sum(axis=1) + torch.squeeze(b_u) + torch.squeeze(b_i)
return hat_R
def training_step(self, batch, batch_index):
# training_step defines the train loop. It is independent of forward
users = batch['user']
items = batch['item']
ratings = batch['rating']
hat_R = self.forward(users, items)
loss = F.mse_loss(hat_R, ratings)
self.log('train_loss', loss)
return loss
def validation_step(self, batch, batch_index):
users = batch['user']
items = batch['item']
ratings = batch['rating']
hat_R = self.forward(users, items)
loss = F.mse_loss(hat_R, ratings)
self.log('val_loss', loss)
def configure_optimizers(self):
optimizer = torch.optim.Adam(self.parameters(), lr=0.002, weight_decay=1e-5)
return optimizer
```
### 16.3.4. Training and evaluation
```python
num_users, num_items = ratings['user'].nunique(), ratings['item'].nunique()
print(num_users, num_items)
mf = MF(50, num_users, num_items)
trainer = pl.Trainer(
gpus=1,
max_epochs=150,
callbacks=[pl.callbacks.EarlyStopping('val_loss')], # Need to study various callbacks
)
```
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
943 1682
```python
trainer.fit(
mf,
DataLoader(ds_tr, batch_size=512, shuffle=True, num_workers=12),
DataLoader(ds_va, batch_size=512, num_workers=12))
```
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
2021-07-31 12:32:02.771252: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
| Name | Type | Params
----------------------------------------
0 | P | Embedding | 47.1 K
1 | Q | Embedding | 84.1 K
2 | user_bias | Embedding | 943
3 | item_bias | Embedding | 1.7 K
----------------------------------------
133 K Trainable params
0 Non-trainable params
133 K Total params
0.535 Total estimated model params size (MB)
Validation sanity check: 0it [00:00, ?it/s]
Training: -1it [00:00, ?it/s]
Validating: 0it [00:00, ?it/s]
## NDCG (normalized discounted cumulative gain) - using Lenskit
```python
users = test.user.unique()
users = users.astype(int)
recs = []
mf.eval()
for u in users:
scores = mf(
torch.ones(num_items, dtype=int) * u,
torch.arange(num_items)
)
scores = scores.detach().numpy()
scores = pd.DataFrame({
'user': u,
'item': np.arange(num_items),
'score': scores,
})
scores = scores.sort_values('score', ascending=False)
scores = scores.iloc[:100]
recs.append(scores)
```
```python
recs = pd.concat(recs, ignore_index=True)
recs.tail()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>user</th>
<th>item</th>
<th>score</th>
</tr>
</thead>
<tbody>
<tr>
<th>92695</th>
<td>363</td>
<td>290</td>
<td>3.985114</td>
</tr>
<tr>
<th>92696</th>
<td>363</td>
<td>989</td>
<td>3.981152</td>
</tr>
<tr>
<th>92697</th>
<td>363</td>
<td>268</td>
<td>3.979286</td>
</tr>
<tr>
<th>92698</th>
<td>363</td>
<td>54</td>
<td>3.978426</td>
</tr>
<tr>
<th>92699</th>
<td>363</td>
<td>78</td>
<td>3.964217</td>
</tr>
</tbody>
</table>
</div>
```python
from lenskit import topn
rla = topn.RecListAnalysis()
rla.add_metric(topn.ndcg)
results = rla.compute(recs, test)
results.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>nrecs</th>
<th>ndcg</th>
</tr>
<tr>
<th>user</th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<th>862</th>
<td>100.0</td>
<td>0.167055</td>
</tr>
<tr>
<th>760</th>
<td>100.0</td>
<td>0.028844</td>
</tr>
<tr>
<th>827</th>
<td>100.0</td>
<td>0.086463</td>
</tr>
<tr>
<th>888</th>
<td>100.0</td>
<td>0.221742</td>
</tr>
<tr>
<th>847</th>
<td>100.0</td>
<td>0.144720</td>
</tr>
</tbody>
</table>
</div>
```python
sns.histplot(x='ndcg', data=results);
```
```python
print(f'mean ndcg: {results.ndcg.mean():.4f}')
```
mean ndcg: 0.1070
## NDCG (normalized discounted cumulative gain) - manual implementation
The DCG is computed by assigning the known ratings to the recommended items. The ideal DCG is computed from the known ratings sorted in descending order, and NDCG is taken as the ratio of the two. It seems that a test set with enough rated items is needed to obtain a more meaningful NDCG. NDCG can be used for comparison against other algorithms.
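For reference, the formulas implemented below are (with $rel_i$ the known rating of the item at rank $i$, and IDCG the DCG of the ideally ordered list):
\begin{equation}
\mathrm{DCG@k} = \sum_{i=1}^{k} \frac{rel_i}{\log_2(i+1)}, \qquad
\mathrm{NDCG@k} = \frac{\mathrm{DCG@k}}{\mathrm{IDCG@k}}
\end{equation}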
```python
ndcgs = []
for u in test['user'].unique():
test_u = test[test['user'] == u].copy()
recs_u = recs[recs['user'] == u].copy()
recs_u = recs_u.join(test_u.set_index('item'), on='item', lsuffix='_r', rsuffix='_t')
recs_u['rating'] = np.nan_to_num(recs_u['rating'])
recs_u['dcg'] = 1.0 / np.log2(np.arange(2, len(recs_u) + 2))
dcg = np.dot(recs_u.rating, recs_u.dcg)
test_u = test_u.sort_values('rating', ascending=False)
test_u['ideal'] = 1.0 / np.log2(np.arange(2, len(test_u) + 2))
ideal = np.dot(test_u.rating, test_u.ideal)
ndcgs.append((u, dcg/ideal))
```
```python
users, ndcg = zip(*ndcgs)
ndcgs = pd.DataFrame({'user': users, 'ndcg': ndcg})
ndcgs.tail()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>user</th>
<th>ndcg</th>
</tr>
</thead>
<tbody>
<tr>
<th>922</th>
<td>228</td>
<td>0.000000</td>
</tr>
<tr>
<th>923</th>
<td>244</td>
<td>0.000000</td>
</tr>
<tr>
<th>924</th>
<td>60</td>
<td>0.000000</td>
</tr>
<tr>
<th>925</th>
<td>96</td>
<td>0.093113</td>
</tr>
<tr>
<th>926</th>
<td>363</td>
<td>0.000000</td>
</tr>
</tbody>
</table>
</div>
```python
ndcgs.join(results, on='user', lsuffix='_l', rsuffix='_r')
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>user</th>
<th>ndcg_l</th>
<th>nrecs</th>
<th>ndcg_r</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>862</td>
<td>0.173109</td>
<td>100.0</td>
<td>0.167055</td>
</tr>
<tr>
<th>1</th>
<td>760</td>
<td>0.032809</td>
<td>100.0</td>
<td>0.028844</td>
</tr>
<tr>
<th>2</th>
<td>827</td>
<td>0.093277</td>
<td>100.0</td>
<td>0.086463</td>
</tr>
<tr>
<th>3</th>
<td>888</td>
<td>0.211704</td>
<td>100.0</td>
<td>0.221742</td>
</tr>
<tr>
<th>4</th>
<td>847</td>
<td>0.150294</td>
<td>100.0</td>
<td>0.144720</td>
</tr>
<tr>
<th>...</th>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<th>922</th>
<td>228</td>
<td>0.000000</td>
<td>100.0</td>
<td>0.000000</td>
</tr>
<tr>
<th>923</th>
<td>244</td>
<td>0.000000</td>
<td>100.0</td>
<td>0.000000</td>
</tr>
<tr>
<th>924</th>
<td>60</td>
<td>0.000000</td>
<td>100.0</td>
<td>0.000000</td>
</tr>
<tr>
<th>925</th>
<td>96</td>
<td>0.093113</td>
<td>100.0</td>
<td>0.076105</td>
</tr>
<tr>
<th>926</th>
<td>363</td>
<td>0.000000</td>
<td>100.0</td>
<td>0.000000</td>
</tr>
</tbody>
</table>
<p>927 rows × 4 columns</p>
</div>
```python
sns.histplot(x='ndcg', data=ndcgs);
```
```python
ndcgs.ndcg.mean(), results.ndcg.mean()
```
(0.11535802559380859, 0.10698967271109275)
The NDCG from Lenskit and the manually implemented NDCG show similar results.
## Closing remarks
I have dipped a toe into recommender systems. I need to think more deeply about how to evaluate these algorithms.
| 21f501037a12c22aff9e01bb37eaddf3b0803bd5 | 113,181 | ipynb | Jupyter Notebook | _notebooks/2021-07-31-mf-recommender-system.ipynb | danhojin/jupyter-blog | a765d0169a666fdbafaa84ff9efba9d9ca48c41c | [
"Apache-2.0"
] | null | null | null | _notebooks/2021-07-31-mf-recommender-system.ipynb | danhojin/jupyter-blog | a765d0169a666fdbafaa84ff9efba9d9ca48c41c | [
"Apache-2.0"
] | 5 | 2020-12-26T23:43:58.000Z | 2021-05-01T03:32:46.000Z | _notebooks/2021-07-31-mf-recommender-system.ipynb | danhojin/jupyter-blog | a765d0169a666fdbafaa84ff9efba9d9ca48c41c | [
"Apache-2.0"
] | null | null | null | 36.878788 | 16,208 | 0.642431 | true | 6,895 | Qwen/Qwen-72B | 1. YES
2. YES | 0.867036 | 0.679179 | 0.588872 | __label__kor_Hang | 0.549871 | 0.206478 |
# An Introduction to Bayesian Statistical Analysis
Before we jump in to model-building and using MCMC to do wonderful things, it is useful to understand a few of the theoretical underpinnings of the Bayesian statistical paradigm. A little theory (and I do mean a *little*) goes a long way towards being able to apply the methods correctly and effectively.
There are several introductory references to Bayesian statistics that go well beyond what we will cover here.
## What *is* Bayesian Statistical Analysis?
Though many of you will have taken a statistics course or two during your undergraduate (or graduate) education, most of those who have will likely not have had a course in *Bayesian* statistics. Most introductory courses, particularly for non-statisticians, still do not cover Bayesian methods at all, except perhaps to derive Bayes' formula as a trivial rearrangement of the definition of conditional probability. Even today, Bayesian courses are typically tacked onto the curriculum, rather than being integrated into the program.
In fact, Bayesian statistics is not just a particular method, or even a class of methods; it is an entirely different paradigm for doing statistical analysis.
> Practical methods for making inferences from data using probability models for quantities we observe and about which we wish to learn.
*-- Gelman et al. 2013*
A Bayesian model is described by parameters, uncertainty in those parameters is described using probability distributions.
All conclusions from Bayesian statistical procedures are stated in terms of *probability statements*
This confers several benefits to the analyst, including:
- ease of interpretation, summarization of uncertainty
- can incorporate uncertainty in parent parameters
- easy to calculate summary statistics
## Bayesian vs Frequentist Statistics: What's the difference?
See the [VanderPlas talk on Tuesday](https://conference.scipy.org/scipy2014/schedule/presentation/444/).
Any statistical paradigm, Bayesian or otherwise, involves at least the following:
1. Some **unknown quantities** about which we are interested in learning or testing. We call these *parameters*.
2. Some **data** which have been observed, and hopefully contain information about (1).
3. One or more **models** that relate the data to the parameters, and is the instrument that is used to learn.
### The Frequentist World View
- The data that have been observed are considered **random**, because they are realizations of random processes, and hence will vary each time one goes to observe the system.
- Model parameters are considered **fixed**. The parameters' values are unknown, but they are fixed, and so we *condition* on them.
In mathematical notation, this implies a (very) general model of the following form:
<div style="font-size:35px">
\\[f(y | \theta)\\]
</div>
Here, the model \\(f\\) accepts data values \\(y\\) as an argument, conditional on particular values of \\(\theta\\).
Frequentist inference typically involves deriving **estimators** for the unknown parameters. Estimators are formulae that return estimates for particular estimands, as a function of data. They are selected based on some chosen optimality criterion, such as *unbiasedness*, *variance minimization*, or *efficiency*.
> For example, lets say that we have collected some data on the prevalence of autism spectrum disorder (ASD) in some defined population. Our sample includes \\(n\\) sampled children, \\(y\\) of them having been diagnosed with autism. A frequentist estimator of the prevalence \\(p\\) is:
> <div style="font-size:25px">
> \\[\hat{p} = \frac{y}{n}\\]
> </div>
> Why this particular function? Because it can be shown to be unbiased and minimum-variance.
It is important to note that new estimators need to be derived for every estimand that is introduced.
### The Bayesian World View
- Data are considered **fixed**. They used to be random, but once they were written into your lab notebook/spreadsheet/IPython notebook they do not change.
- Model parameters themselves may not be random, but Bayesians use probability distributions to describe their uncertainty in parameter values, and the parameters are therefore treated as **random**. In some cases, it is useful to consider parameters as having been sampled from probability distributions.
This implies the following form:
<div style="font-size:35px">
\\[p(\theta | y)\\]
</div>
This formulation used to be referred to as ***inverse probability***, because it infers from observations to parameters, or from effects to causes.
Bayesians do not seek new estimators for every estimation problem they encounter. There is only one estimator for Bayesian inference: **Bayes' Formula**.
## Bayesian Inference, in 3 Easy Steps
Gelman et al. (2013) describe the process of conducting Bayesian statistical analysis in 3 steps.
### Step 1: Specify a probability model
As was noted above, Bayesian statistics involves using probability models to solve problems. So, the first task is to *completely specify* the model in terms of probability distributions. This includes everything: unknown parameters, data, covariates, missing data, predictions. All must be assigned some probability density.
This step involves making choices.
- what is the form of the sampling distribution of the data?
- what form best describes our uncertainty in the unknown parameters?
### Step 2: Calculate a posterior distribution
The mathematical form \\(p(\theta | y)\\) that we associated with the Bayesian approach is referred to as a **posterior distribution**.
> posterior /pos·ter·i·or/ (pos-tēr´e-er) later in time; subsequent.
Why posterior? Because it tells us what we know about the unknown \\(\theta\\) *after* having observed \\(y\\).
This posterior distribution is formulated as a function of the probability model that was specified in Step 1. Usually, we can write it down but we cannot calculate it analytically. In fact, the difficulty inherent in calculating the posterior distribution for most models of interest is perhaps the major contributing factor for the lack of widespread adoption of Bayesian methods for data analysis. Various strategies for doing so comprise this tutorial.
**But**, once the posterior distribution is calculated, you get a lot for free:
- point estimates
- credible intervals
- quantiles
- predictions
### Step 3: Check your model
Though frequently ignored in practice, it is critical that the model and its outputs be assessed before using the outputs for inference. Models are specified based on assumptions that are largely unverifiable, so the least we can do is examine the output in detail, relative to the specified model and the data that were used to fit the model.
Specifically, we must ask:
- does the model fit data?
- are the conclusions reasonable?
- are the outputs sensitive to changes in model structure?
## Why be Bayesian?
At this point, it is worth addressing the question of why one might consider an alternative statistical paradigm to the classical/frequentist statistical approach. After all, it is not always easy to specify a full probabilistic model, nor to obtain output from the model once it is specified. So, why bother?
> ... the Bayesian approach is attractive because it is useful. Its usefulness derives in large measure from its simplicity. Its simplicity allows the investigation of far more complex models than can be handled by the tools in the classical toolbox.
*-- Link and Barker 2010*
We already noted that there is just one estimator in Bayesian inference, which lends to its ***simplicity***. Moreover, Bayes affords a conceptually simple way of coping with multiple parameters; the use of probabilistic models allows very complex models to be assembled in a modular fashion, by factoring a large joint model into the product of several conditional probabilities.
Bayesian statistics is also attractive for its ***coherence***. All unknown quantities for a particular problem are treated as random variables, to be estimated in the same way. Existing knowledge is given precise mathematical expression, allowing it to be integrated with information from the study dataset, and there is a formal mechanism for incorporating new information into an existing analysis.
Finally, Bayesian statistics confers an advantage in the ***interpretability*** of analytic outputs. Because models are expressed probabilistically, results can be interpreted probabilistically. Probabilities are easy for users (particularly non-technical users) to understand and apply.
### Example: confidence vs. credible intervals
A commonly-used measure of uncertainty for a statistical point estimate in classical statistics is the ***confidence interval***. Most scientists were introduced to the confidence interval during their introductory statistics course(s) in college. Yet, a large number of users mis-interpret the confidence interval.
Here is the mathematical definition of a 95% confidence interval for some unknown scalar quantity that we will here call \\(\theta\\):
<div style="font-size:25px">
\\[Pr(a(Y) < \theta < b(Y) | \theta) = 0.95\\]
</div>
How the endpoints of this interval are calculated varies according to the sampling distribution of \\(Y\\), but as an example, the confidence interval for the population mean when \\(Y\\) is normally distributed is calculated by:
\\[Pr(\bar{Y} - 1.96\frac{\sigma}{\sqrt{n}}< \theta < \bar{Y} + 1.96\frac{\sigma}{\sqrt{n}}) = 0.95\\]
It would be tempting to use this definition to conclude that there is a 95% chance \\(\theta\\) is between \\(a(Y)\\) and \\(b(Y)\\), but that would be a mistake.
Recall that for frequentists, unknown parameters are **fixed**, which means there is no probability associated with them being any value except what they are fixed to. Here, the interval itself, and not \\(\theta\\) is the random variable. The actual interval calculated from the data is just one possible realization of a random process, and it must be strictly interpreted only in relation to an infinite sequence of identical trials that might be (but never are) conducted in practice.
A valid interpretation of the above would be:
> If the experiment were repeated an infinite number of times, 95% of the calculated intervals would contain \\(\theta\\).
This is what the statistical notion of "confidence" entails, and this sets it apart from probability intervals.
Since they regard unknown parameters as random variables, Bayesians can and do use probability intervals to describe what is known about the value of an unknown quantity. These intervals are commonly known as ***credible intervals***.
The definition of a 95% credible interval is:
<div style="font-size:25px">
\\[Pr(a(y) < \theta < b(y) | Y=y) = 0.95\\]
</div>
Notice that we condition here on the data \\(y\\) instead of the unknown \\(\theta\\). Thus, the endpoints are fixed and the variable is random.
We are allowed to interpret this interval as:
> There is a 95% chance \\(\theta\\) is between \\(a\\) and \\(b\\).
Hence, the credible interval is a statement of what we know about the value of \\(\theta\\) based on the observed data.
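As a small illustration (a sketch with a made-up posterior, not data from this tutorial), a 95% credible interval is just a pair of percentiles of the posterior distribution:
```python
import numpy as np

# Stand-in posterior samples, e.g. as produced by an MCMC sampler (hypothetical values)
posterior_samples = np.random.beta(5, 15, size=10000)

# 95% credible interval: the central 95% of the posterior probability mass
np.percentile(posterior_samples, [2.5, 97.5])
```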
## Probability
> *Misunderstanding of probability may be the greatest of all impediments to scientific literacy.*
> — Stephen Jay Gould
Because of its reliance on probability models, it's worth talking a little bit about probability. There are three different ways to define probability, depending on how it is being used.
### 1. Classical probability
<div style="font-size:25px">
\\[Pr(X=x) = \frac{\text{# x outcomes}}{\text{# possible outcomes}}\\]
</div>
Classical probability is an assessment of **possible** outcomes of elementary events. Elementary events are assumed to be equally likely.
### 2. Frequentist probability
<div style="font-size:25px">
\\[Pr(X=x) = \lim_{n \rightarrow \infty} \frac{\text{# times x has occurred}}{\text{# independent and identical trials}}\\]
</div>
Unlike classical probability, frequentist probability is an EMPIRICAL definition. It is an objective statement describing events that have occurred.
### 3. Subjective probability
<div style="font-size:25px">
\\[Pr(X=x)\\]
</div>
Subjective probability is a measure of one's uncertainty in the value of \\(X\\). It characterizes the state of knowledge regarding some unknown quantity using probability.
It is not associated with long-term frequencies nor with equal-probability events.
For example:
- X = the true prevalence of diabetes in Austin is < 15%
- X = the blood type of the person sitting next to you is type A
- X = the Nashville Predators will win next year's Stanley Cup
- X = it is raining in Nashville
## Probability distributions
Bayesian inference uses probability distributions as building blocks for specifying models. They are used to assign uncertainty to unknown parameters, or sampling distributions to data.
Probability distributions assign probabilities to each possible outcome or value of a particular variable. Probabilities must be **positive**, and the sum (or integral) of the probabilities of all possible values must be 1.
There are two distinct classes of distributions, determined by whether the variable being described is **discrete** or **continuous**.
### Discrete Probability Distributions
$$X = \{0,1\}$$
$$Y = \{\ldots,-2,-1,0,1,2,\ldots\}$$
**Probability Mass Function**:
For discrete $X$,
$$Pr(X=x) = f(x|\theta)$$
***e.g. Poisson distribution***
The Poisson distribution models unbounded counts:
<div style="font-size: 150%;">
$$Pr(X=x)=\frac{e^{-\lambda}\lambda^x}{x!}$$
</div>
* $X=\{0,1,2,\ldots\}$
* $\lambda > 0$
$$E(X) = \text{Var}(X) = \lambda$$
```python
%matplotlib inline
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
theta = 3
x = np.random.poisson(theta, size=1000)
x.mean(), x.var()
```
(2.9500000000000002, 3.1594999999999929)
```python
_ = plt.hist(x, bins=30)
```
```python
y = stats.poisson.pmf(range(10), theta)
plt.plot(y, 'ro')
```
### Continuous Random Variables
$$X \in [0,1]$$
$$Y \in (-\infty, \infty)$$
**Probability Density Function**:
For continuous $X$,
$$Pr(x \le X \le x + dx) = f(x|\theta)dx \, \text{ as } \, dx \rightarrow 0$$
***e.g. normal distribution***
<div style="font-size: 150%;">
$$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left[-\frac{(x-\mu)^2}{2\sigma^2}\right]$$
</div>
* $X \in \mathbf{R}$
* $\mu \in \mathbf{R}$
* $\sigma>0$
$$\begin{align}E(X) &= \mu \cr
\text{Var}(X) &= \sigma^2 \end{align}$$
```python
mu, sig = 10, 3
x = np.random.normal(mu, sig, 1000)
x.mean(), x.std()
```
(9.9980174423479724, 2.990905875326102)
```python
_ = plt.hist(x, bins=30)
```
```python
xvals = np.linspace(-10, 30)
y = stats.norm.pdf(xvals, mu, sig)
plt.plot(xvals, y, 'ro')
```
As a reference, the PyMC documentation provides a description of most of the commonly-encountered probability distributions used in statistical analysis.
```python
from IPython.display import HTML
HTML('')
```
## Bayes' Formula
Now that we have some probability under our belt, we turn to Bayes' formula. This, as you recall, is the engine that allows us to obtain estimates of unknown quantities that we care about, using information retained by the data we observe. It turns out to be quite simple to derive Bayes' formula directly from the definition of conditional probability.
Again, the goal in Bayesian inference is to calculate the **posterior distribution** of our unknowns:
<div style="font-size: 150%;">
\\[Pr(\theta|Y=y)\\]
</div>
This expression is a **conditional probability**. It is the probability of \\(\theta\\) *given* the observed values of \\(Y=y\\).
In general, the conditional probability of A given B is defined as follows:
\\[Pr(B|A) = \frac{Pr(A \cap B)}{Pr(A)}\\]
To gain an intuition for this, it is helpful to use a Venn diagram:
Notice from this diagram that the following conditional probability is also true:
\\[Pr(A|B) = \frac{Pr(A \cap B)}{Pr(B)}\\]
These can both be rearranged to be expressions of the joint probability of A and B. Setting these equal to one another:
\\[Pr(B|A)Pr(A) = Pr(A|B)Pr(B)\\]
Then rearranging:
\\[Pr(B|A) = \frac{Pr(A|B)Pr(B)}{Pr(A)}\\]
This is Bayes' formula. Replacing the generic A and B with things we care about reveals why Bayes' formula is so important:
The equation expresses how our belief about the value of \\(\theta\\), as expressed by the **prior distribution** \\(P(\theta)\\), is reallocated following the observation of the data \\(y\\), as expressed by the posterior distribution.
The innocuous denominator \\(P(y)\\) cannot be calculated directly, and is actually the expression in the numerator, integrated over all \\(\theta\\):
<div style="font-size: 150%;">
\\[Pr(\theta|y) = \frac{Pr(y|\theta)Pr(\theta)}{\int Pr(y|\theta)Pr(\theta) d\theta}\\]
</div>
The intractability of this integral is one of the factors that has contributed to the under-utilization of Bayesian methods by statisticians.
### Priors
Once considered a controversial aspect of Bayesian analysis, the prior distribution characterizes what is known about an unknown quantity before observing the data from the present study. Thus, it represents the information state of that parameter. It can be used to reflect the information obtained in previous studies, to constrain the parameter to plausible values, or to represent the population of possible parameter values, of which the current study's parameter value can be considered a sample.
### Likelihood functions
The likelihood represents the information in the observed data, and is used to update prior distributions to posterior distributions. This updating of belief is justified because of the **likelihood principle**, which states:
> Following observation of \\(y\\), the likelihood \\(L(\theta|y)\\) contains all experimental information from \\(y\\) about the unknown \\(\theta\\).
Bayesian analysis satisfies the likelihood principle because the posterior distribution's dependence on the data is only through the likelihood. In comparison, most frequentist inference procedures violate the likelihood principle, because inference will depend on the design of the trial or experiment.
What is a likelihood function? It is closely related to the probability density (or mass) function. Taking a common example, consider some data that are binomially distributed (that is, they describe the outcomes of \\(n\\) binary events). Here is the binomial sampling distribution:
\\[p(Y|\theta) = {n \choose y} \theta^{y} (1-\theta)^{n-y}\\]
We can code this easily in Python:
```python
from scipy.special import comb  # scipy.misc.comb was removed in newer SciPy releases
pbinom = lambda y, n, p: comb(n, y) * p**y * (1-p)**(n-y)
```
This function returns the probability of observing \\(y\\) events from \\(n\\) trials, where events occur independently with probability \\(p\\).
```python
pbinom(3, 10, 0.5)
```
0.1171875
```python
pbinom(1, 25, 0.5)
```
7.4505805969238281e-07
```python
yvals = range(10+1)
plt.plot(yvals, [pbinom(y, 10, 0.5) for y in yvals], 'ro')
```
What about the likelihood function?
The likelihood function has exactly the same form as the sampling distribution, except that we are now interested in varying the parameter for a given dataset.
```python
pvals = np.linspace(0, 1)
y = 4
plt.plot(pvals, [pbinom(y, 10, p) for p in pvals])
```
So, though we are dealing with the same equation, these are entirely different functions: the distribution is discrete, while the likelihood is continuous; the distribution's range is from 0 to 10, while the likelihood's is 0 to 1; the distribution integrates (sums) to one, while the likelihood does not.
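We can check the last point numerically with the functions defined above (for the binomial, the integral of the likelihood over \\(p\\) is \\(1/(n+1)\\), not one):
```python
# The sampling distribution sums to one over y; the likelihood does not
# integrate to one over p (for n=10, y=4 the integral is 1/11).
sum_over_y = sum(pbinom(y, 10, 0.5) for y in range(11))
integral_over_p = np.trapz([pbinom(4, 10, p) for p in pvals], pvals)
sum_over_y, integral_over_p
```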
## Example: Genetic probabilities
Let's put Bayesian inference into action using a very simple example. I've chosen this example because it is one of the rare occasions where the posterior can be calculated by hand. We will show how data can be used to update our belief in competing hypotheses.
Hemophilia is a rare genetic disorder that impairs the ability of the body's clotting factors to coagulate the blood in response to broken blood vessels. The disease is an **x-linked recessive** trait, meaning that there is only one copy of the gene in males but two in females, and the trait can be masked by the dominant allele of the gene.
This implies that males with 1 copy of the disease allele are *affected*, while females with 1 copy are unaffected but *carriers* of the disease. Having 2 copies of the disease allele is fatal, so this genotype does not exist in the population.
In this example, consider a woman whose mother is a carrier (because her brother is affected) and who marries an unaffected man. Let's now observe some data: the woman has two consecutive (non-twin) sons who are unaffected. We are interested in determining **if the woman is a carrier**.
To set up this problem, we need to specify our probability model. The unknown quantity of interest is simply an indicator variable \\(W\\) that equals 1 if the woman is a carrier, and zero if she is not. We are interested in the probability that the variable equals one, given what we have observed:
\\[Pr(W=1 | s_1=0, s_2=0)\\]
Our prior information is based on what we know about the woman's ancestry: her mother was a carrier. Hence, the prior is \\(Pr(W=1) = 0.5\\). Another way of expressing this is in terms of the **prior odds**, or:
\\[O(W=1) = \frac{Pr(W=1)}{Pr(W=0)} = 1\\]
Now for the likelihood: The form of this function is:
\\[L(W | s_1=0, s_2=0)\\]
This can be calculated as the probability of observing the data for any passed value for the parameter. For this simple problem, the likelihood takes only two possible values:
\\[\begin{aligned}
L(W=1 &| s_1=0, s_2=0) = (0.5)(0.5) = 0.25 \cr
L(W=0 &| s_1=0, s_2=0) = (1)(1) = 1
\end{aligned}\\]
With all the pieces in place, we can now apply Bayes' formula to calculate the posterior probability that the woman is a carrier:
\\[\begin{aligned}
Pr(W=1 | s_1=0, s_2=0) &= \frac{L(W=1 | s_1=0, s_2=0) Pr(W=1)}{L(W=1 | s_1=0, s_2=0) Pr(W=1) + L(W=0 | s_1=0, s_2=0) Pr(W=0)} \cr
&= \frac{(0.25)(0.5)}{(0.25)(0.5) + (1)(0.5)} \cr
&= 0.2
\end{aligned}\\]
Hence, there is a 0.2 probability of the woman being a carrier.
It's a bit trivial, but we can code this in Python:
```python
prior = 0.5
p = 0.5
# L(w, s): likelihood of the sons' statuses s (0 = unaffected) given carrier status w (1 = carrier, 0 = not a carrier)
L = lambda w, s: np.prod([(1-i, p**i * (1-p)**(1-i))[w] for i in s])
```
```python
s = [0,0]
post = L(1, s) * prior / (L(1, s) * prior + L(0, s) * (1 - prior))
post
```
0.20000000000000001
Now, what happens if the woman has a third unaffected child? What is our estimate of her probability of being a carrier then?
Bayes' formula makes it easy to update analyses with new information, in a sequential fashion. We simply assign the posterior from the previous analysis to be the prior for the new analysis, and proceed as before:
```python
L(1, [0])
```
0.5
```python
s = [0]
prior = post
L(1, s) * prior / (L(1, s) * prior + L(0, s) * (1 - prior))
```
0.11111111111111112
Thus, observing a third unaffected child has further reduced our belief that the mother is a carrier.
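The same sequential scheme extends to any number of children. A small sketch that repeats the update for each additional unaffected son (it reproduces the 0.2 and 0.111 values above at the second and third steps):
```python
# Posterior probability of being a carrier after k consecutive unaffected sons
prior_k = 0.5
for k in range(1, 6):
    post_k = L(1, [0]) * prior_k / (L(1, [0]) * prior_k + L(0, [0]) * (1 - prior_k))
    print(k, round(post_k, 4))
    prior_k = post_k
```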
---
## References
Gelman A, Carlin JB, Stern HS, Dunson DB, Vehtari A, Rubin DB. Bayesian Data Analysis, Third Edition. CRC Press; 2013.
```python
from IPython.core.display import HTML
def css_styling():
styles = open("styles/custom.css", "r").read()
return HTML(styles)
css_styling()
```
<style>
@font-face {
font-family: "Computer Modern";
src: url('http://mirrors.ctan.org/fonts/cm-unicode/fonts/otf/cmunss.otf');
}
div.cell{
width: 90%;
/* margin-left:auto;*/
/* margin-right:auto;*/
}
ul {
line-height: 145%;
font-size: 90%;
}
li {
margin-bottom: 1em;
}
h1 {
font-family: Helvetica, serif;
}
h4{
margin-top: 12px;
margin-bottom: 3px;
}
div.text_cell_render{
font-family: Computer Modern, "Helvetica Neue", Arial, Helvetica, Geneva, sans-serif;
line-height: 145%;
font-size: 130%;
width: 90%;
margin-left:auto;
margin-right:auto;
}
.CodeMirror{
font-family: "Source Code Pro", source-code-pro,Consolas, monospace;
}
/* .prompt{
display: None;
}*/
.text_cell_render h5 {
font-weight: 300;
font-size: 16pt;
color: #4057A1;
font-style: italic;
margin-bottom: 0.5em;
margin-top: 0.5em;
display: block;
}
.warning{
color: rgb( 240, 20, 20 )
}
</style>
| ce51d6ed1b4e5c7c4ba38c7bb4592ba987f4f0b9 | 90,091 | ipynb | Jupyter Notebook | notebooks/.ipynb_checkpoints/7. Introduction to Bayes-checkpoint.ipynb | rouseguy/scipy2015_tutorial | ["CC0-1.0"] |
# Stochastic Variational Inference
Notes from the Hoffman *et al.* (2013) paper
In probabilistic modelling, we use hidden variables to encode hidden structure in observed data; we articulate the relationship between the hidden and observed variables with a factorized probability distribution (i.e. a graphical model) and we use inference algorithms to estimate the **posterior distribution**, the **conditional distribution** of hidden structure given the observations.
Consider a graphical model of hidden and observed random variables for which we want to compute the posterior. For many models of interest, this posterior is not tractable to compute and we must appeal to approximate methods. The two most prominent strategies in statistics and machine learning are Markov chain Monte Carlo (MCMC) sampling and variational inference:
* **MCMC Sampling**:
We construct a Markov chain over the hidden variables whose stationary distribution is the posterior of interest. We run the chain until it has (hopefully) reached equilibrium and collect samples to approximate the posterior.
* **Variational inference**:
We define a flexible family of distributions over the hidden variables, indexed by free parameters. We then find the setting of the parameters (i.e. the member of the family) that is closest to the posterior. Thus we solve the inference problem by solving an optimization problem.
The aim here is to develop a general variational method that scales.
Form of stochastic variational inference:
1. Subsample one or more data points from the data
2. Analyze the subsample using the current variational parameters
3. Implement a closed-form update of the variational parameters.
4. Repeat.
While traditional algorithms require repeatedly analyzing the whole dataset before updating the variational parameters, this algorithm only requires that we analyze randomly sampled subsets.
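A minimal sketch of those four steps as a generic loop; the model-specific computations are passed in as functions, and the decreasing step size rho_t = (t + tau)^(-kappa) is a common choice satisfying the Robbins–Monro conditions:
```python
import numpy as np

def svi(data, lam, local_step, noisy_global, n_iter=1000, tau=1.0, kappa=0.9):
    """Generic stochastic variational inference loop (sketch).

    local_step(x_n, lam)        -> optimal local parameters phi_n for one data point
    noisy_global(x_n, phi_n, N) -> global parameter implied by that point replicated N times
    """
    N = len(data)
    rng = np.random.default_rng(0)
    for t in range(1, n_iter + 1):
        n = rng.integers(N)                        # 1. subsample a data point
        phi_n = local_step(data[n], lam)           # 2. analyze it with the current lam
        lam_hat = noisy_global(data[n], phi_n, N)  # 3. closed-form intermediate estimate
        rho = (t + tau) ** (-kappa)                # decreasing step size
        lam = (1 - rho) * lam + rho * lam_hat      # 4. noisy update, then repeat
    return lam
```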
SVI is a stochastic **optimization algorithm** for mean-field variational inference. It approximates the posterior distribution of a probabilistic model with hidden variables, and can handle massive data sets of observations.
A graphical model with observations $x_{1:N}$, local hidden variables $z_{1:N}$ and global hidden variables $\beta$. The distribution of each observation $x_n$ only depends on its corresponding local variable $z_n$ and the global variables $\beta$.
## 1. Define the class of models to which our algorithm applies.
We define *local* and *global* hidden variables, and requirements on the conditional distributions within the model.
The joint distribution factorizes into a global term and a product of local terms:
$$
p(x, z, \beta |\alpha) = p(\beta | \alpha)\prod_{n=1}^N p(x_n, z_n |\beta)
$$
Our goal is to approximate the posterior distribution of the hidden variables given the observations, $p(\beta, z|x)$.
**Assumption 1**: The $n$th observation $x_n$ and the $n$th local variable $z_n$ are conditionally independent, given global variables $\beta$, of all other observations and local hidden variables,
$$
p(x_n, z_n |x_{-n}, z_{-n},\beta,\alpha) = p(x_n, z_n|\beta,\alpha)
$$
**Assumption 2**: The *complete conditionals* in the model. A complete conditional is the conditional distribution of a hidden variable given the other hidden variables and the observations. We assume that these distributions are in the **exponential family**,
\begin{eqnarray}
p(\beta|x, z, \alpha) & = & h(\beta)\exp\{\eta_g(x, z, \alpha)^Tt(\beta)-a_g(\eta_g(x, z, \alpha))\}\\
p(z_{nj}|x_n, z_{n,-j}, \beta) & = & h(z_{nj})\exp\{\eta_l(x_n, z_{n,-j},\beta)^Tt(z_{nj}) - a_l(\eta_l(x_n, z_{n,-j},\beta))\}
\end{eqnarray}
The scalar functions $h(\cdot)$ and $a(\cdot)$ are respectively the *base measure* and *log-normalizer*; the vector functions $\eta(\cdot)$ and $t(\cdot)$ are respectively the *natural parameter* and *sufficient statistics*. **(For details, consult a basic statistics text on exponential families.)**
These are conditional distributions, so the natural parameter is a function of the variables that are being conditioned on. For the local variables $z_{nj}$, the complete conditional distribution is determined by the global variables $\beta$ and the other local variables in the $n$th context, i.e. the $n$th data point $x_n$ and the local variables $z_{n,-j}$.
These assumptions on the complete conditional imply a **conjugacy relationship** between the global variables $\beta$ and the local contexts $(z_n, x_n)$, and this relationship implies the distribution of the local context given the global variables must be in an exponential family,
\begin{equation}
p(x_n, z_n |\beta) = h(x_n, z_n)\exp\{\beta^Tt(x_n, z_n) - a_l(\beta)\}
\end{equation}
The prior distribution $p(\beta)$ must also be in an exponential family,
$$
p(\beta) = h(\beta)\exp\{\alpha^Tt(\beta)-a_g(\alpha)\}
$$
The sufficient statistics are $t(\beta) = (\beta, -a_l(\beta))$ and thus the hyperparameter $\alpha$ has two components $\alpha = (\alpha_1, \alpha_2)$. The first component $\alpha_1$ is a vector of the same dimension as $\beta$, the second component $\alpha_2$ is a scalar.
The two equations above imply that the complete conditional for the global variable is in the same exponential family as the prior with natural parameter
$$
\eta_g(x, z, \alpha) = (\alpha_1 + \sum_{n=1}^N t(z_n, x_n), \alpha_2+N).
$$
Analysing data with one of the models associated with this family of distributions (e.g. Bayesian mixture models, latent Dirichlet allocation) amounts to computing the posterior distribution of the hidden variables given the observations,
$$
p(z, \beta |x ) = \frac{p(x, z, \beta)}{\int p(x, z, \beta)dz d\beta}.
$$
We then use this posterior to explore the hidden structure of our data or to make predictions about future data.
## 2. Mean field variational inference
Mean-field variational inference is an approximate inference strategy that seeks a tractable distribution over the hidden variables which is close to the posterior distribution. We derive the traditional variational inference algorithm for our class of models, which is a coordinate ascent algorithm; closeness is measured with the KL divergence. We use the resulting distribution, called the *variational distribution*, to approximate the posterior.
### The evidence lower bound
Variational inference minimizes the KL divergence from the variational distribution to the posterior distribution. It maximizes the *evidence lower bound* (ELBO), a lower bound on the logarithm of the marginal probability of the observations $\log p(x)$. The ELBO is equal to the negative KL divergence up to an additive constant.
We derive the ELBO by introducing a distribution over the hidden variables $q(\alpha, \beta)$ and using Jensen's inequality. (This implies $\log\mathbb{E}[f(y)]\ge \mathbb{E}[\log f(y)]$ for any random variable $y$).
This gives the following bound on the log marginal,
\begin{eqnarray}
\log p(x) & = & \log\int p(x, z, \beta)dz d\beta\\
& = & \log\int p(x, z, \beta)\frac{q(z, \beta)}{q(z, \beta)}dzd\beta\\
& = & \log\left(\mathbb{E}_q\left[\frac{p(x,z,\beta)}{q(z,\beta)}\right]\right)\\
&\ge & \mathbb{E}_q[\log p(x, z, \beta)]-\mathbb{E}_q[\log q(z, \beta)]\\
&\triangleq &\mathcal{L}(q).
\end{eqnarray}
The ELBO contains two terms. The first term is the expected log joint, $\mathbb{E}_q[\log p(x, z, \beta)]$. The second is the entropy of the variational distribution, $-\mathbb{E}_q[\log q(z, \beta)]$. Both of these terms depend on $q(z, \beta)$, the variational distribution of the hidden variables.
We restrict $q(z, \beta)$ to be in a family that is tractable, one for which the expectations in the ELBO can be efficiently computed. We then try to find the member of the family that maximizes the ELBO. Finally, we use the optimized distribution as a proxy for the posterior.
Solving this maximization problem is equivalent to finding the member of the family that is closest in KL divergence to the posterior:
\begin{eqnarray}
KL(q(z,\beta)||p(z, \beta|x)) & = & \mathbb{E}_q[\log q(z, \beta)] - \mathbb{E}_q[\log p(z, \beta|x)]\\
& = & \mathbb{E}_q[\log q(z, \beta)]-\mathbb{E}_q[\log p(x, z, \beta)] + \log p(x)\\
& = & -\mathcal{L}(q) + \mathrm{const}.
\end{eqnarray}
$\log p(x)$ is replaced by a constant because it does not depend on $q$.
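As a toy illustration (not from the paper), the ELBO can be estimated by Monte Carlo for a model with a single global variable, e.g. $\beta \sim N(0,1)$, $x_i \mid \beta \sim N(\beta, 1)$, and a Gaussian variational family $q(\beta) = N(m, s^2)$; maximizing this estimate over $(m, s)$ approximates the posterior:
```python
import numpy as np
from scipy import stats

def elbo_mc(x, m, s, n_samples=5000, seed=0):
    """Monte Carlo ELBO estimate for: beta ~ N(0,1), x_i | beta ~ N(beta, 1),
    q(beta) = N(m, s^2). x is a 1-D numpy array of observations."""
    rng = np.random.default_rng(seed)
    beta = rng.normal(m, s, size=n_samples)                          # beta ~ q
    log_joint = stats.norm.logpdf(beta, 0, 1)                        # log p(beta)
    log_joint += stats.norm.logpdf(x[:, None], beta, 1).sum(axis=0)  # + sum_i log p(x_i | beta)
    log_q = stats.norm.logpdf(beta, m, s)                            # log q(beta)
    return np.mean(log_joint - log_q)
```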
### The mean-field variational family.
The simplest variational family of distributions. In this family, each hidden variable is independent and governed by its own parameter,
$$
q(z, \beta) = q(\beta |\lambda)\prod_{n=1}^N \prod_{j=1}^J q(z_{nj}|\phi_{nj})
$$
The global parameters $\lambda$ govern the global variables, the local parameters $\phi_n$ govern the local variables in the $n$th context. The ELBO is a function of these parameters.
We set $q(\beta|\lambda)$ and $q(z_{nj}|\phi_{nj})$ to be in the same exponential family as the complete conditional distributions $p(\beta|x, z)$ and $p(z_{nj}|x_n,z_{n,-j},\beta)$. The variational parameters $\lambda$ and $\phi_{nj}$ are the natural parameters to those families,
\begin{eqnarray}
q(\beta|\lambda) & = & h(\beta)\exp\{\lambda^Tt(\beta)-a_g(\lambda)\}\\
q(z_{nj}|\phi_{nj}) & = & h(z_{nj})\exp\{\phi_{nj}^Tt(z_{nj}) - a_{l}(\phi_{nj})\}
\end{eqnarray}
| 92ee578d9e99df1698d6d04dbc0b772fe0db37bc | 11,728 | ipynb | Jupyter Notebook | variational-inference/Stochastic-Variational-Inference.ipynb | yusueliu/murphy-book | ["Apache-2.0"] |
```
import sympy
ha = qubit('a', latex_label='\\alpha', dtype=sympy)
hb = qubit('b', latex_label='\\beta', dtype=sympy)
sympy.var('x,y')
```
$$\begin{pmatrix}x, & y\end{pmatrix}$$
```
U=x*ha.fourier()
U
```
$$U = \begin{pmatrix} \tfrac{\sqrt{2}}{2} x & \tfrac{\sqrt{2}}{2} x \\ \tfrac{\sqrt{2}}{2} x & -\tfrac{\sqrt{2}}{2} x \end{pmatrix}
\quad \text{(rows } \left|0_{\alpha}\right>, \left|1_{\alpha}\right>; \text{ columns } \left<0_{\alpha}\right|, \left<1_{\alpha}\right|)$$
```
s = ha.array([1, x])
t = hb.array([1, y])
rho = (s*t).O
rho
```
$$\rho = \begin{pmatrix}
1 & \overline{y} & \overline{x} & \overline{x}\,\overline{y} \\
y & y\overline{y} & y\overline{x} & y\overline{x}\,\overline{y} \\
x & x\overline{y} & x\overline{x} & x\overline{x}\,\overline{y} \\
x y & x y\overline{y} & x y\overline{x} & x y\overline{x}\,\overline{y}
\end{pmatrix}
\quad \text{(basis order } \left|0_{\alpha}0_{\beta}\right>, \left|0_{\alpha}1_{\beta}\right>, \left|1_{\alpha}0_{\beta}\right>, \left|1_{\alpha}1_{\beta}\right>)$$
```
U = (ha * hb).eye()
# arrays can be indexed using dictionaries
U[{ ha: 0, ha.H: 0, hb: 0, hb.H: 0 }] = x
U[{ ha: 0, ha.H: 0, hb: 0, hb.H: 1 }] = y
U
```
$$U = \begin{pmatrix}
x & y & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}
\quad \text{(same basis order as above)}$$
```
U.I
```
$$U^{-1} = \begin{pmatrix}
\tfrac{1}{x} & -\tfrac{y}{x} & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}$$
```
U * U.I
```
$$U\, U^{-1} = \begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}$$
```
ha
```
$\left| \alpha \right\rangle$
```
```
| 381ad455a5aa2cec29a8479aa51d68449690ac8c | 21,372 | ipynb | Jupyter Notebook | stash/qitensor_sympy.ipynb | dstahlke/qitensor | ["BSD-2-Clause"] |
# University of Applied Sciences Munich
## Kalman Filter Tutorial
---
(c) Lukas Köstler (lkskstlr@gmail.com)
```python
import ipywidgets as widgets
from ipywidgets import interact_manual
from IPython.display import display
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (6, 3)
import numpy as np
%matplotlib notebook
```
```python
def normal_pdf(x, mu=0.0, sigma=1.0):
return 1.0 / np.sqrt(2*np.pi*sigma**2) * np.exp(-0.5/sigma**2 * (x-mu)**2)
```
#### Possible sources:
+ (One of the most prominent books on robotics) https://docs.ufpr.br/~danielsantos/ProbabilisticRobotics.pdf
+ (Many nice pictures) http://www.bzarg.com/p/how-a-kalman-filter-works-in-pictures/
+ (Stanford lecture slides) https://stanford.edu/class/ee363/lectures/kf.pdf
## Kalman Filter
---
* We will develop everything with an example in mind:
One throws an object into the air and for some timepoints $t_0, t_1, \dots, t_N$ measures the position $x$ and the velocity $v$ of the object. The measurements have some error. We want to find the best estimate of the positions $x_0, x_1, \dots, x_N$ for said timepoints.
* Under certain conditions the Kalman Filter (~ 1960) is the optimal tool for this task
### State & Control
---
* The state at time $t_n$ is denoted by $x_n$
Example: Height above ground in meters
* The control at time $t_n$ is denoted $u_n$
Example: Vertical velocity in meters/second
### Transition Model
---
* The *simplified* transition model gives the new state
$$ x_{n+1} = A x_n + B u_{n+1} $$
Example (simple mechanics):
$$x_{n+1} = \underbrace{1}_{A} \, x_n + \underbrace{\Delta t}_{B} \,u_{n+1}$$
### Transition Model with Noise
---
* The transition model might not be perfect. Thus we add a random term (gaussian)
$$ x_{n+1} = A x_n + B u_{n+1} + w_n, \,\,\, w_n \sim N(0, \sigma_w)$$
Example: Because $\mu = 0$ we assume that we have no systematic error in the transition model. $\sigma_w$ is the expected error per transition, e.g. 0.1 meters due to friction, etc.
**Important**: It is reasonable to expect that $u_{n+1}$ is noisy as well. This has to be accounted for! Therefore $\sigma_w$ is usually a function of $x_n, u_{n+1}$; for instance, if the control is measured with standard deviation $\sigma_u$, it contributes $b^2 \sigma_u^2$ to the transition variance. So $\sigma_w = \sigma_{w, n}$, i.e. different for each timestep.
### Observation Model
---
* At each timepoint $t_n$ we observe/measure the state:
$$ z_{n} = C x_n$$
Example (we measure the height in meters directly):
$$z_{n} = \underbrace{1}_{C} \, x_n$$
If we would measure the height in centimeters one would get:
$$z_{n} = \underbrace{100}_{C} \, x_n $$
### Observation Model with Noise
---
* The measurement (either the device or our model) might not be perfect. Thus we add a random term (gaussian)
$$ z_{n} = C x_n + v_n, \,\,\, v_n \sim N(0, \sigma_v)$$
Example: Because $\mu = 0$ we assume that we have no systematic error in the measurement model. $\sigma_v$ is the expected error per measurement, e.g. 0.5 meters as given in the sensor's data sheet.
Again it is reasonable to expect that $\sigma_v$ is not constant. A normal distance sensor usually has some fixed noise and some which is relative to the distance measured, so $\sigma_v \approx \sigma_{v, fix} + x_n * \sigma_{v, linear}$.
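As a small sketch (all constants made up for illustration), such a distance-dependent measurement model could be simulated like this:
```python
import numpy as np

def measure(x_true, c=1.0, sigma_v_fix=0.05, sigma_v_linear=0.01):
    """Simulate one noisy measurement z = c*x + v with distance-dependent noise."""
    sigma_v = sigma_v_fix + sigma_v_linear * abs(x_true)
    return c * x_true + np.random.normal(0.0, sigma_v)
```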
## The Filter
---
* **Predict**
Take the old estimate $\hat{x}_n$ and the transition model to get $\hat{x}_{n+1\vert n}$ which is the new "guess" without using the measurement $z_{n+1}$.
* **Update/Correct**
Take the new "guess"/prediction $\hat{x}_{n+1\vert n}$ and the measurement $z_{n+1}$ to get the final estimate $\hat{x}_{n+1}$.
### Predict
---
* Reminder: $ x_{n+1} = a x_n + b u_n + w_n, \,\,\, w_n \sim N(0, \sigma_w), \, a, b \in \mathcal{R}$
* Which gives:
\begin{align}
\text{mean:}& &E[\hat{x}_{n+1\vert n}] &= a E[\hat{x}_n] + b u_n + 0 \\
\text{variance:}& &Var(\hat{x}_{n+1 \vert n}) &= a^2 Var(\hat{x}_n) + 0 + \sigma_w^2 \\[2ex]
&& \mu_{n+1 \vert n} &= a \mu_{n} + b u_n \\
&& \sigma_{n+1 \vert n}^2 &= a^2 \sigma_{n}^2 + \sigma_w^2
\end{align}
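A minimal sketch of this predict step as a helper function (mirroring the update helper defined further below):
```python
import numpy as np

def kf_1d_predict(mu_n, sigma_n, a, b, u_np1, sigma_w):
    """One 1D Kalman predict step: returns (mu_{n+1|n}, sigma_{n+1|n})."""
    mu_pred = a * mu_n + b * u_np1
    sigma_pred = np.sqrt(a**2 * sigma_n**2 + sigma_w**2)
    return mu_pred, sigma_pred
```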
```python
%%capture
mu_n = 1.0
sigma_n = 0.5
# parameters to be animated
a = 1.0
bu_n = 0.1
sigma_w = 0.4
xx = np.linspace(-5,5,1000)
yyxn = normal_pdf(xx, mu_n, sigma_n)
yyaxn = normal_pdf(xx, a*mu_n, np.abs(a)*sigma_n)
yybun = normal_pdf(xx, a*mu_n + bu_n, np.abs(a)*sigma_n)
yyw = normal_pdf(xx, a*mu_n + bu_n, sigma_w)
yyxnp1 = normal_pdf(xx, a*mu_n + bu_n, np.sqrt(a**2 * sigma_n**2 + sigma_w**2))
fig01 = plt.figure();
ax01 = fig01.add_subplot(1,1,1);
line01_xn, = ax01.plot(xx, yyxn, '--', label="$p(x_n)$");
line01_axn, = ax01.plot(xx, yyaxn, '-', label="$p(a x_n)$", alpha=0.5);
line01_bun, = ax01.plot(xx, yybun, '-', label="$p(a x_n + b u_n)$", alpha=0.5);
#line01_w, = ax01.plot(xx, yyw, '--', label="$p(w_n)$ shifted");
line01_xnp1, = ax01.plot(xx, yyxnp1, label="$p(x_{n+1})$");
ax01.set_xlim(-5,5)
ax01.set_ylim(0.0, 1.2*max(np.amax(yyxn), np.amax(yyxnp1), np.amax(yyaxn)))
ax01.legend()
def update01(a, bu_n, sigma_w):
global xx, ax01, yyw, yyxnp1, yyxn, yyaxn, yybun, line01_w, line01_xnp1, line01_axn, line01_bun, mu_n, sigma_n
mu_xnp1 = a*mu_n + bu_n
sigma_xnp1 = np.sqrt(a**2 * sigma_n**2 + sigma_w**2)
yyw = normal_pdf(xx, mu_xnp1, sigma_w)
yyxnp1 = normal_pdf(xx, mu_xnp1, sigma_xnp1)
yyaxn = normal_pdf(xx, a*mu_n, np.abs(a)*sigma_n)
yybun = normal_pdf(xx, a*mu_n + bu_n, np.abs(a)*sigma_n)
line01_axn.set_ydata(yyaxn)
line01_bun.set_ydata(yybun)
#line01_w.set_ydata(yyw)
line01_xnp1.set_ydata(yyxnp1)
ax01.set_ylim(0.0, 1.2*max(np.amax(yyxn), np.amax(yyxnp1), np.amax(yyaxn)))
#ax02.set_xlabel("$\\mu_Y={:.2f}$, $\\sigma_Y={:.2}$ $\\rightarrow$ $\\mu_Z={:.2}$, $\\sigma_Z={:.2}$".format(mu_Y, sigma_Y, mu_Z, sigma_Z))
fig01.canvas.draw()
w01_a = widgets.FloatSlider(value=1.0, min=0.0, max=2.0, step = 0.1)
w01_bu_n = widgets.FloatSlider(value=0.2, min=-1.0, max=1.0, step = 0.1)
w01_sigma_w = widgets.FloatSlider(value=sigma_w, min=0.01, max=1.0, step=0.01)
```
```python
display(interact_manual(update01, a=w01_a, bu_n=w01_bu_n, sigma_w=w01_sigma_w));
display(fig01);
```
A Jupyter Widget
<function __main__.update01>
<IPython.core.display.Javascript object>
### Update
---
* Reminder: $ z_{n} = c x_n + v_n, \,\,\, v_n \sim N(0, \sigma_v), c \in \mathcal{R}$
* Which gives:
\begin{align}
&& \mu_{n+1 \vert n+1} &= \mu_{n+1 \vert n} + \frac{\sigma_{n+1 \vert n}^2 c}{\sigma_v^2 + c^2 \sigma_{n+1 \vert n}^2}\left(z_{n+1} - c \mu_{n+1 \vert n} \right) \\
&& \sigma_{n+1 \vert n+1}^2 &= \sigma_{n+1 \vert n}^2 - \frac{\sigma_{n+1 \vert n}^4 c^2}{\sigma_v^2 + c^2 \sigma_{n+1 \vert n}^2}
\end{align}
Note: The above formulas are only valid for the specific case discussed.
```python
%%capture
def kf_1d_update(mu_np1_n, sigma_np1_n, z_np1, c, sigma_v):
mu_np1_np1 = mu_np1_n + ((sigma_np1_n**2 * c) / (sigma_v**2 + c**2 * sigma_np1_n**2)) * (z_np1 - c*mu_np1_n)
sigma_np1_np1 = np.sqrt(sigma_np1_n**2 - ((sigma_np1_n**4 * c**2) / (sigma_v**2 + c**2 * sigma_np1_n**2)))
return mu_np1_np1, sigma_np1_np1
mu_np1_n = 1.0
sigma_np1_n = 1.0
# parameters to be animated
z_np1 = 1.5
c = 1.0
sigma_v = 0.8
mu_np1_np1, sigma_np1_np1 = kf_1d_update(mu_np1_n, sigma_np1_n, z_np1, c, sigma_v)
xx = np.linspace(-5,5,1000)
yy_np1_n = normal_pdf(xx, mu_np1_n, sigma_np1_n)
yy_np1_np1 = normal_pdf(xx, mu_np1_np1, sigma_np1_np1)
yy_only_z = normal_pdf(xx, z_np1/c, sigma_v/c)
fig02 = plt.figure();
ax02 = fig02.add_subplot(1,1,1);
line02_np1_n, = ax02.plot(xx, yy_np1_n, '--', label=r"$p(x_{n+1 \vert n})$");
line02_np1_np1, = ax02.plot(xx, yy_np1_np1, '-', label=r"$p(x_{n+1 \vert n+1})$");
line02_only_z, = ax02.plot(xx, yy_only_z, '-', label="only measurement")
ax02.set_xlim(-5,5)
ax02.set_ylim(0.0, 1.2*max(np.amax(yy_np1_n), np.amax(yy_np1_np1)))
ax02.legend()
def update02(z_np1, c, sigma_v):
global xx, ax02, yy_np1_np1, yy_np1_n, yy_only_z, mu_np1_n, sigma_np1_n
mu_np1_np1, sigma_np1_np1 = kf_1d_update(mu_np1_n, sigma_np1_n, z_np1, c, sigma_v)
yy_np1_np1 = normal_pdf(xx, mu_np1_np1, sigma_np1_np1)
yy_only_z = normal_pdf(xx, z_np1/c, sigma_v/c)
line02_np1_np1.set_ydata(yy_np1_np1)
line02_only_z.set_ydata(yy_only_z)
ax02.set_ylim(0.0, 1.2*max(np.amax(yy_np1_n), np.amax(yy_np1_np1)))
fig02.canvas.draw()
w02_znp1 = widgets.FloatSlider(value=z_np1, min=-1.5, max=2.5, step = 0.1)
w02_c = widgets.FloatSlider(value=c, min=0.1, max=2.0, step = 0.1)
w02_sigma_v = widgets.FloatSlider(value=sigma_v, min=0.01, max=2.0, step=0.01)
```
```python
display(interact_manual(update02, z_np1=w02_znp1, c=w02_c, sigma_v=w02_sigma_v));
display(fig02);
```
A Jupyter Widget
<function __main__.update02>
<IPython.core.display.Javascript object>
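To close the loop on the thrown-object example, here is a minimal sketch of a complete filter run that alternates predict and update steps, reusing `kf_1d_update` from above; all numbers (trajectory, noise levels, initial guess) are made up for illustration:
```python
import numpy as np

np.random.seed(0)
dt_kf, g_acc = 0.1, -9.81               # timestep [s], gravity [m/s^2] (assumed)
sigma_w_kf, sigma_v_kf, c_kf = 0.05, 0.5, 1.0

# Synthetic truth and noisy position measurements for a thrown object
tt = np.arange(0.0, 2.0, dt_kf)
x_true = 1.0 + 10.0 * tt + 0.5 * g_acc * tt**2
z_meas = x_true + np.random.normal(0.0, sigma_v_kf, size=tt.shape)

mu_kf, sigma_kf = 0.0, 2.0              # deliberately poor initial estimate
estimates = []
for n in range(1, len(tt)):
    u_n = 10.0 + g_acc * tt[n]          # control: current vertical velocity
    # Predict: a = 1, b = dt
    mu_pred = mu_kf + dt_kf * u_n
    sigma_pred = np.sqrt(sigma_kf**2 + sigma_w_kf**2)
    # Correct with the measurement, using the helper defined earlier
    mu_kf, sigma_kf = kf_1d_update(mu_pred, sigma_pred, z_meas[n], c_kf, sigma_v_kf)
    estimates.append(mu_kf)
```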
| fb5e7d4d179fc5b87927b881f687af86eeffc728 | 187,438 | ipynb | Jupyter Notebook | lecture/.ipynb_checkpoints/Lecture2-1DKalmanFilter-checkpoint.ipynb | lkskstlr/mm-kf | ["MIT"] |
# Automated finite difference operators from symbolic equations
This notebook is the first in a series of hands-on tutorial notebooks that are intended to give a brief practical overview of the [Devito](http://www.opesci.org/devito-public) finite difference framework. We will present an overview of the symbolic layers of Devito and solve a set of small computational science problems that covers a range of partial differential equations (PDEs).
But before we start, let's import Devito and a few SymPy utilities:
```python
from devito import *
from sympy import init_printing, symbols, solve
init_printing(use_latex=True)
%matplotlib inline
```
## From equation to stencil code in a few lines of Python
Today's objective is to demonstrate how Devito and its [SymPy](http://www.sympy.org/en/index.html)-powered symbolic API can be used to solve partial differential equations using the finite difference method with highly optimized stencils in a few lines of Python. We will show how to derive computational stencils directly from the equation in an automated fashion and how we can use Devito to generate and execute optimized C code at runtime to solve our problem.
## Defining the physical domain
Before we can start creating stencils we will need to give Devito a few details about the computational domain in which we want to solve our problem. For this purpose we create a `Grid` object that stores the physical `extent` (the size) of our domain and knows how many points we want to use in each dimension to discretize our data.
```python
grid = Grid(shape=(5, 6), extent=(1., 1.))
grid
```
Grid[extent=(1.0, 1.0), shape=(5, 6), dimensions=(x, y)]
## Functions and data
To express our equation in symbolic form and discretize it using finite differences, Devito provides a set of `Function` types. A `Function` object created from these does two things:
1. It behaves like a `sympy.Function` symbol
2. It manages data associated with the symbol
To get more information on how to create and use a `Function` object, or any type provided by Devito, we can use the magic function `?` to look at its documentation from within our notebook.
```python
?Function
```
Ok, let's create a function $f(x, y)$ and look at the data Devito has associated with it. Please note that it is important to use explicit keywords, such as `name` or `grid`, when creating Devito's `Function` objects.
```python
f = Function(name='f', grid=grid)
f
```
```python
f.data
```
Data([[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.]], dtype=float32)
By default Devito's `Function` objects will use the spatial dimensions `(x, y)` for 2D grids and `(x, y, z)` for 3D grids. To solve a PDE for several timesteps, we need a time dimension for our symbolic function. For this Devito provides a second function type, `TimeFunction`, that provides the correct dimension and some other intricacies needed to create a time stepping scheme.
```python
g = TimeFunction(name='g', grid=grid)
g
```
What does the shape of the associated data look like? Can you guess why?
```python
```
<button data-toggle="collapse" data-target="#sol1" class='btn btn-primary'>Solution</button>
<div id="sol1" class="collapse">
```
The shape is (2, 5, 6). Devito has allocated two buffers to represent g(t, x, y) and g(t + dt, x, y).
```
## Exercise 1: Derivatives of symbolic functions
The Devito functions we have created so far all act as `sympy.Function` objects, which means that we can form symbolic derivative expressions for them. Devito provides a set of shorthand expressions (implemented as Python properties) that allow us to generate finite differences in symbolic form. For example, the property `f.dx` denotes $\frac{\partial}{\partial x} f(x, y)$ - only that Devito has already discretized it with a finite difference expression. There are also a set of shorthand expressions for left (backward) and right (forward) derivatives:
| Derivative | Shorthand | Discretized |
| ---------- |:---------:|:-----------:|
| $\frac{\partial}{\partial x}f(x, y)$ (right) | `f.dxr` | $\frac{f(x+h_x,y)}{h_x} - \frac{f(x,y)}{h_x}$ |
| $\frac{\partial}{\partial x}f(x, y)$ (left) | `f.dxl` | $\frac{f(x,y)}{h_x} - \frac{f(x-h_x,y)}{h_x}$ |
A similar set of expressions exist for each spatial dimension defined on our grid, for example `f.dy` and `f.dyl`. For this exercise, please have a go at creating some derivatives and see if the resulting symbolic output matches what you expect.
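For instance, evaluating the $x$-derivative shorthands of the 2D function `f` defined above displays the corresponding symbolic expressions (try your own variations as well, including the time derivatives asked about next):
```python
# Symbolic first derivatives of f in x: centered, left (backward) and right (forward)
print(f.dx)
print(f.dxl)
print(f.dxr)
```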
Can you take similar derivatives in time using $g(t, x, y)$? Can you spot anything different? What does the shorthand `g.forward` denote?
```python
```
<button data-toggle="collapse" data-target="#sol2" class='btn btn-primary'>Solution</button>
<div id="sol2" class="collapse">
```
The first derivative in time is g.dt, and g.forward represents the forward stencil point g(t + dt, x, y).
```
## Exercise 2: A linear convection operator
**Note:** The following example is derived from [step 5](http://nbviewer.ipython.org/github/barbagroup/CFDPython/blob/master/lessons/07_Step_5.ipynb) of the tutorials in the excellent tutorial series [CFD Python: 12 steps to Navier-Stokes](http://lorenabarba.com/blog/cfd-python-12-steps-to-navier-stokes/).
In this simple example we will show how to derive a very simple convection operator from a high-level description of the governing equation. We will go through the process of deriving a discretized finite difference formulation of the state update for the field variable $u$, before creating a callable `Operator` object. Luckily, the automation provided by SymPy makes the derivation very nice and easy.
The governing equation we want to implement is the linear convection equation:
$$\frac{\partial u}{\partial t}+c\frac{\partial u}{\partial x} + c\frac{\partial u}{\partial y} = 0$$
Before we start, we need to define some parameters, such as the grid, the number of timesteps and the timestep size. We will also initialize our initial velocity field `u` with a smooth initial condition.
```python
from examples.cfd import init_smooth, plot_field
nt = 100 # Number of timesteps
dt = 0.2 * 2. / 80 # Timestep size (sigma=0.2)
c = 1 # Value for c
# Then we create a grid and our function
grid = Grid(shape=(81, 81), extent=(2., 2.))
u = TimeFunction(name='u', grid=grid)
# We can now set the initial condition and plot it
init_smooth(field=u.data[0], dx=grid.spacing[0], dy=grid.spacing[1])
init_smooth(field=u.data[1], dx=grid.spacing[0], dy=grid.spacing[1])
plot_field(u.data[0])
```
Next, we want to discretize our governing equation so that we can create a functional `Operator` from it. We can start by simply writing out the equation as a symbolic expression, while using the shorthand expressions for derivatives that the `Function` objects provide. This will create a symbolic object of the discretized equation.
Can you write out the governing equation using the Devito shorthand expressions? Remember, the governing equation is given as
$$\frac{\partial u}{\partial t}+c\frac{\partial u}{\partial x} + c\frac{\partial u}{\partial y} = 0$$
```python
```
<button data-toggle="collapse" data-target="#sol3" class='btn btn-primary'>Solution</button>
<div id="sol3" class="collapse">
```
eq = Eq(u.dt + c * u.dxl + c * u.dyl)
eq
```
As we can see, SymPy has kindly resolved our derivatives. Next, we need to rearrange our equation so that the term $u(t+dt, x, y)$ is on the left-hand side, since it represents the next point in time for our state variable $u$. We can use a utility called `solve` to rearrange our equation for us, so that it represents a valid state update for $u$. While sympy provides a version of `solve`, Devito provides its own customized version.
Can you use `solve` to create a valid stencil for our update to $u(t+dt, x, y)$? Hint: `solve` always returns a list of potential solutions, even if there is only one.
Can you then create a `devito.Eq` object to represent a valid state update for the variable $u$?
```python
```
<button data-toggle="collapse" data-target="#sol4" class='btn btn-primary'>Solution</button>
<div id="sol4" class="collapse">
```
stencil = solve(eq, u.forward)[0]
update = Eq(u.forward, stencil)
update
```
The right-hand side of the update equation should be a stencil that references the current point $u(t, x, y)$ and its left neighbours in $x$ and $y$.
Once we have created this update expression, we can create a Devito `Operator`. This `Operator` will basically behave like a Python function that we can call to apply the created stencil over our associated data, as long as we provide all necessary unknowns. In this case we need to provide the number of timesteps to compute via the keyword `time` and the timestep size to use via `dt` (both have been defined above).
```python
op = Operator(update)
op(time=nt+1, dt=dt)
plot_field(u.data[0])
```
Please note that the `Operator` is where all the Devito power is hidden, as it will automatically generate and compile optimized C stencil code. We can look at this code - although we don't need to execute it.
```python
print(op.ccode)
```
## Second derivatives and high-order stencils
For the above example all we had to do was combine some first derivatives. However, lots of common scientific problems require second derivatives, most notably any PDE involving diffusion. To generate second-order derivatives we need to give the `devito.Function` object another piece of information: the desired discretization of the stencils.
First, let's do a simple second derivative in $x$, for which we need to give $u$ at least a `space_order` of `2`. The shorthand for the second derivative is then `u.dx2`.
```python
u = TimeFunction(name='u', grid=grid, space_order=2)
u.dx2
```
We can arbitrarily drive the discretization order up if we require higher-order stencils.
```python
u = TimeFunction(name='u', grid=grid, space_order=4)
u.dx2
```
To implement diffusion or wave equations, we need to take the Laplacian $\nabla^2 u$, which is simply the second derivative in all space dimensions. For this, Devito also provides a shorthand expression, which means we do not have to hard-code the problem dimension (2D or 3D) in the code. To change the problem dimension we can create another `Grid` object and use this to re-define our `Function`s.
```python
grid_3d = Grid(shape=(5, 6, 7), extent=(1., 1., 1.))
u = TimeFunction(name='u', grid=grid_3d, space_order=2)
u
```
## Exercise 3: Higher order derivatives
We can re-define our function `u` with a different `space_order` argument to change the discretization order of the created stencil expression. Using the `grid_3d` object, can you derive an expression for the 12th-order Laplacian $\nabla^2 u$? What about the 16th-order stencil for the Laplacian?
Hint: Devito functions provides a `.laplace` shorthand expression that will work in 2D and 3D.
```python
```
<button data-toggle="collapse" data-target="#sol5" class='btn btn-primary'>Solution</button>
<div id="sol5" class="collapse">
```
u = TimeFunction(name='u', grid=grid_3d, space_order=12)
u.laplace
```
## Exercise 4: Making a wave
In the final exercise of the introduction we will implement a simple wave equation operator similar to the ones used in seismic imaging. For this we will implement the isotropic wave equation without boundary conditions. The equation defines the propagation of a wave in an isotropic medium and is defined as
$$m \frac{\partial^2 u}{\partial t^2} = \nabla^2 u$$
where $m$ is the square slowness of the wave, defined in terms of the wave speed $c$ as $m = 1 / c^2$. For the purpose of this exercise, we will ignore any source terms and instead use a "warmed-up" wavefield from file.
In the cell below we define the time parameters of our simulation, as well as the spatial dimensions and the shape of our computational grid with a `Grid` object. Using this grid object we can define two functions:
* The wavefield $u(t, x, y)$ which we initialise from the file `wavefield.npy`
* The square slowness $m(x, y)$ which, for now we will keep constant, for $c = 1.5km/s$.
```python
import numpy as np
from examples.seismic import plot_image
t0, tn, dt = 214., 400, 4.2 # Start, end and timestep size
nt = int(1 + (tn - t0) / dt) # Number of timesteps
# A 120x120 grid that defines our square domain
grid = Grid(shape=(120, 120), extent=(1800., 1800.))
# Load and plot the initial "warmed-up" wavefield
u = TimeFunction(name='u', grid=grid, space_order=2, time_order=2)
u.data[:] = np.load('wavefield.npy')
plot_image(u.data[0])
# Square slowness for a constant wave speed of 1.5 km/s
m = Function(name='m', grid=grid)
m.data[:] = 1. / 1.5**2
```
To remind ourselves, the governing equation we want to implement is
$$m \frac{\partial^2 u}{\partial t^2} = \nabla^2 u$$
Please have a go and try to implement the operator below. You will need to follow the same strategy to discretize the equation and create a symbolic stencil expression that updates $u(t + dt, x, y)$. Once we apply our `Operator` for `nt` timesteps we should see that the wave has expanded homogeneously.
```python
# Reset the wavefield, so that we can run the cell multiple times
u.data[:] = np.load('wavefield.npy')
# Please implement your wave equation operator here
```
```python
plot_image(u.data[0])
```
<button data-toggle="collapse" data-target="#sol6" class='btn btn-primary'>Solution</button>
<div id="sol6" class="collapse">
```python
eqn = Eq(m * u.dt2 - u.laplace)
stencil = solve(eqn, u.forward)[0]
update = Eq(u.forward, stencil)
op = Operator(update)
op(t=nt, dt=dt)
```
Now, let's see what happens if we change the square slowness field `m` by increasing the wave speed to $2.5km/s$ in the bottom half of the domain.
```python
m.data[:, 60:] = 1. / 2.5**2 # Set a new wave speed
plot_image(m.data)
u.data[:] = np.load('wavefield.npy') # Reset our wave field u
plot_image(u.data[0])
op(t=60, dt=dt)
plot_image(u.data[0])
```
<sup>This notebook is part of the tutorial "Optimised Symbolic Finite Difference Computation with Devito" presented at the University of Sao Paulo in April 2019.</sup>
```python
```
| cc6254a4c733236aab0a4fd10a595276a8482617 | 148,582 | ipynb | Jupyter Notebook | 02_introduction.ipynb | navjotk/devitoworkshop | ebb5dcd40ba32caf2be520bfc420251c32ad2079 | [
"MIT"
] | null | null | null | 02_introduction.ipynb | navjotk/devitoworkshop | ebb5dcd40ba32caf2be520bfc420251c32ad2079 | [
"MIT"
] | null | null | null | 02_introduction.ipynb | navjotk/devitoworkshop | ebb5dcd40ba32caf2be520bfc420251c32ad2079 | [
"MIT"
] | null | null | null | 230.717391 | 122,412 | 0.911268 | true | 3,726 | Qwen/Qwen-72B | 1. YES
2. YES | 0.874077 | 0.7773 | 0.67942 | __label__eng_Latn | 0.994514 | 0.416852 |
# The Vanishing Gradient Problem
In a multilayer perceptron, expressive power increases as we stack more layers. On the other hand, it is well known that deeper networks become harder to train.
```python
# Limit the number of CPUs used by TensorFlow (when running on a VM)
%env OMP_NUM_THREADS=1
%env TF_NUM_INTEROP_THREADS=1
%env TF_NUM_INTRAOP_THREADS=1
from tensorflow.config import threading
num_threads = 1
threading.set_inter_op_parallelism_threads(num_threads)
threading.set_intra_op_parallelism_threads(num_threads)
# Import libraries
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
```
```python
# This cell defines helper functions.
# Get the output of the i-th layer
def getActivation(model, i, x):
from tensorflow.keras import Model
activation = Model(model.input, model.layers[i].output)(x)
return activation.numpy()
# Get the gradients of the weights (wij) feeding into the iLayer-th layer
def getGradientParameter(model, i, x, t):
from tensorflow.nest import flatten
from tensorflow import constant
from tensorflow.python.eager import backprop
weights = model.layers[i].weights[0] # get only kernel (not bias)
with backprop.GradientTape() as tape:
pred = model(x)
loss = model.compiled_loss(constant(t), pred)
gradients = tape.gradient(loss, weights)
return gradients.numpy()
# Top left: plot the initial values of the weights (wij)
def plot_weights(model, x, t):
iLayers = [0, 3, 6, 10]
labels = [
' 0th layer',
' 3th layer',
' 6th layer',
'Last layer',
]
values = [model.weights[i * 2].numpy().flatten() for i in iLayers] # get only kernel (not bias)
plt.hist(values, bins=50, stacked=False, density=True, label=labels, histtype='step')
plt.xlabel('weight')
plt.ylabel('Probability density')
plt.legend(loc='upper left', fontsize='x-small')
plt.show()
# Bottom left: plot the output of each node (sigma(ai))
def plot_nodes(model, x, t):
iLayers = [0, 3, 6, 10]
labels = [
' 0th layer',
' 3th layer',
' 6th layer',
'Last layer',
]
values = [getActivation(model, i, x).flatten() for i in iLayers]
plt.hist(values, bins=50, stacked=False, density=True, label=labels, histtype='step')
plt.xlabel('activation')
plt.ylabel('Probability density')
plt.legend(loc='upper center', fontsize='x-small')
plt.show()
# Top right: plot the derivatives of the weights (dE/dwij)
def plot_gradients(model, x, t):
iLayers = [0, 3, 6, 10]
labels = [
' 0th layer',
' 3th layer',
' 6th layer',
'Last layer',
]
grads = [np.abs(getGradientParameter(model, i, x, t).flatten()) for i in iLayers]
grads = [np.log10(x[x > 0]) for x in grads]
plt.hist(grads, bins=50, stacked=False, density=True, label=labels, histtype='step')
plt.xlabel('log10(|gradient of weights|)')
plt.ylabel('Probability density')
plt.legend(loc='upper left', fontsize='x-small')
plt.show()
```
Using a deep multilayer perceptron with 10 hidden layers, let us examine the magnitudes of the weight parameters and of their gradients.
```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.initializers import RandomNormal, RandomUniform
from tensorflow.keras.layers import Dense
# Generate the dataset
nSamples = 1000
nFeatures = 50
x = np.random.randn(nSamples, nFeatures) # Generate 1000 samples, each with 50 input features drawn from a standard normal distribution
t = np.random.randint(2, size=nSamples).reshape([nSamples, 1]) # Target labels are generated randomly as 0 or 1
# Define the model
activation = 'sigmoid' # Activation function used by each hidden-layer node
initializer = RandomNormal(mean=0.0, stddev=1.0) # Initial values of the weights (wij); here drawn from a normal distribution
# initializer = RandomUniform(minval=-1, maxval=1) # Initial values of the weights (wij); alternatively drawn from a uniform distribution
# A multilayer perceptron with 10 hidden layers; every layer has 50 nodes.
model = Sequential([
Dense(50, activation=activation, kernel_initializer=initializer, input_dim=nFeatures),
Dense(50, activation=activation, kernel_initializer=initializer),
Dense(50, activation=activation, kernel_initializer=initializer),
Dense(50, activation=activation, kernel_initializer=initializer),
Dense(50, activation=activation, kernel_initializer=initializer),
Dense(50, activation=activation, kernel_initializer=initializer),
Dense(50, activation=activation, kernel_initializer=initializer),
Dense(50, activation=activation, kernel_initializer=initializer),
Dense(50, activation=activation, kernel_initializer=initializer),
Dense(50, activation=activation, kernel_initializer=initializer),
Dense(1, activation='sigmoid', kernel_initializer=initializer)
])
model.compile(loss='binary_crossentropy', optimizer='adam')
# Plot the initial values of the weights (wij)
plot_weights(model, x, t)
# Plot the output of each node (sigma(ai))
plot_nodes(model, x, t)
# Plot the derivatives of the weights (dE/dwij)
plot_gradients(model, x, t)
```
The top-left plot shows the initial values of the parameters ($w_{ij}$). As specified, each layer has been initialized according to a normal distribution.
The bottom-left plot shows the outputs of the activation function ($z_i$). When the parameters ($w_{ij}$) are initialized from a normal distribution, almost all of the sigmoid outputs end up very close to 0 or 1. Since the derivative of the sigmoid is $\sigma^{'}(x)=\sigma(x)\cdot(1-\sigma(x))$, the derivative is also very small whenever $\sigma(x)$ is close to 0 or 1.
The backpropagation equations are
$$
\begin{align}
\delta_{i}^{(k)} &= \sigma^{'}(a_i^{(k)}) \left( \sum_j w_{ij}^{(k+1)} \cdot \delta_{j}^{(k+1)} \right) \\
\frac{\partial E_n}{\partial w_{ij}^{(k)}} &= \delta_{j}^{(k)} \cdot z_{i}^{(k)}
\end{align}
$$
When $\sigma^{'}(a_i^{(k)})$ is small, the error shrinks as it is propagated from the later layers back towards the earlier layers.
The middle-top and middle-bottom plots show $\frac{\partial E_n}{\partial w_{ij}^{(k)}}$ and $\delta_{i}^{(k)}$ for each layer, respectively.
For the earlier layers (0th layer), the absolute values of the distribution are smaller than for the later layers.
The top-right and bottom-right plots show the variance of the distributions of $\frac{\partial E_n}{\partial w_{ij}^{(k)}}$ and $\delta_{i}^{(k)}$ in each layer.
Because the error becomes smaller as it reaches the earlier layers, those layers learn much more slowly than the later ones.
This is known as the vanishing gradient problem.
It is known that vanishing gradients can be resolved or mitigated by changing the parameter initialization or the activation function.
Try to address this problem, consulting the Keras documentation:
- [Initializers](https://keras.io/ja/initializers/)
- [Activations](https://keras.io/ja/activations/)
You can change the activation function and the parameter initialization method by modifying the "activation" and "initializer" variables in the code, respectively.
For example, to change the parameter initialization to a uniform distribution over (-0.01, +0.01), you can modify the code as follows.
```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.initializers import RandomNormal, RandomUniform
from tensorflow.keras.layers import Dense
# Generate the dataset
nSamples = 1000
nFeatures = 50
x = np.random.randn(nSamples, nFeatures) # Generate 1000 samples, each with 50 input features drawn from a standard normal distribution
t = np.random.randint(2, size=nSamples).reshape([nSamples, 1]) # Target labels are generated randomly as 0 or 1
# Define the model
activation = 'sigmoid' # Activation function used by each hidden-layer node
# initializer = RandomNormal(mean=0.0, stddev=1.0) # Initial values of the weights (wij); here they would be drawn from a normal distribution
initializer = RandomUniform(minval=-0.01, maxval=0.01) # Initial values of the weights (wij); here drawn from a uniform distribution
# A multilayer perceptron with 10 hidden layers; every layer has 50 nodes.
model = Sequential([
Dense(50, activation=activation, kernel_initializer=initializer, input_dim=nFeatures),
Dense(50, activation=activation, kernel_initializer=initializer),
Dense(50, activation=activation, kernel_initializer=initializer),
Dense(50, activation=activation, kernel_initializer=initializer),
Dense(50, activation=activation, kernel_initializer=initializer),
Dense(50, activation=activation, kernel_initializer=initializer),
Dense(50, activation=activation, kernel_initializer=initializer),
Dense(50, activation=activation, kernel_initializer=initializer),
Dense(50, activation=activation, kernel_initializer=initializer),
Dense(50, activation=activation, kernel_initializer=initializer),
Dense(1, activation='sigmoid', kernel_initializer=initializer)
])
model.compile(loss='binary_crossentropy', optimizer='adam')
# Plot the initial values of the weights (wij)
plot_weights(model, x, t)
# Plot the output of each node (sigma(ai))
plot_nodes(model, x, t)
# Plot the derivatives of the weights (dE/dwij)
plot_gradients(model, x, t)
```
In this example the activation outputs are concentrated around 0.5.
Since every node produces nearly the same output, adding more nodes brings little benefit, and the expressive power of the multilayer perceptron is not being fully exploited.
Moreover, the vanishing of the gradients is even more pronounced than in the previous example.
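As one possible remedy (a sketch added here, not part of the original exercise solution): ReLU activations combined with He-normal initialization usually keep activations and gradients in a healthier range. Both `'relu'` and `'he_normal'` are standard Keras options; the architecture below mirrors the one used above.
```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Sketch: same 10-hidden-layer architecture, but with ReLU activations and
# He-normal initialization, which typically mitigates the vanishing gradients seen above.
activation = 'relu'
initializer = 'he_normal'

model = Sequential(
    [Dense(50, activation=activation, kernel_initializer=initializer, input_dim=nFeatures)]
    + [Dense(50, activation=activation, kernel_initializer=initializer) for _ in range(9)]
    + [Dense(1, activation='sigmoid', kernel_initializer=initializer)]
)
model.compile(loss='binary_crossentropy', optimizer='adam')

# Re-use the helper plots defined earlier to inspect weights, activations and gradients
plot_weights(model, x, t)
plot_nodes(model, x, t)
plot_gradients(model, x, t)
```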
| 766bce19a3a9d271e87bf68d993aa1238159d07f | 11,341 | ipynb | Jupyter Notebook | basic/ActivationFunction.ipynb | saitoicepp/ppcc2021_DeepLearning | ba679567a9648584ada2394c4219f891d9af60bd | [
"Apache-2.0"
] | null | null | null | basic/ActivationFunction.ipynb | saitoicepp/ppcc2021_DeepLearning | ba679567a9648584ada2394c4219f891d9af60bd | [
"Apache-2.0"
] | null | null | null | basic/ActivationFunction.ipynb | saitoicepp/ppcc2021_DeepLearning | ba679567a9648584ada2394c4219f891d9af60bd | [
"Apache-2.0"
] | null | null | null | 34.788344 | 210 | 0.593951 | true | 3,089 | Qwen/Qwen-72B | 1. YES
2. YES | 0.815232 | 0.647798 | 0.528106 | __label__eng_Latn | 0.182291 | 0.065297 |
```python
# imports
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.datasets.samples_generator import make_regression
from scipy import stats
%matplotlib inline
```
# Basic Concepts
## What is "learning from data"?
> In general **Learning from Data** is a scientific discipline that is concerned with the design and development of algorithms that allow computers to infer (from data) a model that allows *compact representation* (unsupervised learning) and/or *good generalization* (supervised learning).
This is an important technology because it enables computational systems to adaptively improve their performance with experience accumulated from the observed data.
Most of these algorithms are based on the *iterative solution* of a mathematical problem that involves data and model. If an analytical solution to the problem existed, it should be the one adopted, but this is rarely the case.
So, the most common strategy for **learning from data** is based on solving a system of equations to find the set of model parameters that minimizes a mathematical objective. This is called **optimization**.
The most important technique for solving optimization problems is **gradient descent**.
## Preliminary: Nelder-Mead method for function minimization.
The simplest thing we can try in order to minimize a function $f(x)$ would be to sample two points relatively near each other, and just repeatedly take a step down away from the largest value. This simple algorithm has a severe limitation: it can't get closer to the true minimum than the step size.
The Nelder-Mead method dynamically adjusts the step size based on the loss at the new point. If the new point is better than any previously seen value, it **expands** the step size to accelerate towards the bottom. Likewise, if the new point is worse, it **contracts** the step size to converge around the minimum. The usual settings are to halve the step size when contracting and double it when expanding.
This method can be easily extended into higher dimensional examples, all that's required is taking one more point than there are dimensions. Then, the simplest approach is to replace the worst point with a point reflected through the centroid of the remaining n points. If this point is better than the best current point, then we can try stretching exponentially out along this line. On the other hand, if this new point isn't much better than the previous value, then we are stepping across a valley, so we shrink the step towards a better point.
> See "An Interactive Tutorial on Numerical Optimization": http://www.benfrederickson.com/numerical-optimization/
## Gradient descent (for *hackers*) for function minimization: 1-D
Let's suppose that we have a function $f: \mathbb{R} \rightarrow \mathbb{R}$. For example:
$$f(x) = x^2$$
Our objective is to find the argument $x$ that minimizes this function (for maximization, consider $-f(x)$). To this end, the critical concept is the **derivative**.
The derivative of $f$ of a variable $x$, $f'(x)$ or $\frac{\partial f}{\partial x}$, is a measure of the rate at which the value of the function changes with respect to the change of the variable.
It is defined as the following limit:
$$ f'(x) = \lim_{h \rightarrow 0} \frac{f(x + h) - f(x)}{h} $$
The derivative specifies how to scale a small change in the input in order to obtain the corresponding change in the output:
$$ f(x + h) \approx f(x) + h f'(x)$$
```python
# numerical derivative at a point x
def f(x):
return x**2
def fin_dif(x,
f,
h = 0.00001):
'''
This method returns the derivative of f at x
by using the finite difference method
'''
return (f(x+h) - f(x))/h
x = 2.0
print("{:2.4f}".format(fin_dif(x,f)))
```
4.0000
The limit as $h$ approaches zero, if it exists, should represent the **slope of the tangent line** to $(x, f(x))$.
For values that are not zero it is only an approximation.
> **NOTE**: It can be shown that the “centered difference formula" is better when computing numerical derivatives:
> $$ \lim_{h \rightarrow 0} \frac{f(x + h) - f(x - h)}{2h} $$
> The error in the "finite difference" approximation can be derived from Taylor's theorem and, assuming that $f$ is differentiable, is $O(h)$. In the case of “centered difference" the error is $O(h^2)$.
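A quick numerical check of that claim (added here for illustration): for $f(x)=\sin(x)$, whose exact derivative is $\cos(x)$, the centered formula is markedly more accurate at the same $h$.
```python
import math

def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h

def centered_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)

x, h = 1.0, 1e-3
exact = math.cos(x)
print(abs(forward_diff(math.sin, x, h) - exact))   # error ~ O(h)
print(abs(centered_diff(math.sin, x, h) - exact))  # error ~ O(h^2)
```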
The derivative tells us how to change $x$ in order to make a small improvement in $f$.
Then, we can follow these steps to decrease the value of the function:
+ Start from a random $x$ value.
+ Compute the derivative $f'(x) = \lim_{h \rightarrow 0} \frac{f(x + h) - f(x - h)}{2h}$.
+ Walk a small step (possibly weighted by the magnitude of the derivative) in the **opposite** direction of the derivative, because we know that $f(x - h \, \mbox{sign}(f'(x))) < f(x)$ for small enough $h$.
The search for the minimum ends when the derivative is zero, because we have no more information about which direction to move. $x$ is a critical or stationary point if $f'(x)=0$.
+ A **minimum (maximum)** is a critical point where $f(x)$ is lower (higher) than at all neighboring points.
+ There is a third class of critical points: **saddle points**.
If $f$ is a **convex function**, this should be the minimum (maximum) of our function. In other cases it could be a local minimum (maximum) or a saddle point.
```python
x = np.linspace(-15,15,100)
y = x**2
fig, ax = plt.subplots(1, 1)
fig.set_facecolor('#EAEAF2')
plt.plot(x,y, 'r-')
plt.plot([0],[0],'o')
plt.ylim([-10,250])
plt.gcf().set_size_inches((10,3))
plt.grid(True)
ax.text(0,
20,
'Minimum',
ha='center',
color=sns.xkcd_rgb['pale red'],
)
plt.show()
```
```python
x = np.linspace(-15,15,100)
y = -x**2
fig, ax = plt.subplots(1, 1)
fig.set_facecolor('#EAEAF2')
plt.plot(x,y, 'r-')
plt.plot([0],[0],'o')
plt.ylim([-250,10])
plt.gcf().set_size_inches((10,3))
plt.grid(True)
ax.text(0,
-30,
'Maximum',
ha='center',
color=sns.xkcd_rgb['pale red'],
)
plt.show()
```
```python
x = np.linspace(-15,15,100)
y = x**3
fig, ax = plt.subplots(1, 1)
fig.set_facecolor('#EAEAF2')
plt.plot(x,y, 'r-')
plt.plot([0],[0],'o')
plt.ylim([-3000,3000])
plt.gcf().set_size_inches((10,3))
plt.grid(True)
ax.text(0,
400,
'Saddle Point',
ha='center',
color=sns.xkcd_rgb['pale red'],
)
plt.show()
```
There are two problems with numerical derivatives:
+ They are approximate.
+ They are slow to evaluate (two function evaluations per point: $f(x + h)$ and $f(x - h)$).
Our knowledge from Calculus could help!
We know that we can get an **analytical expression** of the derivative for **some** functions.
For example, let's suppose we have a simple quadratic function, $f(x)=x^2−6x+5$, and we want to find the minimum of this function.
#### First approach
We can solve this analytically using Calculus, by finding the derivative $f'(x) = 2x-6$ and setting it to zero:
\begin{equation}
\begin{split}
2x-6 & = & 0 \\
2x & = & 6 \\
x & = & 3 \\
\end{split}
\end{equation}
```python
x = np.linspace(-10,20,100)
y = x**2 - 6*x + 5
fig, ax = plt.subplots(1, 1)
fig.set_facecolor('#EAEAF2')
plt.plot(x,y, 'r-')
plt.plot([3],[3**2 - 6*3 + 5],'o')
plt.ylim([-10,250])
plt.gcf().set_size_inches((10,3))
plt.grid(True)
ax.text(3,
10,
'Min: x = 3',
ha='center',
color=sns.xkcd_rgb['pale red'],
)
plt.show()
```
#### Second approach
To find the local minimum using **gradient descent**: you start at a random point, and move in the direction of steepest **descent**, as given by the derivative:
+ Start from a random $x$ value.
+ Compute the derivative $f'(x)$ analitically.
+ Walk a small step in the **opposite** direction of the derivative.
In this example, let's suppose we start at $x=15$. The derivative at this point is $2×15−6=24$.
Because we're using gradient descent, we need to subtract the gradient from our $x$-coordinate: $f(x - f'(x))$. However, notice that $15−24$ gives us $−9$, clearly overshooting our target of $3$.
```python
x = np.linspace(-10,20,100)
y = x**2 - 6*x + 5
start = 15
fig, ax = plt.subplots(1, 1)
fig.set_facecolor('#EAEAF2')
plt.plot(x,y, 'r-')
plt.plot([start],[start**2 - 6*start + 5],'o')
ax.text(start,
start**2 - 6*start + 35,
'Start',
ha='center',
color=sns.xkcd_rgb['blue'],
)
d = 2 * start - 6
end = start - d
plt.plot([end],[end**2 - 6*end + 5],'o')
plt.ylim([-10,250])
plt.gcf().set_size_inches((10,3))
plt.grid(True)
ax.text(end,
start**2 - 6*start + 35,
'End',
ha='center',
color=sns.xkcd_rgb['green'],
)
plt.show()
```
To fix this, we multiply the gradient by a step size. This step size (often called **alpha**) has to be chosen carefully, as a value too small will result in a long computation time, while a value too large will not give you the right result (by overshooting) or even fail to converge.
In this example, we'll set the step size to 0.01, which means we'll subtract $24×0.01$ from $15$, which is $14.76$.
This is now our new temporary local minimum: We continue this method until we either don't see a change after we subtracted the derivative step size (or until we've completed a pre-set number of iterations).
```python
old_min = 0
temp_min = 15
step_size = 0.01
precision = 0.0001
def f(x):
return x**2 - 6*x + 5
def f_derivative(x):
import math
return 2*x -6
mins = []
cost = []
while abs(temp_min - old_min) > precision:
old_min = temp_min
gradient = f_derivative(old_min)
move = gradient * step_size
temp_min = old_min - move
cost.append((3-temp_min)**2)
mins.append(temp_min)
# rounding the result to 2 digits because of the step size
print("Local minimum occurs at {:3.6f}.".format(round(temp_min,2)))
```
Local minimum occurs at 3.000000.
An important feature of gradient descent is that **there should be a visible improvement over time**: In this example, we simply plotted the squared distance from the local minima calculated by gradient descent and the true local minimum, ``cost``, against the iteration during which it was calculated. As we can see, the distance gets smaller over time, but barely changes in later iterations.
```python
x = np.linspace(-10,20,100)
y = x**2 - 6*x + 5
x, y = (zip(*enumerate(cost)))
fig, ax = plt.subplots(1, 1)
fig.set_facecolor('#EAEAF2')
plt.plot(x,y, 'r-', alpha=0.7)
plt.ylim([-10,150])
plt.gcf().set_size_inches((10,3))
plt.grid(True)
plt.show()
```
```python
x = np.linspace(-10,20,100)
y = x**2 - 6*x + 5
fig, ax = plt.subplots(1, 1)
fig.set_facecolor('#EAEAF2')
plt.plot(x,y, 'r-')
plt.ylim([-10,250])
plt.gcf().set_size_inches((10,3))
plt.grid(True)
plt.plot(mins,cost,'o', alpha=0.3)
ax.text(start,
start**2 - 6*start + 25,
'Start',
ha='center',
color=sns.xkcd_rgb['blue'],
)
ax.text(mins[-1],
cost[-1]+20,
'End (%s steps)' % len(mins),
ha='center',
color=sns.xkcd_rgb['blue'],
)
plt.show()
```
## From derivatives to gradient: $n$-dimensional function minimization.
Let's consider an $n$-dimensional function $f: \Re^n \rightarrow \Re$. For example:
$$f(\mathbf{x}) = \sum_{n} x_n^2$$
Our objective is to find the argument $\mathbf{x}$ that minimizes this function.
The **gradient** of $f$ is the vector whose components are the $n$ partial derivatives of $f$. It is thus a vector-valued function.
The gradient points in the direction of the greatest rate of **increase** of the function.
$$\nabla {f} = (\frac{\partial f}{\partial x_1}, \dots, \frac{\partial f}{\partial x_n})$$
```python
def f(x):
return sum(x_i**2 for x_i in x)
def fin_dif_partial_centered(x,
f,
i,
h=1e-6):
'''
This method returns the partial derivative of the i-th
component of f at x
by using the centered finite difference method
'''
w1 = [x_j + (h if j==i else 0) for j, x_j in enumerate(x)]
w2 = [x_j - (h if j==i else 0) for j, x_j in enumerate(x)]
return (f(w1) - f(w2))/(2*h)
def fin_dif_partial_old(x,
f,
i,
h=1e-6):
'''
This method returns the partial derivative of the i-th
component of f at x
by using the (non-centered) finite difference method
'''
w1 = [x_j + (h if j==i else 0) for j, x_j in enumerate(x)]
return (f(w1) - f(x))/h
def gradient_centered(x,
f,
h=1e-6):
'''
This method returns the gradient vector of f at x
by using the centered finite difference method
'''
return[round(fin_dif_partial_centered(x,f,i,h), 10) for i,_ in enumerate(x)]
def gradient_old(x,
f,
h=1e-6):
'''
    This method returns the gradient vector of f at x
    by using the (non-centered) finite difference method
'''
return[round(fin_dif_partial_old(x,f,i,h), 10) for i,_ in enumerate(x)]
x = [1.0,1.0,1.0]
print('{:.6f}'.format(f(x)), gradient_centered(x,f))
print('{:.6f}'.format(f(x)), gradient_old(x,f))
```
3.000000 [2.0000000001, 2.0000000001, 2.0000000001]
3.000000 [2.0000009999, 2.0000009999, 2.0000009999]
The function we have evaluated, $f({\mathbf x}) = x_1^2+x_2^2+x_3^2$, is $3$ at $(1,1,1)$ and the gradient vector at this point is $(2,2,2)$.
Then, we can follow these steps to maximize (or minimize) the function:
+ Start from a random $\mathbf{x}$ vector.
+ Compute the gradient vector.
+ Walk a small step in the opposite direction of the gradient vector.
> It is important to be aware that gradient computation is very expensive: if $\mathbf{x}$ has dimension $n$, we have to evaluate $f$ at $2*n$ points.
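Where the gradient is known in closed form we can avoid those $2n$ evaluations entirely. A small sketch (added here for illustration) comparing the analytic gradient of this particular $f$ with the finite-difference estimate defined above:
```python
# For f(x) = sum_i x_i**2 the gradient is known analytically: grad f = 2x,
# which needs no extra function evaluations at all.
def gradient_analytic(x):
    return [2.0 * x_i for x_i in x]

x = [1.0, 1.0, 1.0]
print(gradient_analytic(x))        # [2.0, 2.0, 2.0]
print(gradient_centered(x, f))     # should agree up to ~1e-6
```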
### How to use the gradient.
$f(x) = \sum_i x_i^2$ takes its minimum value when all the $x_i$ are 0.
Let's check it for $n=3$:
```python
def euc_dist(v1,v2):
import numpy as np
import math
v = np.array(v1)-np.array(v2)
return math.sqrt(sum(v_i ** 2 for v_i in v))
```
Let's start by choosing a random vector and then walking a step in the opposite direction of the gradient vector. We will stop when the difference between the new solution and the old solution is less than a tolerance value.
```python
# choosing a random vector
import random
import numpy as np
x = [random.randint(-10,10) for i in range(3)]
x
```
[-9, -2, -9]
```python
def step(x,
grad,
alpha):
'''
This function makes a step in the opposite direction of
the gradient vector
in order to compute a new value for the target function.
'''
return [x_i - alpha * grad_i for x_i, grad_i in zip(x,grad)]
tol = 1e-15
alpha = 0.01
while True:
grad = gradient_centered(x,f)
next_x = step(x,grad,alpha)
if euc_dist(next_x,x) < tol:
break
x = next_x
print([round(i,10) for i in x])
```
[-0.0, -0.0, -0.0]
### Alpha
The step size, **alpha**, is a slippery concept: if it is too small we will converge to the solution very slowly; if it is too large we can diverge from the solution.
There are several policies to follow when selecting the step size:
+ Constant size steps. In this case, the size step determines the precision of the solution.
+ Decreasing step sizes.
+ At each step, select the optimal step.
The last policy is good, but too expensive. In this case we would consider a fixed set of values:
```python
step_size = [100, 10, 1, 0.1, 0.01, 0.001, 0.0001, 0.00001]
```
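For the second policy in the list above (decreasing step sizes), a common choice is an inverse-time decay schedule; a minimal sketch (the decay constant below is illustrative):
```python
def decayed_alpha(alpha_0, k, decay=0.01):
    '''Inverse-time decay: the step size shrinks as the iteration counter k grows.'''
    return alpha_0 / (1.0 + decay * k)

print([round(decayed_alpha(0.1, k), 4) for k in (0, 10, 100, 1000)])
```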
## Learning from data
In general, we have:
+ A dataset $(\mathbf{x},y)$ of $n$ examples.
+ A target function $f_\mathbf{w}$, that we want to minimize, representing the **discrepancy between our data and the model** we want to fit. The model is represented by a set of parameters $\mathbf{w}$.
+ The gradient of the target function, $g_f$.
In the most common case $f$ represents the errors from a data representation model $M$. To fit the model is to find the optimal parameters $\mathbf{w}$ that minimize the following expression:
$$ f_\mathbf{w} = \frac{1}{n} \sum_{i} (y_i - M(\mathbf{x}_i,\mathbf{w}))^2 $$
For example, $(\mathbf{x},y)$ can represent:
+ $\mathbf{x}$: the behavior of a "Candy Crush" player; $y$: monthly payments.
+ $\mathbf{x}$: sensor data about your car engine; $y$: probability of engine error.
+ $\mathbf{x}$: finantial data of a bank customer; $y$: customer rating.
> If $y$ is a real value, it is called a *regression* problem.
> If $y$ is binary/categorical, it is called a *classification* problem.
Let's suppose that our model is a one-dimensional linear model $M(\mathbf{x},\mathbf{w}) = w \cdot x $.
### Batch gradient descent
We can implement **gradient descent** in the following way (*batch gradient descent*):
```python
import numpy as np
import random
# f = 2x
x = np.arange(10)
y = np.array([2*i for i in x])
# f_target = 1/n Sum (y - wx)**2
def target_f(x,y,w):
return np.sum((y - x * w)**2.0) / x.size
# gradient_f = 1/n Sum (2wx**2 - 2xy)
def gradient_f(x,y,w):
    return np.sum(2*w*(x**2) - 2*x*y) / x.size
def step(w,grad,alpha):
return w - alpha * grad
def BGD_multi_step(target_f,
gradient_f,
x,
y,
toler = 1e-6):
'''
    Batch gradient descent using a multi-step approach
'''
alphas = [100, 10, 1, 0.1, 0.001, 0.00001]
w = random.random()
val = target_f(x,y,w)
i = 0
while True:
i += 1
gradient = gradient_f(x,y,w)
next_ws = [step(w, gradient, alpha) for alpha in alphas]
next_vals = [target_f(x,y,w) for w in next_ws]
min_val = min(next_vals)
next_w = next_ws[next_vals.index(min_val)]
next_val = target_f(x,y,next_w)
if (abs(val - next_val) < toler):
return w
else:
w, val = next_w, next_val
```
```python
print('{:.6f}'.format(BGD_multi_step(target_f, gradient_f, x, y)))
```
1.999618
```python
%%timeit
BGD_multi_step(target_f, gradient_f, x, y)
```
4.88 ms ± 353 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
```python
def BGD(target_f,
gradient_f,
x,
y,
toler = 1e-6,
alpha=0.01):
'''
    Batch gradient descent using a given step size
'''
w = random.random()
val = target_f(x,y,w)
i = 0
while True:
i += 1
gradient = gradient_f(x,y,w)
next_w = step(w, gradient, alpha)
next_val = target_f(x,y,next_w)
if (abs(val - next_val) < toler):
return w
else:
w, val = next_w, next_val
```
```python
print('{:.6f}'.format(BGD(target_f, gradient_f, x, y)))
```
2.000065
```python
%%timeit
BGD(target_f, gradient_f, x, y)
```
127 µs ± 9.75 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
### Stochastic Gradient Descent
The last function evaluates the whole dataset $(\mathbf{x}_i,y_i)$ at every step.
If the dataset is large, this strategy is too costly. In this case we will use a strategy called **SGD** (*Stochastic Gradient Descent*).
When learning from data, the cost function is additive: it is computed by adding sample reconstruction errors.
Then, we can estimate the gradient (and move towards the minimum) by using only **one data sample** (or a small subset of samples).
Thus, we will find the minimum by iterating this gradient estimation over the dataset.
A full iteration over the dataset is called an **epoch**. During an epoch, the data must be used in a random order.
If we apply this method we have some theoretical guarantees to find a good minimum:
+ SGD essentially uses an inexact gradient at each iteration. Since there is no free lunch, what is the cost of using an approximate gradient? The answer is that the convergence rate is slower than that of the full gradient descent algorithm.
+ The convergence of SGD has been analyzed using the theories of convex minimization and of stochastic approximation: it converges almost surely to a global minimum when the objective function is convex or pseudoconvex, and otherwise converges almost surely to a local minimum.
```python
import numpy as np
x = np.arange(10)
y = np.array([2*i for i in x])
data = list(zip(x,y))
for (x_i,y_i) in data:
print('{:3d} {:3d}'.format(x_i,y_i))
print()
def in_random_order(data):
'''
Random data generator
'''
import random
indexes = [i for i,_ in enumerate(data)]
random.shuffle(indexes)
for i in indexes:
yield data[i]
for (x_i,y_i) in in_random_order(data):
print('{:3d} {:3d}'.format(x_i,y_i))
```
0 0
1 2
2 4
3 6
4 8
5 10
6 12
7 14
8 16
9 18
```python
import numpy as np
import random
def SGD(target_f,
gradient_f,
x,
y,
toler = 1e-6,
epochs=100,
alpha_0=0.01):
'''
    Stochastic gradient descent with automatic step adaptation (the
    step is reduced to 95% of its value on iterations with no
    improvement)
'''
data = list(zip(x,y))
w = random.random()
alpha = alpha_0
min_w, min_val = float('inf'), float('inf')
epoch = 0
iteration_no_increase = 0
while epoch < epochs and iteration_no_increase < 100:
val = target_f(x, y, w)
if min_val - val > toler:
min_w, min_val = w, val
alpha = alpha_0
iteration_no_increase = 0
else:
iteration_no_increase += 1
alpha *= 0.95
for x_i, y_i in in_random_order(data):
gradient_i = gradient_f(x_i, y_i, w)
w = w - (alpha * gradient_i)
epoch += 1
return min_w
```
```python
print('w: {:.6f}'.format(SGD(target_f, gradient_f, x, y)))
```
w: 2.000000
## Exercise: Stochastic Gradient Descent and Linear Regression
The linear regression model assumes a linear relationship between data:
$$ y_i = w_1 x_i + w_0 $$
Let's generate a more realistic dataset (with noise), where $w_1 = 2$ and $w_0 = 0$.
```python
%reset
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.datasets.samples_generator import make_regression
from scipy import stats
import random
%matplotlib inline
```
Once deleted, variables cannot be recovered. Proceed (y/[n])? y
```python
# x: input data
# y: noisy output data
x = np.random.uniform(0,1,20)
# f = 2x + 0
def f(x): return 2*x + 0
noise_variance =0.1
noise = np.random.randn(x.shape[0])*noise_variance
y = f(x) + noise
fig, ax = plt.subplots(1, 1)
fig.set_facecolor('#EAEAF2')
plt.xlabel('$x$', fontsize=15)
plt.ylabel('$f(x)$', fontsize=15)
plt.plot(x, y, 'o', label='y')
plt.plot([0, 1], [f(0), f(1)], 'b-', label='f(x)')
plt.ylim([0,2])
plt.gcf().set_size_inches((10,3))
plt.grid(True)
plt.show()
```
Complete the following code in order to:
+ Compute the value of $w$ by using a estimator based on minimizing the squared error.
+ Get from SGD function a list, `target_value`, representing the value of the target function at each iteration.
```python
# Write your target function as f_target 1/n Sum (y - wx)**2
def target_f(x,y,w):
# your code here
return
# Write your gradient function
def gradient_f(x,y,w):
# your code here
return
def in_random_order(data):
'''
Random data generator
'''
import random
indexes = [i for i,_ in enumerate(data)]
random.shuffle(indexes)
for i in indexes:
yield data[i]
# Modify the SGD function to return a 'target_value' vector
def SGD(target_f,
gradient_f,
x,
y,
toler = 1e-6,
epochs=100,
alpha_0=0.01):
# Insert your code among the following lines
    data = list(zip(x,y))
w = random.random()
alpha = alpha_0
min_w, min_val = float('inf'), float('inf')
iteration_no_increase = 0
epoch = 0
while epoch < epochs and iteration_no_increase < 100:
val = target_f(x, y, w)
if min_val - val > toler:
min_w, min_val = w, val
alpha = alpha_0
iteration_no_increase = 0
else:
iteration_no_increase += 1
alpha *= 0.95
for x_i, y_i in in_random_order(data):
gradient_i = gradient_f(x_i, y_i, w)
w = w - (alpha * gradient_i)
epoch += 1
return min_w
```
```python
# Print the value of the solution
w, target_value = SGD(target_f, gradient_f, x, y)
print('w: {:.6f}'.format(w))
```
```python
# Visualize the solution regression line
fig, ax = plt.subplots(1, 1)
fig.set_facecolor('#EAEAF2')
plt.plot(x, y, 'o', label='t')
plt.plot([0, 1], [f(0), f(1)], 'b-', label='f(x)', alpha=0.5)
plt.plot([0, 1], [0*w, 1*w], 'r-', label='fitted line', alpha=0.5, linestyle='--')
plt.xlabel('input x')
plt.ylabel('target t')
plt.title('input vs. target')
plt.ylim([0,2])
plt.gcf().set_size_inches((10,3))
plt.grid(True)
plt.show()
```
```python
# Visualize the evolution of the target function value during iterations.
fig, ax = plt.subplots(1, 1)
fig.set_facecolor('#EAEAF2')
plt.plot(np.arange(target_value.size), target_value, 'o', alpha = 0.2)
plt.xlabel('Iteration')
plt.ylabel('Cost')
plt.grid()
plt.gcf().set_size_inches((10,3))
plt.grid(True)
plt.show()
```
## Mini-batch Gradient Descent
In code, general batch gradient descent looks something like this:
```python
nb_epochs = 100
for i in range(nb_epochs):
grad = evaluate_gradient(target_f, data, w)
w = w - learning_rate * grad
```
For a pre-defined number of epochs, we first compute the gradient vector of the target function for the whole dataset w.r.t. our parameter vector.
**Stochastic gradient descent** (SGD) in contrast performs a parameter update for each training example and label:
```python
nb_epochs = 100
for i in range(nb_epochs):
np.random.shuffle(data)
for sample in data:
grad = evaluate_gradient(target_f, sample, w)
w = w - learning_rate * grad
```
Mini-batch gradient descent finally takes the best of both worlds and performs an update for every mini-batch of $n$ training examples:
```python
nb_epochs = 100
for i in range(nb_epochs):
np.random.shuffle(data)
for batch in get_batches(data, batch_size=50):
grad = evaluate_gradient(target_f, batch, w)
w = w - learning_rate * grad
```
Minibatch SGD has the advantage that it works with a slightly less noisy estimate of the gradient. However, as the minibatch size increases, the number of updates per unit of computation decreases (eventually it becomes very inefficient, like batch gradient descent).
There is an optimal trade-off (in terms of computational efficiency) that may vary depending on the data distribution and the particulars of the class of function considered, as well as how computations are implemented.
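The pseudocode above relies on a `get_batches` helper that is not defined in this notebook; a minimal sketch (assuming `data` supports `len` and slicing, e.g. a NumPy array or a list) could be:
```python
def get_batches(data, batch_size=50):
    '''Yield successive mini-batches of at most batch_size samples.'''
    for start in range(0, len(data), batch_size):
        yield data[start:start + batch_size]
```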
## Loss Functions
Loss functions $L(y, f(\mathbf{x})) = \frac{1}{n} \sum_i \ell(y_i, f(\mathbf{x_i}))$ represent the price paid for inaccuracy of predictions in classification/regression problems.
In classification this function is often the **zero-one loss**, that is, $ \ell(y_i, f(\mathbf{x_i}))$ is zero when $y_i = f(\mathbf{x}_i)$ and one otherwise.
This function is discontinuous with flat regions and is thus extremely hard to optimize using gradient-based methods. For this reason it is usual to consider a proxy to the loss called a *surrogate loss function*. For computational reasons this is usually a convex function. Here are some examples:
### Square / Euclidean Loss
In regression problems, the most common loss function is the square loss function:
$$ L(y, f(\mathbf{x})) = \frac{1}{n} \sum_i (y_i - f(\mathbf{x}_i))^2 $$
The square loss function can be re-written and utilized for classification:
$$ L(y, f(\mathbf{x})) = \frac{1}{n} \sum_i (1 - y_i f(\mathbf{x}_i))^2 $$
### Hinge / Margin Loss (i.e. Support Vector Machines)
The hinge loss function is defined as:
$$ L(y, f(\mathbf{x})) = \frac{1}{n} \sum_i \mbox{max}(0, 1 - y_i f(\mathbf{x}_i)) $$
The hinge loss provides a relatively tight, convex upper bound on the 0–1 Loss.
### Logistic Loss (Logistic Regression)
This function displays a similar convergence rate to the hinge loss function, and since it is continuous, simple gradient descent methods can be utilized.
$$ L(y, f(\mathbf{x})) = \frac{1}{n} \sum_i log(1 + exp(-y_i f(\mathbf{x}_i))) $$
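To make the formulas concrete, a small numerical sketch (the labels and scores below are made up for illustration) computing the three surrogate losses for labels $y_i \in \{-1, +1\}$:
```python
import numpy as np

y = np.array([1, -1, 1, -1])            # true labels in {-1, +1}
fx = np.array([0.8, -2.0, -0.3, 0.4])   # hypothetical classifier scores f(x_i)

margin = y * fx
square_loss   = np.mean((1 - margin)**2)
hinge_loss    = np.mean(np.maximum(0, 1 - margin))
logistic_loss = np.mean(np.log(1 + np.exp(-margin)))
print(square_loss, hinge_loss, logistic_loss)
```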
### Sigmoid Cross-Entropy Loss (Softmax classifier)
Cross-Entropy is a loss function that is widely used for training **multiclass problems**. We'll focus on models that assume that classes are mutually exclusive.
In this case, our labels have this form $\mathbf{y}_i =(1.0,0.0,0.0)$. If our model predicts a different distribution, say $ f(\mathbf{x}_i)=(0.4,0.1,0.5)$, then we'd like to nudge the parameters so that $f(\mathbf{x}_i)$ gets closer to $\mathbf{y}_i$.
C.Shannon showed that if you want to send a series of messages composed of symbols from an alphabet with distribution $y$ ($y_j$ is the probability of the $j$-th symbol), then to use the smallest number of bits on average, you should assign $\log(\frac{1}{y_j})$ bits to the $j$-th symbol.
The optimal number of bits is known as **entropy**:
$$ H(\mathbf{y}) = \sum_j y_j \log\frac{1}{y_j} = - \sum_j y_j \log y_j$$
**Cross entropy** is the number of bits we'll need if we encode symbols by using a wrong distribution $\hat y$:
$$ H(y, \hat y) = - \sum_j y_j \log \hat y_j $$
In our case, the real distribution is $\mathbf{y}$ and the "wrong" one is $f(\mathbf{x}_i)$. So, minimizing **cross entropy** with respect to our model parameters will result in the model that best approximates our labels, when considered as a probability distribution.
Cross entropy is used in combination with the **Softmax** classifier. In order to classify $\mathbf{x}_i$ we could take the index corresponding to the max value of $f(\mathbf{x}_i)$, but Softmax gives a slightly more intuitive output (normalized class probabilities) and also has a probabilistic interpretation:
$$ P(\mathbf{y}_i = j \mid \mathbf{x_i}) = \frac{e^{f_j(\mathbf{x_i})}}{\sum_k e^{f_k(\mathbf{x_i})}} $$
The per-sample loss is then the cross entropy $-\log P(\mathbf{y}_i = j \mid \mathbf{x_i})$ evaluated at the true class $j$,
where $f_k$ is a linear classifier.
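A minimal NumPy sketch of the Softmax probabilities and the corresponding cross-entropy loss (the scores and labels below are illustrative):
```python
import numpy as np

def softmax(scores):
    # subtract the row-wise max for numerical stability
    exp = np.exp(scores - np.max(scores, axis=1, keepdims=True))
    return exp / np.sum(exp, axis=1, keepdims=True)

def cross_entropy(probs, labels):
    # labels are integer class indices; pick the probability of the true class
    n = probs.shape[0]
    return -np.mean(np.log(probs[np.arange(n), labels]))

scores = np.array([[2.0, 1.0, 0.1],
                   [0.5, 2.5, 0.3]])   # f_k(x_i) for two samples, three classes
labels = np.array([0, 1])
print(cross_entropy(softmax(scores), labels))
```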
## Advanced gradient descent
### Momentum
SGD has trouble navigating ravines, i.e. areas where the surface curves much more steeply in one dimension than in another, which are common around local optima. In these scenarios, SGD oscillates across the slopes of the ravine while only making hesitant progress along the bottom towards the local optimum.
Momentum is a method that helps accelerate SGD in the relevant direction and dampens oscillations. It does this by adding a fraction of the update vector of the past time step to the current update vector:
$$ v_t = m v_{t-1} + \alpha \nabla_w f $$
$$ w = w - v_t $$
The momentum $m$ is commonly set to $0.9$.
### Nesterov
However, a ball that rolls down a hill, blindly following the slope, is highly unsatisfactory. We'd like to have a smarter ball, a ball that has a notion of where it is going so that it knows to slow down before the hill slopes up again.
Nesterov accelerated gradient (NAG) is a way to give our momentum term this kind of prescience. We know that we will use our momentum term $m v_{t-1}$ to move the parameters $w$. Computing
$w - m v_{t-1}$ thus gives us an approximation of the next position of the parameters (the gradient is missing for the full update), a rough idea where our parameters are going to be. We can now effectively look ahead by calculating the gradient not w.r.t. to our current parameters $w$ but w.r.t. the approximate future position of our parameters:
$$ w_{new} = w - m v_{t-1} $$
$$ v_t = m v_{t-1} + \alpha \nabla_{w_{new}} f $$
$$ w = w - v_t $$
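A minimal sketch of a single NAG update implementing the three equations above (`grad_f` stands for any function returning $\nabla_w f$; the names here are illustrative):
```python
def nag_step(w, v, grad_f, alpha=0.01, m=0.9):
    '''One Nesterov accelerated gradient update (sketch).'''
    w_lookahead = w - m * v                   # peek ahead using the momentum term
    v = m * v + alpha * grad_f(w_lookahead)   # gradient at the look-ahead position
    w = w - v
    return w, v
```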
### Adagrad
All previous approaches manipulated the learning rate globally and equally for all parameters. Tuning the learning rates is an expensive process, so much work has gone into devising methods that can adaptively tune the learning rates, and even do so per parameter.
Adagrad is an algorithm for gradient-based optimization that does just this: It adapts the learning rate to the parameters, performing larger updates for infrequent and smaller updates for frequent parameters.
$$ c = c + (\nabla_w f)^2 $$
$$ w = w - \frac{\alpha}{\sqrt{c} + \epsilon} \nabla_w f $$
where $\epsilon$ is a small constant that avoids division by zero.
### RMSProp
RMSProp update adjusts the Adagrad method in a very simple way in an attempt to reduce its aggressive, monotonically decreasing learning rate. In particular, it uses a moving average of squared gradients instead, giving:
$$ c = \beta c + (1 - \beta)(\nabla_w f)^2 $$
$$ w = w - \frac{\alpha}{\sqrt{c} + \epsilon} \nabla_w f $$
where $\beta$ is a decay rate that controls the size of the moving average.
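A minimal sketch of both adaptive updates (the small constant `eps`, added as is standard to avoid division by zero, and all names are illustrative):
```python
import numpy as np

def adagrad_step(w, c, grad, alpha=0.01, eps=1e-8):
    c = c + grad**2                      # accumulate squared gradients
    w = w - alpha * grad / (np.sqrt(c) + eps)
    return w, c

def rmsprop_step(w, c, grad, alpha=0.01, beta=0.9, eps=1e-8):
    c = beta * c + (1 - beta) * grad**2  # moving average of squared gradients
    w = w - alpha * grad / (np.sqrt(c) + eps)
    return w, c
```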
(Image credit: Alec Radford)
(Image credit: Alec Radford)
```python
%reset
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.datasets.samples_generator import make_regression
from scipy import stats
import random
%matplotlib inline
```
Once deleted, variables cannot be recovered. Proceed (y/[n])? y
```python
# the function that I'm going to plot
def f(x,y):
return x**2 + 5*y**2
x = np.arange(-3.0,3.0,0.1)
y = np.arange(-3.0,3.0,0.1)
X,Y = np.meshgrid(x, y, indexing='ij') # grid of point
Z = f(X, Y) # evaluation of the function on the grid
plt.pcolor(X, Y, Z, cmap=plt.cm.gist_earth)
plt.axis([x.min(), x.max(), y.min(), y.max()])
plt.gca().set_aspect('equal', adjustable='box')
plt.gcf().set_size_inches((6,6))
plt.show()
```
```python
def target_f(x):
return x[0]**2.0 + 5*x[1]**2.0
def part_f(x,
f,
i,
h=1e-6):
w1 = [x_j + (h if j==i else 0) for j, x_j in enumerate(x)]
w2 = [x_j - (h if j==i else 0) for j, x_j in enumerate(x)]
return (f(w1) - f(w2))/(2*h)
def gradient_f(x,
f,
h=1e-6):
return np.array([round(part_f(x,f,i,h), 10) for i,_ in enumerate(x)])
```
```python
def SGD(target_f,
gradient_f,
x,
alpha_0=0.01,
toler = 0.000001):
alpha = alpha_0
min_val = float('inf')
steps = 0
iteration_no_increase = 0
trace = []
while iteration_no_increase < 100:
val = target_f(x)
if min_val - val > toler:
min_val = val
alpha = alpha_0
iteration_no_increase = 0
else:
alpha *= 0.95
iteration_no_increase += 1
trace.append(x)
gradient_i = gradient_f(x, target_f)
x = x - (alpha * gradient_i)
steps += 1
return x, val, steps, trace
x = np.array([2,-2])
x, val, steps, trace = SGD(target_f, gradient_f, x)
print(x)
print('Val: {:.6f}, steps: {:.0f}'.format(val, steps))
```
[ 8.73043864e-04 -4.00026038e-12]
Val: 0.000001, steps: 475
```python
def SGD_M(target_f,
gradient_f,
x,
alpha_0=0.01,
toler = 0.000001,
m = 0.9):
alpha = alpha_0
min_val = float('inf')
steps = 0
iteration_no_increase = 0
v = 0.0
trace = []
while iteration_no_increase < 100:
val = target_f(x)
if min_val - val > toler:
min_val = val
alpha = alpha_0
iteration_no_increase = 0
else:
alpha *= 0.95
iteration_no_increase += 1
trace.append(x)
gradient_i = gradient_f(x, target_f)
v = m * v + (alpha * gradient_i)
x = x - v
steps += 1
return x, val, steps, trace
x = np.array([2,-2])
x, val, steps, trace2 = SGD_M(target_f, gradient_f, x)
print('\n',x)
print('Val: {:.6f}, steps: {:.0f}'.format(val, steps))
```
[1.90552394e-05 1.27080227e-05]
Val: 0.000000, steps: 241
```python
x2 = np.array(range(len(trace)))
x3 = np.array(range(len(trace2)))
plt.xlim([0,len(trace)])
plt.gcf().set_size_inches((10,3))
plt.plot(x3, trace2)
plt.plot(x2, trace, '-')
```
| 39d12e326d441974e36f2109cc78999e1dad7021 | 250,043 | ipynb | Jupyter Notebook | 1. Basic Concepts.ipynb | existeundelta/DeepLearningfromScratch2018 | cebe67ab44279597027972a71bcf29ada532cd85 | [
"MIT"
] | 15 | 2018-06-14T09:45:15.000Z | 2020-04-29T21:32:03.000Z | 1. Basic Concepts.ipynb | existeundelta/DeepLearningfromScratch2018 | cebe67ab44279597027972a71bcf29ada532cd85 | [
"MIT"
] | null | null | null | 1. Basic Concepts.ipynb | existeundelta/DeepLearningfromScratch2018 | cebe67ab44279597027972a71bcf29ada532cd85 | [
"MIT"
] | 14 | 2018-06-14T08:06:34.000Z | 2022-03-24T08:29:13.000Z | 137.009863 | 20,604 | 0.856997 | true | 10,213 | Qwen/Qwen-72B | 1. YES
2. YES | 0.847968 | 0.79053 | 0.670344 | __label__eng_Latn | 0.984772 | 0.395765 |
# 02 - Reverse Time Migration
This notebook is the second in a series of tutorials highlighting various aspects of seismic inversion based on Devito operators. In this second example we aim to highlight the core ideas behind seismic inversion, where we create an image of the subsurface from field-recorded data. This tutorial follows on from the modelling tutorial and will reuse the modelling operator and velocity model.
## Imaging requirement
Seismic imaging relies on two known parameters:
- **Field data** - also called **recorded data**. This is a shot record corresponding to the true velocity model. In practice this data is acquired as described in the first tutorial. In order to simplify this tutorial we will generate synthetic field data by modelling it with the **true velocity model**.
- **Background velocity model**. This is a velocity model that has been obtained by processing and inverting the field data. We will look at these methods in the following tutorial, as they rely on the method we are describing here. This velocity model is usually a **smooth version** of the true velocity model.
## Imaging computational setup
In this tutorial, we will introduce the back-propagation operator. This operator simulates the adjoint wave-equation, that is a wave-equation solved in a reversed time order. This time reversal led to the naming of the method we present here, called Reverse Time Migration. The notion of adjoint in exploration geophysics is fundamental as most of the wave-equation based imaging and inversion methods rely on adjoint based optimization methods.
## Notes on the operators
As we have already described the creation of a forward modelling operator, we will use a thin wrapper function instead. This wrapper is provided by a utility class called `AcousticWaveSolver`, which provides all the necessary operators for seismic modeling, imaging and inversion. The `AcousticWaveSolver` provides a more concise API for common wave propagation operators and caches the Devito `Operator` objects to avoid unnecessary recompilation. However, any newly introduced operators will be fully described and only used from the wrapper in the next tutorials.
As before we initialize printing and import some utilities. We also raise the Devito log level to avoid excessive logging for repeated operator invocations.
```python
import numpy as np
%matplotlib inline
from devito import configuration
configuration['log_level'] = 'WARNING'
```
## Computational considerations
Seismic inversion algorithms are generally very computationally demanding and require a large amount of memory to store the forward wavefield. In order to keep this tutorial as light-weight as possible we are using a very simple
velocity model that requires low temporal and spatial resolution. For a more realistic model, a second set of preset parameters for a reduced version of the 2D Marmousi data set [1] is provided below in comments. This can be run to create some more realistic subsurface images. However, this second preset is more computationally demanding and requires a slightly more powerful workstation.
```python
# Configure model presets
from examples.seismic import demo_model
# Enable model presets here:
preset = 'twolayer-isotropic' # A simple but cheap model (recommended)
# preset = 'marmousi2d-isotropic' # A larger, more realistic model
# Standard preset with a simple two-layer model
if preset == 'twolayer-isotropic':
def create_model(grid=None):
return demo_model('twolayer-isotropic', origin=(0., 0.), shape=(101, 101),
spacing=(10., 10.), nbpml=20, grid=grid)
filter_sigma = (1, 1)
nshots = 21
nreceivers = 101
t0 = 0.
tn = 1000. # Simulation last 1 second (1000 ms)
f0 = 0.010 # Source peak frequency is 10Hz (0.010 kHz)
# A more computationally demanding preset based on the 2D Marmousi model
if preset == 'marmousi2d-isotropic':
def create_model(grid=None):
return demo_model('marmousi2d-isotropic', data_path='../../../../opesci-data/',
grid=grid)
filter_sigma = (6, 6)
    nshots = 301 # Need good coverage in shots, one every two grid points
    nreceivers = 601 # One receiver every grid point
t0 = 0.
tn = 3500. # Simulation last 3.5 second (3500 ms)
f0 = 0.025 # Source peak frequency is 25Hz (0.025 kHz)
```
# True and smooth velocity models
First, we create the model data for the "true" model from a given demonstration preset. This model represents the subsurface topology for the purposes of this example and we will later use it to generate our synthetic data readings. We also generate a second model and apply a smoothing filter to it, which represents our initial model for the imaging algorithm. The perturbation between these two models can be thought of as the image we are trying to recover.
```python
#NBVAL_IGNORE_OUTPUT
from examples.seismic import plot_velocity, plot_perturbation
from scipy import ndimage
# Create true model from a preset
model = create_model()
# Create initial model and smooth the boundaries
model0 = create_model(grid=model.grid)
model0.vp = ndimage.gaussian_filter(model0.vp, sigma=filter_sigma, order=0)
# Plot the true and initial model and the perturbation between them
plot_velocity(model)
plot_velocity(model0)
plot_perturbation(model0, model)
```
## Acquisition geometry
Next we define the positioning and the wave signal of our source and the location of our receivers. To generate the wavelet for our source we require the discretized values of time that we are going to use to model a single "shot",
which again depends on the grid spacing used in our model. For consistency, this initial setup will look exactly as in the previous modelling tutorial, although we will vary the position of our source later on during the actual imaging algorithm.
```python
#NBVAL_IGNORE_OUTPUT
# Define acquisition geometry: source
from examples.seismic import TimeAxis, RickerSource
# Define time discretization according to grid spacing
dt = model.critical_dt # Time step from model grid spacing
time_range = TimeAxis(start=t0, stop=tn, step=dt)
src = RickerSource(name='src', grid=model.grid, f0=f0, time_range=time_range)
# First, position source centrally in all dimensions, then set depth
src.coordinates.data[0, :] = np.array(model.domain_size) * .5
src.coordinates.data[0, -1] = 20. # Depth is 20m
# We can plot the time signature to see the wavelet
src.show()
```
```python
# Define acquisition geometry: receivers
from examples.seismic import Receiver
# Initialize receivers for synthetic and imaging data
rec = Receiver(name='rec', grid=model.grid, npoint=nreceivers, time_range=time_range)
rec.coordinates.data[:, 0] = np.linspace(0, model.domain_size[0], num=nreceivers)
rec.coordinates.data[:, 1] = 30.
```
# True and smooth data
We can now generate the shot record (receiver readings) corresponding to our true and initial models. The difference between these two records will be the basis of the imaging procedure.
For this purpose we will use the same forward modelling operator that was introduced in the previous tutorial, provided by the `WaveSolver` utility class. This object instantiates a set of pre-defined operators according to an initial definition of the acquisition geometry, consisting of source and receiver symbols. The solver object caches the individual operators and provides a slightly more high-level API that allows us to invoke the modelling operators from the initial tutorial in a single line. In the following cells we use this to generate shot data by only specifying the respective model symbol `m` to use, and the solver will create and return a new `Receiver` object that represents the readings at the previously defined receiver coordinates.
```python
# Compute synthetic data with forward operator
from examples.seismic.acoustic import AcousticWaveSolver
solver = AcousticWaveSolver(model, src, rec, space_order=4)
true_d , _, _ = solver.forward(src=src, m=model.m)
```
```python
# Compute initial data with forward operator
smooth_d, _, _ = solver.forward(src=src, m=model0.m)
```
```python
#NBVAL_IGNORE_OUTPUT
# Plot shot record for true and smooth velocity model and the difference
from examples.seismic import plot_shotrecord
plot_shotrecord(true_d.data, model, t0, tn)
plot_shotrecord(smooth_d.data, model, t0, tn)
plot_shotrecord(smooth_d.data - true_d.data, model, t0, tn)
```
# Imaging with back-propagation
As we explained in the introduction of this tutorial, this method is based on back-propagation.
## Adjoint wave equation
If we go back to the modelling part, we can rewrite the simulation as a linear system solve:
\begin{equation}
\mathbf{A}(\mathbf{m}) \mathbf{u} = \mathbf{q}
\end{equation}
where $\mathbf{m}$ is the discretized square slowness, $\mathbf{q}$ is the discretized source and $\mathbf{A}(\mathbf{m})$ is the discretized wave-equation. The matrix representation of the discretized wave-equation is a lower triangular matrix that can be solved with forward substitution. Writing out the forward substitution pointwise leads to the time-stepping stencil.
On a small problem one could form the matrix explicitly and transpose it to obtain the adjoint discrete wave-equation:
\begin{equation}
\mathbf{A}(\mathbf{m})^T \mathbf{v} = \delta \mathbf{d}
\end{equation}
where $\mathbf{v}$ is the discrete **adjoint wavefield** and $\delta \mathbf{d}$ is the data residual defined as the difference between the field/observed data and the synthetic data $\mathbf{d}_s = \mathbf{P}_r \mathbf{u}$. In our case we derive the discrete adjoint wave-equation from the discrete forward wave-equation to get its stencil.
## Imaging
Wave-equation based imaging relies on one simple concept:
- If the background velocity model is kinematically correct, the forward wavefield $\mathbf{u}$ and the adjoint wavefield $\mathbf{v}$ meet at the reflector positions at zero time offset.
The sum over time of the zero time-offset correlation of these two fields then creates an image of the subsurface. Mathematically this leads to the simple imaging condition:
\begin{equation}
\text{Image} = \sum_{t=1}^{n_t} \mathbf{u}[t] \mathbf{v}[t]
\end{equation}
In the following tutorials we will describe a more advanced imaging condition that produces sharper and more accurate results.
## Operator
We will now define the imaging operator that computes the adjoint wavefield $\mathbf{v}$ and correlates it with the forward wavefield $\mathbf{u}$. This operator essentially consists of three components:
* Stencil update of the adjoint wavefield `v`
* Injection of the data residual at the adjoint source (forward receiver) location
* Correlation of `u` and `v` to compute the image contribution at each timestep
```python
# Define gradient operator for imaging
from devito import TimeFunction, Operator
from examples.seismic import PointSource
from sympy import solve, Eq
def ImagingOperator(model, image):
# Define the wavefield with the size of the model and the time dimension
v = TimeFunction(name='v', grid=model.grid, time_order=2, space_order=4)
u = TimeFunction(name='u', grid=model.grid, time_order=2, space_order=4,
save=time_range.num)
# Define the wave equation, but with a negated damping term
eqn = model.m * v.dt2 - v.laplace - model.damp * v.dt
    # Use SymPy to rearrange the equation into a stencil expression
stencil = Eq(v.backward, solve(eqn, v.backward)[0])
# Define residual injection at the location of the forward receivers
dt = model.critical_dt
residual = PointSource(name='residual', grid=model.grid,
time_range=time_range,
coordinates=rec.coordinates.data)
res_term = residual.inject(field=v, expr=residual * dt**2 / model.m,
offset=model.nbpml)
# Correlate u and v for the current time step and add it to the image
image_update = Eq(image, image - u * v)
return Operator([stencil] + res_term + [image_update],
subs=model.spacing_map)
```
## Implementation of the imaging loop
As we just explained, the forward wave-equation is solved forward in time while the adjoint wave-equation is solved in a reversed time order. Therefore, correlating these two fields over time requires storing one of them. The computational procedure for imaging is as follows:
- Simulate the forward wave-equation with the background velocity model to get the synthetic data and save the full wavefield $\mathbf{u}$
- Compute the data residual
- Back-propagate the data residual and compute on the fly the image contribution at each time step.
This procedure is applied to multiple source positions (shots) and summed to obtain the full image of the subsurface. We can first visualize the varying locations of the sources that we will use.
```python
#NBVAL_IGNORE_OUTPUT
# Prepare the varying source locations
source_locations = np.empty((nshots, 2), dtype=np.float32)
source_locations[:, 0] = np.linspace(0., 1000, num=nshots)
source_locations[:, 1] = 30.
plot_velocity(model, source=source_locations)
```
```python
# Run imaging loop over shots
from devito import Function, clear_cache
# Create image symbol and instantiate the previously defined imaging operator
image = Function(name='image', grid=model.grid)
op_imaging = ImagingOperator(model, image)
# Create a wavefield for saving to avoid memory overload
u0 = TimeFunction(name='u', grid=model0.grid, time_order=2, space_order=4,
save=time_range.num)
for i in range(nshots):
# Important: We force previous wavefields to be destroyed,
# so that we may reuse the memory.
clear_cache()
print('Imaging source %d out of %d' % (i+1, nshots))
# Update source location
src.coordinates.data[0, :] = source_locations[i, :]
# Generate synthetic data from true model
true_d, _, _ = solver.forward(src=src, m=model.m)
# Compute smooth data and full forward wavefield u0
u0.data.fill(0.)
smooth_d, _, _ = solver.forward(src=src, m=model0.m, save=True, u=u0)
# Compute gradient from the data residual
v = TimeFunction(name='v', grid=model.grid, time_order=2, space_order=4)
residual = smooth_d.data - true_d.data
op_imaging(u=u0, v=v, m=model0.m, dt=model0.critical_dt,
residual=residual)
```
Imaging source 1 out of 21
Imaging source 2 out of 21
Imaging source 3 out of 21
Imaging source 4 out of 21
Imaging source 5 out of 21
Imaging source 6 out of 21
Imaging source 7 out of 21
Imaging source 8 out of 21
Imaging source 9 out of 21
Imaging source 10 out of 21
Imaging source 11 out of 21
Imaging source 12 out of 21
Imaging source 13 out of 21
Imaging source 14 out of 21
Imaging source 15 out of 21
Imaging source 16 out of 21
Imaging source 17 out of 21
Imaging source 18 out of 21
Imaging source 19 out of 21
Imaging source 20 out of 21
Imaging source 21 out of 21
```python
#NBVAL_IGNORE_OUTPUT
from examples.seismic import plot_image
# Plot the inverted image
plot_image(np.diff(image.data, axis=1))
```
```python
assert np.isclose(np.linalg.norm(image.data), 1e6, rtol=1e1)
```
And we have an image of the subsurface with a strong reflector at the original location.
## References
[1] _Versteeg, R.J. & Grau, G. (eds.) (1991): The Marmousi experience. Proc. EAGE workshop on Practical Aspects of Seismic Data Inversion (Copenhagen, 1990), Eur. Assoc. Explor. Geophysicists, Zeist._
| fd30e3e1c99b02495135adf452a2056dba8bd70b | 262,703 | ipynb | Jupyter Notebook | examples/seismic/tutorials/02_rtm.ipynb | RajatRasal/devito | 162abb6b318e77eaa4e8f719047327c45782056f | [
"MIT"
] | null | null | null | examples/seismic/tutorials/02_rtm.ipynb | RajatRasal/devito | 162abb6b318e77eaa4e8f719047327c45782056f | [
"MIT"
] | null | null | null | examples/seismic/tutorials/02_rtm.ipynb | RajatRasal/devito | 162abb6b318e77eaa4e8f719047327c45782056f | [
"MIT"
] | null | null | null | 453.71848 | 41,612 | 0.94222 | true | 3,700 | Qwen/Qwen-72B | 1. YES
2. YES
| 0.715424 | 0.731059 | 0.523017 | __label__eng_Latn | 0.992269 | 0.053473 |
```python
import sympy
from sympy import symbols, sqrt
sympy.init_printing()
```
```python
rho, kappa, r = symbols('rho kappa r')
```
```python
z = rho ** 2/(r*(1 + sqrt(1 - kappa*(rho/r) ** 2)))
z
```
```python
z.diff(rho).expand().simplify().subs(kappa, 1)
```
```python
zs = z.radsimp()
zs
```
```python
zs.diff(rho)
```
| 1710a7852b5c447f7227d4b1f80b11a18572c480 | 22,833 | ipynb | Jupyter Notebook | notes/zemax_conic.ipynb | draustin/otk | c6e91423ec79b85b380ee9385f6d27c91f92503d | [
"MIT"
] | 7 | 2020-05-17T14:26:42.000Z | 2022-02-14T04:52:54.000Z | notes/zemax_conic.ipynb | uamhforever/otk | c6e91423ec79b85b380ee9385f6d27c91f92503d | [
"MIT"
] | 17 | 2020-04-10T22:50:00.000Z | 2020-06-18T04:54:19.000Z | notes/zemax_conic.ipynb | uamhforever/otk | c6e91423ec79b85b380ee9385f6d27c91f92503d | [
"MIT"
] | 1 | 2022-02-14T04:52:45.000Z | 2022-02-14T04:52:45.000Z | 177 | 9,596 | 0.851837 | true | 120 | Qwen/Qwen-72B | 1. YES
2. YES | 0.946597 | 0.787931 | 0.745853 | __label__ces_Latn | 0.137954 | 0.571199 |
```python
%matplotlib inline
from sympy import *
init_printing(use_unicode=True)
```
```python
r, u, v, c, r_c, u_c, v_c, E, p, r_p, u_p, v_p, e, a, b, q, b_0, b_1, b_2, b_3, q_0, q_1, q_2, q_3, q_4, q_5 = symbols('r u v c r_c u_c v_c E p r_p u_p v_p e a b q b_0 b_1 b_2 b_3 q_0 q_1 q_2 q_3 q_4 q_5')
```
```python
gamma = symbols('gamma',positive=True)
```
####$f_{2}(c,p) = \dfrac{1}{2}r_{c}c^{2}+\dfrac{1}{4}u_{c}c^{4}+\dfrac{1}{6}v_{c}c^{6}+\dfrac{1}{2}r_{p}p^{2}+\dfrac{1}{4}u_{p}p^{4}+\dfrac{1}{6}v_{p}p^{6}-\gamma cp-ec^{2}p^{2}-Ep$
```python
f = ((1/2)*r_c*c**2+(1/4)*u_c*c**4+(1/6)*v_c*c**6+(1/2)*p**2
+(1/4)*p**4+(1/6)*v_p*p**6-E*p-gamma*c*p-c**2*p**2/2)
nsimplify(f)
```
###Solve for $E(c,p)$
###$\dfrac{\partial f_{2}(c,p)}{\partial p} = 0 = $
```python
E_c = solve(f.diff(p),E)[0]
E_c
```
###Solve for $p_{min}(c)$
###$\dfrac{\partial f_{2}(c,p)}{\partial c} = 0 = $
```python
p_min = nsimplify(solve(f.diff(c),p)[0])
p_min
```
###Plug $p_{min}(c)$ into $E(p,c)$:
```python
E_c = nsimplify(E_c.subs(p,p_min))
E_c
```
###Series expand $E(p_{min}(c),c)$ in powers of $c$ to order 7:
```python
series(E_c,c,n=7)
```
```python
Etrun = a*c+b*c**3+q*c**5
```
```python
solve(Etrun.diff(c),c)
```
```python
c_L = solve(Etrun.diff(c),c)[1]
c_U = solve(Etrun.diff(c),c)[3]
c_L,c_U
```
```python
E_L = simplify(Etrun.subs(c,c_U))
E_U = simplify(Etrun.subs(c,c_L))
```
```python
E_L,E_U
```
```python
rc = (gamma**2+a*gamma)
B = (-r_c/gamma+u_c/gamma+(r_c/gamma)**3-r_c**2/gamma**3)
Q = (-u_c/gamma+v_c/gamma+3*u_c*r_c**2/gamma**3+r_c**2/gamma**3
-2*r_c*u_c/gamma**3+v_p*(r_c/gamma)**5-3*r_c**4/gamma**5+2*r_c**3/gamma**5)
```
```python
collect(expand(B.subs(r_c,rc)),a)
```
```python
collect(expand(Q.subs(r_c,rc)),a)
```
```python
b0 = gamma**3-2*gamma+u_c/gamma
b1 = 3*gamma**2-3
b2 = 3*gamma-1/gamma
b3 = 1
q0 = gamma**5*v_p-3*gamma**3+3*gamma*u_c+3*gamma-3*u_c/gamma+v_c/gamma
q1 = 5*gamma**4*v_p-12*gamma**2+6*u_c+8-2*u_c/gamma**2
q2 = 10*v_p*gamma**3-18*gamma+(7+3*u_c)/gamma
q3 = 10*v_p*gamma**2-12+2/gamma**2
q4 = 5*v_p*gamma-3/gamma
q5 = v_p
```
```python
uc = solve(b0-b_0,u_c)[0]
up = solve(b3-b_3,u_p)[0]
vc = solve(q0-q_0,v_c)[0]
vp = solve(q5-q_5,v_p)[0]
```
```python
replacements = [(v_p,vp),(u_p,up),(u_c,uc)]
```
```python
vc = simplify(vc.subs([i for i in replacements]))
```
```python
expand(vc)
```
```python
uc = simplify(uc.subs([i for i in replacements]))
```
```python
expand(uc)
```
```python
up
```
```python
vp
```
###$b_0$
```python
b0
```
###$b_1$
```python
b1 = simplify(b1.subs(u_p,up))
```
```python
b1
```
###$b_2$
```python
b2 = simplify(b2.subs(u_p,up))
```
```python
b2
```
###$b_3$
```python
b3
```
###$B(a)$ i.t.o. $b_3$
```python
B_a = b_0+b1*a+b2*a**2+b_3*a**3
```
```python
B_a
```
###$q_0$
```python
q0
```
###$q_1$
```python
q1 = simplify(q1.subs([i for i in replacements]))
```
```python
q1
```
###$q_2$
```python
q2 = q2.subs([i for i in replacements])
```
```python
expand(q2)
```
###$q_3$
```python
q3 = simplify(q3.subs([i for i in replacements]))
```
```python
q3
```
###$q_4$
```python
q4 = simplify(q4.subs([i for i in replacements]))
```
```python
q4
```
###$Q(a)$ i.t.o. $b_0$, $b_3$, $q_5$
```python
Q_a = q_0+q1*a+q2*a**2+q3*a**3+q4*a**4+q_5*a**5
```
```python
collect(expand(Q_a),a)
```
###$R(a)$
```python
series(B_a**2-(20*a*Q_a/9),a,n=7)
```
```python
```
| 76d1a1e5b60d1b74c50fca68874b19aadb67c1d3 | 130,573 | ipynb | Jupyter Notebook | Smectic/GenPolSympy2.ipynb | brettavedisian/Liquid-Crystals | c7c6eaec594e0de8966408264ca7ee06c2fdb5d3 | [
"MIT"
] | null | null | null | Smectic/GenPolSympy2.ipynb | brettavedisian/Liquid-Crystals | c7c6eaec594e0de8966408264ca7ee06c2fdb5d3 | [
"MIT"
] | null | null | null | Smectic/GenPolSympy2.ipynb | brettavedisian/Liquid-Crystals | c7c6eaec594e0de8966408264ca7ee06c2fdb5d3 | [
"MIT"
] | null | null | null | 101.930523 | 12,946 | 0.776301 | true | 1,575 | Qwen/Qwen-72B | 1. YES
2. YES | 0.849971 | 0.705785 | 0.599897 | __label__yue_Hant | 0.225982 | 0.232092 |
##### Neuromorphic engineering I
## Photoreceptors; Photoreceptor Circuits
Created Oct 2020-Dec 2020 by Tobi Delbruck & Rui Graca
#### Group number: 4.5
#### Team members
- First name: `Jan` Last name: `Hohenheim`
- First name: `Maxim` Last name: `Gärtner`
#### TA: ...
**Objectives of this lab**
You will compare the 2 circuits sketched below.
The left one is the _source-follower_ (**SF**) photoreceptor
and the right one is the unity-gain active _transimpedance_ feedback (**TI**) photoreceptor.
### Exercise type and dates
COVID made it too difficult to set up a remote arrangement for testing the classchip photoreceptor circuits.
Instead, we will do some circuit analysis and numerical evaluation to
understand the concepts of feedback, loop gain, and transimpedance speedup.
The exercise spans 2 weeks with 2h per week.
There will be two groups Thursday afternoon and Monday morning.
Exercise dates: Monday group: Nov 30, Dec 7, Thursday group: Dec 2, Dec 9,
Monday group: Dec 6 and Dec 13.
Due date: Dec 20 2021
### Running the notebook
You will run this exercise on your own computer using any available Jupyter server. If you have one
already, you can use it. But you don't need to:
https://www.dataschool.io/cloud-services-for-jupyter-notebook/
provides a list of free servers on the cloud that you can use after registration.
### Requirements: libraries needed
python 3.7+
You might need to install libraries. You can install them from the terminal into your Python environment with
``` bash
pip install jupyter matplotlib numpy scipy engineering_notation
```
Remember, when using Python, conda is your friend. Make a unique conda environment
for each project to save yourself a lot of trouble with conflicting libraries. Here we will use only
very standard libraries that are provided by all the Jupyter servers.
```python
import matplotlib.pyplot as plt # plotting
import numpy as np # for math
from scipy.integrate import solve_ivp # - for timestepping ODEs
import os
from engineering_notation import (
EngNumber as eng,
) # useful library to e.g. format eng(1e-3) as 1m
from scipy.stats import linregress
### ------------------------------------------------------------------------------------------------------------------------------------------------------------
```
```python
import matplotlib as mpl
mpl.rcParams["figure.facecolor"] = "white"
mpl.rcParams["axes.facecolor"] = "white"
mpl.rcParams["savefig.facecolor"] = "white"
```
### Define useful constants
```python
I_0 = (
1e-14 # FET off current - you measured it TODO check correct for classchip process
)
U_T = 25e-3 # you better know this
kappa = 0.8 # choose a reasonable value
vdd = 1.8 # power supply voltage
q = 1.6e-19 # charge of electron
V_e = 10 # Early voltage for the TI photoreceptor amplifier input FET that we will use later. 1V is very small and would be less than what you would get from minimum length FET
```
### Define useful functions
Let's define a function for subthreshold current that
includes optional Early voltage for finite drain
conductance:
```python
def id_sub(V_g, V_s=0, V_d=1.8, U_T=U_T, I_0=I_0, kappa=kappa, V_e=V_e):
"""Computes the drain current from gate, source and drain voltage.
At most one of V_g, V_s, V_d can be a vector in which case I_d is a vector
:param V_g: gate voltage
:param V_s: source voltage, by default 0
:param V_d: drain voltage, by default 1V
:param: U_T: thermal voltage
:param I_0: the off current
:param V_e: the Early voltage; drain conductance is Idsat/V_e
:returns: the drain current in amps
"""
Vds = V_d - V_s
Id_sat = I_0 * np.exp(((kappa * V_g) - V_s) / U_T)
I_d = Id_sat * (1 - np.exp(-Vds / U_T))
if V_e != np.infty:
I_d = I_d * (1 + Vds / V_e)
return I_d
```
Check that the subthreshold equation makes sense. Start by plotting the drain current versus gate voltage, and
check that the slope is 1 e-fold per U_T/kappa.
```python
import matplotlib.pyplot as plt # plotting
import numpy as np # for math
vg = np.linspace(0, 1, 100)
# drain current vs gate voltage (transconductance)
idvsvg = id_sub(vg)
plt.figure("idsat")
plt.semilogy(vg, idvsvg)
plt.xlabel("V_g (V)")
plt.ylabel("I_ds (A)")
plt.title("Id vs Vg")
plt.grid(True)
reg = linregress(np.log(idvsvg), vg)
efold_v_meas = reg[0]
efold_v_theory = U_T / kappa
print(
f"Transconductance: Measured efold current gate voltage={eng(efold_v_meas)}V, predicted from U_T/kappa={eng(efold_v_theory)}V"
)
if (
np.abs((efold_v_meas - efold_v_theory) / (0.5 * (efold_v_meas + efold_v_theory)))
> 0.01
):
raise ValueError("Something wrong with subthreshold equations")
else:
print("Transconductance OK")
```
Now plot the drain current vs drain voltage and check that the actual drain conductance matches the expected value.
```python
vd = np.linspace(0, 1, 100)
V_g = 0.5
idsat = I_0 * np.exp(kappa * V_g / U_T)
# drain current vs drain voltage (drain conductance)
idvsvd = id_sub(V_g=V_g, V_d=vd, V_e=V_e)
plt.figure("idsat2")
plt.plot(vd, idvsvd)
plt.xlabel("V_d (V)")
plt.ylabel("I_ds (A)")
plt.grid(True)
plt.title("I_ds vs V_d (V_g={:.1f} V_e={:.1f})".format(V_g, V_e))
r = [i for i in range(len(idvsvd)) if idvsvd[i] >= idsat]
plt.plot(vd[r], idvsvd[r], ".r")
reg = linregress(vd[r], idvsvd[r])
gout_meas = reg[0]
gout_pred = I_0 * np.exp(kappa * V_g / U_T) / V_e
print(
f"Output conductance: Measured g_out={eng(gout_meas)}, predicted from g_out=Id_sat/Ve={eng(gout_pred)}"
)
if np.abs((gout_meas - gout_pred) / (0.5 * (gout_meas + gout_pred))) > 0.05:
raise ValueError("Something wrong with subthreshold equations")
else:
print("Drain conductance OK within 5%")
```
This makes sense. Now we have an equation we can use in the ODEs for the photoreceptors.
### Estimating actual photocurrent
Now we need to compute reasonable values for photocurrent and dark current.
Let's take the interesting situation of operation in dark
conditions at 1 lux scene illumination, which is about 10 times moonlight.
The light falling onto the chip will be reduced by the optics
according to the equation below.
We will also assume a photodiode area of 10um^2
which is a reasonably-large photodiode, and we will
assume a not-so-great junction leakage "dark current" of 1nA/cm^2.
```python
scene_flux_lux = 10 # 1 lux is about ten times moonlight
photodiode_area_um2 = 10 # photodiode area m^2
# optics reduces light intensity by square of aperture ratio
# we will assume a cheap f/3 lens with ratio focal length to aperture of 3
def optics_reduction(flux):
f_number = 3
return flux / (4 * f_number**2)
avg_reflectance = 0.18 # kodak's estimate of average scene reflectance
chip_flux_lux = optics_reduction(scene_flux_lux)
photons_per_um2_per_lux = 1e4 # you get about this many photons per lux falling on chip with "visible" sunlight spectrum
photocurrent_e_per_sec = chip_flux_lux * (photons_per_um2_per_lux) * photodiode_area_um2
dark_current_amps_per_um2 = (1e-9 / 1e-4) * 1e-12  # junction leakage in A/um^2 (1 nA/cm^2 converted)
dark_current_amps = photodiode_area_um2 * dark_current_amps_per_um2
dark_current_e_per_sec = dark_current_amps / q
photocurrent_amps = photocurrent_e_per_sec * q
photocurrent_total_amps = photocurrent_amps + dark_current_amps
print(
f"scene illumination level {eng(scene_flux_lux)}lux\n"
f"photodiode area: {photodiode_area_um2}um^2\n"
f"DC photocurrent: {eng(photocurrent_total_amps)}A\n"
f"dark current: {eng(dark_current_e_per_sec)}e/s or {eng(dark_current_amps)}A\n"
f"I_0 off current: {eng(I_0/q)}e/s or {eng(I_0)}A\n"
)
```
scene illumination level 10lux
photodiode area: 10um^2
DC photocurrent: 4.54fA
dark current: 625e/s or 100aA
I_0 off current: 62.50ke/s or 10fA
Is the value smaller than the off-current? This is not surprising; under dark conditions, the photocurrent can be
a small fraction of the FET off-current.
#### Making a photocurrent stimulus waveform
Now we will make a waveform input stimulus to drive our time-domain simulations of the photoreceptors.
Let's define the input photocurrent waveform we will use: a square wave with contrast modulation `signal_contrast` that starts at a bright DC level and then switches to a dark DC level.
```python
import numpy as np
from scipy.signal import square
import matplotlib.pyplot as plt # plotting
dark = dark_current_amps # dark current level
sigdc1 = photocurrent_amps # DC photocurrent for bright half
sigdc2 = photocurrent_amps / 10 # and dark half
signal_contrast = 5 # contrast in each half, i.e. cont=10 means that the bright part will be 10 times the dark half period, 2 means 2 times
nper = 2 # how many periods to simulate for each half bright/dark
# to compute the period, let's make it so that half a period is 1 time constants for SF in dark part of scene
C_pd = 100e-15 # guesstimate about 100fF
tau_sf = C_pd * U_T / sigdc2
per = 2 * tau_sf # period in seconds
print(
f"source follower photodiode capacitance C_pd={eng(C_pd)}F and tau_sf={eng(tau_sf)}s\n"
f"Computed period: {eng(per)}s"
)
dt = per / 500 # timesteps per half period
time_basis = np.arange(
0, 2 * nper * per, dt
) # start,stop,step generate time basis that is nper long
npts = len(time_basis)
npts2 = int(npts / 2)
# generate square wave with period per using time basis t that has steps dt
# square(x) has period 2*pi, i.e. its output is 1 when input is 0-pi, then -1 for pi-2pi, then 1 again
# thus if we want to have nper cycles in each half of our stimulus, we need to
# make its argument go to 2pi when the time goes to per
# Also, shift it up and divide by 2 to get 0-1 modulated square
sq = (square((2 * np.pi * time_basis) / (per)) + 1) / 2
# convolve with a short box filter to
# make the edges not perfectly square to simulate finite optical aperture
# sq=np.convolve(sq,np.ones(10)/10,mode='same') # causes some wierd transient, didn't debug
sig = np.zeros_like(sq)
sig[:npts2] = sigdc1 * (1 + (signal_contrast - 1) * sq[:npts2])
sig[npts2 + 1 :] = sigdc2 * (1 + (signal_contrast - 1) * sq[npts2 + 1 :])
sig[npts2] = sigdc2 * (1 + (signal_contrast - 1) * sq[npts2 + 1])
photocurrent_waveform = sig
# plt.plot(t,cur)
fig, ax1 = plt.subplots(sharex=True)
ax1.plot(
time_basis,
photocurrent_waveform,
"g",
)
ax1.set_ylim([0, None])
ax1.set_yscale("linear")
ax1.set_xscale("linear")
ax1.tick_params(axis="y", colors="green")
ax1.set_xlabel("t [s]")
ax1.set_ylabel("I_pd [A]")
```
We need to make a function out of our nice I_pd vector so that we can get I_pd at any point in time
```python
def find_nearest_idx(array, value):
idx = np.searchsorted(array, value, side="right")
return idx
def I_pd_function(time, time_basis, photocurrent_waveform):
idx = find_nearest_idx(time_basis, time)
idx -= 1 # go to next point just to left since search finds point just before time
if idx < 0:
return photocurrent_waveform[0]
if idx >= (len(time_basis) - 1):
return photocurrent_waveform[-1]
t1 = time_basis[idx]
i1 = photocurrent_waveform[idx]
idx += 1
t2 = time_basis[idx]
i2 = photocurrent_waveform[idx]
tfrac = (time - t1) / (t2 - t1) if t2 - t1 > 0 else 0.5
i = (1 - tfrac) * i1 + tfrac * i2
return i
I_pd = lambda t: I_pd_function(t, time_basis, photocurrent_waveform)
# test it
ttest = np.linspace(0, time_basis[-1], 1000)
itest = np.array([I_pd(t) for t in ttest])
plt.plot(ttest, itest)
```
### Exercise 1: Static vs. active unity-gain photoreceptors DC responses
First you will plot the theoretical DC responses of SF and TI to input photocurrent.
We will compute expressions for the DC response of the simple and unity gain feedback photoreceptor circuits.
We compute the SF output for you.
```python
i_pd=np.logspace(-19,-6,100) # input photocurrent vector, log scale from well under I_0 and dark current to microamp, which is huge
# equation for SF DC output, assuming gate voltage of 1.4V
v_g=1.4
v_sf= kappa*v_g-U_T*np.log((i_pd+dark_current_amps)/I_0)
```
**(a)** Computing the TI photoreceptor DC output
Let's define a function for the TI photoreceptor DC output. We can use that to plot it, and later on use
it to define the initial condition for the TI photoreceptor voltage at the start of transient simulation.
Assume for the TI circuit that the gain of the feedback amplifier is infinite and that the amplifier is ideal, i.e. that
the input FET never goes out of saturation.
You should fill in the expressions for the TI photodiode and output voltages:
```python
def ti_dc(I_b, I_pd, I_0=I_0, V_e=V_e, U_T=U_T, kappa=kappa, I_dark=dark_current_amps):
"""Computes the theoretical DC operating point of TI photoreceptor given parameters
:param I_b: bias current, should be scalar
:param I_pd: photocurrent, can be vector
:param V_e: amplfifier input FET Early voltage
:param U_T: thermal voltage
:param kappa: back gate coefficient of all FETs
:returns: [V_pd, V_out] voltages in form suitable for solve_ivp initial condition
"""
# TODO include effect of finite amplifier gain, not accounted for now
# check that I_b is scalar
if not np.isscalar(I_b):
raise ValueError("I_b should be a scalar")
# TODO compute the photodiode voltage. It is determined by Ib, right?
V_pd = np.log(I_b / I_0) * U_T / kappa
# we need to handle that I_pd might be scalar or vector
if not np.isscalar(I_pd):
V_pd = np.ones(len(I_pd)) * V_pd
# TODO compute the TI output voltage expression
V_out = (np.log((I_pd + I_dark) / I_0) * U_T + V_pd) / kappa
return [V_pd, V_out]
# check DC output
I_b = 10e-9 # bias current for amplifier pullup Mp in TI photoreceptor
ip = I_pd(0)
vti0 = ti_dc(I_b, photocurrent_amps)
print(
f"DC output of TI with bias current I_b={eng(I_b)} "
f"and photocurrent I_p={eng(photocurrent_amps)} "
f"are vpd={eng(vti0[0])} vout={eng(vti0[1])}"
)
```
DC output of TI with bias current I_b=10n and photocurrent I_p=4.44f are vpd=431.73m vout=515.02m
**(b)** Plot the SF and TI DC output together on a log-linear plot of $V_{out}$ versus
$I_{pd}$, covering a range of photocurrents of $I_{pd}$ from 0.01fA to 10nA. Assume $I_0$=1e-13A and that
there is dark current of $I_{dark}$=0.1fA.
```python
# compute the vector of TI outputs. The function returns a list of 2 vectors, [V_pd,V_out]
v_ti=ti_dc(I_b, i_pd)
plt.figure('DC responses')
plt.title('DC photoreceptor responses')
plt.semilogx(i_pd,v_sf,'r-', i_pd,v_ti[1],'b-', i_pd,v_ti[0],'g-')
plt.legend(['source-follower','TI output','TI photodiode'])
plt.xlabel('Photocurrent (A)')
plt.ylabel('Output voltage (V)')
```
### Preparation for large signal transient (time-domain) simulations
These results make sense. Now we have equations we can use in the ODEs for the photoreceptors.
### Dynamical equations
Next we will write the dynamical equations for the source-follower and feedback photoreceptor using the id_sub equation
for the currents. We need to write the right hand side equation for
```
dy / dt = f(t, y)
```
given initial condition
```
y(t0) = y0
```
The source follower only has one node so the output is a scalar derivative.
The TI photoreceptor has 2 nodes (the photodiode and output), so the output is a vector of 2 derivatives w.r.t. time.
```python
```
### Exercise 2: Large signal transient response of source follower and active photoreceptors
As an example, below we define the RHS for the SF. The time derivative of the output voltage is the current
divided by the node capacitance:
```python
def sfdvdt(
t, y, V_g=1.4, C_pd=100e-15
): # fill in reasonable photodiode capacitance, e.g. 100fF
vdot = (id_sub(V_g=V_g, V_s=y) - I_pd(t)) / C_pd
return vdot
```
### Exercise 2.1: define RHS of ODE for TI receptor
Now you should do the same thing, but for the vector of TI node voltages [vpd,vout] for the vpd input (photodiode)
and vout output nodes:
```python
def tidvdt(t, y, Ib, I_pd, V_e=V_e, C_pd=100e-15, C_out=1e-15):
"""Compute time derivatives of TI photoreceptor node voltages
:param t: the time in s
:param y: the TI PD and output voltages vector [vpd,vout]
:param V_e: the amplifier input n-fet Early voltage in V
:param C_pd: the photodiode cap in Culombs
:param C_out: the output capacitance
:returns: the vector of photodiode/output voltage time derivatives
"""
vpd = y[0]
vout = y[1]
# you fill in next parts from equations for TI photoreceptor.
# TODO
vpd_dot = (id_sub(V_g=vout, V_s=vpd, V_e=V_e) - I_pd(t) - dark_current_amps) / C_pd
vout_dot = (Ib - id_sub(V_g=vpd, V_d=vout, V_e=V_e)) / C_out
yout = [vpd_dot, vout_dot]
return yout
```
### Exercise 2.2: Timestepping transient simulation of photoreceptors
Below we have done it for the SF photoreceptor.
You should add the TI photoreceptor to the simulation so you can compare them to each other.
You may find that the simulator does not respond to some of the edges in
the photocurrent. If this is the case, you can try decreasing the tolerances (rtol
and atol), and also try different integration methods. Check the solve_ivp() documentation for the
available options.
```python
import matplotlib.pyplot as plt # plotting
import numpy as np # for math
from scipy.integrate import solve_ivp # for timestepping ODEs
V_sf0 = [
1.1
] # initial condition of v, just guess it to be approx Vg-a bit, e.g. 1.4-.3
sf_sol = solve_ivp(
sfdvdt,
(time_basis[0], time_basis[-1]),
V_sf0,
t_eval=time_basis,
rtol=1e-10,
atol=1e-20,
method="LSODA",
# rtol=1e-9,atol=1e-20, method='Radau',
args=(1.4, C_pd),
)
# output is sol.t and sol.y
if sf_sol.message is not None:
print(sf_sol.message)
v_sf = sf_sol.y[0]
t_sf = sf_sol.t
# TODO you can solve the TI by filling in below
# check DC output
ib = I_b
ip = I_pd(0)
V_ti0 = ti_dc(I_b, ip)
print(
f"DC output of TI with I_b={eng(I_b)}A and I_pd={eng(ip)}A are vpd={eng(V_ti0[0])}V vout={eng(V_ti0[1])}V"
)
C_out = 1e-15
ti_sol = solve_ivp(
tidvdt,
(time_basis[0], time_basis[-1]),
V_ti0,
t_eval=time_basis,
rtol=1e-10,
atol=1e-21,
method="Radau",
# rtol=1e-9, atol=1e-19, method='RK45',
args=(I_b, I_pd, V_e, C_pd, C_out),
)
### output is sol.t and sol.y
if ti_sol.message is not None:
print(ti_sol.message)
v_ti = ti_sol.y[1]
t_ti = ti_sol.t
v_pd = ti_sol.y[0]
t_pd = ti_sol.t
# use this plotting style to put several plots sharing same x-axis
# we will plot V_sf together with the input photocurrent
# using another axis since it is volts, not current, and linear not log
fig = plt.figure(figsize=(12, 8))
ax1 = plt.subplot(121)
ax2 = plt.subplot(122, sharex=ax1)
ax1.plot(time_basis, photocurrent_waveform, "g")
ax1.set_yscale("log")
ax1.set_xscale("linear")
ax1.tick_params(axis="y", colors="green")
ax1.set_xlabel("t [s]")
ax1.set_ylabel("I_pd [A]")
ax1.title.set_text("Source Follower Photoreceptor")
ax2.title.set_text("Transimpedance Photoreceptor")
ax3 = ax1.twinx()
ax3.plot(t_sf, v_sf, "r-")
ax3.tick_params(axis="y", colors="red")
ax1.legend(["Input current"], loc="upper left")
ax3.legend(["Vout"], loc="upper right")
ax2.set_yscale("log")
ax2.set_xlabel("t [s]")
ax2.tick_params(axis="y", colors="green")
ax2.set_xscale("linear")
ax2.plot(
time_basis,
photocurrent_waveform,
"g",
)
ax2.legend(["Input current"], loc="upper left")
# TODO: Uncomment for TI photoreceptor
ax4 = ax2.twinx()
ax4.set_ylabel("Voltage [V]")
# ax4.set_ylim([0.4,0.8])
ax4.plot(t_ti, v_ti, "b-", t_pd, v_pd, "m-")
ax4.tick_params(axis="y", colors="blue")
ax4.legend(["Vout", "Vpd"], loc="upper right")
plt.show()
```
Note that in the above transient solution, you might get a startup glitch, particularly
for the TI photoreceptor, because your TI DC solution
at the starting time is not quite correct. It means that V_pd and V_out are not consistent,
and so the circuit will go through a short period of adjustment to come to the steady-state level.
### Exercise 3: Small signal modeling
You already did the most difficult part which is the large signal modeling. Now we will fix
the operating point around some DC level and compute the small signal transfer functions.
From these we can see the cutoff frequencies and stability.
### Exercise 3.1: AC transfer functions (Bode plots) of static vs. active photoreceptors
**(a)** Write the small-signal differential equations for the simple photoreceptor and the unity-gain feedback photoreceptor assuming
photodiode capacitance $C_{pd}$ and (for the feedback photoreceptor) output load capacitance $C_{out}$.
For the feedback photoreceptor, you can assume that the amplifier has an output conductance $g_{out}$=$I_b$/$V_e$
(recall that the DC voltage gain is A=-$g_m$/$g_{out}$).
> Simple photoreceptor:
$$
\begin{align}
C_{out}\dot{v_{out}} &= g_m V_b - g_s V_{out} - i\\
&=\frac{\kappa I_{pd}}{U_T}V_b - \frac{I_{pd}}{U_T}V_{out} - i\\
&= \frac{I_{pd}}{U_T}(\kappa V_b - V_{out}) - i\\
&= -\frac{I_{pd}}{U_T}V_{out} - i & \kappa V_b \text{ is a constant DC value}\\
\end{align}
$$
> Unity-gain feedback photoreceptor:
$$
\begin{align}
C_{pd}\dot{v_{pd}} &= g_{m_{M_{fb}}} v_{out} - g_{s_{M_{fb}}} v_{pd} -i\\
C_{out} \dot{v_{out}} &= -g_{m_{M_{n}}} v_{pd} - g_{out} v_{out} \\
\end{align}
$$
**(b)** From the differential equations, derive the transfer functions $H(s)$ for each circuit.
Your equations should end up with time constants $\tau_{in}$ (for the photodiode node) and $\tau_{out}$
for the feedback photoreceptor.
For this derivation, the input to the circuit is the small-signal photocurrent $i_{pd}$ which is its deviation from
the DC value $I_{pd}$. The output is the small signal output voltage $v_{out}$.
But since the circuit is a log photoreceptor, a better way to express the transfer function
is to write it as output voltage per log input current. Thus $H(s)$ will be the transimpedance
'gain' that transduces from $i_{pd}/I_{pd}$ to $v_{out}$, i.e.
the units of H(s) are volts/(fractional change in current).
> Simple photoreceptor:
$$
\begin{align}
C_{out}\dot{v_{out}} &= -\frac{I_{pd}}{U_T}V_{out} - i
\end{align}
$$
$$
\begin{align}
\tau_{in} &= \frac{C_{out}}{g_s}\\
g_s &= \frac{I_{pd}}{U_T}\\
\end{align}
$$
$$
\begin{align}
\tau_{in} \dot{v_{out}} &= - V_{out} - \frac{i}{g_s}\\
\tau_{in} s v_{out} &= - V_{out} - \frac{i}{g_s} & \text{s plane}\\
(s\tau_{in} + 1)v_{out} &= - \frac{i}{g_s}\\
&= -U_T \frac{i}{I_{pd}}\\
H(s) = \frac{V_{out}}{\frac{i}{I_{pd}}} &= -\frac{U_T}{s\tau + 1}
\end{align}
$$
> Unity-gain feedback photoreceptor:
$$
\begin{align}
\tau_{in} &= \frac{C_{pd}}{g_{s_{M_{fb}}}}\\
\tau_{out} &= \frac{C_{out}}{g_{out}}\\
A &= \frac{g_{m_{M_{n}}}} {g_{out}} \\
\kappa &= \frac{g_{m_{M_{fb}}}} {g_{s_{M_{fb}}}} \\
\end{align}
$$
$$
\begin{align}
C_{pd}\dot{v_{pd}} &= g_{m_{M_{fb}}} v_{out} - g_{s_{M_{fb}}} v_{pd} -i\\
\tau_{in} \dot{v_{pd}} &= \kappa v_{out} - v_{pd} - \frac{i}{g_{s_{M_{fb}}}}\\
&= \kappa v_{out} - v_{pd} - U_T \frac{i}{I_{pd}} \\
\tau_{in} \dot{v_{pd}} &= \kappa v_{out} - v_{pd} - U_T \frac{i}{I_{pd}} & \text{s plane} \\
(s\tau_{in} + 1)v_{pd} &= \kappa v_{out} - U_T \frac{i}{I_{pd}} & \text{Eq. 1}\\
\end{align}
$$
$$
\begin{align}
C_{out} \dot{v_{out}} &= -g_{m_{M_{n}}} v_{pd} - g_{out} v_{out} \\
\tau_{out} \dot{v_{out}} &= -A v_{pd} - v_{out}\\
\tau_{out} s v_{out} &= -A v_{pd} - v_{out} & \text{s plane}\\
(s\tau_{out} + 1)v_{out} &= -A v_{pd} & \text{Eq. 2} \\
\end{align}
$$
$$
\begin{align}
(\tau_{out} s + 1) v_{out} &= -A \frac{\kappa v_{out} - U_T \frac{i}{I_{pd}}}{\tau_{in} s + 1} & \text{Combine Eq. 1 and Eq. 2}\\
(\tau_{out} s + 1) (\tau_{in} s + 1) v_{out} &= -A \kappa v_{out} + A U_T \frac{i}{I_{pd}}\\
((\tau_{out} s + 1) (\tau_{in} s + 1) + A \kappa) v_{out} &= A U_T \frac{i}{I_{pd}}\\
H(s) = \frac{v_{out}}{\frac{i}{I_{pd}}} &= \frac{A U_T}{(\tau_{out} s + 1) (\tau_{in} s + 1) + A \kappa}\\
&= \frac{A U_T}{\tau_{in} \tau_{out} s^2 + (\tau_{in} + \tau_{out}) s + 1 + A \kappa}
\end{align}
$$
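As a quick check on this algebra, the two small-signal node equations can be solved symbolically. The sketch below uses SymPy, with symbol names chosen only for this verification; it is independent of the circuit code elsewhere in this notebook.
```python
import sympy as sp

s, tau_in, tau_out, A, kappa, U_T = sp.symbols("s tau_in tau_out A kappa U_T", positive=True)
v_pd, v_out, i_frac = sp.symbols("v_pd v_out i_frac")  # i_frac stands for i/I_pd

# Photodiode node: (s*tau_in + 1) v_pd = kappa*v_out - U_T*(i/I_pd)
eq1 = sp.Eq((s * tau_in + 1) * v_pd, kappa * v_out - U_T * i_frac)
# Output node: (s*tau_out + 1) v_out = -A*v_pd
eq2 = sp.Eq((s * tau_out + 1) * v_out, -A * v_pd)

sol = sp.solve([eq1, eq2], [v_pd, v_out], dict=True)[0]
H = sp.cancel(sol[v_out] / i_frac)
print(H)  # expect A*U_T / (tau_in*tau_out*s**2 + (tau_in + tau_out)*s + 1 + A*kappa)
```
The printed rational expression should match the transfer function derived above, up to the ordering of terms in the denominator.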
**(c)** The TI feedback should make the TI photoreceptor
faster to respond to changes in photocurrent than the SF photoreceptor
(and also noisier).
By setting $\tau_{out}$ to zero (taking the limit as $\tau_{out}$ goes to zero), compute the expected speedup from the feedback.
I.e., what is the ratio of cutoff frequency of TI to SF circuit when $I_b$ is really large?
You will see if it true (at least in the model) in the Bode magnitude transfer function plots.
> Simple photoreceptor:
$$
\begin{align}
H(s) &= -\frac{U_T}{\tau s + 1}\\
\Rightarrow s_0 &= -\frac{1}{\tau} & \text{pole}\\
-s_0 &= \frac{1}{\tau}\\
\omega_0 = |s_0| &= \frac{1}{\tau}\\
\Rightarrow f_{sf} &= \frac{1}{2 \pi \tau}\\
\end{align}
$$
> Unity-gain photoreceptor:
$$
\begin{align}
\omega &:= 2\pi f\\
\lim_{\tau_{out} \to 0} H(s) &= A \frac{U_T}{\tau_{in} s + 1 + A \kappa}\\
\Rightarrow s_0 &= \frac{-1 - A \kappa}{\tau} & \text{pole}\\
-s_0 &= \frac{1 + A \kappa}{\tau} \\
\omega_c = |s_0| &= \frac{1 + A \kappa}{\tau} \\
\Rightarrow f_{ti} &= \frac{1 + A\kappa}{2 \pi \tau} \\
\end{align}
$$
> ratio of cutoff frequencies:
$$
\begin{align}
\frac{f_{ti}}{f_{sf}} &= \frac{\frac{1 + A\kappa}{2 \pi \tau}}{\frac{1}{2 \pi \tau}}\\
&=1 + A \kappa
\end{align}
$$
**(d)** Plot the magnitude of the transfer functions versus frequency, assuming reasonable values for $\tau_{in}$, $\tau_{out}$, etc
and an intermediate DC value of photocurrent, e.g. $I_{pd}$=1pA. You can assume that the bias current of the amplifier
for the feedback photoreceptor is $I_b$=10nA and $V_e$=10V.
Remember that frequency in radians per second is $w$=$2 \pi f$ where $f$ is frequency in Hertz.
You can use numpy to compute the magnitude of the complex transfer function by using $s$=$jw$ where $j$ is $\sqrt{-1}$.
Assume the DC photocurrent is still photocurrent_amps from above.
```python
import matplotlib.pyplot as plt # plotting
import numpy as np # for math
Ipd_dim = photocurrent_total_amps
C_out = 1e-15
freq = np.logspace(-2, 6, 100) # plot from 1kHz to 1GHz
w = 2 * np.pi * freq
tau_sf = C_pd * U_T / Ipd_dim # tau_sf = C_pd/gs_sf
H_sf = U_T / (np.sqrt(np.square(w * tau_sf) + 1))
sf_cutoff_hz = 1 / (2 * np.pi * tau_sf)
print(
f"source follower photodiode capacitance C_pd={eng(C_pd)}F and tau_sf={eng(tau_sf)}s\n"
f"SF cutoff frequency: {eng(sf_cutoff_hz)}Hz"
)
# TODO TI photoreceptor
tau_in = C_pd * U_T / Ipd_dim
tau_out = C_out * V_e / I_b
A = kappa * V_e / U_T
H_ti = np.abs(
(A * U_T)
/ (tau_in * tau_out * (1j * w) ** 2 + (tau_in + tau_out) * (1j * w) + 1 + A * kappa)
)
# Same as
# H_ti=(A*U_T)/(np.sqrt((tau_in + tau_out)**2*w**2 + (1 + A*kappa-tau_in*tau_out*w**2 )**2))
ti_cutoff_hz = sf_cutoff_hz * (kappa * A + 1)
print(f"transimpedance TI cutoff frequency: {eng(ti_cutoff_hz)}Hz")
fig, ax1 = plt.subplots(sharex=True)
# ax1.plot(freq,H_sf,'b')
ax1.plot(freq, H_sf, "b", label="$|H(s)|$ of source-follower")
ax1.plot(freq, H_ti, "r", label="$|H(s)|$ of transimpedance")
ax1.axvline(sf_cutoff_hz, color='b', linestyle = '--', label="source-follower cutoff frequency")
ax1.axvline(ti_cutoff_hz, color='r', linestyle = '-.', label="transimpedance cutoff frequency")
# ax1.set_ylim([0,None])
ax1.set_yscale("log")
ax1.set_xscale("log")
# ax1.tick_params(axis='y', colors='green')
ax1.set_xlabel("f [Hz]")
ax1.set_ylabel("|H(s)| [V/(A/A)]")
ax1.legend()
ax1.grid()
ax1.title.set_text("Photoreceptor Characteristics")
plt.show()
```
**(c)** Comment on your results. Can you see the effect of feedback on the bandwidth in the TI circuit? Can you observe
some ringing? I.e. is the circuit overdamped or underdamped at this $I_b$ and $I_{pd}$?
> The feedback decreases $\tau$, which increases the bandwidth since $f_c = \frac{1}{2\pi\tau}$ in a low pass filter. So the TI circuit can react faster to changes and follow along higher frequencies. From the bode plot we see that the circuit is overdamped, since there are no peaks in the frequency response.
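As a rough numerical sanity check of this answer, we can evaluate Q at the operating point used for the Bode plot above, using the same expression for Q that the `get_poles` function defined in Exercise 4 uses. The constants below simply restate values defined earlier in this notebook; this is only a sketch.
```python
import numpy as np

# Constants as defined earlier in this notebook
U_T, kappa, V_e = 25e-3, 0.8, 10
C_pd, C_out = 100e-15, 1e-15
I_b = 10e-9
I_pd_dc = 4.54e-15  # approximately the DC photocurrent printed above

tau_in = C_pd * U_T / I_pd_dc
tau_out = C_out * V_e / I_b
A = kappa * V_e / U_T
Q = np.sqrt(A * kappa * tau_in * tau_out) / (tau_in + tau_out)
print(f"Q = {Q:.2e}  (well below 1/2, so the response is overdamped)")
```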
### Exercise 4: Root-locus plot of TI photoreceptor
As the photocurrent increases, it should be clear that the single pole of the SF photoreceptor moves farther away
from the origin, as it speeds up.
Here you will compute the poles of the TI photoreceptor transfer function and then plot their locations
on the complex plane as a function of the amplfier bias current $I_b$, given a fixed photocurrent $I_{pd}$.
The two poles of the quadratic denominator $D(s)$ of $H(s)$ will either both be real or form a complex conjugate pair
(since all the coefficients of the polynomial are real).
**(a)** First let's define a function to get the poles and the Q factor given the
circuit parameters. Remember that a second order system (with no zeros) can be described by a
transfer function of the type
$H(s)=\frac{A}{\frac{1}{\omega_0^2}s^2+\frac{1}{Q\omega_0}s+1}$, where
$\omega_0$ (natural frequency) and $Q$ (quality factor) are characteristics of the system.
```python
def get_poles(Ipd,Ib,C_pd,C_out,V_e,U_T=U_T,kappa=kappa):
# TODO:
tau_in=C_pd * U_T / Ipd
tau_out=C_out * V_e / Ib
A = kappa * V_e / U_T
canonical_tau = np.sqrt(tau_in * tau_out / (A * kappa))
Q=np.sqrt(A*kappa*tau_in*tau_out)/(tau_in+tau_out)
coef2=canonical_tau**2
coef1=canonical_tau/Q
coef0=1 + 1/(A*kappa)
# apply quadratic formula to find roots of the transfer function denominator
# 0j in sqrt forces result to be complex
pole1=(-coef1+np.sqrt(0j+coef1*coef1-4*coef0*coef2))/(2*coef2)
pole2=(-coef1-np.sqrt(0j+coef1*coef1-4*coef0*coef2))/(2*coef2)
return [pole1,pole2,Q]
```
**(b)** Now let's plot the root locus. The resulting plot is a trajectory in the
complex plane as a function of $I_b$
```python
import matplotlib.pyplot as plt # plotting
import numpy as np # for math
Ib_sweep = np.logspace(-15,-6,10000) # define range of Ib, adjust it to show the loop
Ipd_bright = 100e-12 # Photocurrent of 100pA
pole1,pole2,Q=get_poles(Ipd=Ipd_bright,Ib=Ib_sweep,C_pd=C_pd,C_out=C_out,V_e=V_e)
# plot
fig,ax1=plt.subplots(sharex=True)
ax1.plot(np.real(pole1),np.imag(pole1),'b')
ax1.plot(np.real(pole2),np.imag(pole2),'r')
ax1.set_xlabel('Real')
ax1.set_ylabel('Imag')
ax1.legend(['pole1','pole2'])
ax1.title.set_text('Poles of the transimpedance photoreceptor')
ax1.grid()
plt.show()
```
As you should be able to see in the root locus plot, for some values of
$I_b$ both poles lie on the real axis, but as $I_b$ decreases, the poles become
complex conjugates.
Let's now see the impact of pole location on the transient
behavior of the circuit.
**(c)** First, for a particular photocurrent (which of course is not fixed in practice; it varies tremendously),
find the minimum value of $I_b$ that results in a critically damped
circuit. In the exactly critically-damped condition, Q equals 0.5, and the transient response should
show no ringing since it consists of 2 low pass filters in series with equal time constants.
This bias current is also the minimum value of $I_b$ that results in real
valued poles at that photocurrent.
For the photocurrent, you can use the DC photocurrent under the illumination condition at the start
of the waveform, which is called Ipd_bright.
```python
# plot root locus again
fig, ax1 = plt.subplots(sharex=True)
ax1.plot(np.real(pole1), np.imag(pole1), "b")
ax1.plot(np.real(pole2), np.imag(pole2), "r")
ax1.set_xlabel("Real (rad/s)")
ax1.set_ylabel("Imag (rad/s)")
ax1.legend(["pole1", "pole2"])
# TODO fill expression Ib that results in a Q of 0.5
A = kappa * V_e / U_T
tau_in = C_pd * U_T / Ipd_bright
Ib_Qhalf = 4 * V_e * C_out * A * kappa / tau_in
pole1_Qhalf, pole2_Qhalf, Q_Qhalf = get_poles(
Ipd=Ipd_bright, Ib=Ib_Qhalf, C_pd=C_pd, C_out=C_out, V_e=V_e
)
ax1.plot(np.real(pole1_Qhalf), np.imag(pole1_Qhalf), "bo")
ax1.plot(np.real(pole2_Qhalf), np.imag(pole2_Qhalf), "ro")
ax1.grid()
ax1.legend(["pole1", "pole2", "$Q=\\frac{1}{2}$", "$Q=\\frac{1}{2}$"])
plt.show()
print(f"Q: {Q_Qhalf}")
print(f"Ib_Qhalf: {Ib_Qhalf}A")
```
Note that you might have made some small approximations that result in the $Q=1/2$ poles not quite
coming together on the real axis.
I.e. in the transfer function $H(s)$, maybe you dropped a constant term?
Or maybe to find the $Q=1/2$ condition, you simplified by assuming $\tau_2<<\tau_1$?
**(d)** Let's now look at the transient response of a TI photoreceptor operating
under such conditions. First let's define a small signal transient input photocurrent
```python
sigdc_ss=Ipd_bright
#sigdc=1e-15
signal_contrast_ss=1.1 # contrast in each half, i.e. cont=10 means that the bright part will be 10 times the dark half period, 2 means 2 times
tau_ss=C_pd*U_T/sigdc_ss
t_warmup=tau_ss*30
t_total=t_warmup+tau_ss*20
Ipd_warmup=Ipd_bright
Ipd_final=Ipd_bright/signal_contrast_ss
def Ipd_step_func(t,I_t0=Ipd_warmup, I_t1=Ipd_final, t_warmup=t_warmup):
if t<t_warmup:
return I_t0
else:
return I_t1
dt_ss=tau_ss/1000
time_basis_ss=np.arange(0,t_total,dt_ss) # start,stop,step generate time basis that is nper long
npts_ss=len(time_basis_ss)
# compute actual Ipd for timesteps
Ipd_ss=np.empty(npts_ss)
for t,i in zip(time_basis_ss,range(npts_ss)):
Ipd_ss[i]=Ipd_step_func(t)
fig,ax1=plt.subplots(sharex=True)
ax1.plot(time_basis_ss,Ipd_ss,'g',)
#ax1.set_ylim([0,None])
ax1.set_yscale('linear')
ax1.set_xscale('linear')
ax1.tick_params(axis='y', colors='green')
ax1.set_xlabel('t [s]')
ax1.set_ylabel('I_pd [A]')
```
**(e)** Observe the transient response of the photoreceptor under such
conditions. Does it behave as expected? Try also other values of Q, both above
and below 0.5. Plot them in the root locus and observe the transient
response. When is the system overdamped and when is it underdamped? Observe how
the root locus trajectory changes with $I_b$.
```python
# initial condition
V_ti0=ti_dc(Ib_Qhalf,Ipd_bright,I_0=I_0,V_e=V_e,U_T=U_T,kappa=kappa)
print(f'DC output of TI with Ib={eng(Ib_Qhalf)} and Ip={eng(Ipd_bright)} are vpd={eng(V_ti0[0])} vout={eng(V_ti0[1])}')
ti_sol=solve_ivp(tidvdt, (time_basis_ss[0],time_basis_ss[-1]),
V_ti0, t_eval=time_basis_ss, rtol=1e-9, atol=1e-19, method='Radau',
args=(Ib_Qhalf,Ipd_step_func,V_e,C_pd,C_out))
# output is sol.t and sol.y
if ti_sol.message is not None:
print(ti_sol.message)
v_ti_Qhalf=ti_sol.y[1]
t_ti_Qhalf=ti_sol.t
v_pd_Qhalf=ti_sol.y[0]
t_pd_Qhalf=ti_sol.t
ib_factor=2 # how much larger and smaller to try seeing how sensitive is the Q=1/2 condition
# now solve for a bit larger Ib
ib=Ib_Qhalf*ib_factor
V_ti0=ti_dc(ib,Ipd_bright,I_0=I_0,V_e=V_e,U_T=U_T,kappa=kappa)
print(f'DC output of TI with Ib={eng(ib)} and Ip={eng(Ipd_bright)} are vpd={eng(V_ti0[0])} vout={eng(V_ti0[1])}')
ti_sol=solve_ivp(tidvdt, (time_basis_ss[0],time_basis_ss[-1]),
V_ti0, t_eval=time_basis_ss, rtol=1e-9, atol=1e-19, method='Radau',
args=(ib,Ipd_step_func,V_e,C_pd,C_out))
# output is sol.t and sol.y
if ti_sol.message is not None:
print(ti_sol.message)
v_ti_Qhalf1=ti_sol.y[1]
t_ti_Qhalf1=ti_sol.t
v_pd_Qhalf1=ti_sol.y[0]
t_pd_Qhalf1=ti_sol.t
# and solve for a bit smaller Ib
ib=Ib_Qhalf/ib_factor
V_ti0=ti_dc(ib,Ipd_bright,I_0=I_0,V_e=V_e,U_T=U_T,kappa=kappa)
print(f'DC output of TI with Ib={eng(ib)} and Ip={eng(Ipd_bright)} are vpd={eng(V_ti0[0])} vout={eng(V_ti0[1])}')
ti_sol=solve_ivp(tidvdt, (time_basis_ss[0],time_basis_ss[-1]),
V_ti0, t_eval=time_basis_ss, rtol=1e-9, atol=1e-19, method='Radau',
args=(ib,Ipd_step_func,V_e,C_pd,C_out))
# output is sol.t and sol.y
if ti_sol.message is not None:
print(ti_sol.message)
v_ti_Qhalf2=ti_sol.y[1]
t_ti_Qhalf2=ti_sol.t
v_pd_Qhalf2=ti_sol.y[0]
t_pd_Qhalf2=ti_sol.t
```
DC output of TI with Ib=409.60n and Ip=100p are vpd=547.75m vout=972.51m
The solver successfully reached the end of the integration interval.
DC output of TI with Ib=819.20n and Ip=100p are vpd=569.41m vout=999.59m
The solver successfully reached the end of the integration interval.
DC output of TI with Ib=204.80n and Ip=100p are vpd=526.09m vout=945.44m
The solver successfully reached the end of the integration interval.
```python
# use this plotting style to put several plots sharing same x-axis
# we will plot V_sf together with the input photocurrent
# using another axis since it is volts, not current, and linear not log
t_start = t_warmup-tau_ss/50
t_end = t_warmup+tau_ss/40
six=np.argmax(time_basis_ss>t_start)
eix=np.argmax(time_basis_ss>t_end)
r=range(six,eix) # range to plot, to eliminate startup transient
# because of transient, we need to limit the output voltage plotting range
# around the DC level before and after the step
fig=plt.figure(figsize=(12,8))
fig,ax1=plt.subplots(sharex=True)
# tlim=[t_start,t_end]
# lookup idx of time point just before step
# vlim=[v_ti_Qhalf2[len(time_basis_ss)//5],v_ti_Qhalf2[-1]*1.1]
ax1.plot(time_basis_ss[r],Ipd_ss[r],'g')
ax1.set_yscale('linear')
ax1.set_xscale('linear')
ax1.tick_params(axis='y', colors='green')
ax1.set_xlabel('t [s]')
ax1.set_ylabel('I_pd [A]')
# ax1.set_xlim(tlim)
ax2=ax1.twinx()
ax2.plot(t_ti_Qhalf[r],v_ti_Qhalf[r]-np.mean(v_ti_Qhalf[r]),'b-')
ax2.plot(t_ti_Qhalf1[r],v_ti_Qhalf1[r]-np.mean(v_ti_Qhalf1[r]),'c--')
ax2.plot(t_ti_Qhalf2[r],v_ti_Qhalf2[r]-np.mean(v_ti_Qhalf2[r]),'c-.')
# ax2.set_xlim(tlim)
ax2.set_ylabel('V_ti -mean [V]')
# ax2.set_ylim(vlim)
ax2.tick_params(axis='y', colors='blue')
ax2.legend(['$I_{b,Qhalf}$','$I_{b,Qhalf}$'+f'*{ib_factor}','$I_{b,Qhalf}$'+f'/{ib_factor}'], loc='upper right')
ax2.title.set_text('Transimpedance Photoreceptor')
plt.show()
```
Let's simulate the open-loop amplifier with a very slow ramp of the input voltage to measure the open-loop voltage gain and compare it with theory.
```python
tt=1
v0=.43
v1=.44
def ampdvdt(t,vd):
vg=v0-(v0-v1)*t/tt
id=id_sub(vg,0,vd,V_e=V_e,kappa=kappa)
i=I_b-id
vdot= i / 1e-15
# print(f'vg={vg}V vd={vd}V id={id}A idiff={i}A')
return vdot
t=np.linspace(0,tt,10000)
vd0=[0]
s=solve_ivp(ampdvdt,(t[0],t[-1]),
vd0, t_eval=t, rtol=1e-9, atol=1e-12, method='Radau')
vd=s.y[0]
vg=v0-(v0-v1)*t/tt
plt.plot(vg,vd,'-b')
r=[i for i in range(len(vd)) if vd[i]> .4 and vd[i]<.5]
plt.xlabel('Vg (V)')
plt.ylabel('Vout (V)')
plt.plot(vg[r],vd[r],'r-')
plt.grid(True)
A_meas=linregress(vg[r],vd[r])[0]
A_pred=kappa*V_e/U_T
print(f'Amplifier gain: Measured {eng(A_meas)}, predicted from kappa*Ve/U_T={eng(A_pred)}')
```
**(f)** If the poles are not purely real, the photoreceptor output will have a
ringing behavior. The larger the Q, the more the system will ring. In the TI
photoreceptor, Q is maximum when $\tau_{out}=\tau_{in}$. Find the value of $I_b$
which results in maximum Q, and plot it on the root locus. Compare the value
obtained for Q with the theoretical value.
```python
# plot root locus
fig,ax1=plt.subplots(sharex=True)
ax1.plot(np.real(pole1),np.imag(pole1),'b')
ax1.plot(np.real(pole2),np.imag(pole2),'r')
ax1.set_xlabel('Real')
ax1.set_ylabel('Imag')
# TODO fill the expression with the value of Ib which maximizes Q
Ib_Qmax=Ipd_bright*V_e*C_out/(C_pd*U_T)
pole1_Qmax,pole2_Qmax,Q_Qmax=get_poles(Ipd=Ipd_bright,Ib=Ib_Qmax,C_pd=C_pd,C_out=C_out,V_e=V_e)
ax1.plot(np.real(pole1_Qmax),np.imag(pole1_Qmax),'bo')
ax1.plot(np.real(pole2_Qmax),np.imag(pole2_Qmax),'ro')
ax1.legend(['pole1','pole2','pole1_Qmax','pole2_Qmax'])
ax1.grid()
plt.show()
print(f"Q: {Q_Qmax}")
print(f"Ib: {Ib_Qmax}")
```
**(g)** Observe the transient response of the photoreceptor in such
conditions. Does it behave as expected? Again, try other values of Q, both above
and below the maximum. Plot them in the root locus and observe the transient
response and how Q affects ringing.
```python
# initial condition
V_ti0=ti_dc(Ib_Qmax,Ipd_bright,I_0=I_0,V_e=V_e,U_T=U_T,kappa=kappa)
ti_sol=solve_ivp(tidvdt, (time_basis_ss[0],time_basis_ss[-1]),
V_ti0, t_eval=time_basis_ss, rtol=1e-9, atol=1e-19, method='Radau',
args=(Ib_Qmax,Ipd_step_func,V_e,C_pd,C_out))
t_start = t_warmup-tau_ss/30
t_end = t_warmup+tau_ss*2
six=np.argmax(time_basis_ss>t_start)
eix=np.argmax(time_basis_ss>t_end)
r=range(six,eix) # range to plot, to eliminate startup transient
# output is sol.t and sol.y
if ti_sol.message is not None:
print(ti_sol.message)
v_ti_Qmax=ti_sol.y[1][r]
t_ti_Qmax=ti_sol.t[r]
v_pd_Qmax=ti_sol.y[0][r]
t_pd_Qmax=ti_sol.t[r]
ti_sol=solve_ivp(tidvdt, (time_basis_ss[0],time_basis_ss[-1]),
V_ti0, t_eval=time_basis_ss, rtol=1e-9, atol=1e-19, method='LSODA',
args=(Ib_Qmax*2,Ipd_step_func,V_e,C_pd,C_out))
# output is sol.t and sol.y
if ti_sol.message is not None:
print(ti_sol.message)
v_ti_Qmax1=ti_sol.y[1][r]
t_ti_Qmax1=ti_sol.t[r]
v_pd_Qmax1=ti_sol.y[0][r]
t_pd_Qmax1=ti_sol.t[r]
ti_sol=solve_ivp(tidvdt, (time_basis_ss[0],time_basis_ss[-1]),
V_ti0, t_eval=time_basis_ss, rtol=1e-9, atol=1e-19, method='LSODA',
args=(Ib_Qmax/2,Ipd_step_func,V_e,C_pd,C_out))
# output is sol.t and sol.y
if ti_sol.message is not None:
print(ti_sol.message)
v_ti_Qmax2=ti_sol.y[1][r]
t_ti_Qmax2=ti_sol.t[r]
v_pd_Qmax2=ti_sol.y[0][r]
t_pd_Qmax2=ti_sol.t[r]
```
The solver successfully reached the end of the integration interval.
The solver successfully reached the end of the integration interval.
The solver successfully reached the end of the integration interval.
```python
# use this plotting style to put several plots sharing same x-axis
# we will plot V_sf together with the input photocurrent
# using another axis since it is volts, not current, and linear not log
fig=plt.figure(figsize=(12,8))
fig,ax1=plt.subplots(sharex=True)
ax1.plot(time_basis_ss[r],Ipd_ss[r],'g')
ax1.set_yscale('linear')
ax1.set_xscale('linear')
ax1.tick_params(axis='y', colors='green')
ax1.set_xlabel('t [s]')
ax1.set_ylabel('I_pd [A]')
ax2=ax1.twinx()
#ax2.plot(t_ti_Qmax,v_ti_Qmax,'b-')
ax2.plot(t_ti_Qmax,v_ti_Qmax-v_ti_Qmax[0],'b-')
ax2.plot(t_ti_Qmax1,v_ti_Qmax1-v_ti_Qmax1[0],'c--')
ax2.plot(t_ti_Qmax2,v_ti_Qmax2-v_ti_Qmax2[0],'c-.')
#ax2.plot(t_pd,v_pd,'m-')
ax2.tick_params(axis='y', colors='blue')
ax2.legend(['$I_{b,Qmax}$','$I_{b,Qmax}$*2','$I_{b,Qmax}$/2'], loc='upper right')  # legend order matches plotting order: Qmax, Qmax*2, Qmax/2
ax2.title.set_text('Transimpedance Photoreceptor')
ax2.grid()
plt.show()
```
> By comparing the amplitudes at each peak, we see that the solid line has the highest ringing.
| 554e079c060cf7e75c1f1e8a0045a49698b775c9 | 521,959 | ipynb | Jupyter Notebook | neuromorphic_engineering_one/session10_Photoreceptor/2021_Photo_4.5_Jan_Hohenheim.ipynb | janhohenheim/neuromorphic-engineering-one | 2923b095c4ddec935d2ea2d60685beebdd7da097 | [
"MIT"
] | 3 | 2021-12-17T23:08:38.000Z | 2021-12-20T14:02:16.000Z | neuromorphic_engineering_one/session10_Photoreceptor/2021_Photo_4.5_Jan_Hohenheim.ipynb | janhohenheim/neuromorphic-engineering-one | 2923b095c4ddec935d2ea2d60685beebdd7da097 | [
"MIT"
] | null | null | null | neuromorphic_engineering_one/session10_Photoreceptor/2021_Photo_4.5_Jan_Hohenheim.ipynb | janhohenheim/neuromorphic-engineering-one | 2923b095c4ddec935d2ea2d60685beebdd7da097 | [
"MIT"
] | 1 | 2021-12-08T19:15:40.000Z | 2021-12-08T19:15:40.000Z | 238.337443 | 125,640 | 0.912112 | true | 13,670 | Qwen/Qwen-72B | 1. YES
2. YES | 0.692642 | 0.749087 | 0.518849 | __label__eng_Latn | 0.893252 | 0.04379 |
```python
%display typeset
```
# Finite-Dimensional Maps
Also known as difference equations, these are dynamical systems over discrete time. In this notebook we will explore some basic results about discrete maps and a few classical maps in one and two dimensions.
Let us start with a simple example:
$$U_{n+1}=a \sin(U_n), \,\,U_0=U$$
From the expression above we see that maps are functions $\mathbb{R}^p \rightarrow \mathbb{R}^p$ that produce sequences of numbers of the form $\{U_n\}_{n=0}^{\infty}$. In the case of one-dimensional maps, $p=1$.
```python
@interact
def oneDmap(a=slider(0.5,3,.1,1.1, label='$a$'), u0=slider(0,1,.1,.1, label='$U_0$'), n=slider(10,300,1,100, label='Iterações:')):
pts = [(0,a*sin(u0))]
for i in range(1,n+1):
nu = a*sin(pts[-1][1])
pts.append((i,nu))
p = points(pts,axes_labels=['$n$', '$U_n$'])
show("Uf= ",pts[-1][1])
show(p)
```
### The quadratic map
As a second example, let us consider the quadratic map:
$$U_{n+1}=a U_n (1-U_n), \,\,\, U_0=U$$
```python
qmap = lambda u,a: a*u*(1-u)
@interact
def quad_map(a=slider(0.5,4.5,.1,1.1, label='$a$'), u0=slider(0,1,.1,.1, label='$U_0$'), n=slider(10,300,1,100, label='Iterações:')):
pts = [(u0, qmap(u0,a))]
for i in range(1,n+1):
nu = qmap(pts[-1][1],a)
pts.append((i,nu))
p = points(pts,axes_labels=['$n$', '$U_n$'])
show("Uf= ",pts[-1][1])
show(p)
```
## Cobweb (Spider-Web) Diagrams
Also known as Verhulst diagrams, cobweb diagrams let us explore the dynamics of discrete maps. Let us look at the example below with a quadratic map, also known as the logistic map.
$$U_{n+1} = a U_n (1-U_n)$$
These diagrams can be constructed with the following steps:
1. Find the point $(U_0, U_1)$;
1. Draw a horizontal line from this point until it meets the diagonal at the point $(U_1,U_1)$;
1. Draw a vertical line from this last point on the diagonal until it meets the function at the point $(U_1, U_2)$;
1. Repeat steps 2 and 3, incrementing the indices.
In a cobweb diagram, a stable *fixed point* appears as a **converging spiral**, an *unstable point* as a **diverging spiral**, a *period-2 orbit* appears as a **rectangle**, *higher periods* appear as **multiple rectangles**, and *chaotic attractors* appear as **countless rectangles**.
```python
def cobweb(a_function, start, mask = 0, iterations = 20, xmin = 0, xmax = 1):
'''
Returns a graphics object of a plot of the function and a cobweb trajectory starting from the value start.
INPUT:
a_function: a function of one variable
start: the starting value of the iteration
mask: (optional) the number of initial iterates to ignore
iterations: (optional) the number of iterations to draw, following the masked iterations
xmin: (optional) the lower end of the plotted interval
xmax: (optional) the upper end of the plotted interval
EXAMPLES:
sage: f = lambda x: 3.9*x*(1-x)
sage: show(cobweb(f,.01,iterations=200), xmin = 0, xmax = 1, ymin=0)
'''
basic_plot = plot(a_function, xmin = xmin, xmax = xmax, axes_labels=['$U_n$', '$U_{n+1}$'])
iv = point((start,a_function(start)),color='green', size=30)
id_plot = plot(lambda x: x, xmin = xmin, xmax = xmax)
iter_list = []
current = start
for i in range(mask):
current = a_function(current)
for i in range(iterations):
iter_list.append([current,a_function(current)])
current = a_function(current)
iter_list.append([current,current])
series = list_plot([[i,pt[0]] for i,pt in enumerate(iter_list)], color='red', legend_label='$U_n$')
cobweb = line(iter_list, rgbcolor = (1,0,0))
ga = graphics_array([
[basic_plot +iv + id_plot + cobweb],
[series]
],)
ga.show(figsize=[8,10])#basic_plot +iv + id_plot + cobweb
var('x')
@interact
def cobwebber(f_text = input_box(default = "a*x*(1-x)",label = "mapa", type=str),
a = slider(1,4,0.1,3.8, label='$a$'),
start_val = slider(0,1,.01,.5,label = 'valor inicial'),
its = slider([2^i for i in range(0,12)],default = 16, label="iterações")):
def f(x):
return eval(f_text.replace('a',str(a)))
show(cobweb(f, start_val, iterations = its))
```
### Orbit diagram
We have seen that the number of distinct orbits of the logistic model varies with the parameter $a$. We can build another interesting diagram by representing the orbits over an interval of the parameter.
```python
import numpy as np
logmap = lambda a,x: a*x*(1-x)
@interact
def orb_diagram(mask=50, start=0.5, a_min=slider(2.9,4.0,.01,3.4), a_max=slider(3.5,4.0,.01,4),
ymax = slider(0,1,0.01,1),
its = slider([2^i for i in range(0,12)],default = 16, label="iterações")):
func = logmap
states = []
for a in np.linspace(a_min,a_max,150, dtype=float):
iter_list = []
current = start
for i in range(mask):
current = func(a,current)
for i in range(its):
iter_list.append([a,func(a, current)])
current = func(a,current)
# iter_list.append([a,current])
states.extend(iter_list)
p = list_plot(states)
p.show(figsize=[8,8],ymax=ymax)
orb_diagram(logmap)
```
```python
a_min=slider(3.5,4,.01,3.4)
a_min.value
```
3.5
## The Hénon Map, a two-dimensional map
Stuart and Humphries present the Hénon map in chapter 1, equation 1.1.3:
\begin{align}
X_{n+1} &= a -bY_n -X^2_n\\
Y_{n+1} &= X_n
\end{align}
```python
@interact
def Henon_map(x0=slider(-0.4,.4,.1,.1, label='$X_0$'),
y0=slider(-.4,.4,.1,.1, label='$Y_0$'),
a=slider(0,1,.1,0,label='$a$'),
b=slider(0,1,.1,1,label='$b$'),
n=slider(10,1000,50,500, label='Iterações:')):
pts = [(a-b*y0-x0**2,x0)]
for i in range(1,n+1):
xi = a-b*pts[-1][1]-pts[-1][0]**2
yi = pts[-1][0]
pts.append((xi,yi))
p = points(pts, axes_labels=['$X_n$', '$Y_n$'])
xp = list_plot([[i,pt[0]] for i,pt in enumerate(pts)], color='red', legend_label='$X_n$')
yp = list_plot([[i,pt[1]] for i,pt in enumerate(pts)], color='green', legend_label='$Y_n$', alpha=.6)
ga = graphics_array((p,xp+yp))
ga.show(figsize=[10,5])
```
However, the [better-known](https://en.wikipedia.org/wiki/H%C3%A9non_map) Hénon map has a slightly different formulation:
\begin{align}
X_{n+1} &= 1 -a X^2_n + Y_n \\
Y_{n+1} &= b X_n
\end{align}
It is worth mentioning that the system above is the form in which the map appears in [Hénon's original paper](https://venturi.soe.ucsc.edu/sites/default/files/Henon1976.pdf) from 1976.
```python
@interact
def Henon_map_2(x0=slider(-0.4,.4,.1,0, label='$X_0$'),
y0=slider(-.4,.4,.1,0, label='$Y_0$'),
a=slider(0,2,.1,1.2,label='$a$'),
b=slider(0,1,.1,0.3,label='$b$'),
n=slider(10,1000,50,500, label='Iterações:')):
pts = [(a-b*y0-x0**2,x0)]
for i in range(1,n+1):
xi = 1-a* pts[-1][0]**2 + pts[-1][1]
yi = b*pts[-1][0]
pts.append((xi,yi))
p = points(pts, axes_labels=['$X_n$', '$Y_n$'])
xp = list_plot([[i,pt[0]] for i,pt in enumerate(pts)], color='red', legend_label='$X_n$')
yp = list_plot([[i,pt[1]] for i,pt in enumerate(pts)], color='green', legend_label='$Y_n$', alpha=.3)
ga = graphics_array((p,xp+yp))
ga.show(figsize=[10,5])
```
## The Semigroup $S^n$
Let us introduce the semigroup $S^n$ as a notation for performing n steps (iterations) of the map.
Given a dynamical system on a set $E$, we can define the evolution semigroup $S^n:E \rightarrow E$ as the operator such that $U_n=S^n U_0$. This operator has the following properties:
1. $U_{n+m}=S^nU_m=S^mU_n=S^{n+m}U_0$, for all $n,m \geq 0$;
1. $S^0= I$, the identity.
The semigroup $S^n$ is commonly called the *evolution map*; property 1 can be checked numerically, as in the sketch below.
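A minimal numerical illustration of property 1, taking the logistic map as the underlying system (the helper function `S` below is defined only for this sketch):
```python
# Sketch: check U_{n+m} = S^n U_m = S^{n+m} U_0 for the logistic map
def S(n, u, a=3.7):
    """Apply n iterations of the logistic map u -> a*u*(1-u)."""
    for _ in range(n):
        u = a * u * (1 - u)
    return u

u0 = 0.1
n, m = 7, 5
print(S(n + m, u0))    # S^{n+m} U_0
print(S(n, S(m, u0)))  # S^n U_m, with U_m = S^m U_0
print(S(0, u0) == u0)  # S^0 is the identity
```
The first two printed values coincide, and the last line confirms that zero iterations leave the state unchanged.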
## Limit Sets
The following questions are relevant for dynamical systems of any kind:
- What are its equilibria?
Equilibrium points are states that the system will not leave unless perturbed. In the case of discrete maps these equilibria are called limit sets, since they can be defined by a set of points rather than a single one.
- Which initial conditions lead to which limit sets?
In other words, which sets of points constitute their **basins of attraction**?
Consider the following map: $$U_{n+1}=U_n^2,\,\,U_0=U$$
This is a dynamical system on $\mathbb{R}^+$. It is clear that if $U<1$, then $U_n \rightarrow 0$ as $n\rightarrow \infty$, so $0$ is a fixed point and the set $[0,1)$ is its basin of attraction. If $U=1$ then $U_n=1, \forall n$, so $1$ is also a fixed point. Finally, if $U>1$ then $U_n \rightarrow \infty$ as $n\rightarrow \infty$; therefore $\infty$ is an equilibrium of the system, and the set $(1,\infty]$ is its basin of attraction.
Encontrar os equilíbrios de uma Mapa nem sempre é tão simples. Consider este outro mapa:
$$U_{n+1}=a U -U^3, \, U_0=U$$
```python
@interact
def cobwebber(f_text = input_box(default = "a*x - x**3",label = "mapa", type=str),
a = slider(1,4,0.1,3, label='$a$'),
start_val = slider(0,2,.01,n(sqrt(3)),label = 'valor inicial'),
its = slider([2^i for i in range(0,12)],default = 16, label="iterações")):
def f(x):
return eval(f_text.replace('a',str(a)))
show(cobweb(f, start_val, iterations = its,xmin=-2,xmax=2))
```
Interactive function <function cobwebber at 0x7f4c8e6f6a60> with 4 widgets
f_text: TransformText(value='a*x …
## Estabilidade
Pontos fixos por definição são valores de $U$ para os quais $U_{n+1}=U_n$. Logo quando existirem sempre pertencerão à diagonal do gráfico de cobweb.
Para uma definição mais formal, considere o seguinte mapa linear:
$$U_{n+1}=AU_n.$$
$A$ tem autovalores $\{\lambda_i\}_{i=1}^l$, onde $l\leq p$. Logo,
1. A origem é assintoticamente estável se e somente se $|\lambda_i|<1$. para todos os $i$.
2. Se $|\lambda_i|\leq1$ para todo $i$ e os autovalores iguais a um são "não-defeituosos", então a origem é estável.
Para mapas não lineares pode-se aplicar técnicas de linearização local nos pontos fixos, e aplicar os critérios acima.
### Analisando o Mapa Logístico
Vamos revisitar o Mapa logístico e analisar seus pontos fixos. Vamos denotar por $x^*$ os pontos fixos.
$$x_{n+1}=rx_n(1-x_n)$$
Os pontos fixos satisfazem $x^* = f(x^*)=rx^*(1-x^*)$. Logo $x^*=0$ ou $x^*=1-\frac{1}{r}$
```python
var('r x')
solve(x==r*x*(1-x),x)
```
<html></html>
A estabilidade dos pontos fixos depende da derivada de $f(x^*)$, $f'(x^*) = r - 2rx^*$. NO caso de $x^*=0$, $f'(0)=r$ este ponto fixo é estável se $r<1$ e instável se $r>1$. Para o segundo ponto fixo, $f'(x^*)=r-2r(1-1/r)=2-r$. Logo o segundo ponto fixo é estável quando $1<r<3$ e instável quando $r>3$.
```python
```
| c343f2cf796c929e46ec5ccfc4ae0e69287fdd13 | 22,971 | ipynb | Jupyter Notebook | Sage/Mapas discretos.ipynb | fccoelho/sistemas_din-micos_aplicados | e8639674248276aa2d4138012511ea18994105a1 | [
"CC0-1.0"
] | 1 | 2020-09-13T21:53:36.000Z | 2020-09-13T21:53:36.000Z | Sage/Mapas discretos.ipynb | fccoelho/sistemas_din-micos_aplicados | e8639674248276aa2d4138012511ea18994105a1 | [
"CC0-1.0"
] | null | null | null | Sage/Mapas discretos.ipynb | fccoelho/sistemas_din-micos_aplicados | e8639674248276aa2d4138012511ea18994105a1 | [
"CC0-1.0"
] | null | null | null | 40.441901 | 1,405 | 0.567716 | true | 3,913 | Qwen/Qwen-72B | 1. YES
2. YES | 0.83762 | 0.795658 | 0.666459 | __label__por_Latn | 0.944005 | 0.386739 |
# Supplemental Information E - Non-linear Regression
(c) 2017 the authors. This work is licensed under a [Creative Commons Attribution License CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/). All code contained herein is licensed under an [MIT license](https://opensource.org/licenses/MIT).
```python
# For operating system interaction
import os
import glob
import datetime
import sys
# For loading .pkl files.
import pickle
# For scientific computing
import numpy as np
import pandas as pd
import scipy.special
import statsmodels.tools.numdiff as smnd # to compute the Hessian matrix
# Import custom utilities
import mwc_induction_utils as mwc
# Useful plotting libraries
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import seaborn as sns
mwc.set_plotting_style()
# Magic function to make matplotlib inline; other style specs must come AFTER
%matplotlib inline
# This enables SVG graphics inline
%config InlineBackend.figure_format = 'svg'
```
/Users/gchure/anaconda/lib/python3.4/site-packages/matplotlib/__init__.py:872: UserWarning: axes.color_cycle is deprecated and replaced with axes.prop_cycle; please use the latter.
warnings.warn(self.msg_depr % (key, alt_key))
# Non-linear regression.
In order to obtain the MWc parameters given the fold-change measurements and a credible region on such parameters we will use a Bayesian approach to perform a non-linear regression.
Our theoretical model dictates that the fold change in gene expression is given by
\begin{equation}
\text{fold-change} = \frac{1}{1 + \frac{R p_{act}(c)}{N_{NS}} e^{-\beta \Delta \varepsilon_{RA}}},
\end{equation}
where $p_{act}(c)$ is given by
\begin{equation}
p_{act}(c) = \frac{\left( 1 + c e^{\tilde{k_A}}\right)^2}{\left( 1 + c e^{\tilde{k_A}}\right)^2 + e^{-\beta \Delta\varepsilon_{AI}} \left( 1 + c e^{\tilde{k_I}}\right)^2}.
\end{equation}
We define $\tilde{k_A} = -\ln K_A$ and $\tilde{k_I} = -\ln K_I$ for convenience during the regression.
If we want to fit the parameters $\tilde{k_A}$ and $\tilde{k_I}$, by Bayes theorem we have that
\begin{equation}
P(\tilde{k_A}, \tilde{k_I} \mid D, I) \propto P(D \mid \tilde{k_A}, \tilde{k_I}, I) \cdot P(\tilde{k_A}, \tilde{k_I} \mid I),
\end{equation}
where $D$ is the experimental data and $I$ is all the previous information.
## Gaussian likelihood and constant error
The simplest model to perform the regression is to assume the following:
1. each measurement is independent
2. the errors are Gaussian distributed
3. this error is constant along the range of IPTG.
Now it is important to indicate that each element of $D$ is a "pair" of a dependent variable (the experimental fold change $fc_{exp}$) and the independent variables (the repressor copy number $R$, the binding energy $\Delta \varepsilon_{RA}$ and the IPTG concentration $C$). With this in hand we implement the first assumption as
\begin{equation}
P(D \mid \tilde{k_A}, \tilde{k_I}, I) = \prod_{i = 1}^n P(fc_{exp}^{(i)} \mid \tilde{k_A}, \tilde{k_I}, R^{(i)}, \Delta\varepsilon_{RA}^{(i)}, C^{(i)}, I),
\end{equation}
where $n$ is the number of data points and the superscript $(i)$ indicates the $i$th element of $D$.
Implementing the second and third assumption we obtain
\begin{equation}
P(D \mid \tilde{k_A}, \tilde{k_I}, \sigma, I) = \left( 2\pi\sigma^2 \right)^{-\frac{n}{2}} \prod_{i = 1}^n \exp \left[ \frac{1}{2 \sigma^2} \left( fc_{exp}^{(i)} - fc\left(\tilde{k_A}, \tilde{k_I}, R^{(i)}, \Delta\varepsilon_{RA}^{(i)}, C^{(i)} \right) \right)^2 \right],
\end{equation}
where we include the parameter $\sigma$ associated with the Gaussian distributed error.
For the priors we can assume that the 3 parameters $\tilde{k_A}, \tilde{k_I}$ and $\sigma$ are not only independent, but since they have a uniform prior in log scale they can have a Jeffres' prior, i.e.
\begin{equation}
P(\tilde{k_A}, \tilde{k_I}, \sigma \mid I) \equiv \frac{1}{\tilde{k_A}}\cdot\frac{1}{\tilde{k_I}}\cdot\frac{1}{\sigma}
\end{equation}
Putting all the pieces together we can compute the posterior distribution as
\begin{equation}
P(\tilde{k_A}, \tilde{k_I}, \sigma \mid D, I) \propto \left( 2\pi\sigma^2 \right)^{-\frac{n}{2}} \prod_{i = 1}^n \exp \left[ \frac{1}{2 \sigma^2} \left( fc_{exp}^{(i)} - fc\left(\tilde{k_A}, \tilde{k_I}, R^{(i)}, \Delta\varepsilon_{RA}^{(i)}, C^{(i)} \right) \right)^2 \right] \frac{1}{\tilde{k_A}}\cdot\frac{1}{\tilde{k_I}}\cdot\frac{1}{\sigma}
\end{equation}
But we are left with the nuance parameter $\sigma$ that we don't care about. To eliminate this parameter we need to marginalize over all values of $\sigma$ as
\begin{equation}
P(\tilde{k_A}, \tilde{k_I} \mid D, I) = \int_{- \infty}^\infty d\sigma P(\tilde{k_A}, \tilde{k_I}, \sigma \mid D, I).
\end{equation}
And when everything settles down, i.e. after some nasty integration, we find that the posterior is given by the student-t distribution
\begin{equation}
P(\tilde{k_A}, \tilde{k_I} \mid D, I) \propto \left[ \sum_{i=1}^n \left( fc_{exp}^{(i)} - fc\left(\tilde{k_A}, \tilde{k_I}, R^{(i)}, \Delta\varepsilon_{RA}^{(i)}, C^{(i)} \right) \right)^2 \right]^{\frac{n}{2}}.
\end{equation}
Numerically is always better to work with the log posterior probability, therefore for the student-t distribution we have that
\begin{equation}
\ln P(\tilde{k_A}, \tilde{k_I} \mid D, I) \propto \frac{n}{2} \ln \left[ \sum_{i=1}^n \left( fc_{exp}^{(i)} - fc\left(\tilde{k_A}, \tilde{k_I}, R^{(i)}, \Delta\varepsilon_{RA}^{(i)}, C^{(i)} \right) \right)^2 \right]
\end{equation}
Let's code up the functions to compute the theoretical fold-change
```python
# define a funciton to compute the fold change as a funciton of IPTG
def pact(IPTG, ea, ei, epsilon=4.5):
'''
Returns the probability of a repressor being active as described by the MWC
model.
Parameter
---------
IPTG : array-like.
concentrations of inducer on which to evaluate the function
ea, ei : float.
minus log of the dissociation constants of the active and the inactive
states respectively
epsilon : float.
energy difference between the active and the inactive state
Returns
-------
pact : float.
probability of a repressor of being in the active state. Active state is
defined as the state that can bind to the DNA.
'''
pact = (1 + IPTG * np.exp(ea))**2 / \
((1 + IPTG * np.exp(ea))**2 + np.exp(-epsilon) * (1 + IPTG * np.exp(ei))**2)
return pact
def fold_change(IPTG, ea, ei, epsilon, R, epsilon_r):
'''
Returns the gene expression fold change according to the thermodynamic model
with the extension that takes into account the effect of the inducer.
Parameter
---------
IPTG : array-like.
concentrations of inducer on which to evaluate the function
ea, ei : float.
minus log of the dissociation constants of the active and the inactive
states respectively
epsilon : float.
energy difference between the active and the inactive state
R : array-like.
repressor copy number for each of the strains. The length of this array
should be equal to the IPTG array. If only one value of the repressor is
given it is asssume that all the data points should be evaluated with
the same repressor copy number
epsilon_r : array-like
repressor binding energy. The length of this array
should be equal to the IPTG array. If only one value of the binding
energy is given it is asssume that all the data points
should be evaluated with the same repressor copy number
Returns
-------
fold-change : float.
gene expression fold change as dictated by the thermodynamic model.
'''
return 1 / (1 + 2 * R / 5E6 * pact(IPTG, ea, ei, epsilon) * \
(1 + np.exp(-epsilon)) * np.exp(-epsilon_r))
```
Now let's code up the log posterior
```python
def log_post(param, indep_var, dep_var):
'''
Computes the log posterior for a single set of parameters.
Parameters
----------
param : array-like.
param[0] = epsilon_a
param[1] = epsilon_i
indep_var : n x 3 array.
series of independent variables to compute the theoretical fold-change.
1st column : IPTG concentration
2nd column : repressor copy number
3rd column : repressor binding energy
dep_var : array-like
dependent variable, i.e. experimental fold-change. Then length of this
array should be the same as the number of rows in indep_var.
Returns
-------
log_post : float.
the log posterior probability
'''
# unpack parameters
ea, ei = param
# unpack independent variables
IPTG, R, epsilon_r = indep_var[:, 0], indep_var[:, 1], indep_var[:, 2]
# compute the theoretical fold-change
fc_theory = fold_change(IPTG, ea, ei, 4.5, R, epsilon_r)
# return the log posterior
return -len(dep_var) / 2 * np.log(np.sum((dep_var - fc_theory)**2))
```
# Testing the functions with only 1 strain and one operator
Now it is time to test this! But first let's read the data
```python
datadir = '../../data/'
# read the list of data-sets to ignore
df = pd.read_csv(datadir + 'flow_master.csv', comment='#')
# Now we remove the autofluorescence and delta values
df = df[(df.rbs != 'auto') & (df.rbs != 'delta')]
df.head()
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>date</th>
<th>username</th>
<th>operator</th>
<th>binding_energy</th>
<th>rbs</th>
<th>repressors</th>
<th>IPTG_uM</th>
<th>mean_YFP_A</th>
<th>mean_YFP_bgcorr_A</th>
<th>fold_change_A</th>
</tr>
</thead>
<tbody>
<tr>
<th>2</th>
<td>20160804</td>
<td>mrazomej</td>
<td>O2</td>
<td>-13.9</td>
<td>RBS1L</td>
<td>870</td>
<td>0.0</td>
<td>3624.474605</td>
<td>111.851286</td>
<td>0.007146</td>
</tr>
<tr>
<th>3</th>
<td>20160804</td>
<td>mrazomej</td>
<td>O2</td>
<td>-13.9</td>
<td>RBS1</td>
<td>610</td>
<td>0.0</td>
<td>3619.786265</td>
<td>107.162946</td>
<td>0.006847</td>
</tr>
<tr>
<th>4</th>
<td>20160804</td>
<td>mrazomej</td>
<td>O2</td>
<td>-13.9</td>
<td>RBS1027</td>
<td>130</td>
<td>0.0</td>
<td>3717.019527</td>
<td>204.396208</td>
<td>0.013059</td>
</tr>
<tr>
<th>5</th>
<td>20160804</td>
<td>mrazomej</td>
<td>O2</td>
<td>-13.9</td>
<td>RBS446</td>
<td>62</td>
<td>0.0</td>
<td>3854.650585</td>
<td>342.027265</td>
<td>0.021853</td>
</tr>
<tr>
<th>6</th>
<td>20160804</td>
<td>mrazomej</td>
<td>O2</td>
<td>-13.9</td>
<td>RBS1147</td>
<td>30</td>
<td>0.0</td>
<td>4169.802851</td>
<td>657.179531</td>
<td>0.041988</td>
</tr>
</tbody>
</table>
</div>
Let's focus first on a single strain: `O2 - RBS1027`
```python
rbs = df[(df.rbs=='RBS1027') & (df.binding_energy==-13.9)]
plt.figure()
for date in rbs.date.unique():
plt.plot(rbs[rbs.date==date].IPTG_uM / 1E6,
rbs[rbs.date==date].fold_change_A, 'o',
label=str(date), alpha=0.7)
plt.xscale('symlog', linthreshx=1E-7)
plt.xlim(left=-5E-9)
plt.xlabel('[IPTG] (M)')
plt.ylabel('fold-change')
plt.legend(loc='upper left', fontsize=11)
plt.title('RBS1027 lacI/cell = 130')
plt.tight_layout()
```
### Plotting the posterior distribution
Before computing the MAP and doing the proper regression, let's look at the posterior itself
```python
# Parameter values to plot
ea = np.linspace(-5.2, -4.7, 100)
ei = np.linspace(0.45, 0.7, 100)
# make a grid to plot
ea_grid, ei_grid = np.meshgrid(ea, ei)
# compute the log posterior
indep_var = rbs[['IPTG_uM', 'repressors', 'binding_energy']]
dep_var = rbs.fold_change_A
log_posterior = np.empty_like(ea_grid)
for i in range(len(ea)):
for j in range(len(ei)):
log_posterior[i, j] = log_post([ea_grid[i, j], ei_grid[i, j]],
indep_var.values, dep_var.values)
# Get things to scale better
log_posterior -= log_posterior.max()
# plot the results
plt.figure()
plt.contourf(ea_grid, ei_grid, np.exp(log_posterior), alpha=0.7,
cmap=plt.cm.Blues)
plt.xlabel(r'$\tilde{k_A}$')
plt.ylabel(r'$\tilde{k_I}$')
plt.title('Posterior probability, O2 - RBS1027')
```
<matplotlib.text.Text at 0x11b31a518>
### Computing the MAP
In order to compute the Maximum a posteriori parameters or MAP for short we will use the `scipy.optimize.leastsq()` function.
For this we need to define a function that computes the residuals.
```python
def resid(param, indep_var, dep_var, epsilon=4.5):
'''
Residuals for the theoretical fold change.
Parameters
----------
param : array-like.
param[0] = epsilon_a
param[1] = epsilon_i
indep_var : n x 3 array.
series of independent variables to compute the theoretical fold-change.
1st column : IPTG concentration
2nd column : repressor copy number
3rd column : repressor binding energy
dep_var : array-like
dependent variable, i.e. experimental fold-change. Then length of this
array should be the same as the number of rows in indep_var.
Returns
-------
fold-change_exp - fold-change_theory
'''
# unpack parameters
ea, ei = param
# unpack independent variables
IPTG, R, epsilon_r = indep_var[:, 0], indep_var[:, 1], indep_var[:, 2]
# compute the theoretical fold-change
fc_theory = fold_change(IPTG, ea, ei, epsilon, R, epsilon_r)
# return the log posterior
return dep_var - fc_theory
```
To find the most likely parameters we need to provide an initial guess. The optimization routine only finds a local maximum and is not in general guaranteed to converge. Therefore, the initial guess can be very important.
After that we will be ready to use `scipy.optimize.leastsq()` to compute the MAP. We uses the args kwarg to pass in the other arguments to the resid() function. In our case, these arguments are the data points. The `leastsq()` function returns multiple values, but the first, the optimal parameter values (the MAP), is all we are interested in.
```python
# Initial guess
p0 = np.array([1, 7]) # From plotting the posterior
# Extra arguments given as tuple
args = (indep_var.values, dep_var.values)
# Compute the MAP
popt, _ = scipy.optimize.leastsq(resid, p0, args=args)
# Extract the values
ea, ei = popt
# Print results
print("""
The most probable parameters for the MWC model
----------------------------------------------
Ka = {0:.2f} uM
Ki = {1:.3f} uM
""".format(np.exp(-ea), np.exp(-ei)))
```
The most probable parameters for the MWC model
----------------------------------------------
Ka = 140.38 uM
Ki = 0.560 uM
Just to show that these parameters indeed give a good fit let's plot the theory and the data
```python
IPTG = np.logspace(-8, -2, 200)
fc_theory = fold_change(IPTG * 1E6, ea, ei, 4.5, R=130, epsilon_r=-13.9)
plt.figure()
plt.plot(IPTG, fc_theory, '--', label='best parameter fit', color='darkblue')
for date in rbs.date.unique():
plt.plot(rbs[rbs.date==date].IPTG_uM / 1E6,
rbs[rbs.date==date].fold_change_A, 'o',
label=str(date), alpha=0.7)
plt.xscale('symlog', linthreshx=1E-7)
plt.xlim(left=-5E-9)
plt.xlabel('IPTG (M)')
plt.ylabel('fold-change')
plt.legend(loc='upper left', fontsize=11)
plt.tight_layout()
```
# Computing error bars on the parameters.
In order to get a **credible region** on our parameter estimate we will use an aproximation in which the posterior probability can be represented as a Gaussian distribution. This approximation can be justified as a truncated Taylor expansion as follows:
Given our log posterior distribution with parameters $\mathbf{\tilde{k}} = (\tilde{k_A}, \tilde{k_I})$ we can perform a Taylor expansion around our MAP $\mathbf{\tilde{k}}^*$
\begin{equation}
\ln P(\mathbf{\tilde{k}} \mid D, I) \approx \text{constant} + \frac{1}{2} \left( \mathbf{\tilde{k} - \tilde{k}^*}\right)^T \cdot H \cdot \left(\mathbf{\tilde{k} - \tilde{k}^*}\right),
\end{equation}
where $H$ is the symmetric **Hessian matrix** whose entries are given by the second derivatives, i.e.
\begin{equation}
H_{ij} = \frac{\partial ^2 \ln P(\mathbf{\tilde{k}} \mid D, I)}{\partial \tilde{k}_i \partial \tilde{k}_j} \biggr\rvert_{\mathbf{\tilde{k}} = \mathbf{\tilde{k}^*}}.
\end{equation}
If we exponentiate this truncated expansion to remove the log we find something that remarkably resembles a multivariate Gaussian distribution
\begin{equation}
P(\mathbf{\tilde{k}} \mid D, I) \approx \text{constant} \cdot \exp \left[ \frac{1}{2} \left( \mathbf{\tilde{k}} - \mathbf{\tilde{k}^*} \right)^T \cdot H \cdot \left( \mathbf{\tilde{k}} - \mathbf{\tilde{k}^*} \right) \right].
\end{equation}
From this we can see that the Hessian matrix plays the role of the negative inverse **covariance matrix**. As a matter of fact since the second derivatives are evaluated at the MAP the Hessian is *positive definite* and therefore this matrix can be inverted, obtaining our desired covariance matrix. So if we compute the Hessian at the MAP, and then invert this matrix, the diagonal terms of this inverted matrix will be the error bars for our parameters under this Gaussian approximation of the posterior!
Let's now compute the covariance matrix. For this we will numerically compute the Hessian using the `statsmodels.tools.numdiff` package.
```python
# list the arguments to be fed to the log_post function
args = (indep_var.values, dep_var.values)
# Compute the Hessian at the map
hes = smnd.approx_hess(popt, log_post, args=args)
hes
```
array([[ -623.2526089 , 1291.19249453],
[ 1291.19249453, -3351.4824521 ]])
Now that we computed the Hessian let's compute the negative inverse to get our precious covariance matrix!
```python
# Compute the covariance matrix
cov = -np.linalg.inv(hes)
cov
```
array([[ 0.00794864, 0.00306229],
[ 0.00306229, 0.00147816]])
Again, the diagonal terms of this matrix give the approximate variance in the regression parameters. The offdiagonal terms give the covariance, which describe how parameters relate to each other. From the plot of the posterior previously we saw that there is definitely a positive correlation between the parameters, and that is reflected by non-zero entries in these offdiagonal terms.
But recall that this is giving the error bar on $\tilde{k_A}$ and $\tilde{k_I}$, not the dissociation constants themselves. Therefore we must "propagate the error" properly by doing the proper change of variables.
For this we use the approximation that if the error on $\tilde{k_A}$ is given by $\delta \tilde{k_A}$, we can use this relationship to compute $\delta K_A$, the error on the dissociation constant.
First we know the relationshipt between $\tilde{k_A}$ and $K_A$ is
\begin{equation}
\tilde{k_A} = - \ln K_A.
\end{equation}
Differenciating both sides we obtain
\begin{equation}
\delta \tilde{k_A} = - \frac{1}{K_A} \delta K_A.
\end{equation}
We now squre both sides and take the expected value
\begin{equation}
\langle \delta \tilde{k_A} \rangle^2 = \frac{\langle \delta K_A\rangle^2}{K_A^2}.
\end{equation}
Finally we re-arrange terms to find that the error bar on the dissociation constant is given by
\begin{equation}
\delta K_A = \sqrt{\langle \delta K_A \rangle^2} = \sqrt{\langle \delta \tilde{k_A} \rangle^2 \cdot K_A^2} = \delta \tilde{k_A} \cdot K_A
\end{equation}
Now let's report the parameter values with the proper error bars!
```python
# Get the values for the dissociation constants and their respective error bars
Ka = np.exp(-ea)
Ki = np.exp(-ei)
deltaKa = np.sqrt(cov[0,0]) * Ka
deltaKi = np.sqrt(cov[1,1]) * Ki
# Print results
print("""
The most probable parameters for the MWC model
----------------------------------------------
Ka = {0:.2f} +- {1:0.3f} uM
Ki = {2:.5f} +- {3:0.6f} uM
""".format(Ka, deltaKa, Ki, deltaKi))
```
The most probable parameters for the MWC model
----------------------------------------------
Ka = 140.38 +- 12.516 uM
Ki = 0.55964 +- 0.021516 uM
### Using these parameters to predict other strains.
Let's use these parameters to see how well we can predict the other strains.
```python
# Given this result let's plot all the curves using this parameters.
# Set the colors for the strains
colors = sns.color_palette('colorblind', n_colors=7)
colors[4] = sns.xkcd_palette(['dusty purple'])[0]
df_O2 = df[df.operator=='O2']
plt.figure()
for i, rbs in enumerate(df_O2.rbs.unique()):
# plot the theory using the parameters from the fit.
plt.plot(IPTG, fold_change(IPTG * 1E6,
ea=ea, ei=ei, epsilon=4.5,
R=df_O2[(df_O2.rbs == rbs)].repressors.unique(),
epsilon_r=-13.9),
color=colors[i])
# compute the mean value for each concentration
fc_mean = df_O2[(df_O2.rbs==rbs)].groupby('IPTG_uM').fold_change_A.mean()
# compute the standard error of the mean
fc_err = df_O2[df_O2.rbs==rbs].groupby('IPTG_uM').fold_change_A.std() / \
np.sqrt(df_O2[df_O2.rbs==rbs].groupby('IPTG_uM').size())
# plot the experimental data
plt.errorbar(df_O2[df_O2.rbs==rbs].IPTG_uM.unique() / 1E6, fc_mean,
yerr=fc_err,
fmt='o', label=df_O2[df_O2.rbs==rbs].repressors.unique()[0] * 2,
color=colors[i])
plt.xscale('symlog', linthreshx=1E-7)
plt.xlim(left=-5E-9)
plt.xlabel('IPTG (M)')
plt.ylabel('fold-change')
plt.ylim([-0.01, 1.2])
plt.legend(loc='upper left', title='repressors / cell')
plt.tight_layout()
```
# Cross checking the fit with other strains.
An interesting exercise is to perform the fit using the other strains, or pooling all the data together.
To make this in a simple straight forward way let's define a function that takes a `pandas DataFrame` and extracts the independent and dependent variables, performs the regression and returns the MAP and error bar on the parameters $\tilde{k_A}$ and $\tilde{k_I}$.
```python
def non_lin_reg_mwc(df, p0,
indep_var=['IPTG_uM', 'repressors', 'binding_energy'],
dep_var='fold_change_A', epsilon=4.5, diss_const=False):
'''
Performs a non-linear regression on the lacI IPTG titration data assuming
Gaussian errors with constant variance. Returns the parameters
e_A == -ln(K_A)
e_I == -ln(K_I)
and it's corresponding error bars by approximating the posterior distribution
as Gaussian.
Parameters
----------
df : DataFrame.
DataFrame containing all the titration information. It should at minimum
contain the IPTG concentration used, the repressor copy number for each
strain and the binding energy of such strain as the independent variables
and obviously the gene expression fold-change as the dependent variable.
p0 : array-like (length = 2).
Initial guess for the parameter values. The first entry is the guess for
e_A == -ln(K_A) and the second is the initial guess for e_I == -ln(K_I).
indep_var : array-like (length = 3).
Array of length 3 with the name of the DataFrame columns that contain
the following parameters:
1) IPTG concentration
2) repressor copy number
3) repressor binding energy to the operator
dep_var : str.
Name of the DataFrame column containing the gene expression fold-change.
epsilon : float.
Value of the allosteric parameter, i.e. the energy difference between
the active and the inactive state.
diss_const : bool.
Indicates if the dissociation constants should be returned instead of
the e_A and e_I parameteres.
Returns
-------
if diss_const == True:
e_A : MAP for the e_A parameter.
de_A : error bar on the e_A parameter
e_I : MAP for the e_I parameter.
de_I : error bar on the e_I parameter
else:
K_A : MAP for the K_A parameter.
dK_A : error bar on the K_A parameter
K_I : MAP for the K_I parameter.
dK_I : error bar on the K_I parameter
'''
df_indep = df[indep_var]
df_dep = df[dep_var]
# Extra arguments given as tuple
args = (df_indep.values, df_dep.values, epsilon)
# Compute the MAP
popt, _ = scipy.optimize.leastsq(resid, p0, args=args)
# Extract the values
ea, ei = popt
# Compute the Hessian at the map
hes = smnd.approx_hess(popt, log_post,
args=(df_indep.values, df_dep.values))
# Compute the covariance matrix
cov = -np.linalg.inv(hes)
if diss_const:
# Get the values for the dissociation constants and their
# respective error bars
Ka = np.exp(-ea)
Ki = np.exp(-ei)
deltaKa = np.sqrt(cov[0,0]) * Ka
deltaKi = np.sqrt(cov[1,1]) * Ki
return Ka, deltaKa, Ki, deltaKi
else:
return ea, cov[0,0], ei, cov[1,1]
```
Now that we have the function, let's systematically perform the regression on each of the strains to check how different the parameter values are.
```python
# initialize a data frame to save the regression parameters
param_df = pd.DataFrame()
# loop through the RBS performing the regression on each strain
for i, rbs in enumerate(df.rbs.unique()):
param = pd.Series(non_lin_reg_mwc(df[df.rbs==rbs], p0=[1, 7],
diss_const=True),
index=['Ka', 'delta_Ka', 'Ki', 'delta_Ki'])
param_df = pd.concat([param_df, param], axis=1)
# rename the columns by the rbs name
param_df.columns = df.rbs.unique()
# add the regression on all the pool data
param_df['pool_data'] = pd.Series(non_lin_reg_mwc(df, p0=[-5, 1],
diss_const=True),
index=['Ka', 'delta_Ka', 'Ki', 'delta_Ki'])
param_df
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>RBS1L</th>
<th>RBS1</th>
<th>RBS1027</th>
<th>RBS446</th>
<th>RBS1147</th>
<th>HG104</th>
<th>pool_data</th>
</tr>
</thead>
<tbody>
<tr>
<th>Ka</th>
<td>240.402528</td>
<td>160.056120</td>
<td>134.562671</td>
<td>143.114327</td>
<td>175.695247</td>
<td>128.673829</td>
<td>199.482826</td>
</tr>
<tr>
<th>Ki</th>
<td>0.669918</td>
<td>0.546850</td>
<td>0.559323</td>
<td>0.631284</td>
<td>0.791484</td>
<td>0.537988</td>
<td>0.668853</td>
</tr>
<tr>
<th>delta_Ka</th>
<td>17.045449</td>
<td>8.112211</td>
<td>7.421712</td>
<td>8.400356</td>
<td>13.492545</td>
<td>21.273247</td>
<td>5.581350</td>
</tr>
<tr>
<th>delta_Ki</th>
<td>0.032676</td>
<td>0.019271</td>
<td>0.018149</td>
<td>0.018724</td>
<td>0.024183</td>
<td>0.026700</td>
<td>0.009919</td>
</tr>
</tbody>
</table>
</div>
| 93ae3024955ef4a7eaf2ab23fa51a9e91d9f5070 | 469,873 | ipynb | Jupyter Notebook | code/analysis/SI_E_nonlinear_regression.ipynb | RPGroup-PBoC/mwc_induction | 8dc5ad6019383edfb6f6b05476fb049371820805 | [
"MIT"
] | 1 | 2017-03-08T05:52:15.000Z | 2017-03-08T05:52:15.000Z | code/analysis/SI_E_nonlinear_regression.ipynb | RPGroup-PBoC/mwc_induction | 8dc5ad6019383edfb6f6b05476fb049371820805 | [
"MIT"
] | 2 | 2017-09-27T19:38:15.000Z | 2018-02-10T01:13:29.000Z | code/analysis/SI_E_nonlinear_regression.ipynb | RPGroup-PBoC/mwc_induction | 8dc5ad6019383edfb6f6b05476fb049371820805 | [
"MIT"
] | null | null | null | 46.977904 | 512 | 0.508706 | true | 8,083 | Qwen/Qwen-72B | 1. YES
2. YES | 0.822189 | 0.798187 | 0.65626 | __label__eng_Latn | 0.927148 | 0.363044 |
# Design of Digital Filters
*This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing. Please direct questions and suggestions to [Sascha.Spors@uni-rostock.de](mailto:Sascha.Spors@uni-rostock.de).*
## Design of Non-Recursive Filters using the Window Method
The design of non-recursive filters with a finite-length impulse response (FIR) is a frequent task in practical applications. The designed filter should approximate a prescribed frequency response as close as possible. First, the design of causal filters is considered. For many applications the resulting filter should have a linear phase characteristic since this results in a constant (frequency independent) group delay. We therefore specialize the design to causal linear-phase filters in a second step.
### Causal Filters
Let's assume that the desired frequency characteristic of the discrete filter is given by its continuous frequency response $H_\text{d}(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ in the discrete-time Fourier domain. Its impulse response is given by inverse discrete-time Fourier transform (inverse DTFT) of the frequency response
\begin{equation}
h_\text{d}[k] = \frac{1}{2 \pi} \int\limits_{- \pi}^{\pi} H_\text{d}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) \, \mathrm{e}^{\,\mathrm{j}\,\Omega\,k} \; \mathrm{d}\Omega
\end{equation}
In the general case, $h_\text{d}[k]$ will not be a causal FIR. The [Paley-Wiener theorem](https://en.wikipedia.org/wiki/Paley%E2%80%93Wiener_theorem) states, that the transfer function of a causal system may only have zeros at a countable number of single frequencies. This is not the case for idealized filters, like e.g. the [ideal low-pass filter](https://en.wikipedia.org/wiki/Low-pass_filter#Ideal_and_real_filters), were the transfer function is zeros over an interval of frequencies. The basic idea of the window method is to truncate the impulse response $h_\text{d}[k]$ in order to derive a causal FIR filter. This can be achieved by applying a window $w[k]$ of finite length $N$ to $h_\text{d}[k]$
\begin{equation}
h[k] = h_\text{d}[k] \cdot w[k]
\end{equation}
where $h[k]$ denotes the impulse response of the designed filter and $w[k] = 0$ for $k < 0 \land k \geq N$. Its frequency response $H(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ is given by the multiplication theorem of the discrete-time Fourier transform (DTFT)
\begin{equation}
H(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \frac{1}{2 \pi} \; H_\text{d}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) \circledast W(\mathrm{e}^{\,\mathrm{j}\,\Omega})
\end{equation}
where $W(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ denotes the DTFT of the window function $w[k]$. The frequency response $H(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ of the filter is given as the periodic convolution of the desired frequency response $H_\text{d}(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ and the frequency response of the window function $W(\mathrm{e}^{\,\mathrm{j}\,\Omega})$. The frequency response $H(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ is equal to the desired frequency response $H_\text{d}(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ only if $W(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = 2 \pi \cdot \delta(\Omega)$. This would require that $w[k] = 1$ for $k = -\infty, \dots, \infty$. Hence for a window $w[k]$ of finite length, deviations from the desired frequency response are to be expected.
In order to investigate the effect of truncation on the frequency response $H(\mathrm{e}^{\,\mathrm{j}\,\Omega})$, a particular window is considered. A straightforward choice is the rectangular window $w[k] = \text{rect}_N[k]$ of length $N$. Its DTFT is given as
\begin{equation}
W(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \mathrm{e}^{-\mathrm{j} \, \Omega \,\frac{N-1}{2}} \cdot \frac{\sin(\frac{N \,\Omega}{2})}{\sin(\frac{\Omega}{2})}
\end{equation}
The frequency-domain properties of the rectangular window have already been discussed for the [leakage effect](../spectral_analysis_deterministic_signals/leakage_effect.ipynb). The rectangular window features a narrow main lobe at the cost of relative high sidelobe level. The main lobe gets narrower with increasing length $N$. The convolution of the desired frequency response with the frequency response of the window function effectively results in smoothing and ringing. While the main lobe will smooth discontinuities of the desired transfer function, the sidelobes result in undesirable ringing effects. The latter can be alleviated by using other window functions. Note that typical [window functions](../spectral_analysis_deterministic_signals/window_functions.ipynb) decay towards their ends and are symmetric with respect to their center. This may cause problems for desired impulse responses with large magnitudes towards their ends.
#### Example - Causal approximation of ideal low-pass
The design of an ideal low-pass filter using the window method is illustrated in the following. For $|\Omega| < \pi$ the transfer function of the ideal low-pass is given as
\begin{equation}
H_\text{d}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \begin{cases}
1 & \text{for } |\Omega| \leq \Omega_\text{c} \\
0 & \text{otherwise}
\end{cases}
\end{equation}
where $\Omega_\text{c}$ denotes the cut frequency of the low-pass. An inverse DTFT of the desired transfer function yields
\begin{equation}
h_\text{d}[k] = \frac{\Omega_\text{c}}{\pi} \cdot \text{sinc}(\Omega_\text{c} \, k)
\end{equation}
The impulse response $h_\text{d}[k]$ is not causal nor FIR. In order to derive a causal FIR approximation, a rectangular window $w[k]$ of length $N$ is applied
\begin{equation}
h[k] = h_\text{d}[k] \cdot \text{rect}_N[k]
\end{equation}
The resulting magnitude and phase response is computed numerically in the following.
```python
import numpy as np
import matplotlib.pyplot as plt
import scipy.signal as sig
%matplotlib inline
N = 32 # length of filter
Omc = np.pi/2
# compute impulse response
k = np.arange(N)
hd = Omc/np.pi * np.sinc(k*Omc/np.pi)
# windowing
w = np.ones(N)
h = hd * w
# frequency response
Om, H = sig.freqz(h)
# plot impulse response
plt.figure(figsize=(10, 3))
plt.stem(h, use_line_collection=True)
plt.title('Impulse response')
plt.xlabel(r'$k$')
plt.ylabel(r'$h[k]$')
# plot magnitude responses
plt.figure(figsize=(10, 3))
plt.plot([0, Omc, Omc], [0, 0, -100], 'r--', label='desired')
plt.plot(Om, 20 * np.log10(abs(H)), label='window method')
plt.title('Magnitude response')
plt.xlabel(r'$\Omega$')
plt.ylabel(r'$|H(e^{j \Omega})|$ in dB')
plt.axis([0, np.pi, -20, 3])
plt.grid()
plt.legend()
# plot phase responses
plt.figure(figsize=(10, 3))
plt.plot([0, Om[-1]], [0, 0], 'r--', label='desired')
plt.plot(Om, np.unwrap(np.angle(H)), label='window method')
plt.title('Phase')
plt.xlabel(r'$\Omega$')
plt.ylabel(r'$\varphi (\Omega)$ in rad')
plt.grid()
plt.legend()
```
<matplotlib.legend.Legend at 0x1c1af92ed0>
**Exercises**
* Does the resulting filter have the desired phase?
* Increase the length `N` of the filter. What changes?
Solution: The desired filter has zero-phase for all frequencies, hence $\varphi_\text{d}(\Omega) = 0$. The phase of the resulting filter is not zero as can be concluded from the lower illustration. The small local variations (ripples) in the magnitude $|H(\mathrm{e}^{\,\mathrm{j}\,\Omega})|$ of the transfer function of the resulting filter decrease with an increasing number `N` of filter coefficients. The achievable attenuation in the stop-band of the low-pass does not change.
### Zero-Phase Filters
Lets assume a general zero-phase filter with transfer function $H_d(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = A(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ with magnitude $A(\mathrm{e}^{\,\mathrm{j}\,\Omega}) \in \mathbb{R}$. Due to the symmetry relations of the DTFT, its impulse response $h_d[k] = \mathcal{F}_*^{-1} \{ H_d(e^{j \Omega} \}$ is conjugate complex symmetric
\begin{equation}
h_d[k] = h_d^*[-k]
\end{equation}
A zero-phase filter of length $N > 1$ is not causal as a consequence. The anti-causal part could simply be removed by windowing with a heaviside signal. However, this will result in large deviations between the desired transfer function and the designed filter. This explains the findings from the previous example, that an ideal-low pass cannot be realized very well by the window method. The reason is that an ideal-low pass has zero-phase, as most of the idealized filters.
The impulse response of a stable system, in the sense of the bounded-input/bounded-output (BIBO) criterion, has to be absolutely summable. Which in general is given when its magnitude decays by tendency with increasing time-index $k$.
This observation motivates to shift the desired impulse response to the center of the window in order to limit the effect of windowing. This can be achieved by replacing the zero-phase with a linear-phase, as is illustrated below.
### Causal Linear-Phase Filters
The design of a non-recursive causal FIR filter with a linear phase is often desired due to its constant group delay. Let's assume a filter with generalized linear phase. For $|\Omega| < \pi$ its transfer function is given as
\begin{equation}
H_\text{d}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = A(\mathrm{e}^{\,\mathrm{j}\,\Omega}) \cdot \mathrm{e}^{-\mathrm{j} \alpha \Omega + \mathrm{j} \beta}
\end{equation}
where $A(\mathrm{e}^{\,\mathrm{j}\,\Omega}) \in \mathbb{R}$ denotes the amplitude of the filter, $\alpha$ the linear slope of the phase and $\beta$ a constant phase offset. Such a system can be decomposed into two cascaded systems: a zero-phase system with transfer function $A(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ and an all-pass with phase $\varphi(\Omega) = - \alpha \Omega + \beta$. The linear phase term $- \alpha \Omega$ results in the constant group delay $t_g(\Omega) = \alpha$.
The impulse response $h[k]$ of a linear-phase system shows a specific symmetry which can be deduced from the symmetry relations of the DTFT for odd/even symmetry of $H_\text{d}(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ as
\begin{equation}
h[k] = \pm h[N-1-k]
\end{equation}
for $k=0, 1, \dots, N-1$ where $N \in \mathbb{N}$ denotes the length of the (finite) impulse response. The transfer function of a linear phase filter is given by its DTFT
\begin{equation}
H_\text{d}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \sum_{k=0}^{N-1} h[k] \, \mathrm{e}^{\,-\mathrm{j}\,\Omega\,k}
\end{equation}
Introducing the symmetry relations of the impulse response $h[k]$ into the DTFT and comparing the result with above definition of a generalized linear phase system reveals four different types of linear-phase systems. These can be discriminated with respect to their phase and magnitude characteristics
| Type | Length $N$ | Impulse Response $h[k]\;$ | Group Delay $\alpha$ in Samples| Constant Phase $\beta$ | Transfer Function $A(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ |
| :---: | :---: | :---: | :---:| :---: | :---: |
| 1 | odd | $h[k] = h[N-1-k]$ | $\alpha = \frac{N-1}{2} \in \mathbb{N}$ | $\beta = \{0, \pi\}$ | $A(\mathrm{e}^{\,\mathrm{j}\,\Omega})=A(\mathrm{e}^{-\,\mathrm{j}\,\Omega})$, all filter characteristics|
| 2 | even| $h[k] = h[N-1-k]$ | $\alpha = \frac{N-1}{2} \notin \mathbb{N}$ | $\beta = \{0, \pi\}$ | $A(\mathrm{e}^{\,\mathrm{j}\,\Omega})=A(\mathrm{e}^{-\,\mathrm{j}\,\Omega})$, $A(\mathrm{e}^{\,\mathrm{j}\,\pi}) = 0$, only lowpass or bandpass|
| 3 | odd | $h[k] = -h[N-1-k]$ | $\alpha = \frac{N-1}{2} \in \mathbb{N}$ | $\beta = \{ \frac{\pi}{2}, \frac{3 \pi}{2} \}$ | $A(\mathrm{e}^{\,\mathrm{j}\,\Omega})=-A(\mathrm{e}^{-\,\mathrm{j}\,\Omega})$, $A(\mathrm{e}^{\,\mathrm{j}\,0}) = A(\mathrm{e}^{\,\mathrm{j}\,\pi}) = 0$, only bandpass|
| 4 | even | $h[k] = -h[N-1-k]$ | $\alpha = \frac{N-1}{2} \notin \mathbb{N}$ | $\beta = \{ \frac{\pi}{2}, \frac{3 \pi}{2} \}$ | $A(\mathrm{e}^{\,\mathrm{j}\,\Omega})=-A(\mathrm{e}^{-\,\mathrm{j}\,\Omega})$, $A(\mathrm{e}^{\,\mathrm{j}\,0}) = 0$, only highpass or bandpass|
These relations have to be considered in the design of a causal linear phase filter. Depending on the desired magnitude characteristics $A(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ the suitable type is chosen. The odd/even length $N$ of the filter and the phase (or group delay) is chosen accordingly for the design of the filter.
#### Example - Causal linear-phase approximation of ideal low-pass
We aim at the design of a causal linear-phase low-pass using the window technique. According to the previous example, the desired frequency response has an even symmetry $A(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = A(\mathrm{e}^{\,-\mathrm{j}\,\Omega})$ with $A(\mathrm{e}^{\mathrm{j}\,0}) = 1$. This could be realized by a filter of type 1 or 2. We choose type 1 with $\beta = 0$, since the resulting filter exhibits an integer group delay of $t_g(\Omega) = \frac{N-1}{2}$ samples. Consequently the length of the filter $N$ has to be odd.
The impulse response $h_\text{d}[k]$ is given by the inverse DTFT of $H_\text{d}(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ incorporating the linear phase
\begin{equation}
h_\text{d}[k] = \frac{\Omega_\text{c}}{\pi} \cdot \text{sinc} \left( \Omega_\text{c} \left(k-\frac{N-1}{2} \right) \right)
\end{equation}
The impulse response fulfills the desired symmetry for $k=0,1, \dots, N-1$. A causal FIR approximation is obtained by applying a window function of length $N$ to the impulse response $h_\text{d}[k]$
\begin{equation}
h[k] = h_\text{d}[k] \cdot w[k]
\end{equation}
Note that the window function $w[k]$ also has to fulfill the desired symmetries.
As already outlined, the chosen window determines the properties of the transfer function $H(\mathrm{e}^{\,\mathrm{j}\,\Omega})$. The [spectral properties of commonly applied windows](../spectral_analysis_deterministic_signals/window_functions.ipynb) have been discussed previously. The width of the main lobe will generally influence the smoothing of the desired transfer function $H_\text{d}(\mathrm{e}^{\,\mathrm{j}\,\Omega})$, while the sidelobes influence the typical ringing artifacts. This is illustrated in the following.
```python
N = 33 # length of filter
Omc = np.pi/2
# compute impulse response
k = np.arange(N)
hd = Omc/np.pi * np.sinc((k-(N-1)/2)*Omc/np.pi)
# windowing
w1 = np.ones(N)
w2 = np.blackman(N)
h1 = hd * w1
h2 = hd * w2
# frequency responses
Om, H1 = sig.freqz(h1)
Om, H2 = sig.freqz(h2)
# plot impulse response
plt.figure(figsize=(10, 3))
plt.stem(h1, use_line_collection=True)
plt.title('Impulse response (rectangular window)')
plt.xlabel(r'$k$')
plt.ylabel(r'$h[k]$')
# plot magnitude responses
plt.figure(figsize=(10, 3))
plt.plot([0, Omc, Omc], [0, 0, -300], 'r--', label='desired')
plt.plot(Om, 20 * np.log10(abs(H1)), label='rectangular window')
plt.plot(Om, 20 * np.log10(abs(H2)), label='Blackmann window')
plt.title('Magnitude response')
plt.xlabel(r'$\Omega$')
plt.ylabel(r'$|H(e^{j \Omega})|$ in dB')
plt.axis([0, np.pi, -120, 3])
plt.legend(loc=3)
plt.grid()
# plot phase responses
plt.figure(figsize=(10, 3))
plt.plot(Om, np.unwrap(np.angle(H1)), label='rectangular window')
plt.plot(Om, np.unwrap(np.angle(H2)), label='Blackmann window')
plt.title('Phase')
plt.xlabel(r'$\Omega$')
plt.ylabel(r'$\varphi (\Omega)$ in rad')
plt.legend(loc=3)
plt.grid()
```
**Exercises**
* Does the impulse response fulfill the required symmetries for a type 1 filter?
* Can you explain the differences between the magnitude responses $|H(\mathrm{e}^{\,\mathrm{j}\,\Omega})|$ for the different window functions?
* What happens if you increase the length `N` of the filter?
Solution: Inspection of the impulse response reveals that it shows the symmetry $h[k] = h[N-1-k]$ of a type 1 filter for odd `N`. The rectangular window features a narrow main lobe at the cost of a high level of the side lobes, the main lobe of the Blackmann window is wider but the level of the side lobes is lower compared to the rectangular window. This explains the behavior of the magnitude responses $|H(\mathrm{e}^{\,\mathrm{j}\,\Omega})|$ in the stop-band of the realized low-passes. The distance between the local minima in the magnitude responses $|H(\mathrm{e}^{\,\mathrm{j}\,\Omega})|$ decreases with increasing length `N` and the attenuation for frequencies towards the Nyquist frequency increases.
**Copyright**
This notebook is provided as [Open Educational Resource](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Sascha Spors, Digital Signal Processing - Lecture notes featuring computational examples*.
| 643250c9fe6e5a4689e79ee54aa190eece7ce8e6 | 481,996 | ipynb | Jupyter Notebook | filter_design/window_method.ipynb | Fun-pee/signal-processing | 205d5e55e3168a1ec9da76b569af92c0056619aa | [
"MIT"
] | 630 | 2016-01-05T17:11:43.000Z | 2022-03-30T07:48:27.000Z | filter_design/window_method.ipynb | jools76/digital-signal-processing-lecture | 4bdfe13fa4a7502412f3f0d54deb8f034aef1ce2 | [
"MIT"
] | 12 | 2016-11-07T15:49:55.000Z | 2022-03-10T13:05:50.000Z | filter_design/window_method.ipynb | jools76/digital-signal-processing-lecture | 4bdfe13fa4a7502412f3f0d54deb8f034aef1ce2 | [
"MIT"
] | 172 | 2015-12-26T21:05:40.000Z | 2022-03-10T23:13:30.000Z | 61.818135 | 28,890 | 0.631819 | true | 4,788 | Qwen/Qwen-72B | 1. YES
2. YES | 0.855851 | 0.828939 | 0.709448 | __label__eng_Latn | 0.968299 | 0.486617 |
# Simple Car
One of the easiest examples to understand is the simple car, which is shown in Figure 13.1. We all know that a car cannot drive sideways because the back wheels would have to slide instead of roll. This is why parallel parking is challenging. If all four wheels could be turned simultaneously toward the curb, it would be trivial to park a car. The complicated maneuvers for parking a simple car arise because of rolling constraints.
The car can be imagined as a rigid body that moves in the plane. Therefore, its C-space is $ {\cal C}= {\mathbb{R}}^2 \times {\mathbb{S}}^1$. Figure 13.1 indicates several parameters associated with the car. A configuration is denoted by $ q = (x,y,\theta)$. The body frame of the car places the origin at the center of rear axle, and the $ x$-axis points along the main axis of the car. Let $ s$ denote the (signed) speed13.2 of the car. Let $ \phi $ denote the steering angle (it is negative for the wheel orientations shown in Figure 13.1). The distance between the front and rear axles is represented as $ L$. If the steering angle is fixed at $ \phi $, the car travels in a circular motion, in which the radius of the circle is $ \rho$. Note that $ \rho$ can be determined from the intersection of the two axes shown in Figure 13.1 (the angle between these axes is $ \vert\phi\vert$).
Using the current notation, the task is to represent the motion of the car as a set of equations of the form (13.11)
\begin{equation}\dot x = f_1(x,y,\theta,s,\phi)\end{equation}
\begin{equation}\dot y = f_2(x,y,\theta,s,\phi)\end{equation}
\begin{equation}{\dot \theta} = f_3(x,y,\theta,s,\phi)\end{equation}
# Dubins Curve
It was shown in Dubin's paper "On curves of minimal length with a constraint on average curvature, and with prescribed initial and terminal positions and tangents" that between any two configurations, the shortest path for the Dubins car can always be expressed as a combination of no more than three motion primitives. Each motion primitive applies a constant action over an interval of time. Furthermore, the only actions that are needed to traverse the shortest paths are $ u \in \{-1,0,1\}$. The primitives and their associated symbols are shown in Figure 15.3. The $ S$ primitive drives the car straight ahead. The $ L$ and $ R$ primitives turn as sharply as possible to the left and right, respectively. Using these symbols, each possible kind of shortest path can be designated as a sequence of three symbols that corresponds to the order in which the primitives are applied. Let such a sequence be called a word . There is no need to have two consecutive primitives of the same kind because they can be merged into one. Under this observation, ten possible words of length three are possible. Dubins showed that only these six words are possibly optimal:
$\displaystyle \{ LRL,\; RLR, \; LSL,\; LSR,\; RSL,\; RSR \} .$ (15.44)
The shortest path between any two configurations can always be characterized by one of these words. These are called the Dubins curves.
Figure 15.4: The trajectories for two words are shown in $ {\cal W}= {\mathbb{R}}^2$.
To be more precise, the duration of each primitive should also be specified. For $ L$ or $ R$, let a subscript denote the total amount of rotation that accumulates during the application of the primitive. For $ S$, let a subscript denote the total distance traveled. Using such subscripts, the Dubins curves can be more precisely characterized as
$$\displaystyle \{ L_\alpha R_\beta L_\gamma, \; R_\alpha L_\beta R_\gamma, \; L_\alpha S_d L_\gamma, \; L_\alpha S_d R_\gamma, \; R_\alpha S_d L_\gamma, \; R_\alpha S_d R_\gamma \} ,$$ (15.45)
in which $ \alpha, \gamma \in [0,2 \pi)$, $ \beta \in (\pi,2 \pi)$, and $ d \geq 0$. Figure 15.4 illustrates two cases. Note that $ \beta$ must be greater than $ \pi $ (if it is less, then some other word becomes optimal).
It will be convenient to invent a compressed form of the words to group together paths that are qualitatively similar. This will be particularly valuable when Reeds-Shepp curves are introduced in Section 15.3.2 because there are 46 of them, as opposed to $ 6$ Dubins curves. Let $ C$ denote a symbol that means ``curve,'' and represents either $ R$ or $ L$. Using $ C$, the six words in (15.44) can be compressed to only two base words:
$$\displaystyle \{ CCC, \; CSC \} .$$ (15.46)
In this compressed form, remember that two consecutive $ C$s must be filled in by distinct turns ($ RR$ and $ LL$ are not allowed as subsequences). In compressed form, the base words can be specified more precisely as
$$\displaystyle \{ C_\alpha C_\beta C_\gamma, \; C_\alpha S_d C_\gamma \} ,$$ (15.47)
in which $ \alpha, \gamma \in [0,2 \pi)$, $ \beta \in (\pi,2 \pi)$, and $ d \geq 0$.
## Diagram for all six types of dubins curves are given below :-
From the above figure, we can see that $$ 90 + \phi + \theta + 90 + \theta_i = 360 $$
$$ \therefore \theta_i = 180 - (\theta + \phi) $$
Our robot car is moving along a left arc from source position at point $ t_1(x_s,y_s) $ to goal position value at point $ t_2(x_g,y_g) $
$$ x_g = x_s - \rho cos(90-\theta) + \rho cos(90-\theta_i) $$
$$ x_g = x_s - \rho (cos90cos\theta + sin90sin\theta) + \rho (cos90cos\theta_i + sin90sin\theta_i) $$
we know that cos90 = 0, sin90 = 1 $$ x_g = x_s - \rho sin\theta + \rho sin\theta_i $$
$$ x_g = x_s - \rho sin\theta + \rho sin[180 - (\theta + \phi)] $$
$$ x_g = x_s - \rho sin\theta + \rho sin90cos[90-(\theta + \phi)]+cos90sin[90-(\theta + \phi)] $$
$$ x_g = x_s - \rho sin\theta + \rho cos[90-(\theta + \phi)] $$
$$ x_g = x_s - \rho sin\theta + \rho cos90cos(\theta + \phi)+ sin90sin(\theta + \phi) $$
$$ \therefore x_g = x_s - \rho sin\theta + \rho sin(\theta + \phi) $$
$$ y_g = y_s + \rho cos\theta + \rho cos\theta_i $$
$$ y_g = y_s + \rho cos\theta + \rho cos[180 - (\theta + \phi)] $$
$$ y_g = y_s + \rho cos\theta + \rho [cos90cos(90-(\theta + \phi)) - sin90sin(90-(\theta + \phi))] $$
$$ y_g = y_s + \rho cos\theta - \rho sin[90-(\theta + \phi)] $$
$$ y_g = y_s + \rho cos\theta - \rho [sin90cos(\theta + \phi) - cos90sin(\theta + \phi)] $$
$$ \therefore y_g = y_s + \rho cos\theta - \rho cos(\theta + \phi) $$
Let $(x_0,y_0;\alpha)$ be the initial configuration and $(x_1,y_1;\beta)$ be the terminal configuration.
Three corresponding operators, $L_v$ (for left turn), $R_v$ (for right turn) and $S_v$ (for straight), are needed, which transform an arbitrary configuration $(x,y;\theta)$ into its corresponding image configuration,
$$L_v(x,y;\theta) = (x - \rho sin\theta + \rho sin(\theta + \phi),\; y + \rho cos\theta - \rho cos(\theta + \phi);\; \theta + \phi)$$
Now imagine a right turn from $t_2$ back to $t_1$, where the initial yaw angle is $\theta_h = \theta + \phi$ and the final yaw angle is $\theta_h - \phi$, both measured with respect to the x axis,
$$R_v(x,y;\theta_h) = (x - \rho sin(\theta_h - \phi) + \rho sin\theta_h,\; y + \rho cos(\theta_h - \phi) - \rho cos\theta_h;\; \theta_h - \phi)$$
$$S_v(x,y;\theta) = (x + vcos\theta,\; y + vsin\theta;\; \theta)$$
where $v$ is the length of the straight segment.
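A minimal code sketch of these three operators (the helper names `L_op`, `R_op`, `S_op` and the keyword `rho` are my own, chosen for illustration):

```python
import math

def L_op(x, y, theta, phi, rho=1.0):
    # left arc of angle phi
    return (x - rho * math.sin(theta) + rho * math.sin(theta + phi),
            y + rho * math.cos(theta) - rho * math.cos(theta + phi),
            theta + phi)

def R_op(x, y, theta, phi, rho=1.0):
    # right arc of angle phi
    return (x + rho * math.sin(theta) - rho * math.sin(theta - phi),
            y - rho * math.cos(theta) + rho * math.cos(theta - phi),
            theta - phi)

def S_op(x, y, theta, v):
    # straight segment of length v
    return (x + v * math.cos(theta), y + v * math.sin(theta), theta)
```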
From the above figure, $$\angle t_2Ct_1 = (90 - \theta_i) + (90 - \theta) = 180 - \theta_i - \theta = \theta + \phi - \theta = \phi$$ where the last step uses $\theta_i = 180 - (\theta + \phi)$.
For a left or right arc of angle $\phi$, the arc length is $$v = \rho\phi$$
The Euclidean distance between the two positions is $$ d = \sqrt {(x_1 - x_0)^2 + (y_1 - y_0)^2}$$
The lengths of the three segments of a Dubins path are denoted $t$, $p$ and $q$ respectively, so the total path length is $l = t + p + q$.
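The planner code that feeds the segment routines below is not shown in this section; a hypothetical sketch of how the normalized inputs ($\alpha$, $\beta$, $\hat{d}$ and the normalized position differences) might be prepared is:

```python
import math

def normalize_inputs(s_x, s_y, s_yaw, g_x, g_y, g_yaw, rho):
    # Hypothetical helper: normalize the start/goal difference by the turning radius.
    dx = (g_x - s_x) / rho              # x_hat_1 - x_hat_0
    dy = (g_y - s_y) / rho              # y_hat_1 - y_hat_0
    dist = math.hypot(dx, dy)           # d_hat
    return s_yaw, g_yaw, dist, dx, dy   # alpha, beta, d_hat, dx, dy
```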
```python
"""
Dubins Path
"""
import math
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial.transform import Rotation as Rot
import draw
import random
%matplotlib inline
```
```python
# class for PATH element
class PATH:
def __init__(self, L, mode, x, y, yaw):
self.L = L # total path length [float]
self.mode = mode # type of each part of the path [string]
self.x = x # final x positions [m]
self.y = y # final y positions [m]
self.yaw = yaw # final yaw angles [rad]
```
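For illustration (all values below are arbitrary), a `PATH` object is just a container for a planned result:

```python
# Arbitrary example values; a planner would fill these in.
example = PATH(L=3.0, mode=["L", "S", "L"],
               x=[0.0, 1.0, 2.0], y=[0.0, 0.0, 1.0], yaw=[0.0, 0.0, 1.57])
print(example.L, example.mode)
```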
```python
# utils
def pi_2_pi(theta):
while theta > math.pi:
theta -= 2.0 * math.pi
while theta < -math.pi:
theta += 2.0 * math.pi
return theta
```
```python
def mod2pi(theta):
return theta - 2.0 * math.pi * math.floor(theta / math.pi / 2.0)
```
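A small usage check of the two angle helpers (example values chosen by me):

```python
print(pi_2_pi(3 * math.pi / 2))   # wraps to -pi/2
print(mod2pi(-math.pi / 2))       # wraps to 3*pi/2
```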
Now apply the three operators to the source configuration to obtain the goal configuration for an LSL Dubins path
$$L_q(S_p(L_t(x_0,y_0;\alpha)))=(x_1,y_1;\beta)$$
We know that $t=\rho\phi_1$ and $q=\rho\phi_2$
By applying the corresponding operators, we will get the following equation for $x_1$
$$x_0 - \rho sin\alpha + \rho sin(\alpha+\phi_1)+ pcos(\alpha+\phi_1)-\rho sin(\alpha+\phi_1)+\rho sin(\alpha+\phi_1+\phi_2)=x_1$$
$$x_0 - \rho sin\alpha + \rho sin(\alpha+\frac{t}{\rho}) + pcos(\alpha+\frac{t}{\rho})-\rho sin(\alpha+\frac{t}{\rho})+\rho sin(\alpha+\frac{t}{\rho}+\frac{q}{\rho})=x_1$$
$$\frac{x_0}{\rho}-sin\alpha + sin(\alpha+\frac{t}{\rho})+ \frac{p}{\rho}cos(\alpha+\frac{t}{\rho})-sin(\alpha+\frac{t}{\rho})+sin(\alpha+\frac{t}{\rho}+\frac{q}{\rho})=\frac{x_1}{\rho}$$
Let us assume that $$\hat{x_0}=\frac{x_0}{\rho},\hat{x_1}=\frac{x_1}{\rho},\hat{y_0}=\frac{y_0}{\rho},\hat{y_1}=\frac{y_1}{\rho},\hat{t}=\frac{t}{\rho},\hat{p}=\frac{p}{\rho},\hat{q}=\frac{q}{\rho},\hat{d}=\frac{d}{\rho}$$
Now the above equation can be rewritten, in the normalized quantities, as equation (1) $$\hat{x_0}-sin\alpha + sin(\alpha+\hat{t})+ \hat{p}cos(\alpha+\hat{t})-sin(\alpha+\hat{t})+sin(\alpha+\hat{t}+\hat{q})=\hat{x_1}$$
By applying the corresponding operators, we will get the following equation (2) for $y_1$
$$\hat{y_0}+cos\alpha-cos(\alpha+\hat{t}) + \hat{p}sin(\alpha+\hat{t})+cos(\alpha+\hat{t})-cos(\alpha+\hat{t}+\hat{q})=\hat{y_1}$$
We will get another equation $$\alpha+\hat{t}+\hat{q}=\beta(mod2\pi) $$
Now equations (1) and (2) can be rewritten as follows
$$\hat{p}cos(\alpha+\hat{t})=\hat{x_1}-\hat{x_0}+sin\alpha-sin(\alpha+\hat{t}+\hat{q})$$
$$\hat{p}cos(\alpha+\hat{t})=(\hat{x_1}-\hat{x_0})+(sin\alpha-sin\beta)$$
$$\hat{p}sin(\alpha+\hat{t})=(\hat{y_1}-\hat{y_0})-(cos\alpha-cos\beta)$$
After $(1)^2 + (2)^2$, we get
$$\hat{p}^2 = (\hat{x_1}-\hat{x_0})^2 + (sin\alpha-sin\beta)^2 + 2(\hat{x_1}-\hat{x_0})(sin\alpha-sin\beta) + (\hat{y_1}-\hat{y_0})^2 + (cos\alpha-cos\beta)^2 - 2(\hat{y_1}-\hat{y_0})(cos\alpha-cos\beta)$$
$$\hat{p}^2 = (\hat{x_1}-\hat{x_0})^2 + (\hat{y_1}-\hat{y_0})^2 + (sin\alpha-sin\beta)^2 + (cos\alpha-cos\beta)^2 + 2(\hat{x_1}-\hat{x_0})(sin\alpha-sin\beta) - 2(\hat{y_1}-\hat{y_0})(cos\alpha-cos\beta)$$
$$\hat{p}^2 = (\hat{d})^2 + (sin\alpha)^2+(sin\beta)^2-2sin\alpha sin\beta + (cos\alpha)^2+(cos\beta)^2-2cos\alpha cos\beta - 2(\hat{x_1}-\hat{x_0})(sin\alpha+sin\beta) + 2(\hat{y_1}-\hat{y_0})(cos\alpha+cos\beta)$$
$$\hat{p}^2 = (\hat{d})^2 + [(sin\alpha)^2+(cos\alpha)^2]+[(sin\beta)^2+(cos\beta)^2]-2(sin\alpha sin\beta+cos\alpha cos\beta) - 2(\hat{x_1}-\hat{x_0})(sin\alpha+sin\beta) + 2(\hat{y_1}-\hat{y_0})(cos\alpha+cos\beta)$$
$$\hat{p}^2 = (\hat{d})^2 + 2 - 2cos(\alpha - \beta) - 2(\hat{x_1}-\hat{x_0})(sin\alpha+sin\beta) + 2(\hat{y_1}-\hat{y_0})(cos\alpha+cos\beta)$$
Let us assume $$d_n = (\hat{d})^2 + 2 - 2cos(\alpha - \beta) - 2(\hat{x_1}-\hat{x_0})(sin\alpha+sin\beta) + 2(\hat{y_1}-\hat{y_0})(cos\alpha+cos\beta)$$
$$\therefore \hat{p} = \sqrt{d_n}$$
After dividing equation(2) by (1), we get
$$ tan(\alpha+\hat{t}) = \frac{(\hat{y_1}-\hat{y_0})-(cos\alpha-cos\beta)}{(\hat{x_1}-\hat{x_0})+(sin\alpha-sin\beta)} = k$$
where, k is a constant
$$(\alpha+\hat{t})= \arctan{k} = rec $$
$$\hat{t} = (-\alpha + rec)mod2\pi $$
$$\hat{q} = (\beta - rec)mod2\pi $$
```python
def LSL(alpha, beta, dist, dx, dy):
sin_a = math.sin(alpha)
sin_b = math.sin(beta)
cos_a = math.cos(alpha)
cos_b = math.cos(beta)
cos_a_b = math.cos(alpha - beta)
p_lsl = 2 + dist ** 2 - 2 * cos_a_b + 2 * dx * (sin_a - sin_b) + 2 * dy * (cos_a + cos_b)
if p_lsl < 0:
return None, None, None, ["L", "S", "L"]
else:
p_lsl = math.sqrt(p_lsl)
k = (dy - cos_a + cos_b)/(dx + sin_a - sin_b)
t_lsl = mod2pi(-alpha + math.atan(k))
q_lsl = mod2pi(beta - math.atan(k))
return t_lsl, p_lsl, q_lsl, ["L", "S", "L"]
```
Now apply all three operators to the source waypoint to obtain the goal position for an RSR Dubins path
$$R_q(S_p(R_t(x_0,y_0;\alpha)))=(x_1,y_1;\beta)$$
We know that $t=\rho\phi_1$ and $q=\rho\phi_2$
Let us assume that $$\hat{x_0}=\frac{x_0}{\rho},\hat{x_1}=\frac{x_1}{\rho},\hat{y_0}=\frac{y_0}{\rho},\hat{y_1}=\frac{y_1}{\rho},\hat{t}=\frac{t}{\rho},\hat{p}=\frac{p}{\rho},\hat{q}=\frac{q}{\rho},\hat{d}=\frac{d}{\rho}$$
By applying the corresponding operators, we will get the following equation (1) for $x_1$
$$\hat{x_0}-sin(\alpha-\hat{t}) + sin\alpha + \hat{p}cos(\alpha-\hat{t})-sin(\alpha-\hat{t}-\hat{q})+sin(\alpha-\hat{t}) =\hat{x_1}$$
By applying the corresponding operators, we will get the following equation (2) for $y_1$
$$\hat{y_0}+cos(\alpha-\hat{t})-cos\alpha + \hat{p}sin(\alpha-\hat{t})+cos(\alpha-\hat{t}-\hat{q})-cos(\alpha-\hat{t})=\hat{y_1}$$
We will get another equation $$\alpha-\hat{t}-\hat{q}=\beta(mod2\pi) $$
Now equation (1) and (2) can be rewritten as follows
$$\hat{p}cos(\alpha-\hat{t})=(\hat{x_1}-\hat{x_0})-(sin\alpha-sin\beta)$$
$$\hat{p}sin(\alpha-\hat{t})=(\hat{y_1}-\hat{y_0})+(cos\alpha-cos\beta)$$
After $(1)^2 + (2)^2$, we get
$$\hat{p}^2 = (\hat{x_1}-\hat{x_0})^2 + (sin\alpha-sin\beta)^2 - 2(\hat{x_1}-\hat{x_0})(sin\alpha-sin\beta) + (\hat{y_1}-\hat{y_0})^2 + (cos\alpha-cos\beta)^2 + 2(\hat{y_1}-\hat{y_0})(cos\alpha-cos\beta)$$
$$\hat{p}^2 = (\hat{x_1}-\hat{x_0})^2 + (\hat{y_1}-\hat{y_0})^2 + (sin\alpha-sin\beta)^2 + (cos\alpha-cos\beta)^2 - 2(\hat{x_1}-\hat{x_0})(sin\alpha-sin\beta) + 2(\hat{y_1}-\hat{y_0})(cos\alpha-cos\beta)$$
$$\hat{p}^2 = (\hat{d})^2 + (sin\alpha)^2+(sin\beta)^2-2sin\alpha sin\beta + (cos\alpha)^2+(cos\beta)^2-2cos\alpha cos\beta - 2(\hat{x_1}-\hat{x_0})(sin\alpha-sin\beta) + 2(\hat{y_1}-\hat{y_0})(cos\alpha-cos\beta)$$
$$\hat{p}^2 = (\hat{d})^2 + [(sin\alpha)^2+(cos\alpha)^2]+[(sin\beta)^2+(cos\beta)^2]-2(sin\alpha sin\beta+cos\alpha cos\beta) - 2(\hat{x_1}-\hat{x_0})(sin\alpha-sin\beta) + 2(\hat{y_1}-\hat{y_0})(cos\alpha-cos\beta)$$
$$\hat{p}^2 = (\hat{d})^2 + 2 - 2cos(\alpha - \beta) - 2(\hat{x_1}-\hat{x_0})(sin\alpha-sin\beta) + 2(\hat{y_1}-\hat{y_0})(cos\alpha-cos\beta)$$
Let us assume $$d_n = (\hat{d})^2 + 2 - 2cos(\alpha - \beta) - 2(\hat{x_1}-\hat{x_0})(sin\alpha-sin\beta) + 2(\hat{y_1}-\hat{y_0})(cos\alpha-cos\beta)$$
$$\therefore \hat{p} = \sqrt{d_n}$$
After dividing equation(2) by (1), we get
$$ tan(\alpha-\hat{t}) = \frac{(\hat{y_1}-\hat{y_0})+(cos\alpha-cos\beta)}{(\hat{x_1}-\hat{x_0})-(sin\alpha-sin\beta)} = k$$
where, k is a constant
$$(\alpha-\hat{t})= \arctan{k} = rec $$
$$\hat{t} = (\alpha - rec)mod2\pi $$
$$\hat{q} = (-\beta + rec)mod2\pi $$
```python
def RSR(alpha, beta, dist, dx, dy):
sin_a = math.sin(alpha)
sin_b = math.sin(beta)
cos_a = math.cos(alpha)
cos_b = math.cos(beta)
cos_a_b = math.cos(alpha - beta)
p_rsr = 2 + dist ** 2 - 2 * cos_a_b - 2 * dx * (sin_a - sin_b) + 2 * dy * (cos_a - cos_b)
if p_rsr < 0:
return None, None, None, ["R", "S", "R"]
else:
p_rsr = math.sqrt(p_rsr)
k = (dy + cos_a - cos_b)/(dx - sin_a + sin_b)
t_rsr = mod2pi(alpha - math.atan(k))
q_rsr = mod2pi(-beta + math.atan(k))
return t_rsr, p_rsr, q_rsr, ["R", "S", "R"]
```
Now apply all three operators to the source waypoint to obtain the goal position for an LSR Dubins path
$$L_q(S_p(R_t(x_0,y_0;\alpha)))=(x_1,y_1;\beta)$$
We know that $t=\rho\phi_1$ and $q=\rho\phi_2$
By applying the corresponding operators, we will get the following equation for $x_1$
$$x_0 - \rho sin(\alpha-\phi_1)+\rho sin\alpha + pcos(\alpha-\phi_1)-\rho sin(\alpha-\phi_1)+\rho sin(\alpha-\phi_1+\phi_2)=x_1$$
$$x_0 - \rho sin(\alpha-\frac{t}{\rho})+\rho sin\alpha + pcos(\alpha-\frac{t}{\rho})-\rho sin(\alpha-\frac{t}{\rho})+\rho sin(\alpha-\frac{t}{\rho}+\frac{q}{\rho})=x_1$$
$$\frac{x_0}{\rho}-sin(\alpha-\frac{t}{\rho})+sin\alpha + \frac{p}{\rho}cos(\alpha-\frac{t}{\rho})-sin(\alpha-\frac{t}{\rho})+sin(\alpha-\frac{t}{\rho}+\frac{q}{\rho})=\frac{x_1}{\rho}$$
Let us assume that $$\hat{x_0}=\frac{x_0}{\rho},\hat{x_1}=\frac{x_1}{\rho},\hat{y_0}=\frac{y_0}{\rho},\hat{y_1}=\frac{y_1}{\rho},\hat{t}=\frac{t}{\rho},\hat{p}=\frac{p}{\rho},\hat{q}=\frac{q}{\rho},\hat{d}=\frac{d}{\rho}$$
Now the above equation can be rewritten as equation (1) $$\hat{x_0}-sin(\alpha-\hat{t})+sin\alpha + \hat{p}cos(\alpha-\hat{t})-sin(\alpha-\hat{t})+sin(\alpha-\hat{t}+\hat{q})=\hat{x_1}$$
By applying the corresponding operators, we will get the following equation (2) for $y_1$
$$\hat{y_0}+cos(\alpha-\hat{t})-cos\alpha + \hat{p}sin(\alpha-\hat{t})+cos(\alpha-\hat{t})-cos(\alpha-\hat{t}+\hat{q})=\hat{y_1}$$
We will get another equation $$\alpha-\hat{t}+\hat{q}=\beta(mod2\pi) $$
Now equation (1) and (2) can be rewritten as follows
$$\hat{p}cos(\alpha-\hat{t})-2sin(\alpha-\hat{t})=\hat{x_1}-\hat{x_0}-sin\alpha-sin(\alpha-\hat{t}+\hat{q})$$
$$\hat{p}cos(\alpha-\hat{t})-2sin(\alpha-\hat{t})=(\hat{x_1}-\hat{x_0})-(sin\alpha+sin\beta)$$
$$\hat{p}sin(\alpha-\hat{t})+2cos(\alpha-\hat{t})=(\hat{y_1}-\hat{y_0})+(cos\alpha+cos\beta)$$
After $(1)^2 + (2)^2$, we get
$$\hat{p}^2 + 4 = (\hat{x_1}-\hat{x_0})^2 + (sin\alpha+sin\beta)^2 - 2(\hat{x_1}-\hat{x_0})(sin\alpha+sin\beta) + (\hat{y_1}-\hat{y_0})^2 + (cos\alpha+cos\beta)^2 + 2(\hat{y_1}-\hat{y_0})(cos\alpha+cos\beta)$$
$$\hat{p}^2 + 4 = (\hat{x_1}-\hat{x_0})^2 + (\hat{y_1}-\hat{y_0})^2 + (sin\alpha+sin\beta)^2 + (cos\alpha+cos\beta)^2 - 2(\hat{x_1}-\hat{x_0})(sin\alpha+sin\beta) + 2(\hat{y_1}-\hat{y_0})(cos\alpha+cos\beta)$$
$$\hat{p}^2 + 4 = (\hat{d})^2 + (sin\alpha)^2+(sin\beta)^2+2sin\alpha sin\beta + (cos\alpha)^2+(cos\beta)^2+2cos\alpha cos\beta - 2(\hat{x_1}-\hat{x_0})(sin\alpha+sin\beta) + 2(\hat{y_1}-\hat{y_0})(cos\alpha+cos\beta)$$
$$\hat{p}^2 + 4 = (\hat{d})^2 + [(sin\alpha)^2+(cos\alpha)^2]+[(sin\beta)^2+(cos\beta)^2]+2(sin\alpha sin\beta+cos\alpha cos\beta) - 2(\hat{x_1}-\hat{x_0})(sin\alpha+sin\beta) + 2(\hat{y_1}-\hat{y_0})(cos\alpha+cos\beta)$$
$$\hat{p}^2 + 4 = (\hat{d})^2 + 2 + 2cos(\alpha - \beta) - 2(\hat{x_1}-\hat{x_0})(sin\alpha+sin\beta) + 2(\hat{y_1}-\hat{y_0})(cos\alpha+cos\beta)$$
$$\hat{p}^2 = (\hat{d})^2 - 2 + 2cos(\alpha - \beta) - 2(\hat{x_1}-\hat{x_0})(sin\alpha+sin\beta) + 2(\hat{y_1}-\hat{y_0})(cos\alpha+cos\beta)$$
Let us assume $$d_n = (\hat{d})^2 - 2 + 2cos(\alpha - \beta) - 2(\hat{x_1}-\hat{x_0})(sin\alpha+sin\beta) + 2(\hat{y_1}-\hat{y_0})(cos\alpha+cos\beta)$$
$$\therefore \hat{p} = \sqrt{d_n}$$
After dividing equation(1) by (2), we get
$$\frac{\hat{p}cos(\alpha-\hat{t})-2sin(\alpha-\hat{t})}{\hat{p}sin(\alpha-\hat{t})+2cos(\alpha-\hat{t})} = \frac{(\hat{x_1}-\hat{x_0})-(sin\alpha+sin\beta)}{(\hat{y_1}-\hat{y_0})+(cos\alpha+cos\beta)} = k$$
where, k is a constant
$$\frac{\hat{p}-2tan(\alpha-\hat{t})}{\hat{p}tan(\alpha-\hat{t})+ 2} = k $$
$$\hat{p}-2tan(\alpha-\hat{t}) = k\hat{p}tan(\alpha-\hat{t})+ 2k $$
$$tan(\alpha-\hat{t})= \frac{\hat{p}-2k}{k\hat{p}+2} $$
$$(\alpha-\hat{t})= \arctan{\frac{\hat{p}-2k}{k\hat{p}+2}} = rec $$
$$\hat{t} = (\alpha - rec)mod2\pi $$
$$\hat{q} = (\beta - rec)mod2\pi $$
```python
def LSR(alpha, beta, dist, dx, dy):
sin_a = math.sin(alpha)
sin_b = math.sin(beta)
cos_a = math.cos(alpha)
cos_b = math.cos(beta)
cos_a_b = math.cos(alpha - beta)
p_lsr = -2 + dist ** 2 + 2 * cos_a_b - 2 * dx * (sin_a + sin_b) + 2 * dy * (cos_a + cos_b)
if p_lsr < 0:
return None, None, None, ["L", "S", "R"]
else:
p_lsr = math.sqrt(p_lsr)
k = (dx - sin_a - sin_b)/(dy + cos_a + cos_b)
rec = math.atan2(p_lsr - 2*k , k*p_lsr + 2)
t_lsr = mod2pi(alpha - rec)
q_lsr = mod2pi(beta - rec)
return t_lsr, p_lsr, q_lsr, ["L", "S", "R"]
```
Now apply all three operators to the source waypoint to obtain the goal position for an RSL Dubins path
$$R_q(S_p(L_t(x_0,y_0;\alpha)))=(x_1,y_1;\beta)$$
We know that $t=\rho\phi_1$ and $q=\rho\phi_2$
Let us assume that $$\hat{x_0}=\frac{x_0}{\rho},\hat{x_1}=\frac{x_1}{\rho},\hat{y_0}=\frac{y_0}{\rho},\hat{y_1}=\frac{y_1}{\rho},\hat{t}=\frac{t}{\rho},\hat{p}=\frac{p}{\rho},\hat{q}=\frac{q}{\rho},\hat{d}=\frac{d}{\rho}$$
By applying the corresponding operators, we will get the following equation (1) for $x_1$
$$\hat{x_0}-sin\alpha+sin(\alpha+\hat{t}) + \hat{p}cos(\alpha+\hat{t})-sin(\alpha+\hat{t}-\hat{q})+sin(\alpha+\hat{t})=\hat{x_1}$$
By applying the corresponding operators, we will get the following equation (2) for $y_1$
$$\hat{y_0}+cos\alpha-cos(\alpha+\hat{t}) + \hat{p}sin(\alpha+\hat{t})+cos(\alpha+\hat{t}-\hat{q})-cos(\alpha+\hat{t})=\hat{y_1}$$
We will get another equation $$\alpha+\hat{t}-\hat{q}=\beta(mod2\pi) $$
Now equation (1) and (2) can be rewritten as follows
$$\hat{p}cos(\alpha+\hat{t})+2sin(\alpha+\hat{t})=(\hat{x_1}-\hat{x_0})+(sin\alpha+sin\beta)$$
$$\hat{p}sin(\alpha+\hat{t})-2cos(\alpha+\hat{t})=(\hat{y_1}-\hat{y_0})-(cos\alpha+cos\beta)$$
After $(1)^2 + (2)^2$, we get
$$\hat{p}^2 + 4 = (\hat{x_1}-\hat{x_0})^2 + (sin\alpha+sin\beta)^2 + 2(\hat{x_1}-\hat{x_0})(sin\alpha+sin\beta) + (\hat{y_1}-\hat{y_0})^2 + (cos\alpha+cos\beta)^2 - 2(\hat{y_1}-\hat{y_0})(cos\alpha+cos\beta)$$
$$\hat{p}^2 + 4 = (\hat{x_1}-\hat{x_0})^2 + (\hat{y_1}-\hat{y_0})^2 + (sin\alpha+sin\beta)^2 + (cos\alpha+cos\beta)^2 + 2(\hat{x_1}-\hat{x_0})(sin\alpha+sin\beta) - 2(\hat{y_1}-\hat{y_0})(cos\alpha+cos\beta)$$
$$\hat{p}^2 + 4 = (\hat{d})^2 + (sin\alpha)^2+(sin\beta)^2+2sin\alpha sin\beta + (cos\alpha)^2+(cos\beta)^2+2cos\alpha cos\beta + 2(\hat{x_1}-\hat{x_0})(sin\alpha+sin\beta) - 2(\hat{y_1}-\hat{y_0})(cos\alpha+cos\beta)$$
$$\hat{p}^2 + 4 = (\hat{d})^2 + [(sin\alpha)^2+(cos\alpha)^2]+[(sin\beta)^2+(cos\beta)^2]+2(sin\alpha sin\beta+cos\alpha cos\beta) + 2(\hat{x_1}-\hat{x_0})(sin\alpha+sin\beta) - 2(\hat{y_1}-\hat{y_0})(cos\alpha+cos\beta)$$
$$\hat{p}^2 + 4 = (\hat{d})^2 + 2 + 2cos(\alpha - \beta) + 2(\hat{x_1}-\hat{x_0})(sin\alpha+sin\beta) - 2(\hat{y_1}-\hat{y_0})(cos\alpha+cos\beta)$$
$$\hat{p}^2 = (\hat{d})^2 - 2 + 2cos(\alpha - \beta) + 2(\hat{x_1}-\hat{x_0})(sin\alpha+sin\beta) - 2(\hat{y_1}-\hat{y_0})(cos\alpha+cos\beta)$$
Let us assume $$d_n = (\hat{d})^2 - 2 + 2cos(\alpha - \beta) + 2(\hat{x_1}-\hat{x_0})(sin\alpha+sin\beta) - 2(\hat{y_1}-\hat{y_0})(cos\alpha+cos\beta)$$
$$\therefore \hat{p} = \sqrt{d_n}$$
After dividing equation(1) by (2), we get
$$\frac{\hat{p}cos(\alpha+\hat{t})+2sin(\alpha+\hat{t})}{\hat{p}sin(\alpha+\hat{t})-2cos(\alpha+\hat{t})} = \frac{(\hat{x_1}-\hat{x_0})+(sin\alpha+sin\beta)}{(\hat{y_1}-\hat{y_0})-(cos\alpha+cos\beta)} = k$$
where, k is a constant
$$\frac{\hat{p}+2tan(\alpha+\hat{t})}{\hat{p}tan(\alpha+\hat{t})- 2} = k $$
$$\hat{p}+2tan(\alpha+\hat{t}) = k\hat{p}tan(\alpha+\hat{t})- 2k $$
$$tan(\alpha+\hat{t})= \frac{\hat{p}+2k}{k\hat{p}-2} $$
$$(\alpha+\hat{t})= \arctan{\frac{\hat{p}+2k}{k\hat{p}-2}} = rec $$
$$\hat{t} = (-\alpha + rec)mod2\pi $$
$$\hat{q} = (-\beta + rec)mod2\pi $$
```python
def RSL(alpha, beta, dist, dx, dy):
sin_a = math.sin(alpha)
sin_b = math.sin(beta)
cos_a = math.cos(alpha)
cos_b = math.cos(beta)
cos_a_b = math.cos(alpha - beta)
p_rsl = -2 + dist ** 2 + 2 * cos_a_b + 2 * dx * (sin_a + sin_b) - 2* dy * (cos_a + cos_b)
if p_rsl < 0:
return None, None, None, ["R", "S", "L"]
else:
p_rsl = math.sqrt(p_rsl)
k = (dx + sin_a + sin_b)/(dy - cos_a - cos_b)
rec = math.atan2(p_rsl + 2 * k , k * p_rsl - 2)
t_rsl = mod2pi(-alpha + rec)
q_rsl = mod2pi(-beta + rec)
return t_rsl, p_rsl, q_rsl, ["R", "S", "L"]
```
Now apply all three operators to the source waypoint to obtain the goal position for an RLR Dubins path
$$R_q(L_p(R_t(x_0,y_0;\alpha)))=(x_1,y_1;\beta)$$
We know that $t=\rho\phi_1$ , $p=\rho\phi_2$ and $q=\rho\phi_3$
Let us assume that $$\hat{x_0}=\frac{x_0}{\rho},\hat{x_1}=\frac{x_1}{\rho},\hat{y_0}=\frac{y_0}{\rho},\hat{y_1}=\frac{y_1}{\rho},\hat{t}=\frac{t}{\rho},\hat{p}=\frac{p}{\rho},\hat{q}=\frac{q}{\rho},\hat{d}=\frac{d}{\rho}$$
By applying the corresponding operators, we will get the following equation (1) for $x_1$
$$\hat{x_0}-sin(\alpha-\hat{t}) + sin\alpha - sin(\alpha-\hat{t})+ sin(\alpha-\hat{t}+\hat{p}) - sin(\alpha-\hat{t}+\hat{p}-\hat{q}) + sin(\alpha-\hat{t}+\hat{p}) =\hat{x_1}$$
By applying the corresponding operators, we will get the following equation (2) for $y_1$
$$\hat{y_0}+cos(\alpha-\hat{t})-cos\alpha + cos(\alpha-\hat{t}) - cos(\alpha-\hat{t}+\hat{p})+cos(\alpha-\hat{t}+\hat{p}-\hat{q}) - cos(\alpha-\hat{t}+\hat{p}) =\hat{y_1}$$
We will get another equation $$\alpha-\hat{t}+\hat{p}-\hat{q}=\beta(mod2\pi) $$
Now equation (1) and (2) can be rewritten as follows
$$ 2sin(\alpha-\hat{t}+\hat{p}) - 2sin(\alpha-\hat{t}) = (\hat{x_1}-\hat{x_0})-(sin\alpha-sin\beta)$$
$$ - 2cos(\alpha-\hat{t}+\hat{p}) + 2cos(\alpha-\hat{t}) = (\hat{y_1}-\hat{y_0})+(cos\alpha-cos\beta)$$
After $(1)^2 + (2)^2$, we get
$$ 8 - 8cos\hat{p} = (\hat{x_1}-\hat{x_0})^2 + (sin\alpha-sin\beta)^2 - 2(\hat{x_1}-\hat{x_0})(sin\alpha-sin\beta) + (\hat{y_1}-\hat{y_0})^2 + (cos\alpha-cos\beta)^2 + 2(\hat{y_1}-\hat{y_0})(cos\alpha-cos\beta)$$
$$ 8 - 8cos\hat{p} = (\hat{d})^2 + 2 - 2cos(\alpha - \beta) - 2(\hat{x_1}-\hat{x_0})(sin\alpha-sin\beta) + 2(\hat{y_1}-\hat{y_0})(cos\alpha-cos\beta)$$
$$ cos\hat{p} = \frac{6 - (\hat{d})^2 + 2cos(\alpha - \beta) + 2(\hat{x_1}-\hat{x_0})(sin\alpha-sin\beta) - 2(\hat{y_1}-\hat{y_0})(cos\alpha-cos\beta)}{8}$$
Let us assume $$d_n = \frac{6 - (\hat{d})^2 + 2cos(\alpha - \beta) + 2(\hat{x_1}-\hat{x_0})(sin\alpha-sin\beta) - 2(\hat{y_1}-\hat{y_0})(cos\alpha-cos\beta)}{8}$$
$$\therefore \hat{p} = 2\pi - \arccos{(d_n)}$$
After dividing equation (2) by (1) and applying the sum-to-product identities, we get
$$ tan(\alpha-\hat{t}+\frac{\hat{p}}{2}) = \frac{(\hat{y_1}-\hat{y_0})+(cos\alpha-cos\beta)}{(\hat{x_1}-\hat{x_0})-(sin\alpha-sin\beta)} = k$$
where, k is a constant
$$(\alpha-\hat{t}+\frac{\hat{p}}{2})= \arctan{k} = rec $$
$$\hat{t} = (\alpha - rec + \frac{\hat{p}}{2})mod2\pi $$
$$\hat{q} = (\alpha - \beta - \hat{t} + \hat{p})mod2\pi $$
```python
def RLR(alpha, beta, dist, dx, dy):
sin_a = math.sin(alpha)
sin_b = math.sin(beta)
cos_a = math.cos(alpha)
cos_b = math.cos(beta)
cos_a_b = math.cos(alpha - beta)
rec = (6.0 - dist ** 2 + 2.0 * cos_a_b + 2.0 * dist * (sin_a - sin_b)) / 8.0
if abs(rec) > 1.0:
return None, None, None, ["R", "L", "R"]
p_rlr = mod2pi(2 * math.pi - math.acos(rec))
t_rlr = mod2pi(alpha - math.atan2(cos_a - cos_b, dist - sin_a + sin_b) + mod2pi(p_rlr / 2.0))
q_rlr = mod2pi(alpha - beta - t_rlr + mod2pi(p_rlr))
return t_rlr, p_rlr, q_rlr, ["R", "L", "R"]
```
```python
def LRL(alpha, beta, dist, dx, dy):
sin_a = math.sin(alpha)
sin_b = math.sin(beta)
cos_a = math.cos(alpha)
cos_b = math.cos(beta)
cos_a_b = math.cos(alpha - beta)
rec = (6.0 - dist ** 2 + 2.0 * cos_a_b + 2.0 * dist * (sin_b - sin_a)) / 8.0
if abs(rec) > 1.0:
return None, None, None, ["L", "R", "L"]
p_lrl = mod2pi(2 * math.pi - math.acos(rec))
t_lrl = mod2pi(-alpha - math.atan2(cos_a - cos_b, dist + sin_a - sin_b) + p_lrl / 2.0)
q_lrl = mod2pi(mod2pi(beta) - alpha - t_lrl + mod2pi(p_lrl))
return t_lrl, p_lrl, q_lrl, ["L", "R", "L"]
```
```python
def interpolate(ind, l, m, maxc, ox, oy, oyaw, px, py, pyaw, directions):
    # for a straight segment, compute the next waypoint's state values
    # l is the (curvature-normalized) length of a single part out of three
    # maxc is the maximum curvature of the car
    # note that dividing l by maxc recovers the actual distance along the segment
if m == "S":
px[ind] = ox + l / maxc * math.cos(oyaw)
py[ind] = oy + l / maxc * math.sin(oyaw)
pyaw[ind] = oyaw
    # for a curved segment, compute the next waypoints' state values
    # sin(l) appears because l is the swept arc angle; sin(l)/maxc and (1-cos(l))/maxc
    # are the local x/y displacements along a circle of radius 1/maxc
else:
ldx = math.sin(l) / maxc
if m == "L":
ldy = (1.0 - math.cos(l)) / maxc
elif m == "R":
ldy = (1.0 - math.cos(l)) / (-maxc)
gdx = math.cos(-oyaw) * ldx + math.sin(-oyaw) * ldy
gdy = -math.sin(-oyaw) * ldx + math.cos(-oyaw) * ldy
px[ind] = ox + gdx
py[ind] = oy + gdy
    # for a left curve the yaw angle increases by l, i.e. the car rotates anti-clockwise
    # for a right curve the yaw angle decreases by l, i.e. the car rotates clockwise
if m == "L":
pyaw[ind] = oyaw + l
elif m == "R":
pyaw[ind] = oyaw - l
if l > 0.0:
directions[ind] = 1
else:
directions[ind] = -1
return px, py, pyaw, directions
```
```python
def generate_local_course(total_length, lengths, mode, max_curv, step_size):
# point_num specifies the number of intermediate points between start and goal waypoints
point_num = int(total_length / step_size) + len(lengths)
# for index variable '_' in range of point_num
# all values in the arrays like px, py, pyaw and directions are initialized to 0 or 0.0
# px, py denotes the coordinates of intermediate waypoints
    # pyaw specifies the yaw angle of the car at an intermediate waypoint
# direction represents the direction taken by car at an intermediate waypoint
px = [0.0 for _ in range(point_num)]
py = [0.0 for _ in range(point_num)]
pyaw = [0.0 for _ in range(point_num)]
directions = [0 for _ in range(point_num)]
ind = 1
# direction[i] = 0 means car is moving along straight line
# direction[i] = 1 means left steering by car
# direction[i] = -1 means right steering by car
if lengths[0] > 0.0:
directions[0] = 1
else:
directions[0] = -1
    # lengths[i] > 0 means the arc length grows in the positive direction
    # lengths[i] < 0 means the arc length grows in the negative direction
if lengths[0] > 0.0:
d = step_size
else:
d = -step_size
ll = 0.0
for m, l, i in zip(mode, lengths, range(len(mode))):
        # the lengths array holds the lengths of the three parts of the Dubins path between two consecutive waypoints
        # l denotes the length of the current part
        # while the car is moving straight or steering left
if l > 0.0:
d = step_size
        # while the car is steering right
else:
d = -step_size
# storing x, y and yaw values of the starting point of any one specific part
        # any part may be a right or left curve, or a straight line
        # the index is stepped back by one before interpolating this part
ox, oy, oyaw = px[ind], py[ind], pyaw[ind]
ind -= 1
        # when two consecutive segment lengths have the same sign,
        # their product is positive
if i >= 1 and (lengths[i - 1] * lengths[i]) > 0:
pd = -d - ll
else:
pd = d - ll
        # call interpolate() to generate the equally spaced intermediate waypoints for this part
        # interpolation here means determining the states that lie between the known endpoints of the part
while abs(pd) <= abs(l):
ind += 1
px, py, pyaw, directions = interpolate(ind, pd, m, max_curv, ox, oy, oyaw, px, py, pyaw, directions)
pd += d
        # calculation of the remaining length
ll = l - pd - d
        # now call interpolate() once more to place the exact endpoint of this part
ind += 1
px, py, pyaw, directions = interpolate(ind, l, m, max_curv, ox, oy, oyaw, px, py, pyaw, directions)
if len(px) <= 1:
return [], [], [], []
# remove unused data
while len(px) >= 1 and px[-1] == 0.0:
px.pop()
py.pop()
pyaw.pop()
directions.pop()
return px, py, pyaw, directions
```
```python
def planning_from_origin(dx, dy, dyaw, curv, radius, step_size):
# origin is first waypoint
    # Prior to Python 3.8, hypot() computed the hypotenuse of a right-angled triangle: sqrt(x*x + y*y)
    # From Python 3.8 it also computes the n-dimensional Euclidean norm,
    # sqrt(x1*x1 + x2*x2 + ... + xn*xn), for coordinates (x1, x2, ..., xn)
    # the result is stored in the dist variable
# radius of the circle or curvature path along which robot car is moving
dist = math.hypot(dx, dy)
dist_hat = dist/radius
dx_hat = dx/radius
dy_hat = dy/radius
# The math.atan2() method returns the arc tangent of y/x, in radians where, x and y are the coordinates of a point (x,y).
# The arctan function is the inverse of the tangent function.
# It returns the angle whose tangent is a given number.
theta = mod2pi(math.atan2(dy, dx))
alpha = mod2pi(-theta)
beta = mod2pi(dyaw - theta)
# store function or methods names in planners list
planners = [LSL, RSR, LSR, RSL, RLR, LRL]
# best_cost value is set to positive infinity
# None is null value
best_cost = float("inf")
bt, bp, bq, best_mode = None, None, None, None
# call each planner function in the list
# any dubin path between two waypoints is divided into three parts
# distances of those parts are stored in t,p,q variable respectively
# mode is an array of direction elements
for planner in planners:
t, p, q, mode = planner(alpha, beta, dist_hat, dx_hat, dy_hat)
if t is None:
continue
# total cost or distance of the dubin path
# we want the best directions or mode with the minimum distance
# store the length of three parts of the path with minimum cost into bt, bp, bq
cost = (abs(t) + abs(p) + abs(q))
if best_cost > cost:
bt, bp, bq, best_mode = t, p, q, mode
best_cost = cost
lengths = [bt, bp, bq]
# call the generate_local_course()
# returning the x values, y values and yaw values with directions that are needed to compute the dubin path
x_list, y_list, yaw_list, directions = generate_local_course(sum(lengths), lengths, best_mode, curv, step_size)
return x_list, y_list, yaw_list, best_mode, best_cost
```
```python
def calc_dubins_path(sx, sy, syaw, gx, gy, gyaw, curv, radius, step_size=0.1):
    # step_size = 0.1 sets the spacing between consecutive sampled points along the path
# calculate the differences between source and goal positions
dx = gx - sx
dy = gy - sy
    # the robot car moves in the xy plane and steers about the z axis
    # from_euler() builds a rotation about z by syaw and as_dcm() converts it to a direction cosine matrix
    # l_rot keeps only the upper-left (2x2) block of that (3x3) matrix
    # stack() packs dx, dy into a length-2 vector, .T transposes it and @ is matrix multiplication
    # the result le_xy is the goal offset expressed in the start pose's local frame
l_rot = Rot.from_euler('z', syaw).as_dcm()[0:2, 0:2]
le_xy = np.stack([dx, dy]).T @ l_rot
le_yaw = gyaw - syaw
# call planning_from_origin() method with below arguments
lp_x, lp_y, lp_yaw, mode, lengths = planning_from_origin(le_xy[0], le_xy[1], le_yaw, curv, radius, step_size)
    # rotate the locally planned path back into the global frame and translate by the start position
rot = Rot.from_euler('z', -syaw).as_dcm()[0:2, 0:2]
converted_xy = np.stack([lp_x, lp_y]).T @ rot
x_list = converted_xy[:, 0] + sx
y_list = converted_xy[:, 1] + sy
yaw_list = [pi_2_pi(i_yaw + syaw) for i_yaw in lp_yaw]
return PATH(lengths, mode, x_list, y_list, yaw_list)
```
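A minimal usage sketch for a single pair of poses (the same start/goal pair and parameters as the first leg of the simulation below); the `draw` module is not needed if only the path coordinates are of interest:
```python
path = calc_dubins_path(0.0, 0.0, np.deg2rad(0), 10.0, 10.0, np.deg2rad(-90), 0.25, 4.15)
print(path.mode, round(path.L, 3))   # best word and its total (normalized) cost
print(len(path.x), len(path.y))      # number of sampled points along the path
```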
```python
def main():
    # choose the robot vehicle's state triplets: (x coordinate, y coordinate, yaw in degrees)
# Yaw happens when the weight of your vehicle shifts from its center of gravity to the left or the right.
# This is a shift you will definitely feel when you’re inside your vehicle.
# The times this will happen is when you suddenly steer, brake, or accelerate.
# This might cause your vehicle to spin around its center of gravity.
# states is a list that stores multiple other lists with triplets like x, y and yaw angle values
# simulation-1
states = [(0, 0, 0), (10, 10, -90), (20, 5, 60), (30, 10, 120),(35, -5, 30), (25, -10, -120), (15, -15, 100), (15, -25, 90)]
# simulation-2
#states = [(-3, 3, 120), (10, -7, 30), (10, 13, 30), (20, 5, -25),(35, 10, 180), (32, -10, 180), (5, -12, 90)]
# max curvature of robot vehicle in radian
max_curv = 0.25
# radius of the curve along which the car is moving
radius = 4.15
# Initialization of arrays for x, y and yaw values
path_x, path_y, yaw = [], [], []
# start, goal denote the source and goal states or two consecutive waypoints respectively
    # deg2rad() converts a given yaw value from degrees to radians
for i in range(len(states) - 1):
start_x = states[i][0]
start_y = states[i][1]
start_yaw = np.deg2rad(states[i][2])
goal_x = states[i + 1][0]
goal_y = states[i + 1][1]
goal_yaw = np.deg2rad(states[i + 1][2])
# call calc_dubins_path() method with arguments like x, y and yaw values of source and goal waypoints and maximum curvature
# compute the dubin path between two consecutive states or waypoints
path_i = calc_dubins_path(start_x, start_y, start_yaw, goal_x, goal_y, goal_yaw, max_curv, radius)
# append all the x, y and yaw values between two consecutive waypoints to the path_x, path_y and yaw arrays respectively
for x, y, iyaw in zip(path_i.x, path_i.y, path_i.yaw):
path_x.append(x)
path_y.append(y)
yaw.append(iyaw)
# show the animation of computed dubin path
# Interactive mode is on
plt.ion()
# plot a new figure
plt.figure()
# for each waypoint
# clf() function clears the current figure
    # plot() shows the Dubins curve
for i in range(len(path_x)):
plt.clf()
plt.plot(path_x, path_y, linewidth=1, color='gray')
# for each waypoint or state draw an arrow with given arguments like x, y, yaw, length and color values
for x, y, theta in states:
draw.Arrow(x, y, np.deg2rad(theta), 2, 'blueviolet')
# for each state draw a continuously moving car with arguments like x, y, yaw , width and length values
draw.Car(path_x[i], path_y[i], yaw[i], 1.5, 3)
# plot the figure with equal length axis
    # set the x axis to [-10, 42] and the y axis to [-35, 20] via the axis() function
    # draw() updates a figure that has been altered
    # pause() displays the figure for the given number of seconds
plt.axis("equal")
plt.title("Simulation of Dubins Path")
plt.axis([-10, 42, -35, 20])
plt.draw()
plt.pause(0.001)
plt.pause(1)
if __name__ == '__main__':
main()
```
| 4eb73f9b708a0584f42af2d6928b2d09c7065773 | 52,356 | ipynb | Jupyter Notebook | _posts/2D_Dubin_Path_Modified.ipynb | abbhicse/contrast | 56af86b07877305d1ba6286177a30beee04dbe2a | [
"Unlicense"
] | null | null | null | _posts/2D_Dubin_Path_Modified.ipynb | abbhicse/contrast | 56af86b07877305d1ba6286177a30beee04dbe2a | [
"Unlicense"
] | null | null | null | _posts/2D_Dubin_Path_Modified.ipynb | abbhicse/contrast | 56af86b07877305d1ba6286177a30beee04dbe2a | [
"Unlicense"
] | null | null | null | 48.976614 | 1,177 | 0.52038 | true | 14,625 | Qwen/Qwen-72B | 1. YES
2. YES | 0.887205 | 0.859664 | 0.762698 | __label__eng_Latn | 0.811311 | 0.610334 |
# 13 Linear Algebra – Students (1)
## Motivating problem: Two masses on three strings
Two masses $M_1$ and $M_2$ are hung from a horizontal rod with length $L$ in such a way that a rope of length $L_1$ connects the left end of the rod to $M_1$, a rope of length $L_2$ connects $M_1$ and $M_2$, and a rope of length $L_3$ connects $M_2$ to the right end of the rod. The system is at rest (in equilibrium under gravity).
Find the angles that the ropes make with the rod and the tension forces in the ropes.
In class we derived the equations that govern this problem – see [14_String_Problem_lecture_notes (PDF)](14_String_Problem_lecture_notes.pdf).
We can represent the problem as system of nine coupled non-linear equations:
$$
\mathbf{f}(\mathbf{x}) = 0
$$
### Summary of equations to be solved
Treat $\sin\theta_i$ and $\cos\theta_i$ together with $T_i$, $1\leq i \leq 3$, as unknowns that have to simultaneously fulfill the nine equations
\begin{align}
-T_1 \cos\theta_1 + T_2\cos\theta_2 &= 0\\
T_1 \sin\theta_1 - T_2\sin\theta_2 - W_1 &= 0\\
-T_2\cos\theta_2 + T_3\cos\theta_3 &= 0\\
T_2\sin\theta_2 + T_3\sin\theta_3 - W_2 &= 0\\
L_1\cos\theta_1 + L_2\cos\theta_2 + L_3\cos\theta_3 - L &= 0\\
-L_1\sin\theta_1 - L_2\sin\theta_2 + L_3\sin\theta_3 &= 0\\
\sin^2\theta_1 + \cos^2\theta_1 - 1 &= 0\\
\sin^2\theta_2 + \cos^2\theta_2 - 1 &= 0\\
\sin^2\theta_3 + \cos^2\theta_3 - 1 &= 0
\end{align}
Consider the nine equations a vector function $\mathbf{f}$ that takes a 9-vector $\mathbf{x}$ of the unknowns as argument:
\begin{align}
\mathbf{f}(\mathbf{x}) &= 0\\
\mathbf{x} &= \left(\begin{array}{c}
x_0 \\ x_1 \\ x_2 \\
x_3 \\ x_4 \\ x_5 \\
x_6 \\ x_7 \\ x_8
\end{array}\right)
=
\left(\begin{array}{c}
\sin\theta_1 \\ \sin\theta_2 \\ \sin\theta_3 \\
\cos\theta_1 \\ \cos\theta_2 \\ \cos\theta_3 \\
T_1 \\ T_2 \\ T_3
\end{array}\right) \\
\mathbf{L} &= \left(\begin{array}{c}
L \\ L_1 \\ L_2 \\ L_3
\end{array}\right), \quad
\mathbf{W} = \left(\begin{array}{c}
W_1 \\ W_2
\end{array}\right)
\end{align}
In more detail:
\begin{align}
f_1(\mathbf{x}) &= -x_6 x_3 + x_7 x_4 &= 0\\
f_2(\mathbf{x}) &= x_6 x_0 - x_7 x_1 - W_1 & = 0\\
\dots\\
f_8(\mathbf{x}) &= x_2^2 + x_5^2 - 1 &=0
\end{align}
We generalize the *Newton-Raphson algorithm* from the [last lecture](http://asu-compmethodsphysics-phy494.github.io/ASU-PHY494//2018/03/21/12_Root_finding/) to $n$ dimensions:
## General Newton-Raphson algorithm
Given a trial vector $\mathbf{x}$, the correction $\Delta\mathbf{x}$ can be derived from the Taylor expansion
$$
f_i(\mathbf{x} + \Delta\mathbf{x}) = f_i(\mathbf{x}) + \sum_{j=1}^{n} \left.\frac{\partial f_i}{\partial x_j}\right|_{\mathbf{x}} \, \Delta x_j + \dots
$$
or in full vector notation
\begin{align}
\mathbf{f}(\mathbf{x} + \Delta\mathbf{x}) &= \mathbf{f}(\mathbf{x}) + \left.\frac{d\mathbf{f}}{d\mathbf{x}}\right|_{\mathbf{x}} \Delta\mathbf{x} + \dots\\
&= \mathbf{f}(\mathbf{x}) + \mathsf{J}(\mathbf{x}) \Delta\mathbf{x} + \dots
\end{align}
where $\mathsf{J}(\mathbf{x})$ is the *[Jacobian](http://mathworld.wolfram.com/Jacobian.html)* matrix of $\mathbf{f}$ at $\mathbf{x}$, the generalization of the derivative to multivariate vector functions.
Solve
$$
\mathbf{f}(\mathbf{x} + \Delta\mathbf{x}) = 0
$$
i.e.,
$$
\mathsf{J}(\mathbf{x}) \Delta\mathbf{x} = -\mathbf{f}(\mathbf{x})
$$
for the correction $\Delta x$
$$
\Delta\mathbf{x} = -\mathsf{J}(\mathbf{x})^{-1} \mathbf{f}(\mathbf{x})
$$
which has the same form as the 1D Newton-Raphson correction $\Delta x = -f'(x)^{-1} f(x)$.
These are *matrix equations* (we linearized the problem). One can either explicitly solve for the unknown vector $\Delta\mathbf{x}$ with the inverse matrix of the Jacobian or use other methods to solve the coupled system of linear equations of the general form
$$
\mathsf{A} \mathbf{x} = \mathbf{b}.
$$
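As a sketch of a single $n$-dimensional Newton-Raphson step (the toy function and Jacobian below are stand-ins, not the nine string equations):
```python
import numpy as np

def newton_step(f, J, x):
    """One Newton-Raphson update: solve J(x) dx = -f(x), return x + dx."""
    dx = np.linalg.solve(J(x), -f(x))
    return x + dx

# toy 2D example: f(x) = (x0**2 - 2, x0*x1 - 1), root at (sqrt(2), 1/sqrt(2))
f = lambda x: np.array([x[0]**2 - 2.0, x[0]*x[1] - 1.0])
J = lambda x: np.array([[2.0*x[0], 0.0], [x[1], x[0]]])

x = np.array([1.0, 1.0])
for _ in range(6):
    x = newton_step(f, J, x)
print(x)
```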
## Linear algebra with `numpy.linalg`
```python
import numpy as np
```
```python
np.linalg?
```
### System of coupled linear equations
Solve the coupled system of linear equations of the general form
$$
\mathsf{A} \mathbf{x} = \mathbf{b}.
$$
```python
A = np.array([
[1, 0, 0],
[0, 1, 0],
[0, 0, 2]
])
b = np.array([1, 0, 1])
```
What does this system of equations look like?
```python
for i in range(A.shape[0]):
terms = []
for j in range(A.shape[1]):
        terms.append("{1} x[{0}]".format(j, A[i, j]))
print(" + ".join(terms), "=", b[i])
```
    1 x[0] + 0 x[1] + 0 x[2] = 1
    0 x[0] + 1 x[1] + 0 x[2] = 0
    0 x[0] + 0 x[1] + 2 x[2] = 1
Now solve it with `numpy.linalg.solve`:
```python
```
Test that it satisfies the original equation:
$$
\mathsf{A} \mathbf{x} - \mathbf{b} = 0
$$
```python
```
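For reference, one possible way to fill in these two steps (shown only as a sketch):
```python
x = np.linalg.solve(A, b)
print(x)           # expected: [1.  0.  0.5]
print(A @ x - b)   # the residual should be (numerically) zero
```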
#### Activity: Solving matrix equations
With
$$
\mathsf{A}_1 = \left(\begin{array}{ccc}
+4 & -2 & +1\\
+3 & +6 & -4\\
+2 & +1 & +8
\end{array}\right)
$$
and
$$
\mathbf{b}_1 = \left(\begin{array}{c}
+12 \\ -25 \\ +32
\end{array}\right), \quad
\mathbf{b}_2 = \left(\begin{array}{c}
+4 \\ -1 \\ +36
\end{array}\right), \quad
$$
solve for $\mathbf{x}_i$
$$
\mathsf{A}_1 \mathbf{x}_i = \mathbf{b}_i
$$
and *check the correctness of your answer*.
```python
```
### Matrix inverse
In order to solve directly we need the inverse of $\mathsf{A}$:
$$
\mathsf{A}\mathsf{A}^{-1} = \mathsf{A}^{-1}\mathsf{A} = \mathsf{1}
$$
Then
$$
\mathbf{x} = \mathsf{A}^{-1} \mathbf{b}
$$
If the inverse exists, `numpy.linalg.inv()` can calculate it:
```python
```
Check that it behaves like an inverse:
```python
```
Now solve the coupled equations directly:
```python
```
#### Activity: Solving coupled equations with the inverse matrix
1. Compute the inverse of $\mathsf{A}_1$ and *check the correctness*.
2. Compute $\mathbf{x}_1$ and $\mathbf{x}_2$ with $\mathsf{A}_1^{-1}$ and check the correctness of your answers.
```python
```
### Eigenvalue problems
The equation
\begin{gather}
\mathsf{A} \mathbf{x}_i = \lambda_i \mathbf{x}_i
\end{gather}
is the **eigenvalue problem** and a solution provides the eigenvalues $\lambda_i$ and corresponding eigenvectors $x_i$ that satisfy the equation.
#### Example 1: Principal axes of a square
The principal axes of the [moment of inertia tensor](https://en.wikipedia.org/wiki/Moment_of_inertia#The_inertia_tensor) are defined through the eigenvalue problem
$$
\mathsf{I} \mathbf{\omega}_i = \lambda_i \mathbf{\omega}_i
$$
The principal axes are the $\mathbf{\omega}_i$.
```python
Isquare = np.array([[2/3, -1/4], [-1/4, 2/3]])
```
```python
```
Note that the eigenvectors are `omegas[:, i]`! You can transpose so that axis 0 is the eigenvector index:
```python
```
Test:
$$
(\mathsf{I} - \lambda_i \mathsf{1}) \mathbf{\omega}_i = 0
$$
(The identity matrix can be generated with `np.identity(2)`.)
```python
```
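For reference, a possible sketch using `numpy.linalg.eig` (the variable names are illustrative):
```python
lambdas, omegas = np.linalg.eig(Isquare)
print(lambdas)   # 2/3 +/- 1/4
for i in range(len(lambdas)):
    # (I - lambda_i * 1) omega_i should vanish (up to round-off)
    print((Isquare - lambdas[i]*np.identity(2)) @ omegas[:, i])
```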
#### Example 2: Spin in a magnetic field
In quantum mechanics, a spin 1/2 particle is represented by a spinor $\chi$, a 2-component vector. The Hamiltonian operator for a stationary spin 1/2 particle in a homogeneous magnetic field $B_y$ is
$$
\mathsf{H} = -\gamma \mathsf{S}_y B_y = -\gamma B_y \frac{\hbar}{2} \mathsf{\sigma_y}
= \hbar \omega \left( \begin{array}{cc} 0 & -i \\ i & 0 \end{array}\right)
$$
Determine the *eigenvalues* and *eigenstates*
$$
\mathsf{H} \mathbf{\chi} = E \mathbf{\chi}
$$
of the spin 1/2 particle.
(To make this a purely numerical problem, divide through by $\hbar\omega$, i.e. calculate $E/\hbar\omega$.)
```python
```
Normalize the eigenvectors:
$$
\hat\chi = \frac{1}{\sqrt{\chi^\dagger \cdot \chi}} \chi
$$
```python
```
#### Activity: eigenvalues
Find the eigenvalues and eigenvectors of
$$
\mathsf{A}_2 = \left(\begin{array}{ccc}
-2 & +2 & -3\\
+2 & +1 & -6\\
-1 & -2 & +0
\end{array}\right)
$$
Are the eigenvectors normalized?
Check your results.
```python
```
| 9d05808d682ef1a4657d8ce674ba56651e5639db | 14,693 | ipynb | Jupyter Notebook | 14_linear_algebra/14_Linear_Algebra-students-1.ipynb | ASU-CompMethodsPhysics-PHY494/PHY494-resources-2020 | 20e08c20995eab567063b1845487e84c0e690e96 | [
"CC-BY-4.0"
] | null | null | null | 14_linear_algebra/14_Linear_Algebra-students-1.ipynb | ASU-CompMethodsPhysics-PHY494/PHY494-resources-2020 | 20e08c20995eab567063b1845487e84c0e690e96 | [
"CC-BY-4.0"
] | null | null | null | 14_linear_algebra/14_Linear_Algebra-students-1.ipynb | ASU-CompMethodsPhysics-PHY494/PHY494-resources-2020 | 20e08c20995eab567063b1845487e84c0e690e96 | [
"CC-BY-4.0"
] | null | null | null | 26.426259 | 341 | 0.504186 | true | 2,828 | Qwen/Qwen-72B | 1. YES
2. YES | 0.851953 | 0.651355 | 0.554924 | __label__eng_Latn | 0.865364 | 0.127603 |
```python
# Give some definitions (There is no need to change the following definitions)
import sys
import re
from typing import List, Dict, Union
import sympy as sym
from IPython.display import display
from IPython.display import Math
def is_num(s: str) -> bool:
try:
float(s)
except ValueError:
return False
else:
return True
def set_symbol_from_text(text: str) -> List[sym.Symbol]:
"""Make list of sympy symbols from a text
Parameters
----------
text : str
Comma separated words
Returns
-------
symbol_list : List[sym.Symbol]
List of replaced symbols
Examples
-------
input_variables = r'x_{1}, x_{2}'
x = set_symbol_from_text(input_variables)
"""
symbol_list = []
    for term in re.split(r',\s*', text):
symbol_list.append(sym.Symbol(term))
return symbol_list
def replace_text_with_symbol(
text: str, c: List[sym.Symbol], x: List[sym.Symbol]
) -> str:
"""Make a replaced string with defined sympy symbols
Parameters
----------
text : str
original string
c : List[sym.Symbol]
List of constants (sympy symbols)
x: List[sym.symbol]
List of variables (sympy symbols)
Returns
-------
str
replaced string
"""
text = ' ' + text + ' '
for i, v in enumerate(c):
rep = r"\1c[{0}]\2".format(i)
my_regex = r"([^a-zA-Z_0-9])" + re.escape(sym.latex(v)) + r"([^a-zA-Z_0-9])"
while re.search(my_regex, text) != None:
text = re.sub(my_regex, rep, text)
for i, v in enumerate(x):
rep = r"\1x[{0}]\2".format(i)
my_regex = r"([^a-zA-Z_0-9])" + re.escape(sym.latex(v)) + r"([^a-zA-Z_0-9])"
while re.search(my_regex, text) != None:
text = re.sub(my_regex, rep, text)
t = text.strip()
text = t.replace(r"{", "").replace(r"}", "")
return text
```
```python
# Start scripts
sym.init_printing(use_unicode=True)
# Set output file name
output_filename = "van_der_Pol_event.inp"
# Set constants
input_const = r'\epsilon, \nu_{11}, \nu_{22}'
c = set_symbol_from_text(input_const)
# Set variables
input_var = r"x_{1}, x_{2}"
x = set_symbol_from_text(input_var)
# Set variables for initial positions
input_var_c = r"x_{1}^{\mathrm{c}}, x_{2}^{\mathrm{c}}"
xc = set_symbol_from_text(input_var_c)
# Display inputs for check
display('Defined constants')
display(Math(sym.latex(c)))
display('Defined variables')
display(Math(sym.latex(x)))
display('Defined constants for the initial positions')
display(Math(sym.latex(xc)))
# Add the list for 'xc' to the list 'c'
c = c + xc
```
'Defined constants'
$\displaystyle \left[ \epsilon, \ \nu_{11}, \ \nu_{22}\right]$
'Defined variables'
$\displaystyle \left[ x_{1}, \ x_{2}\right]$
'Defined constants for the initial positions'
$\displaystyle \left[ x_{1}^{\mathrm{c}}, \ x_{2}^{\mathrm{c}}\right]$
```python
# Set equations
# Variables, drift and diffusion coefficients must be enclosed in [].
Eqs = []
Eqs.append(r"d[x_{1}] = [x_{2}]*dt + [\nu_{11}]*d W_{1}")
Eqs.append(r"d[x_{2}] =" \
+ r"[\epsilon*(1-x_{1}**2)*x_{2} - x_{1}]*dt + [\nu_{22}]*d W_{2}")
# Extract strings for drift and diffusion
str_drift = []
str_diff = []
for eq in Eqs:
result = re.findall(r'\[(.*?)\]', eq)
if(len(result) != 3):
print("The format of equation is not adequate: {0}".format(eq))
sys.exit(1)
str_drift.append(result[1])
str_diff.append(result[2])
# Convert strings to sympy
drift = []
for ex in str_drift:
drift.append(eval(replace_text_with_symbol(ex,c,x)))
drift = sym.Matrix(len(drift), 1, drift)
diff_vector = []
for ex in str_diff:
diff_vector.append(eval(replace_text_with_symbol(ex,c,x)))
diff = []
for i, variable in enumerate(x):
tmp_array = [0.0] * len(x)
tmp_array[i] = diff_vector[i]
diff.append(tmp_array)
diff = sym.Matrix(diff)
# Display input SDEs for check
latex_bold_x = sym.Symbol('\mathbf{x}')
latex_bold_W = sym.Symbol('\mathbf{W}')
display('Original stochastic differential equations')
for i in range(len(x)):
latex_W = sym.Symbol('W_{0}'.format(i+1))
print_sde = Math(r'd {0}(t) = \left({1}\right) dt + \left({2}\right) d {3}(t)'\
.format(x[i], sym.latex(drift[i]), sym.latex(diff[i,i]), latex_W))
display(print_sde)
```
'Original stochastic differential equations'
$\displaystyle d x_{1}(t) = \left(x_{2}\right) dt + \left(\nu_{11}\right) d W_1(t)$
$\displaystyle d x_{2}(t) = \left(\epsilon x_{2} \left(1 - x_{1}^{2}\right) - x_{1}\right) dt + \left(\nu_{22}\right) d W_2(t)$
```python
# Make sympy symbols for partial derivatives
deriv = []
for tmp_x in x:
deriv.append(sym.Symbol('\\partial_{{{0}}}'.format(tmp_x)))
# Display the derived SDEs
print_sde = Math(r'd{0}(t) = {1} dt + {2} d {3}(t)'\
.format(latex_bold_x, sym.latex(drift),
sym.latex(diff), latex_bold_W))
display('Extended stochastic differential equations')
display(print_sde)
```
'Extended stochastic differential equations'
$\displaystyle d\mathbf{x}(t) = \left[\begin{matrix}x_{2}\\\epsilon x_{2} \left(1 - x_{1}^{2}\right) - x_{1}\end{matrix}\right] dt + \left[\begin{matrix}\nu_{11} & 0\\0 & \nu_{22}\end{matrix}\right] d \mathbf{W}(t)$
```python
# Make sympy symbols for partial derivatives
deriv = []
for tmp_x in x:
deriv.append(sym.Symbol('\\partial_{{{0}}}'.format(tmp_x)))
# Derive the adjoint operator (backward Kolmogorov) and display it
adj_L = 0
B = diff * diff.transpose()
deriv = sym.Matrix([deriv])
latex_adj_L = sym.Symbol('\mathcal{L}^{\dagger}')
print_adj_L = ""
drift_terms = []
drift_derivs = []
for dri, der in zip(drift,deriv): # 1st order
drift_terms.append(dri)
drift_derivs.append(der)
adj_L = adj_L + dri*der
print_adj_L = print_adj_L \
+ '\\left({0}\\right) {1}'\
.format(sym.latex(dri), sym.latex(der)) \
+ '+'
diff_terms = []
diff_derivs = []
for i in range(len(x)): # 2nd order
for j in range(len(x)):
if B[len(x)*i+j] != 0:
diff_terms.append(0.5*B[len(x)*i+j])
diff_derivs.append(deriv[i]*deriv[j])
adj_L = adj_L + 0.5*B[len(x)*i+j]*deriv[i]*deriv[j]
print_adj_L = print_adj_L \
+ '\\frac{{1}}{{2}}\\left({0}\\right) {1}{2}'\
.format(sym.latex(B[len(x)*i+j]), \
sym.latex(deriv[i]), \
sym.latex(deriv[j])
) \
+ '+'
print_adj_L = print_adj_L[:-1] # Remove the final plus sign
print_dual = Math(r'{0} = {1}'.format(latex_adj_L, print_adj_L))
display('Derived adjoint operator')
display(print_dual)
```
'Derived adjoint operator'
$\displaystyle \mathcal{L}^{\dagger} = \left(x_{2}\right) \partial_{x_{1}}+\left(\epsilon x_{2} \left(1 - x_{1}^{2}\right) - x_{1}\right) \partial_{x_{2}}+\frac{1}{2}\left(\nu_{11}^{2}\right) \partial_{x_{1}}\partial_{x_{1}}+\frac{1}{2}\left(\nu_{22}^{2}\right) \partial_{x_{2}}\partial_{x_{2}}$
```python
# Apply variable transformations for coordinate shifts
for i, v in enumerate(x):
for j, term in enumerate(drift_terms):
drift_terms[j] = drift_terms[j].subs(v, v+xc[i])
for j, term in enumerate(diff_terms):
diff_terms[j] = diff_terms[j].subs(v, v+xc[i])
# Derive the adjoint operator (backward Kolmogorov) and display it
adj_L = 0
latex_adj_L = sym.Symbol('\mathcal{L}^{\dagger}')
print_adj_L = ""
for dri, der in zip(drift_terms,drift_derivs): # 1st order
adj_L = adj_L + dri*der
print_adj_L = print_adj_L \
+ '\\left({0}\\right) {1}'\
.format(sym.latex(dri), sym.latex(der)) \
+ '+'
for diff, der in zip(diff_terms,diff_derivs): # 2nd order
adj_L = adj_L + diff*der
print_adj_L = print_adj_L \
+ '\\left({0}\\right) {1}'\
.format(sym.latex(diff), sym.latex(der)) \
+ '+'
print_adj_L = print_adj_L[:-1] # Remove the final plus sign
print_dual = Math(r'{0} = {1}'.format(latex_adj_L, print_adj_L))
display('Derived adjoint operator')
display(print_dual)
```
'Derived adjoint operator'
$\displaystyle \mathcal{L}^{\dagger} = \left(x_{2} + x_{2}^{\mathrm{c}}\right) \partial_{x_{1}}+\left(\epsilon \left(1 - \left(x_{1} + x_{1}^{\mathrm{c}}\right)^{2}\right) \left(x_{2} + x_{2}^{\mathrm{c}}\right) - x_{1} - x_{1}^{\mathrm{c}}\right) \partial_{x_{2}}+\left(0.5 \nu_{11}^{2}\right) \partial_{x_{1}}^{2}+\left(0.5 \nu_{22}^{2}\right) \partial_{x_{2}}^{2}$
```python
# Display constants (again) for setting values
display('Defined constants (again)')
display(Math(sym.latex(c)))
```
'Defined constants (again)'
$\displaystyle \left[ \epsilon, \ \nu_{11}, \ \nu_{22}, \ x_{1}^{\mathrm{c}}, \ x_{2}^{\mathrm{c}}\right]$
```python
# Print degrees for each term
adj_L = sym.expand(adj_L)
variables = []
variables.extend(x)
variables.extend(deriv)
variables.extend(c)
zero_list = [0]*len(x)
# Print terms (for check)
output = "# [State changes] / rate / [Indices for rate] / [Indices for constants] # term\n"
result = []
for t in adj_L.args:
degree_list = list(sym.degree_list(t, gens=variables))
change_list = []
coeff_list = []
const_list = []
for i in range(len(x)):
change_list.append(degree_list[i]-degree_list[i+len(x)])
coeff_list.append(degree_list[i+len(x)])
const_list = const_list + degree_list[2*len(x):]
is_constant = all(elem == 0 for elem in degree_list)
if is_constant == True:
result.append([change_list, float(t), coeff_list, const_list, sym.latex(t)])
else:
result.append([change_list, float(sym.LC(t)), coeff_list, const_list, sym.latex(t)])
result = sorted(result)
for item in result:
str0 = "{0}".format(item[0]).replace(' ', '')
str1 = "{0}".format(item[1])
str2 = "{0}".format(item[2]).replace(' ', '')
str3 = "{0}".format(item[3]).replace(' ', '')
str4 = "{0}".format(item[4])
output = output + "{0} / {1} / {2} / {3} # {4}\n".format(str0, str1, str2, str3, str4)
state_change_tmp = [item[0] for item in result]
state_change = []
[sc for sc in state_change_tmp if sc not in state_change and not state_change.append(sc)]  # deduplicate while preserving order
# Add the basic information
output = "{0} {1} {2} {3}\n".format(len(adj_L.args), len(state_change), len(x), len(c)) + output
output = "# #terms #terms_without_duplicate #x #const / Constants: {0}\n".format(c) + output
with open(output_filename, mode='w') as f:
f.write(output)
# For check
print(output)
```
# #terms #terms_without_duplicate #x #const / Constants: [\epsilon, \nu_{11}, \nu_{22}, x_{1}^{\mathrm{c}}, x_{2}^{\mathrm{c}}]
14 10 2 5
# [State changes] / rate / [Indices for rate] / [Indices for constants] # term
[-2,0] / 0.5 / [2,0] / [0,2,0,0,0] # 0.5 \nu_{11}^{2} \partial_{x_{1}}^{2}
[-1,0] / 1.0 / [1,0] / [0,0,0,0,1] # \partial_{x_{1}} x_{2}^{\mathrm{c}}
[-1,1] / 1.0 / [1,0] / [0,0,0,0,0] # \partial_{x_{1}} x_{2}
[0,-2] / 0.5 / [0,2] / [0,0,2,0,0] # 0.5 \nu_{22}^{2} \partial_{x_{2}}^{2}
[0,-1] / -1.0 / [0,1] / [0,0,0,1,0] # - \partial_{x_{2}} x_{1}^{\mathrm{c}}
[0,-1] / -1.0 / [0,1] / [1,0,0,2,1] # - \epsilon \partial_{x_{2}} \left(x_{1}^{\mathrm{c}}\right)^{2} x_{2}^{\mathrm{c}}
[0,-1] / 1.0 / [0,1] / [1,0,0,0,1] # \epsilon \partial_{x_{2}} x_{2}^{\mathrm{c}}
[0,0] / -1.0 / [0,1] / [1,0,0,2,0] # - \epsilon \partial_{x_{2}} \left(x_{1}^{\mathrm{c}}\right)^{2} x_{2}
[0,0] / 1.0 / [0,1] / [1,0,0,0,0] # \epsilon \partial_{x_{2}} x_{2}
[1,-1] / -2.0 / [0,1] / [1,0,0,1,1] # - 2 \epsilon \partial_{x_{2}} x_{1} x_{1}^{\mathrm{c}} x_{2}^{\mathrm{c}}
[1,-1] / -1.0 / [0,1] / [0,0,0,0,0] # - \partial_{x_{2}} x_{1}
[1,0] / -2.0 / [0,1] / [1,0,0,1,0] # - 2 \epsilon \partial_{x_{2}} x_{1} x_{1}^{\mathrm{c}} x_{2}
[2,-1] / -1.0 / [0,1] / [1,0,0,0,1] # - \epsilon \partial_{x_{2}} x_{1}^{2} x_{2}^{\mathrm{c}}
[2,0] / -1.0 / [0,1] / [1,0,0,0,0] # - \epsilon \partial_{x_{2}} x_{1}^{2} x_{2}
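As a small illustrative sketch, one line of the generated file can be parsed back by splitting on the separators; the field layout follows the header comment written above (the parser below is hypothetical, not part of the downstream code):
```python
import ast

def parse_term_line(line):
    """Split '[state changes] / rate / [rate indices] / [constant indices] # term'."""
    body, _, latex_term = line.partition('#')
    changes, rate, rate_idx, const_idx = [s.strip() for s in body.split('/')]
    return (ast.literal_eval(changes), float(rate),
            ast.literal_eval(rate_idx), ast.literal_eval(const_idx), latex_term.strip())

example_line = r"[-2,0] / 0.5 / [2,0] / [0,2,0,0,0] # 0.5 \nu_{11}^{2} \partial_{x_{1}}^{2}"
print(parse_term_line(example_line))
```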
```python
```
| 7f735c32509bd2d8f9397a64b3689ab30f3a6741 | 19,418 | ipynb | Jupyter Notebook | dual_derivation_van_der_Pol.ipynb | junohkubo/Koopman_from_equations | 0163687f5af5297828049b2c5bb3d3c094505af1 | [
"Apache-2.0"
] | null | null | null | dual_derivation_van_der_Pol.ipynb | junohkubo/Koopman_from_equations | 0163687f5af5297828049b2c5bb3d3c094505af1 | [
"Apache-2.0"
] | null | null | null | dual_derivation_van_der_Pol.ipynb | junohkubo/Koopman_from_equations | 0163687f5af5297828049b2c5bb3d3c094505af1 | [
"Apache-2.0"
] | null | null | null | 32.745363 | 404 | 0.483366 | true | 4,023 | Qwen/Qwen-72B | 1. YES
2. YES | 0.70253 | 0.798187 | 0.56075 | __label__eng_Latn | 0.308141 | 0.14114 |
# Chapter 10 - Position and Momentum
We can start using sympy to handle symbolic math (integrals and other calculus):
```python
from sympy import *
```
```python
init_printing(use_unicode=True)
```
```python
x, y, z = symbols('x y z', real=True)
a, c = symbols('a c', nonzero=True, real=True)
```
```python
integrate?
```
There are two ways to use the `integrate` function. In one line, like `integrate(x,(x,0,1))` or by naming an expression and then integrating it over a range:
```
A = (c*cos((pi*x)/(2.0*a)))**2
A.integrate((x,-a,a),conds='none')
```
We'll use both, at different times. For longer expressions, the second form can be easier to read and write.
First, just try the following, then we'll re-create some examples in the book.
```python
integrate(x,(x,0,1))
```
```python
integrate(x**2,(x,0,1))
```
The cell below will return an odd set of conditions on the result. This is because the solver doesn't want to assume anything about `a`, and there is a special case where the answer would be different. If you look closely though, that special case isn't physically realistic, so to ignore these special conditions we add `conds='none'`. The next cell down does what you'd expect. From here on out, just add this to the `integrate` function and we'll get what we expect.
```python
A = (c*cos((pi*x)/(2.0*a)))**2
A.integrate((x,-a,a))
```
```python
A = (c*cos((pi*x)/(2.0*a)))**2
A.integrate((x,-a,a), conds='none')
```
So this tells us the normalization constant should be $c=\frac{1}{\sqrt{a}}$. Check that it is normalized if we do that:
```python
psi = 1/sqrt(a)*cos((pi*x)/(2.0*a)) # notice we can name the expression something useful.
B = psi**2
B.integrate( (x,-a,a), conds='none')
```
Because `psi` is a real function, we can calculate expectation values by integrating over $x$ or $x^2$ with `psi**2`:
```python
C = x*psi**2
C.integrate( (x,-a,a), conds='none')
```
```python
D = x**2 * psi**2
E = D.integrate( (x,-a,a), conds='none')
```
```python
E.n() # the .n() method approximates the numerical part. You can look at the full expression below.
```
```python
E
```
## Example 10.2
```python
h = Symbol('hbar', real=True)
```
Use the `diff` function to take a derivative of a symbolic expression. For example:
```python
diff(x**2, x)
```
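For instance, `diff` can enter an expectation-value integral for the wavefunction `psi` defined above. The sketch below is only one possible approach (not necessarily the intended solution): it computes $\langle p\rangle$ and $\langle p^2\rangle$ with the momentum operator $-i\hbar\,d/dx$.
```python
# <p> vanishes because psi is real and even
p_avg = (-I*h*psi*diff(psi, x)).integrate((x, -a, a), conds='none')
# <p^2> = hbar**2 * pi**2 / (4 a**2)
p2_avg = (-h**2*psi*diff(psi, x, 2)).integrate((x, -a, a), conds='none')
p_avg, simplify(p2_avg)
```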
```python
# Solution goes here
```
```python
# Solution goes here
```
## Example 10.3
```python
p = Symbol('p', real=True)
```
```python
# Solution goes here
```
```python
# Solution goes here
```
```python
```
| bb618fbbeab2fb6ad662766143a601decd274e98 | 21,721 | ipynb | Jupyter Notebook | Chapter 10 - Position & Momentum_blank.ipynb | mniehus/QMlabs | bf9ba41bafdac7625ce1810d67d79072219b32b9 | [
"MIT"
] | 28 | 2016-08-21T16:59:18.000Z | 2021-12-21T03:22:14.000Z | Chapter 10 - Position & Momentum_blank.ipynb | mniehus/QMlabs | bf9ba41bafdac7625ce1810d67d79072219b32b9 | [
"MIT"
] | 1 | 2020-04-23T16:37:40.000Z | 2020-04-23T17:42:01.000Z | Chapter 10 - Position & Momentum_blank.ipynb | mniehus/QMlabs | bf9ba41bafdac7625ce1810d67d79072219b32b9 | [
"MIT"
] | 16 | 2015-09-27T04:35:02.000Z | 2022-02-16T23:03:34.000Z | 45.728421 | 3,922 | 0.718291 | true | 750 | Qwen/Qwen-72B | 1. YES
2. YES | 0.932453 | 0.928409 | 0.865698 | __label__eng_Latn | 0.984076 | 0.84964 |
```python
import numpy as np
import sympy as sp
```
```python
def newton_1d_step(f_str, x_prev):
    """One Newton step for 1D optimization: x_new = x_prev - f'(x_prev) / f''(x_prev)."""
    x = sp.symbols('x') # We will assume that you use the symbol x
    formula = sp.sympify(f_str)
    df = sp.diff(formula, x)    # first derivative
    ddf = sp.diff(df, x)        # second derivative
    return x_prev - df.subs(x, x_prev) / ddf.subs(x, x_prev)
```
```python
# Test case
tol = 10**-7
expected = 0.239999999617
actual = newton_1d_step('4*sin(x**6)', 0.3)
assert abs(expected-actual) < tol
```
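A possible usage sketch that iterates the step until the update stops changing much (the starting point matches the test case above; the tolerance and iteration cap are arbitrary):
```python
x_k = 0.3
for _ in range(50):
    x_next = newton_1d_step('4*sin(x**6)', x_k)
    if abs(x_next - x_k) < 1e-8:
        break
    x_k = x_next
print(float(x_k))  # the iterates slowly approach the stationary point at x = 0
```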
| ed7bbac1f96e4e178ba736a2a4722090184a393b | 1,311 | ipynb | Jupyter Notebook | newton_1d_optimization.ipynb | Racso-3141/uiuc-cs357-fa21-scripts | e44f0a1ea4eb657cb77253f1db464d52961bbe5e | [
"MIT"
] | 10 | 2021-11-02T05:56:10.000Z | 2022-03-03T19:25:19.000Z | newton_1d_optimization.ipynb | Racso-3141/uiuc-cs357-fa21-scripts | e44f0a1ea4eb657cb77253f1db464d52961bbe5e | [
"MIT"
] | null | null | null | newton_1d_optimization.ipynb | Racso-3141/uiuc-cs357-fa21-scripts | e44f0a1ea4eb657cb77253f1db464d52961bbe5e | [
"MIT"
] | 3 | 2021-10-30T15:18:01.000Z | 2021-12-10T11:26:43.000Z | 20.809524 | 75 | 0.517162 | true | 163 | Qwen/Qwen-72B | 1. YES
2. YES | 0.92944 | 0.798187 | 0.741867 | __label__eng_Latn | 0.494862 | 0.561938 |
```
from sympy import *
import numpy as np
import math
import mpmath  # generalized Laguerre polynomial used in the smoothing function
from matplotlib.pylab import plot, title, xlabel, ylabel, legend, linspace  # bare plotting names used below
```
<h2>Strutinsky Energy Averaging Method</h2>
Define a callable class for computing the smooth part of the density of states $\tilde{g}_m(E)$ with curvature
correction of order $2M$.
```
class SmoothDOS(object):
def __init__(self, energies):
self.energies = energies
def gaussian_smoothing_func(self, M, x):
return (mpmath.laguerre(M,0.5,x**2)*exp(-x**2)/sqrt(pi)).evalf()
def __call__(self, e, M=3, gamma=1.0):
return sum(self.gaussian_smoothing_func(M, (e-ei)/gamma)/gamma for ei in self.energies)
```
```
def avgf(M, x):
return (mpmath.laguerre(M,0.5,x**2)*exp(-x**2)/sqrt(pi)).evalf()
```
```
def smooth_dos(gamma, M, e, energies):
return sum(avgf(M, (e-ei)/gamma)/gamma for ei in energies)
```
<h2>1D Simple Harmonic Oscillator DOS</h2>
```
def sho_smooth_dos(e):
"""Compute the exact smooth DOS."""
return 1
```
```
def sho_spectrum(nmax):
"""Compute the first nmax energies of the 1D SHO."""
return [n+0.5 for n in range(nmax)]
```
```
sho_evalues = np.linspace(0.0,20,100)
```
```
sho_dos = SmoothDOS(sho_spectrum(30))
```
```
sho_exact_dos = [sho_smooth_dos(e) for e in sho_evalues]
```
```
sho_approx_dos = [sho_dos(e,M=3,gamma=10.0) for e in sho_evalues]
```
```
plot(sho_evalues, sho_exact_dos, label="exact")
plot(sho_evalues, sho_approx_dos, label="approx")
title("Smooth part of the DOS for the 1D SHO")
xlabel("Energy"); ylabel("$g(E)$")
legend(loc=4)
```
Note how the exact smoothed DOS and the Strutinsky approximation agree once the energy is away from
0. In general, these are edge effects and are also present at the right limit if you don't use enough energy
levels. Here we have used 30, so the right side doesn't show these artifacts.
<h2>1D Particle in a BOX (PIAB) DOS</h2>
```
def piab_smooth_dos(e):
"""Compute the exact smooth DOS."""
return 1.0/(2.0*math.sqrt(e*math.pi**2/2))
```
```
def piab_spectrum(nmax):
"""Compute the first nmax energies of the 1D PIAB."""
return [0.5*math.pi**2*(n+1)**2 for n in range(nmax)]
```
```
piab_evalues = linspace(1.0,1000.0,100)
```
```
piab_dos = SmoothDOS(piab_spectrum(20))
```
```
piab_exact_dos = [piab_smooth_dos(e) for e in piab_evalues]
```
```
piab_approx_dos = [piab_dos(e, M=4, gamma=150.0) for e in piab_evalues]
```
```
plot(piab_evalues, piab_exact_dos, label="exact")
plot(piab_evalues, piab_approx_dos, label="approx")
title("Smooth part of the DOS for the 1D PIAB")
xlabel("Energy"); ylabel("$g(E)$")
legend(loc=1)
```
```
```
| f62e593d182a5923e1884f315a5d4e3347c93448 | 50,734 | ipynb | Jupyter Notebook | docs/examples/notebooks/smooth_dos.ipynb | tinyclues/ipython | 71e32606b0242772b81c9be0d40751ba47d95f2c | [
"BSD-3-Clause-Clear"
] | 1 | 2016-05-26T10:57:18.000Z | 2016-05-26T10:57:18.000Z | docs/examples/notebooks/smooth_dos.ipynb | adgaudio/ipython | a924f50c0f7b84127391f1c396326258c2b303e2 | [
"BSD-3-Clause-Clear"
] | null | null | null | docs/examples/notebooks/smooth_dos.ipynb | adgaudio/ipython | a924f50c0f7b84127391f1c396326258c2b303e2 | [
"BSD-3-Clause-Clear"
] | null | null | null | 251.158416 | 21,315 | 0.850771 | true | 844 | Qwen/Qwen-72B | 1. YES
2. YES | 0.937211 | 0.800692 | 0.750417 | __label__eng_Latn | 0.780782 | 0.581803 |
```python
import matplotlib.pyplot as plt
import numpy as np
x0 = np.linspace(0,8,100)
fig = plt.figure()
ax = fig.add_subplot()
ax.set_aspect('equal')
ax.set_xlim(xmin=0, xmax= 10)
ax.set_ylim(ymin=0, ymax= 10)
ax.quiver(0,0,7,7,scale=1,units='xy')
ax.text(7,7,'v',fontsize=16)
ax.quiver(0,0,3,8,scale=1,units='xy')
ax.text(3,8.3,'u',fontsize=16)
ax.quiver(0,0,5.5,5.5,scale=1,units='xy',color='r')
ax.text(3,5,r'$u_{\parallel_v}$',fontsize=16)
ax.quiver(5.5,5.5,3-(5.5),8-(5.5),scale=1,units='xy',color='r')
ax.text(4.3,7,r'$u_{\perp_v}$',fontsize=16)
ax.quiver(0,0,8,3,scale=1,units='xy')
ax.text(8,3,r"u'",fontsize=16)
ax.quiver(11/2,11/2,8-11/2,3-11/2,scale=1,units='xy',color='magenta')
ax.text(6.5,4.5,r'$-u_{\perp_v}$',fontsize=16)
import clifford.g2 as g2
# paru -S python-numba
# pip install clifford
u = 3*g2.e1 + 8*g2.e2
v = 7*g2.e1 + 7*g2.e2
(u|v)*v.inv()
(u^v)*v.inv()
v*u*v.inv()
```
(8.0^e1) + (3.0^e2)
# Projection in $G_{(\mathbb{R}_2)}$
> ## $ \begin{array}{}uv + vu = 2(u\cdot v) \\
^{Proj_v(u)}_{u=u_{\parallel_v}+u_{\perp_v}}{\huge\color{red}{g}}^{u\cdot v v^{-1}}_{u\cdot v = u_{\parallel_v}v}
\end{array}
\left \{
\begin {array}{}
uv = (u \cdot v) + (u \wedge v) \\
vu = (u \cdot v) - (u \wedge v) \\
u_{\parallel_v}v = u \cdot v\, \because x \parallel y \iff x \wedge y = 0 \\
uv + vu = 2(u_{\parallel_v}v) \\
\frac{1}{2}(uv + vu) = u_{\parallel_v}v \\
u \cdot v = u_{\parallel_v}v \\
u \cdot v v^{-1} = u_{\parallel_v} \quad \because v^{-1} = \frac{v}{||v||^2}\\
\frac{u \cdot v}{v \cdot v} v = u_{\parallel_v} \quad \because v^2 = v \cdot v ,\quad v \parallel v\\
\end{array}\right.$
> ### $ u = u_{\parallel_v} + u_{\perp_v}$
> ### $ uv = u_{\parallel_v}v + u_{\perp_v}v$
> ### $ uv = u_{\parallel_v}v - vu_{\perp_v}$
> ### $ uv = u_{\parallel_v}v - v(u - u_{\parallel_v})$
> ### $ uv = u_{\parallel_v}v - vu + vu_{\parallel_v}$
> ### $ uv = u_{\parallel_v}v - vu + u_{\parallel_v}v$
> ### $ uv + vu = u_{\parallel_v}v + u_{\parallel_v}v$
> ### $ uv + vu = 2u_{\parallel_v}v$
> ### $ \frac{1}{2}(uv + vu) = u_{\parallel_v}v$
> ### $ u \cdot v = u_{\parallel_v}v$
> ### $
^{P_j(\alpha x + \beta y)}_{P_j(\alpha x) + P_j(\beta y)}{\huge\color{red}{p_{roj}}}^{\alpha P_j(x)+\beta P_j(y)}_{}
$
# Rejection in $G_{(\mathbb{R}_2)}$
> ## $\begin{array}{}
uv - vu = 2(u \wedge v)\\
u \wedge v = \frac{1}{2}(uv - vu)\\
u_{\perp_v}v = u_{\perp_v} \wedge v \\
^{Rej_v(u)}_{u=u_{\parallel_v}+u_{\perp_v}}{\huge\color{red}{g}}^{u\wedge v v^{-1}}_{u\wedge v = u_{\perp_v}v}
\end{array}
\left \{
\begin {array}{}
uv = (u \cdot v) + (u \wedge v) \\
vu = (u \cdot v) - (u \wedge v) \\
u_{\perp_v}v = u \wedge v\, \because u \perp v \iff u \cdot v = 0 \\
uv - vu = 2(u_{\perp_v}v) \\
\frac{1}{2}(uv - vu) = u_{\perp_v}v \\
u \wedge v = u_{\perp_v}v \\
u \wedge v v^{-1} = u_{\perp_v} \quad \because v^{-1} = \frac{v}{||v||^2}\\
\frac{u \wedge v}{v \cdot v} v = u_{\perp_v} \quad \because v^2 = v \cdot v ,\quad v \parallel v\\
\end{array}\right.$
# Reflection in $G_{(\mathbb{R}_2)}$
> # $ ^{Ref_v(u)}_{u=u_{\parallel_v} + u_{\perp_v}}{\Large\color{red}{g}}^{vuv^{-1}}_{u'=u_{\parallel_v}-u_{\perp_v}}$
> ## $ u_{\parallel_v} - u_{\perp_v} =
u \cdot v v^{-1} - u \wedge v v^{-1} = (u \cdot v - u \wedge v ) v^{-1} = (v \cdot u + v \wedge u)v^{-1} = vuv^{-1}$
> ## $ u' = u_{\parallel_v} - u_{\perp_v} \\
vu' = vu_{\parallel_v} - vu_{\perp_v} \\
vu' = u_{\parallel_v}v + u_{\perp_v}v \\
vu' = uv\\
v^{-1}vu' = v^{-1}uv\\
u' = v^{-1}uv $
> # $ ^{Ref_v(u)}_{u=u_{\parallel_v} + u_{\perp_v}}{\Large\color{red}{g}}^{v^{-1}uv}_{u'=u_{\parallel_v}-u_{\perp_v}}$
```python
import clifford.g2 as g2
u = g2.e1 + 4*g2.e2
v = 3*g2.e1 + g2.e2
#u.inv()*v.inv() == (u*v).inv()
# (uv)^{-1}
v.inv()*u*v == v*u*v.inv()
```
0.04118 - (0.06471^e12)
# Rotation in $G_{\mathbb{R}_2}$
> ## $
^{Rot_v(\theta)}_{I=e_1e_2}{\Large\color{red}{g}}^{ve^{I\theta}}_{e^{-I\theta}v}
$
>> ### need bivector I
>>> ### $ \color{red}{Bivector} \left \{ \begin{array}{}
e_1e_2 \\ e_1 \wedge e_2 \\ uv = u \cdot v + u \wedge v \\
\Large\therefore\, {^{Dref_{\hat{v}\hat{w}}(u)}_{}{\Large\color{red}{g}}^{(w^{-1})(v^{-1})u(v)(w)}_{(wv)^{-1}u(vw)}}
\end{array}\right .$
```python
v*w
```
0.5403 + (0.5403^e1) + (0.84147^e12)
```python
# import clifford as cl
import clifford.g3 as g3
import sympy as sm
import numpy as np
v = np.sin(np.pi/2)*np.cos(0)*g3.e1 + np.sin(np.pi/2)*np.sin(0)*g3.e2 + np.cos(np.pi/2)*g3.e3
w = np.sin(2)*np.cos(1)*g3.e1 + np.sin(2)*np.sin(1)*g3.e2 + np.cos(2)*g3.e3
#w = (1/np.sqrt(2))*g3.e1 + (1/np.sqrt(2))*g3.e2
u = g3.e1 + g3.e3
(v*w).inv()*u*(v*w)
```
-(0.10836^e1) + (1.38865^e2) + (0.24474^e3)
```python
w*v*u*v*w
```
-(0.10836^e1) + (1.38865^e2) + (0.24474^e3)
```python
```
(1.0^e1)
```python
(v*w).mag2()
```
1.0
```python
```
| 7537df65e5b7cf4e8c81149d07be6bfe44ca0d3f | 20,295 | ipynb | Jupyter Notebook | python/GeometryAG/Projection.ipynb | karng87/nasm_game | a97fdb09459efffc561d2122058c348c93f1dc87 | [
"MIT"
] | null | null | null | python/GeometryAG/Projection.ipynb | karng87/nasm_game | a97fdb09459efffc561d2122058c348c93f1dc87 | [
"MIT"
] | null | null | null | python/GeometryAG/Projection.ipynb | karng87/nasm_game | a97fdb09459efffc561d2122058c348c93f1dc87 | [
"MIT"
] | null | null | null | 55.60274 | 10,160 | 0.717221 | true | 2,264 | Qwen/Qwen-72B | 1. YES
2. YES | 0.812867 | 0.839734 | 0.682592 | __label__yue_Hant | 0.158797 | 0.424222 |
```python
import numpy as np
from sympy import cos,sin
import math
#------------------------------------------------------------------------------------------------------------------------
matriz = [0,0,0,0,0,0],[0,0,0,0,0,0],[0,0,0,0,0,0],[0,0,0,0,0,0],[0,0,0,0,0,0],[0,0,0,0,0,0],[0,0,0,0,0,0],[0,0,0,0,0,0]
x=0;
Numeros = ['0','0.00001','0.0001','0.001','0.01','0.1','1','10']
#------------------------------------------------------------------------------------------------------------------------
for i in range (8):
for e in range (5):
if e == 0:
matriz[i][e]=cos(x)
if e == 1:
matriz[i][e]=cos(0)-x*sin(0)
if e == 2:
matriz[i][e]=cos(0)-x*sin(0)-((x**2)/2)*cos(0)
if e == 3:
matriz[i][e]=cos(0)-x*sin(0)-((x**2)/2)*cos(0)+((x**3)/6)*sin(0)
if e == 4:
matriz[i][e]=cos(0)-x*sin(0)-((x**2)/2)*cos(0)+((x**3)/6)*sin(0)+((x**4)/24)*cos(0)
if e == 5:
matriz[i][e]=cos(0)-x*sin(0)-((x**2)/2)*cos(0)+((x**3)/6)*sin(0)+((x**4)/24)*cos(0)-((x**5)/120)*sin(0)
if x==0:
x=x+0.00001
else:
x=x*10
#------------------------------------------------------------------------------------------------------------------------
print('X\tExacto\t\t\tAprox.1\tAprox.2\tAprox.3\t\tAprox.4\t\tAprox.5')
print('0\t',cos(0),'\t\t\t',matriz[0][0],'\t',matriz[0][1],'\t',matriz[0][2],'\t\t',matriz[0][3],'\t\t',matriz[0][4])
print('0.00001\t',cos(0.00001),'\t\t',matriz[1][0],'\t',matriz[1][1],'\t',matriz[1][2],'\t',matriz[1][3],'\t',matriz[1][4],)
print('0.0001\t',cos(0.0001),'\t\t',matriz[2][0],'\t',matriz[2][1],'\t',matriz[2][2],'\t',matriz[2][3],'\t',matriz[2][4],)
print('0.001\t',cos(0.001),'\t',matriz[3][0],'\t',matriz[3][1],'\t',matriz[3][2],'\t',matriz[3][3],'\t',matriz[3][4],)
print('0.01\t',cos(0.01),'\t',matriz[4][0],'\t',matriz[4][1],'\t',matriz[4][2],'\t',matriz[4][3],'\t',matriz[4][4],)
print('0.1\t',cos(0.1),'\t',matriz[5][0],'\t',matriz[5][1],'\t',matriz[5][2],'\t\t',matriz[5][3],'\t\t',matriz[5][4],)
print('1\t', cos(1),'\t',matriz[6][0],'\t',matriz[6][1],'\t',matriz[6][2],'\t\t',matriz[6][3],'\t\t',matriz[6][4],)
print('10\t', cos(10),'\t',matriz[7][0],'\t',matriz[7][1],'\t',matriz[7][2],'\t\t',matriz[7][3],'\t\t',matriz[7][4],)
```
X Exacto Aprox.1 Aprox.2 Aprox.3 Aprox.4 Aprox.5
0 1 1 1 1 1 1
0.00001 0.999999999950000 0.999999999950000 1 0.999999999950000 0.999999999950000 0.999999999950000
0.0001 0.999999995000000 0.999999995000000 1 0.999999995000000 0.999999995000000 0.999999995000000
0.001 0.999999500000042 0.999999500000042 1 0.999999500000000 0.999999500000000 0.999999500000042
0.01 0.999950000416665 0.999950000416665 1 0.999950000000000 0.999950000000000 0.999950000416667
0.1 0.995004165278026 0.995004165278026 1 0.995000000000000 0.995000000000000 0.995004166666667
1 cos(1) 0.540302305868140 1 0.500000000000000 0.500000000000000 0.541666666666667
10 cos(10) -0.839071529076452 1 -49.0000000000000 -49.0000000000000 367.666666666667
```python
```
| 98ca85f70b1f6af2fecb64278c1a0b081ba8123f | 4,504 | ipynb | Jupyter Notebook | tarea2/tarea2-06.ipynb | jmencisom/nb-mn | 799550d5b6524c36ff339067e233c4508b964866 | [
"MIT"
] | 2 | 2018-04-20T13:54:38.000Z | 2018-06-09T01:43:45.000Z | tarea2/tarea2-06.ipynb | jmencisom/nb-mn | 799550d5b6524c36ff339067e233c4508b964866 | [
"MIT"
] | 5 | 2018-03-02T13:08:32.000Z | 2018-03-05T16:02:19.000Z | tarea2/tarea2-06.ipynb | jmencisom/nb-mn | 799550d5b6524c36ff339067e233c4508b964866 | [
"MIT"
] | null | null | null | 45.959184 | 140 | 0.448046 | true | 1,419 | Qwen/Qwen-72B | 1. YES
2. YES | 0.875787 | 0.712232 | 0.623764 | __label__yue_Hant | 0.030531 | 0.287543 |
# Session 10 - Unsupervised Learning
## Contents
- [Principal Components Analysis](#Principal-Components-Analysis)
- [Clustering Methods](#Clustering-Methods)
```python
# Import
import pandas as pd
import numpy as np
import seaborn as sns
import time
from sklearn.preprocessing import scale
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from scipy.cluster import hierarchy
from scipy.cluster.hierarchy import linkage, dendrogram, cut_tree
```
```python
# Import matplotlib for graphs
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import axes3d
from IPython.display import clear_output
# Set global parameters
%matplotlib inline
plt.style.use('seaborn-white')
plt.rcParams['lines.linewidth'] = 3
plt.rcParams['figure.figsize'] = (10,6)
plt.rcParams['figure.titlesize'] = 20
plt.rcParams['axes.titlesize'] = 18
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['legend.fontsize'] = 14
```
The difference between *supervised learning* and *unsupervised learning* is that in the first case we have a variable $y$ which we want to predict, given a set of variables $\{ X_1, . . . , X_p \}$.
In *unsupervised learning* are not interested in prediction, because we do not have an associated response variable $y$. Rather, the goal is to discover interesting things about the measurements on $\{ X_1, . . . , X_p \}$. Question that we can answer are:
- Is there an informative way to visualize the data?
- Can we discover subgroups among the variables or among the observations?
We are going to answe these questions with two main methods, respectively:
- Principal Components Analysis
- Clustering
## Principal Component Analysis
Suppose that we wish to visualize $n$ observations with measurements on a set of $p$ features, $\{X_1, . . . , X_p\}$, as part of an exploratory data analysis.
We could do this by examining two-dimensional scatterplots of the data, each of which contains the n observations’ measurements on two of the features. However, there are $p(p−1)/2$ such scatterplots; for example,
with $p = 10$ there are $45$ plots!
PCA provides a tool to do just this. It finds a low-dimensional represen- tation of a data set that contains as much as possible of the variation.
The **first principal component** of a set of features $\{X_1, . . . , X_p\}$ is the normalized linear combination of the features
$$
Z_1 = \phi_{11} X_1 + \phi_{21} X_2 + ... + \phi_{p1} X_p
$$
that has the largest variance. By normalized, we mean that $\sum_{i=1}^p \phi^2_{i1} = 1$.
In other words, the first principal component loading vector solves the optimization problem
$$
\underset{\phi_{11}, \ldots, \phi_{p 1}}{\operatorname{max}} \ \left\{\frac{1}{n} \sum_{i=1}^{n}\left(\sum_{j=1}^{p} \phi_{j 1} x_{i j}\right)^{2}\right\} \quad \text { subject to } \quad \sum_{j=1}^{p} \phi_{j 1}^{2}=1
$$
The objective that we are maximizing is just the sample variance of the $n$ values of $z_{i1}$.
After the first principal component $Z_1$ of the features has been determined, we can find the second principal component $Z_2$. The **second principal component** is the linear combination of $\{X_1, . . . , X_p\}$ that has maximal variance out of all linear combinations that are *uncorrelated* with $Z_1$.
We illustrate the use of PCA on the `USArrests` data set. For each of the 50 states in the United States, the data set contains the number of arrests per $100,000$ residents for each of three crimes: `Assault`, `Murder`, and `Rape.` We also record the percent of the population in each state living in urban areas, `UrbanPop`.
```python
# Load crime data
df = pd.read_csv('data/USArrests.csv', index_col=0)
df.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Murder</th>
<th>Assault</th>
<th>UrbanPop</th>
<th>Rape</th>
</tr>
<tr>
<th>State</th>
<th></th>
<th></th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<th>Alabama</th>
<td>13.2</td>
<td>236</td>
<td>58</td>
<td>21.2</td>
</tr>
<tr>
<th>Alaska</th>
<td>10.0</td>
<td>263</td>
<td>48</td>
<td>44.5</td>
</tr>
<tr>
<th>Arizona</th>
<td>8.1</td>
<td>294</td>
<td>80</td>
<td>31.0</td>
</tr>
<tr>
<th>Arkansas</th>
<td>8.8</td>
<td>190</td>
<td>50</td>
<td>19.5</td>
</tr>
<tr>
<th>California</th>
<td>9.0</td>
<td>276</td>
<td>91</td>
<td>40.6</td>
</tr>
</tbody>
</table>
</div>
```python
# Scale data
X_scaled = pd.DataFrame(scale(df), index=df.index, columns=df.columns).values
```
```python
# Fit PCA with 2 components
pca2 = PCA(n_components=2).fit(X_scaled)
```
```python
# Get weights
weights = pca2.components_.T
df_weights = pd.DataFrame(weights, index=df.columns, columns=['PC1', 'PC2'])
df_weights
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>PC1</th>
<th>PC2</th>
</tr>
</thead>
<tbody>
<tr>
<th>Murder</th>
<td>0.535899</td>
<td>0.418181</td>
</tr>
<tr>
<th>Assault</th>
<td>0.583184</td>
<td>0.187986</td>
</tr>
<tr>
<th>UrbanPop</th>
<td>0.278191</td>
<td>-0.872806</td>
</tr>
<tr>
<th>Rape</th>
<td>0.543432</td>
<td>-0.167319</td>
</tr>
</tbody>
</table>
</div>
```python
# Transform X to get the principal components
X_dim2 = pca2.transform(X_scaled)
df_dim2 = pd.DataFrame(X_dim2, columns=['PC1', 'PC2'], index=df.index)
df_dim2.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>PC1</th>
<th>PC2</th>
</tr>
<tr>
<th>State</th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<th>Alabama</th>
<td>0.985566</td>
<td>1.133392</td>
</tr>
<tr>
<th>Alaska</th>
<td>1.950138</td>
<td>1.073213</td>
</tr>
<tr>
<th>Arizona</th>
<td>1.763164</td>
<td>-0.745957</td>
</tr>
<tr>
<th>Arkansas</th>
<td>-0.141420</td>
<td>1.119797</td>
</tr>
<tr>
<th>California</th>
<td>2.523980</td>
<td>-1.542934</td>
</tr>
</tbody>
</table>
</div>
```python
fig, ax1 = plt.subplots(figsize=(10,10))
ax1.set_title('Figure 10.1');
# Plot Principal Components 1 and 2
for i in df_dim2.index:
ax1.annotate(i, (df_dim2.PC1.loc[i], -df_dim2.PC2.loc[i]), ha='center', fontsize=14)
# Plot reference lines
m = np.max(np.abs(df_dim2.values))*1.2
ax1.hlines(0,-m,m, linestyles='dotted', colors='grey')
ax1.vlines(0,-m,m, linestyles='dotted', colors='grey')
ax1.set_xlabel('First Principal Component')
ax1.set_ylabel('Second Principal Component')
ax1.set_xlim(-m,m); ax1.set_ylim(-m,m)
# Plot Principal Component loading vectors, using a second y-axis.
ax1b = ax1.twinx().twiny()
ax1b.set_ylim(-1,1); ax1b.set_xlim(-1,1)
for i in df_weights[['PC1', 'PC2']].index:
ax1b.annotate(i, (df_weights.PC1.loc[i]*1.05, -df_weights.PC2.loc[i]*1.05), color='orange', fontsize=16)
ax1b.arrow(0,0,df_weights.PC1[i], -df_weights.PC2[i], color='orange', lw=2)
```
### PCA and spectral analysis
In case you haven't noticed, calculating principal components, is equivalent to calculating the eigenvectors of the design matrix $X'X$, i.e. the variance-covariance matrix of $X$. Indeed what we performed above is a decomposition of the variance of $X$ into orthogonal components.
The constrained maximization problem above can be re-written in matrix notation as
$$
\max \ \phi' X'X \phi \quad \text{ s. t. } \quad \phi'\phi = 1
$$
Which has the following dual representation
$$
\mathcal L (\phi, \lambda) = \phi' X'X \phi - \lambda (\phi'\phi - 1)
$$
Which gives first order conditions
$$
\begin{align}
& \frac{\partial \mathcal L}{\partial \lambda} = \phi'\phi - 1 \\
& \frac{\partial \mathcal L}{\partial \phi} = 2 X'X \phi - 2 \lambda \phi
\end{align}
$$
Setting the derivatives to zero at the optimum, we get
$$
\begin{align}
& \phi'\phi = 1 \\
& X'X \phi = \lambda \phi
\end{align}
$$
Thus, $\phi$ is an **eigenvector** of the covariance matrix $X'X$, and the maximizing vector will be the one associated with the largest **eigenvalue** $\lambda$.
We can now double-check it using `numpy` linear algebra package.
```python
eigenval, eigenvec = np.linalg.eig(X_scaled.T @ X_scaled)
data = np.concatenate((eigenvec,eigenval.reshape(1,-1)))
idx = list(df.columns) + ['Eigenvalue']
df_eigen = pd.DataFrame(data, index=idx, columns=['PC1', 'PC2','PC3','PC4'])
df_eigen
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>PC1</th>
<th>PC2</th>
<th>PC3</th>
<th>PC4</th>
</tr>
</thead>
<tbody>
<tr>
<th>Murder</th>
<td>0.535899</td>
<td>0.418181</td>
<td>0.649228</td>
<td>-0.341233</td>
</tr>
<tr>
<th>Assault</th>
<td>0.583184</td>
<td>0.187986</td>
<td>-0.743407</td>
<td>-0.268148</td>
</tr>
<tr>
<th>UrbanPop</th>
<td>0.278191</td>
<td>-0.872806</td>
<td>0.133878</td>
<td>-0.378016</td>
</tr>
<tr>
<th>Rape</th>
<td>0.543432</td>
<td>-0.167319</td>
<td>0.089024</td>
<td>0.817778</td>
</tr>
<tr>
<th>Eigenvalue</th>
<td>124.012079</td>
<td>49.488258</td>
<td>8.671504</td>
<td>17.828159</td>
</tr>
</tbody>
</table>
</div>
The spectral decomposition of the variance of $X$ generates a set of orthogonal vectors (eigenvectors) with different magnitudes (eigenvalues). The eigenvalues tell us the amount of variance of the data in that direction.
If we combine the eigenvectors together, we form a projection matrix $P$ that we can use to transform the original variables: $\tilde X = P X$
```python
X_transformed = X_scaled @ eigenvec
df_transformed = pd.DataFrame(X_transformed, index=df.index, columns=['PC1', 'PC2','PC3','PC4'])
df_transformed.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>PC1</th>
<th>PC2</th>
<th>PC3</th>
<th>PC4</th>
</tr>
<tr>
<th>State</th>
<th></th>
<th></th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<th>Alabama</th>
<td>0.985566</td>
<td>1.133392</td>
<td>0.156267</td>
<td>-0.444269</td>
</tr>
<tr>
<th>Alaska</th>
<td>1.950138</td>
<td>1.073213</td>
<td>-0.438583</td>
<td>2.040003</td>
</tr>
<tr>
<th>Arizona</th>
<td>1.763164</td>
<td>-0.745957</td>
<td>-0.834653</td>
<td>0.054781</td>
</tr>
<tr>
<th>Arkansas</th>
<td>-0.141420</td>
<td>1.119797</td>
<td>-0.182811</td>
<td>0.114574</td>
</tr>
<tr>
<th>California</th>
<td>2.523980</td>
<td>-1.542934</td>
<td>-0.341996</td>
<td>0.598557</td>
</tr>
</tbody>
</table>
</div>
This is exactly the dataset that we obtained before.
### More on PCA
#### Scaling the Variables
The results obtained when we perform PCA will also depend on whether the variables have been individually scaled. In fact, the variance of a variable depends on its magnitude.
```python
# Variables variance
df.var(axis=0)
```
Murder 18.970465
Assault 6945.165714
UrbanPop 209.518776
Rape 87.729159
dtype: float64
Consequently, if we perform PCA on the unscaled variables, then the first principal component loading vector will have a very large loading for `Assault`, since that variable has by far the highest variance.
```python
# Fit PCA with unscaled varaibles
X = df.values
pca2_u = PCA(n_components=2).fit(X)
```
```python
# Get weights
weights_u = pca2_u.components_.T
df_weights_u = pd.DataFrame(weights_u, index=df.columns, columns=['PC1', 'PC2'])
df_weights_u
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>PC1</th>
<th>PC2</th>
</tr>
</thead>
<tbody>
<tr>
<th>Murder</th>
<td>0.041704</td>
<td>0.044822</td>
</tr>
<tr>
<th>Assault</th>
<td>0.995221</td>
<td>0.058760</td>
</tr>
<tr>
<th>UrbanPop</th>
<td>0.046336</td>
<td>-0.976857</td>
</tr>
<tr>
<th>Rape</th>
<td>0.075156</td>
<td>-0.200718</td>
</tr>
</tbody>
</table>
</div>
```python
# Transform X to get the principal components
X_dim2_u = pca2_u.transform(X)
df_dim2_u = pd.DataFrame(X_dim2_u, columns=['PC1', 'PC2'], index=df.index)
df_dim2_u.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>PC1</th>
<th>PC2</th>
</tr>
<tr>
<th>State</th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<th>Alabama</th>
<td>64.802164</td>
<td>11.448007</td>
</tr>
<tr>
<th>Alaska</th>
<td>92.827450</td>
<td>17.982943</td>
</tr>
<tr>
<th>Arizona</th>
<td>124.068216</td>
<td>-8.830403</td>
</tr>
<tr>
<th>Arkansas</th>
<td>18.340035</td>
<td>16.703911</td>
</tr>
<tr>
<th>California</th>
<td>107.422953</td>
<td>-22.520070</td>
</tr>
</tbody>
</table>
</div>
```python
fig, (ax1,ax2) = plt.subplots(1,2,figsize=(18,9))
# Scaled PCA
for i in df_dim2.index:
ax1.annotate(i, (df_dim2.PC1.loc[i], -df_dim2.PC2.loc[i]), ha='center', fontsize=14)
ax1b = ax1.twinx().twiny()
ax1b.set_ylim(-1,1); ax1b.set_xlim(-1,1)
for i in df_weights[['PC1', 'PC2']].index:
ax1b.annotate(i, (df_weights.PC1.loc[i]*1.05, -df_weights.PC2.loc[i]*1.05), color='orange', fontsize=16)
ax1b.arrow(0,0,df_weights.PC1[i], -df_weights.PC2[i], color='orange', lw=2)
ax1.set_title('Scaled')
# Unscaled PCA
for i in df_dim2_u.index:
ax2.annotate(i, (df_dim2_u.PC1.loc[i], -df_dim2_u.PC2.loc[i]), ha='center', fontsize=14)
ax2b = ax2.twinx().twiny()
ax2b.set_ylim(-1,1); ax2b.set_xlim(-1,1)
for i in df_weights_u[['PC1', 'PC2']].index:
ax2b.annotate(i, (df_weights_u.PC1.loc[i]*1.05, -df_weights_u.PC2.loc[i]*1.05), color='orange', fontsize=16)
ax2b.arrow(0,0,df_weights_u.PC1[i], -df_weights_u.PC2[i], color='orange', lw=2)
ax2.set_title('Unscaled')
# Plot reference lines
for ax,df in zip((ax1,ax2), (df_dim2,df_dim2_u)):
m = np.max(np.abs(df.values))*1.2
ax.hlines(0,-m,m, linestyles='dotted', colors='grey')
ax.vlines(0,-m,m, linestyles='dotted', colors='grey')
ax.set_xlabel('First Principal Component')
ax.set_ylabel('Second Principal Component')
ax.set_xlim(-m,m); ax.set_ylim(-m,m)
```
As predicted, the first principal component loading vector places almost all of its weight on `Assault`, while the second principal component loading vector places almost all of its weight on `UrpanPop`. Comparing this to the left-hand plot, we see that scaling does indeed have a substantial effect on the results obtained. However, this result is simply a consequence of the scales on which the variables were measured.
#### The Proportion of Variance Explained
We can now ask a natural question: how much of the information in a given data set is lost by projecting the observations onto the first few principal components? That is, how much of the variance in the data is not contained in the first few principal components? More generally, we are interested in knowing the **proportion of variance explained (PVE)** by each principal component.
```python
# Four components
pca4 = PCA(n_components=4).fit(X_scaled)
```
```python
# Variance of the four principal components
pca4.explained_variance_
```
array([2.53085875, 1.00996444, 0.36383998, 0.17696948])
```python
# As a percentage of the total variance
pca4.explained_variance_ratio_
```
array([0.62006039, 0.24744129, 0.0891408 , 0.04335752])
In the `Arrest` dataset, the first principal component explains $62.0\%$ of the variance in the data, and the next principal component explains $24.7\%$ of the variance. Together, the first two principal components explain almost $87\%$ of the variance in the data, and the last two principal components explain only $13\%$ of the variance.
```python
fig, (ax1, ax2) = plt.subplots(1,2, figsize=(12,5))
fig.suptitle('Figure 10.2');
# Relative
ax1.plot([1,2,3,4], pca4.explained_variance_ratio_)
ax1.set_ylabel('Prop. Variance Explained')
ax1.set_xlabel('Principal Component');
# Cumulative
ax2.plot([1,2,3,4], np.cumsum(pca4.explained_variance_ratio_))
ax2.set_ylabel('Cumulative Variance Explained');
ax2.set_xlabel('Principal Component');
```
#### Deciding How Many Principal Components to Use
In general, a $n \times p$ data matrix $X$ has $\min\{n − 1, p\}$ distinct principal components. However, we usually are not interested in all of them; rather, we would like to use just the first few principal components in order to visualize or interpret the data.
We typically decide on the number of principal components required to visualize the data by examining a *scree plot*.
However, there is no well-accepted objective way to decide how many principal com- ponents are enough.
## Clustering
Clustering refers to a very broad set of techniques for finding subgroups, or clusters, in a data set. When we cluster the observations of a data set, we seek to partition them into distinct groups so that the observations within each group are quite similar to each other, while observations in different groups are quite different from each other.
In this section we focus on perhaps the two best-known clustering approaches:
1. **K-means clustering**: we seek to partition the observations into a pre-specified clustering number of clusters
2. **Hierarchical clustering**: we do not know in advance how many clusters we want; in fact, we end up with a tree-like visual representation of the observations, called a dendrogram, that allows us to view at once the clusterings obtained for each possible number of clusters, from 1 to n.
### K-Means Clustering
The idea behind K-means clustering is that a good clustering is one for which the within-cluster variation is as small as possible. Hence we want to solve the problem
$$
\underset{C_{1}, \ldots, C_{K}}{\operatorname{minimize}}\left\{\sum_{k=1}^{K} W\left(C_{k}\right)\right\}
$$
where $C_k$ is a cluster and $ W(C_k)$ is a measure of the amount by which the observations within a cluster differ from each other.
There are many possible ways to define this concept, but by far the most common choice involves **squared Euclidean distance**. That is, we define
$$
W\left(C_{k}\right)=\frac{1}{\left|C_{k}\right|} \sum_{i, i^{\prime} \in C_{k}} \sum_{j=1}^{p}\left(x_{i j}-x_{i^{\prime} j}\right)^2
$$
where $|C_k|$ denotes the number of observations in the $k^{th}$ cluster.
### Algorithm
1. Randomly assign a number, from $1$ to $K$, to each of the observations. These serve as initial cluster assignments for the observations.
2. Iterate until the cluster assignments stop changing:
a) For each of the $K$ clusters, compute the cluster centroid. The kth cluster centroid is the vector of the $p$ feature means for the observations in the $k^{th}$ cluster.
b) Assign each observation to the cluster whose centroid is closest (where closest is defined using Euclidean distance).
```python
np.random.seed(123)
# Simulate data
X = np.random.randn(50,2)
X[0:25, 0] = X[0:25, 0] + 3
X[0:25, 1] = X[0:25, 1] - 4
```
```python
fig, ax = plt.subplots(figsize=(6, 5))
fig.suptitle("Baseline")
# Plot
ax.scatter(X[:,0], X[:,1], s=50, alpha=0.5, c='k')
ax.set_xlabel('X0'); ax.set_ylabel('X1');
```
```python
np.random.seed(1)
# Init clusters
K = 2
clusters0 = np.random.randint(K,size=(np.size(X,0)))
```
```python
fig, ax = plt.subplots(figsize=(6, 5))
fig.suptitle("Random assignment")
# Plot
ax.scatter(X[clusters0==0,0], X[clusters0==0,1], s=50, alpha=0.5)
ax.scatter(X[clusters0==1,0], X[clusters0==1,1], s=50, alpha=0.5)
ax.set_xlabel('X0'); ax.set_ylabel('X1');
```
```python
# Compute new centroids
def compute_new_centroids(X, clusters):
K = len(np.unique(clusters))
centroids = np.zeros((K,np.size(X,1)))
for k in range(K):
if sum(clusters==k)>0:
centroids[k,:] = np.mean(X[clusters==k,:], axis=0)
else:
centroids[k,:] = np.mean(X, axis=0)
return centroids
```
```python
# Print
centroids0 = compute_new_centroids(X, clusters0)
print(centroids0)
```
[[ 1.35725989 -2.15281035]
[ 1.84654757 -1.99437838]]
```python
# Plot assignment
def plot_assignment(X, centroids, clusters, d, i):
clear_output(wait=True)
fig, ax = plt.subplots(figsize=(6, 5))
fig.suptitle("Iteration %.0f: inertia=%.1f" % (i,d))
# Plot
ax.clear()
colors = plt.rcParams['axes.prop_cycle'].by_key()['color'];
K = np.size(centroids,0)
for k in range(K):
ax.scatter(X[clusters==k,0], X[clusters==k,1], s=50, c=colors[k], alpha=0.5)
ax.scatter(centroids[k,0], centroids[k,1], marker = '*', s=300, color=colors[k])
ax.set_xlabel('X0'); ax.set_ylabel('X1');
# Show
plt.show();
```
```python
# Plot
plot_assignment(X, centroids0, clusters0, 0, 0)
```
```python
# Assign X to clusters
def assign_to_cluster(X, centroids):
K = np.size(centroids,0)
dist = np.zeros((np.size(X,0),K))
for k in range(K):
dist[:,k] = np.mean((X - centroids[k,:])**2, axis=1)
clusters = np.argmin(dist, axis=1)
# Compute inertia
inertia = 0
for k in range(K):
if sum(clusters==k)>0:
inertia += np.sum((X[clusters==k,:] - centroids[k,:])**2)
return clusters, inertia
```
```python
# Get cluster assignment
[clusters1,d] = assign_to_cluster(X, centroids0)
```
```python
# Plot
plot_assignment(X, centroids0, clusters1, d, 1)
```
We now have all the components to proceed iteratively.
```python
def kmeans_manual(X, K):
# Init
i = 0
d0 = 1e4
d1 = 1e5
clusters = np.random.randint(K,size=(np.size(X,0)))
# Iterate until convergence
while np.abs(d0-d1) > 1e-10:
d1 = d0
centroids = compute_new_centroids(X, clusters)
[clusters, d0] = assign_to_cluster(X, centroids)
plot_assignment(X, centroids, clusters, d0, i)
i+=1
```
```python
# Test
kmeans_manual(X, K)
```
Here the observations can be easily plotted because they are two-dimensional.
If there were more than two variables then we could instead perform PCA
and plot the first two principal components score vectors.
In this example, we knew that there really were two clusters because
we generated the data. However, for real data, in general we do not know
the true number of clusters. We could instead have performed K-means
clustering on this example with `K = 3`. If we do this, K-means clustering will split up the two "real" clusters, since it has no information about them:
```python
# K=3
kmeans_manual(X, 3)
```
The automated function in `sklearn` to persorm $K$-means clustering is `KMeans`.
```python
# SKlearn algorithm
km1 = KMeans(n_clusters=3, n_init=1, random_state=1)
km1.fit(X)
```
KMeans(n_clusters=3, n_init=1, random_state=1)
```python
# Plot
plot_assignment(X, km1.cluster_centers_, km1.labels_, km1.inertia_, km1.n_iter_)
```
As we can see, the results are different in the two algorithms? Why? $K$-means is susceptible to the initial values. One way to solve this problem is to run the algorithm multiple times and report only the best results
To run the `Kmeans()` function in python with multiple initial cluster assignments, we use the `n_init` argument (default: 10). If a value of `n_init` greater than one is used, then K-means clustering will be performed using multiple random assignments, and the `Kmeans()` function will report only the best results.
```python
# 30 runs
km_30run = KMeans(n_clusters=3, n_init=30, random_state=1).fit(X)
plot_assignment(X, km_30run.cluster_centers_, km_30run.labels_, km_30run.inertia_, km_30run.n_iter_)
```
It is generally recommended to always run K-means clustering with a large value of `n_init`, such as 20 or 50 to avoid getting stuck in an undesirable local optimum.
When performing K-means clustering, in addition to using multiple initial cluster assignments, it is also important to set a random seed using the `random_state` parameter. This way, the initial cluster assignments can be replicated, and the K-means output will be fully reproducible.
### Hierarchical Clustering
One potential disadvantage of K-means clustering is that it requires us to pre-specify the number of clusters $K$. Hierarchical clustering is an alternative approach which does not require that we commit to a particular choice of $K$
#### The Dendogram
Hierarchical clustering has an added advantage over K-means clustering in that it results in an attractive tree-based representation of the observations, called a **dendrogram**.
```python
plt.figure(figsize=(25, 10))
plt.title('Hierarchical Clustering Dendrogram')
# calculate full dendrogram
plt.xlabel('sample index')
plt.ylabel('distance')
dendrogram(
linkage(X, "complete"),
leaf_rotation=90., # rotates the x axis labels
leaf_font_size=8., # font size for the x axis labels
)
plt.show()
```
Each leaf of the *dendrogram* represents one observation.
As we move up the tree, some leaves begin to fuse into branches. These correspond to observations that are similar to each other. As we move higher up the tree, branches themselves fuse, either with leaves or other branches. The earlier (lower in the tree) fusions occur, the more similar the groups of observations are to each other.
We can use de *dendogram* to understand how similar two observations are: we can look for the point in the tree where branches containing those two obse rvations are first fused. The height of this fusion, as measured on the vertical axis, indicates how different the two observations are. Thus, observations that fuse at the very bottom of the tree are quite similar to each other, whereas observations that fuse close to the top of the tree will tend to be quite different.
The term **hierarchical** refers to the fact that clusters obtained by cutting the dendrogram at a given height are necessarily nested within the clusters obtained by cutting the dendrogram at any greater height.
#### The Hierarchical Clustering Algorithm
1. Begin with $n$ observations and a measure (such as Euclidean distance) of all the $n(n − 1)/2$ pairwise dissimilarities. Treat each 2 observation as its own cluster.
2. For $i=n,n−1,...,2$
a) Examine all pairwise inter-cluster dissimilarities among the $i$ clusters and identify the **pair of clusters that are least dissimilar** (that is, most similar). Fuse these two clusters. The dissimilarity between these two clusters indicates the height in the dendrogram at which the fusion should be placed.
b) Compute the new pairwise inter-cluster dissimilarities among the $i−1$ remaining clusters.
#### The linkage function
We have a concept of the dissimilarity between pairs of observations, but how do we define the dissimilarity between two clusters if one or both of the clusters contains multiple observations?
The concept of dissimilarity between a pair of observations needs to be extended to a pair of groups of observations. This extension is achieved by developing the notion of **linkage**, which defines the dissimilarity between two groups of observations.
The four most common types of linkage are:
1. **Complete**: Maximal intercluster dissimilarity. Compute all pairwise dissimilarities between the observations in cluster A and the observations in cluster B, and record the largest of these dissimilarities.
2. **Single**: Minimal intercluster dissimilarity. Compute all pairwise dissimilarities between the observations in cluster A and the observations in cluster B, and record the smallest of these dissimilarities.
3. **Average**: Mean intercluster dissimilarity. Compute all pairwise dissimilarities between the observations in cluster A and the observations in cluster B, and record the average of these dissimilarities.
4. **Centroid**: Dissimilarity between the centroid for cluster A (a mean vector of length p) and the centroid for cluster B. Centroid linkage can result in undesirable inversions.
Average, complete, and single linkage are most popular among statisticians. Average and complete linkage are generally preferred over single linkage, as they tend to yield more balanced dendrograms. Centroid linkage is often used in genomics, but suffers from a major drawback in that an inversion can occur, whereby two clusters are fused at a height below either of the individual clusters in the dendrogram. This can lead to difficulties in visualization as well as in interpretation of the dendrogram.
```python
# Init
linkages = [hierarchy.complete(X), hierarchy.average(X), hierarchy.single(X)]
titles = ['Complete Linkage', 'Average Linkage', 'Single Linkage']
```
```python
fig, (ax1,ax2,ax3) = plt.subplots(1,3, figsize=(18,10))
# Plot
for linkage, t, ax in zip(linkages, titles, (ax1,ax2,ax3)):
dendrogram(linkage, ax=ax)
ax.set_title(t)
```
For this data, complete and average linkage generally separates the observations into their correct groups.
| c9d85aa3b6c0835835356452dbe0efaafcd32ac2 | 593,316 | ipynb | Jupyter Notebook | 10_unsupervised.ipynb | albarran/Machine-Learning-for-Economic-Analysis-2020 | eaca29efb8347e51178bdb7fcf90f934d7fee144 | [
"MIT"
] | 1 | 2021-05-14T17:23:16.000Z | 2021-05-14T17:23:16.000Z | 10_unsupervised.ipynb | albarran/Machine-Learning-for-Economic-Analysis-2020 | eaca29efb8347e51178bdb7fcf90f934d7fee144 | [
"MIT"
] | null | null | null | 10_unsupervised.ipynb | albarran/Machine-Learning-for-Economic-Analysis-2020 | eaca29efb8347e51178bdb7fcf90f934d7fee144 | [
"MIT"
] | null | null | null | 339.037714 | 178,032 | 0.923277 | true | 9,193 | Qwen/Qwen-72B | 1. YES
2. YES | 0.933431 | 0.882428 | 0.823685 | __label__eng_Latn | 0.956498 | 0.75203 |
# Stochastic differential equations
Suggested references:
* C Gardiner, *Stochastic Methods: A Handbook for the Natural and Social Sciences*, Springer.
* D Higham, *An algorithmic introduction to Stochastic Differential Equations*, SIAM Review.
## Kostas Zygalakis
Quick recap: the key feature is the *Ito stochastic integral*
\begin{equation}
\int_{t_0}^t G(t') \, dW(t') = \text{mean-square-}\lim_{n\to +\infty} \left\{ \sum_{i=1}^n G(t_{i-1}) (W_{t_i} - W_{t_{i-1}} \right\}
\end{equation}
where the key point for the Ito integral is that the first term in the sum is evaluated at the left end of the interval ($t_{i-1}$).
Now we use this to write down the SDE
\begin{equation}
dX_t = f(X_t) \, dt + g(X_t) \, dW_t
\end{equation}
with formal solution
\begin{equation}
X_t = X_0 + \int_0^t f(X_s) \, ds + \int_0^t g(X_s) \, dW_s.
\end{equation}
Using the Ito stochastic integral formula we get the Euler-Maruyama method
\begin{equation}
X_{n+1} = X_n + h f(X_n) + \sqrt{h} \xi_n \, g(X_n)
\end{equation}
by applying the integral over the region $[t_n, t_{n+1} = t_n + h]$. Here $h$ is the width of the interval and $\xi_n$ is the normal random variable $\xi_n \sim N(0, 1)$.
### Order of accuracy
There are two ways to talk about errors here. It depends on the *realization* that you have. That is, for each realization there is a different Brownian path $W$.
If we fix to one realization - a single Brownian path $W$ - then we can vary the size of the step $h$. This gives us the *strong* order of convergence: how the error varies with $h$ for a *fixed* Brownian path.
The other question is what happens when we consider the average of all possible realizations. This is the *weak* order of convergence.
Formally, denote the true solution as $X(T)$ and the numerical solution for a given step length $h$ as $X^h(T)$. The order of convergence is denoted $\gamma$.
#### Strong convergence
\begin{equation}
\mathbb{E} \left| X(T) - X^h(T) \right| \le C h^{\gamma}
\end{equation}
#### Weak convergence
\begin{equation}
\left| \mathbb{E} \left( \phi( X(T) ) \right) - \mathbb{E} \left( \phi( X^h(T) ) \right) \right| \le C h^{\gamma}
\end{equation}
For Euler-Maruyama, the weak order of convergence is 1 (as you would expect from the name). *However*, the strong order of convergence is 1/2. Intuitively this is related to the $\sqrt{h}$ factor in the Brownian path.
##### Catch
If $g'(X) \ne 0$ the strong convergence is 1/2, otherwise it is 1!
## Stochastic chain rule
For our purposes we just need that $dW^2 = dt$ (from our definition of the Brownian path - changing notation for the increment from $h$ to $dt$), which means that (by only keeping leading order terms) $dW^{2+N} = 0$ for all $N > 0$. The higher moments vary too fast to contribute to anything on the timescales that we're interested in, after averaging.
### Normal chain rule
If
\begin{equation}
\frac{dX}{dt} = f(X_t)
\end{equation}
and we want to find the differential equation satisfied by $h(X(t))$ (or $h(X_t)$), then we write
\begin{align}
&&\frac{d}{dt} h(X_t) &= h \left( X(t) + dX(t) \right) - h(X(t)) \\
&&&\simeq h(X(t)) + dX \, h'(X(t)) + \frac{1}{2} (dX)^2 \, h''(X(t)) + \dots - h(X(t)) \\
&&&\simeq f(X) h'(X) dt + \frac{1}{2} (f(X))^2 h''(X) (dt)^2 + \dots \\
\implies && \frac{d h(X)}{dt} &= f(X) h'(X).
\end{align}
### Stochastic chain rule
Now run through the same steps using the equation
\begin{equation}
dX = f(X)\, dt + g(X) \, dW.
\end{equation}
We find
\begin{align}
&& d h &\simeq h'(X(t))\, dX + \frac{1}{2} h''(X(t)) (dX)^2 + \dots, \\
&&&\simeq h'(X) f(X)\, dt + h'(X) g(X) ', dW + \frac{1}{2} \left( f(X) dt^2 + 2 f(x)g(x)\, dt dW + g^2(x) dW^2 \right) \\
\implies && d h &= \left( f(X) h'(X) + \frac{1}{2} h''(X)g^2(X) \right) \, dt + h'(X) g(X) \, dW.
\end{align}
This additional $g^2$ term makes all the difference when deriving numerical methods, where the chain rule is repeatedly used.
## Using this result
Remember that
\begin{equation}
\int_{t_0}^t W_s \, dW_s = \frac{1}{2} W^2_t - \frac{1}{2} W^2_{t_0} - \frac{1}{2} (t - t_0).
\end{equation}
From this we need to identify the stochastic differential equation, and also the function $h$, that will give us this result just from the chain rule.
The SDE is
\begin{equation}
dX_t = dW_t, \quad f(X) = 0, \quad g(X) = 1.
\end{equation}
Writing the chain rule down in the form
\begin{equation}
h(X_t) = h(X_0) + \int_0^t \left( f(X_s) h'(X_s) + \frac{1}{2} h''(X_s) g^2(X_s) \right) \, dt + \int_0^t h'(X_s) g(X_s) \, dW_s.
\end{equation}
Matching the final term (the integral over $dW_s$) we see that we need $h'$ to go like $X$, or
\begin{equation}
h = X^2, \quad dX_t = dW_t, \quad f(X) = 0, \quad g(X) = 1.
\end{equation}
With $X_t = W_t$ we therefore have
\begin{align}
W_t^2 &= W_0^2 + \int_{t_0}^t \frac{1}{2} 2 \, ds + \int_{t_0}^t 2 W_s \, dW_s
&= W_0^2 + (t - t_0) + \int_{t_0}^t 2 W_s \, dW_s
\end{align}
as required.
##### Exercise
Given
\begin{equation}
dX_t = \lambda X_t \, dt + \mu X_t \, dW_t, \quad h(X) = \log(X),
\end{equation}
prove that
\begin{equation}
X(t) = X(0) e^{(\lambda - \tfrac{1}{2} \mu^2) t + \mu W_t}.
\end{equation}
| d3fe4b06e3f98b8c44ea9015d2f119354de06c11 | 7,960 | ipynb | Jupyter Notebook | FEEG6016 Simulation and Modelling/SDE-Lecture-2.ipynb | ngcm/training-public | e5a0d8830df4292315c8879c4b571eef722fdefb | [
"MIT"
] | 7 | 2015-06-23T05:50:49.000Z | 2016-06-22T10:29:53.000Z | FEEG6016 Simulation and Modelling/SDE-Lecture-2.ipynb | Jhongesell/training-public | e5a0d8830df4292315c8879c4b571eef722fdefb | [
"MIT"
] | 1 | 2017-11-28T08:29:55.000Z | 2017-11-28T08:29:55.000Z | FEEG6016 Simulation and Modelling/SDE-Lecture-2.ipynb | Jhongesell/training-public | e5a0d8830df4292315c8879c4b571eef722fdefb | [
"MIT"
] | 24 | 2015-04-18T21:44:48.000Z | 2019-01-09T17:35:58.000Z | 33.029046 | 361 | 0.516709 | true | 1,899 | Qwen/Qwen-72B | 1. YES
2. YES | 0.752013 | 0.857768 | 0.645052 | __label__eng_Latn | 0.971307 | 0.337004 |
# Data Analysis with Jupyter Notebooks.
# Tutorial 5
Benjamin J. Morgan, University of Bath.
# Contents
- [Data analysis and statistics with numpy](#data_analysis)
- [Linear regression](#linear_regression)
# Data analysis and statistics with numpy<a id='data_analysis'></a>
`numpy` contains a lot of powerful functions for performing simple statistical analysis on our data. For example, consider the set of numbers 1 to 50:
>```python
import numpy as np
a = np.arange(1,51)
a
```
```python
```
To find the minimum and maximum values we can use `np.min()` and `np.max()`
>```python
np.min(a)
```
```python
```
>```python
np.max(a)
```
```python
```
To find the **sum** of all these numbers, we can use `np.sum()`
>```python
np.sum(a)
```
```python
```
The **mean** of a set of numbers is defined as
\begin{equation}
\frac{\sum_i^N x_i}{N}
\end{equation}
which we could calculate with
>```python
np.sum(a) / len(a)
# len(a) returns the length of the array `a`
```
```python
```
or with `np.mean()`
>```python
np.mean( a )
```
```python
```
The **standard deviation**, $\sigma$ quantifies how much the numbers in our set deviate from the mean.
\begin{equation}
\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^N(x_i-\mu)^2}
\end{equation}
where $\mu$ is the mean.
Again, we could write this out in code:
>```python
import math
sigma = math.sqrt( np.sum( ( a - np.mean(a))**2 ) / len(a) )
sigma
```
```python
```
Or use the `np.std()` function
>```python
np.std(a)
```
```python
```
### Linear Regression<a id='linear_regression'></a>
Another commonly used data analysis technique is **linear regression**. This is used to calculate the relationship between two data sets, $X$ and $Y$, assuming that this relationship can be described by a straight line
\begin{equation}
y_i = m x_i + c.
\end{equation}
For any real data set, the data points are unlikely to all fall exactly on the same line. Linear regression is the process of calculating the line that “best fits” the given data.
<div class="alert alert-success">
Look at the following snippet, and try to work out what the result will be.<br/>
<span style='font-family:monospace'>np.random.rand(10)</span> creates an array of 10 random numbers between 0 and 1.
</div>
>```python
import matplotlib.pyplot as plt
x = np.arange(1,11)
offset = 2.0
y = x + ( np.random.rand(10) - 0.5 ) * offset
plt.plot( x, y, 'o' )
plt.show()
```
```python
```
You can see this approximately gives the straight line relationship between $y=x$. We can use linear regression to calculate the “best” straight line that describes these data.
There are a number of different ways in Python to calculate a line of best-fit. One of the simplest is to use another module, [`scipy.stats`](https://docs.scipy.org/doc/scipy-0.18.1/reference/stats.html). As you might suspect from the name, `scipy.stats` contains an large set of statistical analysis tools. We want [`linregress()`](https://docs.scipy.org/doc/scipy-0.18.1/reference/generated/scipy.stats.linregress.html#scipy.stats.linregress):
>```python
from scipy.stats import linregress
linregress( x, y )
```
```python
```
You can see the output is complicated, but includes a list of values that includes the slope and the intercept. In fact you can treat the output like a [list](lists), and use indexing to select a specific result.
>```python
linregress( x, y )[0] # use indexing to get the slope
```
```python
```
Another option is to collect all five of the output values at once
>```python
slope, intercept, rvalue, pvalue, stderr = linregress( x, y )
print( "slope =", slope )
print( "intercept =", intercept )
```
```python
```
To plot the best-fit line against the original data, we generate a new data set according to $y=mx+c$, where $m$ and $c$ are set to the `slope` and `intercept`, calculated from `linregress`.
>```python
y_fit = slope * x + intercept
plt.plot( x, y, 'o' )
plt.plot( x, y_fit, '-' )
plt.xlabel( 'x' )
plt.ylabel( 'y' )
plt.title( 'y =', slope, 'x +', intercept )
plt.show()
```
```python
```
| bbfa199b0b620eeb2e054ecb90fba99429cd85fe | 8,059 | ipynb | Jupyter Notebook | Tutorial_5.ipynb | pythoninchemistry/chem_data_analysis_jupyter | 4af545f1a8acdded28d96508bb5adc8929da92cc | [
"CC-BY-4.0"
] | 3 | 2019-05-05T00:21:55.000Z | 2021-09-16T14:15:15.000Z | Tutorial_5.ipynb | pythoninchemistry/chem_data_analysis_jupyter | 4af545f1a8acdded28d96508bb5adc8929da92cc | [
"CC-BY-4.0"
] | null | null | null | Tutorial_5.ipynb | pythoninchemistry/chem_data_analysis_jupyter | 4af545f1a8acdded28d96508bb5adc8929da92cc | [
"CC-BY-4.0"
] | null | null | null | 24.128743 | 454 | 0.528602 | true | 1,133 | Qwen/Qwen-72B | 1. YES
2. YES | 0.923039 | 0.812867 | 0.750308 | __label__eng_Latn | 0.986287 | 0.58155 |
# Joint TV for multi-contrast MR
This demonstration shows how to do a synergistic reconstruction of two MR images with different contrast. Both MR images show the same underlying anatomy but of course with different contrast. In order to make use of this similarity a joint total variation (TV) operator is used as a regularisation in an iterative image reconstruction approach.
This demo is a jupyter notebook, i.e. intended to be run step by step.
You could export it as a Python file and run it one go, but that might
make little sense as the figures are not labelled.
Author: Christoph Kolbitsch, Evangelos Papoutsellis, Edoardo Pasca
First version: 16th of June 2021
CCP PETMR Synergistic Image Reconstruction Framework (SIRF).
Copyright 2021 Rutherford Appleton Laboratory STFC.
Copyright 2021 Physikalisch-Technische Bundesanstalt.
This is software developed for the Collaborative Computational
Project in Positron Emission Tomography and Magnetic Resonance imaging
(http://www.ccppetmr.ac.uk/).
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
# Initial set-up
```python
# Make sure figures appears inline and animations works
%matplotlib notebook
```
```python
# Make sure everything is installed that we need
!pip install brainweb nibabel --user
```
```python
# Initial imports etc
import numpy
from numpy.linalg import norm
import matplotlib.pyplot as plt
import random
import os
import sys
import shutil
import brainweb
from tqdm.auto import tqdm
# Import SIRF functionality
import notebook_setup
import sirf.Gadgetron as mr
from sirf_exercises import exercises_data_path
# Import CIL functionality
from cil.framework import AcquisitionGeometry, BlockDataContainer, BlockGeometry, ImageGeometry
from cil.optimisation.functions import Function, OperatorCompositionFunction, SmoothMixedL21Norm, L1Norm, L2NormSquared, BlockFunction, MixedL21Norm, IndicatorBox, TotalVariation, LeastSquares, ZeroFunction
from cil.optimisation.operators import GradientOperator, BlockOperator, ZeroOperator, CompositionOperator, LinearOperator, FiniteDifferenceOperator
from cil.optimisation.algorithms import PDHG, FISTA, GD
from cil.plugins.ccpi_regularisation.functions import FGP_TV
```
# Utilities
```python
# First define some handy function definitions
# To make subsequent code cleaner, we have a few functions here. You can ignore
# ignore them when you first see this demo.
def plot_2d_image(idx,vol,title,clims=None,cmap="viridis"):
"""Customized version of subplot to plot 2D image"""
plt.subplot(*idx)
plt.imshow(vol,cmap=cmap)
    if clims is not None:
plt.clim(clims)
plt.colorbar()
plt.title(title)
plt.axis("off")
def crop_and_fill(templ_im, vol):
"""Crop volumetric image data and replace image content in template image object"""
# Get size of template image and crop
idim_orig = templ_im.as_array().shape
idim = (1,)*(3-len(idim_orig)) + idim_orig
offset = (numpy.array(vol.shape) - numpy.array(idim)) // 2
vol = vol[offset[0]:offset[0]+idim[0], offset[1]:offset[1]+idim[1], offset[2]:offset[2]+idim[2]]
# Make a copy of the template to ensure we do not overwrite it
templ_im_out = templ_im.copy()
# Fill image content
templ_im_out.fill(numpy.reshape(vol, idim_orig))
return(templ_im_out)
# This function creates a regular (pattern='regular') or random (pattern='random') undersampled k-space data
# with an undersampling factor us_factor and num_ctr_lines fully sampled k-space lines in the k-space centre.
# For more information on this function please see the notebook f_create_undersampled_kspace
def create_undersampled_kspace(acq_orig, us_factor, num_ctr_lines, pattern='regular'):
"""Create a regular (pattern='regular') or random (pattern='random') undersampled k-space data"""
# Get ky indices
ky_index = acq_orig.parameter_info('kspace_encode_step_1')
# K-space centre in the middle of ky_index
ky0_index = len(ky_index)//2
# Fully sampled k-space centre
ky_index_subset = numpy.arange(ky0_index-num_ctr_lines//2, ky0_index+num_ctr_lines//2)
if pattern == 'regular':
ky_index_outside = numpy.arange(start=0, stop=len(ky_index), step=us_factor)
elif pattern == 'random':
ky_index_outside = numpy.asarray(random.sample(list(ky_index), len(ky_index)//us_factor))
else:
        raise ValueError('pattern should be "regular" or "random"')
# Combine fully sampled centre and outer undersampled region
ky_index_subset = numpy.concatenate((ky_index_subset, ky_index_outside), axis=0)
    # Ensure k-space points are not repeated
ky_index_subset = numpy.unique(ky_index_subset)
# Create new k-space data
    acq_new = acq_orig.new_acquisition_data(empty=True)
# Select raw data
for jnd in range(len(ky_index_subset)):
        cacq = acq_orig.acquisition(ky_index_subset[jnd])
acq_new.append_acquisition(cacq)
acq_new.sort()
return(acq_new)
```
### Joint TV reconstruction of two MR images
Assuming we want to reconstruct two MR images $u$ and $v$ and utilise the similarity between both images using a joint TV ($JTV$) operator, we can formulate the reconstruction problem as:
$$
\begin{equation}
(u^{*}, v^{*}) \in \underset{u,v}{\operatorname{argmin}} \frac{1}{2} \| A_{1} u - g\|^{2}_{2} + \frac{1}{2} \| A_{2} v - h\|^{2}_{2} + \alpha\,\mathrm{JTV}_{\eta, \lambda}(u, v)
\end{equation}
$$
* $JTV_{\eta, \lambda}(u, v) = \sum \sqrt{ \lambda|\nabla u|^{2} + (1-\lambda)|\nabla v|^{2} + \eta^{2}}$
* $A_{1}$, $A_{2}$: __MR__ `AcquisitionModel`
* $g$, $h$: __MR__ `AcquisitionData`
### Solving this problem
In order to solve the above minimization problem, we will use an alternating minimisation approach, where one variable is fixed and we solve wrt to the other variable:
$$
\begin{align*}
u^{k+1} & = \underset{u}{\operatorname{argmin}} \frac{1}{2} \| A_{1} u - g\|^{2}_{2} + \alpha_{1}\,\mathrm{JTV}_{\eta, \lambda}(u, v^{k}) \quad \text{subproblem 1}\\
v^{k+1} & = \underset{v}{\operatorname{argmin}} \frac{1}{2} \| A_{2} v - h\|^{2}_{2} + \alpha_{2}\,\mathrm{JTV}_{\eta, 1-\lambda}(u^{k+1}, v) \quad \text{subproblem 2}\\
\end{align*}$$
We are going to use a gradient descent approach to solve each of these subproblems alternatingly.
The *regularisation parameter* `alpha` should be different for each subproblem: we therefore use $\alpha_{1}, \alpha_{2}$ in front of the two JTVs, and weightings $\lambda$, $1-\lambda$ inside the first JTV and $1-\lambda$, $\lambda$ inside the second JTV, with $0<\lambda<1$.
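To make the JTV term a bit more concrete, here is a small plain-`numpy` illustration of how the smoothed joint TV of two 2D images could be evaluated. The function name `smoothed_jtv` and the random test arrays are just for illustration; the actual reconstruction further down works with SIRF/CIL objects and CIL's `GradientOperator`.

```python
import numpy

def smoothed_jtv(u, v, lam=0.5, eta=1e-2):
    """Evaluate sum( sqrt( lam*|grad u|^2 + (1-lam)*|grad v|^2 + eta^2 ) )
    for two 2D arrays, using numpy.gradient (central differences)."""
    dux, duy = numpy.gradient(u)
    dvx, dvy = numpy.gradient(v)
    grad_u_sq = dux**2 + duy**2
    grad_v_sq = dvx**2 + dvy**2
    return numpy.sum(numpy.sqrt(lam*grad_u_sq + (1 - lam)*grad_v_sq + eta**2))

# Quick sanity check on two random 2D arrays
rng = numpy.random.default_rng(0)
u_test = rng.standard_normal((64, 64))
v_test = rng.standard_normal((64, 64))
print(smoothed_jtv(u_test, v_test))
```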
This notebook builds on several other notebooks and hence certain steps will be carried out with minimal documentation. If you want more explanations, then we would like to ask you to refer to the corresponding notebooks which are mentioned in the following list. The steps we are going to carry out are
- (A) Get a T1 and T2 map from brainweb which we are going to use as ground truth $u_{gt}$ and $v_{gt}$ for our reconstruction (further information: `introduction` notebook)
- (B) Create __MR__ `AcquisitionModel` $A_{1}$ and $A_{2}$ and simulate undersampled __MR__ `AcquisitionData` $g_{1}$ and $g_{2}$ (further information: `acquisition_model_mr_pet_ct` notebook)
- (C) Set up the joint TV reconstruction problem
- (D) Solve the joint TV reconstruction problem (further information on gradient descent: `gradient_descent_mr_pet_ct` notebook)
# (A) Get brainweb data
We will download and use data from the brainweb.
```python
fname, url= sorted(brainweb.utils.LINKS.items())[0]
files = brainweb.get_file(fname, url, ".")
data = brainweb.load_file(fname)
brainweb.seed(1337)
```
```python
for f in tqdm([fname], desc="mMR ground truths", unit="subject"):
vol = brainweb.get_mmr_fromfile(f, petNoise=1, t1Noise=0.75, t2Noise=0.75, petSigma=1, t1Sigma=1, t2Sigma=1)
```
```python
T2_arr = vol['T2']
T1_arr = vol['T1']
# Normalise image data
T2_arr /= numpy.max(T2_arr)
T1_arr /= numpy.max(T1_arr)
```
```python
# Display it
plt.figure();
slice_show = T1_arr.shape[0]//2
plot_2d_image([1,2,1], T1_arr[slice_show, :, :], 'T1', cmap="Greys_r")
plot_2d_image([1,2,2], T2_arr[slice_show, :, :], 'T2', cmap="Greys_r")
```
Ok, we have got two images with T1 and T2 contrast BUT the brain looks a bit small. Spoiler alert: we are going to reconstruct MR images with a FOV of 256 x 256 voxels. As the above image covers 344 x 344 voxels, the brain would only cover a small part of our MR FOV. In order to ensure the brain fits well into our MR FOV, we are going to scale the images.
In order to do this we are going to use the `rescale` function from the skimage package to rescale the images by a factor of 2 and then crop them. To speed things up, we are going to select a single slice already at this point, because our MR scan is also going to be 2D.
```python
from skimage.transform import rescale
# Select central slice
central_slice = T1_arr.shape[0]//2
T1_arr = T1_arr[central_slice, :, :]
T2_arr = T2_arr[central_slice, :, :]
# Rescale by a factor 2.0
T1_arr = rescale(T1_arr, 2.0)
T2_arr = rescale(T2_arr, 2.0)
# Select a central ROI with 256 x 256
# We could also skip this because it is automatically done by crop_and_fill()
# but we would like to check if we did the right thing
idim = [256, 256]
offset = (numpy.array(T1_arr.shape) - numpy.array(idim)) // 2
T1_arr = T1_arr[offset[0]:offset[0]+idim[0], offset[1]:offset[1]+idim[1]]
T2_arr = T2_arr[offset[0]:offset[0]+idim[0], offset[1]:offset[1]+idim[1]]
# Now we make sure our image is of shape (1, 256, 256) again because in __SIRF__ even 2D images
# are expected to have 3 dimensions.
T1_arr = T1_arr[numpy.newaxis,...]
T2_arr = T2_arr[numpy.newaxis,...]
```
```python
# Display it
plt.figure();
slice_show = T1_arr.shape[0]//2
plot_2d_image([1,2,1], T1_arr[slice_show, :, :], 'T1', cmap="Greys_r")
plot_2d_image([1,2,2], T2_arr[slice_show, :, :], 'T2', cmap="Greys_r")
```
Now, that looks better. Now we have got images we can use for our MR simulation.
# (B) Simulate undersampled MR AcquisitionData
```python
# Create MR AcquisitionData
mr_acq = mr.AcquisitionData(exercises_data_path('MR', 'PTB_ACRPhantom_GRAPPA')
+ '/ptb_resolutionphantom_fully_ismrmrd.h5' )
```
```python
# Calculate CSM
preprocessed_data = mr.preprocess_acquisition_data(mr_acq)
csm = mr.CoilSensitivityData()
csm.smoothness = 200
csm.calculate(preprocessed_data)
```
```python
# Calculate image template
recon = mr.FullySampledReconstructor()
recon.set_input(preprocessed_data)
recon.process()
im_mr = recon.get_output()
```
```python
# Display the coil maps
plt.figure();
csm_arr = numpy.abs(csm.as_array())
plot_2d_image([1,2,1], csm_arr[0, 0, :, :], 'Coil 0', cmap="Greys_r")
plot_2d_image([1,2,2], csm_arr[2, 0, :, :], 'Coil 2', cmap="Greys_r")
```
We want to use these coilmaps to simulate our MR raw data. Nevertheless, they are obtained from a phantom scan which unfortunately has got some signal voids inside. If we used these coil maps directly, then these signal voids would cause artefacts. We are therefore going to interpolate the coil maps first.
We are going to calculate a mask from the `ImageData` `im_mr`:
```python
im_mr_arr = numpy.squeeze(numpy.abs(im_mr.as_array()))
im_mr_arr /= numpy.max(im_mr_arr)
mask = numpy.zeros_like(im_mr_arr)
mask[im_mr_arr > 0.2] = 1
plt.figure();
plot_2d_image([1,1,1], mask, 'Mask', cmap="Greys_r")
```
Now we are going to interpolate between the values defined by the mask:
```python
from scipy.interpolate import griddata
# Target grid for a square image
xi = yi = numpy.arange(0, im_mr_arr.shape[0])
xi, yi = numpy.meshgrid(xi, yi)
# Define grid points in mask
idx = numpy.where(mask == 1)
x = xi[idx[0], idx[1]]
y = yi[idx[0], idx[1]]
# Go through each coil and interpolate linearly
csm_arr = csm.as_array()
for cnd in range(csm_arr.shape[0]):
    cdat = csm_arr[cnd, 0, idx[0], idx[1]]
    cdat_intp = griddata((x,y), cdat, (xi,yi), method='linear')
    csm_arr[cnd, 0, :, :] = cdat_intp
# No extrapolation is done by griddata, so we set the resulting NaN values to 0
csm_arr[numpy.isnan(csm_arr)] = 0
```
```python
# Display the coil maps
plt.figure();
plot_2d_image([1,2,1], numpy.abs(csm_arr[0, 0, :, :]), 'Coil 0', cmap="Greys_r")
plot_2d_image([1,2,2], numpy.abs(csm_arr[2, 0, :, :]), 'Coil 2', cmap="Greys_r")
```
This is not the world's best interpolation but it will do for the moment. Let's replace the data in the coil maps with the new interpolation.
```python
csm.fill(csm_arr);
```
Next we are going to create the two __MR__ `AcquisitionModel` $A_{1}$ and $A_{2}$
```python
# Create undersampled acquisition data
us_factor = 2
num_ctr_lines = 30
pattern = 'random'
acq_us = create_undersampled_kspace(preprocessed_data, us_factor, num_ctr_lines, pattern)
# Create two MR acquisition models
A1 = mr.AcquisitionModel(acq_us, im_mr)
A1.set_coil_sensitivity_maps(csm)
A2 = mr.AcquisitionModel(acq_us, im_mr)
A2.set_coil_sensitivity_maps(csm)
```
and simulate undersampled __MR__ `AcquisitionData` $g_{1}$ and $g_{2}$
```python
# MR
u_gt = crop_and_fill(im_mr, T1_arr)
g1 = A1.forward(u_gt)
v_gt = crop_and_fill(im_mr, T2_arr)
g2 = A2.forward(v_gt)
```
Lastly we are going to add some noise
```python
g1_arr = g1.as_array()
g1_max = numpy.max(numpy.abs(g1_arr))
g1_arr += (numpy.random.random(g1_arr.shape) - 0.5 + 1j*(numpy.random.random(g1_arr.shape) - 0.5)) * g1_max * 0.01
g1.fill(g1_arr)
g2_arr = g2.as_array()
g2_max = numpy.max(numpy.abs(g2_arr))
g2_arr += (numpy.random.random(g2_arr.shape) - 0.5 + 1j*(numpy.random.random(g2_arr.shape) - 0.5)) * g2_max * 0.01
g2.fill(g2_arr)
```
Just to check, we are going to apply the backward/adjoint operation to do a simple image reconstruction.
```python
# Simple reconstruction
u_simple = A1.backward(g1)
v_simple = A2.backward(g2)
```
```python
# Display it
plt.figure();
plot_2d_image([1,2,1], numpy.abs(u_simple.as_array())[0, :, :], '$u_{simple}$', cmap="Greys_r")
plot_2d_image([1,2,2], numpy.abs(v_simple.as_array())[0, :, :], '$v_{simple}$', cmap="Greys_r")
```
These images look quite poor compared to the ground truth input images, because they are reconstructed from an undersampled k-space. In addition, you can see a strange "structure" going through the centre of the brain. This has something to do with the coil maps. As mentioned above, our coil maps have these two "holes" in the centre and this creates these artefacts. Nevertheless, this is not going to be a problem for our reconstruction as we will see later on.
# (C) Set up the joint TV reconstruction problem
So far we have used mainly __SIRF__ functionality, now we are going to use __CIL__ in order to set up the reconstruction problem and then solve it. In order to be able to reconstruct both $u$ and $v$ at the same time, we will make use of `BlockDataContainer`. In the following we will define an operator which allows us to project a $(u,v)$ `BlockDataContainer` object into either $u$ or $v$. In literature, this operator is called **[Projection Map (or Canonical Projection)](https://proofwiki.org/wiki/Definition:Projection_(Mapping_Theory))** and is defined as:
$$ \pi_{i}: X_{1}\times\cdots\times X_{n}\rightarrow X_{i}$$
with
$$\pi_{i}(x_{0},\dots,x_{i},\dots,x_{n}) = x_{i},$$
mapping an element $x$ from a Cartesian Product $X =\prod_{k=1}^{n}X_{k}$ to the corresponding element $x_{i}$ determined by the index $i$.
```python
class ProjectionMap(LinearOperator):
    def __init__(self, domain_geometry, index, range_geometry=None):
        self.index = index
        if range_geometry is None:
            range_geometry = domain_geometry.geometries[self.index]
        super(ProjectionMap, self).__init__(domain_geometry=domain_geometry,
                                            range_geometry=range_geometry)
    def direct(self, x, out=None):
        if out is None:
            return x[self.index]
        else:
            out.fill(x[self.index])
    def adjoint(self, x, out=None):
        if out is None:
            tmp = self.domain_geometry().allocate()
            tmp[self.index].fill(x)
            return tmp
        else:
            out[self.index].fill(x)
```
In the following we define the `SmoothJointTV` class. Our plan is to use the gradient descent (`GD`) algorithm to solve the above problems. The class implements the `__call__` method required to monitor the objective value and the `gradient` method that evaluates the gradient of the `JTV` term.
For the two subproblems, the first variations with respect to $u$ and $v$ variables are:
$$
\begin{equation}
\begin{aligned}
& A_{1}^{T}*(A_{1}u - g_{1}) - \alpha_{1} \mathrm{div}\bigg( \frac{\nabla u}{|\nabla(u, v)|_{2,\eta,\lambda}}\bigg)\\
& A_{2}^{T}*(A_{2}v - g_{2}) - \alpha_{2} \mathrm{div}\bigg( \frac{\nabla v}{|\nabla(u, v)|_{2,\eta,1-\lambda}}\bigg)
\end{aligned}
\end{equation}
$$
where $$|\nabla(u, v)|_{2,\eta,\lambda} = \sqrt{ \lambda|\nabla u|^{2} + (1-\lambda)|\nabla v|^{2} + \eta^{2}}.$$
```python
class SmoothJointTV(Function):
    def __init__(self, eta, axis, lambda_par):
        r'''
        :param eta: smoothing parameter making SmoothJointTV differentiable
        '''
        super(SmoothJointTV, self).__init__(L=8)
        # smoothing parameter
        self.eta = eta
        # GradientOperator
        FDy = FiniteDifferenceOperator(u_simple, direction=1)
        FDx = FiniteDifferenceOperator(u_simple, direction=2)
        self.grad = BlockOperator(FDy, FDx)
        # Which variable to differentiate
        self.axis = axis
        if self.eta==0:
            raise ValueError('Need positive value for eta')
        self.lambda_par=lambda_par
    def __call__(self, x):
        r""" x is BlockDataContainer that contains (u,v). Actually x is a BlockDataContainer that contains 2 BDC.
        """
        if not isinstance(x, BlockDataContainer):
            raise ValueError('__call__ expected BlockDataContainer, got {}'.format(type(x)))
        tmp = numpy.abs((self.lambda_par*self.grad.direct(x[0]).pnorm(2).power(2) + (1-self.lambda_par)*self.grad.direct(x[1]).pnorm(2).power(2)+\
                         self.eta**2).sqrt().sum())
        return tmp
    def gradient(self, x, out=None):
        denom = (self.lambda_par*self.grad.direct(x[0]).pnorm(2).power(2) + (1-self.lambda_par)*self.grad.direct(x[1]).pnorm(2).power(2)+\
                 self.eta**2).sqrt()
        if self.axis==0:
            num = self.lambda_par*self.grad.direct(x[0])
        else:
            num = (1-self.lambda_par)*self.grad.direct(x[1])
        if out is None:
            tmp = self.grad.range.allocate()
            tmp[self.axis].fill(self.grad.adjoint(num.divide(denom)))
            return tmp
        else:
            self.grad.adjoint(num.divide(denom), out=out[self.axis])
```
Now we are going to put everything together and define our two objective functions which solve the two subproblems which we defined at the beginning
```python
alpha1 = 0.05
alpha2 = 0.05
lambda_par = 0.5
eta = 1e-12
# BlockGeometry for the two modalities
bg = BlockGeometry(u_simple, v_simple)
# Projection map, depending on the unknown variable
L1 = ProjectionMap(bg, index=0)
L2 = ProjectionMap(bg, index=1)
# Fidelity terms based on the acquisition data
f1 = 0.5*L2NormSquared(b=g1)
f2 = 0.5*L2NormSquared(b=g2)
# JTV for each of the subproblems
JTV1 = alpha1*SmoothJointTV(eta=eta, axis=0, lambda_par = lambda_par )
JTV2 = alpha2*SmoothJointTV(eta=eta, axis=1, lambda_par = 1-lambda_par)
# Compose the two objective functions
objective1 = OperatorCompositionFunction(f1, CompositionOperator(A1, L1)) + JTV1
objective2 = OperatorCompositionFunction(f2, CompositionOperator(A2, L2)) + JTV2
```
# (D) Solve the joint TV reconstruction problem
```python
# We start with zero-filled images
x0 = bg.allocate(0.0)
# We use a fixed step-size for the gradient descent approach
step_size = 0.1
# We are also going to log the value of the objective functions
obj1_val_it = []
obj2_val_it = []
for i in range(10):
    gd1 = GD(x0, objective1, step_size=step_size, \
             max_iteration = 4, update_objective_interval = 1)
    gd1.run(verbose=1)
    # We skip the first one because it gets repeated
    obj1_val_it.extend(gd1.objective[1:])
    # Here we are going to do a little "trick" in order to better see, when each subproblem is optimised, we
    # are going to append NaNs to the objective function which is currently not optimised. The NaNs will not
    # show up in the final plot and hence we can nicely see each subproblem.
    obj2_val_it.extend(numpy.ones_like(gd1.objective[1:])*numpy.nan)
    gd2 = GD(gd1.solution, objective2, step_size=step_size, \
             max_iteration = 4, update_objective_interval = 1)
    gd2.run(verbose=1)
    obj2_val_it.extend(gd2.objective[1:])
    obj1_val_it.extend(numpy.ones_like(gd2.objective[1:])*numpy.nan)
    x0.fill(gd2.solution)
    print('* * * * * * Outer Iteration ', i, ' * * * * * *\n')
```
Finally we can look at the images $u_{jtv}$ and $v_{jtv}$ and compare them to the simple reconstruction $u_{simple}$ and $v_{simple}$ and the original ground truth images.
```python
u_jtv = numpy.squeeze(numpy.abs(x0[0].as_array()))
v_jtv = numpy.squeeze(numpy.abs(x0[1].as_array()))
plt.figure()
plot_2d_image([2,3,1], numpy.squeeze(numpy.abs(u_simple.as_array()[0, :, :])), '$u_{simple}$', cmap="Greys_r")
plot_2d_image([2,3,2], u_jtv, '$u_{JTV}$', cmap="Greys_r")
plot_2d_image([2,3,3], numpy.squeeze(numpy.abs(u_gt.as_array()[0, :, :])), '$u_{gt}$', cmap="Greys_r")
plot_2d_image([2,3,4], numpy.squeeze(numpy.abs(v_simple.as_array()[0, :, :])), '$v_{simple}$', cmap="Greys_r")
plot_2d_image([2,3,5], v_jtv, '$v_{JTV}$', cmap="Greys_r")
plot_2d_image([2,3,6], numpy.squeeze(numpy.abs(v_gt.as_array()[0, :, :])), '$v_{gt}$', cmap="Greys_r")
```
And let's look at the objective functions
```python
plt.figure()
plt.plot(obj1_val_it, 'o-', label='subproblem 1')
plt.plot(obj2_val_it, '+-', label='subproblem 2')
plt.xlabel('Number of iterations')
plt.ylabel('Value of objective function')
plt.title('Objective functions')
plt.legend()
# Logarithmic y-axis
plt.yscale('log')
```
# Next steps
The above is a good demonstration for a synergistic image reconstruction of two different images. The following gives a few suggestions of what to do next and also how to extend this notebook to other applications.
## Number of iterations
In our problem we have several regularisation parameters such as $\alpha_{1}$, $\alpha_{2}$ and $\lambda$. In addition, the number of inner iterations for each subproblem (currently set to 4) and the number of outer iterations (currently set to 10) also determine the final solution. Of course, for an infinite number of total iterations it shouldn't matter, but usually we don't have that much time.
__TODO__: Change the number of iterations and see what happens to the objective functions. For a given number of total iterations, do you think it is better to have a high number of inner or high number of outer iterations? Why? Does this also depend on the undersampling factor?
## Spatial misalignment
In the above example we simulated our data such that there is a perfect spatial match between $u$ and $v$. For real world applications this usually cannot be assumed.
__TODO__: Add spatial misalignment between $u$ and $v$. This can be achieved e.g. by calling `numpy.roll` on `T2_arr` before calling `v_gt = crop_and_fill(im_mr, T2_arr)`. What is the effect on the reconstructed images? For a more "advanced" misalignment, have a look at notebook `BrainWeb`.
__TODO__: One way to minimize spatial misalignment is to use image registration to ensure both $u$ and $v$ are well aligned. In the notebook `sirf_registration` you find information about how to register two images and also how to resample one image based on the spatial transformation estimated from the registration. Try to use this to correct for the misalignment you introduced above. For a real world example, at which point in the code would you have to carry out the registration+resampling? (some more information can also be found at the end of notebook `de_Pierro_MAPEM`)
## Pathologies
The images $u$ and $v$ show the same anatomy, just with a different contrast. Clinically more useful are of course images which show complementary image information.
__TODO__: Add a pathology to either $u$ or $v$ and see how this affects the reconstruction. For something more advanced, have a look at the notebook `BrainWeb`.
## Single anatomical prior
So far we have alternated between two reconstruction problems. Another option is to do a single regularised reconstruction and simply use a previously reconstructed image for regularisation.
__TODO__: Adapt the above code such that $u$ is reconstructed first without regularisation and is then used for a regularised reconstruction of $v$ without any further updates of $u$.
## Complementary k-space trajectories
We used the same k-space trajectory for $u$ and $v$. This is of course not ideal for such an optimisation, because the same k-space trajectory also means the same pattern of undersampling artefacts. Of course the artefacts in each image will be different because of the different image content but it still would be better if $u$ and $v$ were acquired with different k-space trajectories.
__TODO__: Create two different k-space trajectories and compare the results to a reconstruction using the same k-space trajectories.
__TODO__: Try different undersampling factors and compare results for _regular_ and _random_ undersampling patterns.
## Other regularisation options
In this example we used a TV-based regularisation, but of course other regularisers could also be used, such as directional TV.
__TODO__: Have a look at the __CIL__ notebook `02_Dynamic_CT` and adapt the `SmoothJointTV` class above to use directional TV.
| 203d4fa76fd087d4b08f6e43a6c40035268c2af9 | 37,671 | ipynb | Jupyter Notebook | notebooks/Synergistic/cil_joint_tv_mr.ipynb | johannesmayer/SIRF-Exercises | 772f132ca3639f364189258d558a8f06a8666fb1 | [
"Apache-2.0"
] | 1 | 2019-11-25T12:16:44.000Z | 2019-11-25T12:16:44.000Z | notebooks/Synergistic/cil_joint_tv_mr.ipynb | johannesmayer/SIRF-Exercises | 772f132ca3639f364189258d558a8f06a8666fb1 | [
"Apache-2.0"
] | null | null | null | notebooks/Synergistic/cil_joint_tv_mr.ipynb | johannesmayer/SIRF-Exercises | 772f132ca3639f364189258d558a8f06a8666fb1 | [
"Apache-2.0"
] | null | null | null | 37.746493 | 587 | 0.586924 | true | 7,312 | Qwen/Qwen-72B | 1. YES
2. YES | 0.841826 | 0.824462 | 0.694053 | __label__eng_Latn | 0.96053 | 0.450849 |
<b>The Relative Frequency</b> of any value of a random variable is the number of times it occurs divided by the total number of observations.
The Relative Frequency is calculated as:<br>
\begin{equation}
Relative Frequency = \frac{Frequency}{Total\ number\ of\ observations}
\end{equation}<br>
E.g. we have samples like { 5,7,11,19,23,5,18,7,18,23 }. If we calculate the relative frequency of each unique value, it will be as follows:<br>
$\text{R.F. of } 5 = \frac{2}{10}$<br>
$\text{R.F. of } 7 = \frac{2}{10}$<br>
$\text{R.F. of } 11 = \frac{1}{10}$<br>
$\text{R.F. of } 18 = \frac{2}{10}$<br>
$\text{R.F. of } 19 = \frac{1}{10}$<br>
$\text{R.F. of } 23 = \frac{2}{10}$<br>
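As a quick check of these numbers in Python (a minimal sketch; the variable names are our own):
```python
from collections import Counter

samples = [5, 7, 11, 19, 23, 5, 18, 7, 18, 23]
counts = Counter(samples)
relative_frequency = {value: count / len(samples) for value, count in counts.items()}
print(relative_frequency)   # {5: 0.2, 7: 0.2, 11: 0.1, 19: 0.1, 23: 0.2, 18: 0.2}
```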
<b>Probability</b>, on the other hand, is the limiting case of the <b>Relative Frequency</b> as the sample approaches (in the limit) the population.
We are going to demonstrate that probability is the limiting case of relative frequency as the sample slowly approaches the population.
Let's first take the example of a binomial random variable, where we take a sample of observations (responses) arising from repeated conduction of a binomial experiment. This means that we perform some experiment 'N' times, where each conduction of the experiment (trial) results in one of two possible responses. Let's say that our binomial trial is the toss of a coin, where each toss gives us one of the two responses: either Heads or Tails.
As we are in statistical domain, we will evaluate the relative frequencies of heads as well as tails as:
\begin{equation}
r = \frac{h}{N}
\end{equation}
Where, r = Relative Frequency.
h = Number of Binomial Experiments in which heads was the outcome.
N = Total number of times binomial experiment has been conducted.
```python
# Importing the Numpy library alias np and matplotlib library alias plt
import numpy as np
import matplotlib.pyplot as plt
```
We are going to conduct a binomial experiment (having two results, either Head or Tail) N=10 times, assuming that the coin used is unbiased, i.e. p=q=0.5.
```python
N = 10
psuccess = 0.5
qfailure = 1-psuccess
ExperimentOutcomes = np.random.binomial(N,psuccess)
```
```python
ExperimentOutcomes
```
4
We tossed an unbiased coin 10 times and got only 4 heads. So, the relative frequency is given by:
\begin{equation}
r = \frac{h}{N} = \frac{4}{10} = 0.4
\end{equation}
Let's toss an unbiased coin 100 times and see what happens.
```python
N = 100
ExperimentOutcomes = np.random.binomial(N,psuccess)
```
```python
ExperimentOutcomes
```
54
We tossed a coin 100 times and got 54 heads. So, the relative frequency is given by:
\begin{equation}
r = \frac{h}{N} = \frac{54}{100} = 0.54
\end{equation}
Let's toss an unbiased coin 10000 times and see what happens.
```python
N = 10000
ExperimentOutcomes = np.random.binomial(N,psuccess)
```
```python
ExperimentOutcomes
```
4972
We tossed a coin 10000 times and got 4972 heads. So, the relative frequency is given by:
\begin{equation}
r = \frac{h}{N} = \frac{4972}{10000} = 0.4972
\end{equation}
Let's toss a coin 100000 times and see what happens.
```python
N = 100000
ExperimentOutcomes = np.random.binomial(N,psuccess)
```
```python
ExperimentOutcomes
```
49991
We tossed a coin 100000 times and got 49991 heads. So the relative frequency is given by:
\begin{equation}
r = \frac{h}{N} = \frac{49991}{100000} = 0.49991
\end{equation}
The theoretical probability of getting heads in a single toss is 0.5, and we can observe that as we increase the number of tosses, the relative frequency approaches this theoretical value.
This phenomenon can also be shown by plotting the relative frequencies of heads for increasing sample sizes, which slowly approach the population.
```python
#we are trying to get the values from the lower limit=10, higher limit=500
#in the interval of 10.
Ns = np.arange(10,500,10)
```
```python
Ns
```
array([ 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 130,
140, 150, 160, 170, 180, 190, 200, 210, 220, 230, 240, 250, 260,
270, 280, 290, 300, 310, 320, 330, 340, 350, 360, 370, 380, 390,
400, 410, 420, 430, 440, 450, 460, 470, 480, 490])
We have generated different sample sizes in steps of 10 tosses.
```python
ExperimentOutcomes = np.random.binomial(Ns,psuccess)
```
We have tossed an unbiased coin for different values of the total number of tosses (N) and recorded the number of heads (h) for each value of N.
```python
RelativeFrequencies = ExperimentOutcomes/Ns
```
We have now calculated the relative frequencies by dividing the number of heads (h) by the corresponding number of tosses N.
```python
plt.stem(Ns,RelativeFrequencies)
```
As we can observe from the above plot, as we toss the coin more times, the relative frequency of heads approaches psuccess = 0.5.
Now, let's change the psuccess to 0.65 and plot the similar graph once again.
```python
psuccess = 0.65
ExperimentOutcomes = np.random.binomial(Ns,psuccess)
```
```python
RelativeFrequencies = ExperimentOutcomes/Ns
```
```python
plt.stem(Ns,RelativeFrequencies)
```
As we can observe from the above plot, as we toss the coin more times, the relative frequency of heads approaches psuccess = 0.65.
Now, let's change the psuccess to 0.35 and plot the similar graph once again.
```python
psuccess = 0.35
ExperimentOutcomes = np.random.binomial(Ns,psuccess)
```
```python
RelativeFrequencies = ExperimentOutcomes/Ns
```
```python
plt.stem(Ns,RelativeFrequencies)
```
As we can observe from the above plot, as we toss the coin more times, the relative frequency of heads approaches psuccess = 0.35.
```python
```
| dc72aaae23f93222c2e9a24d0a9c582d48e3c5f5 | 34,844 | ipynb | Jupyter Notebook | 1. Relative Frequency and Probability and their relation.ipynb | rarpit1994/Probability-and-Statistics | 1ae8b7428ec9a3072fce3089d7951b9f7bd06c26 | [
"Apache-2.0"
] | 1 | 2019-03-24T10:29:56.000Z | 2019-03-24T10:29:56.000Z | 1. Relative Frequency and Probability and their relation.ipynb | rarpit1994/Probability-and-Statistics | 1ae8b7428ec9a3072fce3089d7951b9f7bd06c26 | [
"Apache-2.0"
] | null | null | null | 1. Relative Frequency and Probability and their relation.ipynb | rarpit1994/Probability-and-Statistics | 1ae8b7428ec9a3072fce3089d7951b9f7bd06c26 | [
"Apache-2.0"
] | null | null | null | 63.009042 | 7,696 | 0.809379 | true | 1,675 | Qwen/Qwen-72B | 1. YES
2. YES | 0.946597 | 0.92944 | 0.879805 | __label__eng_Latn | 0.991526 | 0.882416 |
# Adaptive Market Planning
## Narrative:
There is a broad class of problems that involves allocating some resource to meet an uncertain (sometimes unobservable) demand. Examples:
**Stocking a perishable inventory (e.g. fresh fish) to meet a demand where leftover inventory cannot be held for the future.** \
**We have to allocate annual budgets for activities such as marketing. Left-over funds are returned to the company**
"newsvendor problem", as canonical problem in optimization under uncertainity.
### Newsvendor problem
$$
\max_{x} \; \mathbf{E} \, F(x, W) = \mathbf{E} \{p\min\{x,W\}-cx\}
$$
* where $x$ is our decision variable that determines the amount of resources acquired to meet the task
* where $W$ is the uncertain demand for the resource
* We assume that we "purchase" our resource at a unit cost of $c$
* we sell the smaller of $x$ and $W$ at a price $p$
* $p$ is assumed greater than $c$
If $W$ was deterministic (and if $p$>$c$), then the solution is easily verified to be $x$=$W$
Now imagine that $W$ is a random variable with:
* probability distribution: $f^W(w)$
* cumulative distribution: $F^W(w) = Prob[W<w]$
* Then we can compute:
* $F(x) = \mathbf{E} \; F(x, W)$
Then, the optimal solution $x^{*}$ would satisfy:
$$
\left.\frac{dF(x)}{dx}\right|_{x=x^*} = 0
$$
Now consider the stochastic gradient, where we take the derivative of $F(x, W)$ assuming we know
$W$, which is given by:
$$
\begin{equation}
\frac{dF(x,W)}{dx} =\left\{
\begin{array}{@{}ll@{}}
p-c, & x\leq W\ \\
-c, & x > W
\end{array}\right.
\end{equation}
$$
* Taking Expectation:
$$
\begin{align}
\mathbf{E}\frac{dF(x,W)}{dx} &= (p-c)\,\mathrm{Prob}[x \leq W] - c\,\mathrm{Prob}[x>W] \\
&= (p-c)(1-F^W(x))-cF^W(x) \\
&= (p-c) - pF^W(x) = 0 \quad \text{for } x = x^*
\end{align}
$$
We can now solve for $F^W(x^*)$ giving:
$$
F^W(x^*) = \frac{p-c}{p}
$$
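For example, with the price $p=26$ and cost $c=20$ used later in this notebook, the critical ratio is $(p-c)/p = 6/26 \approx 0.23$. If we additionally assume exponentially distributed demand with mean $100$ (the assumption behind the analytical solution computed in the code below), then $F^W(x) = 1 - e^{-x/100}$ and
$$
x^* = -100\,\ln\left(\frac{c}{p}\right) \approx 26.2 .
$$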
**This Chapter**: we are going to address the single-dimensional newsvendor problem, where \
we assume that the demand distribution of $W$ is unknown, but demands can be observed.
## Basic Model
We are going to solve the problem using a classical method called a stochastic gradient algorithm. The sequence is as follows:
we pick $x^n$, then observe $W^{n+1}$, and then compute $\nabla F (x^n, W^{n+1})$, giving the update
$$
x^{n+1} = x^{n} + \alpha_{n} \nabla F(x^n, W^{n+1})
$$
where $\alpha_n$ is known as a stepsize. It has been shown that we obtain asymptotic optimality:
$$
\lim_{n \to +\infty} x^n = x^*,
$$
if the stepsize $\alpha_n$ satisfies:
$$
\alpha_n \ > \ 0, \\
\sum_{n=1}^{\infty}\alpha_n = \infty, \\
\sum_{n=1}^{\infty}(\alpha_n)^2 < \infty,
$$
The second condition ensures that the stepsizes do not shrink so quickly that we stall out on the way to the optimum, while the last condition has the effect of ensuring that the variance of our estimate of $x^*$ shrinks to zero.
```python
```
### State variable:
For our stochastic gradient algorithm, our state variable is given by:
$$
S^n = (x^n)
$$
### Decision Variable:
### harmonic stepsize rule:
***Step size Policy***
As with all of our sequential decision problems, the decision (that is, the stepsize) is determined by what is typically referred to as a stepsize rule, sometimes also called a stepsize policy, which we denote by $\alpha^{\pi}(S^n)$:
$$
\alpha^{harmonic} (S^n|\theta^{step}) = \frac{\theta^{step}}{\theta^{step} + n -1}
$$
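A minimal Python sketch of this rule (the function name is our own; `theta_step` matches the parameter read from the spreadsheet below):
```python
def harmonic_stepsize(n, theta_step=50):
    # alpha_n = theta / (theta + n - 1) for n = 1, 2, ...
    return theta_step / (theta_step + n - 1)
```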
### Exogenous information
The exogenous information is the random demand $W^{n+1}$ for the resource (product, time or money) that we are trying to meet with our supply of product $x^n$.<br> We may assume that we observe $W^{n+1}$ directly.
### Transition Function
$$
x^{n+1} = x^n + \alpha_n \nabla_xF(x^n, W^{n+1})
$$
### Objective Function
***Net benefit at each iteration***
$$
F(x^n, W^{n+1}) = p \min \{x^n, W^{n+1} \} - c x^n
$$
***Maximize the total reward over some horizon***
$$
\max_{\pi} \ \mathbf{E} \left\{ \sum_{n=0}^{N-1}F(X^{\pi}(S^n), W^{n+1}) \;\middle|\; S^0 \right\}
$$
$$
S^{n+1} = S^M(S^n, X^{\pi}(S^n), W^{n+1})
$$
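Before turning to the `AdaptiveMarketPlanningModel`/`AdaptiveMarketPlanningPolicy` classes used below, here is a minimal self-contained sketch of the whole procedure. The exponential demand with mean 100 and the values $p=26$, $c=20$, $\theta^{step}=50$ are assumptions chosen to match the parameters used later:
```python
import numpy as np

rng = np.random.default_rng(0)
p, c, theta = 26.0, 20.0, 50.0
x = 0.0                                # initial order quantity x^0
for n in range(1, 2001):
    W = rng.exponential(100.0)         # exogenous information W^{n+1}
    grad = (p - c) if x <= W else -c   # stochastic gradient of F(x, W)
    alpha = theta / (theta + n - 1)    # harmonic stepsize
    x += alpha * grad                  # transition function
print(x, -100 * np.log(c / p))         # learned vs. analytical order quantity
```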
## Coding Part
```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
### Changing the path to the functions folder
```python
import os
os.chdir("/home/peyman/Documents/PhD_UiS/seqdec_powell_repo/Chap3_Adap_Mar_Plan/functions")
```
```python
from AdaptiveMarketPlanningModel import AdaptiveMarketPlanningModel
from AdaptiveMarketPlanningPolicy import AdaptiveMarketPlanningPolicy
```
```python
os.chdir("/home/peyman/Documents/PhD_UiS/seqdec_powell_repo/Chap3_Adap_Mar_Plan/data")
```
```python
raw_data= pd.read_excel("Base_parameters.xlsx", sheet_name="parameters", usecols=["Parameter", "Value"])
```
```python
raw_data
```
|   | Parameter   | Value    |
|---|-------------|----------|
| 0 | cost        | 20       |
| 1 | trial_size  | 100      |
| 2 | price       | 26       |
| 3 | theta_step  | 50       |
| 4 | T           | 24       |
| 5 | reward_type | Terminal |
```python
cost=raw_data["Value"][0]
trial_size = raw_data["Value"][1]
price = raw_data["Value"][2]
theta_step = raw_data["Value"][3]
T = raw_data["Value"][4]
reward_type = raw_data["Value"][5]
```
```python
if __name__ == "__main__":
# this is an example of creating a model and running a simulation for a certain trial size
# define state variables
state_names = ['order_quantity', 'counter']
init_state = {'order_quantity': 0, 'counter': 0}
decision_names = ['step_size']
# read in variables from excel file
#file = 'Base parameters.xlsx'
#raw_data = pd.ExcelFile(file)
raw_data= pd.read_excel("Base_parameters.xlsx", sheet_name="parameters", usecols=["Parameter", "Value"])
cost=raw_data["Value"][0]
trial_size = raw_data["Value"][1]
price = raw_data["Value"][2]
theta_step = raw_data["Value"][3]
T = raw_data["Value"][4]
reward_type = raw_data["Value"][5]
# initialize model and store ordered quantities in an array
M = AdaptiveMarketPlanningModel(state_names, decision_names, init_state, T,reward_type, price, cost)
P = AdaptiveMarketPlanningPolicy(M, theta_step)
rewards_per_iteration = []
learning_list_per_iteration = []
for ite in list(range(trial_size)):
print("Starting iteration ", ite)
reward,learning_list = P.run_policy()
M.learning_list=[]
#print(learning_list)
rewards_per_iteration.append(reward)
learning_list_per_iteration.append(learning_list)
print("Ending iteration ", ite," Reward ",reward)
nElem = np.arange(1,trial_size+1)
rewards_per_iteration = np.array(rewards_per_iteration)
rewards_per_iteration_sum = rewards_per_iteration.cumsum()
rewards_per_iteration_cum_avg = rewards_per_iteration_sum/nElem
if (reward_type=="Cumulative"):
rewards_per_iteration_cum_avg = rewards_per_iteration_cum_avg/T
rewards_per_iteration = rewards_per_iteration/T
optimal_order_quantity = -np.log(cost/price) * 100
print("Optimal order_quantity for price {} and cost {} is {}".format(price,cost,optimal_order_quantity))
print("Reward type: {}, theta_step: {}, T: {} - Average reward over {} iteratios is: {}".format(reward_type,theta_step,T,trial_size,rewards_per_iteration_cum_avg[-1]))
ite = np.random.randint(0,trial_size)
order_quantity = learning_list_per_iteration[ite]
print("Order quantity for iteration {}".format(ite))
print(order_quantity)
#Ploting the reward
fig1, axsubs = plt.subplots(1,2,sharex=True,sharey=True)
fig1.suptitle("Reward type: {}, theta_step: {}, T: {}".format(reward_type,theta_step,T) )
axsubs[0].plot(nElem, rewards_per_iteration_cum_avg, 'g')
axsubs[0].set_title('Cum_average reward')
axsubs[1].plot(nElem, rewards_per_iteration, 'g')
axsubs[1].set_title('Reward per iteration')
#Create a big subplot
ax = fig1.add_subplot(111, frameon=False)
# hide tick and tick label of the big axes
plt.tick_params(labelcolor='none', top=False, bottom=False, left=False, right=False)
ax.set_ylabel('USD', labelpad=0) # Use argument `labelpad` to move label downwards.
ax.set_xlabel('Iterations', labelpad=10)
plt.show()
# ploting the analytical sol
plt.xlabel("Time")
plt.ylabel("Order quantity")
plt.title("Analytical vs learned ordered quantity - (iteration {})".format(ite))
time = np.arange(0, len(order_quantity))
plt.plot(time, time * 0 - np.log(cost/price) * 100, label = "Analytical solution")
plt.plot(time, order_quantity, label = "Kesten's Rule for theta_step {}".format(theta_step))
plt.legend()
plt.show()
```
| 4b296002eef414848967c2b1ba37cde75ab5ed15 | 520,638 | ipynb | Jupyter Notebook | Chap3_Adap_Mar_Plan/Chap3_Modeling_Theory.ipynb | Peymankor/seqdec_powell_repo | d3a03399ca7762821e80988d112f98bad5adefd8 | [
"MIT"
] | 1 | 2021-04-21T19:21:53.000Z | 2021-04-21T19:21:53.000Z | Chap3_Adap_Mar_Plan/Chap3_Modeling_Theory.ipynb | Efsilvaa/seqdec_powell_repo | d3a03399ca7762821e80988d112f98bad5adefd8 | [
"MIT"
] | null | null | null | Chap3_Adap_Mar_Plan/Chap3_Modeling_Theory.ipynb | Efsilvaa/seqdec_powell_repo | d3a03399ca7762821e80988d112f98bad5adefd8 | [
"MIT"
] | 2 | 2021-04-21T19:21:45.000Z | 2021-06-24T18:19:17.000Z | 63.764605 | 37,224 | 0.713705 | true | 2,710 | Qwen/Qwen-72B | 1. YES
2. YES | 0.885631 | 0.845942 | 0.749193 | __label__eng_Latn | 0.895217 | 0.578959 |
```python
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import display, HTML, IFrame, YouTubeVideo
from ipywidgets import interact,fixed
import pandas as pd
from numpy import cos,sin,pi,tan,log,exp,sqrt,array,linspace,arange
from mpl_toolkits import mplot3d
# from mpl_toolkits.mplot3d.art3d import Poly3DCollection
from ipywidgets import interact
plt.rcParams["figure.figsize"] = [7,7]
from numpy.linalg import norm
%matplotlib inline
# Uncomment the one that corresponds to your Jupyter theme
plt.style.use('dark_background')
# plt.style.use('fivethirtyeight')
# plt.style.use('Solarize_Light2')
```
$\renewcommand{\vec}{\mathbf}$
### Exercises
1. Wheat production $W$ in a given year depends on the average temperature $T$ and the annual rainfall $R$. Scientists estimate that the average temperature is rising at a rate of $0.15^\circ$C/year and rainfall is decreasing at a rate of $0.1$ cm/year. They also estimate that at current production levels, $\partial W/\partial T = -2$ and $\partial W/\partial R = 8$.
1. What is the significance of the signs of these partial derivatives?
As the temperature goes up, wheat production decreases; more rain, on the other hand, means more wheat.
2. Estimate the current rate of change of wheat production, $dW/dt$.
$$\frac{dW}{dt} = \frac{\partial W}{\partial T}\frac{dT}{dt} + \frac{\partial W}{\partial R}\frac{dR}{dt} = -2(0.15) + 8(-0.1) = -1.1 \text{ wheats} / \text{year}$$
2. Suppose
\begin{align}
z &= z(x,y) \\
x &= x(u,v) \\
y &= y(u,v) \\
u &= u(s,t) \\
v &= v(s,t) \\
\end{align}
are all differentiable. Find an expression for $\frac{\partial z}{\partial s}$.
$$ \frac{\partial z}{\partial s} = \frac{\partial z}{\partial x}\frac{\partial x}{\partial u}\frac{\partial u}{\partial s}
+ \frac{\partial z}{\partial x}\frac{\partial x}{\partial v}\frac{\partial v}{\partial s}
+ \frac{\partial z}{\partial y}\frac{\partial y}{\partial u}\frac{\partial u}{\partial s}
+ \frac{\partial z}{\partial y}\frac{\partial y}{\partial v}\frac{\partial v}{\partial s} $$
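A quick symbolic check of this expansion with sympy (the particular functions chosen for $u, v, x, y, z$ below are arbitrary illustrations, not part of the exercise):
```python
import sympy as sym

s, t, u, v, x, y = sym.symbols('s t u v x y')

# Arbitrary (assumed) choices, just to verify the four-term expansion above
z_xy = x**2 + x*y            # z = z(x, y)
x_uv = u*sym.cos(v)          # x = x(u, v)
y_uv = u*sym.sin(v)          # y = y(u, v)
u_st = s*t                   # u = u(s, t)
v_st = s + t                 # v = v(s, t)

# dz/ds by substituting everything and differentiating directly
z_st = z_xy.subs({x: x_uv, y: y_uv}).subs({u: u_st, v: v_st})
direct = sym.diff(z_st, s)

# dz/ds via the chain-rule expansion
chain = 0
for xi, xi_uv in [(x, x_uv), (y, y_uv)]:
    for wi, wi_st in [(u, u_st), (v, v_st)]:
        chain += sym.diff(z_xy, xi) * sym.diff(xi_uv, wi) * sym.diff(wi_st, s)
chain = chain.subs({x: x_uv, y: y_uv}).subs({u: u_st, v: v_st})

print(sym.simplify(direct - chain))   # prints 0
```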
## Example
If $g:\RR\to\RR$ is any smooth function, show that $f(x,y) = g(x^2+y^2)$ is radially symmetric. That is, $\frac{\partial f}{\partial \theta} =0$
$$\frac{\partial f }{\partial \theta} = \frac{\partial}{\partial x} (g(x^2 + y^2)) \frac{\partial x}{\partial \theta}
+ \frac{\partial}{\partial y} (g(x^2 + y^2)) \frac{\partial y}{\partial \theta} $$
$$ = g'(x^2 + y^2)\,2x (-r \sin \theta) + g'(x^2 + y^2)\,2y (r \cos \theta) $$
$$ = g'(x^2 + y^2)( -2xy + 2yx) = 0 $$
## Example
Find the slope of the tangent line to
$$ x \sin(y) - \frac12 = \sqrt{2} - 2\cos(xy)$$ at the point $\left(\frac12,\frac\pi2\right)$.
```python
x = y = np.linspace(-pi,pi,102)
x,y = np.meshgrid(x,y)
z = x*sin(y) + 2*cos(x*y) - sqrt(2) - 1/2
plt.figure(figsize=(7,7))
cp = plt.contour(x,y,z,levels=arange(-3,3.4,.5),alpha=.5,colors='y')
cp = plt.contour(x,y,z,levels=[0],colors='y')
# plt.clabel(cp,fmt="%d");
x = np.linspace(-2.5,3.5,102)
plt.plot(x,pi/2 + (x-1/2) * (sqrt(2) - pi),color='r');
plt.grid(True)
plt.scatter(1/2,pi/2)
plt.xlim(-pi,pi)
plt.ylim(-pi,pi);
```
$$F(x,y) = x \sin y + 2\cos(xy) = \frac12 + \sqrt 2 $$
$$ \frac{dy}{dx} = \left.-\frac{F_x}{F_y} \right\rvert_{(1/2,\pi/2)} = \left.-\frac{\sin y -2\sin(xy)y}{x\cos y -2 \sin(xy)x}\right\rvert_{(1/2,\pi/2)} $$
$$ = - \frac{1 - \frac{\pi}{\sqrt2}}{-\frac{1}{\sqrt2}} = \sqrt2 - \pi $$
### Example
Differentiate the function $$f(t) = \int_0^t e^{-tx^2}dx.$$
**Solution** This is a funny example as it is ostensibly a one-variable calculus problem. $x$ is just a dummy variable so the only variable to differentiate here is $t$, but you are not likely to find this example in a Calculus 1 text.
```python
@interact
def _(t = (0.,3.,0.05)):
    x = np.linspace(0,3,200)
    plt.plot(x,exp(-x**2),label = "$e^{-x^2}$")
    plt.plot(x,exp(-t*x**2),label = "$e^{-tx^2}$")
    y = np.array([0] + list(np.linspace(0,t,150)) + [t])
    z = exp(-t*y**2)
    z[0] = 0
    z[-1] = 0
    plt.fill(y,z)
    plt.legend();
```
We cannot only apply the Fundamental Theorem of Calculus here directly as $t$ appears in both the limits and the integrand. So instead, we define
$$F(a,b) = \int_0^a e^{-bx^2}dx$$
to separate those roles and then realize $f(t) = F(t,t)$ so we apply the chain rule
$$f'(t) = F_a(t,t) + F_b(t,t)$$ where of course here $\frac{da}{dt} = 1 = \frac{db}{dt}$. The first partial is computed via FTC and the second by differentiating under the integral sign. And thus,
$$f'(t) = e^{-t^3} + \int_0^t (-x^2)e^{-tx^2}\,dx $$
which is not beautiful but can be evaluated to arbitrary precision.
```python
from scipy.integrate import quad
def fprime(t):
    val = quad(lambda x: (-x**2)*exp(-t*x**2),0,t)[0]
    return exp(-t**3) + val
fprime(1)
```
0.17840709535094998
```python
t = np.linspace(0,3,200)
plt.figure(figsize=(8,8))
plt.plot(t, [fprime(tt) for tt in t],label="$df/dt$")
plt.plot(t, [quad(lambda x: exp(-tt*x**2),0,tt)[0] for tt in t],label="$f$")
plt.legend();
plt.plot(t, 0*t);
```
```python
```
| 12ba81b4b3e43c84e78d4e658b3bab6501c10171 | 156,777 | ipynb | Jupyter Notebook | exercises/L10-Exercises-Solutions.ipynb | drewyoungren/mvc | f5217ae7888050d722c66de95756586f662841d2 | [
"MIT"
] | null | null | null | exercises/L10-Exercises-Solutions.ipynb | drewyoungren/mvc | f5217ae7888050d722c66de95756586f662841d2 | [
"MIT"
] | null | null | null | exercises/L10-Exercises-Solutions.ipynb | drewyoungren/mvc | f5217ae7888050d722c66de95756586f662841d2 | [
"MIT"
] | null | null | null | 411.488189 | 116,532 | 0.936426 | true | 1,812 | Qwen/Qwen-72B | 1. YES
2. YES | 0.727975 | 0.812867 | 0.591747 | __label__eng_Latn | 0.882461 | 0.213158 |
# Characterization of Discrete Systems in the Spectral Domain
*This Jupyter notebook is part of a [collection of notebooks](../index.ipynb) in the bachelors module Signals and Systems, Communications Engineering, Universität Rostock. Please direct questions and suggestions to [Sascha.Spors@uni-rostock.de](mailto:Sascha.Spors@uni-rostock.de).*
## Combination of Systems
The representation of systems with a complex structure as combination of simpler systems is often convenient for their analysis or synthesis. This section discusses three of the most common combinations, the series and parallel connection of systems as well as feedback loops. The latter is very important in control engineering.
### Concatenation
When two linear time-invariant (LTI) systems are combined in series by connecting the output of the first system to the input of a second system, this is termed a *concatenation* of the two systems. Denoting the impulse responses of the two systems by $h_1[k]$ and $h_2[k]$, the output signal $y[k]$ of the second system is given as
\begin{equation}
y[k] = x[k] * h_1[k] * h_2[k]
\end{equation}
where $x[k]$ denotes the input signal of the first system. Applying a $z$-transform to the left- and right-hand side, and repeated application of the convolution theorem yields
\begin{equation}
Y(z) = \underbrace{H_1(z) \cdot H_2(z)}_{H(z)} \cdot X(z)
\end{equation}
It can be concluded that the concatenation of two systems can be regarded as one LTI system with the transfer function $H(z) = H_1(z) \cdot H_2(z)$. Hence, the following structures are equivalent
The extension to a concatenation of $N$ systems is straightforward. The overall transfer function is given by multiplication of all the individual transfer functions $H_n(z)$
\begin{equation}
H(z) = \prod_{n=1}^{N} H_n(z)
\end{equation}
Applications of concatenated systems include for instance the modeling of electroacoustic systems, wireless transmission systems and cascaded filters.
**Example - Concatenation of second-order sections**
Concatenation of LTI systems can be used to construct higher-order filters from lower-order prototypes. Such filters are known as *cascaded filters*. In digital signal processing, typically second-order systems are used as building blocks for higher-order systems. These blocks are termed second-order sections or [biquad filters](https://en.wikipedia.org/wiki/Digital_biquad_filter).
This is illustrated for the previously introduced [second-order recursive LTI system](difference_equation.ipynb#Second-Order-System) with transfer function
\begin{equation}
H_0(z) = \frac{\frac{1}{2}}{1 - z^{-1} + \frac{1}{2} z^{-2}}
\end{equation}
Note, the transfer function has been normalized for unit gain at $z = e^{j \Omega} \vert_{\Omega = 0}$.
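As a quick check of this normalization, evaluating at $\Omega = 0$ (i.e. $z=1$) gives
\begin{equation}
H_0(1) = \frac{\frac{1}{2}}{1 - 1 + \frac{1}{2}} = 1 ,
\end{equation}
i.e. a gain of $0$ dB.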
Concatenation of $N$ second-order filters leads to a filter with order $2 N$. Its transfer function reads
\begin{equation}
H_N(z) = \left(\frac{\frac{1}{2}}{1 - z^{-1} + \frac{1}{2} z^{-2}} \right)^N
\end{equation}
The resulting transfer function is illustrated by its logarithmic magnitude response for a varying number of cascaded filters. First the transfer function $H_N(z)$ is defined
```python
import sympy as sym
sym.init_printing()
%matplotlib inline
z = sym.symbols('z', complex=True)
W = sym.symbols('Omega', real=True)
N = sym.symbols('N', integer=True)
H0 = sym.Rational(1, 2) / (1 - z**(-1) + sym.Rational(1, 2)*z**(-2))
HN = H0**N
HN
```
The magnitude $|H_N(e^{j \Omega})|$ of the transfer function is shown for $N = \{1, 2, 3\}$ (red, green, blue line)
```python
HNa = 20*sym.log(sym.Abs(HN.subs(z, sym.exp(sym.I*W))))
p1 = sym.plot(HNa.subs(N, 1), (W, -sym.pi, sym.pi), xlabel='$\Omega$', ylabel='$| H_n(e^{j \Omega}) |$ in dB', line_color='r', show=False);
p2 = sym.plot(HNa.subs(N, 2), (W, -sym.pi, sym.pi), xlabel='$\Omega$', ylabel='$| H_n(e^{j \Omega}) |$ in dB', line_color='g', show=False);
p3 = sym.plot(HNa.subs(N, 3), (W, -sym.pi, sym.pi), xlabel='$\Omega$', ylabel='$| H_n(e^{j \Omega}) |$ in dB', line_color='b', show=False);
p1.extend(p2)
p1.extend(p3)
p1.show()
```
**Exercise**
* Compute the magnitude $|H_N(z)|$ and phase $\varphi(z)$ of the concatenated system.
* Using the result from the first exercise, how will the phase of the cascaded filter develop for an increasing number $N$ of cascaded filters?
### Parallel Connection
A structure where two LTI systems share the same input signal and their output signals are superimposed is called *parallel connection*. The overall output signal $y[k]$ is given as the superposition of the output signals of the individual systems
\begin{equation}
y[k] = h_1[k] * x[k] + h_2[k] * x[k]
\end{equation}
Applying a $z$-transform to the left- and right-hand side, exploiting the superposition principle, and convolution theorem yields
\begin{equation}
Y(z) = \underbrace{\left( H_1(z) + H_2(z) \right)}_{H(z)} \cdot X(z)
\end{equation}
The overall transfer function $H(z)$ of a parallel connection of two systems is given as the superposition of the transfer functions of the individual systems. Hence, the following structures are equivalent
The extension to a parallel connection of $N$ systems is straightforward. The overall transfer function is given by superposition of all individual transfer functions $H_n(z)$
\begin{equation}
H(z) = \sum_{n=1}^{N} H_n(z)
\end{equation}
A prominent application of the parallel connection of systems is the [filter bank](https://en.wikipedia.org/wiki/Filter_bank), as used in signal analysis and many lossy coding schemes.
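As a small illustration (not part of the original example), the parallel connection can be evaluated symbolically in the same way as the concatenation above. Here we reuse the second-order section as $H_1(z)$ and take $H_2(z) = H_1(-z)$, i.e. the same section with its frequency response shifted by $\pi$:
```python
import sympy as sym

z = sym.symbols('z', complex=True)
H1 = sym.Rational(1, 2) / (1 - z**(-1) + sym.Rational(1, 2)*z**(-2))
H2 = H1.subs(z, -z)           # frequency response shifted by pi (highpass counterpart)

Hpar = sym.simplify(H1 + H2)  # overall transfer function of the parallel connection
Hpar
```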
### Feedback
The connection of two LTI systems, where the input of the second system is connected to the output of the first and the output of the second system is superimposed to the input of the first is called *feedback loop*. This structure is depicted in the following illustration (upper block diagram)
The output signal $y[k]$ is given as
\begin{equation}
y[k] = x[k] * h_1[k] + y[k] * h_2[k] * h_1[k]
\end{equation}
Applying a $z$-transform to the left- and right-hand side, exploiting the superposition principle and the convolution theorem, and rearrangement of terms yields
\begin{equation}
Y(z) = \frac{H_1(z)}{1 - H_1(z) \cdot H_2(z)} \cdot X(z)
\end{equation}
The overall transfer function $H(z)$ of the feedback loop is then given as
\begin{equation}
H(z) = \frac{H_1(z)}{1 - H_1(z) \cdot H_2(z)}
\end{equation}
This equivalence is depicted by the lower block diagram of the above structure. Applications of feedback loops include [digital control systems](https://en.wikipedia.org/wiki/Control_system).
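As a small illustration (again an assumption, not part of the original notebook), we can close the loop around the second-order section from above with a simple feedback path $H_2(z) = K z^{-1}$ and evaluate the resulting transfer function symbolically:
```python
import sympy as sym

z = sym.symbols('z', complex=True)
K = sym.symbols('K', real=True)

H1 = sym.Rational(1, 2) / (1 - z**(-1) + sym.Rational(1, 2)*z**(-2))
H2 = K * z**(-1)                    # assumed feedback path: gain K and one sample delay

H = sym.simplify(H1 / (1 - H1*H2))  # closed-loop transfer function
H
```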
**Copyright**
This notebook is provided as [Open Educational Resource](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Sascha Spors, Continuous- and Discrete-Time Signals and Systems - Theory and Computational Examples*.
| 360914b2d753508767872c2a330a316252267abd | 192,055 | ipynb | Jupyter Notebook | discrete_systems_spectral_domain/combination.ipynb | spatialaudio/signals-and-systems-lecture | 93e2f3488dc8f7ae111a34732bd4d13116763c5d | [
"MIT"
] | 243 | 2016-04-01T14:21:00.000Z | 2022-03-28T20:35:09.000Z | discrete_systems_spectral_domain/combination.ipynb | iamzhd1977/signals-and-systems-lecture | b134608d336ceb94d83cdb66bc11c6d4d035f99c | [
"MIT"
] | 6 | 2016-04-11T06:28:17.000Z | 2021-11-10T10:59:35.000Z | discrete_systems_spectral_domain/combination.ipynb | iamzhd1977/signals-and-systems-lecture | b134608d336ceb94d83cdb66bc11c6d4d035f99c | [
"MIT"
] | 63 | 2017-04-20T00:46:03.000Z | 2022-03-30T14:07:09.000Z | 66.340242 | 27,330 | 0.6112 | true | 1,879 | Qwen/Qwen-72B | 1. YES
2. YES | 0.805632 | 0.787931 | 0.634783 | __label__eng_Latn | 0.992082 | 0.313144 |
We now extend SEIR model to SEAI5R:
\begin{align}
\dot{S_{i}} &=-\lambda_{i}(t)S_{i}+\sigma_{i}, \\
\dot{E}_{i} &=\lambda_{i}(t)S_{i}-\gamma_{E}E_{i},\\
\dot{A}_{i} &=\gamma_{E}E_{i}-\gamma_{A}A_{i},\\
\dot{I}_{i}^{a} &=\alpha\gamma_{A}A_{i}-\gamma_{I^{a}}I_{i}^{a},\\
\dot{I}_{i}^{s} &=\bar{\alpha}\gamma_{A}A_{i}-\gamma_{I^{s}}I_{i}^{s},\\
\dot{I}_{i}^{h} &=h_{i}\gamma_{I^{s}}I_{i}^{s}-\gamma_{I^{h}}I_{i}^{h},\\
\dot{I}_{i}^{c} &=c_{i}\gamma_{I^{h}}I_{i}^{h}-\gamma_{I^{c}}I_{i}^{c},\\
\dot{I}_{i}^{m} &=m_{i}\gamma_{I^{c}}I_{i}^{c},\\
\dot{R}_{i} &=\gamma_{I^{a}}I_{i}^{a}+\bar{h}_{i}\gamma_{I^{s}}I_{i}^{s}+\bar{c}_{i}\gamma_{I^{h}}I_{i}^{h}+\bar{m}_{i}\gamma_{I^{c}}I_{i}^{c},\\
\dot{N}_{i} &=\sigma_{i}-I_{i}^{m}.
\end{align}
$\lambda_{i}(t)=\beta\sum_{j=1}^{M}\left(C_{ij}^{a}\frac{A_{j}}{N_{j}}+C_{ij}^{a}\frac{I_{j}^{a}}{N_{j}}+C_{ij}^{s}\frac{I_{j}^{s}}{N_{j}}+C_{ij}^{h}\frac{I_{j}^{h}}{N_{j}}\right)$.
Here
* $I^{a}$ : asymptomatic infectives
* $I^{s}$ : symptomatic infectives
* $I^{h}$ : hospitalized infectives
* $I^{c}$ : ICU cases
* $I^{m}$ : mortality
* ${h}_{i}=1-\bar h_{i}$ is the fraction of symptomatic infectives who are hospitalized
* $c_{i}=1-\bar{c}_{i}$ is the fraction of hospitalized cases who move to the ICU
* $m_{i}=1-\bar{m}_{i}$ is the fraction of ICU cases which lead to mortality
* $C_{ij}^{s}=f^{s}C_{ij}^{a}\equiv f^{s}C_{ij}$
* $C_{ij}^{h}=f^{h}C_{ij}^{a}\equiv f^{h}C_{ij}$.
S ---> E
E ---> A
A ---> Ia, Is
Ia ---> R
Is ---> Ih, R
Ih ---> Ic, R
Ic ---> Im, R
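As a small illustration of the rate $\lambda_i(t)$ defined above, here is a minimal NumPy sketch (our own helper, not pyross's internal implementation), using the identifications $C^{s}_{ij} = f^{s}C_{ij}$ and $C^{h}_{ij} = f^{h}C_{ij}$:
```python
import numpy as np

def force_of_infection(beta, C, A, Ia, Is, Ih, N, fsa, fh):
    """lambda_i(t) = beta * sum_j C_ij * (A_j + Ia_j + fsa*Is_j + fh*Ih_j) / N_j
    (sketch of the rate written above; not pyross's internal code)."""
    return beta * C @ ((A + Ia + fsa*Is + fh*Ih) / N)
```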
```python
%%capture
## compile PyRoss for this notebook
import os
owd = os.getcwd()
os.chdir('../../')
%run setup.py install
os.chdir(owd)
```
```python
%matplotlib inline
import numpy as np
import pyross
import matplotlib.pyplot as plt
```
```python
M=16 # number of age groups
# load age structure data
my_data = np.genfromtxt('../data/age_structures/UK.csv', delimiter=',', skip_header=1)
aM, aF = my_data[:, 1], my_data[:, 2]
# set age groups
Ni=aM+aF; Ni=Ni[0:M]; N=np.sum(Ni)
```
```python
# contact matrices
CH, CW, CS, CO = pyross.contactMatrix.UK()
## matrix of total contacts
C=CH+CW+CS+CO
fig,aCF = plt.subplots(2,2);
aCF[0][0].pcolor(CH, cmap=plt.cm.get_cmap('GnBu', 10));
aCF[0][1].pcolor(CW, cmap=plt.cm.get_cmap('GnBu', 10));
aCF[1][0].pcolor(CS, cmap=plt.cm.get_cmap('GnBu', 10));
aCF[1][1].pcolor(CO, cmap=plt.cm.get_cmap('GnBu', 10));
```
## Fraction of asymptomatic infectives is constant
```python
beta = 0.036692 # infection rate
gE = 1/5
gA = 1/3
gIa = 1./7 # recovery rate of asymptomatic infectives
gIs = 1./7 # recovery rate of symptomatic infectives
alpha = 0.3 # fraction of asymptomatic infectives
fsa = 0.2 # the self-isolation parameter
fh = 0
gIh = 1/14
gIc = 1/14
sa = 100*np.ones(M) # rate of additional/removal of population by birth etc
sa[0] = 1500 # birth
sa[12:16] = -300 # mortality
hh = 0.1*np.ones(M) # fraction which goes from Is to hospital
cc = 0.05*np.ones(M) # fraction which goes from hospital to ICU
mm = 0.4*np.ones(M) # mortality from IC
# initial conditions
Is_0 = np.zeros((M)); #Is_0[6:13]=8; Is_0[2:6]=4; Is_0[13:16]=4
Ia_0 = 1000*np.ones((M));
R_0 = np.zeros((M))
E_0 = np.zeros((M))
A_0 = np.zeros((M))
Ih_0 = np.zeros((M))
Ic_0 = np.zeros((M))
Im_0 = np.zeros((M))
S_0 = Ni - (E_0 + A_0 + Ia_0 + Is_0 + Ih_0 + Ic_0 + R_0)
# matrix for linearised dynamics
L0 = np.zeros((M, M))
L = np.zeros((2*M, 2*M))
for i in range(M):
    for j in range(M):
        L0[i,j]=C[i,j]*Ni[i]/Ni[j]
L[0:M, 0:M] = alpha*beta/gIs*L0
L[0:M, M:2*M] = fsa*alpha*beta/gIs*L0
L[M:2*M, 0:M] = ((1-alpha)*beta/gIs)*L0
L[M:2*M, M:2*M] = fsa*((1-alpha)*beta/gIs)*L0
r0 = np.max(np.linalg.eigvals(L))
print("The basic reproductive ratio for these parameters is", r0)
```
The basic reproductive ratio for these parameters is (1.3199080176950944+0j)
```python
# duration of simulation and data file
Tf=200; Nf=2000;
# intantiate model
parameters = {'alpha':alpha,'beta':beta, 'gIa':gIa,'gIs':gIs,
'gIh':gIh,'gIc':gIc, 'gE':gE, 'gA':gA,
'fsa':fsa, 'fh':fh,
'sa':sa, 'hh':hh, 'cc':cc, 'mm':mm}
model = pyross.deterministic.SEAI5R(parameters, M, Ni)
# the contact structure is independent of time
def contactMatrix(t):
    return C
# run model
data=model.simulate(S_0, E_0, A_0, Ia_0, Is_0, Ih_0, Ic_0, Im_0, contactMatrix, Tf, Nf)
t = data['t']; IC = np.zeros((Nf))
for i in range(1*M):
    IC += data['X'][:,3*M+i]
fig = plt.figure(num=None, figsize=(10, 8), dpi=80, facecolor='w', edgecolor='k')
plt.rcParams.update({'font.size': 22})
plt.plot(t, IC, '-', lw=4, color='#A60628', label='forecast', alpha=0.8)
plt.legend(fontsize=26, loc='upper left'); plt.grid()
plt.autoscale(enable=True, axis='x', tight=True)
plt.ylabel('Asymptomatic Infected individuals'); #plt.xlim(0, 40); plt.ylim(0, 4);
#plt.savefig('/Users/rsingh/Desktop/2a.png', format='png', dpi=212)
IC[0]
```
## Fraction of asymptomatic infectives is age-dependent
```python
alpha = 0.5*np.ones(M)   # fraction of asymptomatic infectives in each age group
alpha[6:M] = 0.1         # mostly symptomatic in the older age groups
Ia_0 = 1000*np.ones((M));
```
```python
# duration of simulation and data file
Tf=200; Nf=2000;
# instantiate model
parameters = {'alpha':alpha,'beta':beta, 'gIa':gIa,'gIs':gIs,
'gIh':gIh,'gIc':gIc, 'gE':gE, 'gA':gA,
'fsa':fsa, 'fh':fh,
'sa':sa, 'hh':hh, 'cc':cc, 'mm':mm}
model = pyross.deterministic.SEAI5R(parameters, M, Ni)
# the contact structure is independent of time
def contactMatrix(t):
    return C
# run model
data=model.simulate(S_0, E_0, A_0, Ia_0, Is_0, Ih_0, Ic_0, Im_0, contactMatrix, Tf, Nf)
t = data['t']; IC1 = data['X'][:,3*M+1];
IC2 = data['X'][:,3*M+8]
fig = plt.figure(num=None, figsize=(10, 8), dpi=80, facecolor='w', edgecolor='k')
plt.rcParams.update({'font.size': 22})
plt.plot(t, IC1, '-', lw=4, color='#A60628', label='M=1', alpha=0.8)
plt.plot(t, IC2, '-', lw=4, color='green', label='M=16', alpha=0.8)
plt.legend(fontsize=26, loc='upper left'); plt.grid()
plt.autoscale(enable=True, axis='x', tight=True)
plt.ylabel('Asymptomatic Infected individuals'); #plt.xlim(0, 40); plt.ylim(0, 4);
#plt.savefig('/Users/rsingh/Desktop/2a.png', format='png', dpi=212)
IC[0]
```
| c225b78d4fa23ebc4aa826b93d8eaa496eedd7a3 | 144,978 | ipynb | Jupyter Notebook | examples/deterministic/ex11-SEAI5R-UK.ipynb | ineskris/pyross | 2ee6deb01b17cdbff19ef89ec6d1e607bceb481c | [
"MIT"
] | null | null | null | examples/deterministic/ex11-SEAI5R-UK.ipynb | ineskris/pyross | 2ee6deb01b17cdbff19ef89ec6d1e607bceb481c | [
"MIT"
] | null | null | null | examples/deterministic/ex11-SEAI5R-UK.ipynb | ineskris/pyross | 2ee6deb01b17cdbff19ef89ec6d1e607bceb481c | [
"MIT"
] | null | null | null | 372.694087 | 69,468 | 0.929196 | true | 2,584 | Qwen/Qwen-72B | 1. YES
2. YES | 0.812867 | 0.737158 | 0.599212 | __label__eng_Latn | 0.311629 | 0.2305 |
```python
%load_ext autoreload
%autoreload 2
```
```python
import jax
import jax.numpy as jnp
```
```python
import cr
```
```python
M = 10
p = 3
N = 5
```
```python
A = jnp.zeros([M, p])
A
```
WARNING:absl:No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)
DeviceArray([[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.]], dtype=float32)
```python
B = jnp.ones([N, p])
B
```
DeviceArray([[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.]], dtype=float32)
```python
import cr.sparse as crs
```
```python
crs.pairwise_sqr_l2_distances_rw(A, B)
```
DeviceArray([[3., 3., 3., 3., 3.],
[3., 3., 3., 3., 3.],
[3., 3., 3., 3., 3.],
[3., 3., 3., 3., 3.],
[3., 3., 3., 3., 3.],
[3., 3., 3., 3., 3.],
[3., 3., 3., 3., 3.],
[3., 3., 3., 3., 3.],
[3., 3., 3., 3., 3.],
[3., 3., 3., 3., 3.]], dtype=float32)
```python
A = 3*jnp.ones([M, p])
```
```python
B = 5*jnp.ones([N, p])
```
```python
crs.pairwise_sqr_l2_distances_rw(A, B)
```
DeviceArray([[12., 12., 12., 12., 12.],
[12., 12., 12., 12., 12.],
[12., 12., 12., 12., 12.],
[12., 12., 12., 12., 12.],
[12., 12., 12., 12., 12.],
[12., 12., 12., 12., 12.],
[12., 12., 12., 12., 12.],
[12., 12., 12., 12., 12.],
[12., 12., 12., 12., 12.],
[12., 12., 12., 12., 12.]], dtype=float32)
```python
A = jnp.arange(1, M+1) * jnp.ones((p, 1))
A
```
DeviceArray([[ 1., 2., 3., 4., 5., 6., 7., 8., 9., 10.],
[ 1., 2., 3., 4., 5., 6., 7., 8., 9., 10.],
[ 1., 2., 3., 4., 5., 6., 7., 8., 9., 10.]], dtype=float32)
```python
B = jnp.arange(2, N+2) * jnp.ones((p, 1))
B
```
DeviceArray([[2., 3., 4., 5., 6.],
[2., 3., 4., 5., 6.],
[2., 3., 4., 5., 6.]], dtype=float32)
```python
crs.pairwise_sqr_l2_distances_rw(A.T, B.T)
```
DeviceArray([[ 3., 12., 27., 48., 75.],
[ 0., 3., 12., 27., 48.],
[ 3., 0., 3., 12., 27.],
[ 12., 3., 0., 3., 12.],
[ 27., 12., 3., 0., 3.],
[ 48., 27., 12., 3., 0.],
[ 75., 48., 27., 12., 3.],
[108., 75., 48., 27., 12.],
[147., 108., 75., 48., 27.],
[192., 147., 108., 75., 48.]], dtype=float32)
```python
crs.pairwise_sqr_l2_distances_cw(A, B)
```
DeviceArray([[ 3., 12., 27., 48., 75.],
[ 0., 3., 12., 27., 48.],
[ 3., 0., 3., 12., 27.],
[ 12., 3., 0., 3., 12.],
[ 27., 12., 3., 0., 3.],
[ 48., 27., 12., 3., 0.],
[ 75., 48., 27., 12., 3.],
[108., 75., 48., 27., 12.],
[147., 108., 75., 48., 27.],
[192., 147., 108., 75., 48.]], dtype=float32)
```python
cr.sparse.pairwise_sqr_l2_distances_cw(A, B)
```
DeviceArray([[ 3., 12., 27., 48., 75.],
[ 0., 3., 12., 27., 48.],
[ 3., 0., 3., 12., 27.],
[ 12., 3., 0., 3., 12.],
[ 27., 12., 3., 0., 3.],
[ 48., 27., 12., 3., 0.],
[ 75., 48., 27., 12., 3.],
[108., 75., 48., 27., 12.],
[147., 108., 75., 48., 27.],
[192., 147., 108., 75., 48.]], dtype=float32)
```python
A
```
DeviceArray([[ 1., 2., 3., 4., 5., 6., 7., 8., 9., 10.],
[ 1., 2., 3., 4., 5., 6., 7., 8., 9., 10.],
[ 1., 2., 3., 4., 5., 6., 7., 8., 9., 10.]], dtype=float32)
```python
B
```
DeviceArray([[2., 3., 4., 5., 6.],
[2., 3., 4., 5., 6.],
[2., 3., 4., 5., 6.]], dtype=float32)
```python
C = A.T
```
```python
C
```
DeviceArray([[ 1., 1., 1.],
[ 2., 2., 2.],
[ 3., 3., 3.],
[ 4., 4., 4.],
[ 5., 5., 5.],
[ 6., 6., 6.],
[ 7., 7., 7.],
[ 8., 8., 8.],
[ 9., 9., 9.],
[10., 10., 10.]], dtype=float32)
```python
D = B.T
```
```python
D
```
DeviceArray([[2., 2., 2.],
[3., 3., 3.],
[4., 4., 4.],
[5., 5., 5.],
[6., 6., 6.]], dtype=float32)
```python
crs.pairwise_l1_distances_rw(C, D)
```
DeviceArray([[ 3., 6., 9., 12., 15.],
[ 0., 3., 6., 9., 12.],
[ 3., 0., 3., 6., 9.],
[ 6., 3., 0., 3., 6.],
[ 9., 6., 3., 0., 3.],
[12., 9., 6., 3., 0.],
[15., 12., 9., 6., 3.],
[18., 15., 12., 9., 6.],
[21., 18., 15., 12., 9.],
[24., 21., 18., 15., 12.]], dtype=float32)
```python
crs.pairwise_l1_distances_cw(A, B)
```
DeviceArray([[ 3., 6., 9., 12., 15.],
[ 0., 3., 6., 9., 12.],
[ 3., 0., 3., 6., 9.],
[ 6., 3., 0., 3., 6.],
[ 9., 6., 3., 0., 3.],
[12., 9., 6., 3., 0.],
[15., 12., 9., 6., 3.],
[18., 15., 12., 9., 6.],
[21., 18., 15., 12., 9.],
[24., 21., 18., 15., 12.]], dtype=float32)
```python
crs.pdist_l1_rw(C)
```
DeviceArray([[ 0., 3., 6., 9., 12., 15., 18., 21., 24., 27.],
[ 3., 0., 3., 6., 9., 12., 15., 18., 21., 24.],
[ 6., 3., 0., 3., 6., 9., 12., 15., 18., 21.],
[ 9., 6., 3., 0., 3., 6., 9., 12., 15., 18.],
[12., 9., 6., 3., 0., 3., 6., 9., 12., 15.],
[15., 12., 9., 6., 3., 0., 3., 6., 9., 12.],
[18., 15., 12., 9., 6., 3., 0., 3., 6., 9.],
[21., 18., 15., 12., 9., 6., 3., 0., 3., 6.],
[24., 21., 18., 15., 12., 9., 6., 3., 0., 3.],
[27., 24., 21., 18., 15., 12., 9., 6., 3., 0.]], dtype=float32)
```python
crs.pdist_l1_cw(A)
```
DeviceArray([[ 0., 3., 6., 9., 12., 15., 18., 21., 24., 27.],
[ 3., 0., 3., 6., 9., 12., 15., 18., 21., 24.],
[ 6., 3., 0., 3., 6., 9., 12., 15., 18., 21.],
[ 9., 6., 3., 0., 3., 6., 9., 12., 15., 18.],
[12., 9., 6., 3., 0., 3., 6., 9., 12., 15.],
[15., 12., 9., 6., 3., 0., 3., 6., 9., 12.],
[18., 15., 12., 9., 6., 3., 0., 3., 6., 9.],
[21., 18., 15., 12., 9., 6., 3., 0., 3., 6.],
[24., 21., 18., 15., 12., 9., 6., 3., 0., 3.],
[27., 24., 21., 18., 15., 12., 9., 6., 3., 0.]], dtype=float32)
```python
A
```
DeviceArray([[ 1., 2., 3., 4., 5., 6., 7., 8., 9., 10.],
[ 1., 2., 3., 4., 5., 6., 7., 8., 9., 10.],
[ 1., 2., 3., 4., 5., 6., 7., 8., 9., 10.]], dtype=float32)
```python
crs.pairwise_linf_distances_rw(C, D)
```
DeviceArray([[1., 2., 3., 4., 5.],
[0., 1., 2., 3., 4.],
[1., 0., 1., 2., 3.],
[2., 1., 0., 1., 2.],
[3., 2., 1., 0., 1.],
[4., 3., 2., 1., 0.],
[5., 4., 3., 2., 1.],
[6., 5., 4., 3., 2.],
[7., 6., 5., 4., 3.],
[8., 7., 6., 5., 4.]], dtype=float32)
```python
crs.pairwise_linf_distances_cw(A, B)
```
DeviceArray([[1., 2., 3., 4., 5.],
[0., 1., 2., 3., 4.],
[1., 0., 1., 2., 3.],
[2., 1., 0., 1., 2.],
[3., 2., 1., 0., 1.],
[4., 3., 2., 1., 0.],
[5., 4., 3., 2., 1.],
[6., 5., 4., 3., 2.],
[7., 6., 5., 4., 3.],
[8., 7., 6., 5., 4.]], dtype=float32)
```python
crs.pdist_linf_rw(C)
```
DeviceArray([[0., 1., 2., 3., 4., 5., 6., 7., 8., 9.],
[1., 0., 1., 2., 3., 4., 5., 6., 7., 8.],
[2., 1., 0., 1., 2., 3., 4., 5., 6., 7.],
[3., 2., 1., 0., 1., 2., 3., 4., 5., 6.],
[4., 3., 2., 1., 0., 1., 2., 3., 4., 5.],
[5., 4., 3., 2., 1., 0., 1., 2., 3., 4.],
[6., 5., 4., 3., 2., 1., 0., 1., 2., 3.],
[7., 6., 5., 4., 3., 2., 1., 0., 1., 2.],
[8., 7., 6., 5., 4., 3., 2., 1., 0., 1.],
[9., 8., 7., 6., 5., 4., 3., 2., 1., 0.]], dtype=float32)
```python
crs.pdist_linf_cw(B)
```
DeviceArray([[0., 1., 2., 3., 4.],
[1., 0., 1., 2., 3.],
[2., 1., 0., 1., 2.],
[3., 2., 1., 0., 1.],
[4., 3., 2., 1., 0.]], dtype=float32)
```python
jnp.mod(jnp.array([1, 2, 2.1, 2.3, 1.0, 3.0]), 1) == 0
```
DeviceArray([ True, True, False, False, True, True], dtype=bool)
```python
x = jnp.array([1, -2, -2.1, 2.3, 1.0, 3.0, 2.0, -1])
```
```python
jnp.logical_and(x > 0, jnp.mod(x, 1) == 0)
```
DeviceArray([ True, False, False, False, True, True], dtype=bool)
```python
jnp.mod(x, 2) == 1
```
DeviceArray([ True, False, False, False, True, True], dtype=bool)
```python
jnp.mod(x, 2) == 0
```
DeviceArray([False, True, False, False, False, False], dtype=bool)
```python
jnp.logical_and(x > 0, jnp.mod(x, 2) == 1)
```
DeviceArray([ True, False, False, False, True, True, False, False], dtype=bool)
```python
x = 2
```
```python
jnp.logical_and(x > 0, jnp.mod(x, 2) == 1)
```
DeviceArray(False, dtype=bool)
```python
jnp.logical_not(jnp.bitwise_and(x, x - 1))
```
DeviceArray(True, dtype=bool)
```python
x = jnp.arange(1, 17)
```
```python
jnp.logical_not(jnp.bitwise_and(x, x - 1))
```
DeviceArray([ True, True, False, True, False, False, False, True,
False, False, False, False, False, False, False, True], dtype=bool)
```python
jnp.floor(x)
```
DeviceArray([ 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12.,
13., 14., 15., 16.], dtype=float32)
```python
jnp.mod
```
<function jax._src.numpy.lax_numpy.remainder(x1, x2)>
```python
crs.is_integer(x)
```
DeviceArray([ True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True], dtype=bool)
```python
crs.integer_factors_close_to_sqr_root(16)
```
(DeviceArray(4, dtype=int32), DeviceArray(4, dtype=int32))
```python
x
```
DeviceArray([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15,
16], dtype=int32)
```python
jnp.sqrt(2.0)
```
```python
```
| 7a09a49b590b915362d409c3c4ab3dfee8e8e577 | 27,275 | ipynb | Jupyter Notebook | notebooks/la/distances.ipynb | carnot-shailesh/cr-sparse | 989ebead8a8ac37ade643093e1caa31ae2a3eda1 | [
"Apache-2.0"
] | 42 | 2021-06-11T17:11:29.000Z | 2022-03-29T11:51:44.000Z | notebooks/la/distances.ipynb | carnot-shailesh/cr-sparse | 989ebead8a8ac37ade643093e1caa31ae2a3eda1 | [
"Apache-2.0"
] | 19 | 2021-06-04T11:36:11.000Z | 2022-01-22T20:13:39.000Z | notebooks/la/distances.ipynb | carnot-shailesh/cr-sparse | 989ebead8a8ac37ade643093e1caa31ae2a3eda1 | [
"Apache-2.0"
] | 5 | 2021-11-21T21:01:11.000Z | 2022-02-28T07:20:03.000Z | 24.840619 | 290 | 0.374409 | true | 5,112 | Qwen/Qwen-72B | 1. YES
2. YES | 0.815233 | 0.771844 | 0.629232 | __label__krc_Cyrl | 0.975292 | 0.300247 |
# Transformations, Eigenvectors, and Eigenvalues
Matrices and vectors are used together to manipulate spatial dimensions. This has a lot of applications, including the mathematical generation of 3D computer graphics, geometric modeling, and the training and optimization of machine learning algorithms. We're not going to cover the subject exhaustively here; but we'll focus on a few key concepts that are useful to know when you plan to work with machine learning.
## Linear Transformations
You can manipulate a vector by multiplying it with a matrix. The matrix acts as a function that operates on an input vector to produce a vector output. Specifically, matrix multiplications of vectors are *linear transformations* that transform the input vector into the output vector.
For example, consider this matrix ***A*** and vector ***v***:
$$ A = \begin{bmatrix}2 & 3\\5 & 2\end{bmatrix} \;\;\;\; \vec{v} = \begin{bmatrix}1\\2\end{bmatrix}$$
We can define a transformation ***T*** like this:
$$ T(\vec{v}) = A\vec{v} $$
To perform this transformation, we simply calculate the dot product by applying the *RC* rule; multiplying each row of the matrix by the single column of the vector:
$$\begin{bmatrix}2 & 3\\5 & 2\end{bmatrix} \cdot \begin{bmatrix}1\\2\end{bmatrix} = \begin{bmatrix}8\\9\end{bmatrix}$$
Here's the calculation in Python:
```python
import numpy as np
v = np.array([1,2])
A = np.array([[2,3],
[5,2]])
t = A@v
print (t)
```
[8 9]
In this case, both the input vector and the output vector have 2 components - in other words, the transformation takes a 2-dimensional vector and produces a new 2-dimensional vector; which we can indicate like this:
$$ T: \rm I\!R^{2} \to \rm I\!R^{2} $$
Note that the output vector may have a different number of dimensions from the input vector; so the matrix function might transform the vector from one space to another - or in notation, ${\rm I\!R}$<sup>n</sup> -> ${\rm I\!R}$<sup>m</sup>.
For example, let's redefine matrix ***A***, while retaining our original definition of vector ***v***:
$$ A = \begin{bmatrix}2 & 3\\5 & 2\\1 & 1\end{bmatrix} \;\;\;\; \vec{v} = \begin{bmatrix}1\\2\end{bmatrix}$$
Now if we once again define ***T*** like this:
$$ T(\vec{v}) = A\vec{v} $$
We apply the transformation like this:
$$\begin{bmatrix}2 & 3\\5 & 2\\1 & 1\end{bmatrix} \cdot \begin{bmatrix}1\\2\end{bmatrix} = \begin{bmatrix}8\\9\\3\end{bmatrix}$$
So now, our transformation transforms the vector from 2-dimensional space to 3-dimensional space:
$$ T: \rm I\!R^{2} \to \rm I\!R^{3} $$
Here it is in Python:
```python
import numpy as np
v = np.array([1,2])
A = np.array([[2,3],
[5,2],
[1,1]])
t = A@v
print (t)
```
[8 9 3]
```python
import numpy as np
v = np.array([1,2])
A = np.array([[1,2],
[2,1]])
t = A@v
print (t)
```
[5 4]
## Transformations of Magnitude and Amplitude
When you multiply a vector by a matrix, you transform it in at least one of the following two ways:
* Scale the length (*magnitude*) of the vector to make it longer or shorter
* Change the direction (*amplitude*) of the vector
For example consider the following matrix and vector:
$$ A = \begin{bmatrix}2 & 0\\0 & 2\end{bmatrix} \;\;\;\; \vec{v} = \begin{bmatrix}1\\0\end{bmatrix}$$
As before, we transform the vector ***v*** by multiplying it with the matrix ***A***:
\begin{equation}\begin{bmatrix}2 & 0\\0 & 2\end{bmatrix} \cdot \begin{bmatrix}1\\0\end{bmatrix} = \begin{bmatrix}2\\0\end{bmatrix}\end{equation}
In this case, the resulting vector has changed in length (*magnitude*), but has not changed its direction (*amplitude*).
Let's visualize that in Python:
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
v = np.array([1,0])
A = np.array([[2,0],
[0,2]])
t = A@v
print (t)
# Plot v and t
vecs = np.array([t,v])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, *v, color=['orange'], scale=10)
plt.quiver(*origin, *t, color=['blue'], scale=10)
plt.show()
```
The original vector ***v*** is shown in orange, and the transformed vector ***t*** is shown in blue - note that ***t*** has the same direction (*amplitude*) as ***v*** but a greater length (*magnitude*).
Now let's use a different matrix to transform the vector ***v***:
\begin{equation}\begin{bmatrix}0 & -1\\1 & 0\end{bmatrix} \cdot \begin{bmatrix}1\\0\end{bmatrix} = \begin{bmatrix}0\\1\end{bmatrix}\end{equation}
This time, the resulting vector has been changed to a different amplitude, but has the same magnitude.
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
v = np.array([1,0])
A = np.array([[0,-1],
[1,0]])
t = A@v
print (t)
# Plot v and t
vecs = np.array([v,t])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, *v, color=['orange'], scale=10)
plt.quiver(*origin, *t, color=['blue'], scale=10)
plt.show()
```
Now let's change the matrix one more time:
\begin{equation}\begin{bmatrix}2 & 1\\1 & 2\end{bmatrix} \cdot \begin{bmatrix}1\\0\end{bmatrix} = \begin{bmatrix}2\\1\end{bmatrix}\end{equation}
Now our resulting vector has been transformed to a new amplitude *and* magnitude - the transformation has affected both direction and scale.
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
v = np.array([1,0])
A = np.array([[2,1],
[1,2]])
t = A@v
print (t)
# Plot v and t
vecs = np.array([v,t])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, *v, color=['orange'], scale=10)
plt.quiver(*origin, *t, color=['blue'], scale=10)
plt.show()
```
### Affine Transformations
An affine transformation multiplies a vector by a matrix and adds an offset vector, sometimes referred to as *bias*; like this:
$$T(\vec{v}) = A\vec{v} + \vec{b}$$
For example:
\begin{equation}\begin{bmatrix}5 & 2\\3 & 1\end{bmatrix} \cdot \begin{bmatrix}1\\1\end{bmatrix} + \begin{bmatrix}-2\\-6\end{bmatrix} = \begin{bmatrix}5\\-2\end{bmatrix}\end{equation}
This kind of transformation is actually the basis of linear regression, which is a core foundation for machine learning. The matrix defines the *features*, the first vector is the *coefficients*, and the bias vector is the *intercept*.
Here's an example of an affine transformation in Python:
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
v = np.array([1,1])
A = np.array([[5,2],
[3,1]])
b = np.array([-2,-6])
t = A@v + b
print (t)
# Plot v and t
vecs = np.array([v,t])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, *v, color=['orange'], scale=15)
plt.quiver(*origin, *t, color=['blue'], scale=15)
plt.show()
```
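To make the linear regression connection concrete, here is a minimal sketch; the feature matrix `X`, coefficients `w`, and intercept `b` below are purely illustrative made-up numbers.
```python
# Illustrative sketch: linear regression predictions are an affine
# transformation y_hat = X @ w + b of the feature matrix X.
X = np.array([[1., 2.],
              [3., 4.],
              [5., 6.]])    # feature matrix (one row per observation)
w = np.array([0.5, -1.0])   # coefficients
b = 2.0                     # intercept (bias)
y_hat = X @ w + b           # matrix-vector product plus offset
print(y_hat)
```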
## Eigenvectors and Eigenvalues
So we can see that when we transform a vector using a matrix, we change its direction, length, or both. When the transformation only affects scale (in other words, the output vector has a different magnitude but the same amplitude as the input vector), the matrix multiplication for the transformation is equivalent to some scalar multiplication of the vector.
For example, earlier we examined the following transformation that dot-multiplies a vector by a matrix:
$$\begin{bmatrix}2 & 0\\0 & 2\end{bmatrix} \cdot \begin{bmatrix}1\\0\end{bmatrix} = \begin{bmatrix}2\\0\end{bmatrix}$$
You can achieve the same result by multiplying the vector by the scalar value ***2***:
$$2 \times \begin{bmatrix}1\\0\end{bmatrix} = \begin{bmatrix}2\\0\end{bmatrix}$$
The following Python performs both of these calculations and shows the results, which are identical.
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
v = np.array([1,0])
A = np.array([[2,0],
[0,2]])
t1 = A@v
print (t1)
t2 = 2*v
print (t2)
fig = plt.figure()
a=fig.add_subplot(1,1,1)
# Plot v and t1
vecs = np.array([t1,v])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, *v, color=['orange'], scale=10)
plt.quiver(*origin, *t1, color=['blue'], scale=10)
plt.show()
a=fig.add_subplot(1,2,1)
# Plot v and t2
vecs = np.array([t2,v])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, *v, color=['orange'], scale=10)
plt.quiver(*origin, *t2, color=['blue'], scale=10)
plt.show()
```
In cases like these, where a matrix transformation is the equivalent of a scalar-vector multiplication, the scalar-vector pairs that correspond to the matrix are known respectively as eigenvalues and eigenvectors. We generally indicate eigenvalues using the Greek letter lambda (λ), and the formula that defines eigenvalues and eigenvectors with respect to a transformation is:
$$ T(\vec{v}) = \lambda\vec{v}$$
Where the vector ***v*** is an eigenvector and the value ***λ*** is an eigenvalue for transformation ***T***.
When the transformation ***T*** is represented as a matrix multiplication, as in this case where the transformation is represented by matrix ***A***:
$$ T(\vec{v}) = A\vec{v} = \lambda\vec{v}$$
Then ***v*** is an eigenvector and ***λ*** is an eigenvalue of ***A***.
A matrix can have multiple eigenvector-eigenvalue pairs, and you can calculate them manually. However, it's generally easier to use a tool or programming language. For example, in Python you can use the ***linalg.eig*** function, which returns an array of eigenvalues and a matrix of the corresponding eigenvectors for the specified matrix.
Here's an example that returns the eigenvalue and eigenvector pairs for the following matrix:
$$A=\begin{bmatrix}2 & 0\\0 & 3\end{bmatrix}$$
```python
import numpy as np
A = np.array([[2,0],
[0,3]])
eVals, eVecs = np.linalg.eig(A)
print(eVals)
print(eVecs)
```
[2. 3.]
[[1. 0.]
[0. 1.]]
So there are two eigenvalue-eigenvector pairs for this matrix, as shown here:
$$ \lambda_{1} = 2, \vec{v_{1}} = \begin{bmatrix}1 \\ 0\end{bmatrix} \;\;\;\;\;\; \lambda_{2} = 3, \vec{v_{2}} = \begin{bmatrix}0 \\ 1\end{bmatrix} $$
Let's verify that multiplying each eigenvalue-eigenvector pair corresponds to the dot-product of the eigenvector and the matrix. Here's the first pair:
$$ 2 \times \begin{bmatrix}1 \\ 0\end{bmatrix} = \begin{bmatrix}2 \\ 0\end{bmatrix} \;\;\;and\;\;\; \begin{bmatrix}2 & 0\\0 & 3\end{bmatrix} \cdot \begin{bmatrix}1 \\ 0\end{bmatrix} = \begin{bmatrix}2 \\ 0\end{bmatrix} $$
So far so good. Now let's check the second pair:
$$ 3 \times \begin{bmatrix}0 \\ 1\end{bmatrix} = \begin{bmatrix}0 \\ 3\end{bmatrix} \;\;\;and\;\;\; \begin{bmatrix}2 & 0\\0 & 3\end{bmatrix} \cdot \begin{bmatrix}0 \\ 1\end{bmatrix} = \begin{bmatrix}0 \\ 3\end{bmatrix} $$
So our eigenvalue-eigenvector scalar multiplications do indeed correspond to our matrix-eigenvector dot-product transformations.
Here's the equivalent code in Python, using the ***eVals*** and ***eVecs*** variables you generated in the previous code cell:
```python
vec1 = eVecs[:,0]
lam1 = eVals[0]
print('Matrix A:')
print(A)
print('-------')
print('lam1: ' + str(lam1))
print ('v1: ' + str(vec1))
print ('Av1: ' + str(A@vec1))
print ('lam1 x v1: ' + str(lam1*vec1))
print('-------')
vec2 = eVecs[:,1]
lam2 = eVals[1]
print('lam2: ' + str(lam2))
print ('v2: ' + str(vec2))
print ('Av2: ' + str(A@vec2))
print ('lam2 x v2: ' + str(lam2*vec2))
```
Matrix A:
[[2 0]
[0 3]]
-------
lam1: 2.0
v1: [1. 0.]
Av1: [2. 0.]
lam1 x v1: [2. 0.]
-------
lam2: 3.0
v2: [0. 1.]
Av2: [0. 3.]
lam2 x v2: [0. 3.]
You can use the following code to visualize these transformations:
```python
t1 = lam1*vec1
print (t1)
t2 = lam2*vec2
print (t2)
fig = plt.figure()
a=fig.add_subplot(1,1,1)
# Plot v and t1
vecs = np.array([t1,vec1])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, *v, color=['orange'], scale=10)
plt.quiver(*origin, *t1, color=['blue'], scale=10)
plt.show()
a=fig.add_subplot(1,2,1)
# Plot v and t2
vecs = np.array([t2,vec2])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, *v, color=['orange'], scale=10)
plt.quiver(*origin, *t2, color=['blue'], scale=10)
plt.show()
```
Similarly, earlier we examined the following matrix transformation:
$$\begin{bmatrix}2 & 0\\0 & 2\end{bmatrix} \cdot \begin{bmatrix}1\\0\end{bmatrix} = \begin{bmatrix}2\\0\end{bmatrix}$$
And we saw that you can achieve the same result by multiplying the vector by the scalar value ***2***:
$$2 \times \begin{bmatrix}1\\0\end{bmatrix} = \begin{bmatrix}2\\0\end{bmatrix}$$
This works because the scalar value 2 and the vector (1,0) are an eigenvalue-eigenvector pair for this matrix.
Let's use Python to determine the eigenvalue-eigenvector pairs for this matrix:
```python
import numpy as np
A = np.array([[2,0],
[0,2]])
eVals, eVecs = np.linalg.eig(A)
print(eVals)
print(eVecs)
```
[2. 2.]
[[1. 0.]
[0. 1.]]
So once again, there are two eigenvalue-eigenvector pairs for this matrix, as shown here:
$$ \lambda_{1} = 2, \vec{v_{1}} = \begin{bmatrix}1 \\ 0\end{bmatrix} \;\;\;\;\;\; \lambda_{2} = 2, \vec{v_{2}} = \begin{bmatrix}0 \\ 1\end{bmatrix} $$
Let's verify that multiplying each eigenvalue-eigenvector pair corresponds to the dot-product of the eigenvector and the matrix. Here's the first pair:
$$ 2 \times \begin{bmatrix}1 \\ 0\end{bmatrix} = \begin{bmatrix}2 \\ 0\end{bmatrix} \;\;\;and\;\;\; \begin{bmatrix}2 & 0\\0 & 2\end{bmatrix} \cdot \begin{bmatrix}1 \\ 0\end{bmatrix} = \begin{bmatrix}2 \\ 0\end{bmatrix} $$
Well, we already knew that. Now let's check the second pair:
$$ 2 \times \begin{bmatrix}0 \\ 1\end{bmatrix} = \begin{bmatrix}0 \\ 2\end{bmatrix} \;\;\;and\;\;\; \begin{bmatrix}2 & 0\\0 & 2\end{bmatrix} \cdot \begin{bmatrix}0 \\ 1\end{bmatrix} = \begin{bmatrix}0 \\ 2\end{bmatrix} $$
Now let's use Python to verify and plot these transformations:
```python
vec1 = eVecs[:,0]
lam1 = eVals[0]
print('Matrix A:')
print(A)
print('-------')
print('lam1: ' + str(lam1))
print ('v1: ' + str(vec1))
print ('Av1: ' + str(A@vec1))
print ('lam1 x v1: ' + str(lam1*vec1))
print('-------')
vec2 = eVecs[:,1]
lam2 = eVals[1]
print('lam2: ' + str(lam2))
print ('v2: ' + str(vec2))
print ('Av2: ' + str(A@vec2))
print ('lam2 x v2: ' + str(lam2*vec2))
# Plot the resulting vectors
t1 = lam1*vec1
t2 = lam2*vec2
fig = plt.figure()
a=fig.add_subplot(1,1,1)
# Plot v and t1
vecs = np.array([t1,vec1])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, *v, color=['orange'], scale=10)
plt.quiver(*origin, *t1, color=['blue'], scale=10)
plt.show()
a=fig.add_subplot(1,2,1)
# Plot v and t2
vecs = np.array([t2,vec2])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, *v, color=['orange'], scale=10)
plt.quiver(*origin, *t2, color=['blue'], scale=10)
plt.show()
```
Let's take a look at one more, slightly more complex example. Here's our matrix:
$$\begin{bmatrix}2 & 1\\1 & 2\end{bmatrix}$$
Let's get the eigenvalue and eigenvector pairs:
```python
import numpy as np
A = np.array([[2,1],
[1,2]])
eVals, eVecs = np.linalg.eig(A)
print(eVals)
print(eVecs)
```
[3. 1.]
[[ 0.70710678 -0.70710678]
[ 0.70710678 0.70710678]]
This time the eigenvalue-eigenvector pairs are:
$$ \lambda_{1} = 3, \vec{v_{1}} = \begin{bmatrix}0.70710678 \\ 0.70710678\end{bmatrix} \;\;\;\;\;\; \lambda_{2} = 1, \vec{v_{2}} = \begin{bmatrix}-0.70710678 \\ 0.70710678\end{bmatrix} $$
So let's check the first pair:
$$ 3 \times \begin{bmatrix}0.70710678 \\ 0.70710678\end{bmatrix} = \begin{bmatrix}2.12132034 \\ 2.12132034\end{bmatrix} \;\;\;and\;\;\; \begin{bmatrix}2 & 1\\1 & 2\end{bmatrix} \cdot \begin{bmatrix}0.70710678 \\ 0.70710678\end{bmatrix} = \begin{bmatrix}2.12132034 \\ 2.12132034\end{bmatrix} $$
Now let's check the second pair:
$$ 1 \times \begin{bmatrix}-0.70710678 \\ 0.70710678\end{bmatrix} = \begin{bmatrix}-0.70710678\\0.70710678\end{bmatrix} \;\;\;and\;\;\; \begin{bmatrix}2 & 1\\1 & 2\end{bmatrix} \cdot \begin{bmatrix}-0.70710678 \\ 0.70710678\end{bmatrix} = \begin{bmatrix}-0.70710678\\0.70710678\end{bmatrix} $$
With more complex examples like this, it's generally easier to do it with Python:
```python
vec1 = eVecs[:,0]
lam1 = eVals[0]
print('Matrix A:')
print(A)
print('-------')
print('lam1: ' + str(lam1))
print ('v1: ' + str(vec1))
print ('Av1: ' + str(A@vec1))
print ('lam1 x v1: ' + str(lam1*vec1))
print('-------')
vec2 = eVecs[:,1]
lam2 = eVals[1]
print('lam2: ' + str(lam2))
print ('v2: ' + str(vec2))
print ('Av2: ' + str(A@vec2))
print ('lam2 x v2: ' + str(lam2*vec2))
# Plot the results
t1 = lam1*vec1
t2 = lam2*vec2
fig = plt.figure()
a=fig.add_subplot(1,1,1)
# Plot v and t1
vecs = np.array([t1,vec1])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, *v, color=['orange'], scale=10)
plt.quiver(*origin, *t1, color=['blue'], scale=10)
plt.show()
a=fig.add_subplot(1,2,1)
# Plot v and t2
vecs = np.array([t2,vec2])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, *v, color=['orange'], scale=10)
plt.quiver(*origin, *t2, color=['blue'], scale=10)
plt.show()
```
## Eigendecomposition
So we've learned a little about eigenvalues and eigenvectors; but you may be wondering what use they are. Well, one use for them is to help decompose transformation matrices.
Recall that previously we found that a matrix transformation of a vector changes its magnitude, amplitude, or both. Without getting too technical about it, we need to remember that vectors can exist in any spatial orientation, or *basis*; and the same transformation can be applied in different *bases*.
We can decompose a matrix using the following formula:
$$A = Q \Lambda Q^{-1}$$
Where ***A*** is a transformation that can be applied to a vector in its current base, ***Q*** is a matrix of eigenvectors that defines a change of basis, and ***Λ*** is a matrix with eigenvalues on the diagonal that defines the same linear transformation as ***A*** in the base defined by ***Q***.
Let's look at these in some more detail. Consider this matrix:
$$A=\begin{bmatrix}3 & 2\\1 & 0\end{bmatrix}$$
***Q*** is a matrix in which each column is an eigenvector of ***A***; which as we've seen previously, we can calculate using Python:
```python
import numpy as np
A = np.array([[3,2],
[1,0]])
l, Q = np.linalg.eig(A)
print(Q)
```
[[ 0.96276969 -0.48963374]
[ 0.27032301 0.87192821]]
So for matrix ***A***, ***Q*** is the following matrix:
$$Q=\begin{bmatrix}0.96276969 & -0.48963374\\0.27032301 & 0.87192821\end{bmatrix}$$
***Λ*** is a matrix that contains the eigenvalues for ***A*** on the diagonal, with zeros in all other elements; so for a 2x2 matrix, Λ will look like this:
$$\Lambda=\begin{bmatrix}\lambda_{1} & 0\\0 & \lambda_{2}\end{bmatrix}$$
In our Python code, we've already used the ***linalg.eig*** function to return the array of eigenvalues for ***A*** into the variable ***l***, so now we just need to format that as a matrix:
```python
L = np.diag(l)
print (L)
```
[[ 3.56155281 0. ]
[ 0. -0.56155281]]
So ***Λ*** is the following matrix:
$$\Lambda=\begin{bmatrix}3.56155281 & 0\\0 & -0.56155281\end{bmatrix}$$
Now we just need to find ***Q<sup>-1</sup>***, which is the inverse of ***Q***:
```python
Qinv = np.linalg.inv(Q)
print(Qinv)
```
[[ 0.89720673 0.50382896]
[-0.27816009 0.99068183]]
The inverse of ***Q*** then, is:
$$Q^{-1}=\begin{bmatrix}0.89720673 & 0.50382896\\-0.27816009 & 0.99068183\end{bmatrix}$$
So what does that mean? Well, it means that we can decompose the transformation of *any* vector multiplied by matrix ***A*** into the separate operations ***QΛQ<sup>-1</sup>***:
$$A\vec{v} = Q \Lambda Q^{-1}\vec{v}$$
To prove this, let's take vector ***v***:
$$\vec{v} = \begin{bmatrix}1\\3\end{bmatrix} $$
Our matrix transformation using ***A*** is:
$$\begin{bmatrix}3 & 2\\1 & 0\end{bmatrix} \cdot \begin{bmatrix}1\\3\end{bmatrix} $$
So let's show the results of that using Python:
```python
v = np.array([1,3])
t = A@v
print(t)
# Plot v and t
vecs = np.array([v,t])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, *v, color=['orange'], scale=10)
plt.quiver(*origin, *t, color=['blue'], scale=10)
plt.show()
```
And now, let's do the same thing using the ***QΛQ<sup>-1</sup>*** sequence of operations:
```python
import math
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
t = (Q@(L@(Qinv)))@v
# Plot v and t
vecs = np.array([v,t])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, *v, color=['orange'], scale=10)
plt.quiver(*origin, *t, color=['blue'], scale=10)
plt.show()
```
So ***A*** and ***QΛQ<sup>-1</sup>*** are equivalent.
If we view the intermediary stages of the decomposed transformation, you can see the change of basis applied by ***Q<sup>-1</sup>*** (orange to blue), the equivalent linear transformation ***Λ*** applied in the basis described by ***Q*** (blue to red), and the change back to the original basis applied by ***Q*** (red to magenta):
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
t1 = Qinv@v
t2 = L@t1
t3 = Q@t2
# Plot the transformations
vecs = np.array([v,t1, t2, t3])
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, *v, color=['orange'], scale=20)
plt.quiver(*origin, *t1, color=['blue'], scale=20)
plt.quiver(*origin, *t2, color=['red'], scale=20)
plt.quiver(*origin, *t3, color=['magenta'], scale=20)
plt.show()
```
So from this visualization, it should be apparent that the transformation ***Av*** can be performed by changing the basis for ***v*** using ***Q<sup>-1</sup>*** (from orange to blue in the above plot), applying the equivalent linear transformation in that basis using ***Λ*** (blue to red), and switching back to the original basis using ***Q*** (red to magenta).
## Rank of a Matrix
For a diagonalizable square matrix, the **rank** is the number of non-zero eigenvalues of the matrix. A **full rank** matrix has as many non-zero eigenvalues as the dimension of the matrix. A **rank-deficient** matrix has fewer non-zero eigenvalues than dimensions. A rank-deficient matrix is singular, so its inverse does not exist (this is why in a previous notebook we noted that some matrices have no inverse).
Consider the following matrix ***A***:
$$A=\begin{bmatrix}1 & 2\\4 & 3\end{bmatrix}$$
Let's find its eigenvalues (***Λ***):
```python
import numpy as np
A = np.array([[1,2],
[4,3]])
l, Q = np.linalg.eig(A)
L = np.diag(l)
print(L)
```
[[-1. 0.]
[ 0. 5.]]
$$\Lambda=\begin{bmatrix}-1 & 0\\0 & 5\end{bmatrix}$$
This matrix has full rank. The dimension of the matrix is 2, and there are two non-zero eigenvalues.
Now consider this matrix:
$$B=\begin{bmatrix}3 & -3 & 6\\2 & -2 & 4\\1 & -1 & 2\end{bmatrix}$$
Note that the second and third columns are just scalar multiples of the first column.
Let's examine its eigenvalues:
```python
B = np.array([[3,-3,6],
[2,-2,4],
[1,-1,2]])
lb, Qb = np.linalg.eig(B)
Lb = np.diag(lb)
print(Lb)
```
[[ 3.00000000e+00 0.00000000e+00 0.00000000e+00]
[ 0.00000000e+00 -6.00567308e-17 0.00000000e+00]
[ 0.00000000e+00 0.00000000e+00 3.57375398e-16]]
$$\Lambda=\begin{bmatrix}3 & 0& 0\\0 & -6\times10^{-17} & 0\\0 & 0 & 3.6\times10^{-16}\end{bmatrix}$$
Note that this matrix has only 1 non-zero eigenvalue. The other two eigenvalues are so extremely small as to be effectively zero. This is an example of a rank-deficient matrix; and as such, it has no inverse.
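As a quick cross-check, here is a minimal sketch using NumPy's `matrix_rank`, which reports the same conclusion for both matrices:
```python
# Cross-check: A (from the previous example) is full rank, B is rank-deficient.
print(np.linalg.matrix_rank(A))   # expected 2
print(np.linalg.matrix_rank(B))   # expected 1
```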
## Inverse of a Square Full Rank Matrix
You can calculate the inverse of a square full rank matrix by using the following formula:
$$A^{-1} = Q \Lambda^{-1} Q^{-1}$$
Let's apply this to matrix ***A***:
$$A=\begin{bmatrix}1 & 2\\4 & 3\end{bmatrix}$$
Let's find the matrices for ***Q***, ***Λ<sup>-1</sup>***, and ***Q<sup>-1</sup>***:
```python
import numpy as np
A = np.array([[1,2],
[4,3]])
l, Q = np.linalg.eig(A)
L = np.diag(l)
print(Q)
Linv = np.linalg.inv(L)
Qinv = np.linalg.inv(Q)
print(Linv)
print(Qinv)
```
[[-0.70710678 -0.4472136 ]
[ 0.70710678 -0.89442719]]
[[-1. -0. ]
[ 0. 0.2]]
[[-0.94280904 0.47140452]
[-0.74535599 -0.74535599]]
So:
$$A^{-1}=\begin{bmatrix}-0.70710678 & -0.4472136\\0.70710678 & -0.89442719\end{bmatrix}\cdot\begin{bmatrix}-1 & -0\\0 & 0.2\end{bmatrix}\cdot\begin{bmatrix}-0.94280904 & 0.47140452\\-0.74535599 & -0.74535599\end{bmatrix}$$
Let's calculate that in Python:
```python
Ainv = (Q@(Linv@(Qinv)))
print(Ainv)
```
[[-0.6 0.4]
[ 0.8 -0.2]]
That gives us the result:
$$A^{-1}=\begin{bmatrix}-0.6 & 0.4\\0.8 & -0.2\end{bmatrix}$$
We can apply the ***np.linalg.inv*** function directly to ***A*** to verify this:
```python
print(np.linalg.inv(A))
```
[[-0.6 0.4]
[ 0.8 -0.2]]
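As one more quick sanity check, the product of ***A*** and the computed inverse should be (numerically) the identity matrix:
```python
# A @ Ainv should equal the 2x2 identity, up to floating-point round-off.
print(A @ Ainv)
```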
| a327d953e0a2f466925f6d82dd371ea0f009f851 | 150,137 | ipynb | Jupyter Notebook | 4_oreilly-book/code/ch13/ML-Math-Notebooks/Copy_of_03_05_Transformations_Eigenvectors_and_Eigenvalues.ipynb | lynnlangit/learning-quantum | 8da57597efe7e213b4368fc115d67e8b42b906c6 | [
"Apache-2.0"
] | 14 | 2021-02-10T10:20:05.000Z | 2022-03-31T05:21:30.000Z | 4_oreilly-book/code/ch13/ML-Math-Notebooks/Copy_of_03_05_Transformations_Eigenvectors_and_Eigenvalues.ipynb | lynnlangit/learning-quantum | 8da57597efe7e213b4368fc115d67e8b42b906c6 | [
"Apache-2.0"
] | null | null | null | 4_oreilly-book/code/ch13/ML-Math-Notebooks/Copy_of_03_05_Transformations_Eigenvectors_and_Eigenvalues.ipynb | lynnlangit/learning-quantum | 8da57597efe7e213b4368fc115d67e8b42b906c6 | [
"Apache-2.0"
] | 2 | 2021-03-23T13:34:24.000Z | 2022-01-14T18:51:00.000Z | 87.339732 | 8,862 | 0.777477 | true | 8,544 | Qwen/Qwen-72B | 1. YES
2. YES | 0.97024 | 0.904651 | 0.877728 | __label__eng_Latn | 0.892179 | 0.87759 |
```python
from sympy.physics.matrices import mgamma
import sympy.abc as greek
from sympy import *
init_printing()
```
**Shut up and calculate.** - *Fedor Herbut* quoting *Niels Bohr*.
```python
def commutator(a, b, p = -1):
return a * b + p * b * a
```
Applying *commutator(mgamma(mu), mgamma(nu), p = 1)* for $\mu, \nu = 0, 1, 2, 3$, it can easily be verified that ($3^{rd}$ problem):
\begin{equation}
\{\gamma_{\mu}, \gamma_{\nu}\} = 2 g_{\mu \nu}
\end{equation}
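Here is a minimal verification sketch, assuming the metric signature $\mathrm{diag}(1, -1, -1, -1)$ and the implicit $4 \times 4$ identity on the right-hand side:
```python
# Check {gamma_mu, gamma_nu} = 2 g_{mu nu} * I for all index pairs,
# using commutator(..., p = 1) as the anticommutator.
g = diag(1, -1, -1, -1)
print(all(commutator(mgamma(mu), mgamma(nu), p = 1) == 2 * g[mu, nu] * eye(4)
          for mu in range(4) for nu in range(4)))
```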
Function *mgamma_2(mu, nu, gamma)* returns $\gamma^{\mu \nu}$, defined as:
\begin{equation}
\gamma^{\mu \nu} = \frac{1}{2}[\gamma^{\mu}, \gamma^{\nu}]
\end{equation}
Function *mgamma_3(mu, nu, rho, gamma)* returns $\gamma^{\mu \nu \rho}$, defined as:
\begin{equation}
\gamma^{\mu \nu \rho} = \frac{1}{2}[\gamma^{\mu \nu}, \gamma^{\rho}]
\end{equation}
Function *mgamma_4(mu, nu, rho, sigma, gamma)* returns $\gamma^{\mu \nu \rho \sigma}$, defined as:
\begin{equation}
\gamma^{\mu \nu \rho \sigma} = \frac{1}{2}[\gamma^{\mu \nu \rho}, \gamma^{\sigma}]
\end{equation}
Function *mgamma_n(indices, order, gamma)* returns general tensor of arbitrary order defined recursively.
Here the argument *gamma* can specify any matrix-valued function used to evaluate the *mgamma_n* functions. In particular, it can be $\gamma^{\mu}$ (as set by default), $\gamma_{\mu}$, or any function of a single index.
```python
def mgamma_2(mu, nu, gamma = mgamma):
return (gamma(mu) * gamma(nu) - gamma(nu) * gamma(mu)) / 2
def mgamma_3(mu, nu, rho, gamma = mgamma):
return (mgamma_2(mu, nu, gamma = gamma) * gamma(rho) - gamma(rho) * mgamma_2(mu, nu, gamma = gamma)) / 2
def mgamma_4(mu, nu, rho, sigma, gamma = mgamma):
return (mgamma_3(mu, nu, rho, gamma = gamma) * gamma(sigma) - \
gamma(sigma) * mgamma_3(mu, nu, rho, gamma = gamma)) / 2
def mgamma_n(indices, order, gamma = mgamma):
if order == 1:
return gamma(indices[0])
else:
return commutator(mgamma_n(indices[1:], order-1, gamma = mgamma), gamma(indices[0]))/2
def mgamma_(mu):
if mu == 0:
return mgamma(0)
else:
return -mgamma(mu)
```
## Statement of problem
With tensors defined as above, find the contraction:
\begin{equation}
\gamma^{\mu \nu \rho \sigma} \gamma_{\rho \sigma}
\end{equation}
Let:
\begin{equation}
G^{\mu \nu \rho \sigma}_{\alpha \beta} = \gamma^{\mu \nu \rho \sigma} \gamma_{\alpha \beta}
\end{equation}
```python
def G(mu, nu, rho, sigma, alpha, beta):
return mgamma_4(mu, nu, rho, sigma) * mgamma_2(alpha, beta, gamma = mgamma_)
#More general tensor than G
def G_(upper, lower):
return mgamma_n(upper, len(upper)) * mgamma_n(lower, len(lower), gamma = mgamma_)
def zeroMatrix(n, m):
return Matrix([[0 for i in range(m)] for j in range(n)])
```
```python
def contractor(mu, nu):
tensor = zeroMatrix(4, 4)
for rho in range(4):
for sigma in range(4):
tensor += G(mu, nu, rho, sigma, rho, sigma)
return tensor
```
```python
for mu in range(4):
for nu in range(4):
print('mu = ', mu, ' nu = ', nu)
pprint(contractor(mu, nu))
```
mu = 0 nu = 0
⎡0 0 0 0⎤
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎣0 0 0 0⎦
mu = 0 nu = 1
⎡0 0 0 -4⎤
⎢ ⎥
⎢0 0 -4 0 ⎥
⎢ ⎥
⎢0 -4 0 0 ⎥
⎢ ⎥
⎣-4 0 0 0 ⎦
mu = 0 nu = 2
⎡ 0 0 0 4⋅ⅈ⎤
⎢ ⎥
⎢ 0 0 -4⋅ⅈ 0 ⎥
⎢ ⎥
⎢ 0 4⋅ⅈ 0 0 ⎥
⎢ ⎥
⎣-4⋅ⅈ 0 0 0 ⎦
mu = 0 nu = 3
⎡0 0 -4 0⎤
⎢ ⎥
⎢0 0 0 4⎥
⎢ ⎥
⎢-4 0 0 0⎥
⎢ ⎥
⎣0 4 0 0⎦
mu = 1 nu = 0
⎡0 0 0 4⎤
⎢ ⎥
⎢0 0 4 0⎥
⎢ ⎥
⎢0 4 0 0⎥
⎢ ⎥
⎣4 0 0 0⎦
mu = 1 nu = 1
⎡0 0 0 0⎤
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎣0 0 0 0⎦
mu = 1 nu = 2
⎡4⋅ⅈ 0 0 0 ⎤
⎢ ⎥
⎢ 0 -4⋅ⅈ 0 0 ⎥
⎢ ⎥
⎢ 0 0 4⋅ⅈ 0 ⎥
⎢ ⎥
⎣ 0 0 0 -4⋅ⅈ⎦
mu = 1 nu = 3
⎡0 -4 0 0 ⎤
⎢ ⎥
⎢4 0 0 0 ⎥
⎢ ⎥
⎢0 0 0 -4⎥
⎢ ⎥
⎣0 0 4 0 ⎦
mu = 2 nu = 0
⎡ 0 0 0 -4⋅ⅈ⎤
⎢ ⎥
⎢ 0 0 4⋅ⅈ 0 ⎥
⎢ ⎥
⎢ 0 -4⋅ⅈ 0 0 ⎥
⎢ ⎥
⎣4⋅ⅈ 0 0 0 ⎦
mu = 2 nu = 1
⎡-4⋅ⅈ 0 0 0 ⎤
⎢ ⎥
⎢ 0 4⋅ⅈ 0 0 ⎥
⎢ ⎥
⎢ 0 0 -4⋅ⅈ 0 ⎥
⎢ ⎥
⎣ 0 0 0 4⋅ⅈ⎦
mu = 2 nu = 2
⎡0 0 0 0⎤
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎣0 0 0 0⎦
mu = 2 nu = 3
⎡ 0 4⋅ⅈ 0 0 ⎤
⎢ ⎥
⎢4⋅ⅈ 0 0 0 ⎥
⎢ ⎥
⎢ 0 0 0 4⋅ⅈ⎥
⎢ ⎥
⎣ 0 0 4⋅ⅈ 0 ⎦
mu = 3 nu = 0
⎡0 0 4 0 ⎤
⎢ ⎥
⎢0 0 0 -4⎥
⎢ ⎥
⎢4 0 0 0 ⎥
⎢ ⎥
⎣0 -4 0 0 ⎦
mu = 3 nu = 1
⎡0 4 0 0⎤
⎢ ⎥
⎢-4 0 0 0⎥
⎢ ⎥
⎢0 0 0 4⎥
⎢ ⎥
⎣0 0 -4 0⎦
mu = 3 nu = 2
⎡ 0 -4⋅ⅈ 0 0 ⎤
⎢ ⎥
⎢-4⋅ⅈ 0 0 0 ⎥
⎢ ⎥
⎢ 0 0 0 -4⋅ⅈ⎥
⎢ ⎥
⎣ 0 0 -4⋅ⅈ 0 ⎦
mu = 3 nu = 3
⎡0 0 0 0⎤
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎣0 0 0 0⎦
### Solution
Since $G^{\mu \nu \rho \sigma}_{\rho \sigma}$ is a second-rank tensor, and as many physics book authors just use a guessing method to solve such problems, we will guess the solution to be $g \cdot \gamma^{\mu \nu}$, where $g$ is a constant to be determined.
```python
def guess():
for mu in range(4):
for nu in range(4):
print('mu = ', mu, ' nu = ', nu)
pprint(contractor(mu, nu))
pprint(mgamma_2(mu, nu))
```
```python
guess()
```
mu = 0 nu = 0
⎡0 0 0 0⎤
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎣0 0 0 0⎦
⎡0 0 0 0⎤
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎣0 0 0 0⎦
mu = 0 nu = 1
⎡0 0 0 -4⎤
⎢ ⎥
⎢0 0 -4 0 ⎥
⎢ ⎥
⎢0 -4 0 0 ⎥
⎢ ⎥
⎣-4 0 0 0 ⎦
⎡0 0 0 1⎤
⎢ ⎥
⎢0 0 1 0⎥
⎢ ⎥
⎢0 1 0 0⎥
⎢ ⎥
⎣1 0 0 0⎦
mu = 0 nu = 2
⎡ 0 0 0 4⋅ⅈ⎤
⎢ ⎥
⎢ 0 0 -4⋅ⅈ 0 ⎥
⎢ ⎥
⎢ 0 4⋅ⅈ 0 0 ⎥
⎢ ⎥
⎣-4⋅ⅈ 0 0 0 ⎦
⎡0 0 0 -ⅈ⎤
⎢ ⎥
⎢0 0 ⅈ 0 ⎥
⎢ ⎥
⎢0 -ⅈ 0 0 ⎥
⎢ ⎥
⎣ⅈ 0 0 0 ⎦
mu = 0 nu = 3
⎡0 0 -4 0⎤
⎢ ⎥
⎢0 0 0 4⎥
⎢ ⎥
⎢-4 0 0 0⎥
⎢ ⎥
⎣0 4 0 0⎦
⎡0 0 1 0 ⎤
⎢ ⎥
⎢0 0 0 -1⎥
⎢ ⎥
⎢1 0 0 0 ⎥
⎢ ⎥
⎣0 -1 0 0 ⎦
mu = 1 nu = 0
⎡0 0 0 4⎤
⎢ ⎥
⎢0 0 4 0⎥
⎢ ⎥
⎢0 4 0 0⎥
⎢ ⎥
⎣4 0 0 0⎦
⎡0 0 0 -1⎤
⎢ ⎥
⎢0 0 -1 0 ⎥
⎢ ⎥
⎢0 -1 0 0 ⎥
⎢ ⎥
⎣-1 0 0 0 ⎦
mu = 1 nu = 1
⎡0 0 0 0⎤
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎣0 0 0 0⎦
⎡0 0 0 0⎤
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎣0 0 0 0⎦
mu = 1 nu = 2
⎡4⋅ⅈ 0 0 0 ⎤
⎢ ⎥
⎢ 0 -4⋅ⅈ 0 0 ⎥
⎢ ⎥
⎢ 0 0 4⋅ⅈ 0 ⎥
⎢ ⎥
⎣ 0 0 0 -4⋅ⅈ⎦
⎡-ⅈ 0 0 0⎤
⎢ ⎥
⎢0 ⅈ 0 0⎥
⎢ ⎥
⎢0 0 -ⅈ 0⎥
⎢ ⎥
⎣0 0 0 ⅈ⎦
mu = 1 nu = 3
⎡0 -4 0 0 ⎤
⎢ ⎥
⎢4 0 0 0 ⎥
⎢ ⎥
⎢0 0 0 -4⎥
⎢ ⎥
⎣0 0 4 0 ⎦
⎡0 1 0 0⎤
⎢ ⎥
⎢-1 0 0 0⎥
⎢ ⎥
⎢0 0 0 1⎥
⎢ ⎥
⎣0 0 -1 0⎦
mu = 2 nu = 0
⎡ 0 0 0 -4⋅ⅈ⎤
⎢ ⎥
⎢ 0 0 4⋅ⅈ 0 ⎥
⎢ ⎥
⎢ 0 -4⋅ⅈ 0 0 ⎥
⎢ ⎥
⎣4⋅ⅈ 0 0 0 ⎦
⎡0 0 0 ⅈ⎤
⎢ ⎥
⎢0 0 -ⅈ 0⎥
⎢ ⎥
⎢0 ⅈ 0 0⎥
⎢ ⎥
⎣-ⅈ 0 0 0⎦
mu = 2 nu = 1
⎡-4⋅ⅈ 0 0 0 ⎤
⎢ ⎥
⎢ 0 4⋅ⅈ 0 0 ⎥
⎢ ⎥
⎢ 0 0 -4⋅ⅈ 0 ⎥
⎢ ⎥
⎣ 0 0 0 4⋅ⅈ⎦
⎡ⅈ 0 0 0 ⎤
⎢ ⎥
⎢0 -ⅈ 0 0 ⎥
⎢ ⎥
⎢0 0 ⅈ 0 ⎥
⎢ ⎥
⎣0 0 0 -ⅈ⎦
mu = 2 nu = 2
⎡0 0 0 0⎤
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎣0 0 0 0⎦
⎡0 0 0 0⎤
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎣0 0 0 0⎦
mu = 2 nu = 3
⎡ 0 4⋅ⅈ 0 0 ⎤
⎢ ⎥
⎢4⋅ⅈ 0 0 0 ⎥
⎢ ⎥
⎢ 0 0 0 4⋅ⅈ⎥
⎢ ⎥
⎣ 0 0 4⋅ⅈ 0 ⎦
⎡0 -ⅈ 0 0 ⎤
⎢ ⎥
⎢-ⅈ 0 0 0 ⎥
⎢ ⎥
⎢0 0 0 -ⅈ⎥
⎢ ⎥
⎣0 0 -ⅈ 0 ⎦
mu = 3 nu = 0
⎡0 0 4 0 ⎤
⎢ ⎥
⎢0 0 0 -4⎥
⎢ ⎥
⎢4 0 0 0 ⎥
⎢ ⎥
⎣0 -4 0 0 ⎦
⎡0 0 -1 0⎤
⎢ ⎥
⎢0 0 0 1⎥
⎢ ⎥
⎢-1 0 0 0⎥
⎢ ⎥
⎣0 1 0 0⎦
mu = 3 nu = 1
⎡0 4 0 0⎤
⎢ ⎥
⎢-4 0 0 0⎥
⎢ ⎥
⎢0 0 0 4⎥
⎢ ⎥
⎣0 0 -4 0⎦
⎡0 -1 0 0 ⎤
⎢ ⎥
⎢1 0 0 0 ⎥
⎢ ⎥
⎢0 0 0 -1⎥
⎢ ⎥
⎣0 0 1 0 ⎦
mu = 3 nu = 2
⎡ 0 -4⋅ⅈ 0 0 ⎤
⎢ ⎥
⎢-4⋅ⅈ 0 0 0 ⎥
⎢ ⎥
⎢ 0 0 0 -4⋅ⅈ⎥
⎢ ⎥
⎣ 0 0 -4⋅ⅈ 0 ⎦
⎡0 ⅈ 0 0⎤
⎢ ⎥
⎢ⅈ 0 0 0⎥
⎢ ⎥
⎢0 0 0 ⅈ⎥
⎢ ⎥
⎣0 0 ⅈ 0⎦
mu = 3 nu = 3
⎡0 0 0 0⎤
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎣0 0 0 0⎦
⎡0 0 0 0⎤
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎣0 0 0 0⎦
It seems reasonable to guess that $g = -4$. Let's verify that.
```python
def verify_1():
for mu in range(4):
for nu in range(4):
print('mu = ', mu, ' nu = ', nu)
pprint(contractor(mu, nu) + 4 * mgamma_2(mu, nu))
def verify_2():
for mu in range(4):
for nu in range(4):
print('mu = ', mu, ' nu = ', nu, contractor(mu, nu) == -4 * mgamma_2(mu, nu))
```
```python
verify_1()
```
mu = 0 nu = 0
⎡0 0 0 0⎤
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎣0 0 0 0⎦
mu = 0 nu = 1
⎡0 0 0 0⎤
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎣0 0 0 0⎦
mu = 0 nu = 2
⎡0 0 0 0⎤
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎣0 0 0 0⎦
mu = 0 nu = 3
⎡0 0 0 0⎤
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎣0 0 0 0⎦
mu = 1 nu = 0
⎡0 0 0 0⎤
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎣0 0 0 0⎦
mu = 1 nu = 1
⎡0 0 0 0⎤
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎣0 0 0 0⎦
mu = 1 nu = 2
⎡0 0 0 0⎤
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎣0 0 0 0⎦
mu = 1 nu = 3
⎡0 0 0 0⎤
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎣0 0 0 0⎦
mu = 2 nu = 0
⎡0 0 0 0⎤
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎣0 0 0 0⎦
mu = 2 nu = 1
⎡0 0 0 0⎤
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎣0 0 0 0⎦
mu = 2 nu = 2
⎡0 0 0 0⎤
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎣0 0 0 0⎦
mu = 2 nu = 3
⎡0 0 0 0⎤
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎣0 0 0 0⎦
mu = 3 nu = 0
⎡0 0 0 0⎤
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎣0 0 0 0⎦
mu = 3 nu = 1
⎡0 0 0 0⎤
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎣0 0 0 0⎦
mu = 3 nu = 2
⎡0 0 0 0⎤
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎣0 0 0 0⎦
mu = 3 nu = 3
⎡0 0 0 0⎤
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎣0 0 0 0⎦
```python
verify_2()
```
mu = 0 nu = 0 True
mu = 0 nu = 1 True
mu = 0 nu = 2 True
mu = 0 nu = 3 True
mu = 1 nu = 0 True
mu = 1 nu = 1 True
mu = 1 nu = 2 True
mu = 1 nu = 3 True
mu = 2 nu = 0 True
mu = 2 nu = 1 True
mu = 2 nu = 2 True
mu = 2 nu = 3 True
mu = 3 nu = 0 True
mu = 3 nu = 1 True
mu = 3 nu = 2 True
mu = 3 nu = 3 True
So, the solution is:
\begin{equation}
\gamma^{\mu \nu \rho \sigma} \gamma_{\rho \sigma} = -4 \gamma^{\mu \nu}
\end{equation}
Other reasonable guesses might be $\gamma^{\nu \mu}$, which is just $-\gamma^{\mu \nu}$. On the other hand, $\gamma^{\mu}\gamma^{\nu}$ might also be an intuitive guess, but it can easily be verified that it is not correct. For example, taking $\mu = \nu = 0$, we know ($3^{rd}$ problem) that $\{\gamma^{\mu}, \gamma^{\nu}\} = 2g^{\mu\nu}$, so:
```python
pprint(mgamma(0) * mgamma(0))
pprint(contractor(0, 0))
```
⎡1 0 0 0⎤
⎢ ⎥
⎢0 1 0 0⎥
⎢ ⎥
⎢0 0 1 0⎥
⎢ ⎥
⎣0 0 0 1⎦
⎡0 0 0 0⎤
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎢0 0 0 0⎥
⎢ ⎥
⎣0 0 0 0⎦
To conclude, this guessing method might not be the best choice all the time, but having a way to evaluate all components of a given tensor is definitely useful. :)
| a7ce399a997f38832ec107fde513189e78644a40 | 22,990 | ipynb | Jupyter Notebook | Relativistic Quantum Mechanics/HomeworkProblem1.2.ipynb | PhyProg/Numerical-simulations-in-Physics | ab335117d993be4129654fbfc4455176410fabe2 | [
"MIT"
] | null | null | null | Relativistic Quantum Mechanics/HomeworkProblem1.2.ipynb | PhyProg/Numerical-simulations-in-Physics | ab335117d993be4129654fbfc4455176410fabe2 | [
"MIT"
] | null | null | null | Relativistic Quantum Mechanics/HomeworkProblem1.2.ipynb | PhyProg/Numerical-simulations-in-Physics | ab335117d993be4129654fbfc4455176410fabe2 | [
"MIT"
] | null | null | null | 26.394948 | 357 | 0.277294 | true | 9,246 | Qwen/Qwen-72B | 1. YES
2. YES | 0.831143 | 0.672332 | 0.558804 | __label__eng_Latn | 0.120778 | 0.136618 |
#### Jupyter notebooks
This is a [Jupyter](http://jupyter.org/) notebook using Python. You can install Jupyter locally to edit and interact with this notebook.
# Finite Difference methods in 2 dimensions
Let's start by generalizing the 1D Laplacian,
\begin{align} - u''(x) &= f(x) \text{ on } \Omega = (a,b) & u(a) &= g_0(a) & u'(b) &= g_1(b) \end{align}
to two dimensions
\begin{align} -\nabla\cdot \big( \nabla u(x,y) \big) &= f(x,y) \text{ on } \Omega \subset \mathbb R^2
& u|_{\Gamma_D} &= g_0(x,y) & \nabla u \cdot \hat n|_{\Gamma_N} &= g_1(x,y)
\end{align}
where $\Omega$ is some well-connected open set (we will assume simply connected) and the Dirichlet boundary $\Gamma_D \subset \partial \Omega$ is nonempty.
We need to choose a system for specifying the domain $\Omega$ and ordering degrees of freedom. Perhaps the most significant limitation of finite difference methods is that this specification is messy for complicated domains. We will choose
$$ \Omega = (0, 1) \times (0, 1) $$
and
\begin{align} (x, y)_{im+j} &= (i h, j h) & h &= 1/(m-1) & i,j \in \{0, 1, \dotsc, m-1 \} .
\end{align}
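As a small illustration, this row-major numbering on a $3 \times 3$ grid assigns point $(i, j)$ the global index $i m + j$:
```python
# Illustrative only: global indices i*m + j for a tiny 3x3 grid.
m = 3
for i in range(m):
    print([i*m + j for j in range(m)])
```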
```python
%matplotlib inline
import numpy
from matplotlib import pyplot
pyplot.style.use('ggplot')
def laplacian2d_dense(h, f, g0):
m = int(1/h + 1)
c = numpy.linspace(0, 1, m)
y, x = numpy.meshgrid(c, c)
u0 = g0(x, y).flatten()
rhs = f(x, y).flatten()
A = numpy.zeros((m*m, m*m))
def idx(i, j):
return i*m + j
for i in range(m):
for j in range(m):
row = idx(i, j)
if i in (0, m-1) or j in (0, m-1):
A[row, row] = 1
rhs[row] = u0[row]
else:
cols = [idx(*pair) for pair in
[(i-1, j), (i, j-1), (i, j), (i, j+1), (i+1, j)]]
stencil = 1/h**2 * numpy.array([-1, -1, 4, -1, -1])
A[row, cols] = stencil
return x, y, A, rhs
x, y, A, rhs = laplacian2d_dense(.1, lambda x,y: 0*x+1, lambda x,y: 0*x)
pyplot.spy(A);
```
```python
u = numpy.linalg.solve(A, rhs).reshape(x.shape)
pyplot.contourf(x, y, u)
pyplot.colorbar();
```
```python
import cProfile
prof = cProfile.Profile()
prof.enable()
x, y, A, rhs = laplacian2d_dense(.0125, lambda x,y: 0*x+1, lambda x,y: 0*x)
u = numpy.linalg.solve(A, rhs).reshape(x.shape)
prof.disable()
prof.print_stats(sort='tottime')
```
50365 function calls in 2.706 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
1 2.623 2.623 2.623 2.623 linalg.py:300(solve)
1 0.063 0.063 0.083 0.083 <ipython-input-1-bc43fafc383c>:6(laplacian2d_dense)
6251 0.009 0.000 0.009 0.000 {built-in method numpy.core.multiarray.array}
6241 0.007 0.000 0.010 0.000 <ipython-input-1-bc43fafc383c>:22(<listcomp>)
37766 0.005 0.000 0.005 0.000 <ipython-input-1-bc43fafc383c>:13(idx)
3 0.000 0.000 0.000 0.000 {built-in method builtins.compile}
2 0.000 0.000 0.000 0.000 {method 'copy' of 'numpy.ndarray' objects}
1 0.000 0.000 0.000 0.000 function_base.py:25(linspace)
3 0.000 0.000 2.706 0.902 interactiveshell.py:2880(run_code)
3 0.000 0.000 0.000 0.000 codeop.py:132(__call__)
2 0.000 0.000 0.000 0.000 stride_tricks.py:115(_broadcast_to)
2 0.000 0.000 0.000 0.000 {method 'astype' of 'numpy.ndarray' objects}
2 0.000 0.000 0.000 0.000 linalg.py:106(_makearray)
2 0.000 0.000 0.000 0.000 {method 'flatten' of 'numpy.ndarray' objects}
1 0.000 0.000 0.000 0.000 function_base.py:4554(meshgrid)
1 0.000 0.000 0.000 0.000 {built-in method numpy.core.multiarray.zeros}
1 0.000 0.000 0.000 0.000 <ipython-input-3-e477be7765b2>:4(<lambda>)
1 0.000 0.000 0.083 0.083 <ipython-input-3-e477be7765b2>:4(<module>)
3 0.000 0.000 0.000 0.000 hooks.py:142(__call__)
1 0.000 0.000 2.623 2.623 <ipython-input-3-e477be7765b2>:5(<module>)
1 0.000 0.000 0.000 0.000 linalg.py:139(_commonType)
1 0.000 0.000 0.000 0.000 linalg.py:209(_assertNdSquareness)
3 0.000 0.000 0.000 0.000 ipstruct.py:125(__getattr__)
3 0.000 0.000 0.000 0.000 {method 'reshape' of 'numpy.ndarray' objects}
3 0.000 0.000 2.706 0.902 {built-in method builtins.exec}
1 0.000 0.000 0.000 0.000 stride_tricks.py:195(broadcast_arrays)
1 0.000 0.000 0.000 0.000 {built-in method numpy.core.multiarray.arange}
1 0.000 0.000 0.000 0.000 linalg.py:198(_assertRankAtLeast2)
1 0.000 0.000 0.000 0.000 function_base.py:4671(<listcomp>)
2 0.000 0.000 0.000 0.000 linalg.py:124(_realType)
1 0.000 0.000 0.000 0.000 stride_tricks.py:176(_broadcast_shape)
1 0.000 0.000 0.000 0.000 linalg.py:101(get_linalg_error_extobj)
4 0.000 0.000 0.000 0.000 numeric.py:534(asanyarray)
1 0.000 0.000 0.000 0.000 <ipython-input-3-e477be7765b2>:6(<module>)
2 0.000 0.000 0.000 0.000 function_base.py:213(iterable)
1 0.000 0.000 0.000 0.000 function_base.py:13(_index_deprecate)
3 0.000 0.000 0.000 0.000 interactiveshell.py:1069(user_global_ns)
1 0.000 0.000 0.000 0.000 function_base.py:4684(<listcomp>)
2 0.000 0.000 0.000 0.000 stride_tricks.py:251(<genexpr>)
3 0.000 0.000 0.000 0.000 linalg.py:111(isComplexType)
2 0.000 0.000 0.000 0.000 {built-in method builtins.any}
2 0.000 0.000 0.000 0.000 {built-in method builtins.getattr}
1 0.000 0.000 0.000 0.000 {built-in method builtins.max}
6 0.000 0.000 0.000 0.000 stride_tricks.py:120(<genexpr>)
1 0.000 0.000 0.000 0.000 stride_tricks.py:257(<listcomp>)
2 0.000 0.000 0.000 0.000 numeric.py:463(asarray)
1 0.000 0.000 0.000 0.000 {built-in method numpy.core.multiarray.result_type}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
3 0.000 0.000 0.000 0.000 hooks.py:207(pre_run_code_hook)
1 0.000 0.000 0.000 0.000 {built-in method builtins.all}
5 0.000 0.000 0.000 0.000 {built-in method builtins.issubclass}
2 0.000 0.000 0.000 0.000 {built-in method builtins.len}
2 0.000 0.000 0.000 0.000 {method 'get' of 'dict' objects}
4 0.000 0.000 0.000 0.000 {method 'pop' of 'dict' objects}
2 0.000 0.000 0.000 0.000 stride_tricks.py:25(_maybe_view_as_subclass)
1 0.000 0.000 0.000 0.000 stride_tricks.py:247(<listcomp>)
1 0.000 0.000 0.000 0.000 {method '__array_prepare__' of 'numpy.ndarray' objects}
1 0.000 0.000 0.000 0.000 {built-in method _operator.index}
2 0.000 0.000 0.000 0.000 {built-in method builtins.iter}
1 0.000 0.000 0.000 0.000 {built-in method builtins.min}
```python
import scipy.sparse as sp
import scipy.sparse.linalg
def laplacian2d(h, f, g0):
m = int(1/h + 1) # Number of elements in terms of nominal grid spacing h
h = 1 / (m-1) # Actual grid spacing
c = numpy.linspace(0, 1, m)
y, x = numpy.meshgrid(c, c)
u0 = g0(x, y).flatten()
rhs = f(x, y).flatten()
A = sp.lil_matrix((m*m, m*m))
def idx(i, j):
return i*m + j
mask = numpy.zeros_like(x, dtype=int)
mask[1:-1,1:-1] = 1
mask = mask.flatten()
for i in range(m):
for j in range(m):
row = idx(i, j)
stencili = numpy.array([idx(*pair) for pair in
[(i-1, j), (i, j-1),
(i, j),
(i, j+1), (i+1, j)]])
stencilw = 1/h**2 * numpy.array([-1, -1, 4, -1, -1])
if mask[row] == 0: # Dirichlet boundary
A[row, row] = 1
rhs[row] = u0[row]
else:
smask = mask[stencili]
cols = stencili[smask == 1]
A[row, cols] = stencilw[smask == 1]
bdycols = stencili[smask == 0]
rhs[row] -= stencilw[smask == 0] @ u0[bdycols]
return x, y, A.tocsr(), rhs
x, y, A, rhs = laplacian2d(.15, lambda x,y: 0*x+1, lambda x,y: 0*x)
pyplot.spy(A);
sp.linalg.norm(A - A.T)
```
```python
prof = cProfile.Profile()
prof.enable()
x, y, A, rhs = laplacian2d(.005, lambda x,y: 0*x+1, lambda x,y: 0*x)
u = sp.linalg.spsolve(A, rhs).reshape(x.shape)
prof.disable()
prof.print_stats(sort='tottime')
```
5362376 function calls in 4.868 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
158406 0.753 0.000 1.025 0.000 stride_tricks.py:115(_broadcast_to)
1 0.752 0.752 4.644 4.644 <ipython-input-4-66fa325aa3de>:4(laplacian2d)
516438 0.321 0.000 0.321 0.000 {built-in method numpy.core.multiarray.array}
39601 0.283 0.000 0.283 0.000 {scipy.sparse._csparsetools.lil_fancy_set}
40401 0.252 0.000 3.562 0.000 lil.py:333(__setitem__)
79203 0.220 0.000 1.645 0.000 stride_tricks.py:195(broadcast_arrays)
1 0.218 0.218 0.218 0.218 {built-in method scipy.sparse.linalg.dsolve._superlu.gssv}
79202 0.167 0.000 0.349 0.000 sputils.py:331(_check_boolean)
39601 0.160 0.000 1.592 0.000 sputils.py:351(_index_to_arrays)
79203 0.158 0.000 0.167 0.000 stride_tricks.py:176(_broadcast_shape)
39601 0.137 0.000 0.486 0.000 sputils.py:265(_unpack_index)
834034 0.117 0.000 0.117 0.000 {built-in method builtins.isinstance}
79202 0.101 0.000 0.273 0.000 shape_base.py:11(atleast_1d)
118805 0.076 0.000 0.076 0.000 {built-in method builtins.hasattr}
158406 0.070 0.000 0.121 0.000 {built-in method builtins.any}
79203 0.069 0.000 1.094 0.000 stride_tricks.py:257(<listcomp>)
1 0.067 0.067 0.067 0.067 lil.py:84(__init__)
118804 0.063 0.000 0.089 0.000 <frozen importlib._bootstrap>:402(parent)
39601 0.062 0.000 0.062 0.000 lil.py:486(_prepare_index_for_memoryview)
39604 0.057 0.000 0.057 0.000 {method 'reshape' of 'numpy.ndarray' objects}
40401 0.054 0.000 0.080 0.000 <ipython-input-4-66fa325aa3de>:20(<listcomp>)
237609 0.052 0.000 0.085 0.000 base.py:1111(isspmatrix)
39601 0.052 0.000 0.056 0.000 sputils.py:293(_check_ellipsis)
396016 0.051 0.000 0.051 0.000 stride_tricks.py:120(<genexpr>)
158406 0.051 0.000 0.051 0.000 stride_tricks.py:251(<genexpr>)
158406 0.048 0.000 0.048 0.000 stride_tricks.py:25(_maybe_view_as_subclass)
39603 0.047 0.000 0.052 0.000 numeric.py:2135(isscalar)
158406 0.046 0.000 0.069 0.000 function_base.py:213(iterable)
79203 0.045 0.000 0.079 0.000 stride_tricks.py:247(<listcomp>)
242406 0.039 0.000 0.039 0.000 <ipython-input-4-66fa325aa3de>:12(idx)
39603 0.036 0.000 0.102 0.000 sputils.py:183(isscalarlike)
118804 0.033 0.000 0.110 0.000 <frozen importlib._bootstrap>:989(_handle_fromlist)
278821 0.031 0.000 0.031 0.000 {built-in method builtins.len}
118804 0.026 0.000 0.026 0.000 {method 'rpartition' of 'str' objects}
158406 0.024 0.000 0.024 0.000 {built-in method builtins.iter}
79206 0.024 0.000 0.096 0.000 numeric.py:534(asanyarray)
79203 0.021 0.000 0.049 0.000 {built-in method builtins.all}
80807 0.013 0.000 0.013 0.000 base.py:100(get_shape)
79206 0.012 0.000 0.012 0.000 {method 'pop' of 'dict' objects}
39614 0.011 0.000 0.067 0.000 numeric.py:463(asarray)
79202 0.011 0.000 0.011 0.000 {method 'append' of 'list' objects}
39601 0.011 0.000 0.015 0.000 sputils.py:227(isdense)
1 0.010 0.010 0.043 0.043 lil.py:463(tocsr)
1 0.006 0.006 4.650 4.650 <ipython-input-5-8cdfec67b31c>:3(<module>)
80802 0.006 0.000 0.006 0.000 {method 'extend' of 'list' objects}
1 0.004 0.004 0.006 0.006 lil.py:464(<listcomp>)
800 0.002 0.000 0.002 0.000 {built-in method scipy.sparse._csparsetools.lil_insert}
2 0.002 0.001 0.002 0.001 {method 'copy' of 'numpy.ndarray' objects}
3 0.000 0.000 0.000 0.000 {method 'flatten' of 'numpy.ndarray' objects}
1 0.000 0.000 0.000 0.000 <ipython-input-5-8cdfec67b31c>:3(<lambda>)
1 0.000 0.000 0.000 0.000 {built-in method builtins.sum}
2 0.000 0.000 0.000 0.000 {built-in method numpy.core.multiarray.empty}
3 0.000 0.000 0.000 0.000 {built-in method builtins.compile}
1 0.000 0.000 0.000 0.000 {built-in method scipy.sparse._sparsetools.csr_has_sorted_indices}
1 0.000 0.000 0.000 0.000 {built-in method numpy.core.multiarray.empty_like}
1 0.000 0.000 0.000 0.000 {method 'cumsum' of 'numpy.ndarray' objects}
1 0.000 0.000 0.000 0.000 function_base.py:25(linspace)
1 0.000 0.000 0.218 0.218 linsolve.py:62(spsolve)
3 0.000 0.000 4.868 1.623 interactiveshell.py:2880(run_code)
1 0.000 0.000 0.000 0.000 {built-in method numpy.core.multiarray.copyto}
1 0.000 0.000 0.000 0.000 compressed.py:128(check_format)
3 0.000 0.000 0.000 0.000 getlimits.py:507(__init__)
3 0.000 0.000 0.000 0.000 sputils.py:119(get_index_dtype)
1 0.000 0.000 0.000 0.000 {built-in method numpy.core.multiarray.concatenate}
1 0.000 0.000 0.000 0.000 compressed.py:25(__init__)
1 0.000 0.000 0.002 0.002 function_base.py:4554(meshgrid)
3 0.000 0.000 0.000 0.000 codeop.py:132(__call__)
2 0.000 0.000 0.000 0.000 sputils.py:188(isintlike)
1 0.000 0.000 0.000 0.000 base.py:142(asfptype)
1 0.000 0.000 0.000 0.000 numeric.py:87(zeros_like)
2 0.000 0.000 0.000 0.000 base.py:70(__init__)
2 0.000 0.000 0.000 0.000 sputils.py:200(isshape)
1 0.000 0.000 0.218 0.218 <ipython-input-5-8cdfec67b31c>:4(<module>)
3 0.000 0.000 0.000 0.000 ipstruct.py:125(__getattr__)
1 0.000 0.000 0.000 0.000 sputils.py:95(getdtype)
3 0.000 0.000 0.000 0.000 hooks.py:142(__call__)
5 0.000 0.000 0.000 0.000 compressed.py:100(getnnz)
1 0.000 0.000 0.000 0.000 function_base.py:4671(<listcomp>)
1 0.000 0.000 0.000 0.000 fromnumeric.py:55(_wrapfunc)
1 0.000 0.000 0.000 0.000 {built-in method numpy.core.multiarray.arange}
1 0.000 0.000 0.000 0.000 compressed.py:1065(prune)
8 0.000 0.000 0.000 0.000 cycler.py:227(<genexpr>)
1 0.000 0.000 0.002 0.002 function_base.py:4684(<listcomp>)
3 0.000 0.000 4.868 1.623 {built-in method builtins.exec}
1 0.000 0.000 0.000 0.000 compressed.py:1025(__get_sorted)
2 0.000 0.000 0.000 0.000 fromnumeric.py:2584(ndim)
1 0.000 0.000 0.000 0.000 sputils.py:91(to_native)
1 0.000 0.000 0.000 0.000 data.py:22(__init__)
1 0.000 0.000 0.000 0.000 lil.py:130(set_shape)
1 0.000 0.000 0.000 0.000 {built-in method numpy.core.multiarray.zeros}
2 0.000 0.000 0.000 0.000 _util.py:128(_prune_array)
1 0.000 0.000 0.000 0.000 base.py:77(set_shape)
3 0.000 0.000 0.000 0.000 getlimits.py:532(max)
1 0.000 0.000 0.000 0.000 fromnumeric.py:2053(cumsum)
1 0.000 0.000 0.000 0.000 {method 'ravel' of 'numpy.ndarray' objects}
5 0.000 0.000 0.000 0.000 base.py:193(nnz)
1 0.000 0.000 0.000 0.000 <ipython-input-5-8cdfec67b31c>:5(<module>)
1 0.000 0.000 0.000 0.000 {method 'newbyteorder' of 'numpy.dtype' objects}
1 0.000 0.000 0.000 0.000 {built-in method numpy.core.multiarray.result_type}
3 0.000 0.000 0.000 0.000 interactiveshell.py:1069(user_global_ns)
1 0.000 0.000 0.000 0.000 {built-in method builtins.getattr}
2 0.000 0.000 0.000 0.000 csc.py:220(isspmatrix_csc)
1 0.000 0.000 0.000 0.000 base.py:562(__getattr__)
1 0.000 0.000 0.000 0.000 function_base.py:13(_index_deprecate)
1 0.000 0.000 0.000 0.000 {method 'astype' of 'numpy.ndarray' objects}
1 0.000 0.000 0.000 0.000 {built-in method numpy.core.multiarray.promote_types}
2 0.000 0.000 0.000 0.000 {built-in method builtins.max}
3 0.000 0.000 0.000 0.000 csr.py:231(_swap)
1 0.000 0.000 0.000 0.000 csr.py:458(isspmatrix_csr)
1 0.000 0.000 0.000 0.000 compressed.py:1056(sort_indices)
3 0.000 0.000 0.000 0.000 data.py:25(_get_dtype)
3 0.000 0.000 0.000 0.000 hooks.py:207(pre_run_code_hook)
1 0.000 0.000 0.000 0.000 {built-in method _operator.index}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
## A manufactured solution
```python
class mms0:
def u(x, y):
return x*numpy.exp(-x)*numpy.tanh(y)
def grad_u(x, y):
return numpy.array([(1 - x)*numpy.exp(-x)*numpy.tanh(y),
x*numpy.exp(-x)*(1 - numpy.tanh(y)**2)])
def laplacian_u(x, y):
return ((2 - x)*numpy.exp(-x)*numpy.tanh(y)
- 2*x*numpy.exp(-x)*(numpy.tanh(y)**2 - 1)*numpy.tanh(y))
def grad_u_dot_normal(x, y, n):
return grad_u(x, y) @ n
x, y, A, rhs = laplacian2d(.02, mms0.laplacian_u, mms0.u)
u = sp.linalg.spsolve(A, rhs).reshape(x.shape)
print(u.shape, numpy.linalg.norm((u - mms0.u(x,y)).flatten(), numpy.inf))
pyplot.contourf(x, y, u)
pyplot.colorbar()
pyplot.title('Numeric solution')
pyplot.figure()
pyplot.contourf(x, y, u - mms0.u(x, y))
pyplot.colorbar()
pyplot.title('Error');
```
```python
hs = numpy.logspace(-2, -.5, 12)
def mms_error(h):
x, y, A, rhs = laplacian2d(h, mms0.laplacian_u, mms0.u)
u = sp.linalg.spsolve(A, rhs).reshape(x.shape)
return numpy.linalg.norm((u - mms0.u(x, y)).flatten(), numpy.inf)
pyplot.loglog(hs, [mms_error(h) for h in hs], 'o', label='numeric error')
pyplot.loglog(hs, hs**1/100, label='$h^1/100$')
pyplot.loglog(hs, hs**2/100, label='$h^2/100$')
pyplot.legend();
```
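As a quick follow-up sketch, we can estimate the observed order of accuracy by fitting a line to $\log(\text{error})$ versus $\log(h)$:
```python
# Least-squares fit of log(error) vs log(h); the slope estimates the
# observed order of accuracy of the discretization.
errors = numpy.array([mms_error(h) for h in hs])
slope, intercept = numpy.polyfit(numpy.log(hs), numpy.log(errors), 1)
print('observed order of accuracy:', slope)
```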
# Neumann boundary conditions
Recall that in 1D, we would reflect the solution into ghost points according to
$$ u_{-i} = u_i - (x_i - x_{-i}) g_1(x_0, y) $$
and similarly for the right boundary and in the $y$ direction. After this, we (optionally) scale the row in the matrix for symmetry and shift the known parts to the right hand side. Below, we implement the reflected symmetry, but not the inhomogeneous contribution or rescaling of the matrix row.
```python
def laplacian2d_bc(h, f, g0, dirichlet=((),())):
m = int(1/h + 1) # Number of elements in terms of nominal grid spacing h
h = 1 / (m-1) # Actual grid spacing
c = numpy.linspace(0, 1, m)
y, x = numpy.meshgrid(c, c)
u0 = g0(x, y).flatten()
rhs = f(x, y).flatten()
ai = []
aj = []
av = []
def idx(i, j):
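        # Reflect out-of-range indices back into [0, m-1] (e.g. i=-1 -> 1, i=m -> m-2),
        # implementing the mirror/ghost-point symmetry described above.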
i = (m-1) - abs(m-1 - abs(i))
j = (m-1) - abs(m-1 - abs(j))
return i*m + j
mask = numpy.ones_like(x, dtype=int)
mask[dirichlet[0],:] = 0
mask[:,dirichlet[1]] = 0
mask = mask.flatten()
for i in range(m):
for j in range(m):
row = idx(i, j)
stencili = numpy.array([idx(*pair) for pair in [(i-1, j), (i, j-1), (i, j), (i, j+1), (i+1, j)]])
stencilw = 1/h**2 * numpy.array([-1, -1, 4, -1, -1])
if mask[row] == 0: # Dirichlet boundary
ai.append(row)
aj.append(row)
av.append(1)
rhs[row] = u0[row]
else:
smask = mask[stencili]
ai += [row]*sum(smask)
aj += stencili[smask == 1].tolist()
av += stencilw[smask == 1].tolist()
bdycols = stencili[smask == 0]
rhs[row] -= stencilw[smask == 0] @ u0[bdycols]
A = sp.csr_matrix((av, (ai, aj)), shape=(m*m,m*m))
return x, y, A, rhs
x, y, A, rhs = laplacian2d_bc(.05, lambda x,y: 0*x+1,
lambda x,y: 0*x, dirichlet=((0,),()))
u = sp.linalg.spsolve(A, rhs).reshape(x.shape)
print(sp.linalg.eigs(A, which='SM')[0])
pyplot.contourf(x, y, u)
pyplot.colorbar();
```
```python
# We used a different technique for assembling the sparse matrix.
# This is faster with scipy.sparse, but may be worse for other sparse matrix packages, such as PETSc.
prof = cProfile.Profile()
prof.enable()
x, y, A, rhs = laplacian2d_bc(.005, lambda x,y: 0*x+1, lambda x,y: 0*x)
u = sp.linalg.spsolve(A, rhs).reshape(x.shape)
prof.disable()
prof.print_stats(sort='tottime')
```
1454850 function calls (1454848 primitive calls) in 1.504 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.660 0.660 1.233 1.233 <ipython-input-8-7866e0ac5ad1>:1(laplacian2d_bc)
1 0.264 0.264 0.264 0.264 {built-in method scipy.sparse.linalg.dsolve._superlu.gssv}
242406 0.168 0.000 0.231 0.000 <ipython-input-8-7866e0ac5ad1>:11(idx)
80841 0.152 0.000 0.152 0.000 {built-in method numpy.core.multiarray.array}
40401 0.097 0.000 0.097 0.000 {built-in method builtins.sum}
969624 0.063 0.000 0.063 0.000 {built-in method builtins.abs}
40401 0.056 0.000 0.236 0.000 <ipython-input-8-7866e0ac5ad1>:22(<listcomp>)
80802 0.024 0.000 0.024 0.000 {method 'tolist' of 'numpy.ndarray' objects}
1 0.009 0.009 0.009 0.009 {built-in method numpy.core.multiarray.lexsort}
1 0.006 0.006 1.239 1.239 <ipython-input-9-b5abe0e23855>:6(<module>)
1 0.002 0.002 0.012 0.012 coo.py:460(_sum_duplicates)
1 0.001 0.001 0.001 0.001 {method 'reduceat' of 'numpy.ufunc' objects}
1 0.001 0.001 0.001 0.001 {built-in method scipy.sparse._sparsetools.coo_tocsr}
2 0.001 0.000 0.001 0.000 {method 'copy' of 'numpy.ndarray' objects}
4 0.001 0.000 0.001 0.000 {method 'reduce' of 'numpy.ufunc' objects}
3 0.000 0.000 0.000 0.000 {method 'flatten' of 'numpy.ndarray' objects}
3 0.000 0.000 0.000 0.000 {built-in method builtins.compile}
1 0.000 0.000 0.000 0.000 {method 'nonzero' of 'numpy.ndarray' objects}
1 0.000 0.000 0.000 0.000 {built-in method scipy.sparse._sparsetools.csr_has_sorted_indices}
1 0.000 0.000 0.000 0.000 <ipython-input-9-b5abe0e23855>:6(<lambda>)
1 0.000 0.000 0.001 0.001 coo.py:212(_check)
1 0.000 0.000 0.000 0.000 function_base.py:25(linspace)
3 0.000 0.000 0.000 0.000 compressed.py:128(check_format)
1 0.000 0.000 0.264 0.264 linsolve.py:62(spsolve)
7 0.000 0.000 0.000 0.000 getlimits.py:507(__init__)
3/1 0.000 0.000 0.040 0.040 compressed.py:25(__init__)
7 0.000 0.000 0.000 0.000 sputils.py:119(get_index_dtype)
1 0.000 0.000 0.027 0.027 coo.py:118(__init__)
1 0.000 0.000 0.013 0.013 coo.py:301(tocsr)
3 0.000 0.000 1.504 0.501 interactiveshell.py:2880(run_code)
1 0.000 0.000 0.001 0.001 function_base.py:4554(meshgrid)
2 0.000 0.000 0.000 0.000 stride_tricks.py:115(_broadcast_to)
1 0.000 0.000 0.000 0.000 {built-in method numpy.core.multiarray.copyto}
3 0.000 0.000 0.000 0.000 codeop.py:132(__call__)
3 0.000 0.000 0.000 0.000 compressed.py:1065(prune)
5 0.000 0.000 0.000 0.000 base.py:77(set_shape)
1 0.000 0.000 0.000 0.000 {built-in method numpy.core.multiarray.concatenate}
4 0.000 0.000 0.000 0.000 coo.py:186(getnnz)
1 0.000 0.000 0.000 0.000 function_base.py:5100(append)
3 0.000 0.000 0.000 0.000 sputils.py:200(isshape)
17 0.000 0.000 0.000 0.000 base.py:193(nnz)
23 0.000 0.000 0.000 0.000 numeric.py:463(asarray)
1 0.000 0.000 0.000 0.000 stride_tricks.py:195(broadcast_arrays)
45 0.000 0.000 0.000 0.000 {built-in method builtins.len}
4 0.000 0.000 0.000 0.000 sputils.py:91(to_native)
1 0.000 0.000 0.264 0.264 <ipython-input-9-b5abe0e23855>:7(<module>)
4 0.000 0.000 0.000 0.000 base.py:70(__init__)
3 0.000 0.000 0.000 0.000 {method 'reshape' of 'numpy.ndarray' objects}
3 0.000 0.000 1.503 0.501 {built-in method builtins.exec}
22 0.000 0.000 0.000 0.000 {built-in method builtins.isinstance}
1 0.000 0.000 0.012 0.012 coo.py:449(sum_duplicates)
13 0.000 0.000 0.000 0.000 compressed.py:100(getnnz)
7 0.000 0.000 0.000 0.000 getlimits.py:532(max)
3 0.000 0.000 0.000 0.000 {built-in method numpy.core.multiarray.empty_like}
3 0.000 0.000 0.000 0.000 hooks.py:142(__call__)
4 0.000 0.000 0.000 0.000 {built-in method builtins.max}
1 0.000 0.000 0.000 0.000 numeric.py:197(ones_like)
1 0.000 0.000 0.000 0.000 {built-in method numpy.core.multiarray.arange}
3 0.000 0.000 0.000 0.000 ipstruct.py:125(__getattr__)
3 0.000 0.000 0.000 0.000 {built-in method builtins.hasattr}
4 0.000 0.000 0.000 0.000 base.py:1111(isspmatrix)
4 0.000 0.000 0.000 0.000 data.py:22(__init__)
1 0.000 0.000 0.000 0.000 stride_tricks.py:176(_broadcast_shape)
3 0.000 0.000 0.000 0.000 {method 'ravel' of 'numpy.ndarray' objects}
2 0.000 0.000 0.000 0.000 <frozen importlib._bootstrap>:402(parent)
16 0.000 0.000 0.000 0.000 base.py:100(get_shape)
1 0.000 0.000 0.000 0.000 function_base.py:4671(<listcomp>)
1 0.000 0.000 0.000 0.000 fromnumeric.py:55(_wrapfunc)
4 0.000 0.000 0.000 0.000 {method 'newbyteorder' of 'numpy.dtype' objects}
1 0.000 0.000 0.000 0.000 compressed.py:1025(__get_sorted)
1 0.000 0.000 0.000 0.000 base.py:142(asfptype)
1 0.000 0.000 0.013 0.013 base.py:249(asformat)
1 0.000 0.000 0.000 0.000 <ipython-input-9-b5abe0e23855>:8(<module>)
2 0.000 0.000 0.000 0.000 numeric.py:2135(isscalar)
1 0.000 0.000 0.000 0.000 sputils.py:20(upcast)
6 0.000 0.000 0.000 0.000 _util.py:128(_prune_array)
6 0.000 0.000 0.000 0.000 numeric.py:534(asanyarray)
3 0.000 0.000 0.000 0.000 {method 'astype' of 'numpy.ndarray' objects}
2 0.000 0.000 0.000 0.000 {method 'min' of 'numpy.ndarray' objects}
1 0.000 0.000 0.000 0.000 {built-in method numpy.core.multiarray.empty}
2 0.000 0.000 0.000 0.000 compressed.py:117(_set_self)
2 0.000 0.000 0.000 0.000 sputils.py:183(isscalarlike)
5 0.000 0.000 0.000 0.000 data.py:25(_get_dtype)
1 0.000 0.000 0.000 0.000 function_base.py:13(_index_deprecate)
2 0.000 0.000 0.000 0.000 _methods.py:28(_amin)
2 0.000 0.000 0.000 0.000 {method 'max' of 'numpy.ndarray' objects}
1 0.000 0.000 0.000 0.000 compressed.py:1056(sort_indices)
1 0.000 0.000 0.000 0.000 base.py:562(__getattr__)
1 0.000 0.000 0.001 0.001 function_base.py:4684(<listcomp>)
6 0.000 0.000 0.000 0.000 stride_tricks.py:120(<genexpr>)
1 0.000 0.000 0.000 0.000 stride_tricks.py:247(<listcomp>)
1 0.000 0.000 0.000 0.000 stride_tricks.py:257(<listcomp>)
1 0.000 0.000 0.000 0.000 fromnumeric.py:1487(nonzero)
2 0.000 0.000 0.000 0.000 _methods.py:25(_amax)
3 0.000 0.000 0.000 0.000 interactiveshell.py:1069(user_global_ns)
2 0.000 0.000 0.000 0.000 <frozen importlib._bootstrap>:989(_handle_fromlist)
2 0.000 0.000 0.000 0.000 sputils.py:188(isintlike)
2 0.000 0.000 0.000 0.000 sputils.py:227(isdense)
2 0.000 0.000 0.000 0.000 csc.py:220(isspmatrix_csc)
2 0.000 0.000 0.000 0.000 stride_tricks.py:251(<genexpr>)
2 0.000 0.000 0.000 0.000 function_base.py:213(iterable)
1 0.000 0.000 0.000 0.000 fromnumeric.py:1380(ravel)
1 0.000 0.000 0.000 0.000 {built-in method numpy.core.multiarray.result_type}
2 0.000 0.000 0.000 0.000 {built-in method builtins.getattr}
2 0.000 0.000 0.000 0.000 {method 'rpartition' of 'str' objects}
1 0.000 0.000 0.000 0.000 csr.py:458(isspmatrix_csr)
2 0.000 0.000 0.000 0.000 stride_tricks.py:25(_maybe_view_as_subclass)
1 0.000 0.000 0.000 0.000 {built-in method numpy.core.multiarray.promote_types}
1 0.000 0.000 0.000 0.000 {built-in method builtins.all}
2 0.000 0.000 0.000 0.000 {built-in method builtins.any}
2 0.000 0.000 0.000 0.000 {built-in method builtins.iter}
1 0.000 0.000 0.000 0.000 {method 'get' of 'dict' objects}
4 0.000 0.000 0.000 0.000 {method 'pop' of 'dict' objects}
9 0.000 0.000 0.000 0.000 csr.py:231(_swap)
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
3 0.000 0.000 0.000 0.000 hooks.py:207(pre_run_code_hook)
1 0.000 0.000 0.000 0.000 {built-in method _operator.index}
1 0.000 0.000 0.000 0.000 {built-in method builtins.hash}
# Variable coefficients
In physical systems, it is common for equations to be given in **divergence form** (sometimes called **conservative form**),
$$ -\nabla\cdot \Big( \kappa(x,y) \nabla u \Big) = f(x,y) . $$
This can be converted to **non-divergence form**,
$$ - \kappa(x,y) \nabla\cdot \nabla u - \nabla \kappa(x,y) \cdot \nabla u = f(x,y) . $$
* What assumptions did we just make on $\kappa(x,y)$?
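For reference, the conversion uses the product rule, which requires $\kappa$ to be differentiable:
$$ \nabla\cdot \big( \kappa(x,y) \nabla u \big) = \kappa(x,y) \nabla\cdot \nabla u + \nabla \kappa(x,y) \cdot \nabla u . $$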
```python
def laplacian2d_nondiv(h, f, kappa, grad_kappa, g0, dirichlet=((),())):
m = int(1/h + 1) # Number of elements in terms of nominal grid spacing h
h = 1 / (m-1) # Actual grid spacing
c = numpy.linspace(0, 1, m)
y, x = numpy.meshgrid(c, c)
u0 = g0(x, y).flatten()
rhs = f(x, y).flatten()
ai = []
aj = []
av = []
def idx(i, j):
i = (m-1) - abs(m-1 - abs(i))
j = (m-1) - abs(m-1 - abs(j))
return i*m + j
mask = numpy.ones_like(x, dtype=int)
mask[dirichlet[0],:] = 0
mask[:,dirichlet[1]] = 0
mask = mask.flatten()
for i in range(m):
for j in range(m):
row = idx(i, j)
stencili = numpy.array([idx(*pair) for pair in [(i-1, j), (i, j-1), (i, j), (i, j+1), (i+1, j)]])
stencilw = kappa(i*h, j*h)/h**2 * numpy.array([-1, -1, 4, -1, -1])
if grad_kappa is None:
gk = 1/h * numpy.array([kappa((i+.5)*h,j*h) - kappa((i-.5)*h,j*h),
kappa(i*h,(j+.5)*h) - kappa(i*h,(j-.5)*h)])
else:
gk = grad_kappa(i*h, j*h)
stencilw -= gk[0] / (2*h) * numpy.array([-1, 0, 0, 0, 1])
stencilw -= gk[1] / (2*h) * numpy.array([0, -1, 0, 1, 0])
if mask[row] == 0: # Dirichlet boundary
ai.append(row)
aj.append(row)
av.append(1)
rhs[row] = u0[row]
else:
smask = mask[stencili]
ai += [row]*sum(smask)
aj += stencili[smask == 1].tolist()
av += stencilw[smask == 1].tolist()
bdycols = stencili[smask == 0]
rhs[row] -= stencilw[smask == 0] @ u0[bdycols]
A = sp.csr_matrix((av, (ai, aj)), shape=(m*m,m*m))
return x, y, A, rhs
def kappa(x, y):
#return 1 - 2*(x-.5)**2 - 2*(y-.5)**2
return 1e-2 + 2*(x-.5)**2 + 2*(y-.5)**2
def grad_kappa(x, y):
#return -4*(x-.5), -4*(y-.5)
return 4*(x-.5), 4*(y-.5)
pyplot.contourf(x, y, kappa(x,y))
pyplot.colorbar();
```
```python
x, y, A, rhs = laplacian2d_nondiv(.05, lambda x,y: 0*x+1,
kappa, grad_kappa,
lambda x,y: 0*x, dirichlet=((0,-1),()))
u = sp.linalg.spsolve(A, rhs).reshape(x.shape)
pyplot.contourf(x, y, u)
pyplot.colorbar();
```
```python
x, y, A, rhs = laplacian2d_nondiv(.05, lambda x,y: 0*x,
kappa, grad_kappa,
lambda x,y: x, dirichlet=((0,-1),()))
u = sp.linalg.spsolve(A, rhs).reshape(x.shape)
pyplot.contourf(x, y, u)
pyplot.colorbar();
```
```python
def laplacian2d_div(h, f, kappa, g0, dirichlet=((),())):
m = int(1/h + 1) # Number of elements in terms of nominal grid spacing h
h = 1 / (m-1) # Actual grid spacing
c = numpy.linspace(0, 1, m)
y, x = numpy.meshgrid(c, c)
u0 = g0(x, y).flatten()
rhs = f(x, y).flatten()
ai = []
aj = []
av = []
def idx(i, j):
i = (m-1) - abs(m-1 - abs(i))
j = (m-1) - abs(m-1 - abs(j))
return i*m + j
mask = numpy.ones_like(x, dtype=int)
mask[dirichlet[0],:] = 0
mask[:,dirichlet[1]] = 0
mask = mask.flatten()
for i in range(m):
for j in range(m):
row = idx(i, j)
stencili = numpy.array([idx(*pair) for pair in [(i-1, j), (i, j-1), (i, j), (i, j+1), (i+1, j)]])
stencilw = 1/h**2 * ( kappa((i-.5)*h, j*h) * numpy.array([-1, 0, 1, 0, 0])
+ kappa(i*h, (j-.5)*h) * numpy.array([0, -1, 1, 0, 0])
+ kappa(i*h, (j+.5)*h) * numpy.array([0, 0, 1, -1, 0])
+ kappa((i+.5)*h, j*h) * numpy.array([0, 0, 1, 0, -1]))
if mask[row] == 0: # Dirichlet boundary
ai.append(row)
aj.append(row)
av.append(1)
rhs[row] = u0[row]
else:
smask = mask[stencili]
ai += [row]*sum(smask)
aj += stencili[smask == 1].tolist()
av += stencilw[smask == 1].tolist()
bdycols = stencili[smask == 0]
rhs[row] -= stencilw[smask == 0] @ u0[bdycols]
A = sp.csr_matrix((av, (ai, aj)), shape=(m*m,m*m))
return x, y, A, rhs
x, y, A, rhs = laplacian2d_div(.05, lambda x,y: 0*x+1,
kappa,
lambda x,y: 0*x, dirichlet=((0,-1),()))
u = sp.linalg.spsolve(A, rhs).reshape(x.shape)
pyplot.contourf(x, y, u)
pyplot.colorbar();
```
```python
x, y, A, rhs = laplacian2d_div(.05, lambda x,y: 0*x,
kappa,
lambda x,y: x, dirichlet=((0,-1),()))
u = sp.linalg.spsolve(A, rhs).reshape(x.shape)
pyplot.contourf(x, y, u)
pyplot.colorbar();
```
```python
x, y, A, rhs = laplacian2d_nondiv(.05, lambda x,y: 0*x+1,
kappa, grad_kappa,
lambda x,y: 0*x, dirichlet=((0,-1),()))
u_nondiv = sp.linalg.spsolve(A, rhs).reshape(x.shape)
x, y, A, rhs = laplacian2d_div(.05, lambda x,y: 0*x+1,
kappa,
lambda x,y: 0*x, dirichlet=((0,-1),()))
u_div = sp.linalg.spsolve(A, rhs).reshape(x.shape)
pyplot.contourf(x, y, u_nondiv - u_div)
pyplot.colorbar();
```
```python
class mms1:
def __init__(self):
import sympy
x, y = sympy.symbols('x y')
uexpr = x*sympy.exp(-2*x) * sympy.tanh(1.2*y+.1)
kexpr = 1e-2 + 2*(x-.42)**2 + 2*(y-.51)**2
self.u = sympy.lambdify((x,y), uexpr)
self.kappa = sympy.lambdify((x,y), kexpr)
def grad_kappa(xx, yy):
kx = sympy.lambdify((x,y), sympy.diff(kexpr, x))
ky = sympy.lambdify((x,y), sympy.diff(kexpr, y))
return kx(xx, yy), ky(xx, yy)
self.grad_kappa = grad_kappa
self.div_kappa_grad_u = sympy.lambdify((x,y),
-( sympy.diff(kexpr * sympy.diff(uexpr, x), x)
+ sympy.diff(kexpr * sympy.diff(uexpr, y), y)))
mms = mms1()
x, y, A, rhs = laplacian2d_nondiv(.05, mms.div_kappa_grad_u,
mms.kappa, mms.grad_kappa,
mms.u, dirichlet=((0,-1),(0,-1)))
u_nondiv = sp.linalg.spsolve(A, rhs).reshape(x.shape)
pyplot.contourf(x, y, u_nondiv)
pyplot.colorbar()
numpy.linalg.norm((u_nondiv - mms.u(x, y)).flatten(), numpy.inf)
```
```python
x, y, A, rhs = laplacian2d_div(.05, mms.div_kappa_grad_u,
mms.kappa,
mms.u, dirichlet=((0,-1),(0,-1)))
u_div = sp.linalg.spsolve(A, rhs).reshape(x.shape)
pyplot.contourf(x, y, u_div)
pyplot.colorbar()
numpy.linalg.norm((u_div - mms.u(x, y)).flatten(), numpy.inf)
```
```python
def mms_error(h):
x, y, A, rhs = laplacian2d_nondiv(h, mms.div_kappa_grad_u,
mms.kappa, mms.grad_kappa,
mms.u, dirichlet=((0,-1),(0,-1)))
u_nondiv = sp.linalg.spsolve(A, rhs).flatten()
x, y, A, rhs = laplacian2d_div(h, mms.div_kappa_grad_u,
mms.kappa, mms.u, dirichlet=((0,-1),(0,-1)))
u_div = sp.linalg.spsolve(A, rhs).flatten()
u_exact = mms.u(x, y).flatten()
return numpy.linalg.norm(u_nondiv - u_exact, numpy.inf), numpy.linalg.norm(u_div - u_exact, numpy.inf)
hs = numpy.logspace(-1.5, -.5, 10)
errors = numpy.array([mms_error(h) for h in hs])
pyplot.loglog(hs, errors[:,0], 'o', label='nondiv')
pyplot.loglog(hs, errors[:,1], 's', label='div')
pyplot.plot(hs, hs**2, label='$h^2$')
pyplot.legend();
```
```python
#kappablob = lambda x,y: .01 + ((x-.5)**2 + (y-.5)**2 < .125)
def kappablob(x, y):
#return .01 + ((x-.5)**2 + (y-.5)**2 < .125)
return .01 + (numpy.abs(x-.505) < .25) # + (numpy.abs(y-.5) < .25)
x, y, A, rhs = laplacian2d_div(.02, lambda x,y: 0*x, kappablob,
lambda x,y:x, dirichlet=((0,-1),()))
u_div = sp.linalg.spsolve(A, rhs).reshape(x.shape)
pyplot.contourf(x, y, kappablob(x, y))
pyplot.colorbar();
pyplot.figure()
pyplot.contourf(x, y, u_div, 10)
pyplot.colorbar();
```
```python
x, y, A, rhs = laplacian2d_nondiv(.01, lambda x,y: 0*x, kappablob, None,
lambda x,y:x, dirichlet=((0,-1),()))
u_nondiv = sp.linalg.spsolve(A, rhs).reshape(x.shape)
pyplot.contourf(x, y, u_nondiv, 10)
pyplot.colorbar();
```
## Weak forms
When we write
$$ {\huge "} - \nabla\cdot \big( \kappa \nabla u \big) = 0 {\huge "} \text{ on } \Omega $$
where $\kappa$ is a discontinuous function, that's not exactly what we mean, since the derivative of that discontinuous function doesn't exist. Formally, however, let us multiply by a "test function" $v$ and integrate,
\begin{split}
- \int_\Omega v \nabla\cdot \big( \kappa \nabla u \big) = 0 & \text{ for all } v \\
\int_\Omega \nabla v \cdot \kappa \nabla u = \int_{\partial \Omega} v \kappa \nabla u \cdot \hat n & \text{ for all } v
\end{split}
where we have used integration by parts. This is called the **weak form** of the PDE and will be what we actually discretize using finite element methods. All the terms make sense when $\kappa$ is discontinuous. Now suppose our domain is decomposed into two disjoint subdomains $$\overline{\Omega_1 \cup \Omega_2} = \overline\Omega $$
with interface $$\Gamma = \overline\Omega_1 \cap \overline\Omega_2$$ and $\kappa_1$ is continuous on $\overline\Omega_1$ and $\kappa_2$ is continuous on $\overline\Omega_2$, but possibly $\kappa_1(x) \ne \kappa_2(x)$ for $x \in \Gamma$,
\begin{split}
\int_\Omega \nabla v \cdot \kappa \nabla u &= \int_{\Omega_1} \nabla v \cdot \kappa_1\nabla u + \int_{\Omega_2} \nabla v \cdot \kappa_2 \nabla u \\
&= -\int_{\Omega_1} v \nabla\cdot \big(\kappa_1 \nabla u \big) + \int_{\partial \Omega_1} v \kappa_1 \nabla u \cdot \hat n \\
&\qquad -\int_{\Omega_2} v \nabla\cdot \big(\kappa_2 \nabla u \big) + \int_{\partial \Omega_2} v \kappa_2 \nabla u \cdot \hat n \\
&= -\int_{\Omega} v \nabla\cdot \big(\kappa \nabla u \big) + \int_{\partial \Omega} v \kappa \nabla u \cdot \hat n + \int_{\Gamma} v (\kappa_1 - \kappa_2) \nabla u\cdot \hat n .
\end{split}
* Which direction is $\hat n$ for the integral over $\Gamma$?
* Does it matter what we choose for the value of $\kappa$ on $\Gamma$ in the volume integral?
When $\kappa$ is continuous, the jump term vanishes and we recover the **strong form**
$$ - \nabla\cdot \big( \kappa \nabla u \big) = 0 \text{ on } \Omega . $$
But if $\kappa$ is discontinuous, we would need to augment this with a jump condition ensuring that the flux $-\kappa \nabla u$ is continuous. We could go add this condition to our FD code to recover convergence in case of discontinuous $\kappa$, but it is messy.
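For the record, that jump condition states that the normal flux computed from either side of $\Gamma$ agrees,
$$ \kappa_1 \nabla u \big|_{\Omega_1} \cdot \hat n = \kappa_2 \nabla u \big|_{\Omega_2} \cdot \hat n \quad \text{on } \Gamma , $$
which is the precise sense in which the flux $-\kappa \nabla u$ is continuous across the interface.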
## Nonlinear problems
Let's consider the nonlinear problem
$$ -\nabla \cdot \big(\underbrace{(1 + u^2)}_{\kappa(u)} \nabla u \big) = f \text{ on } (0,1)^2 $$
subject to Dirichlet boundary conditions. We will discretize the divergence form and thus will need
$\kappa(u)$ evaluated at staggered points $(i-1/2,j)$, $(i,j-1/2)$, etc. We will calculate these by averaging
$$ u_{i-1/2,j} = \frac{u_{i-1,j} + u_{i,j}}{2} $$
and similarly for the other staggered directions.
To use a Newton method, we also need the derivatives
$$ \frac{\partial \kappa_{i-1/2,j}}{\partial u_{i,j}} = 2 u_{i-1/2,j} \frac{\partial u_{i-1/2,j}}{\partial u_{i,j}} = u_{i-1/2,j} . $$
In the function below, we compute both the residual
$$F(u) = -\nabla\cdot \kappa(u) \nabla u - f(x,y)$$
and its Jacobian
$$J(u) = \frac{\partial F}{\partial u} . $$
```python
def hgrid(h):
m = int(1/h + 1) # Number of elements in terms of nominal grid spacing h
h = 1 / (m-1) # Actual grid spacing
c = numpy.linspace(0, 1, m)
y, x = numpy.meshgrid(c, c)
return x, y
def nonlinear2d_div(h, x, y, u, forcing, g0, dirichlet=((),())):
m = x.shape[0]
u0 = g0(x, y).flatten()
F = -forcing(x, y).flatten()
ai = []
aj = []
av = []
def idx(i, j):
i = (m-1) - abs(m-1 - abs(i))
j = (m-1) - abs(m-1 - abs(j))
return i*m + j
mask = numpy.ones_like(x, dtype=bool)
mask[dirichlet[0],:] = False
mask[:,dirichlet[1]] = False
mask = mask.flatten()
u = u.flatten()
F[mask == False] = u[mask == False] - u0[mask == False]
u[mask == False] = u0[mask == False]
for i in range(m):
for j in range(m):
row = idx(i, j)
stencili = numpy.array([idx(*pair) for pair in [(i-1, j), (i, j-1), (i, j), (i, j+1), (i+1, j)]])
# Stencil to evaluate gradient at four staggered points
grad = numpy.array([[-1, 0, 1, 0, 0],
[0, -1, 1, 0, 0],
[0, 0, -1, 1, 0],
[0, 0, -1, 0, 1]]) / h
# Stencil to average at four staggered points
avg = numpy.array([[1, 0, 1, 0, 0],
[0, 1, 1, 0, 0],
[0, 0, 1, 1, 0],
[0, 0, 1, 0, 1]]) / 2
# Stencil to compute divergence at cell centers from fluxes at four staggered points
div = numpy.array([-1, -1, 1, 1]) / h
ustencil = u[stencili]
ustag = avg @ ustencil
kappa = 1 + ustag**2
if mask[row] == 0: # Dirichlet boundary
ai.append(row)
aj.append(row)
av.append(1)
else:
F[row] -= div @ (kappa[:,None] * grad @ ustencil)
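                # Product rule for the Jacobian of kappa(u) * (grad @ u), using d(kappa)/d(ustencil) = 2*ustag * avg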
Jstencil = -div @ (kappa[:,None] * grad
+ 2*(ustag*(grad @ ustencil))[:,None] * avg)
smask = mask[stencili]
ai += [row]*sum(smask)
aj += stencili[smask].tolist()
av += Jstencil[smask].tolist()
J = sp.csr_matrix((av, (ai, aj)), shape=(m*m,m*m))
return F, J
h = .1
x, y = hgrid(h)
u = 0*x
F, J = nonlinear2d_div(h, x, y, u, lambda x,y: 0*x+1,
lambda x,y: 0*x, dirichlet=((0,-1),(0,-1)))
deltau = sp.linalg.spsolve(J, -F).reshape(x.shape)
pyplot.contourf(x, y, deltau)
pyplot.colorbar();
```
```python
def solve_nonlinear(h, g0, dirichlet, atol=1e-8, verbose=False):
x, y = hgrid(h)
u = 0*x
for i in range(50):
F, J = nonlinear2d_div(h, x, y, u, lambda x,y: 0*x+1,
lambda x,y: 0*x, dirichlet=((0,-1),(0,-1)))
anorm = numpy.linalg.norm(F, numpy.inf)
if verbose:
print('{:2d}: anorm {:8e}'.format(i,anorm))
if anorm < atol:
break
deltau = sp.linalg.spsolve(J, -F)
u += deltau.reshape(x.shape)
return x, y, u, i
x, y, u, i = solve_nonlinear(.1, lambda x,y: 0*x, dirichlet=((0,-1),(0,-1)), verbose=True)
pyplot.contourf(x, y, u)
pyplot.colorbar();
```
## Homework 3: Due 2017-11-03
Write a solver for the regularized $p$-Laplacian,
$$ -\nabla\cdot\big( \kappa(\nabla u) \nabla u \big) = 0 $$
where
$$ \kappa(\nabla u) = \big(\frac 1 2 \epsilon^2 + \frac 1 2 \nabla u \cdot \nabla u \big)^{\frac{p-2}{2}}, $$
$ \epsilon > 0$, and $1 < p < \infty$. The case $p=2$ is the conventional Laplacian. This problem gets more strongly nonlinear when $p$ is far from 2 and when $\epsilon$ approaches zero. The $p \to 1$ limit is related to plasticity and has applications in non-Newtonian flows and structural mechanics.
1. Implement a "Picard" solver, which is like a Newton solver except that the Jacobian is replaced by the linear system
$$ J_{\text{Picard}}(u) \delta u \sim -\nabla\cdot\big( \kappa(\nabla u) \nabla \delta u \big) . $$
This is much easier to implement than the full Newton linearization. How fast does this method converge for values of $p < 2$ and $p > 2$?
2. Use the linearization above as a preconditioner to a Newton-Krylov method. That is, use [`scipy.sparse.linalg.LinearOperator`](https://docs.scipy.org/doc/scipy-0.19.1/reference/generated/scipy.sparse.linalg.LinearOperator.html) to apply the Jacobian to a vector
$$ \tilde J(u) v = \frac{F(u + h v) - F(u)}{h} . $$
Then for each linear solve, use [`scipy.sparse.linalg.gmres`](https://docs.scipy.org/doc/scipy-0.19.1/reference/generated/scipy.sparse.linalg.gmres.html) and pass as a preconditioner a direct solve with the Picard linearization above. (You might find [`scipy.sparse.linalg.factorized`](https://docs.scipy.org/doc/scipy-0.19.1/reference/generated/scipy.sparse.linalg.factorized.html#scipy.sparse.linalg.factorized) to be useful.) Compare algebraic convergence to that of the Picard method; a sketch of the matrix-free plumbing appears after this list.
3. Can you directly implement a Newton linearization? Either do it or explain what is involved. How will its nonlinear convergence compare to that of the Newton-Krylov method?
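A minimal sketch of the matrix-free plumbing for the Newton-Krylov part is below. The names `residual` and `assemble_picard` are placeholders for functions you would write (returning $F(u)$ and the sparse Picard matrix, respectively), and `fd_eps` is the finite-differencing parameter, not the grid spacing; only the `scipy.sparse.linalg` calls are the point here.
```python
import numpy
import scipy.sparse.linalg

def newton_krylov(u0, residual, assemble_picard, atol=1e-8, fd_eps=1e-6):
    u = u0.copy()
    for it in range(50):
        F = residual(u)
        if numpy.linalg.norm(F, numpy.inf) < atol:
            break
        # Matrix-free Jacobian action J(u) v ~= (F(u + eps v) - F(u)) / eps
        def jac_vec(v):
            return (residual(u + fd_eps * v) - F) / fd_eps
        J = scipy.sparse.linalg.LinearOperator((u.size, u.size), matvec=jac_vec)
        # Preconditioner: direct solve with the (sparse) Picard linearization
        picard_solve = scipy.sparse.linalg.factorized(assemble_picard(u).tocsc())
        M = scipy.sparse.linalg.LinearOperator((u.size, u.size), matvec=picard_solve)
        du, info = scipy.sparse.linalg.gmres(J, -F, M=M)
        u += du
    return u, it
```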
# Wave equations and multi-component systems
The acoustic wave equation with constant wave speed $c$ can be written
$$ \ddot u - c^2 \nabla\cdot \nabla u = 0 $$
where $u$ is typically a pressure.
We can convert to a first order system
$$ \begin{bmatrix} \dot u \\ \dot v \end{bmatrix} = \begin{bmatrix} 0 & I \\ c^2 \nabla\cdot \nabla & 0 \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix} . $$
We will choose a zero-penetration boundary condition $\nabla u \cdot \hat n = 0$, which will cause waves to reflect.
```python
%run fdtools.py
x, y, L, _ = laplacian2d_bc(.1, lambda x,y: 0*x,
lambda x,y: 0*x, dirichlet=((),()))
A = sp.bmat([[None, sp.eye(*L.shape)],
[-L, None]])
eigs = sp.linalg.eigs(A, 10, which='LM')[0]
print(eigs)
maxeig = max(eigs.imag)
u0 = numpy.concatenate([numpy.exp(-8*(x**2 + y**2)), 0*x], axis=None)
hist = ode_rkexplicit(lambda t, u: A @ u, u0, tfinal=2, h=2/maxeig)
def plot_wave(x, y, time, U):
u = U[:x.size].reshape(x.shape)
pyplot.contourf(x, y, u)
pyplot.colorbar()
pyplot.title('Wave solution t={:f}'.format(time));
for step in numpy.linspace(0, len(hist)-1, 6, dtype=int):
pyplot.figure()
plot_wave(x, y, *hist[step])
```
* This was a second order discretization, but we could extend it to higher order.
* The largest eigenvalues of this operator are proportional to $c/h$.
* Formally, we can write this equation in conservative form
$$ \begin{bmatrix} \dot u \\ \dot{\mathbf v} \end{bmatrix} = \begin{bmatrix} 0 & c\nabla\cdot \\ c \nabla & 0 \end{bmatrix} \begin{bmatrix} u \\ \mathbf v \end{bmatrix} $$
where $\mathbf{v}$ is now a momentum vector and $\nabla u = \nabla\cdot (u I)$. This formulation could produce an anti-symmetric ($A^T = -A$) discretization. Discretizations with this property are sometimes called "mimetic".
* A conservative form is often preferred when studying waves traveling through materials with different wave speeds $c$.
* This is a Hamiltonian system. While high-order Runge-Kutta methods can be quite accurate, "symplectic" time integrators are needed to preserve the structure of the Hamiltonian (related to energy conservation) over long periods of time. The midpoint method (aka $\theta=1/2$) is one such method. There are also explicit symplectic methods such as [Verlet methods](https://en.wikipedia.org/wiki/Verlet_integration), though these can be fragile. A minimal sketch of one Verlet step is shown below.
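The sketch below shows one explicit symplectic (velocity Verlet / leapfrog) step for the semi-discrete second-order form $\ddot u = -c^2 L u$, where `L` is the discrete $-\nabla\cdot\nabla$ operator assembled by `laplacian2d_bc` above and `u`, `v` are flat arrays holding the pressure and its time derivative.
```python
def verlet_step(u, v, L, c, dt):
    # One velocity-Verlet step for u'' = -c^2 L u (L is the discrete -div grad).
    a = -c**2 * (L @ u)
    u_new = u + dt * v + 0.5 * dt**2 * a
    a_new = -c**2 * (L @ u_new)
    v_new = v + 0.5 * dt * (a + a_new)
    return u_new, v_new
```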
| 8df5e5a54f78dc3ba752be872af7769bc04c8236 | 438,334 | ipynb | Jupyter Notebook | FD2D.ipynb | reycronin/numpde | 499e762ffa6f91592fc444523e87f93e0693c362 | [
"BSD-2-Clause"
] | 8 | 2017-11-18T00:48:54.000Z | 2018-01-23T15:25:43.000Z | FD2D.ipynb | reycronin/numpde | 499e762ffa6f91592fc444523e87f93e0693c362 | [
"BSD-2-Clause"
] | null | null | null | FD2D.ipynb | reycronin/numpde | 499e762ffa6f91592fc444523e87f93e0693c362 | [
"BSD-2-Clause"
] | 16 | 2017-08-28T16:13:41.000Z | 2018-08-08T15:37:46.000Z | 281.886817 | 20,640 | 0.879405 | true | 18,988 | Qwen/Qwen-72B | 1. YES
2. YES | 0.870597 | 0.682574 | 0.594247 | __label__eng_Latn | 0.296541 | 0.218965 |
# Time-varying Convex Optimization
This notebook will provide implementation and examples from the paper [Time-varying Convex Optimization](https://arxiv.org/abs/1808.03994), Amir Ali Ahmadi and Bachir El Khadir, 2018.
* bachir009@gmail.com
* sindhwani@google.com
#### Copyright 2018 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
```
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
>[Time-varying Convex Optimization](#scrollTo=cgvP6mUf5WJs)
>>>>[Copyright 2018 Google LLC.](#scrollTo=qDTiddF1Q8Iu)
>>[Install Dependencies](#scrollTo=_xLiNJfmORvW)
>>[Time Varying Semi-definite Programs](#scrollTo=6PuweE1NO-sZ)
>>[Some Polynomial Tools](#scrollTo=27St0x2TO7Eu)
>>[Examples: To Add.](#scrollTo=enYVtJrS5mCw)
## Install Dependencies
```
!pip install cvxpy
!pip install sympy
```
Collecting cvxpy
Using cached https://files.pythonhosted.org/packages/76/3c/4314c56be5b069f4d542046912d503a07c96b42c0b075ef0e32b48f8579f/cvxpy-1.0.10.tar.gz
Collecting osqp (from cvxpy)
Using cached https://files.pythonhosted.org/packages/43/f2/bbeb83c0da6fd89a6d835b98d85ec76c04f39a476c065e3c99b6b709c493/osqp-0.4.1-cp36-cp36m-manylinux1_x86_64.whl
Collecting ecos>=2 (from cvxpy)
Using cached https://files.pythonhosted.org/packages/b6/b4/988b15513b13e8ea2eac65e97d84221ac515a735a93f046e2a2a3d7863fc/ecos-2.0.5.tar.gz
Collecting scs>=1.1.3 (from cvxpy)
Using cached https://files.pythonhosted.org/packages/b3/fd/6e01c4f4a69fcc6c3db130ba55572089e78e77ea8c0921a679f9da1ec04c/scs-2.0.2.tar.gz
Collecting multiprocess (from cvxpy)
Using cached https://files.pythonhosted.org/packages/7a/ee/b9bf3e171f936743758ef924622d8dd00516c5532b00a1210a09bce68325/multiprocess-0.70.6.1.tar.gz
Collecting fastcache (from cvxpy)
Using cached https://files.pythonhosted.org/packages/fb/98/93f2d36738868e8dd5a8dbfc918169b24658f63e5fa041fe000c22ae4f8b/fastcache-1.0.2.tar.gz
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from cvxpy) (1.11.0)
Requirement already satisfied: toolz in /usr/local/lib/python3.6/dist-packages (from cvxpy) (0.9.0)
Requirement already satisfied: numpy>=1.14 in /usr/local/lib/python3.6/dist-packages (from cvxpy) (1.14.6)
Requirement already satisfied: scipy>=0.19 in /usr/local/lib/python3.6/dist-packages (from cvxpy) (1.1.0)
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from osqp->cvxpy) (0.16.0)
Requirement already satisfied: dill>=0.2.8.1 in /usr/local/lib/python3.6/dist-packages (from multiprocess->cvxpy) (0.2.8.2)
Building wheels for collected packages: cvxpy, ecos, scs, multiprocess, fastcache
  Running setup.py bdist_wheel for cvxpy ... done
  Stored in directory: /root/.cache/pip/wheels/8b/af/aa/46570716431521ee92085f317c33b2f427e27f08fe4a8a738a
  Running setup.py bdist_wheel for ecos ... done
  Stored in directory: /root/.cache/pip/wheels/50/91/1b/568de3c087b3399b03d130e71b1fd048ec072c45f72b6b6e9a
  Running setup.py bdist_wheel for scs ... done
  Stored in directory: /root/.cache/pip/wheels/ff/f0/aa/530ccd478d7d9900b4e9ef5bc5a39e895ce110bed3d3ac653e
  Running setup.py bdist_wheel for multiprocess ... done
  Stored in directory: /root/.cache/pip/wheels/8b/36/e5/96614ab62baf927e9bc06889ea794a8e87552b84bb6bf65e3e
  Running setup.py bdist_wheel for fastcache ... done
  Stored in directory: /root/.cache/pip/wheels/b7/90/c0/da92ac52d188d9ebca577044e89a14d0e6ff333c1bcd1ebc14
Successfully built cvxpy ecos scs multiprocess fastcache
Installing collected packages: osqp, ecos, scs, multiprocess, fastcache, cvxpy
Successfully installed cvxpy-1.0.10 ecos-2.0.5 fastcache-1.0.2 multiprocess-0.70.6.1 osqp-0.4.1 scs-2.0.2
Requirement already satisfied: sympy in /usr/local/lib/python3.6/dist-packages (1.1.1)
Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.6/dist-packages (from sympy) (1.0.0)
```
import cvxpy
import numpy as np
import scipy as sp
```
## Time Varying Semi-definite Programs
This section implements the TV-SDP framework for CVXPY, which imposes constraints of the form:
$$A(t) \succeq 0 \; \forall t \in [0, 1],$$
where $$A(t)$$ is a polynomial symmetric matrix, i.e. a symmetric matrix
whose entries are polynomial functions of time, and $$A(t) \succeq 0$$
means that all the eigenvalues of the matrix $$A(t)$$ are nonnegative.
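Concretely, the code below writes
$$A(t) = t\,\Sigma_1(t) + (1-t)\,\Sigma_2(t)$$
when the degree $d$ is odd, and
$$A(t) = \Sigma_1(t) + t(1-t)\,\Sigma_2(t)$$
when $d$ is even, where each $\Sigma_i(t)$ is a polynomial matrix with a positive semidefinite Gram matrix $Q_i$ (these two terms are built by the helpers `_alpha` and `_beta`). Since $t \ge 0$ and $1-t \ge 0$ on $[0, 1]$, any such $A(t)$ is PSD on $[0, 1]$.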
```
def _mult_poly_matrix_poly(p, mat_y):
"""Multiplies the polynomial matrix mat_y by the polynomial p entry-wise.
Args:
p: list of size d1+1 representation the polynomial sum p[i] t^i.
mat_y: (m, m, d2+1) tensor representing a polynomial
matrix Y_ij(t) = sum mat_y[i, j, k] t^k.
Returns:
(m, m, d1+d2+1) tensor representing the polynomial matrix p(t)*Y(t).
"""
mult_op = lambda q: np.convolve(p, q)
p_times_y = np.apply_along_axis(mult_op, 2, mat_y)
return p_times_y
def _make_zero(p):
"""Returns the constraints p_i == 0.
Args:
p: list of cvxpy expressions.
Returns:
A list of cvxpy constraints [pi == 0 for pi in p].
"""
return [pi == 0 for pi in p]
def _lambda(m, d, Q):
"""Returns the mxm polynomial matrix of degree d whose Gram matrix is Q.
Args:
m: size of the polynomial matrix to be returned.
    d: degree of the polynomial matrix to be returned.
Q: (m*d/2, m*d/2) gram matrix of the polynomial matrix to be returned.
Returns:
(m, m, d+1) tensor representing the polynomial whose gram matrix is Q.
i.e. $$Y_ij(t) == sum_{r, s s.t. r+s == k} Q_{y_i t^r, y_j t^s} t^k$$.
"""
d_2 = int(d / 2)
def y_i_j(i, j):
poly = list(np.zeros((d + 1, 1)))
for k in range(d_2 + 1):
for l in range(d_2 + 1):
poly[k + l] += Q[i + k * m, j + l * m]
return poly
mat_y = [[y_i_j(i, j) for j in range(m)] for i in range(m)]
mat_y = np.array(mat_y)
return mat_y
def _alpha(m, d, Q):
"""Returns t*Lambda(Q) if d odd, Lambda(Q) o.w.
Args:
m: size of the polynomial matrix to be returned.
    d: degree of the polynomial matrix to be returned.
Q: gram matrix of the polynomial matrix.
Returns:
t*Lambda(Q) if d odd, Lambda(Q) o.w.
"""
if d % 2 == 1:
w1 = np.array([0, 1]) # t
else:
w1 = np.array([1]) # 1
mat_y = _lambda(m, d + 1 - len(w1), Q)
return _mult_poly_matrix_poly(w1, mat_y)
def _beta(m, d, Q):
"""Returns (1-t)*Lambda(Q) if d odd, t(1-t)*Lambda(Q) o.w.
Args:
m: size of the polynomial matrix to be returned.
    d: degree of the polynomial matrix to be returned.
Q: gram matrix of the polynomial matrix.
Returns:
(1-t)*Lambda(Q) if d odd, t(1-t)*Lambda(Q) o.w.
"""
if d % 2 == 1:
w2 = np.array([1, -1]) # 1 - t
else:
w2 = np.array([0, 1, -1]) # t - t^2
mat_y = _lambda(m, d + 1 - len(w2), Q)
return _mult_poly_matrix_poly(w2, mat_y)
def make_poly_matrix_psd_on_0_1(mat_x):
"""Returns the constraint X(t) psd on [0, 1].
Args:
mat_x: (m, m, d+1) tensor representing a mxm polynomial matrix of degree d.
Returns:
A list of cvxpy constraints imposing that X(t) psd on [0, 1].
"""
m, m2, d = len(mat_x), len(mat_x[0]), len(mat_x[0][0]) - 1
# square matrix
assert m == m2
# build constraints: X == alpha(Q1) + beta(Q2) with Q1, Q2 >> 0
d_2 = int(d / 2)
size_Q1 = m * (d_2 + 1)
size_Q2 = m * d_2 if d % 2 == 0 else m * (d_2 + 1)
Q1 = cvxpy.Variable((size_Q1, size_Q1))
Q2 = cvxpy.Variable((size_Q2, size_Q2))
diff = mat_x - _alpha(m, d, Q1) - _beta(m, d, Q2)
diff = diff.reshape(-1)
const = _make_zero(diff)
const += [Q1 >> 0, Q2 >> 0, Q1.T == Q1, Q2.T == Q2]
return const
```
## Some Polynomial Tools
```
def integ_poly_0_1(p):
"""Return the integral of p(t) between 0 and 1."""
return np.array(p).dot(1 / np.linspace(1, len(p), len(p)))
def spline_regression(x, y, num_parts, deg=3, alpha=.01, smoothness=1):
"""Fits splines with `num_parts` to data `(x, y)`.
Finds a piecewise polynomial function `p` of degree `deg` with `num_parts`
pieces that minimizes the fitting error sum |y_i - p(x_i)| + alpha |p|_1.
Args:
x: [N] ndarray of input data. Must be increasing.
y: [N] ndarray, same size as `x`.
num_parts: int, Number of pieces of the piecewise polynomial function `p`.
deg: int, degree of each polynomial piece of `p`.
alpha: float, Regularizer.
smoothness: int, the desired degree of smoothness of `p`, e.g.
`smoothness==0` corresponds to a continuous `p`.
Returns:
[num_parts, deg+1] ndarray representing the piecewise polynomial `p`.
Entry (i, j) contains j^th coefficient of the i^th piece of `p`.
"""
# coefficients of the polynomial of p.
p = cvxpy.Variable((num_parts, deg + 1), name='p')
# convert to numpy format because it is easier to work with.
numpy_p = np.array([[p[i, j] for j in range(deg+1)] \
for i in range(num_parts)])
regularizer = alpha * cvxpy.norm(p, 1)
num_points_per_part = int(len(x) / num_parts)
smoothness_constraints = []
  # cutoff values
t = []
fitting_value = 0
# split the data into equal `num_parts` pieces
for i in range(num_parts):
# the part of the data that the current piece fits
sub_x = x[num_points_per_part * i:num_points_per_part * (i + 1)]
sub_y = y[num_points_per_part * i:num_points_per_part * (i + 1)]
# compute p(sub_x)
# pow_x = np.array([sub_x**k for k in range(deg + 1)])
# sub_p = polyval(sub_xnumpy_p[i, :].dot(pow_x)
sub_p = eval_poly_from_coefficients(numpy_p[i], sub_x)
# fitting value of the current part of p,
# equal to sqrt(sum |p(x_i) - y_i|^2), where the sum
# is over data (x_i, y_i) in the current piece.
fitting_value += cvxpy.norm(cvxpy.vstack(sub_p - sub_y), 1)
# glue things together by ensuring smoothness of the p at x1
if i > 0:
x1 = x[num_points_per_part * i]
# computes the derivatives p'(x1) for the left and from the right of x1
# x_deriv is the 2D matrix k!/(k-j)! x1^(k-j) indexed by (j, k)
x1_deriv = np.array(
[[np.prod(range(k - j, k)) * x1**(k - j)
for k in range(deg + 1)]
for j in range(smoothness + 1)]).T
p_deriv_left = numpy_p[i - 1].dot(x1_deriv)
p_deriv_right = numpy_p[i].dot(x1_deriv)
smoothness_constraints += [
cvxpy.vstack(p_deriv_left - p_deriv_right) == 0
]
t.append(x1)
min_loss = cvxpy.Minimize(fitting_value + regularizer)
prob = cvxpy.Problem(min_loss, smoothness_constraints)
prob.solve(verbose=False)
return _piecewise_polynomial_as_function(p.value, t)
def _piecewise_polynomial_as_function(p, t):
"""Returns the piecewise polynomial `p` as a function.
Args:
p: [N, d+1] array of coefficients of p.
    t: [N] array of cutoffs.
Returns:
The function f s.t. f(x) = p_i(x) if t[i] < x < t[i+1].
"""
def evaluate_p_at(x):
"""Returns p(x)."""
pieces = [x < t[0]] + [(x >= ti) & (x < ti_plusone) \
for ti, ti_plusone in zip(t[:-1], t[1:])] +\
[x >= t[-1]]
# pylint: disable=unused-variable
func_list = [
lambda u, pi=pi: eval_poly_from_coefficients(pi, u) for pi in p
]
return np.piecewise(x, pieces, func_list)
return evaluate_p_at
def eval_poly_from_coefficients(coefficients, x):
"""Evaluates the polynomial whose coefficients are `coefficients` at `x`."""
return coefficients.dot([x**i for i in range(len(coefficients))])
```
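A quick sanity check of two of the helpers above (the example values here are chosen purely for illustration):
```
# p(t) = 1 + 2 t + 3 t^2: its integral over [0, 1] is 1 + 1 + 1 = 3.
print(integ_poly_0_1([1, 2, 3]))
# The same polynomial evaluated at t = 0.5: 1 + 1 + 0.75 = 2.75.
print(eval_poly_from_coefficients(np.array([1.0, 2.0, 3.0]), 0.5))
```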
## Examples: To Add.
| 35cf53d113e7101b7194e5756ae9f310868348e4 | 20,912 | ipynb | Jupyter Notebook | time_varying_optimization/tvsdp.ipynb | deepneuralmachine/google-research | d2ce2cf0f5c004f8d78bfeddf6e88e88f4840231 | [
"Apache-2.0"
] | 23,901 | 2018-10-04T19:48:53.000Z | 2022-03-31T21:27:42.000Z | time_varying_optimization/tvsdp.ipynb | deepneuralmachine/google-research | d2ce2cf0f5c004f8d78bfeddf6e88e88f4840231 | [
"Apache-2.0"
] | 891 | 2018-11-10T06:16:13.000Z | 2022-03-31T10:42:34.000Z | time_varying_optimization/tvsdp.ipynb | deepneuralmachine/google-research | d2ce2cf0f5c004f8d78bfeddf6e88e88f4840231 | [
"Apache-2.0"
] | 6,047 | 2018-10-12T06:31:02.000Z | 2022-03-31T13:59:28.000Z | 43.028807 | 604 | 0.520228 | true | 4,441 | Qwen/Qwen-72B | 1. YES
2. YES | 0.685949 | 0.737158 | 0.505653 | __label__eng_Latn | 0.710216 | 0.013131 |
# Introduction
This example gives a simple demonstration of chaotic behavior in a simple two-body system. The system is made up of a slender rod that is connected to the ceiling at one end with a revolute joint that rotates about the $\hat{\mathbf{n}}_y$ unit vector. At the other end of the rod a flat plate is attached via a second revolute joint, allowing the plate to rotate about the rod's axis, which aligns with the $\hat{\mathbf{a}}_z$ unit vector.
# Setup
```python
import numpy as np
import matplotlib.pyplot as plt
import sympy as sm
import sympy.physics.mechanics as me
from pydy.system import System
from pydy.viz import Cylinder, Plane, VisualizationFrame, Scene
```
```python
%matplotlib nbagg
```
```python
me.init_vprinting(use_latex='mathjax')
```
# Define Variables
First define the system constants:
- $m_A$: Mass of the slender rod.
- $m_B$: Mass of the plate.
- $l_B$: Distance from $N_o$ to $B_o$ along the slender rod's axis.
- $w$: The width of the plate.
- $h$: The height of the plate.
- $g$: The acceleration due to gravity.
```python
mA, mB, lB, w, h, g = sm.symbols('m_A, m_B, L_B, w, h, g')
```
There are two time varying generalized coordinates:
- $\theta(t)$: The angle of the slender rod with respect to the ceiling.
- $\phi(t)$: The angle of the plate with respect to the slender rod.
The two generalized speeds will then be defined as:
- $\omega(t)=\dot{\theta}$: The angular rate of the slender rod with respect to the ceiling.
- $\alpha(t)=\dot{\phi}$: The angular rate of the plate with respect to the slender rod.
```python
theta, phi = me.dynamicsymbols('theta, phi')
omega, alpha = me.dynamicsymbols('omega, alpha')
```
The kinematical differential equations are defined in this fashion for the `KanesMethod` class:
$$0 = \omega - \dot{\theta}\\
0 = \alpha - \dot{\phi}$$
```python
kin_diff = (omega - theta.diff(), alpha - phi.diff())
kin_diff
```
$$\left ( \omega - \dot{\theta}, \quad \alpha - \dot{\phi}\right )$$
# Define Orientations
There are three reference frames. These are defined as such:
```python
N = me.ReferenceFrame('N')
A = me.ReferenceFrame('A')
B = me.ReferenceFrame('B')
```
The frames are oriented with respect to each other by simple revolute rotations. The following lines set the orientations:
```python
A.orient(N, 'Axis', (theta, N.y))
B.orient(A, 'Axis', (phi, A.z))
```
# Define Positions
Three points are necessary to define the problem:
- $N_o$: The fixed point which the slender rod rotates about.
- $A_o$: The center of mass of the slender rod.
- $B_o$: The center of mass of the plate.
```python
No = me.Point('No')
Ao = me.Point('Ao')
Bo = me.Point('Bo')
```
The two centers of mass positions can be set relative to the fixed point, $N_o$.
```python
lA = (lB - h / 2) / 2
Ao.set_pos(No, lA * A.z)
Bo.set_pos(No, lB * A.z)
```
# Specify the Velocities
The generalized speeds should be used in the definition of the linear and angular velocities when using Kane's method. For simple rotations and the defined kinematical differential equations the angular rates are:
```python
A.set_ang_vel(N, omega * N.y)
B.set_ang_vel(A, alpha * A.z)
```
Once the angular velocities are specified, the linear velocities can be computed using the two-point velocity theorem, starting with the origin point having a velocity of zero.
```python
No.set_vel(N, 0)
```
```python
Ao.v2pt_theory(No, N, A)
```
$$\left(\frac{L_{B}}{2} - \frac{h}{4}\right) \omega\mathbf{\hat{a}_x}$$
```python
Bo.v2pt_theory(No, N, A)
```
$$L_{B} \omega\mathbf{\hat{a}_x}$$
# Inertia
The central inertia of the symmetric slender rod with respect to its reference frame is a function of its length and its mass.
```python
IAxx = sm.S(1) / 12 * mA * (2 * lA)**2
IAyy = IAxx
IAzz = 0
IA = (me.inertia(A, IAxx, IAyy, IAzz), Ao)
```
This gives the inertia tensor:
```python
IA[0].to_matrix(A)
```
$$\left[\begin{matrix}\frac{m_{A}}{12} \left(L_{B} - \frac{h}{2}\right)^{2} & 0 & 0\\0 & \frac{m_{A}}{12} \left(L_{B} - \frac{h}{2}\right)^{2} & 0\\0 & 0 & 0\end{matrix}\right]$$
The central inertia of the symmetric plate with respect to its reference frame is a function of its width and height.
```python
IBxx = sm.S(1)/12 * mB * h**2
IByy = sm.S(1)/12 * mB * (w**2 + h**2)
IBzz = sm.S(1)/12 * mB * w**2
IB = (me.inertia(B, IBxx, IByy, IBzz), Bo)
```
```python
IB[0].to_matrix(B)
```
$$\left[\begin{matrix}\frac{h^{2} m_{B}}{12} & 0 & 0\\0 & \frac{m_{B}}{12} \left(h^{2} + w^{2}\right) & 0\\0 & 0 & \frac{m_{B} w^{2}}{12}\end{matrix}\right]$$
All of the information to define the two rigid bodies is now available. This information is used to create an object for the rod and the plate.
```python
rod = me.RigidBody('rod', Ao, A, mA, IA)
```
```python
plate = me.RigidBody('plate', Bo, B, mB, IB)
```
# Loads
The only loads in this problem are the forces due to gravity that act on the center of mass of each body. These forces are specified with a tuple containing the point of application and the force vector.
```python
rod_gravity = (Ao, mA * g * N.z)
plate_gravity = (Bo, mB * g * N.z)
```
# Equations of motion
Now that the kinematics, kinetics, and inertia have all been defined, the `KanesMethod` class can be used to generate the equations of motion of the system. In this case the independent generalized coordinates, the independent generalized speeds, the kinematical differential equations, and the inertial reference frame are used to initialize the class.
```python
kane = me.KanesMethod(N, q_ind=(theta, phi), u_ind=(omega, alpha), kd_eqs=kin_diff)
```
The equations of motion are then generated by passing in all of the loads and bodies to the `kanes_equations` method. This produces $f_r$ and $f_r^*$.
```python
bodies = (rod, plate)
loads = (rod_gravity, plate_gravity)
fr, frstar = kane.kanes_equations(loads, bodies)
```
```python
sm.trigsimp(fr)
```
$$\left[\begin{matrix}g \left(- \frac{L_{B} m_{A}}{2} - L_{B} m_{B} + \frac{h m_{A}}{4}\right) \operatorname{sin}\left(\theta\right)\\0\end{matrix}\right]$$
```python
sm.trigsimp(frstar)
```
$$\left[\begin{matrix}\frac{m_{B} w^{2}}{12} \alpha \omega \operatorname{sin}\left(2 \phi\right) - \left(\frac{L_{B}^{2} m_{A}}{3} + L_{B}^{2} m_{B} - \frac{L_{B} h}{3} m_{A} + \frac{h^{2} m_{A}}{12} + \frac{h^{2} m_{B}}{12} + \frac{m_{B} w^{2}}{12} \operatorname{cos}^{2}\left(\phi\right)\right) \dot{\omega}\\- \frac{m_{B} w^{2}}{24} \left(\omega^{2} \operatorname{sin}\left(2 \phi\right) + 2 \dot{\alpha}\right)\end{matrix}\right]$$
# Simulation
The equations of motion can now be simulated numerically. Values for the constants, initial conditions, and time are provided to the `System` class along with the symbolic `KanesMethod` object.
```python
sys = System(kane)
```
```python
sys.constants = {lB: 0.2, # meters
h: 0.1, # meters
w: 0.2, # meters
mA: 0.01, # kilograms
mB: 0.1, # kilograms
g: 9.81} # meters per second squared
```
```python
sys.initial_conditions = {theta: np.deg2rad(45),
phi: np.deg2rad(0.5),
omega: 0,
alpha: 0}
```
```python
sys.times = np.linspace(0, 10, 500)
```
The trajectories of the states are found with the `integrate` method.
```python
x = sys.integrate()
```
The angles can be plotted to see how they change with respect to time given the initial conditions.
```python
def plot():
plt.figure()
plt.plot(sys.times, np.rad2deg(x[:, :2]))
plt.legend([sm.latex(s, mode='inline') for s in sys.coordinates])
plot()
```
<IPython.core.display.Javascript object>
# Chaotic Behavior
Now change the initial condition of the plate angle just slightly to see if the behavior of the system is similar.
```python
sys.initial_conditions[phi] = np.deg2rad(1.0)
x = sys.integrate()
plot()
```
<IPython.core.display.Javascript object>
Seems all good, very similar behavior. But now set the rod angle to $90^\circ$ and try the same slight change in plate angle.
```python
sys.initial_conditions[theta] = np.deg2rad(90)
sys.initial_conditions[phi] = np.deg2rad(0.5)
x = sys.integrate()
plot()
```
<IPython.core.display.Javascript object>
First note that the plate behaves wildly. What happens when the initial plate angle is altered slightly?
```python
sys.initial_conditions[phi] = np.deg2rad(1.0)
x = sys.integrate()
plot()
```
<IPython.core.display.Javascript object>
The behavior does not look similar to the previous simulation. This is an example of chaotic behavior. The plate angle cannot be reliably predicted because slight changes in the initial conditions cause the behavior of the system to vary widely.
# Visualization
Finally, the system can be animated by attaching a cylinder and a plane shape to the rigid bodies. To properly align the coordinate axes of the shapes with the bodies, simple rotations are used.
```python
rod_shape = Cylinder(2 * lA, 0.005, color='red')
plate_shape = Plane(h, w, color='blue')
v1 = VisualizationFrame('rod',
A.orientnew('rod', 'Axis', (sm.pi / 2, A.x)),
Ao,
rod_shape)
v2 = VisualizationFrame('plate',
B.orientnew('plate', 'Body', (sm.pi / 2, sm.pi / 2, 0), 'XZX'),
Bo,
plate_shape)
scene = Scene(N, No, v1, v2, system=sys)
```
The following method opens up a simple GUI that shows a 3D animation of the system.
```python
scene.display_ipython()
```
```python
%load_ext version_information
%version_information numpy, sympy, scipy, matplotlib, pydy
```
<table><tr><th>Software</th><th>Version</th></tr><tr><td>Python</td><td>3.5.3 64bit [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)]</td></tr><tr><td>IPython</td><td>4.2.0</td></tr><tr><td>OS</td><td>Linux 4.10.0 21 generic x86_64 with debian stretch sid</td></tr><tr><td>numpy</td><td>1.12.1</td></tr><tr><td>sympy</td><td>1.0</td></tr><tr><td>scipy</td><td>0.19.0</td></tr><tr><td>matplotlib</td><td>2.0.2</td></tr><tr><td>pydy</td><td>0.4.0dev</td></tr><tr><td colspan='2'>Mon May 29 13:08:51 2017 PDT</td></tr></table>
| 3ce27df1496ba02a2e7a4b713c81e51022f81033 | 390,715 | ipynb | Jupyter Notebook | examples/chaos_pendulum/chaos_pendulum.ipynb | JanMV/pydy | 22c6a3965853bb3641c63e493717976d775034e2 | [
"BSD-3-Clause"
] | 298 | 2015-01-31T11:43:22.000Z | 2022-03-15T02:18:21.000Z | examples/chaos_pendulum/chaos_pendulum.ipynb | JanMV/pydy | 22c6a3965853bb3641c63e493717976d775034e2 | [
"BSD-3-Clause"
] | 359 | 2015-01-17T16:56:42.000Z | 2022-02-08T05:27:08.000Z | examples/chaos_pendulum/chaos_pendulum.ipynb | JanMV/pydy | 22c6a3965853bb3641c63e493717976d775034e2 | [
"BSD-3-Clause"
] | 109 | 2015-02-03T13:02:45.000Z | 2021-12-21T12:57:21.000Z | 91.609613 | 72,791 | 0.75465 | true | 3,074 | Qwen/Qwen-72B | 1. YES
2. YES | 0.901921 | 0.839734 | 0.757373 | __label__eng_Latn | 0.951855 | 0.597964 |
# Derivatives of $S_0$
In this notebook we'll validate the analytical expressions for the derivatives of $S_0$, which involves the area of the "lens" of overlap between the occulted body and the occultor.
```python
import sympy
from sympy import *
from sympy.functions.special.tensor_functions import KroneckerDelta
# Initialize the session
init_session(quiet=True)
# Let's report what version of sympy this is
print("Using sympy version", sympy.__version__)
```
Using sympy version 1.3
## Define our quantities
```python
S0, k0, k1, r, b, A = symbols("S_0 \kappa_0 \kappa_1 r b A")
```
```python
A = sqrt((1 + r + b) * (b - 1 + r) * (b + 1 - r) * (1 + r - b)) / 2
```
```python
k0 = atan2(2 * A, (r - 1) * (r + 1) + b * b)
```
```python
k1 = atan2(2 * A, (1 - r) * (1 + r) + b * b)
```
```python
S0 = pi - k1 - r ** 2 * k0 + A
```
## Derivative with respect to $b$
```python
simplify(diff(S0, b))
```
This is equal to...
```python
expand(2 * A / b)
```
## Derivative with respect to $r$
```python
simplify(diff(S0, r))
```
This is equal to...
```python
expand(-2 * r * k0)
```
| 926bcd1fe635451f8b07b4a6ab3f0316ff05ae95 | 18,844 | ipynb | Jupyter Notebook | proofs/dS0drb.ipynb | langfzac/Limbdark.jl | a6a30c52e14686a9a01d7b4437f4f171c9c53a24 | [
"MIT"
] | 13 | 2019-05-08T09:03:56.000Z | 2021-02-14T20:53:23.000Z | proofs/dS0drb.ipynb | langfzac/Limbdark.jl | a6a30c52e14686a9a01d7b4437f4f171c9c53a24 | [
"MIT"
] | 75 | 2018-04-23T20:41:25.000Z | 2019-05-02T01:50:19.000Z | proofs/dS0drb.ipynb | langfzac/Limbdark.jl | a6a30c52e14686a9a01d7b4437f4f171c9c53a24 | [
"MIT"
] | 4 | 2019-05-15T08:06:04.000Z | 2020-02-24T19:11:47.000Z | 74.188976 | 4,124 | 0.803439 | true | 372 | Qwen/Qwen-72B | 1. YES
2. YES | 0.913677 | 0.851953 | 0.778409 | __label__eng_Latn | 0.945975 | 0.646838 |
# Initial Conditions (Experiences, Observables, Lagged Choices)
While working with **respy**, you often want to simulate the effects of policies in counterfactual environments. At the start of the simulation, you need to sample the characteristics of individuals. You normally want to do at least one of the following:
- Individuals should start with nonzero years of experience for some choice.
- The previous (lagged) choice in the first period can only be a subset of all choices in the model.
- An observed characteristic is not evenly distributed in the population.
Taken together these assumptions are called the initial conditions of a model. An initial condition is also called a *seed value* and determines the value of a variable in the first period of a dynamic system.
In the following, we describe how to set the initial condition for each of the the three points for a small Robinson Crusoe Economy. A more thorough presentation of a similar model can be found [here](../tutorials/robinson_crusoe.ipynb).
```python
%matplotlib inline
import io
import pandas as pd
import respy as rp
import matplotlib.pyplot as plt
import seaborn as sns
```
## Experiences
In a nutshell, Robinson can choose between fishing and staying in the hammock every period. He can accumulate experience in fishing which makes him more productive. To describe such a simple model, we write the following parameterization and simulate data with ten periods.
```python
params = """
category,name,value
delta,delta,0.95
wage_fishing,exp_fishing,0.01
nonpec_hammock,constant,1
shocks_sdcorr,sd_fishing,1
shocks_sdcorr,sd_hammock,1
shocks_sdcorr,corr_hammock_fishing,0
"""
options = {
"n_periods": 10,
"simulation_agents": 1_000,
"covariates": {"constant": "1"}
}
```
```python
params = pd.read_csv(io.StringIO(params), index_col=["category", "name"])
params
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th></th>
<th>value</th>
</tr>
<tr>
<th>category</th>
<th>name</th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<th>delta</th>
<th>delta</th>
<td>0.95</td>
</tr>
<tr>
<th>wage_fishing</th>
<th>exp_fishing</th>
<td>0.01</td>
</tr>
<tr>
<th>nonpec_hammock</th>
<th>constant</th>
<td>1.00</td>
</tr>
<tr>
<th rowspan="3" valign="top">shocks_sdcorr</th>
<th>sd_fishing</th>
<td>1.00</td>
</tr>
<tr>
<th>sd_hammock</th>
<td>1.00</td>
</tr>
<tr>
<th>corr_hammock_fishing</th>
<td>0.00</td>
</tr>
</tbody>
</table>
</div>
```python
simulate = rp.get_simulate_func(params, options)
df = simulate(params)
```
```python
fig, axs = plt.subplots(1, 2, figsize=(12.8, 4.8))
(df.groupby("Period").Choice.value_counts(normalize=True).unstack()
.plot.bar(ax=axs[0], stacked=True, rot=0, title="Choice Probabilities"))
(df.groupby("Period").Experience_Fishing.value_counts(normalize=True).unstack().plot
.bar(ax=axs[1], stacked=True, rot=0, title="Share of Experience Level per Period", cmap="Reds"))
axs[0].legend(["Fishing", "Hammock"], loc="upper center", bbox_to_anchor=(0.5, -0.15), ncol=2)
axs[1].legend(loc="upper center", bbox_to_anchor=(0.5, -0.15), ncol=5)
plt.show()
plt.close()
```
The figure on the left-hand side shows the choice probabilities for Robinson in each period. On the right-hand side, one can see the share of each fishing experience level per period. By default, Robinson starts with zero experience in fishing.
What if Robinson has an equal probability of starting with zero, one, or two periods of experience in fishing? The first way to add more complex initial conditions to the model is via **probability mass functions**. This type of distribution is handy if the probabilities do not depend on any information. To feed the information to **respy**, use the keyword ``initial_exp_fishing_*`` in the category-level of the index. Replace ``*`` with the experience level. In the name-level, use ``probability`` to signal that the float in ``value`` is a probability. The new parameter specification is below.
Note that one probability is set to 0.34 such that all probabilities sum to one. If that is not the case, respy will emit a warning and normalize probabilities.
```python
params = """
category,name,value
delta,delta,0.95
wage_fishing,exp_fishing,0.01
nonpec_hammock,constant,1
shocks_sdcorr,sd_fishing,1
shocks_sdcorr,sd_hammock,1
shocks_sdcorr,corr_hammock_fishing,0
initial_exp_fishing_0,probability,0.33
initial_exp_fishing_1,probability,0.33
initial_exp_fishing_2,probability,0.34
"""
```
```python
params = pd.read_csv(io.StringIO(params), index_col=["category", "name"])
params
```
| category | name | value |
|---|---|---|
| delta | delta | 0.95 |
| wage_fishing | exp_fishing | 0.01 |
| nonpec_hammock | constant | 1.00 |
| shocks_sdcorr | sd_fishing | 1.00 |
| shocks_sdcorr | sd_hammock | 1.00 |
| shocks_sdcorr | corr_hammock_fishing | 0.00 |
| initial_exp_fishing_0 | probability | 0.33 |
| initial_exp_fishing_1 | probability | 0.33 |
| initial_exp_fishing_2 | probability | 0.34 |
```python
simulate = rp.get_simulate_func(params, options)
df = simulate(params)
```
C:\Users\tobia\git\respy\respy\state_space.py:84: UserWarning: Some choices in the model are not admissible all the time. Thus, respy applies a penalty to the utility for these choices which is -400000 by default. For the full solution, the penalty only needs to be larger than all other value functions to be effective. Choose a milder penalty for the interpolation which does not dominate the linear interpolation model.
"Some choices in the model are not admissible all the time. Thus, respy"
```python
fig, axs = plt.subplots(1, 2, figsize=(12.8, 4.8))
(df.groupby("Period").Choice.value_counts(normalize=True).unstack()
.plot.bar(ax=axs[0], stacked=True, rot=0, title="Choice Probabilities"))
(df.groupby("Period").Experience_Fishing.value_counts(normalize=True).unstack().plot
.bar(ax=axs[1], stacked=True, rot=0, title="Share of Experience Level per Period", cmap="Reds"))
axs[0].legend(["Fishing", "Hammock"], loc="upper center", bbox_to_anchor=(0.5, -0.15), ncol=2)
axs[1].legend(loc="upper center", bbox_to_anchor=(0.5, -0.15), ncol=6)
plt.show()
plt.close()
```
One can clearly see the proportions of the initial experience levels in the first period.
## Lagged Choices
In the next step, we want to enrich the model with information on the choice in the previous period. Let us assume that Robinson has trouble storing food on a tropical island, so that he incurs a penalty if he enjoys his life in the hammock two times in a row. We add the parameter ``"not_fishing_last_period"`` to the nonpecuniary reward and add a corresponding covariate to the options.
```python
params = """
category,name,value
delta,delta,0.95
wage_fishing,exp_fishing,0.01
nonpec_hammock,constant,1
nonpec_hammock,not_fishing_last_period,-0.5
shocks_sdcorr,sd_fishing,1
shocks_sdcorr,sd_hammock,1
shocks_sdcorr,corr_hammock_fishing,0
initial_exp_fishing_0,probability,0.33
initial_exp_fishing_1,probability,0.33
initial_exp_fishing_2,probability,0.34
"""
options = {
"n_periods": 10,
"simulation_agents": 1_000,
"covariates": {
"constant": "1",
"not_fishing_last_period": "lagged_choice_1 != 'fishing'"
}
}
```
```python
params = pd.read_csv(io.StringIO(params), index_col=["category", "name"])
params
```
| category | name | value |
|---|---|---|
| delta | delta | 0.95 |
| wage_fishing | exp_fishing | 0.01 |
| nonpec_hammock | constant | 1.00 |
| nonpec_hammock | not_fishing_last_period | -0.50 |
| shocks_sdcorr | sd_fishing | 1.00 |
| shocks_sdcorr | sd_hammock | 1.00 |
| shocks_sdcorr | corr_hammock_fishing | 0.00 |
| initial_exp_fishing_0 | probability | 0.33 |
| initial_exp_fishing_1 | probability | 0.33 |
| initial_exp_fishing_2 | probability | 0.34 |
```python
simulate = rp.get_simulate_func(params, options)
df = simulate(params)
```
C:\Users\tobia\git\respy\respy\pre_processing\model_processing.py:470: UserWarning: The distribution of initial lagged choices is insufficiently specified in the parameters. Covariates require 1 lagged choices and parameters define 0. Missing lags have equiprobable choices.
category=UserWarning,
C:\Users\tobia\git\respy\respy\pre_processing\model_processing.py:470: UserWarning: The distribution of initial lagged choices is insufficiently specified in the parameters. Covariates require 1 lagged choices and parameters define 0. Missing lags have equiprobable choices.
category=UserWarning,
The warning is raised because we forgot to specify what individuals in period 0 were doing in the previous period. By default, **respy** assumes that all choices have the same probability of being the previous choice. Note that this might lead to inconsistent states where an individual should have accumulated experience in the previous period, but still starts with zero experience.
Below we see the choice probabilities on the left-hand side and the shares of lagged choices on the right-hand side. Without further information, **respy** makes all previous choices in the first period equiprobable.
If we had defined the covariate with the lagged choice but not added a parameter using the covariate, **respy** would have discarded the covariate and created a model without lagged choices.
```python
fig, axs = plt.subplots(1, 2, figsize=(12.8, 4.8))
(df.groupby("Period").Choice.value_counts(normalize=True).unstack()
.plot.bar(ax=axs[0], stacked=True, rot=0, title="Choice Probabilities"))
(df.groupby("Period").Lagged_Choice_1.value_counts(normalize=True).unstack().plot
.bar(ax=axs[1], stacked=True, rot=0, title="Share of Lagged Choice per Period"))
axs[0].legend(["Fishing", "Hammock"], loc="upper center", bbox_to_anchor=(0.5, -0.15), ncol=2)
axs[1].legend(loc="upper center", bbox_to_anchor=(0.5, -0.15), ncol=2)
plt.show()
plt.close()
```
What if we wanted a more complex distribution of lagged choices? Remember that Robinson starts with 0, 1 or 2 periods of experience in fishing. Thus, it is natural to assume that higher experience levels correspond to a higher probability of having fishing as the previous choice.
This kind of distribution is more complex because it involves covariates. A flexible solution is to use a **multinomial logit** or **softmax function** to map parameters with covariates to probabilities. This makes it possible to specify very complex distributions, even ones which could have been estimated outside the structural model. Here is a short explanation.
For each lag, we have a set of parameters, $\beta^f$ and $\beta^h$, and their corresponding covariates, $x^f$ and $x^h$. The probability $p^f$ for one individual having fishing as their first lagged choice is
$$
p^f = \frac{e^{x^f \beta^f}}{e^{x^f \beta^f} + e^{x^h \beta^h}}
$$
The probability for hammock as the lagged choice is defined analogously.
In the parameters, you can use the ``lagged_choice_1_fishing`` keyword to define the parameters for fishing as the first lag. You can also define higher order lags using, for example, ``lagged_choice_2_*``.
Let us assume that with each level of initial experience, the probability of having chosen fishing in the previous period rises from 0.5 through 0.75 to 1. For zero experience in fishing, the resulting probabilities should be one half for fishing and one half for hammock. First, we create a covariate which is ``True`` if an individual has zero experience in fishing. This covariate is ``{"zero_exp_fishing": "exp_fishing == 0"}``. Then, the parameters for this covariate just have to be equal and can take any value, because the softmax function is shift-invariant. The resulting lines in the csv are
lagged_choice_1_fishing,zero_exp_fishing,1
lagged_choice_1_hammock,zero_exp_fishing,1
For one period of experience, the probabilities are 0.75 for fishing and 0.25 for the hammock. First, define a covariate ``{"one_exp_fishing": "exp_fishing == 1"}``. To get the corresponding softmax coefficients for these probabilities, note that you can rewrite the softmax formula replacing $x_i$ with 1. Then recognize that the sum in the denominator, $C$, is the same for all probabilities and can be discarded. Thus, we can simply use the logs of the probabilities as coefficients.
$$\begin{align}
p_i &= \frac{e^{x_i \beta_i}}{\sum_j e^{x_j \beta_j}} \\
&= \frac{e^{\beta_i}}{\sum_j e^{\beta_j}} \\
\log(p_i) &= \beta_i - \log(\sum_j e^{\beta_j}) \\
&= \beta_i - C
\end{align}$$
The lines in the parameters for $\log(0.75)$ and $\log(0.25)$ are
lagged_choice_1_fishing,one_exp_fishing,-0.2877
lagged_choice_1_hammock,one_exp_fishing,-1.3863
For two periods of experience in fishing, we have to make sure that fishing receives a probability of one, which can be achieved by using a fairly large coefficient such as 6. By default, all missing choices receive a parameter with value -1e300.
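As a quick sanity check of these numbers, here is a small sketch in plain NumPy (the ``softmax`` helper is illustrative and not part of respy): the logs of 0.75 and 0.25 give the coefficients used above, equal coefficients yield equal probabilities, and a very negative coefficient drives a probability to zero.
```python
import numpy as np
def softmax(coeffs):
    # Subtracting the maximum keeps the exponentials numerically stable.
    e = np.exp(coeffs - np.max(coeffs))
    return e / e.sum()
print(np.log([0.75, 0.25]))                    # -> [-0.2877, -1.3863] (rounded)
print(softmax(np.array([1.0, 1.0])))           # equal coefficients -> [0.5, 0.5]
print(softmax(np.array([-0.2877, -1.3863])))   # -> [0.75, 0.25]
print(softmax(np.array([6.0, -1e300])))        # -> [1.0, 0.0]
```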
The final parameters and options are the following:
```python
params = """
category,name,value
delta,delta,0.95
wage_fishing,exp_fishing,0.01
nonpec_hammock,constant,1
nonpec_hammock,not_fishing_last_period,-0.5
shocks_sdcorr,sd_fishing,1
shocks_sdcorr,sd_hammock,1
shocks_sdcorr,corr_hammock_fishing,0
initial_exp_fishing_0,probability,0.33
initial_exp_fishing_1,probability,0.33
initial_exp_fishing_2,probability,0.34
lagged_choice_1_fishing,zero_exp_fishing,1
lagged_choice_1_hammock,zero_exp_fishing,1
lagged_choice_1_fishing,one_exp_fishing,-0.2877
lagged_choice_1_hammock,one_exp_fishing,-1.3863
lagged_choice_1_fishing,two_exp_fishing,6
"""
options = {
"n_periods": 10,
"simulation_agents": 1_000,
"covariates": {
"constant": "1",
"not_fishing_last_period": "lagged_choice_1 != 'fishing'",
"zero_exp_fishing": "exp_fishing == 0",
"one_exp_fishing": "exp_fishing == 1",
"two_exp_fishing": "exp_fishing == 2",
}
}
```
```python
params = pd.read_csv(io.StringIO(params), index_col=["category", "name"])
params
```
| category | name | value |
|---|---|---|
| delta | delta | 0.9500 |
| wage_fishing | exp_fishing | 0.0100 |
| nonpec_hammock | constant | 1.0000 |
| nonpec_hammock | not_fishing_last_period | -0.5000 |
| shocks_sdcorr | sd_fishing | 1.0000 |
| shocks_sdcorr | sd_hammock | 1.0000 |
| shocks_sdcorr | corr_hammock_fishing | 0.0000 |
| initial_exp_fishing_0 | probability | 0.3300 |
| initial_exp_fishing_1 | probability | 0.3300 |
| initial_exp_fishing_2 | probability | 0.3400 |
| lagged_choice_1_fishing | zero_exp_fishing | 1.0000 |
| lagged_choice_1_hammock | zero_exp_fishing | 1.0000 |
| lagged_choice_1_fishing | one_exp_fishing | -0.2877 |
| lagged_choice_1_hammock | one_exp_fishing | -1.3863 |
| lagged_choice_1_fishing | two_exp_fishing | 6.0000 |
```python
simulate = rp.get_simulate_func(params, options)
df = simulate(params)
```
C:\Users\tobia\git\respy\respy\state_space.py:84: UserWarning: Some choices in the model are not admissible all the time. Thus, respy applies a penalty to the utility for these choices which is -400000 by default. For the full solution, the penalty only needs to be larger than all other value functions to be effective. Choose a milder penalty for the interpolation which does not dominate the linear interpolation model.
"Some choices in the model are not admissible all the time. Thus, respy"
The parameters produce the shares of previous choices shown on the left-hand side and the intended conditional probabilities shown on the right-hand side. The small deviations from the target probabilities occur due to the low number of simulated individuals.
```python
fig, axs = plt.subplots(1, 2, figsize=(12.8, 4.8))
(df.groupby("Period").Lagged_Choice_1.value_counts(normalize=True).unstack().plot
.bar(ax=axs[0], stacked=True, rot=0, title="Share of Lagged Choice per Period"))
sns.heatmap((df.query("Period == 0").groupby(["Experience_Fishing"]).Lagged_Choice_1
.value_counts(normalize="rows").unstack().fillna(0)), ax=axs[1], cmap="RdBu_r", annot=True)
axs[0].legend(loc="upper center", bbox_to_anchor=(0.5, -0.15), ncol=6)
plt.show()
plt.close()
```
## Observables
Now we have talked enough about experiences and lagged choices. The last section is about observables, which are other characteristics such as whether the island has rich fishing grounds or not. We can also express the distribution of observables as probability mass functions or softmax functions.
Note that observables are sampled first, before all other characteristics of individuals. After that, experiences and lagged choices follow in chronological order (highest lag first). Every group itself is sorted alphabetically. Keep this in mind if you want to condition a distribution on other characteristics.
For observables, it is also possible to use labels instead of numbers to identify levels, which might be more convenient and intuitive. The evenly distributed levels here are ``"rich"`` and ``"poor"``. The keyword is ``observable_*_*`` and everything after the last ``_`` is considered the name of the level.
```python
params = """
category,name,value
delta,delta,0.95
wage_fishing,exp_fishing,0.01
nonpec_fishing,rich_fishing_grounds,0.5
nonpec_hammock,constant,1
nonpec_hammock,not_fishing_last_period,-0.5
shocks_sdcorr,sd_fishing,1
shocks_sdcorr,sd_hammock,1
shocks_sdcorr,corr_hammock_fishing,0
initial_exp_fishing_0,probability,0.33
initial_exp_fishing_1,probability,0.33
initial_exp_fishing_2,probability,0.34
lagged_choice_1_fishing,zero_exp_fishing,1
lagged_choice_1_hammock,zero_exp_fishing,1
lagged_choice_1_fishing,one_exp_fishing,-0.2877
lagged_choice_1_hammock,one_exp_fishing,-1.3863
lagged_choice_1_fishing,two_exp_fishing,6
observable_fishing_grounds_rich,probability,0.5
observable_fishing_grounds_poor,probability,0.5
"""
options = {
"n_periods": 10,
"simulation_agents": 10_000,
"covariates": {
"constant": "1",
"not_fishing_last_period": "lagged_choice_1 != 'fishing'",
"zero_exp_fishing": "exp_fishing == 0",
"one_exp_fishing": "exp_fishing == 1",
"two_exp_fishing": "exp_fishing == 2",
"rich_fishing_grounds": "fishing_grounds == 'rich'",
}
}
```
```python
params = pd.read_csv(io.StringIO(params), index_col=["category", "name"])
params
```
| category | name | value |
|---|---|---|
| delta | delta | 0.9500 |
| wage_fishing | exp_fishing | 0.0100 |
| nonpec_fishing | rich_fishing_grounds | 0.5000 |
| nonpec_hammock | constant | 1.0000 |
| nonpec_hammock | not_fishing_last_period | -0.5000 |
| shocks_sdcorr | sd_fishing | 1.0000 |
| shocks_sdcorr | sd_hammock | 1.0000 |
| shocks_sdcorr | corr_hammock_fishing | 0.0000 |
| initial_exp_fishing_0 | probability | 0.3300 |
| initial_exp_fishing_1 | probability | 0.3300 |
| initial_exp_fishing_2 | probability | 0.3400 |
| lagged_choice_1_fishing | zero_exp_fishing | 1.0000 |
| lagged_choice_1_hammock | zero_exp_fishing | 1.0000 |
| lagged_choice_1_fishing | one_exp_fishing | -0.2877 |
| lagged_choice_1_hammock | one_exp_fishing | -1.3863 |
| lagged_choice_1_fishing | two_exp_fishing | 6.0000 |
| observable_fishing_grounds_rich | probability | 0.5000 |
| observable_fishing_grounds_poor | probability | 0.5000 |
```python
simulate = rp.get_simulate_func(params, options)
df = simulate(params)
```
```python
fig, axs = plt.subplots(1, 2, figsize=(12.8, 4.8))
(df.groupby("Fishing_Grounds").Choice.value_counts(normalize=True).unstack().plot
.bar(ax=axs[0], stacked=True, rot=0, title="Choice Probabilities"))
(df.Fishing_Grounds.value_counts(normalize=True)
.fillna(0).plot.bar(stacked=True, ax=axs[1], rot=0, title="Observable Distribution"))
axs[0].legend(loc="upper center", bbox_to_anchor=(0.5, -0.15), ncol=2)
axs[1].legend(loc="upper center", bbox_to_anchor=(0.5, -0.15), ncol=2)
plt.show()
plt.close()
```
You can see in the figure on the right-hand side that poor and rich fishing grounds are evenly distributed. The figure on the left-hand side shows that rich fishing grounds lead to higher engagement in fishing.
## Conclusion
We showed ...
- how to express different distributions, probability mass functions or softmax functions, in the parameters of a respy model.
- that you can use numbers and labels for discrete levels of observables.
Happy modeling!
# Normalizing flows in PyTorch
One of the key problems in modern generative models is to find ways of optimizing the probability distribution of a given set of data. The recent idea of *Normalizing Flows* [[1](#reference1),[2](#reference)] addresses this problem and makes it possible to rely on richer probability distributions. The main idea is to start from a simple probability distribution and approximate a complex multimodal density by *transforming* the simpler density through a sequence of invertible nonlinear transforms. To fully understand this powerful tool, we will see in this tutorial
1. The new [PyTorch distributions](#distribs) module and how to use it
2. How transforming a distribution is expressed as a [change of variables](#change) leading to a flow
3. How we can [chain multiple transforms](#chaining) leading to the overall framework of normalizing flows
4. Understanding the original [planar flow](#planar), its parameters and how to implement it
5. Defining [learnable flows](#learning) and performing optimization on a target density
<a id="distribs"></a>
### PyTorch distributions
In this tutorial, we are going to rely on the novel [PyTorch distributions module](https://pytorch.org/docs/stable/_modules/torch/distributions/), which is defined in `torch.distributions`. Most notably, we are going to rely both on the `Distribution` and `Transform` objects.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.distributions as distrib
import torch.distributions.transforms as transform
# Imports for plotting
import numpy as np
import matplotlib.pyplot as plt
from helper_plot import hdr_plot_style
hdr_plot_style()
# Define grids of points (for later plots)
x = np.linspace(-4, 4, 1000)
z = np.array(np.meshgrid(x, x)).transpose(1, 2, 0)
z = np.reshape(z, [z.shape[0] * z.shape[1], -1])
```
Inside this toolbox, we can already find some of the major probability distributions that we are used to dealing with
```python
p = distrib.Normal(loc=0, scale=1)
p = distrib.Bernoulli(probs=torch.tensor([0.5]))
p = distrib.Beta(concentration1=torch.tensor([0.5]), concentration0=torch.tensor([0.5]))
p = distrib.Gamma(concentration=torch.tensor([1.0]), rate=torch.tensor([1.0]))
p = distrib.Pareto(alpha=torch.tensor([1.0]), scale=torch.tensor([1.0]))
```
The interesting aspect of these `Distribution` objects is that we can not only obtain samples from them through the `sample` (or `sample_n`) function, but also evaluate the analytical density at any given point through the `log_prob` function
```python
# Based on a normal
n = distrib.Normal(0, 1)
# Obtain some samples
samples = n.sample((1000, ))
# Evaluate true density at given points
density = torch.exp(n.log_prob(torch.Tensor(x))).numpy()
# Plot both samples and density
fig, (ax1, ax2) = plt.subplots(1, 2, sharex=True, figsize=(15, 4))
ax1.hist(samples, 50, alpha=0.8);
ax1.set_title('Empirical samples', fontsize=18);
ax2.plot(x, density); ax2.fill_between(x, density, 0, alpha=0.5)
ax2.set_title('True density', fontsize=18);
```
<a id="change"></a>
## Transforming distributions
### Change of variables and flow
In order to transform a probability distribution, we can perform a *change of variable*. As we are interested in probability distributions, we need to *scale* our transformed density so that the total probability still sums to one. This is directly measured with the determinant of our transform.
Let $\mathbf{z}\in\mathcal{R}^d$ be a random variable with distribution $q(\mathbf{z})$ and $f:\mathcal{R}^d\rightarrow\mathcal{R}^d$ an invertible smooth mapping (meaning that $f^{-1} = g$ and $g\circ f(\mathbf{z})=\mathbf{z}$). We can use $f$ to transform $\mathbf{z}\sim q(\mathbf{z})$. The resulting random variable $\mathbf{z}'=f(\mathbf{z})$ has the following probability distribution
$$
q(\mathbf{z}')=q(\mathbf{z})\left|\text{ det}\frac{\delta f^{-1}}{\delta \mathbf{z}'}\right| = q(\mathbf{z})\left|\text{ det}\frac{\delta f}{\delta \mathbf{z}}\right|^{-1}
\tag{1}
$$
where the last equality is obtained through both the inverse function theorem [1] and the property of Jacobians of invertible functions. Therefore, we can transform probability distributions with this property.
Fortunately, this can be easily implemented in PyTorch with the `Transform` classes, that already defines some basic probability distribution transforms. For instance, if we define $\mathbf{z}\sim q_0(\mathbf{z})=\mathcal{N}(0, 1)$, we can apply the transform $\mathbf{z}'=exp(\mathbf{z})$ so that $\mathbf{z}'\sim q_1(\mathbf{z}')$
```python
q0 = distrib.Normal(0, 1)
exp_t = transform.ExpTransform()
q1 = distrib.TransformedDistribution(q0, exp_t)
samples_q0 = q0.sample((int(1e4),))
samples_q1 = q1.sample((int(1e4),))
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 4))
ax1.hist(samples_q0, 50, alpha=0.8);
ax1.set_title('$q_0 = \mathcal{N}(0,1)$', fontsize=18);
ax2.hist(samples_q1, 50, alpha=0.8, color='g');
ax2.set_title('$q_1=exp(q_0)$', fontsize=18);
```
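To connect this back to formula (1), here is a small numerical check (a sketch; it reuses the `q0`, `exp_t` and `q1` objects defined above): the log-density of the transformed distribution equals the base log-density minus the log-determinant of the Jacobian of the transform.
```python
z0 = torch.tensor(0.3)
z1 = exp_t(z0)  # z' = exp(z)
lhs = q1.log_prob(z1)
rhs = q0.log_prob(z0) - exp_t.log_abs_det_jacobian(z0, z1)
print(lhs.item(), rhs.item())  # the two values should match
```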
But remember that, since the objects `q0` and `q1` are defined as `Distribution` objects, we can also evaluate their true densities instead of relying only on empirical samples
```python
hdr_plot_style()
x2 = np.linspace(-0.5, 7.5, 1000)
q0_density = torch.exp(q0.log_prob(torch.Tensor(x))).numpy()
q1_density = torch.exp(q1.log_prob(torch.Tensor(x2))).numpy()
fig, (ax1, ax2) = plt.subplots(1, 2, sharex=True, figsize=(15, 5))
ax1.plot(x, q0_density); ax1.fill_between(x, q0_density, 0, alpha=0.5)
ax1.set_title('$q_0 = \mathcal{N}(0,1)$', fontsize=18);
ax2.plot(x2, q1_density, color='g'); ax2.fill_between(x2, q1_density, 0, alpha=0.5, color='g')
ax2.set_title('$q_1=exp(q_0)$', fontsize=18);
fig.savefig('transform.pdf')
```
What we obtain here with `q1` is actually the `LogNormal` distribution. Interestingly, several distributions in the `torch.distributions` module are already defined based on `TransformedDistribution`. You can convince yourself of that by lurking in the code of the [`torch.distributions.LogNormal`](https://pytorch.org/docs/stable/_modules/torch/distributions/log_normal.html#LogNormal)
<a id="chaining"></a>
### Chaining transforms (normalizing flows)
Now, if we start with a random vector $\mathbf{z}_0$ with distribution $q_0$, we can apply a series of mappings $f_i$, $i \in 1,\cdots,k$ with $k\in\mathcal{N}^{+}$ and obtain a normalizing flow. Hence, if we apply $k$ normalizing flows, we obtain a chain of change of variables
$$
\mathbf{z}_k=f_k\circ f_{k-1}\circ...\circ f_1(\mathbf{z}_0)
\tag{2}
$$
Therefore the distribution of $\mathbf{z}_k\sim q_k(\mathbf{z}_k)$ will be given by
$$
\begin{align}
q_k(\mathbf{z}_k) &= q_0(f_1^{-1} \circ f_{2}^{-1} \circ ... \circ f_k^{-1}(\mathbf{z}_k))\prod_{i=1}^k\left|\text{det}\frac{\delta f^{-1}_i}{\delta\mathbf{z}_{i}}\right|\\
&= q_0(\mathbf{z_0})\prod_{i=1}^k\left|\text{det}\frac{\delta f_i}{\delta\mathbf{z}_{i-1}}\right|^{-1}
\end{align}
\tag{3}
$$
where we compute the determinant of the Jacobian of each normalizing flow (as explained in the previous section). This series of transformations can transform a simple probability distribution (e.g. Gaussian) into a complicated multi-modal one. As usual, we will rely on log-probabilities to simplify the computation and obtain
$$
\text{log} q_K(\mathbf{z}_k) = \text{log} q_0(\mathbf{z}_0) - \sum_{i=1}^{k} \text{log} \left|\text{det}\frac{\delta f_i}{\delta\mathbf{z}_{i-1}}\right|
\tag{4}
$$
To be of practical use, however, we consider only transformations whose determinants of Jacobians are easy to compute. Of course, we can chain any number of transformations, and this also works with multivariate distributions. Here, this is demonstrated by transforming a `MultivariateNormal` successively with an `ExpTransform` and an `AffineTransform`. (Note that the final distribution `q2` is defined as a `TransformedDistribution` directly with a *sequence* of transformations)
```python
q0 = distrib.MultivariateNormal(torch.zeros(2), torch.eye(2))
# Define an affine transform
f1 = transform.ExpTransform()
q1 = distrib.TransformedDistribution(q0, f1)
# Define an additional transform
f2 = transform.AffineTransform(2, torch.Tensor([0.2, 1.5]))
# Here I define on purpose q2 as a sequence of transforms on q0
q2 = distrib.TransformedDistribution(q0, [f1, f2])
# Plot all these lads
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(15, 5))
ax1.hexbin(z[:,0], z[:,1], C=torch.exp(q0.log_prob(torch.Tensor(z))), cmap='rainbow')
ax1.set_title('$q_0 = \mathcal{N}(\mathbf{0},\mathbb{I})$', fontsize=18);
ax2.hexbin(z[:,0], z[:,1], C=torch.exp(q1.log_prob(torch.Tensor(z))), cmap='rainbow')
ax2.set_title('$q_1=exp(q_0)$', fontsize=18);
ax3.hexbin(z[:,0], z[:,1], C=torch.exp(q2.log_prob(torch.Tensor(z))), cmap='rainbow')
ax3.set_title('$q_2=Affine(exp(q_0))$', fontsize=18);
```
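Before moving on, we can also check equation (3) numerically for this chained example (a sketch; it reuses `q0`, `f1`, `f2` and `q2` from the cell above): the log-density of `q2` at a transformed point equals the base log-density minus the accumulated log-determinants.
```python
z0 = torch.tensor([0.1, 0.2])
z1 = f1(z0); z2 = f2(z1)
lhs = q2.log_prob(z2)
rhs = q0.log_prob(z0) \
      - f1.log_abs_det_jacobian(z0, z1).sum() \
      - f2.log_abs_det_jacobian(z1, z2).sum()
print(lhs.item(), rhs.item())  # the two values should agree
```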
<a id="planar"></a>
## Normalizing flows
Now we are interested in normalizing flows because we can define our own flows and, most importantly, optimize the parameters of these flows in order to fit complex and richer probability distributions. We will see how this plays out by implementing the *planar flow* proposed in the original paper by Rezende [1].
### Planar flow
A planar normalizing flow is defined as a function of the form
$$
f(\mathbf{z})=\mathbf{z}+\mathbf{u}h(\mathbf{w}^T\mathbf{z}+b)
\tag{5}
$$
where $\mathbf{u}\in\mathbb{R}^D$ and $\mathbf{w}\in\mathbb{R}^D$ are vectors (called here scale and weight), $b\in\mathbb{R}$ is a scalar (bias) and $h$ is an activation function. These transform functions are chosen so that
1. the determinant of their Jacobian can be computed in linear time
2. the transformation is invertible (usually only under mild conditions)
As shown in the paper, for the planar flow, the determinant of the Jacobian can be computed in $O(D)$ time by relying on the matrix determinant lemma
$$
\psi(\mathbf{z})=h'(\mathbf{w}^T\mathbf{z}+b)\mathbf{w}
\tag{6}
$$
$$
\left|\text{det}\frac{\delta f}{\delta\mathbf{z}}\right| = \left|\text{det}\left(\mathbf{I}+\mathbf{u}\psi(\mathbf{z})^{T}\right)\right|=\left|1+\mathbf{u}^T\psi(\mathbf{z})\right|
\tag{7}
$$
Therefore, we have all the definitions that we need to implement this flow as a `Transform` object. Note that here the non-linear activation function $h$ is chosen to be $\tanh$, so the derivative $h'$ is $1-\tanh(x)^2$
```python
class PlanarFlow(transform.Transform):
def __init__(self, weight, scale, bias):
super(PlanarFlow, self).__init__()
self.bijective = False
self.weight = weight
self.scale = scale
self.bias = bias
def _call(self, z):
######################
# YOUR CODE GOES HERE
######################
return f_z
def log_abs_det_jacobian(self, z):
######################
# YOUR CODE GOES HERE
######################
return abs_log_det
```
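The two bodies above are left as an exercise. For reference, here is one possible way to fill them in, following equations (5)-(7); this is a sketch, not necessarily the author's intended solution, and it assumes `z` has shape `(batch, D)` with `weight`, `scale` and `bias` shaped as in the cells below.
```python
def planar_call(self, z):
    # f(z) = z + u * tanh(w^T z + b)
    f_z = z + self.scale * torch.tanh(z @ self.weight.t() + self.bias)
    return f_z
def planar_log_abs_det_jacobian(self, z):
    # psi(z) = h'(w^T z + b) * w, with h = tanh and h'(a) = 1 - tanh(a)^2
    activation = torch.tanh(z @ self.weight.t() + self.bias)
    psi = (1 - activation ** 2) * self.weight
    # log |det df/dz| = log |1 + u^T psi(z)|   (matrix determinant lemma)
    return torch.log(torch.abs(1 + psi @ self.scale.t()) + 1e-8)
# Attach them to the class so the cells below can use flow_0 directly:
PlanarFlow._call = planar_call
PlanarFlow.log_abs_det_jacobian = planar_log_abs_det_jacobian
```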
As before, we can witness the effect of this transform on a given `MultivariateNormal` distribution. Note that here I use the analytical density for `q0`, but display only empirical samples from `q1`.
```python
w = torch.Tensor([[3., 0]])
u = torch.Tensor([[2, 0]])
b = torch.Tensor([0])
q0 = distrib.MultivariateNormal(torch.zeros(2), torch.eye(2))
flow_0 = PlanarFlow(w, u, b)
q1 = distrib.TransformedDistribution(q0, flow_0)
q1_samples = q1.sample((int(1e6), ))
# Plot this
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 5))
ax1.hexbin(z[:,0], z[:,1], C=torch.exp(q0.log_prob(torch.Tensor(z))), cmap='rainbow')
ax1.set_title('$q_0 = \mathcal{N}(\mathbf{0},\mathbb{I})$', fontsize=18);
ax2.hexbin(q1_samples[:,0], q1_samples[:,1], cmap='rainbow')
ax2.set_title('$q_1=planar(q_0)$', fontsize=18);
```
The reason for this is that the `PlanarFlow` is not invertible in all regions of the space. However, if we recall the mathematical reasoning of the previous section, we can still see how the change of variables plays out, because we are able to compute the determinant of the Jacobian of this transform.
```python
q0_density = torch.exp(q0.log_prob(torch.Tensor(z)))
# Apply our transform on coordinates
f_z = flow_0(torch.Tensor(z))
# Obtain our density
q1_density = q0_density.squeeze() / np.exp(flow_0.log_abs_det_jacobian(torch.Tensor(z)).squeeze())
# Plot this
fig, (ax1, ax2) = plt.subplots(1, 2, sharey=True, figsize=(15, 5))
ax1.hexbin(z[:,0], z[:,1], C=q0_density.numpy().squeeze(), cmap='rainbow')
ax1.set_title('$q_0 = \mathcal{N}(\mathbf{0},\mathbb{I})$', fontsize=18);
ax2.hexbin(f_z[:,0], f_z[:,1], C=q1_density.numpy().squeeze(), cmap='rainbow')
ax2.set_title('$q_1=planar(q_0)$', fontsize=18);
```
So we were able to "split" our distribution and transform a unimodal Gaussian into a multimodal distribution! Pretty neat
### Visualizing parameters effects
Here, we provide a little toy example so that you can play around with the parameters of the flow in order to get a better understanding of how it operates. As put forward by Rezende [1], this flow is related to the hyperplane defined by $\mathbf{w}^{T}\mathbf{z}+b=0$ and transforms the original density by applying a series of contractions and expansions in the direction perpendicular to this hyperplane.
```python
id_figure=1
plt.figure(figsize=(16, 18))
for i in np.arange(5):
# Draw a random hyperplane
w = torch.rand(1, 2) * 5
b = torch.rand(1) * 5
for j in np.arange(5):
# Different effects of scaling factor u on the same hyperplane (row)
u = torch.Tensor([[((j < 3) and (j / 2.0) or 0), ((j > 2) and ((j - 2) / 2.0) or 0)]])
flow_0 = PlanarFlow(w, u, b)
q1 = distrib.TransformedDistribution(q0, flow_0)
q1_samples = q1.sample((int(1e6), ))
plt.subplot(5,5,id_figure)
plt.hexbin(q1_samples[:,0], q1_samples[:,1], cmap='rainbow')
plt.title("u=(%.1f,%.1f)"%(u[0,0],u[0,1]) + " w=(%d,%d)"%(w[0,0],w[0,1]) + ", " + "b=%d"%b)
plt.xlim([-3, 3])
plt.ylim([-3, 3])
id_figure += 1
```
<a id="learning"></a>
## Optimizing normalizing flows
Now that we have this magnificent tool, we would like to apply it in order to learn richer distributions and perform *inference*. We now have to deal with the fact that the `Transform` object is not inherently parametric and cannot yet be optimized like other modules.
To do so, we will start by defining our own `Flow` class, which can be seen both as a `Transform` and as a `Module` that can be optimized
```python
class Flow(transform.Transform, nn.Module):
def __init__(self):
transform.Transform.__init__(self)
nn.Module.__init__(self)
# Init all parameters
def init_parameters(self):
for param in self.parameters():
param.data.uniform_(-0.01, 0.01)
# Hacky hash bypass
def __hash__(self):
return nn.Module.__hash__(self)
```
Thanks to this little trick, we can use the same `PlanarFlow` class as before, which we repeat here just to show that the only change is that it now inherits from the `Flow` class (with the small added bonus that the parameters of this flow are now also registered in the `Module` interface)
```python
class PlanarFlow(Flow):
def __init__(self, dim):
super(PlanarFlow, self).__init__()
self.weight = nn.Parameter(torch.Tensor(1, dim))
self.scale = nn.Parameter(torch.Tensor(1, dim))
self.bias = nn.Parameter(torch.Tensor(1))
self.init_parameters()
def _call(self, z):
######################
# YOUR CODE GOES HERE
######################
return f_z
def log_abs_det_jacobian(self, z):
######################
# YOUR CODE GOES HERE
######################
return abs_log_det
```
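The placeholder bodies here can be filled in exactly as in the sketch above; the only difference is that `weight`, `scale` and `bias` are now `nn.Parameter` tensors, so they are registered by the `Module` machinery and will receive gradients. A quick check of the registration (a sketch):
```python
single_flow = PlanarFlow(dim=2)
print([tuple(p.shape) for p in single_flow.parameters()])  # [(1, 2), (1, 2), (1,)]
```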
Now let's say that we have a given complex density that we aim to model through normalizing flows, such as the following one
```python
def density_ring(z):
z1, z2 = torch.chunk(z, chunks=2, dim=1)
norm = torch.sqrt(z1 ** 2 + z2 ** 2)
exp1 = torch.exp(-0.5 * ((z1 - 2) / 0.8) ** 2)
exp2 = torch.exp(-0.5 * ((z1 + 2) / 0.8) ** 2)
u = 0.5 * ((norm - 4) / 0.4) ** 2 - torch.log(exp1 + exp2)
return torch.exp(-u)
# Plot it
x = np.linspace(-5, 5, 1000)
z = np.array(np.meshgrid(x, x)).transpose(1, 2, 0)
z = np.reshape(z, [z.shape[0] * z.shape[1], -1])
plt.hexbin(z[:,0], z[:,1], C=density_ring(torch.Tensor(z)).numpy().squeeze(), cmap='rainbow')
plt.title('Target density', fontsize=18);
```
Now, to approximate such a complicated density, we will need to chain multiple planar flows and optimize their parameters to find a suitable approximation. We can do exactly that as in the following (you can see that we start from a simple normal density and apply 16 successive planar flows)
```python
# Main class for normalizing flow
class NormalizingFlow(nn.Module):
def __init__(self, dim, flow_length, density):
super().__init__()
biject = []
for f in range(flow_length):
biject.append(PlanarFlow(dim))
self.transforms = transform.ComposeTransform(biject)
self.bijectors = nn.ModuleList(biject)
self.base_density = density
self.final_density = distrib.TransformedDistribution(density, self.transforms)
self.log_det = []
def forward(self, z):
self.log_det = []
# Applies series of flows
for b in range(len(self.bijectors)):
self.log_det.append(self.bijectors[b].log_abs_det_jacobian(z))
z = self.bijectors[b](z)
return z, self.log_det
# Create normalizing flow
flow = NormalizingFlow(dim=2, flow_length=16, density=distrib.MultivariateNormal(torch.zeros(2), torch.eye(2)))
```
Now the only missing ingredient is the loss function that is simply defined as follows
```python
def loss(density, zk, log_jacobians):
######################
# YOUR CODE GOES HERE
######################
return loss
```
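The body of this function is left as an exercise. One possible implementation is a Monte-Carlo estimate of the reverse KL divergence between the flowed distribution and the (unnormalized) target density: dropping the constant base-entropy term gives the sketch below (an assumption on my part, not necessarily the author's intended solution; naming it `loss` makes the training loop below runnable).
```python
def loss(density, zk, log_jacobians):
    # -E[ log p_target(z_k) + sum_t log |det df_t/dz| ], up to an additive constant
    sum_of_log_jacobians = sum(log_jacobians)
    return (-sum_of_log_jacobians - torch.log(density(zk) + 1e-9)).mean()
```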
We can now perform optimization as usual by defining an optimizer, the parameters it will act on, and possibly a learning rate scheduler
```python
import torch.optim as optim
# Create optimizer algorithm
optimizer = optim.Adam(flow.parameters(), lr=2e-3)
# Add learning rate scheduler
scheduler = optim.lr_scheduler.ExponentialLR(optimizer, 0.9999)
```
And now we run the optimization loop by sampling a batch (here of size 512) from the reference Normal distribution, and then evaluating our loss with respect to the density we want to approximate.
```python
ref_distrib = distrib.MultivariateNormal(torch.zeros(2), torch.eye(2))
id_figure=2
plt.figure(figsize=(16, 18))
plt.subplot(3,4,1)
plt.hexbin(z[:,0], z[:,1], C=density_ring(torch.Tensor(z)).numpy().squeeze(), cmap='rainbow')
plt.title('Target density', fontsize=15);
# Main optimization loop
for it in range(10001):
# Draw a sample batch from Normal
samples = ref_distrib.sample((512, ))
# Evaluate flow of transforms
zk, log_jacobians = flow(samples)
# Evaluate loss and backprop
optimizer.zero_grad()
loss_v = loss(density_ring, zk, log_jacobians)
loss_v.backward()
optimizer.step()
scheduler.step()
if (it % 1000 == 0):
print('Loss (it. %i) : %f'%(it, loss_v.item()))
# Draw random samples
samples = ref_distrib.sample((int(1e5), ))
# Evaluate flow and plot
zk, _ = flow(samples)
zk = zk.detach().numpy()
plt.subplot(3,4,id_figure)
plt.hexbin(zk[:,0], zk[:,1], cmap='rainbow')
plt.title('Iter.%i'%(it), fontsize=15);
id_figure += 1
```
That concludes this tutorial! In the next one, we will see how to implement more complicated flows and how this can fit into a global inference framework
### References
<a id="reference1"></a>
[1] Rezende, Danilo Jimenez, and Shakir Mohamed. "Variational inference with normalizing flows." _arXiv preprint arXiv:1505.05770_ (2015). [link](http://arxiv.org/pdf/1505.05770)
[2] Kingma, Diederik P., Tim Salimans, and Max Welling. "Improving Variational Inference with Inverse Autoregressive Flow." _arXiv preprint arXiv:1606.04934_ (2016). [link](https://arxiv.org/abs/1606.04934)
[3] Germain, Mathieu, et al. "MADE: Masked autoencoder for distribution estimation." _International Conference on Machine Learning_ (2015).
### Inspirations and resources
https://blog.evjang.com/2018/01/nf1.html
https://github.com/ex4sperans/variational-inference-with-normalizing-flows
https://akosiorek.github.io/ml/2018/04/03/norm_flows.html
https://github.com/abdulfatir/normalizing-flows
# Fun with Hidden Markov Models
*by Loren Lugosch*
This notebook introduces the Hidden Markov Model (HMM), a simple model for sequential data.
We will see:
- what an HMM is and when you might want to use it;
- the so-called "three problems" of an HMM; and
- how to implement an HMM in PyTorch.
(The code in this notebook can also be found at https://github.com/lorenlugosch/pytorch_HMM.)
EDIT: The first version of this notebook normalized the transition matrix along the wrong axis. I've fixed that now.
A hypothetical scenario
------
To motivate the use of HMMs, imagine that you have a friend who gets to do a lot of travelling. Every day, this jet-setting friend sends you a selfie from the city they’re in, to make you envious.
How would you go about guessing which city the friend is in each day, just by looking at the selfies?
If the selfie contains a really obvious landmark, like the Eiffel Tower, it will be easy to figure out where the photo was taken. If not, it will be a lot harder to infer the city.
But we have a clue to help us: the city the friend is in each day is not totally random. For example, the friend will probably remain in the same city for a few days to sightsee before flying to a new city.
## The HMM setup
The hypothetical scenario of the friend travelling between cities and sending you selfies can be modeled using an HMM.
An HMM models a system that is in a particular state at any given time and produces an output that depends on that state.
At each timestep or clock tick, the system randomly decides on a new state and jumps into that state. The system then randomly generates an observation. The states are "hidden": we can't observe them. (In the cities/selfies analogy, the unknown cities would be the hidden states, and the selfies would be the observations.)
Let's denote the sequence of states as $\mathbf{z} = \{z_1, z_2, \dots, z_T \}$, where each state is one of a finite set of $N$ states, and the sequence of observations as $\mathbf{x} = \{x_1, x_2, \dots, x_T\}$. The observations could be discrete, like letters, or real-valued, like audio frames.
An HMM makes two key assumptions:
- **Assumption 1:** The state at time $t$ depends *only* on the state at the previous time $t-1$.
- **Assumption 2:** The output at time $t$ depends *only* on the state at time $t$.
These two assumptions make it possible to efficiently compute certain quantities that we may be interested in.
## Components of an HMM
An HMM has three sets of trainable parameters.
- The **transition model** is a square matrix $A$, where $A_{s, s'}$ represents $p(z_t = s|z_{t-1} = s')$, the probability of jumping from state $s'$ to state $s$.
- The **emission model** $b_s(x_t)$ tells us $p(x_t|z_t = s)$, the probability of generating $x_t$ when the system is in state $s$. For discrete observations, which we will use in this notebook, the emission model is just a lookup table, with one row for each state, and one column for each observation. For real-valued observations, it is common to use a Gaussian mixture model or neural network to implement the emission model.
- The **state priors** tell us $p(z_1 = s)$, the probability of starting in state $s$. We use $\pi$ to denote the vector of state priors, so $\pi_s$ is the state prior for state $s$.
Let's program an HMM class in PyTorch.
```
import torch
import numpy as np
class HMM(torch.nn.Module):
"""
Hidden Markov Model with discrete observations.
"""
def __init__(self, M, N):
super(HMM, self).__init__()
self.M = M # number of possible observations
self.N = N # number of states
# A
self.transition_model = TransitionModel(self.N)
# b(x_t)
self.emission_model = EmissionModel(self.N,self.M)
# pi
self.unnormalized_state_priors = torch.nn.Parameter(torch.randn(self.N))
# use the GPU
self.is_cuda = torch.cuda.is_available()
if self.is_cuda: self.cuda()
class TransitionModel(torch.nn.Module):
def __init__(self, N):
super(TransitionModel, self).__init__()
self.N = N
self.unnormalized_transition_matrix = torch.nn.Parameter(torch.randn(N,N))
class EmissionModel(torch.nn.Module):
def __init__(self, N, M):
super(EmissionModel, self).__init__()
self.N = N
self.M = M
self.unnormalized_emission_matrix = torch.nn.Parameter(torch.randn(N,M))
```
To sample from the HMM, we start by picking a random initial state from the state prior distribution.
Then, we sample an output from the emission distribution, sample a transition from the transition distribution, and repeat.
(Notice that we pass the unnormalized model parameters through a softmax function to make them into probabilities.)
```
def sample(self, T=10):
state_priors = torch.nn.functional.softmax(self.unnormalized_state_priors, dim=0)
transition_matrix = torch.nn.functional.softmax(self.transition_model.unnormalized_transition_matrix, dim=0)
emission_matrix = torch.nn.functional.softmax(self.emission_model.unnormalized_emission_matrix, dim=1)
# sample initial state
z_t = torch.distributions.categorical.Categorical(state_priors).sample().item()
z = []; x = []
z.append(z_t)
for t in range(0,T):
# sample emission
x_t = torch.distributions.categorical.Categorical(emission_matrix[z_t]).sample().item()
x.append(x_t)
# sample transition
z_t = torch.distributions.categorical.Categorical(transition_matrix[:,z_t]).sample().item()
if t < T-1: z.append(z_t)
return x, z
# Add the sampling method to our HMM class
HMM.sample = sample
```
Let's try hard-coding an HMM for generating fake words. (We'll also add some helper functions for encoding and decoding strings.)
We will assume that the system has one state for generating vowels and one state for generating consonants, and the transition matrix has 0s on the diagonal---in other words, the system cannot stay in the vowel state or the consonant state for more than one timestep; it has to switch.
Since we pass the transition matrix through a softmax, to get 0s we set the unnormalized parameter values to $-\infty$.
```
import string
alphabet = string.ascii_lowercase
def encode(s):
"""
Convert a string into a list of integers
"""
x = [alphabet.index(ss) for ss in s]
return x
def decode(x):
"""
Convert list of ints to string
"""
s = "".join([alphabet[xx] for xx in x])
return s
# Initialize the model
model = HMM(M=len(alphabet), N=2)
# Hard-wiring the parameters!
# Let state 0 = consonant, state 1 = vowel
model.unnormalized_state_priors[0] = 0. # Let's start with a consonant more frequently
model.unnormalized_state_priors[1] = -0.5
print("State priors:", torch.nn.functional.softmax(model.unnormalized_state_priors, dim=0))
# In state 0, only allow consonants; in state 1, only allow vowels
vowel_indices = torch.tensor([alphabet.index(letter) for letter in "aeiou"])
consonant_indices = torch.tensor([alphabet.index(letter) for letter in "bcdfghjklmnpqrstvwxyz"])
model.emission_model.unnormalized_emission_matrix[0, vowel_indices] = -np.inf
model.emission_model.unnormalized_emission_matrix[1, consonant_indices] = -np.inf
print("Emission matrix:", torch.nn.functional.softmax(model.emission_model.unnormalized_emission_matrix, dim=1))
# Only allow vowel -> consonant and consonant -> vowel
model.transition_model.unnormalized_transition_matrix[0,0] = -np.inf # consonant -> consonant
model.transition_model.unnormalized_transition_matrix[0,1] = 0. # vowel -> consonant
model.transition_model.unnormalized_transition_matrix[1,0] = 0. # consonant -> vowel
model.transition_model.unnormalized_transition_matrix[1,1] = -np.inf # vowel -> vowel
print("Transition matrix:", torch.nn.functional.softmax(model.transition_model.unnormalized_transition_matrix, dim=0))
```
State priors: tensor([0.6225, 0.3775], device='cuda:0', grad_fn=<SoftmaxBackward>)
Emission matrix: tensor([[0.0000, 0.0896, 0.1045, 0.0080, 0.0000, 0.0454, 0.0211, 0.0725, 0.0000,
0.0305, 0.0452, 0.0340, 0.0340, 0.0068, 0.0000, 0.2051, 0.0106, 0.0567,
0.0259, 0.0527, 0.0000, 0.0185, 0.0388, 0.0331, 0.0641, 0.0027],
[0.1517, 0.0000, 0.0000, 0.0000, 0.2175, 0.0000, 0.0000, 0.0000, 0.2027,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.2571, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.1709, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000]],
device='cuda:0', grad_fn=<SoftmaxBackward>)
Transition matrix: tensor([[0., 1.],
[1., 0.]], device='cuda:0', grad_fn=<SoftmaxBackward>)
Try sampling from our hard-coded model:
```
# Sample some outputs
for _ in range(4):
sampled_x, sampled_z = model.sample(T=5)
print("x:", decode(sampled_x))
print("z:", sampled_z)
```
x: xutek
z: [0, 1, 0, 1, 0]
x: bipuk
z: [0, 1, 0, 1, 0]
x: pogut
z: [0, 1, 0, 1, 0]
x: coraw
z: [0, 1, 0, 1, 0]
## The Three Problems
In a [classic tutorial](https://www.cs.cmu.edu/~cga/behavior/rabiner1.pdf) on HMMs, Lawrence Rabiner describes "three problems" that need to be solved before you can effectively use an HMM. They are:
- Problem 1: How do we efficiently compute $p(\mathbf{x})$?
- Problem 2: How do we find the most likely state sequence $\mathbf{z}$ that could have generated the data?
- Problem 3: How do we train the model?
In the rest of the notebook, we will see how to solve each problem and implement the solutions in PyTorch.
### Problem 1: How do we compute $p(\mathbf{x})$?
#### *Why?*
Why might we care about computing $p(\mathbf{x})$? Here's two reasons.
* Given two HMMs, $\theta_1$ and $\theta_2$, we can compute the likelihood of some data $\mathbf{x}$ under each model, $p_{\theta_1}(\mathbf{x})$ and $p_{\theta_2}(\mathbf{x})$, to decide which model is a better fit to the data.
(For example, given an HMM for English speech and an HMM for French speech, we could compute the likelihood given each model, and pick the model with the higher likelihood to infer whether the person is speaking English or French.)
* Being able to compute $p(\mathbf{x})$ gives us a way to train the model, as we will see later.
#### *How?*
Given that we want $p(\mathbf{x})$, how do we compute it?
We've assumed that the data is generated by visiting some sequence of states $\mathbf{z}$ and picking an output $x_t$ for each $z_t$ from the emission distribution $p(x_t|z_t)$. So if we knew $\mathbf{z}$, then the probability of $\mathbf{x}$ could be computed as follows:
$$p(\mathbf{x}|\mathbf{z}) = \prod_{t} p(x_t|z_t) p(z_t|z_{t-1})$$
However, we don't know $\mathbf{z}$; it's hidden. But we do know the probability of any given $\mathbf{z}$, independent of what we observe. So we could get the probability of $\mathbf{x}$ by summing over the different possibilities for $\mathbf{z}$, like this:
$$p(\mathbf{x}) = \sum_{\mathbf{z}} p(\mathbf{x}|\mathbf{z}) p(\mathbf{z}) = \sum_{\mathbf{z}} \prod_{t} p(x_t|z_t) p(z_t|z_{t-1})$$
The problem is: if you try to take that sum directly, you will need to compute $N^T$ terms. This is impossible to do for anything but very short sequences. For example, let's say the sequence is of length $T=100$ and there are $N=2$ possible states. Then we would need to check $N^T = 2^{100} \approx 10^{30}$ different possible state sequences.
We need a way to compute $p(\mathbf{x})$ that doesn't require us to explicitly calculate all $N^T$ terms. For this, we use the forward algorithm.
________
<u><b>The Forward Algorithm</b></u>
> for $s=1 \rightarrow N$:\
> $\alpha_{s,1} := b_s(x_1) \cdot \pi_s$
>
> for $t = 2 \rightarrow T$:\
> for $s = 1 \rightarrow N$:\
>
> $\alpha_{s,t} := b_s(x_t) \cdot \underset{s'}{\sum} A_{s, s'} \cdot \alpha_{s',t-1} $
>
> $p(\mathbf{x}) := \underset{s}{\sum} \alpha_{s,T}$\
> return $p(\mathbf{x})$
________
The forward algorithm is much faster than enumerating all $N^T$ possible state sequences: it requires only $O(N^2T)$ operations to run, since each step is mostly multiplying the vector of forward variables by the transition matrix. (And very often we can reduce that complexity even further, if the transition matrix is sparse.)
There is one practical problem with the forward algorithm as presented above: it is prone to underflow due to multiplying a long chain of small numbers, since probabilities are always between 0 and 1. Instead, let's do everything in the log domain. In the log domain, a multiplication becomes a sum, and a sum becomes a [logsumexp](https://en.wikipedia.org/wiki/LogSumExp).
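As a tiny illustration of why this matters (a sketch): multiplying many small probabilities underflows quickly, while the equivalent log-domain operations stay well-behaved.
```
# Working with tiny probabilities directly underflows in float32...
a, b = torch.tensor(1e-30), torch.tensor(3e-30)
print((a * b).log())   # -inf, because a * b underflowed to 0
# ...while the same quantities are fine in the log domain:
log_a, log_b = a.log(), b.log()
print(log_a + log_b)   # log(a * b), computed stably
print(torch.logsumexp(torch.stack([log_a, log_b]), dim=0), (a + b).log())  # log(a + b), two ways
```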
________
<u><b>The Forward Algorithm (Log Domain)</b></u>
> for $s=1 \rightarrow N$:\
> $\text{log }\alpha_{s,1} := \text{log }b_s(x_1) + \text{log }\pi_s$
>
> for $t = 2 \rightarrow T$:\
> for $s = 1 \rightarrow N$:\
>
> $\text{log }\alpha_{s,t} := \text{log }b_s(x_t) + \underset{s'}{\text{logsumexp}} \left( \text{log }A_{s, s'} + \text{log }\alpha_{s',t-1} \right)$
>
> $\text{log }p(\mathbf{x}) := \underset{s}{\text{logsumexp}} \left( \text{log }\alpha_{s,T} \right)$\
> return $\text{log }p(\mathbf{x})$
________
Now that we have a numerically stable version of the forward algorithm, let's implement it in PyTorch.
```
def HMM_forward(self, x, T):
"""
x : IntTensor of shape (batch size, T_max)
T : IntTensor of shape (batch size)
Compute log p(x) for each example in the batch.
T = length of each example
"""
if self.is_cuda:
x = x.cuda()
T = T.cuda()
batch_size = x.shape[0]; T_max = x.shape[1]
log_state_priors = torch.nn.functional.log_softmax(self.unnormalized_state_priors, dim=0)
log_alpha = torch.zeros(batch_size, T_max, self.N)
if self.is_cuda: log_alpha = log_alpha.cuda()
log_alpha[:, 0, :] = self.emission_model(x[:,0]) + log_state_priors
for t in range(1, T_max):
log_alpha[:, t, :] = self.emission_model(x[:,t]) + self.transition_model(log_alpha[:, t-1, :])
# Select the sum for the final timestep (each x may have different length).
log_sums = log_alpha.logsumexp(dim=2)
log_probs = torch.gather(log_sums, 1, T.view(-1,1) - 1)
return log_probs
def emission_model_forward(self, x_t):
log_emission_matrix = torch.nn.functional.log_softmax(self.unnormalized_emission_matrix, dim=1)
out = log_emission_matrix[:, x_t].transpose(0,1)
return out
def transition_model_forward(self, log_alpha):
"""
log_alpha : Tensor of shape (batch size, N)
Multiply previous timestep's alphas by transition matrix (in log domain)
"""
log_transition_matrix = torch.nn.functional.log_softmax(self.unnormalized_transition_matrix, dim=0)
# Matrix multiplication in the log domain
out = log_domain_matmul(log_transition_matrix, log_alpha.transpose(0,1)).transpose(0,1)
return out
def log_domain_matmul(log_A, log_B):
"""
log_A : m x n
log_B : n x p
output : m x p matrix
Normally, a matrix multiplication
computes out_{i,j} = sum_k A_{i,k} x B_{k,j}
A log domain matrix multiplication
computes out_{i,j} = logsumexp_k log_A_{i,k} + log_B_{k,j}
"""
m = log_A.shape[0]
n = log_A.shape[1]
p = log_B.shape[1]
log_A_expanded = torch.stack([log_A] * p, dim=2)
log_B_expanded = torch.stack([log_B] * m, dim=0)
elementwise_sum = log_A_expanded + log_B_expanded
out = torch.logsumexp(elementwise_sum, dim=1)
return out
TransitionModel.forward = transition_model_forward
EmissionModel.forward = emission_model_forward
HMM.forward = HMM_forward
```
Try running the forward algorithm on our vowels/consonants model from before:
```
x = torch.stack( [torch.tensor(encode("cat"))] )
T = torch.tensor([3])
print(model.forward(x, T))
x = torch.stack( [torch.tensor(encode("aba")), torch.tensor(encode("abb"))] )
T = torch.tensor([3,3])
print(model.forward(x, T))
```
tensor([[-7.5605]], device='cuda:0', grad_fn=<GatherBackward>)
tensor([[-7.1584],
[ -inf]], device='cuda:0', grad_fn=<GatherBackward>)
When using the vowel <-> consonant HMM from above, notice that the forward algorithm returns $-\infty$ for $\mathbf{x} = \text{"abb"}$. That's because our transition matrix says the probability of vowel -> vowel and consonant -> consonant is 0, so the probability of $\text{"abb"}$ happening is 0, and thus the log probability is $-\infty$.
#### *Side note: deriving the forward algorithm*
If you're interested in understanding how the forward algorithm actually computes $p(\mathbf{x})$, read this section; if not, skip to the next part on "Problem 2" (finding the most likely state sequence).
To derive the forward algorithm, start by deriving the forward variable:
$$\begin{align}
\alpha_{s,t} &= p(x_1, x_2, \dots, x_t, z_t=s) \\
&= p(x_t | x_1, x_2, \dots, x_{t-1}, z_t = s) \cdot p(x_1, x_2, \dots, x_{t-1}, z_t = s) \\
&= p(x_t | z_t = s) \cdot p(x_1, x_2, \dots, x_{t-1}, z_t = s) \\
&= p(x_t | z_t = s) \cdot \left( \sum_{s'} p(x_1, x_2, \dots, x_{t-1}, z_{t-1}=s', z_t = s) \right)\\
&= p(x_t | z_t = s) \cdot \left( \sum_{s'} p(z_t = s | x_1, x_2, \dots, x_{t-1}, z_{t-1}=s') \cdot p(x_1, x_2, \dots, x_{t-1}, z_{t-1}=s') \right)\\
&= \underbrace{p(x_t | z_t = s)}_{\text{emission model}} \cdot \left( \sum_{s'} \underbrace{p(z_t = s | z_{t-1}=s')}_{\text{transition model}} \cdot \underbrace{p(x_1, x_2, \dots, x_{t-1}, z_{t-1}=s')}_{\text{forward variable for previous timestep}} \right)\\
&= b_s(x_t) \cdot \left( \sum_{s'} A_{s, s'} \cdot \alpha_{s',t-1} \right)
\end{align}$$
I'll explain how to get to each line of this equation from the previous line.
Line 1 is the definition of the forward variable $\alpha_{s,t}$.
Line 2 is the chain rule ($p(A,B) = p(A|B) \cdot p(B)$, where $A$ is $x_t$ and $B$ is all the other variables).
In Line 3, we apply Assumption 2: the probability of observation $x_t$ depends only on the current state $z_t$.
In Line 4, we marginalize over all the possible states in the previous timestep $t-1$.
In Line 5, we apply the chain rule again.
In Line 6, we apply Assumption 1: the current state depends only on the previous state.
In Line 7, we substitute in the emission probability, the transition probability, and the forward variable for the previous timestep, to get the complete recursion.
The formula above can be used for $t = 2 \rightarrow T$. At $t=1$, there is no previous state, so instead of the transition matrix $A$, we use the state priors $\pi$, which tell us the probability of starting in each state. Thus for $t=1$, the forward variables are computed as follows:
$$\begin{align}
\alpha_{s,1} &= p(x_1, z_1=s) \\
&= p(x_1 | z_1 = s) \cdot p(z_1 = s) \\
&= b_s(x_1) \cdot \pi_s
\end{align}$$
Finally, to compute $p(\mathbf{x}) = p(x_1, x_2, \dots, x_T)$, we marginalize over $\alpha_{s,T}$, the forward variables computed in the last timestep:
$$\begin{align*}
p(\mathbf{x}) &= \sum_{s} p(x_1, x_2, \dots, x_T, z_T = s) \\
&= \sum_{s} \alpha_{s,T}
\end{align*}$$
You can get from this formulation to the log domain formulation by taking the log of the forward variable, and using these identities:
- $\text{log }(a \cdot b) = \text{log }a + \text{log }b$
- $\text{log }(a + b) = \text{log }(e^{\text{log }a} + e^{\text{log }b}) = \text{logsumexp}(\text{log }a, \text{log }b)$
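If you'd like to sanity-check the recursion numerically, here is a minimal NumPy sketch (the 2-state HMM and the observation sequence are made up purely for illustration, not taken from the model above) comparing the forward recursion against brute-force enumeration over all state sequences:
```
import numpy as np
from itertools import product

# A tiny made-up HMM: 2 states, 2 output symbols
pi = np.array([0.6, 0.4])            # state priors pi_s
A = np.array([[0.7, 0.4],            # A[s, s'] = p(z_t = s | z_{t-1} = s'); columns sum to 1
              [0.3, 0.6]])
B = np.array([[0.9, 0.2],            # B[s, k] = b_s(k) = p(x_t = k | z_t = s)
              [0.1, 0.8]])
x = [0, 1, 1]                        # an observation sequence
T = len(x)

# Brute force: sum p(x, z) over every possible state sequence z
p_brute = 0.0
for z in product(range(2), repeat=T):
    p = pi[z[0]] * B[z[0], x[0]]
    for t in range(1, T):
        p *= A[z[t], z[t-1]] * B[z[t], x[t]]
    p_brute += p

# Forward recursion: alpha[s, t] = p(x_1, ..., x_t, z_t = s)
alpha = np.zeros((2, T))
alpha[:, 0] = B[:, x[0]] * pi
for t in range(1, T):
    alpha[:, t] = B[:, x[t]] * (A @ alpha[:, t - 1])
p_forward = alpha[:, T - 1].sum()

print(p_brute, p_forward)            # the two values should agree
```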
### Problem 2: How do we compute $\underset{\mathbf{z}}{\text{argmax }} p(\mathbf{z}|\mathbf{x})$?
Given an observation sequence $\mathbf{x}$, we may want to find the most likely sequence of states that could have generated $\mathbf{x}$. (Given the sequence of selfies, we want to infer what cities the friend visited.) In other words, we want $\underset{\mathbf{z}}{\text{argmax }} p(\mathbf{z}|\mathbf{x})$.
We can use Bayes' rule to rewrite this expression:
$$\begin{align*}
\underset{\mathbf{z}}{\text{argmax }} p(\mathbf{z}|\mathbf{x}) &= \underset{\mathbf{z}}{\text{argmax }} \frac{p(\mathbf{x}|\mathbf{z}) p(\mathbf{z})}{p(\mathbf{x})} \\
&= \underset{\mathbf{z}}{\text{argmax }} p(\mathbf{x}|\mathbf{z}) p(\mathbf{z})
\end{align*}$$
Hmm! That last expression, $\underset{\mathbf{z}}{\text{argmax }} p(\mathbf{x}|\mathbf{z}) p(\mathbf{z})$, looks suspiciously similar to the intractable expression we encountered before introducing the forward algorithm, $\underset{\mathbf{z}}{\sum} p(\mathbf{x}|\mathbf{z}) p(\mathbf{z})$.
And indeed, just as the intractable *sum* over all $\mathbf{z}$ can be implemented efficiently using the forward algorithm, so too this intractable *argmax* can be implemented efficiently using a similar divide-and-conquer algorithm: the legendary Viterbi algorithm!
________
<u><b>The Viterbi Algorithm</b></u>
> for $s=1 \rightarrow N$:\
> $\delta_{s,1} := b_s(x_1) \cdot \pi_s$\
> $\psi_{s,1} := 0$
>
> for $t = 2 \rightarrow T$:\
> for $s = 1 \rightarrow N$:\
> $\delta_{s,t} := b_s(x_t) \cdot \left( \underset{s'}{\text{max }} A_{s, s'} \cdot \delta_{s',t-1} \right)$\
> $\psi_{s,t} := \underset{s'}{\text{argmax }} A_{s, s'} \cdot \delta_{s',t-1}$
>
> $z_T^* := \underset{s}{\text{argmax }} \delta_{s,T}$\
> for $t = T-1 \rightarrow 1$:\
> $z_{t}^* := \psi_{z_{t+1}^*,t+1}$
>
> $\mathbf{z}^* := \{z_{1}^*, \dots, z_{T}^* \}$\
> return $\mathbf{z}^*$
________
The Viterbi algorithm looks somewhat gnarlier than the forward algorithm, but it is essentially the same algorithm, with two tweaks: 1) instead of taking the sum over previous states, we take the max; and 2) we record the argmax of the previous states in a table, and loop back over this table at the end to get $\mathbf{z}^*$, the most likely state sequence. (And like the forward algorithm, we should run the Viterbi algorithm in the log domain for better numerical stability.)
Let's add the Viterbi algorithm to our PyTorch model:
```
def viterbi(self, x, T):
"""
x : IntTensor of shape (batch size, T_max)
T : IntTensor of shape (batch size)
Find argmax_z log p(x|z) for each (x) in the batch.
"""
if self.is_cuda:
x = x.cuda()
T = T.cuda()
batch_size = x.shape[0]; T_max = x.shape[1]
log_state_priors = torch.nn.functional.log_softmax(self.unnormalized_state_priors, dim=0)
log_delta = torch.zeros(batch_size, T_max, self.N).float()
psi = torch.zeros(batch_size, T_max, self.N).long()
if self.is_cuda:
log_delta = log_delta.cuda()
psi = psi.cuda()
log_delta[:, 0, :] = self.emission_model(x[:,0]) + log_state_priors
for t in range(1, T_max):
max_val, argmax_val = self.transition_model.maxmul(log_delta[:, t-1, :])
log_delta[:, t, :] = self.emission_model(x[:,t]) + max_val
psi[:, t, :] = argmax_val
# Get the log probability of the best path
log_max = log_delta.max(dim=2)[0]
best_path_scores = torch.gather(log_max, 1, T.view(-1,1) - 1)
# This next part is a bit tricky to parallelize across the batch,
# so we will do it separately for each example.
z_star = []
for i in range(0, batch_size):
z_star_i = [ log_delta[i, T[i] - 1, :].max(dim=0)[1].item() ]
for t in range(T[i] - 1, 0, -1):
z_t = psi[i, t, z_star_i[0]].item()
z_star_i.insert(0, z_t)
z_star.append(z_star_i)
return z_star, best_path_scores # return both the best path and its log probability
def transition_model_maxmul(self, log_alpha):
log_transition_matrix = torch.nn.functional.log_softmax(self.unnormalized_transition_matrix, dim=0)
out1, out2 = maxmul(log_transition_matrix, log_alpha.transpose(0,1))
return out1.transpose(0,1), out2.transpose(0,1)
def maxmul(log_A, log_B):
"""
log_A : m x n
log_B : n x p
output : m x p matrix
Similar to the log domain matrix multiplication,
this computes out_{i,j} = max_k log_A_{i,k} + log_B_{k,j}
"""
m = log_A.shape[0]
n = log_A.shape[1]
p = log_B.shape[1]
log_A_expanded = torch.stack([log_A] * p, dim=2)
log_B_expanded = torch.stack([log_B] * m, dim=0)
elementwise_sum = log_A_expanded + log_B_expanded
out1,out2 = torch.max(elementwise_sum, dim=1)
return out1,out2
TransitionModel.maxmul = transition_model_maxmul
HMM.viterbi = viterbi
```
Try running Viterbi on an input sequence, given the vowel/consonant HMM:
```
x = torch.stack( [torch.tensor(encode("aba")), torch.tensor(encode("abb"))] )
T = torch.tensor([3,3])
print(model.viterbi(x, T))
```
([[1, 0, 1], [1, 0, 0]], tensor([[-7.1584],
[ -inf]], device='cuda:0', grad_fn=<GatherBackward>))
For $\mathbf{x} = \text{"aba"}$, the Viterbi algorithm returns $\mathbf{z}^* = \{1,0,1\}$. This corresponds to "vowel, consonant, vowel" according to the way we defined the states above, which is correct for this input sequence. Yay!
For $\mathbf{x} = \text{"abb"}$, the Viterbi algorithm still returns a $\mathbf{z}^*$, but we know this is gibberish because "vowel, consonant, consonant" is impossible under this HMM, and indeed the log probability of this path is $-\infty$.
Let's compare the "forward score" (the log probability of all possible paths, returned by the forward algorithm) with the "Viterbi score" (the log probability of the maximum likelihood path, returned by the Viterbi algorithm):
```
print(model.forward(x, T))
print(model.viterbi(x, T)[1])
```
tensor([[-7.1584],
[ -inf]], device='cuda:0', grad_fn=<GatherBackward>)
tensor([[-7.1584],
[ -inf]], device='cuda:0', grad_fn=<GatherBackward>)
The two scores are the same! That's because in this instance there is only one possible path through the HMM, so the probability of the most likely path is the same as the sum of the probabilities of all possible paths.
In general, though, the forward score and the Viterbi score will not be exactly equal, but they will often be close. This is because of a property of the $\text{logsumexp}$ function: $\text{logsumexp}(\mathbf{x}) \approx \max (\mathbf{x})$. ($\text{logsumexp}$ is sometimes referred to as the "smooth maximum" function.)
```
x = torch.tensor([1., 2., 3.])
print(x.max(dim=0)[0])
print(x.logsumexp(dim=0))
```
tensor(3.)
tensor(3.4076)
### Problem 3: How do we train the model?
Earlier, we hard-coded an HMM to have certain behavior. What we would like to do instead is have the HMM learn to model the data on its own. And while it is possible to use supervised learning with an HMM (by hard-coding the emission model or the transition model) so that the states have a particular interpretation, the really cool thing about HMMs is that they are naturally unsupervised learners, so they can learn to use their different states to represent different patterns in the data, without the programmer needing to indicate what each state means.
Like many machine learning models, an HMM can be trained using maximum likelihood estimation, i.e.:
$$\theta^* = \underset{\theta}{\text{argmin }} -\sum_{\mathbf{x}^i}\text{log }p_{\theta}(\mathbf{x}^i)$$
where $\mathbf{x}^1, \mathbf{x}^2, \dots$ are training examples.
The standard method for doing this is the Expectation-Maximization (EM) algorithm, which for HMMs is also called the "Baum-Welch" algorithm. In EM training, we alternate between an "E-step", where we estimate the values of the latent variables, and an "M-step", where the model parameters are updated given the estimated latent variables. (Think $k$-means, where you guess which cluster each data point belongs to, then reestimate where the clusters are, and repeat.) The EM algorithm has some nice properties: it is guaranteed never to increase the loss at any step, and the E-step and M-step may have an exact closed form solution, in which case no pesky learning rates are required.
But because the HMM forward algorithm is differentiable with respect to all the model parameters, we can also just take advantage of automatic differentiation methods in libraries like PyTorch and try to minimize $-\text{log }p_{\theta}(\mathbf{x})$ directly, by backpropagating through the forward algorithm and running stochastic gradient descent. That means we don't need to write any additional HMM code to implement training: `loss.backward()` is all you need.
Here we will implement SGD training for an HMM in PyTorch. First, some helper classes:
```
import torch.utils.data
from collections import Counter
from sklearn.model_selection import train_test_split
class TextDataset(torch.utils.data.Dataset):
def __init__(self, lines):
self.lines = lines # list of strings
collate = Collate() # function for generating a minibatch from strings
self.loader = torch.utils.data.DataLoader(self, batch_size=1024, num_workers=1, shuffle=True, collate_fn=collate)
def __len__(self):
return len(self.lines)
def __getitem__(self, idx):
line = self.lines[idx].lstrip(" ").rstrip("\n").rstrip(" ").rstrip("\n")
return line
class Collate:
def __init__(self):
pass
def __call__(self, batch):
"""
Returns a minibatch of strings, padded to have the same length.
"""
x = []
batch_size = len(batch)
for index in range(batch_size):
x_ = batch[index]
# convert letters to integers
x.append(encode(x_))
# pad all sequences with 0 to have same length
x_lengths = [len(x_) for x_ in x]
T = max(x_lengths)
for index in range(batch_size):
x[index] += [0] * (T - len(x[index]))
x[index] = torch.tensor(x[index])
# stack into single tensor
x = torch.stack(x)
x_lengths = torch.tensor(x_lengths)
return (x,x_lengths)
```
Let's load some training/testing data. By default, this will use the unix "words" file, but you could also use your own text file.
```
!wget https://raw.githubusercontent.com/lorenlugosch/pytorch_HMM/master/data/train/training.txt
filename = "training.txt"
with open(filename, "r") as f:
lines = f.readlines() # each line of lines will have one word
alphabet = list(Counter(("".join(lines))).keys())
train_lines, valid_lines = train_test_split(lines, test_size=0.1, random_state=42)
train_dataset = TextDataset(train_lines)
valid_dataset = TextDataset(valid_lines)
M = len(alphabet)
```
--2020-02-17 15:17:50-- https://raw.githubusercontent.com/lorenlugosch/pytorch_HMM/master/data/train/training.txt
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.0.133, 151.101.64.133, 151.101.128.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.0.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2493109 (2.4M) [text/plain]
Saving to: ‘training.txt’
training.txt 100%[===================>] 2.38M --.-KB/s in 0.1s
2020-02-17 15:17:50 (24.5 MB/s) - ‘training.txt’ saved [2493109/2493109]
We will use a Trainer class for training and testing the model:
```
from tqdm import tqdm # for displaying progress bar
class Trainer:
def __init__(self, model, lr):
self.model = model
self.lr = lr
self.optimizer = torch.optim.Adam(model.parameters(), lr=self.lr, weight_decay=0.00001)
def train(self, dataset):
train_loss = 0
num_samples = 0
self.model.train()
print_interval = 50
for idx, batch in enumerate(tqdm(dataset.loader)):
x,T = batch
batch_size = len(x)
num_samples += batch_size
log_probs = self.model(x,T)
loss = -log_probs.mean()
self.optimizer.zero_grad()
loss.backward()
self.optimizer.step()
train_loss += loss.cpu().data.numpy().item() * batch_size
if idx % print_interval == 0:
print("loss:", loss.item())
for _ in range(5):
sampled_x, sampled_z = self.model.sample()
print(decode(sampled_x))
print(sampled_z)
train_loss /= num_samples
return train_loss
def test(self, dataset):
test_loss = 0
num_samples = 0
self.model.eval()
print_interval = 50
for idx, batch in enumerate(dataset.loader):
x,T = batch
batch_size = len(x)
num_samples += batch_size
log_probs = self.model(x,T)
loss = -log_probs.mean()
test_loss += loss.cpu().data.numpy().item() * batch_size
if idx % print_interval == 0:
print("loss:", loss.item())
sampled_x, sampled_z = self.model.sample()
print(decode(sampled_x))
print(sampled_z)
test_loss /= num_samples
return test_loss
```
Finally, initialize the model and run the main training loop. Every 50 batches, the code will produce a few samples from the model. Over time, these samples should look more and more realistic.
```
# Initialize model
model = HMM(N=64, M=M)
# Train the model
num_epochs = 10
trainer = Trainer(model, lr=0.01)
for epoch in range(num_epochs):
print("========= Epoch %d of %d =========" % (epoch+1, num_epochs))
train_loss = trainer.train(train_dataset)
valid_loss = trainer.test(valid_dataset)
print("========= Results: epoch %d of %d =========" % (epoch+1, num_epochs))
print("train loss: %.2f| valid loss: %.2f\n" % (train_loss, valid_loss) )
```
0%| | 0/208 [00:00<?, ?it/s]
========= Epoch 1 of 10 =========
0%| | 1/208 [00:00<01:21, 2.53it/s]
loss: 37.83818817138672
CnJDVadnIj
[18, 25, 11, 54, 20, 3, 10, 49, 0, 2]
RobzPWmmX
[55, 43, 56, 61, 38, 54, 21, 21, 31, 63]
wnWnwDCoBM
[14, 33, 58, 63, 21, 23, 17, 32, 43, 63]
hLcxLtbumv
[58, 53, 61, 34, 1, 48, 48, 62, 15, 58]
wM-fUt-tLr
[27, 5, 14, 45, 43, 27, 51, 51, 54, 24]
25%|██▍ | 51/208 [00:13<00:43, 3.57it/s]
loss: 32.814849853515625
utrsceValB
[45, 5, 14, 22, 56, 5, 45, 42, 44, 50]
gbUnHnDDYM
[46, 8, 54, 56, 44, 29, 0, 26, 33, 46]
svgrSPKyHD
[26, 19, 37, 50, 50, 56, 44, 19, 52, 63]
PSAIyXlCHu
[15, 39, 35, 20, 29, 13, 37, 17, 26, 46]
IjiHtZ-nio
[7, 37, 19, 6, 54, 12, 44, 27, 3, 46]
49%|████▊ | 101/208 [00:27<00:29, 3.63it/s]
loss: 29.635101318359375
IaryUenEVo
[18, 22, 27, 3, 27, 46, 14, 6, 35, 46]
UiePuZneiI
[16, 42, 21, 3, 7, 58, 3, 29, 4, 27]
stliozerda
[22, 16, 5, 4, 23, 53, 63, 44, 48, 2]
pzpurSsloo
[26, 34, 12, 29, 15, 22, 6, 21, 3, 12]
biboakqVhl
[7, 5, 55, 2, 6, 35, 46, 26, 33, 21]
73%|███████▎ | 151/208 [00:40<00:14, 3.87it/s]
loss: 28.170347213745117
puHpurjlos
[32, 46, 15, 35, 46, 15, 63, 27, 34, 15]
pibpaitrei
[62, 44, 8, 44, 21, 44, 6, 35, 46, 44]
bensPeaeos
[48, 46, 15, 22, 35, 38, 53, 35, 46, 15]
lirrtaqoci
[62, 34, 46, 15, 22, 53, 22, 2, 14, 63]
purlayIlXd
[48, 46, 15, 22, 27, 14, 22, 21, 4, 10]
97%|█████████▋| 201/208 [00:53<00:01, 3.59it/s]
loss: 26.471345901489258
geresodePc
[31, 63, 40, 63, 15, 3, 31, 63, 27, 14]
sertVmelpn
[48, 46, 15, 26, 27, 35, 46, 15, 26, 38]
qabdellyni
[48, 34, 24, 10, 21, 44, 21, 4, 27, 42]
untenomcmy
[62, 27, 16, 46, 27, 46, 15, 57, 54, 4]
Pubndytang
[48, 46, 26, 27, 31, 4, 61, 46, 27, 18]
100%|██████████| 208/208 [00:55<00:00, 4.68it/s]
loss: 26.017606735229492
inzilanuru
[63, 27, 61, 63, 2, 34, 27, 46, 15, 16]
0%| | 0/208 [00:00<?, ?it/s]
========= Results: epoch 1 of 10 =========
train loss: 30.53| valid loss: 26.40
========= Epoch 2 of 10 =========
0%| | 1/208 [00:00<01:14, 2.77it/s]
loss: 26.479572296142578
lplincirmn
[22, 26, 33, 34, 29, 5, 4, 15, 4, 27]
unauatorbu
[49, 18, 42, 14, 42, 29, 3, 15, 54, 53]
tecu-elibk
[26, 46, 41, 46, 53, 22, 2, 34, 56, 44]
thgUrencgo
[26, 33, 21, 46, 15, 63, 27, 16, 51, 34]
rtafltoBee
[15, 14, 34, 45, 44, 29, 46, 44, 21, 4]
25%|██▍ | 51/208 [00:13<00:41, 3.76it/s]
loss: 25.50018310546875
anEitorisZ
[49, 27, 31, 63, 35, 46, 44, 4, 15, 63]
ropoberpsi
[48, 46, 15, 3, 31, 63, 15, 26, 15, 4]
uionwemidn
[49, 25, 34, 27, 31, 3, 31, 63, 10, 63]
alcireytlt
[46, 15, 35, 46, 44, 21, 4, 22, 5, 14]
beweleones
[62, 63, 31, 63, 44, 21, 3, 31, 63, 15]
49%|████▊ | 101/208 [00:27<00:31, 3.40it/s]
loss: 24.771900177001953
SYbrinibby
[48, 34, 45, 44, 42, 31, 46, 45, 44, 19]
niniswetus
[31, 63, 27, 46, 15, 35, 63, 27, 46, 15]
soborsliig
[62, 34, 38, 46, 44, 53, 5, 4, 34, 18]
chiceuspuo
[48, 33, 14, 61, 46, 46, 15, 48, 46, 34]
sangmaropc
[2, 34, 27, 58, 7, 46, 44, 46, 15, 14]
73%|███████▎ | 151/208 [00:40<00:15, 3.62it/s]
loss: 24.587112426757812
unigJzerpa
[49, 27, 63, 26, 33, 61, 63, 15, 26, 34]
uasmdencBn
[62, 46, 15, 50, 10, 46, 15, 2, 34, 27]
suststyotu
[48, 53, 22, 35, 22, 5, 4, 5, 26, 46]
crowhiolap
[26, 33, 3, 26, 33, 2, 34, 29, 6, 35]
Qiresononi
[48, 46, 15, 26, 15, 3, 29, 34, 27, 63]
97%|█████████▋| 201/208 [00:54<00:01, 3.61it/s]
loss: 24.65005111694336
peptismigy
[26, 63, 15, 35, 14, 22, 7, 42, 29, 4]
proekicaki
[26, 33, 46, 63, 15, 4, 61, 6, 44, 42]
chRngIveif
[26, 33, 42, 27, 58, 0, 23, 21, 4, 5]
niusphideg
[31, 63, 53, 22, 35, 33, 63, 31, 63, 15]
fapeleauca
[48, 34, 26, 46, 44, 21, 3, 53, 61, 46]
100%|██████████| 208/208 [00:55<00:00, 4.52it/s]
loss: 24.449634552001953
Costaparra
[48, 46, 15, 35, 46, 24, 26, 33, 12, 34]
0%| | 0/208 [00:00<?, ?it/s]
========= Results: epoch 2 of 10 =========
train loss: 25.07| valid loss: 24.47
========= Epoch 3 of 10 =========
0%| | 1/208 [00:00<01:13, 2.80it/s]
loss: 24.374300003051758
prulesange
[48, 33, 46, 44, 46, 15, 34, 27, 58, 63]
jomucrlasi
[48, 46, 31, 46, 15, 33, 2, 46, 15, 14]
soryithode
[48, 46, 15, 4, 22, 35, 33, 3, 31, 63]
unceyiphiz
[49, 27, 26, 46, 17, 42, 26, 33, 42, 61]
tissanarip
[26, 46, 15, 22, 6, 35, 46, 15, 42, 35]
25%|██▍ | 51/208 [00:13<00:47, 3.31it/s]
loss: 24.072898864746094
wroneneste
[26, 33, 3, 35, 63, 27, 63, 15, 35, 42]
caovadiarc
[48, 46, 3, 31, 42, 31, 14, 34, 15, 26]
GWlerlgrca
[48, 46, 44, 63, 15, 31, 63, 15, 61, 6]
changokers
[26, 33, 34, 27, 58, 34, 36, 63, 15, 35]
Lrsophiali
[48, 15, 26, 34, 26, 33, 2, 6, 44, 42]
49%|████▊ | 101/208 [00:27<00:29, 3.64it/s]
loss: 24.010425567626953
slyeezarvi
[22, 21, 19, 21, 46, 61, 6, 44, 31, 42]
lediFlefli
[48, 46, 29, 14, 6, 44, 63, 10, 21, 4]
suctenphos
[48, 46, 15, 35, 63, 27, 26, 33, 46, 15]
valilrobbu
[31, 46, 44, 4, 61, 33, 46, 24, 51, 46]
esadicsest
[46, 15, 42, 31, 14, 61, 35, 63, 15, 35]
73%|███████▎ | 151/208 [00:40<00:15, 3.58it/s]
loss: 24.108936309814453
mRerphammi
[7, 32, 63, 15, 26, 33, 46, 7, 7, 4]
moeteinert
[48, 46, 63, 15, 14, 34, 27, 63, 15, 35]
uniztinalQ
[62, 12, 42, 61, 35, 42, 29, 6, 44, 21]
Dralhysaso
[62, 12, 46, 15, 33, 19, 35, 6, 35, 46]
chissicabs
[26, 33, 46, 15, 35, 42, 61, 46, 24, 14]
97%|█████████▋| 201/208 [00:54<00:01, 3.55it/s]
loss: 23.95977783203125
hcheddemiY
[33, 26, 33, 63, 10, 31, 63, 31, 42, 61]
disyscXomi
[48, 46, 15, 19, 15, 26, 33, 3, 7, 42]
cangitesmi
[48, 34, 27, 35, 42, 35, 63, 22, 7, 42]
Gueleronin
[48, 46, 63, 44, 63, 15, 3, 29, 42, 29]
isulwisaza
[46, 15, 46, 44, 21, 4, 35, 42, 61, 6]
100%|██████████| 208/208 [00:56<00:00, 3.89it/s]
loss: 23.793458938598633
mestlasDam
[31, 63, 15, 35, 33, 46, 15, 28, 6, 35]
0%| | 0/208 [00:00<?, ?it/s]
========= Results: epoch 3 of 10 =========
train loss: 24.17| valid loss: 23.99
========= Epoch 4 of 10 =========
0%| | 1/208 [00:00<01:17, 2.66it/s]
loss: 24.02060317993164
corypulsyt
[48, 46, 15, 19, 32, 63, 44, 22, 17, 26]
varectauyp
[12, 46, 15, 63, 15, 35, 34, 27, 19, 32]
loinusiuss
[48, 46, 49, 27, 46, 15, 22, 53, 22, 35]
malershiag
[48, 46, 44, 63, 15, 22, 35, 14, 34, 26]
haraeiglyl
[48, 46, 15, 6, 63, 3, 58, 5, 19, 44]
25%|██▍ | 51/208 [00:13<00:42, 3.69it/s]
loss: 24.00276756286621
rerUertivu
[48, 46, 15, 26, 46, 15, 35, 42, 31, 6]
tagstertio
[26, 3, 58, 22, 35, 63, 15, 35, 14, 34]
lwauedirsl
[48, 33, 3, 53, 63, 10, 46, 15, 35, 12]
granilomal
[62, 12, 46, 27, 42, 44, 3, 7, 6, 44]
heroresysy
[33, 63, 15, 46, 15, 63, 15, 4, 22, 17]
49%|████▊ | 101/208 [00:27<00:30, 3.52it/s]
loss: 23.493682861328125
gisegoacia
[18, 46, 15, 3, 58, 12, 46, 15, 14, 34]
Petiudnton
[48, 46, 15, 3, 53, 10, 27, 35, 34, 27]
amotlylyyh
[3, 7, 46, 26, 33, 4, 5, 19, 32, 33]
dettedisti
[48, 46, 15, 35, 63, 10, 14, 22, 35, 42]
hlyctumfma
[33, 21, 4, 61, 35, 46, 7, 39, 38, 6]
73%|███████▎ | 151/208 [00:40<00:15, 3.73it/s]
loss: 23.660245895385742
vigelionho
[31, 4, 29, 63, 44, 4, 34, 27, 33, 46]
smptuspoma
[22, 7, 32, 35, 46, 15, 26, 46, 7, 6]
chelthitre
[26, 33, 46, 44, 26, 33, 42, 35, 31, 63]
derbomping
[31, 63, 15, 8, 46, 7, 32, 42, 29, 58]
Tidaticmme
[48, 42, 61, 6, 35, 46, 15, 7, 7, 63]
97%|█████████▋| 201/208 [00:54<00:01, 3.72it/s]
loss: 23.784198760986328
hemeronede
[48, 46, 7, 63, 15, 3, 31, 46, 31, 63]
babBicasal
[48, 6, 45, 44, 4, 61, 6, 35, 46, 44]
maibvonoel
[48, 46, 46, 24, 31, 46, 15, 46, 46, 44]
scaanurtip
[22, 26, 46, 49, 27, 46, 15, 26, 46, 26]
andertrysh
[34, 27, 31, 63, 15, 26, 33, 46, 22, 33]
100%|██████████| 208/208 [00:55<00:00, 4.53it/s]
loss: 23.924264907836914
plabontowe
[62, 12, 3, 31, 34, 27, 26, 46, 7, 63]
0%| | 0/208 [00:00<?, ?it/s]
========= Results: epoch 4 of 10 =========
train loss: 23.82| valid loss: 23.75
========= Epoch 5 of 10 =========
0%| | 1/208 [00:00<01:15, 2.75it/s]
loss: 23.799543380737305
ronyglydob
[48, 34, 27, 47, 18, 21, 19, 10, 46, 24]
pocaristom
[48, 46, 26, 46, 15, 14, 22, 35, 46, 7]
geltesylec
[48, 46, 44, 26, 46, 15, 19, 44, 63, 15]
epshypedin
[46, 32, 22, 33, 19, 32, 63, 10, 17, 27]
otlelisynt
[3, 26, 12, 46, 44, 4, 22, 17, 27, 26]
25%|██▍ | 51/208 [00:13<00:40, 3.84it/s]
loss: 23.391658782958984
utionerisy
[49, 15, 14, 34, 29, 63, 15, 4, 22, 17]
Malymotado
[48, 34, 44, 19, 7, 46, 26, 46, 31, 3]
qigbsmeste
[48, 42, 41, 24, 22, 7, 46, 22, 35, 63]
Thorsetian
[48, 33, 46, 27, 22, 46, 35, 14, 34, 29]
odobointia
[3, 31, 46, 24, 46, 49, 27, 35, 14, 34]
49%|████▊ | 101/208 [00:27<00:28, 3.80it/s]
loss: 23.864551544189453
gindusalia
[48, 49, 27, 10, 46, 15, 46, 44, 4, 6]
yolampewoi
[8, 46, 44, 46, 7, 32, 63, 15, 3, 42]
moncrautib
[48, 34, 27, 26, 12, 46, 46, 35, 14, 45]
dachontode
[48, 46, 41, 33, 46, 27, 26, 46, 31, 63]
nomQotacri
[48, 46, 7, 11, 3, 26, 34, 26, 33, 42]
73%|███████▎ | 151/208 [00:40<00:15, 3.70it/s]
loss: 23.713638305664062
kraphreesd
[62, 12, 46, 32, 33, 12, 46, 63, 15, 31]
dekication
[31, 63, 31, 42, 61, 6, 35, 14, 34, 27]
eslytychyr
[46, 22, 21, 19, 5, 19, 26, 33, 19, 44]
lymerselew
[48, 46, 31, 63, 15, 22, 63, 44, 63, 15]
tedmeredem
[26, 46, 10, 31, 63, 15, 63, 10, 46, 7]
97%|█████████▋| 201/208 [00:53<00:01, 3.72it/s]
loss: 23.801746368408203
cybyanguud
[48, 46, 24, 4, 34, 27, 18, 49, 27, 10]
semicatera
[48, 46, 7, 42, 61, 6, 35, 63, 15, 3]
ompilatian
[3, 7, 32, 42, 44, 6, 35, 14, 34, 29]
apholticka
[46, 32, 33, 34, 27, 35, 42, 61, 36, 6]
mingyntert
[25, 42, 29, 58, 17, 27, 35, 63, 15, 35]
100%|██████████| 208/208 [00:55<00:00, 4.46it/s]
loss: 23.500385284423828
GensexJuls
[48, 46, 27, 35, 63, 15, 30, 53, 15, 22]
0%| | 0/208 [00:00<?, ?it/s]
========= Results: epoch 5 of 10 =========
train loss: 23.64| valid loss: 23.61
========= Epoch 6 of 10 =========
0%| | 1/208 [00:00<01:30, 2.28it/s]
loss: 23.998214721679688
merypiston
[48, 46, 15, 19, 32, 63, 15, 8, 34, 29]
kequitcong
[48, 46, 30, 53, 42, 35, 16, 34, 27, 58]
untraryote
[49, 27, 26, 12, 46, 15, 19, 34, 29, 63]
dephylumos
[48, 46, 32, 33, 19, 44, 50, 7, 34, 15]
abralalicr
[46, 24, 2, 46, 44, 46, 44, 4, 61, 12]
25%|██▍ | 51/208 [00:14<00:42, 3.71it/s]
loss: 23.56419563293457
strablbieo
[22, 35, 12, 6, 45, 44, 25, 42, 63, 3]
sobdicaies
[48, 46, 24, 10, 14, 61, 46, 42, 63, 15]
pabgistati
[48, 46, 24, 58, 42, 22, 35, 6, 35, 42]
rurchfromo
[12, 46, 15, 26, 33, 62, 12, 46, 7, 34]
dellesityp
[31, 46, 44, 21, 63, 15, 4, 5, 19, 32]
49%|████▊ | 101/208 [00:27<00:28, 3.70it/s]
loss: 23.507736206054688
Iryplalodi
[62, 12, 19, 32, 44, 46, 44, 3, 31, 42]
syismembit
[48, 19, 4, 22, 7, 46, 7, 24, 42, 26]
mockermmgl
[48, 46, 15, 57, 63, 15, 38, 25, 58, 12]
untarislnc
[49, 27, 35, 46, 12, 42, 22, 17, 27, 26]
Guchiperri
[48, 46, 26, 33, 42, 32, 63, 15, 12, 42]
73%|███████▎ | 151/208 [00:41<00:15, 3.74it/s]
loss: 23.453380584716797
gnatonovia
[41, 38, 6, 35, 34, 27, 3, 31, 42, 6]
griarcryos
[62, 12, 42, 6, 15, 26, 12, 19, 3, 22]
zariogrurs
[48, 46, 15, 42, 3, 58, 12, 46, 15, 22]
Fontaterad
[48, 34, 27, 35, 6, 35, 63, 15, 46, 31]
Polocksnol
[48, 46, 44, 46, 61, 57, 22, 29, 6, 44]
97%|█████████▋| 201/208 [00:54<00:01, 3.72it/s]
loss: 23.498559951782227
etecymably
[13, 26, 46, 15, 19, 7, 6, 45, 21, 19]
stesemanop
[22, 35, 63, 15, 63, 1, 34, 29, 3, 32]
prulyssati
[62, 12, 46, 44, 19, 22, 35, 6, 35, 14]
flatsoarin
[18, 21, 6, 15, 22, 46, 46, 15, 14, 31]
rommamedam
[48, 46, 7, 37, 49, 7, 46, 31, 46, 7]
100%|██████████| 208/208 [00:56<00:00, 4.32it/s]
loss: 23.944332122802734
sanginaciu
[48, 34, 29, 58, 42, 29, 6, 16, 4, 50]
0%| | 0/208 [00:00<?, ?it/s]
========= Results: epoch 6 of 10 =========
train loss: 23.52| valid loss: 23.50
========= Epoch 7 of 10 =========
0%| | 1/208 [00:00<01:15, 2.74it/s]
loss: 23.66443634033203
aftelysheb
[13, 55, 35, 63, 21, 19, 22, 33, 63, 62]
editepadac
[49, 10, 42, 35, 46, 32, 46, 10, 3, 26]
omonsucide
[3, 11, 34, 27, 22, 46, 61, 14, 31, 63]
ocKogytabu
[46, 15, 4, 3, 58, 19, 26, 3, 24, 53]
orsmpetedy
[46, 15, 22, 7, 32, 63, 35, 63, 10, 19]
25%|██▍ | 51/208 [00:14<00:44, 3.49it/s]
loss: 23.37617301940918
eerpombavo
[46, 46, 15, 26, 46, 7, 51, 46, 31, 34]
bonconatio
[48, 34, 29, 16, 34, 29, 6, 35, 14, 34]
gonctisedi
[48, 34, 27, 16, 35, 14, 22, 63, 10, 4]
tanroperot
[26, 46, 27, 12, 3, 32, 63, 15, 3, 26]
hapterdert
[33, 46, 32, 35, 63, 44, 31, 63, 15, 35]
49%|████▊ | 101/208 [00:27<00:28, 3.74it/s]
loss: 23.183443069458008
erbbsonine
[46, 15, 8, 24, 2, 34, 27, 42, 29, 63]
vacterarir
[48, 46, 15, 35, 63, 12, 46, 12, 42, 44]
hocrenanae
[33, 46, 26, 12, 63, 31, 6, 27, 46, 3]
unendesizk
[49, 27, 46, 27, 31, 63, 15, 14, 61, 57]
untaticens
[49, 27, 35, 6, 35, 14, 61, 63, 15, 22]
73%|███████▎ | 151/208 [00:41<00:18, 3.04it/s]
loss: 23.409622192382812
lapitaders
[48, 46, 32, 42, 26, 46, 31, 63, 15, 38]
arinercras
[49, 15, 4, 35, 63, 15, 26, 12, 46, 15]
rulitytent
[12, 46, 44, 4, 5, 19, 35, 63, 27, 35]
sobrigaero
[48, 46, 24, 12, 42, 41, 6, 63, 15, 3]
fpavelogon
[13, 26, 46, 31, 63, 44, 3, 58, 34, 29]
97%|█████████▋| 201/208 [00:54<00:01, 3.72it/s]
loss: 23.292179107666016
pruthededi
[62, 12, 46, 26, 33, 63, 10, 3, 31, 4]
caedoplycn
[26, 46, 27, 10, 3, 32, 21, 19, 26, 12]
cringiatio
[26, 12, 42, 29, 58, 4, 6, 35, 14, 34]
increxshta
[49, 27, 26, 12, 63, 15, 22, 33, 14, 34]
unegigneri
[49, 27, 3, 58, 42, 41, 29, 63, 15, 14]
100%|██████████| 208/208 [00:56<00:00, 4.70it/s]
loss: 23.704559326171875
sninbadomi
[22, 35, 14, 29, 9, 46, 10, 3, 38, 4]
0%| | 0/208 [00:00<?, ?it/s]
========= Results: epoch 7 of 10 =========
train loss: 23.41| valid loss: 23.40
========= Epoch 8 of 10 =========
0%| | 1/208 [00:00<01:24, 2.45it/s]
loss: 23.13858985900879
hasstacare
[48, 46, 15, 22, 26, 46, 26, 46, 15, 63]
qusionblyl
[30, 53, 22, 14, 34, 29, 45, 21, 19, 44]
loglymifis
[48, 46, 58, 21, 19, 7, 4, 39, 42, 22]
ungexserme
[49, 27, 58, 63, 15, 22, 63, 15, 38, 46]
necsetrecr
[31, 63, 15, 22, 46, 26, 12, 3, 26, 12]
25%|██▍ | 51/208 [00:13<00:41, 3.78it/s]
loss: 23.059940338134766
cercitylyc
[48, 46, 44, 61, 4, 5, 19, 44, 19, 26]
Mocracrala
[48, 46, 26, 12, 46, 26, 12, 6, 44, 6]
gasroectiv
[48, 46, 15, 38, 46, 46, 61, 35, 14, 31]
trumflerun
[26, 12, 46, 7, 39, 21, 63, 12, 46, 27]
panosickex
[48, 34, 29, 3, 38, 4, 61, 57, 63, 15]
49%|████▊ | 101/208 [00:27<00:29, 3.59it/s]
loss: 23.657520294189453
rHaysminan
[12, 3, 52, 19, 22, 7, 42, 29, 6, 27]
trolikenme
[26, 12, 46, 44, 4, 36, 63, 27, 51, 46]
dicroperte
[31, 4, 26, 12, 3, 32, 63, 15, 35, 63]
welizerrer
[48, 46, 44, 4, 36, 63, 15, 12, 63, 15]
matioblesw
[25, 6, 35, 14, 34, 45, 21, 63, 15, 60]
73%|███████▎ | 151/208 [00:41<00:14, 3.81it/s]
loss: 23.064796447753906
wighmeponz
[48, 46, 41, 33, 38, 46, 26, 34, 27, 43]
phipereTmi
[32, 33, 42, 32, 63, 12, 63, 15, 7, 46]
umposscers
[49, 7, 32, 46, 15, 22, 35, 63, 15, 38]
cronlishla
[26, 12, 34, 29, 44, 4, 22, 33, 12, 46]
Aserictich
[13, 22, 63, 15, 4, 61, 35, 14, 61, 33]
97%|█████████▋| 201/208 [00:54<00:02, 3.48it/s]
loss: 23.43466567993164
epoftienes
[49, 32, 46, 55, 35, 14, 34, 29, 63, 15]
untartrivi
[49, 27, 26, 46, 15, 26, 12, 14, 31, 4]
stosairido
[22, 26, 3, 22, 46, 46, 15, 4, 29, 3]
Palydyluti
[48, 46, 44, 19, 10, 19, 44, 53, 35, 14]
unmelyasma
[49, 27, 38, 46, 44, 0, 6, 22, 28, 6]
100%|██████████| 208/208 [00:56<00:00, 3.67it/s]
loss: 23.20675277709961
corotocity
[26, 46, 15, 3, 35, 46, 15, 4, 5, 19]
0%| | 0/208 [00:00<?, ?it/s]
========= Results: epoch 8 of 10 =========
train loss: 23.34| valid loss: 23.35
========= Epoch 9 of 10 =========
0%| | 1/208 [00:00<01:16, 2.70it/s]
loss: 23.150440216064453
pemacislee
[48, 46, 7, 46, 61, 4, 22, 21, 63, 3]
sorenaiabl
[48, 46, 15, 63, 27, 46, 42, 6, 45, 21]
bricteecan
[62, 12, 42, 61, 35, 63, 46, 61, 6, 27]
vialoughon
[31, 4, 6, 44, 3, 53, 41, 33, 34, 29]
traftoveno
[26, 33, 46, 55, 35, 3, 31, 63, 27, 46]
25%|██▍ | 51/208 [00:14<00:41, 3.78it/s]
loss: 23.249794006347656
trecaliabl
[62, 12, 46, 61, 6, 44, 4, 6, 45, 21]
Marvansubs
[48, 46, 15, 31, 6, 27, 22, 56, 24, 25]
roncactomo
[48, 34, 27, 26, 46, 61, 35, 3, 54, 34]
phaueragol
[48, 33, 6, 27, 63, 12, 6, 58, 34, 27]
resdlentet
[12, 46, 15, 10, 21, 46, 27, 35, 63, 26]
49%|████▊ | 101/208 [00:28<00:29, 3.61it/s]
loss: 23.418804168701172
unninerwos
[49, 27, 38, 4, 29, 63, 15, 60, 46, 15]
rossyngisn
[12, 46, 15, 22, 17, 29, 58, 4, 22, 35]
morsangelb
[48, 46, 15, 38, 3, 29, 58, 63, 44, 51]
wertossict
[48, 46, 15, 35, 46, 15, 38, 4, 61, 35]
gerheayatr
[48, 46, 15, 33, 63, 3, 0, 6, 35, 12]
73%|███████▎ | 151/208 [00:41<00:15, 3.70it/s]
loss: 23.153179168701172
fativedere
[18, 46, 35, 14, 31, 63, 31, 63, 15, 63]
vadontubro
[31, 6, 10, 46, 27, 26, 56, 24, 2, 34]
warsoiermo
[48, 46, 15, 38, 3, 42, 63, 15, 38, 46]
shaunorone
[22, 33, 46, 20, 31, 63, 15, 34, 29, 63]
modndoisyl
[48, 46, 10, 27, 31, 46, 42, 22, 17, 44]
97%|█████████▋| 201/208 [00:55<00:01, 3.69it/s]
loss: 23.24637222290039
fleuredant
[18, 21, 46, 20, 12, 46, 31, 6, 27, 35]
erpeceinea
[46, 15, 26, 3, 26, 46, 42, 29, 63, 6]
enosyperid
[49, 27, 3, 38, 19, 32, 63, 12, 42, 10]
carcrodent
[48, 46, 15, 26, 12, 3, 31, 63, 27, 35]
brousesact
[62, 12, 3, 53, 22, 63, 38, 46, 61, 5]
100%|██████████| 208/208 [00:56<00:00, 4.54it/s]
loss: 23.462291717529297
petasbopae
[48, 46, 26, 46, 15, 38, 3, 32, 6, 63]
0%| | 0/208 [00:00<?, ?it/s]
========= Results: epoch 9 of 10 =========
train loss: 23.30| valid loss: 23.32
========= Epoch 10 of 10 =========
0%| | 1/208 [00:00<01:13, 2.81it/s]
loss: 23.180316925048828
Totickenat
[48, 46, 26, 42, 61, 57, 63, 29, 6, 35]
relomelosw
[12, 46, 44, 3, 11, 63, 21, 3, 22, 26]
scrygobroi
[22, 26, 12, 19, 8, 46, 24, 12, 3, 42]
spterverop
[22, 32, 35, 63, 15, 31, 63, 12, 3, 32]
plopensyou
[62, 12, 3, 32, 63, 27, 35, 19, 3, 53]
25%|██▍ | 51/208 [00:13<00:42, 3.65it/s]
loss: 23.340988159179688
Vopaforedi
[48, 46, 32, 3, 39, 46, 44, 63, 10, 17]
thaurnosle
[26, 33, 46, 46, 15, 38, 46, 22, 21, 63]
niphreltip
[48, 42, 32, 33, 12, 46, 44, 35, 42, 32]
uneangidea
[49, 27, 63, 6, 29, 58, 14, 31, 63, 6]
Cytricatel
[48, 19, 26, 12, 42, 61, 6, 35, 63, 44]
49%|████▊ | 101/208 [00:27<00:28, 3.75it/s]
loss: 23.096843719482422
sphaladica
[13, 32, 33, 6, 44, 6, 10, 14, 61, 6]
unsporicta
[49, 27, 22, 32, 46, 15, 42, 61, 35, 46]
inizedleca
[49, 27, 4, 36, 63, 10, 12, 63, 61, 6]
peldiblegl
[32, 46, 44, 31, 14, 45, 21, 3, 58, 21]
tongicubro
[48, 34, 29, 58, 4, 61, 56, 24, 2, 3]
73%|███████▎ | 151/208 [00:41<00:16, 3.48it/s]
loss: 22.76116371154785
alislydupo
[13, 44, 4, 22, 21, 19, 10, 56, 32, 46]
bleneoidos
[62, 12, 63, 27, 63, 3, 42, 31, 46, 22]
outatinine
[3, 53, 35, 6, 35, 14, 29, 4, 29, 63]
flewrastra
[18, 21, 63, 26, 12, 46, 15, 26, 12, 46]
veldebosca
[48, 46, 44, 31, 63, 51, 46, 22, 26, 46]
97%|█████████▋| 201/208 [00:54<00:01, 3.68it/s]
loss: 23.19066047668457
sthlityssi
[22, 26, 33, 21, 4, 5, 19, 22, 38, 42]
ralcubfert
[12, 6, 44, 26, 20, 55, 39, 63, 15, 35]
tontermipe
[26, 46, 27, 35, 63, 15, 25, 42, 32, 63]
insinially
[49, 27, 38, 4, 29, 4, 6, 44, 21, 19]
waarganery
[48, 46, 46, 15, 8, 34, 29, 63, 15, 17]
100%|██████████| 208/208 [00:56<00:00, 4.32it/s]
loss: 23.30339813232422
Picrertifi
[48, 46, 61, 12, 63, 15, 35, 14, 39, 42]
========= Results: epoch 10 of 10 =========
train loss: 23.27| valid loss: 23.30
You may wish to try different values of $N$ and see what the impact on sample quality is.
It's also interesting to look at the visited states. For instance, I noticed the following interesting behavior:
- ~~If weight decay is used (`weight_decay=0.00001` in the initialization of the optimizer), the model almost always picks one of two states as the initial state, and uses one of these states to emit vowels and the other to emit consonants (without us programming that behavior or telling the model which letters are vowels!).~~
- ~~But if no weight decay is used, then the model always starts from one particular state. I suspect that this is because weight decay encourages the state priors to be not too big, and if one state prior is very big, the model will always start from that state.~~
EDIT: This behavior disappeared when I fixed the normalization of the transition matrix. But there are other interesting behaviors now:
```
x = torch.tensor(encode("quack")).unsqueeze(0)
T = torch.tensor([5])
print(model.viterbi(x,T))
x = torch.tensor(encode("quick")).unsqueeze(0)
T = torch.tensor([5])
print(model.viterbi(x,T))
x = torch.tensor(encode("qurck")).unsqueeze(0)
T = torch.tensor([5])
print(model.viterbi(x,T)) # should have lower probability---in English only vowels follow "qu"
x = torch.tensor(encode("qiick")).unsqueeze(0)
T = torch.tensor([5])
print(model.viterbi(x,T)) # should have lower probability---in English only "u" follows "q"
```
([[30, 53, 6, 16, 57]], tensor([[-15.3841]], device='cuda:0', grad_fn=<GatherBackward>))
([[30, 53, 42, 61, 57]], tensor([[-12.1762]], device='cuda:0', grad_fn=<GatherBackward>))
([[30, 53, 15, 16, 57]], tensor([[-18.2916]], device='cuda:0', grad_fn=<GatherBackward>))
([[30, 53, 42, 61, 57]], tensor([[-20.4187]], device='cuda:0', grad_fn=<GatherBackward>))
## Conclusion
HMMs used to be very popular in natural language processing, but they have largely been overshadowed by neural network models like RNNs and Transformers. Still, it is fun and instructive to study the HMM; some commonly used machine learning techniques like [Connectionist Temporal Classification](https://www.cs.toronto.edu/~graves/icml_2006.pdf) are inspired by HMM methods. HMMs are [still used in conjunction with neural networks in speech recognition](https://arxiv.org/abs/1811.07453), where the assumption of a one-hot state makes sense for modelling phonemes, which are spoken one at a time.
## Acknowledgments
This notebook is based partly on Lawrence Rabiner's excellent article "[A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition](https://www.cs.cmu.edu/~cga/behavior/rabiner1.pdf)", which you may also like to check out. Thanks also to Dima Serdyuk and Kyle Gorman for their feedback on the draft.
| 80845aae100fdc9b59dee8c3575a3a6bdedad4ba | 105,932 | ipynb | Jupyter Notebook | HMM.ipynb | boru-roylu/pytorch_HMM | 024274a0d73653816406b70f1d8308a1d0260b0a | [
"Apache-2.0"
] | null | null | null | HMM.ipynb | boru-roylu/pytorch_HMM | 024274a0d73653816406b70f1d8308a1d0260b0a | [
"Apache-2.0"
] | null | null | null | HMM.ipynb | boru-roylu/pytorch_HMM | 024274a0d73653816406b70f1d8308a1d0260b0a | [
"Apache-2.0"
] | null | null | null | 36.006798 | 708 | 0.442189 | true | 23,347 | Qwen/Qwen-72B | 1. YES
2. YES | 0.822189 | 0.743168 | 0.611025 | __label__eng_Latn | 0.712314 | 0.257945 |
```python
%load_ext sympyprinting
%matplotlib inline
import matplotlib.pyplot as plt
import sympy
from IPython.display import display
sympy.init_printing(use_unicode=False, wrap_line=False, no_global=True)
import scipy.constants
import numpy as np
```
/home/ash/anaconda2/envs/python3/lib/python3.5/site-packages/IPython/extensions/sympyprinting.py:31: UserWarning: The sympyprinting extension has moved to `sympy`, use `from sympy import init_printing; init_printing()`
warnings.warn("The sympyprinting extension has moved to `sympy`, "
# Equation of motion - SDE to be solved
### $\ddot{q}(t) + \Gamma_0\dot{q}(t) + \Omega_0^2 q(t) - \dfrac{1}{m} F(t) = 0 $
#### where q = x, y or z
Where $F(t) = \mathcal{F}_{fluct}(t) + F_{feedback}(t)$
Taken from page 46 of 'Dynamics of optically levitated nanoparticles in high vacuum' - Thesis by Jan Gieseler
Using $\mathcal{F}_{fluct}(t) = \sqrt{2m \Gamma_0 k_B T_0}\dfrac{dW(t)}{dt}$
and $F_{feedback}(t) = \Omega_0 \eta q^2 \dot{q}$
Taken from page 49 of 'Dynamics of optically levitated nanoparticles in high vacuum' - Thesis by Jan Gieseler
we get the following SDE:
$\dfrac{d^2q(t)}{dt^2} + (\Gamma_0 - \Omega_0 \eta q(t)^2)\dfrac{dq(t)}{dt} + \Omega_0^2 q(t) - \sqrt{\dfrac{2\Gamma_0 k_B T_0}{m}} \dfrac{dW(t)}{dt} = 0$
Split into two first-order ODEs/SDEs:
letting $v = \dfrac{dq}{dt}$
$\dfrac{dv(t)}{dt} + (\Gamma_0 - \Omega_0 \eta q(t)^2)v + \Omega_0^2 q(t) - \sqrt{\dfrac{2\Gamma_0 k_B T_0}{m}} \dfrac{dW(t)}{dt} = 0$
therefore
$\dfrac{dv(t)}{dt} = -(\Gamma_0 - \Omega_0 \eta q(t)^2)v - \Omega_0^2 q(t) + \sqrt{\dfrac{2\Gamma_0 k_B T_0}{m}} \dfrac{dW(t)}{dt} $
$v = \dfrac{dq}{dt}$ therefore $dq = v~dt$
\begin{align}
dq&=v\,dt\\
dv&=[-(\Gamma_0-\Omega_0 \eta q(t)^2)v(t) - \Omega_0^2 q(t)]\,dt + \sqrt{\frac{2\Gamma_0 k_B T_0}m}\,dW
\end{align}
### Apply Milstein Method to solve
Consider the autonomous Itō stochastic differential equation
${\mathrm {d}}X_{t}=a(X_{t})\,{\mathrm {d}}t+b(X_{t})\,{\mathrm {d}}W_{t}$
Taking $X_t = q_t$ for the 1st equation above (i.e. $dq = v~dt$) we get:
$$ a(q_t) = v $$
$$ b(q_t) = 0 $$
Taking $X_t = v_t$ for the 2nd equation above (i.e. $dv = ...$) we get:
$$a(v_t) = -(\Gamma_0-\Omega_0\eta q(t)^2)v - \Omega_0^2 q(t)$$
$$b(v_t) = \sqrt{\dfrac{2\Gamma_0 k_B T_0}m}$$
Since $b'(v_{t})=0$, the diffusion term does not depend on $v_{t}$, so Milstein's method in this case is equivalent to the Euler–Maruyama method.
We then construct these functions in python:
```python
def a_q(t, v, q):
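# drift of the position equation: dq = v*dt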
return v
def a_v(t, v, q):
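# deterministic drift of the velocity equation: damping/feedback term plus the harmonic restoring force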
return -(Gamma0 - Omega0*eta*q**2)*v - Omega0**2*q
def b_v(t, v, q):
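# diffusion amplitude of the thermal noise term: sqrt(2*Gamma0*k_B*T_0/m)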
return np.sqrt(2*Gamma0*k_b*T_0/m)
```
Using values obtained from fitting to data from a real particle, we set the following constants describing the system. Cooling can be switched off by setting $\eta = 0$.
```python
Gamma0 = 4000 # radians/second
Omega0 = 75e3*2*np.pi # radians/second
eta = 0.5e7
T_0 = 300 # K
k_b = scipy.constants.Boltzmann # J/K
m = 3.1e-19 # KG
```
partition the interval [0, T] into N equal subintervals of width $\Delta t>0$:
$ 0=\tau _{0}<\tau _{1}<\dots <\tau _{N}=T{\text{ with }}\tau _{n}:=n\Delta t{\text{ and }}\Delta t={\frac {T}{N}}$
```python
dt = 1e-10
tArray = np.arange(0, 100e-6, dt)
```
```python
print("{} Hz".format(1/dt))
```
10000000000.0 Hz
set $Y_{0}=x_{0}$
```python
q0 = 0
v0 = 0
q = np.zeros_like(tArray)
v = np.zeros_like(tArray)
q[0] = q0
v[0] = v0
```
Generate independent and identically distributed normal random variables with expected value 0 and variance dt
```python
np.random.seed(88)
dwArray = np.random.normal(0, np.sqrt(dt), len(tArray)) # independent and identically distributed normal random variables with expected value 0 and variance dt
```
Apply Milstein's method (Euler Maruyama if $b'(Y_{n}) = 0$ as is the case here):
recursively define $Y_{n}$ for $ 1\leq n\leq N $ by
$ Y_{{n+1}}=Y_{n}+a(Y_{n})\Delta t+b(Y_{n})\Delta W_{n}+{\frac {1}{2}}b(Y_{n})b'(Y_{n})\left((\Delta W_{n})^{2}-\Delta t\right)$
Perform this for the 2 first order differential equations:
```python
#%%timeit
for n, t in enumerate(tArray[:-1]):
dw = dwArray[n]
v[n+1] = v[n] + a_v(t, v[n], q[n])*dt + b_v(t, v[n], q[n])*dw + 0
q[n+1] = q[n] + a_q(t, v[n], q[n])*dt + 0
```
We now have an array of positions, $q$, and velocities, $v$, as functions of time $t$.
```python
plt.plot(tArray*1e6, v)
plt.xlabel("t (us)")
plt.ylabel("v")
```
```python
plt.plot(tArray*1e6, q)
plt.xlabel("t (us)")
plt.ylabel("q")
```
Alternatively, we can use a derivative-free version of Milstein's method, implemented as a two-stage Runge-Kutta-like scheme, documented on Wikipedia (https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_method_%28SDE%29) or in the original paper on arXiv: https://arxiv.org/pdf/1210.0933.pdf.
```python
q0 = 0
v0 = 0
X = np.zeros([len(tArray), 2])
X[0, 0] = q0
X[0, 1] = v0
```
```python
def a(t, X):
q, v = X
return np.array([v, -(Gamma0 - Omega0*eta*q**2)*v - Omega0**2*q])
def b(t, X):
q, v = X
return np.array([0, np.sqrt(2*Gamma0*k_b*T_0/m)])
```
```python
%%timeit
S = np.array([-1,1])
for n, t in enumerate(tArray[:-1]):
dw = dwArray[n]
K1 = a(t, X[n])*dt + b(t, X[n])*(dw - S*np.sqrt(dt))
Xh = X[n] + K1
K2 = a(t, Xh)*dt + b(t, Xh)*(dw + S*np.sqrt(dt))
X[n+1] = X[n] + 0.5 * (K1+K2)
```
1 loop, best of 3: 2.33 s per loop
```python
q = X[:, 0]
v = X[:, 1]
```
```python
plt.plot(tArray*1e6, v)
plt.xlabel("t (us)")
plt.ylabel("v")
```
```python
plt.plot(tArray*1e6, q)
plt.xlabel("t (us)")
plt.ylabel("q")
```
The form of $F_{feedback}(t)$ is still questionable
On page 49 of 'Dynamics of optically levitated nanoparticles in high vacuum' - Thesis by Jan Gieseler he uses the form: $F_{feedback}(t) = \Omega_0 \eta q^2 \dot{q}$
On page 2 of 'Parametric feedback cooling of levitated optomechanics in a parabolic mirror trap' (paper by Jamie and Muddassar) they use the form: $F_{feedback}(t) = \dfrac{\Omega_0 \eta q^2 \dot{q}}{q_0^2}$ where $q_0$ is the amplitude of the motion: $q(t) = q_0\sin(\omega_0 t)$
However it always shows up as a term $\delta \Gamma$ like so:
$\dfrac{d^2q(t)}{dt^2} + (\Gamma_0 + \delta \Gamma)\dfrac{dq(t)}{dt} + \Omega_0^2 q(t) - \sqrt{\dfrac{2\Gamma_0 k_B T_0}{m}} \dfrac{dW(t)}{dt} = 0$
By fitting to data we extract the following 3 parameters:
1) $A = \gamma^2 \dfrac{k_B T_0}{\pi m}\Gamma_0 $
Where:
- $\gamma$ is the conversion factor between volts and nanometres. This parameterises the amount of light / number of photons collected from the nanoparticle. With unchanged alignment and the same particle this should remain constant with changes in pressure.
- $m$ is the mass of the particle, a constant
- $T_0$ is the temperature of the environment
- $\Gamma_0$ the damping due to the environment only
2) $\Omega_0$ - the natural frequency at this trapping power
3) $\Gamma$ - the total damping on the system including environment and feedback etc...
By taking a reference save with no cooling we have $\Gamma = \Gamma_0$, and therefore we can extract $A' = \gamma^2 \dfrac{k_B T_0}{\pi m}$. Since $A'$ should be constant with pressure, we can extract $\Gamma_0$ at any pressure (given a reference save and therefore a value of $A'$), and from that extract $\delta \Gamma$, the damping due to cooling. We can then plug this into our SDE in order to include cooling in the SDE model.
For any dataset at any pressure we can do:
$\Gamma_0 = \dfrac{A}{A'}$
And then $\delta \Gamma = \Gamma - \Gamma_0$
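As a quick numerical sketch of this bookkeeping (the fitted numbers below are illustrative placeholders, not real fit results):
```python
# Reference save, cooling off, so Gamma = Gamma0 there
A_ref = 3.2e-11       # fitted A for the reference save (illustrative value)
Gamma_ref = 4000.0    # fitted total damping of the reference save = Gamma0 at that pressure

A_prime = A_ref / Gamma_ref   # A' = gamma^2 * k_B * T_0 / (pi * m), constant with pressure

# Any other save, e.g. at lower pressure with cooling on (illustrative values)
A_meas = 1.2e-13
Gamma_meas = 2215.0

Gamma0 = A_meas / A_prime         # environmental damping at this pressure
deltaGamma = Gamma_meas - Gamma0  # extra damping due to the feedback cooling
print(Gamma0, deltaGamma)         # ~15, ~2200 with these placeholder numbers
```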
Using this form and the same derivation as above we arrive at the following form of the 2 1st order differential equations:
\begin{align}
dq&=v\,dt\\
dv&=[-(\Gamma_0 + \delta \Gamma)v(t) - \Omega_0^2 q(t)]\,dt + \sqrt{\frac{2\Gamma_0 k_B T_0}m}\,dW
\end{align}
```python
def a_q(t, v, q):
return v
def a_v(t, v, q):
return -(Gamma0 + deltaGamma)*v - Omega0**2*q
def b_v(t, v, q):
return np.sqrt(2*Gamma0*k_b*T_0/m)
```
The values below are taken from a ~1e-2 mbar cooled save.
```python
Gamma0 = 15 # radians/second
deltaGamma = 2200
Omega0 = 75e3*2*np.pi # radians/second
eta = 0.5e7
T_0 = 300 # K
k_b = scipy.constants.Boltzmann # J/K
m = 3.1e-19 # KG
```
```python
dt = 1e-10
tArray = np.arange(0, 100e-6, dt)
```
```python
q0 = 0
v0 = 0
q = np.zeros_like(tArray)
v = np.zeros_like(tArray)
q[0] = q0
v[0] = v0
```
```python
np.random.seed(88)
dwArray = np.random.normal(0, np.sqrt(dt), len(tArray)) # independent and identically distributed normal random variables with expected value 0 and variance dt
```
```python
for n, t in enumerate(tArray[:-1]):
dw = dwArray[n]
v[n+1] = v[n] + a_v(t, v[n], q[n])*dt + b_v(t, v[n], q[n])*dw + 0
q[n+1] = q[n] + a_q(t, v[n], q[n])*dt + 0
```
```python
plt.plot(tArray*1e6, v)
plt.xlabel("t (us)")
plt.ylabel("v")
```
```python
plt.plot(tArray*1e6, q)
plt.xlabel("t (us)")
plt.ylabel("q")
```
```python
```
```python
```
| fd94de9a258df8adcba42f99b5d71325cb054a7d | 149,068 | ipynb | Jupyter Notebook | SDE_Solution_Derivation.ipynb | markusrademacher/DataHandling | 240c7c8378541cc2624fec049a185646f3016233 | [
"MIT"
] | 2 | 2017-07-12T11:18:51.000Z | 2018-08-26T10:31:00.000Z | SDE_Solution_Derivation.ipynb | markusrademacher/DataHandling | 240c7c8378541cc2624fec049a185646f3016233 | [
"MIT"
] | 7 | 2017-04-24T18:42:23.000Z | 2017-06-20T13:00:09.000Z | SDE_Solution_Derivation.ipynb | AshleySetter/optoanalysis | 2b24a4176508d5e0e5e8644bb617a34f73b041f7 | [
"MIT"
] | 3 | 2017-04-09T19:15:06.000Z | 2017-04-28T09:31:32.000Z | 188.45512 | 24,848 | 0.898322 | true | 3,263 | Qwen/Qwen-72B | 1. YES
2. YES | 0.882428 | 0.849971 | 0.750038 | __label__eng_Latn | 0.790468 | 0.580922 |
# ATSC 507 Assignment VIII - Due March 3
```python
__author__ = 'Yingkai (Kyle) Sha'
__email__ = 'yingkai@eos.ubc.ca'
```
```python
import numpy as np
import sympy as sp
import matplotlib.pyplot as plt
from __future__ import division
% matplotlib inline
```
#Content
1. [**Known conditions**](#Known-conditions)
1. [**Numerical scheme**](#Numerical-scheme)
1. [**Results on the first timestep**](#Results-on-the-first-timestep)
1. [**Comparisons for 15 timesteps**](#Comparisons-for-15-timesteps)
# Known conditions
I repeat the problem to make it clear:
> Based on HW8 content, we are trying to solve $T(t)$ in:
$$
\frac{\partial T}{\partial t} = 3-2.25t-\frac{1.5T}{2-1.5t}
$$
And mean while we already know the analytical solution is:
$$
T = - 2.25t^2 + \left( 6 - 1.5T_0 \right)t + 2T_0 - 4
$$
with $t=m\Delta t$ and initial condition $T_0$.
> The problem requires starting with the zero-dimensional case $T_0 = 2^\circ C$ for one time step ($m=1$), using `Euler forward` and the family of `Runge-Kutta` methods.
```python
def analytical(t, T0):
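# analytical solution T(t) = -2.25*t**2 + (6 - 1.5*T0)*t + 2*T0 - 4, in factored form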
return (1.5*t+T0-2)*(2-1.5*t)
def slop_T(t, Tm):
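# right-hand side of the ODE: dT/dt = 3 - 2.25*t - 1.5*T/(2 - 1.5*t)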
return 1.5*(2-1.5*t-(Tm/(2-1.5*t)))
```
# Numerical scheme
Here are 4 functions for testing:
* Euler_forward: `Euler forward` temporal discretization
* RK2: `2nd order Runge-Kutta`
* RK3: `3rd order Runge-Kutta`
* RK4: `4th order Runge-Kutta`
```python
# delta_t: step-length
# m: number of steps
# T0: Tref in Roland's equation
# Euler forward
def Euler_forward(delta_t, m, T0):
time_steps = np.arange(0, (m+1)*delta_t, delta_t)
T = np.empty(m); Tm = T0; delta_t = np.array(delta_t)
for i in range(m):
t = time_steps[i]
T[i] = Tm + delta_t * slop_T(t, Tm)
Tm = T[i]
return T
# 2nd order Runge-Kutta
def RK2(delta_t, m, T0):
time_steps = np.arange(0, (m+1)*delta_t, delta_t)
T = np.empty(m); Tm = T0; delta_t = np.array(delta_t)
for i in range(m):
t = time_steps[i]
T1 = Tm + 0.5*delta_t * slop_T(t, Tm)
T[i] = Tm + delta_t * slop_T(t+0.5*delta_t, T1)
Tm = T[i]
return T
# 3rd order Runge-Kutta
def RK3(delta_t, m, T0):
time_steps = np.arange(0, (m+1)*delta_t, delta_t)
T = np.empty(m); Tm = T0; delta_t = np.array(delta_t)
for i in range(m):
t = time_steps[i]
T1 = Tm + delta_t/3 * slop_T(t, Tm)
T2 = Tm + delta_t/2 * slop_T(t+delta_t/3, T1)
T[i] = Tm + delta_t * slop_T(t+0.5*delta_t, T2)
Tm = T[i]
return T
# 4th order Runge-Kutta
def RK4(delta_t, m, T0):
time_steps = np.arange(0, (m+1)*delta_t, delta_t)
T = np.empty(m); Tm = T0; delta_t = np.array(delta_t)
for i in range(m):
t = time_steps[i]
C1 = slop_T(t, Tm)
C2 = slop_T(t+0.5*delta_t, Tm+0.5*delta_t*C1)
C3 = slop_T(t+0.5*delta_t, Tm+0.5*delta_t*C2)
C4 = slop_T(t+delta_t, Tm+delta_t*C3)
T[i] = Tm + delta_t * (C1+2*C2+2*C3+C4)/6
Tm = T[i]
return T
```
The analytical solution at the corresponding timesteps, for comparison.
```python
def analytical_sln(delta_t, m, T0):
time_steps = np.arange(delta_t, (m+1)*delta_t, delta_t)
return analytical(time_steps, T0)
```
# Results on the first timestep
Here I set $\Delta t = 1s$, $T_0 = 2^\circ C$ for the assignment.
```python
T0 = 2
delta_t = 1; m=1
```
```python
results = [Euler_forward(delta_t, m, T0), RK2(delta_t, m, T0), RK3(delta_t, m, T0), RK4(delta_t, m, T0)]
true_result = analytical_sln(delta_t, m, T0)
print('Euler_forward: {}\nRK2: {}\nRK3:{}\nRK4: {}\ntrue_result: {}'.format(results[0], results[1], results[2], results[3], true_result))
```
Euler_forward: [ 3.5]
RK2: [ 0.575]
RK3:[ 1.625]
RK4: [ 0.845]
true_result: [ 0.75]
We see that `Euler forward` and `3rd order R.-K.` do not perform well on the first step, while the `2nd/4th order R.-K. methods` give a better estimate of the first step.
# Comparisons for 15 timesteps
The equation in HW8 should be stable for all the methods above, so here I integrate 15 timesteps to see which method approaches the analytical values faster.
The `3rd order R.-K. method` will not be applied, since on the second time step $T^{**}$ is evaluated using $t = 1+\frac13$, which makes the term $\frac{1.5T}{2-1.5t}$ in `slop_T` encounter a division-by-zero problem.
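A quick check of the offending stage time (assuming $\Delta t = 1$ as above):
```python
# On the second step (t = 1), RK3's second stage evaluates slop_T at t + delta_t/3
t_stage = 1 + 1/3
print(2 - 1.5*t_stage)  # evaluates to (numerically) zero, so 1.5*T/(2 - 1.5*t) blows up
```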
```python
m = 15; t = range(1, m+1)
fig = plt.figure(figsize=(12, 6))
ax = fig.gca()
ax.plot(t, Euler_forward(delta_t, m, T0), linewidth=2, label='Euler forward')
ax.plot(t, RK2(delta_t, m, T0), linewidth=2, label='2nd order Runge-Kutta')
#ax.plot(t, RK3(delta_t, m, T0), linewidth=2, label='3rd order Runge-Kutta')
ax.plot(t, RK4(delta_t, m, T0), linewidth=2, label='4th order Runge-Kutta')
ax.plot(t, analytical_sln(delta_t, m, T0), color='k', linewidth=2, label='Analytical solution')
ax.legend(loc=3); ax.grid(); ax.set_xlim(1, m)
ax.set_xlabel('Timesteps ( m )', fontsize=12)
ax.set_title('Comparisons among Euler forward, and methods in R.-K. family', fontweight='bold', fontsize=14)
```
We see that `Euler forward` does not perform well on its first step, but after the second step it does much better.
| 28b49e3857a0aa0592fe8ce275248f98b6098666 | 67,059 | ipynb | Jupyter Notebook | ATSC_507/ATSC_507_Assignment_VIII.ipynb | yingkaisha/Homework | fff00fb5a41513e0edf2b1f8d8a74687a1db7120 | [
"Unlicense"
] | 1 | 2021-02-17T23:19:36.000Z | 2021-02-17T23:19:36.000Z | ATSC_507/ATSC_507_Assignment_VIII.ipynb | yingkaisha/homework | fff00fb5a41513e0edf2b1f8d8a74687a1db7120 | [
"Unlicense"
] | null | null | null | ATSC_507/ATSC_507_Assignment_VIII.ipynb | yingkaisha/homework | fff00fb5a41513e0edf2b1f8d8a74687a1db7120 | [
"Unlicense"
] | null | null | null | 68.707992 | 214 | 0.78082 | true | 1,799 | Qwen/Qwen-72B | 1. YES
2. YES | 0.877477 | 0.897695 | 0.787707 | __label__eng_Latn | 0.665729 | 0.668439 |
# Surge hull equation
```python
%load_ext autoreload
%autoreload 2
%matplotlib inline
```
```python
import sympy as sp
from sympy.plotting import plot as plot
from sympy.plotting import plot3d as plot3d
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
sp.init_printing()
from IPython.core.display import HTML
```
```python
import seaman.helpers
import seaman_symbol as ss
import surge_hull_equations as equations
import surge_hull_lambda_functions as lambda_functions
from bis_system import BisSystem
```
## Coordinate system
## Symbols
```python
from seaman_symbols import *
```
```python
HTML(ss.create_html_table(symbols=equations.surge_hull_equation_SI.free_symbols))
```
## Surge equation
```python
sp.latex(equations.surge_hull_equation)
```
```python
equations.surge_hull_equation_SI
```
### Plotting the total surge hull equation
```python
df = pd.DataFrame()
df['v_w'] = np.linspace(-0.3,0.3,20)
df['r_w'] = 0.0
df['rho'] = 1025
df['L'] = 1.0
df['g'] = 9.81
df['X_vv'] = -1.0
df['X_vr'] = 1.0
df['X_rr'] = 1.0
df['X_res'] = 0.0
df['disp'] = 23
result = df.copy()
result['fx'] = lambda_functions.X_h_function(**df)
result.plot(x = 'v_w',y = 'fx');
```
### Plotting with coefficients from a real seaman ship model
```python
import generate_input
shipdict = seaman.ShipDict.load('../../tests/test_ship.ship')
```
```python
df = pd.DataFrame()
df['v_w'] = np.linspace(-0.3,0.3,20)
df['r_w'] = 0.0
df['rho'] = 1025
df['g'] = 9.81
df['X_res'] = 0.0
df_input = generate_input.add_shipdict_inputs(lambda_function=lambda_functions.X_h_function,
shipdict = shipdict,
df = df)
df_input
```
```python
result = df_input.copy()
result['fx'] = lambda_functions.X_h_function(**df_input)
```
```python
result.plot(x = 'v_w',y = 'fx');
```
## Real seaman++
Run real seaman in C++ to verify that the documented model is correct.
```python
import run_real_seaman
```
```python
df = pd.DataFrame()
df['v_w'] = np.linspace(-0.3,0.3,20)
df['r_w'] = 0.0
df['rho'] = 1025
df['g'] = 9.81
df['X_res'] = 0.0
result_comparison = run_real_seaman.compare_with_seaman(lambda_function=lambda_functions.X_h_function,
shipdict = shipdict,
df = df,label='fx',
seaman_function=run_real_seaman.calculate_resistance)
fig,ax = plt.subplots()
result_comparison.plot(x = 'v_w',y = ['fx','fx_seaman'],ax = ax)
ax.set_title('Drift angle variation');
```
```python
df = pd.DataFrame()
df['r_w'] = np.linspace(-0.3,0.3,20)
df['v_w'] = 0.0
df['rho'] = 1025
df['g'] = 9.81
df['X_res'] = 0.0
shipdict2 = shipdict.copy()
shipdict2.resistance_data['xrr'] = -0.01
result_comparison = run_real_seaman.compare_with_seaman(lambda_function=lambda_functions.X_h_function,
shipdict = shipdict2,
df = df,label='fx',
seaman_function=run_real_seaman.calculate_resistance)
fig,ax = plt.subplots()
result_comparison.plot(x = 'r_w',y = ['fx','fx_seaman'],ax = ax)
ax.set_title('Yaw rate variation');
```
```python
df = pd.DataFrame()
df['v_w'] = np.linspace(-0.3,0.3,20)
df['r_w'] = 0.01
df['rho'] = 1025
df['g'] = 9.81
df['X_res'] = 0.0
result_comparison = run_real_seaman.compare_with_seaman(lambda_function=lambda_functions.X_h_function,
shipdict = shipdict,
df = df,label='fx',
seaman_function=run_real_seaman.calculate_resistance)
fig,ax = plt.subplots()
result_comparison.plot(x = 'v_w',y = ['fx','fx_seaman'],ax = ax)
ax.set_title('Drift angle variation, yaw rate = 0.01 rad/s');
```
```python
df = pd.DataFrame()
df['r_w'] = np.linspace(-0.3,0.3,20)
df['v_w'] = 0.01
df['rho'] = 1025
df['g'] = 9.81
df['X_res'] = 0.0
result_comparison = run_real_seaman.compare_with_seaman(lambda_function=lambda_functions.X_h_function,
shipdict = shipdict,
df = df,label='fx',
seaman_function=run_real_seaman.calculate_resistance)
fig,ax = plt.subplots()
result_comparison.plot(x = 'r_w',y = ['fx','fx_seaman'],ax = ax)
ax.set_title('Yaw rate variation');
```
```python
import save_lambda_functions
function_name = 'X_h_function'
lambda_function = lambda_functions.X_h_function
save_lambda_functions.save_lambda_to_python_file(lambda_function = lambda_function,
function_name = function_name)
save_lambda_functions.save_lambda_to_matlab_file(lambda_function = lambda_function,
function_name = function_name)
```
| 445b7c8619e72fbb29b9ff3d1d1779bdd61d2cb5 | 9,088 | ipynb | Jupyter Notebook | docs/seaman/05.1_seaman_surge_hull_equation.ipynb | martinlarsalbert/wPCC | 16e0d4cc850d503247916c9f5bd9f0ddb07f8930 | [
"MIT"
] | null | null | null | docs/seaman/05.1_seaman_surge_hull_equation.ipynb | martinlarsalbert/wPCC | 16e0d4cc850d503247916c9f5bd9f0ddb07f8930 | [
"MIT"
] | null | null | null | docs/seaman/05.1_seaman_surge_hull_equation.ipynb | martinlarsalbert/wPCC | 16e0d4cc850d503247916c9f5bd9f0ddb07f8930 | [
"MIT"
] | null | null | null | 26.114943 | 118 | 0.492848 | true | 1,315 | Qwen/Qwen-72B | 1. YES
2. YES | 0.896251 | 0.853913 | 0.76532 | __label__eng_Latn | 0.24235 | 0.616428 |
# Word Embeddings: Intro to CBOW model, activation functions and working with Numpy
In this lecture notebook you will be given an introduction to the continuous bag-of-words model, its activation functions and some considerations when working with Numpy.
Let's dive into it!
```python
import numpy as np
```
# The continuous bag-of-words model
The CBOW model is based on a neural network, the architecture of which looks like the figure below, as you'll recall from the lecture.
<div style="text-align:center;"> Figure 1: Architecture of the CBOW model </div>
## Activation functions
Let's start by implementing the activation functions, ReLU and softmax.
### ReLU
ReLU is used to calculate the values of the hidden layer, in the following formulas:
\begin{align}
\mathbf{z_1} &= \mathbf{W_1}\mathbf{x} + \mathbf{b_1} \tag{1} \\
\mathbf{h} &= \mathrm{ReLU}(\mathbf{z_1}) \tag{2} \\
\end{align}
Let's fix a value for $\mathbf{z_1}$ as a working example.
```python
# Define a random seed so all random outcomes can be reproduced
np.random.seed(10)
# Define a 5X1 column vector using numpy
z_1 = 10*np.random.rand(5, 1)-5
# Print the vector
z_1
```
array([[ 2.71320643],
[-4.79248051],
[ 1.33648235],
[ 2.48803883],
[-0.01492988]])
Notice that numpy's `random.rand` function returns a numpy array filled with values taken from a uniform distribution over [0, 1). Numpy allows vectorization, so each value is multiplied by 10 and then 5 is subtracted.
To get the ReLU of this vector, you want all the negative values to become zeros.
First create a copy of this vector.
```python
# Create copy of vector and save it in the 'h' variable
h = z_1.copy()
```
Now determine which of its values are negative.
```python
# Determine which values met the criteria (this is possible because of vectorization)
h < 0
```
array([[False],
[ True],
[False],
[False],
[ True]])
You can now simply set all of the values which are negative to 0.
```python
# Slice the array or vector. This is the same as applying ReLU to it
h[h < 0] = 0
```
And that's it: you have the ReLU of $\mathbf{z_1}$!
```python
# Print the vector after ReLU
h
```
array([[2.71320643],
[0. ],
[1.33648235],
[2.48803883],
[0. ]])
**Now implement ReLU as a function.**
```python
# Define the 'relu' function that will include the steps previously seen
def relu(z):
result = z.copy()
result[result < 0] = 0
return result
```
**And check that it's working.**
```python
# Define a new vector and save it in the 'z' variable
z = np.array([[-1.25459881], [ 4.50714306], [ 2.31993942], [ 0.98658484], [-3.4398136 ]])
# Apply ReLU to it
relu(z)
```
array([[0. ],
[4.50714306],
[2.31993942],
[0.98658484],
[0. ]])
Expected output:
array([[0. ],
[4.50714306],
[2.31993942],
[0.98658484],
[0. ]])
### Softmax
The second activation function that you need is softmax. This function is used to calculate the values of the output layer of the neural network, using the following formulas:
\begin{align}
\mathbf{z_2} &= \mathbf{W_2}\mathbf{h} + \mathbf{b_2} \tag{3} \\
\mathbf{\hat y} &= \mathrm{softmax}(\mathbf{z_2}) \tag{4} \\
\end{align}
To calculate softmax of a vector $\mathbf{z}$, the $i$-th component of the resulting vector is given by:
$$ \textrm{softmax}(\textbf{z})_i = \frac{e^{z_i} }{\sum\limits_{j=1}^{V} e^{z_j} } \tag{5} $$
Let's work through an example.
```python
# Define a new vector and save it in the 'z' variable
z = np.array([9, 8, 11, 10, 8.5])
# Print the vector
z
```
array([ 9. , 8. , 11. , 10. , 8.5])
You'll need to calculate the exponentials of each element, both for the numerator and for the denominator.
```python
# Save exponentials of the values in a new vector
e_z = np.exp(z)
# Print the vector with the exponential values
e_z
```
array([ 8103.08392758, 2980.95798704, 59874.1417152 , 22026.46579481,
4914.7688403 ])
The denominator is equal to the sum of these exponentials.
```python
# Save the sum of the exponentials
sum_e_z = np.sum(e_z)
# Print sum of exponentials
sum_e_z
```
97899.41826492078
And the value of the first element of $\textrm{softmax}(\textbf{z})$ is given by:
```python
# Print softmax value of the first element in the original vector
e_z[0]/sum_e_z
```
0.08276947985173956
This is for one element. You can use numpy's vectorized operations to calculate the values of all the elements of the $\textrm{softmax}(\textbf{z})$ vector in one go.
**Implement the softmax function.**
```python
# Define the 'softmax' function that will include the steps previously seen
def softmax(z):
e_z = np.exp(z)
sum_e_z = np.sum(e_z)
return e_z / sum_e_z
```
**Now check that it works.**
```python
# Print softmax values for original vector
softmax([9, 8, 11, 10, 8.5])
```
array([0.08276948, 0.03044919, 0.61158833, 0.22499077, 0.05020223])
Expected output:
array([0.08276948, 0.03044919, 0.61158833, 0.22499077, 0.05020223])
Notice that the sum of all these values is equal to 1.
```python
# Assert that the sum of the softmax values is equal to 1
np.sum(softmax([9, 8, 11, 10, 8.5])) == 1
```
True
## Dimensions: 1-D arrays vs 2-D column vectors
Before moving on to implement forward propagation, backpropagation, and gradient descent in the next lecture notebook, let's have a look at the dimensions of the vectors you've been handling until now.
Create a vector of length $V$ filled with zeros.
```python
# Define V. Remember this was the size of the vocabulary in the previous lecture notebook
V = 5
# Define vector of length V filled with zeros
x_array = np.zeros(V)
# Print vector
x_array
```
array([0., 0., 0., 0., 0.])
This is a 1-dimensional array, as revealed by the `.shape` property of the array.
```python
# Print vector's shape
x_array.shape
```
(5,)
To perform matrix multiplication in the next steps, you actually need your column vectors to be represented as a matrix with one column. In numpy, this matrix is represented as a 2-dimensional array.
The easiest way to convert a 1D vector to a 2D column matrix is to set its `.shape` property to the number of rows and one column, as shown in the next cell.
```python
# Copy vector
x_column_vector = x_array.copy()
# Reshape copy of vector
x_column_vector.shape = (V, 1) # alternatively ... = (x_array.shape[0], 1)
# Print vector
x_column_vector
```
array([[0.],
[0.],
[0.],
[0.],
[0.]])
The shape of the resulting "vector" is:
```python
# Print vector's shape
x_column_vector.shape
```
(5, 1)
So you now have a 5x1 matrix that you can use to perform standard matrix multiplication.
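To tie this back to equations (1)–(4), here is a small illustrative sketch (my addition; the hidden size and random weights are assumptions, not values from the lecture) showing how such column vectors flow through the model using the `relu` and `softmax` functions defined above:
```python
# Illustrative forward pass with assumed hidden size N_h = 3 and random weights
N_h = 3
np.random.seed(1)
W1 = np.random.rand(N_h, V)   # (N_h x V)
b1 = np.random.rand(N_h, 1)   # (N_h x 1)
W2 = np.random.rand(V, N_h)   # (V x N_h)
b2 = np.random.rand(V, 1)     # (V x 1)

x = np.zeros((V, 1))          # one-hot-style column vector
x[2] = 1

z1 = np.dot(W1, x) + b1       # equation (1)
h = relu(z1)                  # equation (2)
z2 = np.dot(W2, h) + b2       # equation (3)
y_hat = softmax(z2)           # equation (4)
print(y_hat.shape, np.sum(y_hat))   # (5, 1) and the values sum to 1
```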
**Congratulations on finishing this lecture notebook!** Hopefully you now have a better understanding of the activation functions used in the continuous bag-of-words model, as well as a clearer idea of how to leverage Numpy's power for these types of mathematical computations.
In the next lecture notebook you will get a comprehensive dive into:
- Forward propagation.
- Cross-entropy loss.
- Backpropagation.
- Gradient descent.
**See you next time!**
| e0ea1ef5a23b463f0b1a41869c9ea5ea5600fa76 | 15,938 | ipynb | Jupyter Notebook | NLP/Natural Language Processing with Probabilistic Models/4/NLP_C2_W4_lecture_notebook_model_architecture.ipynb | verneh/datasci | 0d98c7b4f4def3f1d4389e54659d1a62c552bbdd | [
"MIT"
] | 37 | 2021-01-02T01:20:11.000Z | 2021-04-17T06:50:54.000Z | NLP/Natural Language Processing with Probabilistic Models/4/NLP_C2_W4_lecture_notebook_model_architecture.ipynb | verneh/datasci | 0d98c7b4f4def3f1d4389e54659d1a62c552bbdd | [
"MIT"
] | null | null | null | NLP/Natural Language Processing with Probabilistic Models/4/NLP_C2_W4_lecture_notebook_model_architecture.ipynb | verneh/datasci | 0d98c7b4f4def3f1d4389e54659d1a62c552bbdd | [
"MIT"
] | 3 | 2020-07-18T21:42:43.000Z | 2021-03-18T10:22:33.000Z | 22.866571 | 286 | 0.510164 | true | 2,134 | Qwen/Qwen-72B | 1. YES
2. YES | 0.890294 | 0.833325 | 0.741904 | __label__eng_Latn | 0.99033 | 0.562024 |
# Scalar wave equation seismic amplitude derivations
## Green's functions
The wavefield at $(\mathbf{x}, t)$ due to source term $f(\mathbf{x}, t)$ is given by
$$ u(\mathbf{x}, t) = \int \int G(\mathbf{x}, t; \mathbf{x'}, t')f(\mathbf{x'}, t') d\mathbf{x'}dt', $$
where $G(\mathbf{x}, t; \mathbf{x'}, t')$ is the Green's function for the wave equation between $(\mathbf{x'}, t')$ and $(\mathbf{x}, t)$.
The scalar wave equation is
$$ \frac{1}{c^2(\mathbf{x})}\partial_{tt}u(\mathbf{x}, t) - \nabla^2 u(\mathbf{x}, t) = f(\mathbf{x}, t) $$
For the scalar wave equation when the wave speed is constant, the Green's functions are given by
Dimension | Green's function
--- | ---
**1D** | $\frac{c}{2}\Theta(\tau - r/c)$
**2D** | $\frac{c}{2\pi \sqrt{c^2\tau^2 - r^2}}\Theta(\tau - r/c)$
**3D** | $\frac{\delta(\tau-\frac{r}{c})}{4 \pi r}$
where $r = ||\mathbf{x}-\mathbf{x'}||_2$, $\tau=t-t'$ for the causal Green's function or $\tau=t'-t$ for the anticausal Green's function, and $\Theta(x)$ is a step function.
## Direct wave
The expected amplitude of the direct, unscattered wave in a constant wave speed model can thus be computed using the equations above.
For a discretized simulation, with a source that is only active in a single cell of the model (at position $\mathbf{x_s}$), so $f(\mathbf{x}, t) = f(t)\delta(\mathbf{x} - \mathbf{x_s})v$ (where $v$ is the volume of the cell) and $r = ||\mathbf{x}-\mathbf{x_s}||_2$, the amplitudes are thus:
\begin{align}
u_{1D}(\mathbf{x}, t) &= \frac{\Delta x\Delta tc}{2}\sum_{t'=0}^{t - r/c} f(t') \\
u_{2D}(\mathbf{x}, t) &= \frac{\Delta x^2\Delta tc}{2\pi}\sum_{t'=0}^{t - r/c}\frac{f(t')}{\sqrt{c^2(t-t')^2 - r^2}} \approx \mathcal{F^{-1}} \left( \frac{i}{4}H^{(1)}_0\left(\frac{-2\pi \omega r}{c}\right)f(\omega)\Delta x^2 \right) \\
u_{3D}(\mathbf{x}, t) &= \frac{\Delta x^3f(t-\frac{r}{c})}{4 \pi r}
\end{align}
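As an illustrative numerical sketch (my addition, with assumed values for the cell size, time step, wave speed, source-receiver distance, and a Ricker wavelet as the source), the 3D expression is just a delayed, scaled copy of the source time function:
```python
# Sketch: evaluate u_3D(x, t) = dx^3 * f(t - r/c) / (4*pi*r) for assumed values
import numpy as np

dx = 5.0          # assumed cell size [m]
dt = 0.001        # assumed time step [s]
c = 1500.0        # assumed wave speed [m/s]
r = 200.0         # assumed distance [m]

t = np.arange(1000) * dt
fpeak = 25.0      # assumed Ricker peak frequency [Hz]
tau = t - 1.5 / fpeak
f = (1 - 2 * (np.pi * fpeak * tau) ** 2) * np.exp(-(np.pi * fpeak * tau) ** 2)

delay = int(round(r / c / dt))          # time shift by r/c in samples
u_3d = np.zeros_like(t)
u_3d[delay:] = dx ** 3 * f[: len(t) - delay] / (4 * np.pi * r)
print(u_3d.max())
```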
## Scattered wave
To calculate the expected amplitude of a wave that has been scattered by a wave speed anomaly in a single cell (a "point scatterer"), we need to derive the scattering amplitude.
Separating the wave speed into a smooth background ($c_0$, which we will assume varies slowly enough to be considered locally constant) and short wavelength changes ($\Delta c$), and performing a linear expansion around $\Delta c(\mathbf{x})=0$,
$$\frac{1}{c^2(\mathbf{x})} = \frac{1}{(c_0+\Delta c)^2(\mathbf{x})} \approx \frac{1}{c_0^2(\mathbf{x})} - 2 \frac{\Delta c(\mathbf{x})}{c_0^3(\mathbf{x})}.$$
We also assume that the wavefield can be split into a term that only senses the smooth model, and a scattered term,
$$u(\mathbf{x}, t) = u_0(\mathbf{x}, t) + u_{sc}(\mathbf{x}, t).$$
For the scalar wave equation, this gives
$$ \left(\frac{1}{c_0^2(\mathbf{x})}-2\frac{\Delta c(\mathbf{x})}{c_0^3(\mathbf{x})}\right)\partial_{tt}(u_0(\mathbf{x}, t)+u_{sc}(\mathbf{x},t)) - \nabla^2 (u_0(\mathbf{x}, t)+u_{sc}(\mathbf{x},t)) = f(\mathbf{x}, t), $$
and
$$ \frac{1}{c_0^2(\mathbf{x})}\partial_{tt}u_0(\mathbf{x}, t) - \nabla^2 u_0(\mathbf{x}, t) = f(\mathbf{x}, t). $$
Subtracting the second of these from the first gives
$$ \frac{1}{c_0^2(\mathbf{x})}\partial_{tt}u_{sc}(\mathbf{x}, t) - \nabla^2 u_{sc}(\mathbf{x}, t) = 2\frac{\Delta c(\mathbf{x})}{c_0^3(\mathbf{x})}\partial_{tt}u(\mathbf{x}, t). $$
Making the Born approximation, we assume that multiply scattered waves have negligible amplitude. That is, the scattered wavefield $u_{sc}$ is approximated as being only generated by interactions of $u_0$ with scatterers, and further interactions of $u_{sc}$ with the scatterers are ignored. This is expressed by changing $u$ to $u_0$ in the previous equation,
$$ \frac{1}{c_0^2(\mathbf{x})}\partial_{tt}u_{sc}(\mathbf{x}, t) - \nabla^2 u_{sc}(\mathbf{x}, t) = 2\frac{\Delta c(\mathbf{x})}{c_0^3(\mathbf{x})}\partial_{tt}u_0(\mathbf{x}, t). $$
Note that this is in the form of the wave equation, with the right-hand side as the source term. We can use this to calculate the scattered wavefield. In terms of Green's functions,
$$ u_0(\mathbf{x}, t) = \int \int G(\mathbf{x}, t; \mathbf{x'}, t') f(\mathbf{x'}, t') d\mathbf{x'} dt', $$
$$ u_{sc}(\mathbf{x}, t) = 2\int \int G(\mathbf{x}, t; \mathbf{x'}, t') \frac{\Delta c(\mathbf{x'})}{c_0^3(\mathbf{x'})}\partial_{t't'}u_0(\mathbf{x'}, t') d\mathbf{x'} dt'. $$
For a discretized simulation with a single-cell source at $\mathbf{x_s}$ as above, consider a wave speed model that has the constant value $c$ everywhere except at a single cell ($\mathbf{x_p}$) where it is $c+\Delta c$ (a point scatterer), and let $r=||\mathbf{x}-\mathbf{x_p}||_2$. Let $u_{p}(\mathbf{x}, t)$ denote the direct wave that originates from the scatterer location with an amplitude equal to the second time derivative of the direct wave that arrives at that location from the real source (computed with $u_{1D}$, $u_{2D}$, or $u_{3D}$ from the direct wave calculation above). The expected amplitude of the scattered wavefield is then:
\begin{equation}
u^{sc}(\mathbf{x}, t) = \frac{2 \Delta c}{c^3} u_{p}(\mathbf{x}, t).
\end{equation}
## Scatterer image/Gradient
Let's assume we only know the constant background wave speed $c$, and we want to find the location and amplitude of a point scatterer. The only information we have to help us find it is the wavefield recorded on the surface of the model that contains the scatterer, due to sources on the surface, which I will call $d(t)$. This is the typical situation when applying RTM, LSRTM, or FWI.
We'll define a mean squared error cost function that compares the wavefield at the surface when using the constant model $c$ with the data that was provided to us, $d(t)$,
$$ J = \frac{1}{T}||u_{sc}(\mathbf{x_{s}}, t) - d(t)||_2^2, $$
where $\mathbf{x_s}$ are the coordinates of the observation surface.
The derivative of this cost function with respect to changes in the model, where $T$ is the maximum recorded time, is
$$ \frac{\partial J}{\partial \Delta c(\mathbf{x})} = \frac{1}{T}\int_0^T2(u_{sc}(\mathbf{x_{s}}, t) - d(t))\frac{\partial u_{sc}}{\partial \Delta c(\mathbf{x})}dt. $$
Using our result from earlier, we get
$$ \frac{\partial u_{sc}(\mathbf{x_{s}}, t)}{\partial \Delta c(\mathbf{x})} = \frac{2v}{c_0^3(\mathbf{x})}\int_0^t G(\mathbf{x_s}, t; \mathbf{x}, t') \partial_{t't'}u_0(\mathbf{x}, t') dt', $$
where $u_0$ is, as before, the wavefield propagated in the background model, and $v$ is the volume of one cell. So,
$$ \frac{\partial J}{\partial \Delta c(\mathbf{x})} = \frac{1}{T}\int_0^T\frac{4v}{c_0^3(\mathbf{x})}(u_{sc}(\mathbf{x_{s}}, t) - d(t)) \int_0^t G(\mathbf{x_s}, t; \mathbf{x}, t') \partial_{t't'}u_0(\mathbf{x}, t') dt'dt. $$
Since $G(\mathbf{x_s}, t; \mathbf{x}, t') = 0 \forall t'>t$ when using the causal Green's function, the upper limit of the inner integral can be increased to $\infty$ (so it no longer depends on $t$ and thus the order of integration can be switched), and since the model is time invariant and due to source-receiver reciprocity, $G^+(\mathbf{x_s}, t; \mathbf{x}, t') = G^-(\mathbf{x}, t'; \mathbf{x_s}, t)$, we can write this as
$$ \frac{\partial J}{\partial \Delta c(\mathbf{x})} = \frac{1}{T}\int_0^\infty\int_T^0\frac{4v}{c_0^3(\mathbf{x})}(u_{sc}(\mathbf{x_{s}}, t) - d(t)) G^-(\mathbf{x}, t'; \mathbf{x_s}, t) \partial_{t't'}u_0(\mathbf{x}, t') dtdt'. $$
This is a product involving $u_0$ and a new wavefield I will call $u_r$, obtained by propagating the source $u_{sc}(\mathbf{x_{s}}, t) - d(t)$ backward in time:
$$ \frac{\partial J}{\partial\Delta c(\mathbf{x})} = \frac{4}{Tc_0^3(\mathbf{x})}\int u_r(\mathbf{x}, t)\partial_{tt}u_0(\mathbf{x}, t) dt. $$
This is known as the adjoint-state method of calculating the gradient.
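A rough numerical sketch of this expression (my addition, assuming `u0` and `ur` are `(nt, nx)` arrays sampled with time step `dt` from the forward and time-reversed simulations):
```python
# Hypothetical sketch: zero-lag correlation of the adjoint wavefield with the
# second time derivative of the background wavefield, scaled as derived above.
import numpy as np

def gradient_estimate(u0, ur, c0, dt):
    T = u0.shape[0] * dt
    # second time derivative of u0 via repeated central differences
    d2u0 = np.gradient(np.gradient(u0, dt, axis=0), dt, axis=0)
    return 4.0 / (T * c0 ** 3) * np.sum(ur * d2u0, axis=0) * dt
```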
| db6bcbb37f8bad1bfa8d2a1b94d8f6e68d684663 | 9,403 | ipynb | Jupyter Notebook | test/amplitude_derivation.ipynb | vkazei/deepwave | 032bb06328673f4f824fbca20f09ba7bb277c8d1 | [
"MIT"
] | 73 | 2018-07-16T13:57:09.000Z | 2022-03-24T04:08:27.000Z | test/amplitude_derivation.ipynb | vkazei/deepwave | 032bb06328673f4f824fbca20f09ba7bb277c8d1 | [
"MIT"
] | 41 | 2018-07-14T15:44:13.000Z | 2022-03-25T09:35:08.000Z | test/amplitude_derivation.ipynb | vkazei/deepwave | 032bb06328673f4f824fbca20f09ba7bb277c8d1 | [
"MIT"
] | 20 | 2018-12-02T14:42:59.000Z | 2022-03-21T15:52:52.000Z | 72.891473 | 699 | 0.59162 | true | 2,662 | Qwen/Qwen-72B | 1. YES
2. YES | 0.947381 | 0.810479 | 0.767832 | __label__eng_Latn | 0.970027 | 0.622264 |
(c) Juan Gomez 2019. Thanks to Universidad EAFIT for support. This material is part of the course Introduction to Finite Element Analysis
# Piecewise Interpolation
## Introduction
In the previous notebook we introduced Lagrange interpolation as a method to approximate functions in terms of discrete known values of the function. Accordingly, if there were $N$ known values of the function it was possible to propose an interpolating polynomial of order $N-1$. Although this approach is theoretically fine, the specific interpolating polynomial would be problem dependent, making it difficult to automate in a general computer code. In the finite element method this lack of generality is dealt with through the use of sub-domain based interpolation, where the interpolation scheme is implemented for a general sub-domain with a fixed number of discrete points. Accordingly, in this technique the sample of $N$ data points is arranged in sub-domains, say in pairs of two points, and the resulting interpolating polynomial is now piecewise continuous. In this notebook we cover the basics of this sub-domain or element based interpolation scheme. **After completing this notebook you should be able to:**
* Recognize the differences between a global and a local based interpolation scheme.
* Recognize the advantages and disadvantages between global and local interpolation scheme.
* Formulate, and implement in Python, locally based interpolation schemes.
* Recognize the fundamental concept of a finite element as a locally based interpolation scheme.
## Global scheme
Let us consider the same function used for the interpolation problem described in the previous notebook, defined as:
$$ f(x)=x^3+4x^2-10 $$
with $x$ in the interval $\left[ {{-1},{1}} \right]$. Assume also that we know the exact value of the function at the points $[-1, -0.5, 0, 0.5, 1].$ In what follows these points will be called **nodes**. We will compute a single Lagrange interpolating polynomial using the 5 known values of the function at the given nodes. The following block of code shows the steps required in this global scheme.
**(Add comments to clarify the relevant steps in the code below)**
```
#Try using matplotlib inline instead of matplotlib notebook to see the difference
%matplotlib notebook
import matplotlib.pyplot as plt
import numpy as np
from scipy import interpolate
import sympy as sym
```
```
def LagrangPoly(x, order, i, xi=None):
    """i-th Lagrange interpolation polynomial of the given order."""
    # If no interpolation points are given, use symbolic ones x0, x1, ...
    if xi is None:
        xi = sym.symbols('x:%d' % (order+1))
    # Product of (x - x_j)/(x_i - x_j) over all nodes except the i-th one
    index = list(range(order+1))
    index.pop(i)
    return sym.prod([(x-xi[j])/(xi[i]-xi[j]) for j in index])
```
```
fx = lambda x: x**3 + 4.0*x**2 - 10.0
fdx = lambda x: 3*x**2 + 8.0*x
#
npts = 200
xx = np.linspace(-1, 1, npts)
x_data = np.array([-1, -0.5, 0.0, 0.5, 1])
fd = fx(x_data)
plt.figure(0)
yy = fx(xx)
plt.plot(xx, yy, 'r--')
plt.plot([-1, -0.5, 0.0, 0.5, 1], fd, 'ko')
```
<IPython.core.display.Javascript object>
[<matplotlib.lines.Line2D at 0x7fc7c27026a0>]
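For later comparison with the piecewise scheme, here is a short sketch (my addition, not part of the original notebook) that actually builds the single global 4th-order Lagrange interpolant from these 5 nodes:
```
# Sketch: global 4th-order Lagrange interpolant from the 5 nodes, using
# the LagrangPoly function defined above.
x = sym.symbols('x')
nodes = [-1.0, -0.5, 0.0, 0.5, 1.0]
p_global = sym.simplify(sum(float(fd[i])*LagrangPoly(x, 4, i, nodes) for i in range(5)))
p_num = sym.lambdify(x, p_global, 'numpy')
plt.figure()
plt.plot(xx, fx(xx), 'r--', label='$f(x)$')
plt.plot(xx, p_num(xx), label='global $p(x)$')
plt.plot(nodes, fd, 'ko')
plt.legend()
plt.show()
```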
### Locally based Lagrange polynomials
We now split the problem domain, corresponding to $x \in \left[ {{-1},{1}} \right]$, into 4 constant-size sub-domains. Each resulting sub-domain contains two **nodes**, which are used to define linear first-order polynomials.
In the following block of code we compute interpolation polynomials on each of these $\Delta x=0.5$ sub-domains. Note that although each polynomial exists only on a given sub-domain, all of them are exactly the same. This suggests that a more general and powerful implementation should compute the linear polynomials only once and then reuse them in the construction of the approximating function.
**(Add comments to clarify the relevant steps in the code below)**
```
x = sym.symbols('x')
pol = []
pol.append(sym.simplify(LagrangPoly(x, 1, 0, [-1.0, -0.5])))
pol.append(sym.simplify(LagrangPoly(x, 1, 1, [-1.0, -0.5])))
pol.append(sym.simplify(LagrangPoly(x, 1, 0, [-0.5, 0.0])))
pol.append(sym.simplify(LagrangPoly(x, 1, 1, [-0.5, 0.0])))
pol.append(sym.simplify(LagrangPoly(x, 1, 0, [0.0, 0.5])))
pol.append(sym.simplify(LagrangPoly(x, 1, 1, [0.0, 0.5])))
pol.append(sym.simplify(LagrangPoly(x, 1, 0, [0.5, 1.0])))
pol.append(sym.simplify(LagrangPoly(x, 1, 1, [0.5, 1.0])))
```
```
plt.figure()
xx = np.linspace(-1, -0.5, npts)
for k in range(2):
for i in range(npts):
yy[i] = pol[k].subs([(x, xx[i])])
plt.plot(xx, yy)
xx = np.linspace(-0.5, 0.0, npts)
for k in range(2):
for i in range(npts):
yy[i] = pol[k+2].subs([(x, xx[i])])
plt.plot(xx, yy)
#
xx = np.linspace(0.0, 0.5, npts)
for k in range(2):
for i in range(npts):
yy[i] = pol[k+4].subs([(x, xx[i])])
plt.plot(xx, yy)
#
xx = np.linspace(0.5, 1.0, npts)
for k in range(2):
for i in range(npts):
yy[i] = pol[k+6].subs([(x, xx[i])])
plt.plot(xx, yy)
```
<IPython.core.display.Javascript object>
### Interpolating polynomial $p(x)$ to approximate $f(x)$
Now we build the complete approximating polynomial $p(x)$. Since each polynomial is now local at each sub-domain we use:
$$p(x) = {L^L}(x)f({x^L}) + {L^R}(x)f({x^R})$$
where $f({x^L})$ and $f({x^R})$ are the function values at the left and right ends of the sub-domain respectively while ${L^L}(x)$ and ${L^R}(x)$ are the associated first-order polynomials.
Within the context of the finite element method each one of the sub-domains is termed a **finite element**. As such, a finite element is a prescribed sub-domain together with its corresponding local interpolation functions. In this problem the sub-domains consist of two points, therefore defining linear finite elements. However, elements with higher-order variations can also be formulated. This local approach becomes easy to code if the local polynomials are formulated in an auxiliary reference system, as will be described later.
In the following code snippet we approximate the unknown function $f(x)$ in each sub-domain using the two locally based Lagrange polynomials. In the plot the black dots represent the exact or known nodal values of the function, while the continuous line is the exact function.
**Questions:**
**Suggest an approach to improve the approximation to the function $f(x)$ while still using the idea of locally based interpolation polynomials.**
**(Add comments to clarify the relevant steps in the code below)**
```
plt.figure()
plt.grid()
xx = np.linspace(-1.0, -0.5, npts)
for i in range(npts):
yy[i] = fd[0]*pol[0].subs([(x, xx[i])]) + fd[1]*pol[1].subs([(x, xx[i])])
plt.plot(xx, yy)
xx = np.linspace(-0.5, 0.0, npts)
for i in range(npts):
yy[i] = fd[1]*pol[2].subs([(x, xx[i])]) + fd[2]*pol[3].subs([(x, xx[i])])
plt.plot(xx, yy)
xx = np.linspace( 0.0, 0.5, npts)
for i in range(npts):
yy[i] = fd[2]*pol[4].subs([(x, xx[i])]) + fd[3]*pol[5].subs([(x, xx[i])])
plt.plot(xx, yy)
xx = np.linspace( 0.5, 1.0, npts)
for i in range(npts):
yy[i] = fd[3]*pol[6].subs([(x, xx[i])]) + fd[4]*pol[7].subs([(x, xx[i])])
plt.plot(xx, yy)
#
xx = np.linspace(-1.0, 1.0, npts)
zz = fx(xx)
plt.plot(xx, zz)
plt.plot([-1, -0.5, 0, 0.5, 1], fd, 'ko')
```
<IPython.core.display.Javascript object>
[<matplotlib.lines.Line2D at 0x7fc7c019d6a0>]
### Secondary variables
The term secondary variable is used here for quantities obtained from the approximating function $p(x)$. For instance, assume that in the current problem we are also interested in the first-order derivative of the function $f(x)$, but we do not have nodal values of the derivative with which to interpolate. We can, however, use the approximation $p(x)$ to $f(x)$ as follows:
$$p'(x) = \frac{dL^L(x)}{dx}f({x^L}) + \frac{dL^R(x)}{dx}f({x^R}).$$
The following code snippet plots the known values of the first-order derivatives (black dots), the exact derivative $f'(x)$ (blue line) and the locally computed derivatives. Note that since in each interval the function is approximated by a linear function, the derivative is constant, leading to jumps at the boundaries of each sub-domain.
**Questions:**
**Propose an alternative to improve the approximation to the first order derivative of the function shown below.**
**How are the discontinuities in the first derivative of the function related to the local function?**
```
dpol = []
for j in range(8):
dpol.append(sym.diff(pol[j], x))
```
```
plt.figure()
plt.grid()
xx = np.linspace(-1.0, -0.5, npts)
for i in range(npts):
yy[i] = fd[0]*dpol[0].subs([(x, xx[i])]) + fd[1]*dpol[1].subs([(x, xx[i])])
plt.plot(xx, yy)
xx = np.linspace(-0.5, 0.0, npts)
for i in range(npts):
yy[i] = fd[1]*dpol[2].subs([(x, xx[i])]) + fd[2]*dpol[3].subs([(x, xx[i])])
plt.plot(xx, yy)
xx = np.linspace(0.0, 0.5, npts)
for i in range(npts):
yy[i] = fd[2]*dpol[4].subs([(x, xx[i])]) + fd[3]*dpol[5].subs([(x, xx[i])])
plt.plot(xx, yy)
xx = np.linspace(0.5, 1.0, npts)
for i in range(npts):
yy[i] = fd[3]*dpol[6].subs([(x, xx[i])]) + fd[4]*dpol[7].subs([(x, xx[i])])
plt.plot(xx, yy)
#
fc = fdx(x_data)
plt.plot([-1, -0.5, 0, 0.5, 1], fc, 'ko')
xx = np.linspace(-1.0, 1.0, npts)
for i in range(npts):
zz[i] = fdx(xx[i])
plt.plot(xx, zz)
```
<IPython.core.display.Javascript object>
[<matplotlib.lines.Line2D at 0x7fc7c0060128>]
As a result of the local interpolation scheme the first derivative becomes discontinuous. Although these discontinuities introduce error in the solution, the error can be reduced by using a larger number of sub-domains. In finite element analysis this corresponds to conducting a mesh refinement.
### Glossary of terms
**Node:** A point where the function to be approximated by the interpolating polynomials is known.
**Sub-domain:** A portion of the total computational domain contained between two points, possibly containing several nodes.
**Finite element:** The specification of a sub-domain size and its corresponding interpolation functions.
**Canonical element:** A finite element of constant size, and thus of known interpolation polynomials, to which elements of different sizes can be mapped.
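As a small illustration of the last definition (my addition, using the standard linear shape functions $N_1(\xi)=(1-\xi)/2$ and $N_2(\xi)=(1+\xi)/2$ on a canonical element of size 2, which the original notes do not spell out):
```
# Sketch: canonical linear element on xi in [-1, 1] and its map to a generic
# sub-domain [xL, xR]. The helper names are illustrative only.
def shape_functions(xi):
    return 0.5*(1.0 - xi), 0.5*(1.0 + xi)

def to_physical(xi, xL, xR):
    N1, N2 = shape_functions(xi)
    return N1*xL + N2*xR

xi = np.linspace(-1.0, 1.0, 50)
N1, N2 = shape_functions(xi)
xL, xR = 0.0, 0.5                      # one of the sub-domains used above
p_local = N1*fx(xL) + N2*fx(xR)        # local interpolant via the canonical element
x_phys = to_physical(xi, xL, xR)
plt.figure()
plt.plot(x_phys, p_local, label='local $p(x)$ via canonical element')
plt.plot(x_phys, fx(x_phys), 'r--', label='$f(x)$')
plt.legend()
plt.show()
```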
### Class activity
#### Problem 1
For the function $f(x) = {x^3} + 4{x^2} - 10$ in the range $[-1.0, 1.0]$:
* Find values at nodal points corresponding to 4 sub-domains each one with 3 nodal points and using these values implement a local interpolation scheme using 2-nd order interpolation polynomials.
* Plot the interpolation polynomial in each sub-domain and the corresponding interpolating function $p(x)$.
* In the same plot compare $p(x)$ and $f(x)$. Additionally, plot the first derivative of the function obtained from $p(x)$ and $f(x)$.
#### Problem 2
For the Runge function defined by:
$$f(x) = \frac{1}{{1 + 25{x^2}}}$$
implement an interpolation scheme using local 1st-order Lagrange polynomials using:
* (i) sub-domains of constant size $\Delta x = 0.2$
* (ii) sub-domains whose size decreases towards the edges of the interval.
#### Problem 3
Using an independent script or a notebook implement a local interpolation scheme using a canonical element of size 2.0 and use it to approximate the Runge function discussed in the class notes.
```
from IPython.core.display import HTML
def css_styling():
styles = open('./nb_style.css', 'r').read()
return HTML(styles)
css_styling()
```
```
| 0cc0d1a10ec1ef297ee66aa64a4d1f348b4a1b14 | 345,700 | ipynb | Jupyter Notebook | notebooks/02_lagrange_1d_nonlocal.ipynb | AppliedMechanics-EAFIT/Introductory-Finite-Elements | a4b44d8bf29bcd40185e51ee036f38102f9c6a72 | [
"MIT"
] | 39 | 2019-11-26T13:28:30.000Z | 2022-02-16T17:57:11.000Z | notebooks/02_lagrange_1d_nonlocal.ipynb | jgomezc1/Introductory-Finite-Elements | a4b44d8bf29bcd40185e51ee036f38102f9c6a72 | [
"MIT"
] | null | null | null | notebooks/02_lagrange_1d_nonlocal.ipynb | jgomezc1/Introductory-Finite-Elements | a4b44d8bf29bcd40185e51ee036f38102f9c6a72 | [
"MIT"
] | 18 | 2020-02-17T07:24:59.000Z | 2022-03-02T07:54:28.000Z | 93.005112 | 67,876 | 0.727622 | true | 3,737 | Qwen/Qwen-72B | 1. YES
2. YES | 0.863392 | 0.867036 | 0.748591 | __label__eng_Latn | 0.973513 | 0.577561 |
(sec:DFT)=
# Discrete Fourier transform
Previously we introduced the Fourier transform for sampled data (see: {ref}`sec:enakomerno_casovno_vzorcenje`), where the sampling is at steps $\Delta t\,n$ and $n\in\mathbb{Z}$ runs from $-\infty$ to $+\infty$. When we have a finite number of sampled data points ($N$) and $n=[0,1,\dots,N-1]$, we use the discrete Fourier transform (DFT).
:::{note}
**Discrete Fourier transform**:
$$
X_k = \sum_{n=0}^{N-1} x_n\,e^{-\mathrm{i}\,2\pi\,k\,n/N},
$$
where $x_n = x(n\,\Delta t)$ and $X_k=X(k/(N\,\Delta t))$. Since the DFT is periodic with $1/\Delta t$ ($X_k=X_{k+N}$), only $N$ terms need to be computed, i.e. $k=[0,1,\dots\,N-1]$.
:::
That $X_k=X_{k+N}$ holds is shown by:
$$
e^{-\mathrm{i}\,2\pi\,k\,n/N}=e^{-\mathrm{i}\,2\pi\,n\,(k+N)/N}=e^{-\mathrm{i}\,2\pi\,n\,k/N}\,\underbrace{e^{-\mathrm{i}\,2\pi\,n\,N/N}}_{=\cos(-2\pi\,n)=1}
$$
The inverse DFT is derived by multiplying the equation above by $e^{\mathrm{i}\,2\pi\,k\,r/N}$ and then summing over $k$:
$$
\sum_{k=0}^{N-1}X_k\,e^{\mathrm{i}\,2\pi\,k\,r/N} = \sum_{k=0}^{N-1}\sum_{n=0}^{N-1} x_n\,e^{-\mathrm{i}\,2\pi\,k\,n/N}\,e^{\mathrm{i}\,2\pi\,k\,r/N},
$$
On the right-hand side we exchange the order of summation and find:
$$
\sum_{k=0}^{N-1} e^{-\mathrm{i}\,2\pi\,k\,(n-r)/N}=
\begin{cases}
N; \quad n=r\\
0; \quad\textrm{otherwise.}
\end{cases}
$$
It follows that:
$$
\sum_{k=0}^{N-1}X_k\,e^{\mathrm{i}\,2\pi\,k\,r/N} = x_r\,N,
$$
:::{note}
**Inverse discrete Fourier transform**:
$$
x_n = \frac{1}{N} \sum_{k=0}^{N-1} X_k\,e^{\mathrm{i}\,2\pi\,k\,n/N}.
$$
:::
Since:
$$
e^{\mathrm{i}\,2\pi\,k\,n/N}=e^{\mathrm{i}\,2\pi\,k\,(n+N)/N}=e^{\mathrm{i}\,2\pi\,k\,n/N}\,\underbrace{e^{\mathrm{i}\,2\pi\,k\,N/N}}_{=\cos(2\pi\,k)=1}.
$$
(DFT_periodicno)=
:::{note}
We find that $x_n$ is a periodic series, $x_n=x_{n+N}$. We did indeed start with sampled time-domain data $x_n$ that are not periodic; the result of the inverse DFT, however, is periodic.
:::
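A quick numerical check of this periodicity (my addition, writing out the inverse DFT sum explicitly only for illustration):
```python
import numpy as np

# verify x_n = x_{n+N} for the series reconstructed by the inverse DFT sum
N = 8
X = np.fft.fft(np.random.rand(N))
k = np.arange(N)
x_idft = lambda n: np.sum(X * np.exp(1j * 2 * np.pi * k * n / N)) / N
print(np.allclose(x_idft(3), x_idft(3 + N)))  # True
```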
## Fast Fourier transform
The Fast Fourier Transform (FFT) is the name of a numerical algorithm for the discrete Fourier transform. The DFT has a computational cost proportional to $N^2$, which we write as $\mathcal{O}(N^2)$. The fast Fourier transform is credited to James Cooley and John Tukey, who published the algorithm in 1965 (see: [source](https://www.ams.org/journals/mcom/1965-19-090/S0025-5718-1965-0178586-1/S0025-5718-1965-0178586-1.pdf)); it later turned out that a similar algorithm had been discovered much earlier, in unpublished work from 1805, by Carl Friedrich Gauss. The essential property of the algorithm is that it is numerically much faster: $\mathcal{O}(N\log(N))$; since the computational cost no longer grows quadratically with the number of elements, this greatly increased the usefulness of the DFT, and some consider the FFT one of the most important algorithms of all, as it has had, and still has, a very significant socio-economic impact. As a curiosity, the algorithm was never patented (see: [source](https://en.wikipedia.org/wiki/Fast_Fourier_transform)). Because of symmetry in the data, the computation is fastest when the number of samples equals $2^n$, where $n\in\mathbb{Z}$.
The discrete Fourier transform in the `numpy` package is documented [here](https://numpy.org/doc/stable/reference/routines.fft.html). We will first look at it using the example below of a sinusoid with amplitude `A`, frequency `fr`, sampling frequency `fs`, and a sampling length equal to the sampling frequency (we therefore sample 1 second). The DFT is carried out with the method `numpy.fft.fft()` ([documentation](https://numpy.org/doc/stable/reference/generated/numpy.fft.fft.html#numpy.fft.fft))
```python
import numpy as np
A = 1
fr = 5
fs = 100
N = 100
dt = 1/fs
t = np.arange(N)*dt
x = A*np.sin(2*np.pi*fr*t)
X = np.fft.fft(x)
freq = np.fft.fftfreq(len(x), d=dt)
```
The sinusoid has only one nonzero frequency component:
```python
(X[freq==fr], np.abs(X[freq==fr]))
```
(array([-1.62040942e-14-50.j]), array([50.]))
```python
import matplotlib.pyplot as plt
plt.title('Časovna domena')
plt.plot(t, x, '.')
plt.xlabel('Čas [s]')
plt.ylabel('$x$')
plt.show()
plt.title('Frekvenčna domena - amplitudni spekter')
plt.plot(freq, np.abs(X), '.')
plt.xlabel('Frekvenca [Hz]')
plt.ylabel('$X$')
plt.show()
```
## Fourier series and the discrete Fourier transform
The discrete Fourier transform is closely related to Fourier series. The connection is quickly revealed if we assume a discrete time series $x_i=x(\Delta t\,i)$, where $\Delta t$ is a constant time step, $i=0,1,\dots,N-1$ and $T_p=N\,\Delta t$; it then follows that:
$$
\begin{split}
c_k &= \frac{1}{T_p}\,\int_0^{T_p} x(t)\,e^{-\mathrm{i}\,2\pi\,k\,t/T_p}\,\mathrm{d}t\\
&= \frac{1}{N\,\Delta t}\,\sum_{n=0}^{N-1} x(n\,\Delta t)\,e^{-\mathrm{i}\,2\pi\,k\,\Delta t\,n/(N\,\Delta t)}\,\Delta t\\
&= \frac{1}{N}\,\underbrace{\sum_{n=0}^{N-1} x(n\,\Delta t)\,e^{-\mathrm{i}\,2\pi\,k\,n/N}}_{X_k}\\
&= \frac{X_k}{N}\\
\end{split}
$$
It should be emphasized that in general $X_k/N\ne c_k$, since the DFT is based on a finite series and $X_k$ is therefore a periodic series.
When comparing the DFT and Fourier series, a common mistake concerns the understanding of the period (in particular the last discrete point). In the sinusoid example above, the time point at `t=1` is not explicitly included; it is included implicitly, however, since we saw above that the sampled data are periodic and $x_n=x_{n+N}$ holds. Let us look at this detail more closely; the Fourier series coefficients are defined as:
$$
c_n = \frac{1}{T_p}\,\int_0^{T_p} x(t)\,e^{-\mathrm{i}\,2\pi\,n\,t/T_p}\,\mathrm{d}t
$$
We compute the Fourier coefficient for the frequency `fr`:
```python
import sympy as sym
t, fr, Tp, A = sym.symbols('t, fr, Tp, A')
π = sym.pi
i = sym.I
podatki = {fr: 5, A:1, Tp:1}
x = sym.sin(2*π*fr*t)
c = 1/Tp*sym.integrate(x*sym.exp(-i*2*π*fr*t/Tp), (t,0,Tp))
c.subs(podatki)
```
$\displaystyle - \frac{i}{2}$
We arrive at the same result via the DFT, but the last time point is not explicitly included:
```python
import numpy as np
A = 1
fr = 5
fs = 100
N = 100
# the fs below would make the last point included. The result would then be wrong!
#fs = 100/1.01010101010101
dt = 1/fs
t = np.arange(100)*dt
x = A*np.sin(2*np.pi*fr*t)
X_r = np.fft.rfft(x)
freq_r = np.fft.rfftfreq(len(x), d=dt)
```
```python
c = X_r[freq_r==fr] / len(x)
c
```
array([-1.49438567e-16-0.5j])
Let us confirm that the last time point is not explicitly included:
```python
t[-3:]
```
array([0.97, 0.98, 0.99])
If we change `fs` or the number of points in the code above so that the last point is explicitly included, the result will not equal the one from the Fourier series!
## Frequency resolution and zero padding
The frequency resolution of the DFT is determined by the length of the discrete time series $x_i=x(\Delta t\,i)$, where $\Delta t$ is a constant time step and $i=0,1,\dots,N-1$; the length of such a series is $T_p=N\,\Delta t$, and it follows that the frequency resolution is:
$$
\Delta f= \frac{1}{N\,\Delta t}.
$$
With a longer time series we could also have better frequency resolution. When additional points of the discrete series cannot be obtained, the frequency resolution can be increased by adding zeros:
$$
\tilde{x}_n=
\begin{cases}
x_n;\quad & 0\le n\le N-1\\
0;\quad & N\le n\le L-1
\end{cases}
$$
which gives:
:::{note}
**Discrete Fourier transform with zero padding** (*zero-padding*):
$$
\tilde{X}_k = \sum_{n=0}^{L-1} \tilde{x}_n\,e^{-\mathrm{i}\,2\pi\,k\,n/L}
= \sum_{n=0}^{N-1} \tilde{x}_n\,e^{-\mathrm{i}\,2\pi\,k\,n/L}.
$$
:::
In this way we obtain the frequency resolution:
$$
\Delta f = \frac{1}{L\,\Delta t},
$$
but it must be emphasized that this amounts only to a frequency-domain interpolation that provides a more detailed view; zero padding adds no new information. Similarly to the DFT, zeros can also be added in the frequency domain, and the inverse discrete Fourier transform then yields denser, i.e. interpolated, time-domain data.
When adding zeros, attention must be paid to the normalization of the data (TBA).
The example below shows the use of zero padding; explore its behavior with the help of the comments in the code.
```python
import numpy as np
import matplotlib.pyplot as plt
A = 1
fr = 5
fs = 100
N = 25 # try 20 here (the sinusoid then completes whole periods; the result without zero padding is exact!)
k = 10 # zero-padding factor
dt = 1/fs
t = np.arange(N)*dt
x = A*np.sin(2*np.pi*fr*t)
X = np.fft.fft(x)
freq = np.fft.fftfreq(len(x), d=dt)
X_kx = np.fft.fft(x, n=k*N)
freq_kx = np.fft.fftfreq(k*N, d=dt)
plt.title('Časovna domena')
plt.plot(t, x, '.')
plt.xlabel('Čas [s]')
plt.ylabel('$x$')
plt.show()
plt.title('Frekvenčna domena - amplitudni spekter')
plt.plot(freq_kx, np.abs(X_kx), '.', label=f'{k}x dodajanje ničel')
plt.plot(freq, np.abs(X), '.', label='Brez dodajanja ničel')
plt.xlabel(f'Frekvenca [Hz], $\Delta f=${1/(N*dt):3.2f}')
plt.ylabel('$X$')
plt.legend()
plt.show()
```
## DFT symmetry for real data
:::{note}
For real data, the following holds in the frequency domain:
$$
\begin{split}
\textrm{Re}\big(X(k)\big)&=\textrm{Re}\big(X(N-k)\big)\\
\textrm{Im}\big(X(k)\big)&=-\textrm{Im}\big(X(N-k)\big)\\
|X(k)|&=|X(N-k)|\\
\angle X(k)&=\angle X(N-k)
\end{split}
$$
:::
Since most engineering data are real in the time domain, it makes sense to use an appropriately adapted fast Fourier transform routine; in the `numpy` package this is accessed via the call `numpy.fft.rfft()`. The inverse DFT is available via the method `numpy.fft.ifft`, or `numpy.fft.irfft` in the case of real time-domain data.
We verify the stated properties with the `numpy` package (note: noise is added to the time signal so that the phase is not very close to 0):
```python
import numpy as np
A = 1
fr = 5
fs = 100
N = 10
dt = 1/fs
t = np.arange(N)*dt
np.random.seed(0)
x = A*np.sin(2*np.pi*fr*t) + np.random.normal(scale=A/2, size=N)
X = np.fft.fft(x)
freq = np.fft.fftfreq(len(x), d=dt)
X_r = np.fft.rfft(x)
freq_r = np.fft.rfftfreq(len(x), d=dt)
np.testing.assert_allclose(freq[1:N//2], -freq[:N//2:-1])
np.testing.assert_allclose(np.real(X[1:N//2]), np.real(X[:N//2:-1]))
np.testing.assert_allclose(np.imag(X[1:N//2]), -np.imag(X[:N//2:-1]))
np.testing.assert_allclose(np.abs(freq[:N//2+1]), freq_r)
np.testing.assert_allclose(np.real(X[:N//2+1]), np.real(X_r))
np.testing.assert_allclose(np.imag(X[:N//2+1]), np.imag(X_r), atol=1e-15)
```
The example from above once more, this time with `numpy.fft.rfft()`:
```python
import numpy as np
import matplotlib.pyplot as plt
A = 1
fr = 5
fs = 100
N = 25 # try 20 here (the sinusoid then completes whole periods; the result without zero padding is exact!)
k = 10 # zero-padding factor
dt = 1/fs
t = np.arange(N)*dt
x = A*np.sin(2*np.pi*fr*t)
X = np.fft.rfft(x)
freq = np.fft.rfftfreq(len(x), d=dt)
X_kx = np.fft.rfft(x, n=k*N)
freq_kx = np.fft.rfftfreq(k*N, d=dt)
plt.title('Časovna domena')
plt.plot(t, x, '.')
plt.xlabel('Čas [s]')
plt.ylabel('$x$')
plt.show()
plt.title('Frekvenčna domena - amplitudni spekter')
plt.plot(freq_kx, np.abs(X_kx), '.', label=f'{k}x dodajanje ničel')
plt.plot(freq, np.abs(X), '.', label='Brez dodajanja ničel')
plt.xlabel(f'Frekvenca [Hz], $\Delta f=${1/(N*dt):3.2f}')
plt.ylabel('$X$')
plt.legend()
plt.show()
```
(sec:krozna_konvolucija)=
## Convolution of periodic data
The convolution of functions under the Fourier integral transform was treated in the section {ref}`sec:konvolucija_funkcij`; here we look at the specifics of handling two periodic series (see: {ref}`DFT_periodicno`) of equal length, $x_n$ and $y_n$:
$$
X_k = \sum_{n=0}^{N-1} x_n\,e^{-\mathrm{i}\,2\pi\,k\,n/N},
$$
$$
Y_k = \sum_{n=0}^{N-1} y_n\,e^{-\mathrm{i}\,2\pi\,k\,n/N},
$$
Just as for functions, it also holds for periodic time series that the DFT of a convolution in the time domain is the product of the transforms in the frequency domain.
:::{note}
Convolution of periodic data (also circular convolution):
$$
\textrm{DFT}\big\{x_n*y_n\big\}=X_k\,Y_k,
$$
To emphasize that this is a circular convolution, the symbol $\circledast$ is sometimes used: $x_n\circledast y_n$.
:::
Emphasizing the periodicity of $x_n$ and $y_n$ is essential, otherwise the (circular) convolution is not possible; this will become clear from the following derivation:
$$
\begin{split}
\textrm{DFT}\big\{x_n\circledast y_n\big\}&=\textrm{DFT}\big\{\sum_r^{N-1}x_r\,y_{n-r}\big\}\\
&=\sum_n^{N-1}\sum_r^{N-1}x_r\,y_{n-r}\,e^{-\textrm{i}\,2\,\pi\,n\,k/N}\\
&=\underbrace{\sum_r^{N-1}x_r\,\,e^{-\textrm{i}\,2\,\pi\,r\,k/N}}_{X_k}\,\underbrace{\sum_n^{N-1}y_{n-r}\,e^{-\textrm{i}\,2\,\pi\,(n-r)\,k/N}}_{Y_k}\\
&=X_k\,Y_k
\end{split}
$$
In the derivation above it should be emphasized that the sum over $r$ in the second-to-last line also contains the sum over $n$; but since, owing to periodicity, the sum over $n$ is independent of $r$, the sums over $r$ and $n$ can be carried out independently. Because of the periodicity of $y_n$, for every $r$ we therefore have:
$$
\sum_n^{N-1}y_{n-r}\,e^{-\textrm{i}\,2\,\pi\,(n-r)\,k/N}=\sum_n^{N-1}y_{n}\,e^{-\textrm{i}\,2\,\pi\,n\,k/N}=Y_k
$$
If the series $x_n$ and $y_n$ were not periodic, the convolution of finite series would not be this simple.
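A short numerical check of this property (my addition, not part of the original notes):
```python
import numpy as np

# circular convolution via an explicit sum vs. the product of DFTs
N = 16
x = np.random.rand(N)
y = np.random.rand(N)
circ = np.array([sum(x[r] * y[(n - r) % N] for r in range(N)) for n in range(N)])
via_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)).real
print(np.allclose(circ, via_fft))  # True
```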
| 0cc59892040f425945069c2d50a416e2a644ee46 | 102,529 | ipynb | Jupyter Notebook | notebooks/06 - Diskretna Fourierova transformacija.ipynb | jankoslavic/procesiranje_signalov | 2913f169d643dcd508e5a22ca5648f5bb7bad8af | [
"MIT"
] | null | null | null | notebooks/06 - Diskretna Fourierova transformacija.ipynb | jankoslavic/procesiranje_signalov | 2913f169d643dcd508e5a22ca5648f5bb7bad8af | [
"MIT"
] | null | null | null | notebooks/06 - Diskretna Fourierova transformacija.ipynb | jankoslavic/procesiranje_signalov | 2913f169d643dcd508e5a22ca5648f5bb7bad8af | [
"MIT"
] | null | null | null | 129.29256 | 19,708 | 0.86872 | true | 5,477 | Qwen/Qwen-72B | 1. YES
2. YES | 0.923039 | 0.7773 | 0.717478 | __label__slv_Latn | 0.974315 | 0.505274 |
# Lucas_Task20
### 3 - Regarding the distribution problem for the elbow muscles presented in this text:
a. Test different initial values for the optimization.
b. Test other values for the elbow angle where the results are likely to change.
```python
import numpy as np
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import sympy as sym
from sympy.plotting import plot
import pandas as pd
from IPython.display import display
from IPython.core.display import Math
```
```python
r_ef = np.loadtxt('C:/Users/ebm/Downloads/Lucas/r_elbowflexors.mot', skiprows=7)
f_ef = np.loadtxt('C:/Users/ebm/Downloads/Lucas/f_elbowflexors.mot', skiprows=7)
```
```python
m_ef = r_ef*1
m_ef[:, 2:] = r_ef[:, 2:]*f_ef[:, 2:]
```
```python
labels = ['Biceps long head', 'Biceps short head', 'Brachialis']
fig, ax = plt.subplots(nrows=1, ncols=3, sharex=True, figsize=(10, 4))
ax[0].plot(r_ef[:, 1], r_ef[:, 2:])
#ax[0].set_xlabel('Elbow angle $(\,^o)$')
ax[0].set_title('Moment arm (m)')
ax[1].plot(f_ef[:, 1], f_ef[:, 2:])
ax[1].set_xlabel('Elbow angle $(\,^o)$', fontsize=16)
ax[1].set_title('Maximum force (N)')
ax[2].plot(m_ef[:, 1], m_ef[:, 2:])
#ax[2].set_xlabel('Elbow angle $(\,^o)$')
ax[2].set_title('Maximum torque (Nm)')
ax[2].legend(labels, loc='best', framealpha=.5)
ax[2].set_xlim(np.min(r_ef[:, 1]), np.max(r_ef[:, 1]))
plt.tight_layout()
plt.show()
```
```python
a_ef = np.array([624.3, 435.56, 987.26])/50 # 50 N/cm2
print(a_ef)
```
[ 12.486 8.7112 19.7452]
#### Test 1
```python
M = 40 # desired torque at the elbow
iang = 80 # which will give the closest value to 90 degrees
r = r_ef[iang, 2:]
f0 = f_ef[iang, 2:]
a = a_ef
m = m_ef[iang, 2:]
x0 = f_ef[iang, 2:]/10 # far from the correct answer for the sum of torques
print('M =', M)
print('x0 =', x0)
print('r * x0 =', np.sum(r*x0))
```
M = 40
x0 = [ 51.78948736 31.97872006 83.65566559]
r * x0 = 6.11695790214
#### Test 2
```python
M = 80 # desired torque at the elbow
iang = 40 # a different elbow angle index
r = r_ef[iang, 2:]
f0 = f_ef[iang, 2:]
a = a_ef
m = m_ef[iang, 2:]
x0 = f_ef[iang, 2:]/10 # far from the correct answer for the sum of torques
print('M =', M)
print('x0 =', x0)
print('r * x0 =', np.sum(r*x0))
```
M = 80
x0 = [ 62.58312845 42.94361276 97.71786525]
r * x0 = 5.38553888055
#### Test 3
```python
M = 50 # desired torque at the elbow
iang = 60 # a different elbow angle index
r = r_ef[iang, 2:]
f0 = f_ef[iang, 2:]
a = a_ef
m = m_ef[iang, 2:]
x0 = f_ef[iang, 2:]/10 # far from the correct answer for the sum of torques
print('M =', M)
print('x0 =', x0)
print('r * x0 =', np.sum(r*x0))
```
M = 50
x0 = [ 60.67171435 39.21827211 93.34593473]
r * x0 = 6.61186324316
### 4 - In an experiment to estimate the forces of the elbow flexors, an elbow flexor moment of 10 Nm was found through inverse dynamics. Consider the following data for maximum force (F0), moment arm (r), and pcsa (A) of the brachialis, brachioradialis, and biceps brachii muscles: F0 (N): 1000, 250, 700; r (cm): 2, 5, 4; A (cm2): 33, 8, 23, respectively (data from Robertson et al. (2013)).
a. Use static optimization to estimate the muscle forces.
b. Test the robustness of the results using different initial values for the muscle forces.
c. Compare the results for different cost functions.
#### Test 1
```python
# Given values: F0 (N): 1000, 250, 700; r (cm): 2, 5, 4; A (cm²): 33, 8, 23
M = 40 # desired torque at the elbow
iang = 80 # which will give the closest value to 90 degrees
r = np.array([2, 5, 4])
f0 = np.array([1000, 250, 700])
a = np.array([33, 8, 23])
m = r*f0
x0 = f0/10 # far from the correct answer for the sum of torques
print('M =', M)
print('x0 =', x0)
print('r * x0 =', np.sum(r*x0))
```
M = 40
x0 = [ 100. 25. 70.]
r * x0 = 605.0
#### Test 2
```python
# Given values: F0 (N): 1000, 250, 700; r (cm): 2, 5, 4; A (cm²): 33, 8, 23
M = 80 # desired torque at the elbow
iang = 40 # which will give the closest value to 90 degrees
r = np.array([2, 5, 4])
f0 = np.array([1000, 250, 700])
a = np.array([33, 8, 23])
m = r*f0
x0 = f0/10 # far from the correct answer for the sum of torques
print('M =', M)
print('x0 =', x0)
print('r * x0 =', np.sum(r*x0))
```
M = 80
x0 = [ 100. 25. 70.]
r * x0 = 605.0
#### Test 3
```python
# Given values: F0 (N): 1000, 250, 700; r (cm): 2, 5, 4; A (cm²): 33, 8, 23
M = 50 # desired torque at the elbow
iang = 60 # which will give the closest value to 90 degrees
r = np.array([2, 5, 4])
f0 = np.array([1000, 250, 700])
a = np.array([33, 8, 23])
m = r*f0
x0 = f0/10 # far from the correct answer for the sum of torques
print('M =', M)
print('x0 =', x0)
print('r * x0 =', np.sum(r*x0))
```
M = 50
x0 = [ 100. 25. 70.]
r * x0 = 605.0
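The cells above only set up `M`, `x0` and the torque check; a minimal sketch of the static optimization step itself (my addition, using one common cost function, the sum of squared muscle stresses, with `scipy.optimize.minimize`) could look like:
```python
# Hypothetical sketch: minimize sum of squared muscle stresses (F/A)^2
# subject to the torque constraint r·F = M and bounds 0 <= F <= F0.
from scipy.optimize import minimize

def cost(F, a):
    return np.sum((F / a) ** 2)

cons = {'type': 'eq', 'fun': lambda F: np.dot(r, F) - M}
bnds = [(0, fmax) for fmax in f0]
res = minimize(cost, x0, args=(a,), method='SLSQP', bounds=bnds, constraints=cons)
print('muscle forces:', res.x)
print('torque check :', np.dot(r, res.x))
```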
| 0d4c8160169286e13e874a23ec9d3fb8155dfbcd | 67,029 | ipynb | Jupyter Notebook | courses/modsim2018/tasks/Lucas_task20.ipynb | regifukuchi/bmc-1 | f4418212664758511bb3f4d4ca2318ac48a55e88 | [
"MIT"
] | null | null | null | courses/modsim2018/tasks/Lucas_task20.ipynb | regifukuchi/bmc-1 | f4418212664758511bb3f4d4ca2318ac48a55e88 | [
"MIT"
] | null | null | null | courses/modsim2018/tasks/Lucas_task20.ipynb | regifukuchi/bmc-1 | f4418212664758511bb3f4d4ca2318ac48a55e88 | [
"MIT"
] | null | null | null | 184.145604 | 58,018 | 0.893419 | true | 1,873 | Qwen/Qwen-72B | 1. YES
2. YES | 0.73412 | 0.774583 | 0.568637 | __label__eng_Latn | 0.823138 | 0.159464 |
# Probability I:
Expected value and indicators. Bayes' theorem. Bayesian estimation.
# 0. Joint PMFs and PDFs.
- A joint PMF $p_{X,Y}$ of the variables $X$ and $Y$ is defined as
\begin{equation}
p_{X,Y}(x,y)=P(X=x,Y=y)
\end{equation}
- The marginal PMFs of X and Y can be obtained from the joint PMF, using
\begin{equation}
p_{X}(x)=\sum_{y}{p_{X,Y}(x,y)}
\end{equation}
and
\begin{equation}
p_{Y}(y)=\sum_{x}{p_{X,Y}(x,y)}
\end{equation}
Analogously, for $X$ and $Y$ jointly continuous random variables with a joint PDF $f_{X,Y}$, we have
\begin{equation}
f_{X,Y}(x,y)=f_{Y}(y)f_{X|Y}(x|y)
\end{equation}
\begin{equation}
f_{X}(x)=\int_{-\infty}^{\infty}f_{Y}(y)f_{X|Y}(x|y)dy
\end{equation}
#### Exercise 1: Make two plots of joint distribution functions for normally distributed random variables X, Y: one with equal means and variances and one with different means and variances
Formula for the multivariate normal distribution
\begin{equation}
f_X (x_1, ...,x_n)=\frac{1}{(2\pi)^{n/2}|\Sigma|^{1/2}} \exp\left(-\frac{1}{2}(x-\mu)^T \Sigma^{-1} (x-\mu)\right)
\end{equation}
```python
import numpy as np
import matplotlib.pyplot as plt
# Using a Gaussian function like the one used previously
def gauss(x,m,s):
return 1/(np.sqrt(2*np.pi)*s)*np.exp(-(x-m)**2 / (2*s**2))
x_0=np.linspace(-3,3,600)
x=gauss(x_0,0,1)
y=gauss(x_0,0,1)
xx,yy=np.meshgrid(x,y)
img=xx*yy
plt.contourf(x_0,x_0,img)
plt.xlabel('x')
plt.ylabel('y')
plt.show()
```
```python
# However, doing this does not give the multivariate normal distribution implied by the conditionals of x, y.
# Using scipy's multivariate_normal function we can obtain it; this function takes as parameters
# the mean [x, y] and the covariance matrix [[c1, c2], [c3, c4]]. We can then plot the pdf using rv.pdf
from scipy.stats import multivariate_normal
x, y = np.mgrid[-3:3:1.0/100, -3:3:1.0/100]
pos = np.empty(x.shape + (2,))
pos[:, :, 0] = x; pos[:, :, 1] = y
rv = multivariate_normal([0, 0], [[1, 0], [0, 1]])
plt.contourf(x, y, rv.pdf(pos))
plt.title('Distribución gaussiana 2D')
plt.xlabel('x')
plt.ylabel('y')
plt.show()
```
# 1. Expected value and variance.
For discrete variables, the expected value of a discrete variable $X$ is defined as
\begin{equation}
E[X] =\sum_{x}{x p_{X} (x)}
\end{equation}
and for a continuous variable:
\begin{equation}
E[X] =\int_{x}{x p_{X} (x)}dx
\end{equation}
and the variance is defined as
\begin{equation}
var[X] = E[(X-E[X])^2]
\end{equation}
#### Exercise 2: Numerically obtain the expected value and the variance of a randomly generated Gamma distribution with parameters $1/\lambda=0.5$ and $k=9$. Plot the distribution and the CDF.
```python
# We plot a gamma distribution with the given parameters
shape,scale=9,0.5
g=np.random.gamma(shape,scale,1000)
(n, bins, patches)=plt.hist(g,bins=100,density=True)
plt.title(r'histograma de distribución $\Gamma(0.5,9)$')
plt.show()
# We use the bins and n to plot an estimate of the CDF of this distribution
plt.plot(bins[:-1],np.cumsum(n)/sum(n))
plt.title('CDF estimada a partir de bins')
plt.show()
```
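The exercise also asks for the expected value and the variance, which the cell above does not compute; a minimal follow-up (my addition) comparing the sample estimates with the theoretical values $k/\lambda = 4.5$ and $k/\lambda^2 = 2.25$:
```python
# Numerical estimates of E[X] and var[X] from the samples, compared with the
# theoretical mean k*theta and variance k*theta^2 of Gamma(k=9, theta=0.5).
print('E[X]   ~', np.mean(g), ' (theory:', shape*scale, ')')
print('var[X] ~', np.var(g),  ' (theory:', shape*scale**2, ')')
```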
# 2. Bayes' theorem
Returning to the fundamentals of probability, let us discuss the total probability theorem, which states: Let $A_1, ..., A_n$ be disjoint events that form a partition of the sample space (every possible outcome is included in exactly one of the events $A_1, ...., A_n$) and assume that $P(A_i)>0$ for every $i$. Then, for every event $B$, we have
\begin{equation}
P(B)=P(A_1 \cap B)+\dots+P(A_n \cap B)
\end{equation}
\begin{equation}
=P(A_1)P(B|A_1)+...+P(A_n)P(B|A_n).
\end{equation}
What this theorem says, intuitively, is that we are partitioning the sample space into a number of scenarios $A_i$. The probability that $B$ occurs is then a weighted average of its conditional probability under each scenario, where each scenario is weighted according to its probability. This makes it possible to compute the probability of various events $B$ for which the probabilities $P(B|A_i)$ are known or easy to obtain. The total probability theorem can be applied repeatedly to compute probabilities in experiments of a sequential character.
## Inference and Bayes' rule.
The total probability theorem is usually used together with Bayes' theorem, which relates conditional probabilities of the form $P(A|B)$ to conditional probabilities of the form $P(B|A)$, in which the order of conditioning is reversed.
#### Bayes' rule
Let $A_1, A_2, ..., A_n$ be disjoint events that form a partition of the sample space, and assume that $P(A_i)>0$ for every $i$. Then, for any event $B$ such that $P(B)>0$, we have
\begin{equation}
P(A_i|B)=\frac{P(A_i)P(B|A_i)}{P(B)}
\end{equation}
\begin{equation}
=\frac{P(A_i)P(B|A_i)}{P(A_1)P(B|A_1)+...+P(A_n)P(B|A_n)}
\end{equation}
Bayes' rule is usually used for inference. There are a number of "causes" that can produce an "effect". We observe the effect and wish to infer the cause. The events $A_1,...,A_n$ are associated with the causes and the event $B$ represents the effect. The probability $P(B|A_i)$ that the effect is observed when cause $A_i$ is present contributes to a probabilistic model of the cause-effect relationship. Given that the effect B has been observed, we want to evaluate the probability $P(A_i|B)$ that the cause $A_i$ is present. In general, $P(A_i|B)$ is referred to as the posterior probability of the event $A_i$, while $P(A_i)$ is called the prior probability.
##### Note:
It is important to recall the concept of independence, which says that when $P(A|B)=P(A)$, A is independent of B. Hence, by definition:
\begin{equation}
P(A\cap B)=P(A)P(B)
\end{equation}
In the case where $P(B)=0$, $P(A|B)$ is not defined, but the relation above still holds. The symmetry of this relation also implies that independence is a symmetric property; that is, if A is independent of B, then B is independent of A, and we can say that A and B are independent events.
Moreover, it is also important to note that two events A and B are conditionally independent given an event C if
\begin{equation}
P(A\cap B| C)=P(A|C)P(B|C)
\end{equation}
This means, in other words, that if C is known to have occurred, the additional knowledge that B also occurred does not change the probability of A. Independence of two events A and B with respect to the unconditional probability law does not imply conditional independence, and vice versa.
# 3 Bayesian Inference
Statistical inference is the process of extracting information about an unknown variable or model from available information.
Statistical inference differs from probability theory in fundamental ways. Probability is a completely self-contained area of mathematics, based on axioms, as we have already seen. In probabilistic reasoning one assumes a fully specified probabilistic model that obeys these axioms. A mathematical method is then used to quantify the consequences of this model or to answer various questions of interest. In particular, every unambiguous question has a unique correct answer.
Statistics is different. For any problem, there may be multiple reasonable methods, with different answers. In general, there is no way to obtain the best method unless strong assumptions are made and additional restrictions are imposed on the inference.
In Bayesian statistics, all assumptions are located in one place, in the form of a prior; Bayesian statisticians argue that in this way all assumptions are brought to the surface and are open to scrutiny.
Finally, there are practical considerations. In many cases, Bayesian methods are computationally intractable. However, with recent computing capabilities, much of the community is focused on making Bayesian methods more practical and applicable.
Key concepts:
- In Bayesian statistics, unknown parameters are treated as random variables with known prior distributions.
- In parameter estimation we want to produce estimates together with a measure of how close the estimator values are to the true parameter values, in a probabilistic sense.
- In hypothesis testing, the unknown parameter takes one of a finite number of values, one corresponding to each hypothesis. We want to select a hypothesis with a small probability of error.
- The main methods of Bayesian inference are:
- MAP (Maximum a posteriori probability): among the possible parameter values, the one with maximum conditional probability given the data is selected.
- LMS (Least Mean Squares): an estimator/function of the data is selected that minimizes the mean squared error between the parameter and its estimate.
- Linear Least Mean Squares: an estimator that is a linear function of the data is selected, minimizing the mean squared error between the parameter and its estimate.
In Bayesian inference, the quantity of interest is denoted by $\Theta$ and is modeled as a random variable or as a finite collection of random variables. Here $\Theta$ may represent physical quantities, such as a velocity or a position, or a set of unknown parameters of a probabilistic model. For simplicity, unless the contrary is explicitly stated, we view $\Theta$ as a single random variable.
The goal is to extract information about $\Theta$ based on observing a collection of related random variables $X=(X_1,...,X_n)$, called the observations, measurements, or observation vector. To this end, we assume that the joint distribution of $\Theta$ and $X$ is known. That is, we assume knowledge of
- the prior distribution $p_\Theta$ or $f_\Theta$, depending on whether $\Theta$ is discrete or continuous;
- a conditional distribution $p_{X|\Theta}$ or $f_{X|\Theta}$, depending on whether $X$ is discrete or continuous.
Once a particular value $x$ of $X$ has been observed, a complete answer to the inference problem is provided by the posterior distribution. This distribution is determined by the appropriate form of Bayes' rule and encapsulates all the knowledge about $\Theta$ that is available given the observed information.
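For instance, when both $\Theta$ and $X$ are discrete, the appropriate form of Bayes' rule mentioned above reads
\begin{equation}
p_{\Theta|X}(\theta \mid x) = \frac{p_{\Theta}(\theta)\, p_{X|\Theta}(x \mid \theta)}{\sum_{\theta'} p_{\Theta}(\theta')\, p_{X|\Theta}(x \mid \theta')}
\end{equation}
with the sum replaced by an integral (and $p$ by $f$) in the continuous case.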
### Summary:
- We start with a prior distribution $p_{\Theta}$ or $f_{\Theta}$ for the unknown random variable $\Theta$.
- We have a model $p_{X|\Theta}$ or $f_{X|\Theta}$ for the observation vector $X$.
- After observing the value $x$ of $X$, we form the posterior distribution of $\Theta$ using the appropriate version of Bayes' rule.
##### Exercise:
Suppose we are dealing with the following two hypotheses
\begin{equation}
H_1: p=0.1, \qquad H_2: p=0.2
\end{equation}
where $p=0.2$ is based on a sample proportion of 1 out of 5.
First assume the sample size is $n=5$ with $k=1$, where $k$ is the number of yellow balls drawn among the $n$ balls. We also assume the prior probabilities are 0.5 and 0.5, i.e. equal for both hypotheses. The posterior probability of each hypothesis can then be updated using Bayes' rule. Compute the probabilities $P(p=0.1)$ and $P(p=0.2)$ for samples of size 5, 10, 15, and 20, and draw a conclusion.
```python
from scipy.stats import binom

n = [5, 10, 15, 20]   # sample sizes
p1 = 0.1              # success probability under H1
p2 = 0.2              # success probability under H2
k = 1                 # observed number of yellow balls

for n_ in n:
    # likelihood of the observation under each hypothesis
    like1 = binom(n_, p1).pmf(k)
    like2 = binom(n_, p2).pmf(k)
    # equal priors of 0.5; normalize to obtain the posterior of each hypothesis
    P1 = like1 * 0.5
    P2 = like2 * 0.5
    nm = P1 + P2
    print(".............")
    print("P(H1):%lf for n=%d" % (P1 / nm, n_))
    print("P(H2):%lf for n=%d" % (P2 / nm, n_))
```
.............
P(H1):0.444723 for n=5
P(H2):0.555277 for n=5
.............
P(H1):0.590710 for n=10
P(H2):0.409290 for n=10
.............
P(H1):0.722283 for n=15
P(H2):0.277717 for n=15
.............
P(H1):0.824151 for n=20
P(H2):0.175849 for n=20
# References
- Wackerly, D. Mathematical Statistics with Applications. 2008
- Bertsekas, D. Introduction to Probability. 2008.
- Sivia, D.S. Data Analysis: A Bayesian Tutorial. 2006.
```python
```
| cd1e5f8003001db5f3d5c28f3a595b756c429cfe | 67,897 | ipynb | Jupyter Notebook | 7Estadistica/3_ProbabilidadI.ipynb | sergiogaitan/Study_Guides | 083acd23f5faa6c6bc404d4d53df562096478e7c | [
"MIT"
] | 5 | 2020-09-12T17:16:12.000Z | 2021-02-03T01:37:02.000Z | 7Estadistica/3_ProbabilidadI.ipynb | sergiogaitan/Study_Guides | 083acd23f5faa6c6bc404d4d53df562096478e7c | [
"MIT"
] | null | null | null | 7Estadistica/3_ProbabilidadI.ipynb | sergiogaitan/Study_Guides | 083acd23f5faa6c6bc404d4d53df562096478e7c | [
"MIT"
] | 4 | 2020-05-22T12:57:49.000Z | 2021-02-03T01:37:07.000Z | 154.66287 | 12,980 | 0.875989 | true | 3,932 | Qwen/Qwen-72B | 1. YES
2. YES | 0.787931 | 0.800692 | 0.63089 | __label__spa_Latn | 0.983904 | 0.3041 |
# Reconstructing an *off-axis* hologram by Fresnel Approximation
Reference: Digital Holography and Wavefront Sensing by Ulf Schnars, Claas Falldorf, John Watson, and Werner Jüptner, Springer-Verlag Berlin Heidelberg, 2016. (Section 3.2)
## Info about the digital hologram:
'ulf7.BMP' is a digital hologram created by recording an object at a distance of about 1 meter with a HeNe laser (632.8 nm) and an image sensor with 6.8 µm pixel size.
```python
# Import libraries related to matplotlib and mathematical operations
# %matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from PIL import Image
import numpy as np
import ipywidgets as widgets
from IPython.display import display
```
```python
# Read the hologram image file
hologram = Image.open('ulf7.BMP')
hologram = np.array(hologram).astype(np.float64) #Convert into float type. Crucial for non integer based mathematical operations
# plot the hologram
imgplot = plt.imshow(hologram, cmap="viridis")
```
## Some equations from the book!
The *Fresnel-Kirchhoff* integral describing the diffraction field beyond an aperture is given by the coherent superposition of the secondary waves (section 2.4)
\begin{equation}
\Gamma\left(\xi^{\prime}, \eta^{\prime}\right)=\frac{i}{\lambda} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} A(x, y) \frac{\exp \left(-i \frac{2 \pi}{\lambda} \rho^{\prime}\right)}{\rho^{\prime}} Q d x d y
\end{equation}
where $A(x, y)$ is the complex amplitude in the plane of the diffracting aperture, $\rho^{\prime}$ is the distance between a point in the aperture plane and a point in the observation plane, and $Q$ is the inclination factor, introduced to account for the absence of backward propagation of the diffracted optical field. For holograms, $Q$ is approximately equal to 1.
A hologram $h(x,y)$ recorded by a reference light wave $E_{R}(x, y)$ can be reconstructed by a conjugate reference wave $E_{R}^{*}(x, y)$ as described by the following *Fresnel-Kirchhoff* integral
\begin{equation}
\Gamma(\xi, \eta)=\frac{i}{\lambda} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} h(x, y) E_{R}^{*}(x, y) \frac{\exp \left(-i \frac{2 \pi}{\lambda} \rho\right)}{\rho} d x d y
\end{equation}
with $\rho = \sqrt{ (x-\xi)^2 + (y-\eta)^2 + d^2 }$. Here $d$ is the distance between the object and hologram planes. Substituting the approximate *Taylor* expansion of $\rho$ into the above equation leads to the Fresnel reconstruction field relation (see section 3.2 of the book)
\begin{aligned} \Gamma(\xi, \eta)=& \frac{i}{\lambda d} \exp \left(-i \frac{2 \pi}{\lambda} d\right) \exp \left[-i \frac{\pi}{\lambda d}\left(\xi^{2}+\eta^{2}\right)\right] \times \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} E_{R}^{*}(x, y) h(x, y) \exp \left[-i \frac{\pi}{\lambda d}\left(x^{2}+y^{2}\right)\right] \exp \left[i \frac{2 \pi}{\lambda d}(x \xi+y \eta)\right] d x d y \end{aligned}
Or, in a digital form by
\begin{aligned} \Gamma(m, n)=& \frac{i}{\lambda d} \exp \left(-i \frac{2 \pi}{\lambda} d\right) \exp \left[-i \pi \lambda d\left(\frac{m^{2}}{N^{2} \Delta x^{2}}+\frac{n^{2}}{N^{2} \Delta y^{2}}\right)\right] \times \sum_{k=0}^{N-1} \sum_{l=0}^{N-1} E_{R}^{*}(k, l) h(k, l) \exp \left[-i \frac{\pi}{\lambda d}\left(k^{2} \Delta x^{2}+l^{2} \Delta y^{2}\right)\right] \exp \left[i 2 \pi\left(\frac{k m}{N}+\frac{l n}{N}\right)\right] \\ =& C \times \sum_{k=0}^{N-1} \sum_{l=0}^{N-1} E_{R}^{*}(k, l) h(k, l) \exp \left[-i \frac{\pi}{\lambda d}\left(k^{2} \Delta x^{2}+l^{2} \Delta y^{2}\right)\right] \exp \left[i 2 \pi\left(\frac{k m}{N}+\frac{l n}{N}\right)\right] \end{aligned}
where $h(k,l)$ is the hologram, $N$ is the number of pixels of the camera sensor (assumed square, i.e. number of rows = number of columns; if not, convert the hologram accordingly prior to these operations), $\lambda$ is the wavelength, $\Delta x$ and $\Delta y$ are the horizontal and vertical pitches of neighboring sensor pixels, and $d$ is the reconstruction distance. It is easy to see that the last term under the discrete sum is actually an IFT (inverse Fourier transform) of the product of the hologram function and an exponential (chirp) factor. $C$ is just a complex constant that does not affect the reconstruction process, and $E_{R}^{*}(k, l)$ simplifies to unity when a plane wave is used as the recording/reconstruction wave.
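A practical detail worth checking numerically is that, for this single-FFT Fresnel reconstruction, the pixel pitch of the reconstructed field scales with the reconstruction distance as $\Delta\xi = \lambda |d| / (N \Delta x)$. The sketch below is only illustrative; in particular the pixel count `N` is an assumed value, not read from 'ulf7.BMP'.
```python
# Minimal sketch: pixel pitch in the reconstruction plane for the Fresnel transform.
wavelength = 632.8e-9   # HeNe wavelength in meters
dx = 6.8e-6             # sensor pixel size in meters
N = 1024                # assumed number of pixels per side (use the actual hologram size)
d = 1.054               # reconstruction distance magnitude in meters

d_xi = wavelength * abs(d) / (N * dx)
print("Reconstruction-plane pixel pitch: %.2f um" % (d_xi * 1e6))
```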
```python
# User defined reconstruction distance
w = widgets.FloatSlider(value=-1.054,min=-2.0,max=2.0,step=0.001,
description='d (in meters):',orientation='horizontal',readout=True,readout_format='.3f',)
display(w)
```
```python
# User-defined parameters
Nr,Nc = np.shape(hologram) #number of rows and columns in the hologram
wavelength = 632.8e-9 #HeNe laser wavelength in SI units i.e. meters
dx = 6.8e-6 #sensor pixel size in meters
d = w.value #-1.054 #reconstruction distance in meters
# prepare the Fresnel operand for the hologram
Nr = np.linspace(0, Nr-1, Nr)-Nr/2
Nc = np.linspace(0, Nc-1, Nc)-Nc/2
k, l = np.meshgrid(Nc,Nr)
factor = np.multiply(hologram, np.exp(-1j*np.pi/(wavelength*d)*(np.multiply(k, k)*dx**2 + np.multiply(l, l)*dx**2)))
reconstructed_field = np.fft.ifftshift(np.fft.ifft2(np.fft.ifftshift(factor))) # Take inverse Fourier transform of the factor
# plot
I = np.abs(reconstructed_field)/np.max(np.abs(reconstructed_field)) #normalized intensity profile
fig = plt.figure(figsize=(10,10)) #setup a blank figure
plt.imshow(I, cmap="hot", clim=(0.0, 0.3))
plt.colorbar()
```
| 5ef860ddaeb2ec6ff2beee568a187004e097d6af | 7,189 | ipynb | Jupyter Notebook | 1_Fresnel_reconstruction_new.ipynb | OptoManishK/Digital_Holography | 0fa694e706daaaa703a4553f5ff6518ca8a4c0c3 | [
"MIT"
] | 4 | 2020-02-14T13:16:44.000Z | 2021-07-15T02:38:30.000Z | 1_Fresnel_reconstruction_new.ipynb | OptoManishK/Digital_Holography | 0fa694e706daaaa703a4553f5ff6518ca8a4c0c3 | [
"MIT"
] | null | null | null | 1_Fresnel_reconstruction_new.ipynb | OptoManishK/Digital_Holography | 0fa694e706daaaa703a4553f5ff6518ca8a4c0c3 | [
"MIT"
] | 4 | 2019-05-02T05:12:04.000Z | 2022-01-22T13:06:25.000Z | 56.606299 | 753 | 0.614272 | true | 1,735 | Qwen/Qwen-72B | 1. YES
2. YES | 0.913677 | 0.805632 | 0.736087 | __label__eng_Latn | 0.914932 | 0.548509 |
```python
%matplotlib inline
%config InlineBackend.figure_format ='retina'
import torch
import socialforce
_ = torch.manual_seed(42)
```
# 1+1D
## Parametric
The potential $V(b, d_{\perp})$ is approximated by two 1D potentials:
\begin{align}
V(b, d_{\perp}) &= \textrm{SF}(b) \cdot \max(0, 1 + a d_{\perp})
\end{align}
where $a$ is the `asymmetry` parameter of the `PedPedPotential2D` constructor.
```python
V = socialforce.potentials.PedPedPotential2D(asymmetry=-1.0)
with socialforce.show.canvas(figsize=(12, 6), ncols=2) as (ax1, ax2):
socialforce.show.potential_2d(V, ax1)
socialforce.show.potential_2d_grad(V, ax2)
```
## Scenarios
Here we use a combination of synthetic {ref}`Circle and ParallelOvertake scenarios <scenarios>`.
```python
circle = socialforce.scenarios.Circle(ped_ped=V)
parallel = socialforce.scenarios.ParallelOvertake(ped_ped=V)
scenarios = circle.generate(5) + parallel.generate(5)
true_experience = socialforce.Trainer.scenes_to_experience(scenarios)
```
## MLP
Next we create a model for pedestrian-pedestrian interaction that is the
product of two 1D potentials: one potential as a function of $b$ and another
as a function of perpendicular distance. The potential is initialized to random
weights and biases.
\begin{align}
V(b, d_{\perp}) &= \textrm{MLP}_b(b) \cdot \textrm{MLP}_{\perp}(d_{\perp}) \;\;\; .
\end{align}
```python
V = socialforce.potentials.PedPedPotentialMLP1p1D()
with socialforce.show.canvas(figsize=(12, 6), ncols=2) as (ax1, ax2):
socialforce.show.potential_2d(V, ax1)
socialforce.show.potential_2d_grad(V, ax2)
```
## Inference
Next, we use the standard SGD optimizer from PyTorch and train the ped-ped
interaction model on the synthetic data created above.
```python
simulator = socialforce.Simulator(ped_ped=V)
opt = torch.optim.SGD(V.parameters(), lr=1.0)
socialforce.Trainer(simulator, opt).loop(20, true_experience, log_interval=5)
```
```python
with socialforce.show.canvas(figsize=(12, 6), ncols=2) as (ax1, ax2):
socialforce.show.potential_2d(V, ax1)
socialforce.show.potential_2d_grad(V, ax2)
```
```python
```
| fce36cde932c3bb031661c6a6e416b8564b04006 | 4,474 | ipynb | Jupyter Notebook | guide/pedped_1p1d.ipynb | svenkreiss/socialforce | c1dde8cc25979d68f67ede1bb77ce09a0406af0e | [
"MIT"
] | 83 | 2018-09-10T13:34:57.000Z | 2022-03-30T00:03:16.000Z | guide/pedped_1p1d.ipynb | svenkreiss/socialforce | c1dde8cc25979d68f67ede1bb77ce09a0406af0e | [
"MIT"
] | 5 | 2018-10-15T14:49:52.000Z | 2022-01-20T00:50:49.000Z | guide/pedped_1p1d.ipynb | svenkreiss/socialforce | c1dde8cc25979d68f67ede1bb77ce09a0406af0e | [
"MIT"
] | 36 | 2018-10-07T16:14:08.000Z | 2022-03-30T00:03:19.000Z | 25.712644 | 105 | 0.575548 | true | 651 | Qwen/Qwen-72B | 1. YES
2. YES | 0.803174 | 0.651355 | 0.523151 | __label__eng_Latn | 0.853524 | 0.053785 |
# Control loop
## Objectives
- Identify the typical elements of controlled systems.
- Identify the tasks of each element of the control loop.
## Definition
A control loop is the set of systems interacting with one another to achieve [closed-loop control](https://www.electronics-tutorials.ws/systems/closed-loop-system.html). The goal of this interaction is to obtain a desired behavior as the response of a plant.
This diagram shows a typical control loop, which serves as a guide when automating processes.
- The **plant** is the process that must be controlled.
- The **actuator** changes the behavior of the plant according to the commands of the **controller**.
- The **controller** makes decisions based on the process error so that the controlled system meets the objective set by the **reference** signal.
- The **sensor** measures the behavior of the **plant** and passes this information to the **controller**.
With today's technology, controllers are electronic. For this reason, the **actuator** receives electrical signals and converts them into the plant's own physical domain, and the sensor converts information about the plant's behavior into electrical form.
Consider that:
- the **actuator** and the **plant** are seen as a single system from the controller's point of view, which can be called the **process**;
- the **sensor** provides error-free information very quickly compared with the evolution of the process.
## Reducing the loop
The control loop can then be reduced to:
- $Y_{sp}$ is the reference signal (sp for SetPoint).
- $Y$ is the response signal of the controlled system.
- $E = Y_{sp} - Y$ is the error signal.
- $G_C$ is the **controller**.
- $U$ is the decision made by the **controller** and the excitation of the **process**.
- $G_P$ is the **process**.
Recall that these signals vary in time. Thus, we can define:
\begin{align}
E(t) &= Y_{sp}(t)-Y(t)\\
U(t) &= \mathcal{G_C} \{E(t) \} = \mathcal{G_C} \{ Y_{sp}(t)-Y(t) \} \\
Y(t) &= \mathcal{G_P} \{U(t) \} = \mathcal{G_P} \{ \mathcal{G_C} \{ Y_{sp}(t)-Y(t) \} \}
\end{align}
## Loop with LTI systems
If the systems are **LTI**, we can denote the impulse response of the **controller** by $g_{C}(t)$ and the impulse response of the **process** by $g_{P}(t)$. This allows the previous expressions to be rewritten as:
\begin{align}
E(t) &= Y_{sp}(t)-Y(t)\\
U(t) &= g_C(t) * ( Y_{sp}(t)-Y(t) ) \\
Y(t) &= g_P(t) * g_C(t) * ( Y_{sp}(t)-Y(t) )
\end{align}
This shows that the response signal depends on:
- the desired behavior $Y_{sp}$
- the process $G_{P}$
- the controller $G_{C}$
Note that, to obtain a desired behavior in $Y(t)=g_P(t)*g_C(t)*( Y_{sp}(t)-Y(t) )$, $g_C(t)$ must be chosen so that it corrects the behavior of the process. The job of control engineering is to design the controller so that specifications are met; a simple simulation of this idea is sketched below.
To simplify the analysis and design of controlled systems, the **Laplace transform** will be used.
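Before moving on, the loop equations above can be simulated directly in discrete time. The following sketch assumes an illustrative first-order process and a simple proportional controller (neither appears in the original notebook) just to show how $E(t)$, $U(t)$ and $Y(t)$ interact:
```python
import numpy as np

dt = 0.01                      # integration step (s)
T = np.arange(0, 5, dt)        # time vector
Y_sp = np.ones_like(T)         # step reference Y_sp(t)
Kp = 5.0                       # proportional gain (illustrative)
tau = 0.5                      # process time constant (illustrative)

Y = np.zeros_like(T)
for k in range(1, len(T)):
    E = Y_sp[k - 1] - Y[k - 1]     # error E(t) = Y_sp(t) - Y(t)
    U = Kp * E                     # controller decision U(t)
    # first-order process tau*dY/dt + Y = U, integrated with forward Euler
    Y[k] = Y[k - 1] + dt * (U - Y[k - 1]) / tau

print("Final value of Y:", Y[-1])
```
Note the steady-state offset: with a pure proportional controller this output settles at $K_p/(1+K_p)$ of the reference instead of reaching it exactly.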
## Control game
```python
import matplotlib.pyplot as plt
%matplotlib inline
from juego import ControlGame
game = ControlGame(runtime=45) # seconds
```
Suppose you have to operate a **SISO** system (single input, single output) using a slider and your perception of how the system is behaving.
- Run the cell with `game.ui()`.
- Press the `Ejecutar` (run) button and move the `U(t)` slider so that the `Salida` (output) signal follows the `Referencia` (reference) signal, which changes randomly every few seconds.
- Keep in mind that the `Puntaje` (score) grows faster the smaller the error is.
- Run the cell several times to see how you learn to control the system.
- To visualize your performance as a controller, run the cell with `game.plot()`.
```python
game.ui()
```
VBox(children=(HBox(children=(Button(description='Ejecutar', style=ButtonStyle()), Text(value='0', description…
```python
game.plot()
```
The changes you have just made by hand must be carried out automatically by the **controller**.
The rest of the course discusses the analysis and design techniques most commonly used for analog systems.
```python
```
| ab3f6188814b90bfe362c1dec36cb9d79a9dfcbf | 29,482 | ipynb | Jupyter Notebook | Bucle_control.ipynb | pierrediazp/Control | 2a185eff5b5dc84045115009e62296174d072220 | [
"MIT"
] | null | null | null | Bucle_control.ipynb | pierrediazp/Control | 2a185eff5b5dc84045115009e62296174d072220 | [
"MIT"
] | null | null | null | Bucle_control.ipynb | pierrediazp/Control | 2a185eff5b5dc84045115009e62296174d072220 | [
"MIT"
] | 1 | 2021-11-18T13:08:36.000Z | 2021-11-18T13:08:36.000Z | 133.402715 | 22,168 | 0.8668 | true | 1,250 | Qwen/Qwen-72B | 1. YES
2. YES | 0.692642 | 0.798187 | 0.552858 | __label__spa_Latn | 0.986731 | 0.122803 |
```python
import sympy as sp
import numpy as np
```
```python
x, y = [sp.IndexedBase(e) for e in ['x', 'y']]
m = sp.symbols('m', integer=True)
a, b = sp.symbols('a b', real=True)
i = sp.Idx('i', m)
```
```python
loss = (y[i] - (a*x[i] + b))**2
```
```python
loss
```
$\displaystyle \left(- a {x}_{Idx\left(i, \left( 0, \ m - 1\right)\right)} - b + {y}_{Idx\left(i, \left( 0, \ m - 1\right)\right)}\right)^{2}$
Having defined the loss function using indexed variables, we might hope that the
implicit summation over the repeated index would carry through to the derivative,
but it looks like this isn't the case.
Below we see that taking the derivative with respect to the fit parameters is applied
to each point individually rather than to the whole sum, which is incorrect.
```python
sp.solve(loss.diff(a), a)
```
[(-b + y[i])/x[i]]
```python
sp.solve(loss.diff(b), b)
```
[-a*x[i] + y[i]]
Next, try adding an explicit summation around the loss expression. This gives the
correct set of derivative equations, but `solve` cannot find a solution.
```python
sp.diff(sp.Sum(loss, i),a)
```
$\displaystyle \sum_{Idx\left(i, \left( 0, \ m - 1\right)\right)=0}^{m - 1} - 2 \left(- a {x}_{Idx\left(i, \left( 0, \ m - 1\right)\right)} - b + {y}_{Idx\left(i, \left( 0, \ m - 1\right)\right)}\right) {x}_{Idx\left(i, \left( 0, \ m - 1\right)\right)}$
```python
sp.diff(sp.Sum(loss, i), b)
```
$\displaystyle \sum_{Idx\left(i, \left( 0, \ m - 1\right)\right)=0}^{m - 1} \left(2 a {x}_{Idx\left(i, \left( 0, \ m - 1\right)\right)} + 2 b - 2 {y}_{Idx\left(i, \left( 0, \ m - 1\right)\right)}\right)$
```python
sp.solve([sp.diff(sp.Sum(loss, i),a), sp.diff(sp.Sum(loss, i),b)], [a, b])
```
[]
```python
sp.solve([loss.expand().diff(a), loss.expand().diff(b)], [a,b])
```
{a: -b/x[i] + y[i]/x[i]}
Using `MatrixSymbol` instead seems to be the trick.
```python
x_2 = sp.MatrixSymbol('x', m, 1)
y_2 = sp.MatrixSymbol('y', m, 1)
a_2 = sp.MatrixSymbol('a', 1, 1)
b_2 = b*sp.OneMatrix(m, 1)
```
```python
err = y_2 - (x_2*a_2 + b_2)
err
```
$\displaystyle - b \mathbb{1} - x a + y$
```python
objective = (err.T * err)
objective
```
$\displaystyle \left(- b \mathbb{1} - a^{T} x^{T} + y^{T}\right) \left(- b \mathbb{1} - x a + y\right)$
```python
objective.diff(a_2)
```
$\displaystyle - 2 x^{T} \left(- b \mathbb{1} - x a + y\right)$
```python
objective.diff(b)
```
$\displaystyle - \left(- b \mathbb{1} - a^{T} x^{T} + y^{T}\right) \mathbb{1} - \mathbb{1} \left(- b \mathbb{1} - x a + y\right)$
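Setting the derivative with respect to `a` to zero recovers the usual least-squares normal equation for the slope (with $b$ held fixed):
\begin{align}
-2 x^{T}\left(y - x a - b\mathbb{1}\right) = 0
\quad\Longrightarrow\quad
a = \left(x^{T}x\right)^{-1}x^{T}\left(y - b\mathbb{1}\right)
\end{align}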
Functions of matrices, e.g. the generator of rotations
```python
t = sp.symbols('t', real=True)
g = sp.Matrix([[0, -t], [t, 0]])
```
```python
g
```
```python
sp.exp(g)
```
$\displaystyle \left[\begin{matrix}\cos{\left(t \right)} & - \sin{\left(t \right)}\\\sin{\left(t \right)} & \cos{\left(t \right)}\end{matrix}\right]$
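As a quick sanity check, substituting a specific angle into the matrix exponential reproduces the familiar 2x2 rotation matrix:
```python
# Rotation by t = pi/2 maps (1, 0) to (0, 1).
sp.exp(g).subs(t, sp.pi / 2)   # Matrix([[0, -1], [1, 0]])
```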
```python
```
| c0ced813d174d88447ec5e463ff19e84bc3c13b0 | 18,999 | ipynb | Jupyter Notebook | scripts/indexed_expressions_20211107.ipynb | mattmcd/PyBayes | c3931cd495f1e96eea7e8f6b5c527403726de7d7 | [
"Apache-2.0"
] | null | null | null | scripts/indexed_expressions_20211107.ipynb | mattmcd/PyBayes | c3931cd495f1e96eea7e8f6b5c527403726de7d7 | [
"Apache-2.0"
] | null | null | null | scripts/indexed_expressions_20211107.ipynb | mattmcd/PyBayes | c3931cd495f1e96eea7e8f6b5c527403726de7d7 | [
"Apache-2.0"
] | null | null | null | 37.998 | 1,420 | 0.571135 | true | 1,054 | Qwen/Qwen-72B | 1. YES
2. YES | 0.924142 | 0.847968 | 0.783642 | __label__eng_Latn | 0.536521 | 0.658996 |
$\newcommand{\pr}{\textrm{Pr}}$
$\newcommand{\l}{\left}$
$\newcommand{\r}{\right}$
$\newcommand\given[1][]{\:#1\vert\:}$
$\newcommand{\var}{\textrm{Var}}$
$\newcommand{\mc}{\mathcal}$
$\newcommand{\lp}{\left(}$
$\newcommand{\rp}{\right)}$
$\newcommand{\lb}{\left\{}$
$\newcommand{\rb}{\right\}}$
$\newcommand{\iid}{\textrm{i.i.d. }}$
# 2.1 Belief functions and probabilities
Probabilities are a way to numerically express rational beliefs.
This first section shows that probabilities satisfy some general
features that we would expect a measure of "belief" to have, so
it seems reasonable to use probabilities to represent our belief
in something.
Probability axioms:
**P1**: $0 = \pr\l(\textrm{not } H \given H\r) \leq \pr\l(F \given H\r) \leq \pr\l(H \given H\r) = 1$
**P2**: $\pr\l(F \cup G \given H\r) = \pr\l(F \given H\r) + \pr\l(G \given H\r)$ if $F \cap G = \varnothing$
**P3**: $\pr\l(F \cap G \given H\r) = \pr\l(G \given H\r)\pr\l(F \given G \cap H\r)$
# 2.2 Events, partitions, and Bayes' rule
**Definition 1 (Partition)**: A collection of sets $\left\{H_1, \cdots, H_k\right\}$ is a partition
of another set $\mathcal{H}$ if
1. the events are disjoint: $H_i \cap H_j = \varnothing$ for $i \not= j$
2. the union of the sets is $\mathcal{H}$: $\bigcup_{k=1}^K H_k = \mathcal{H}$
If $\mathcal{H}$ is the set of all possible truths and $\left\{H_1, \cdots, H_k\right\}$ is a partition of
$\mathcal{H}$, then exactly one of $\left\{H_1, \cdots, H_k\right\}$ contains the truth.
Suppose $\left\{H_1, \cdots, H_k\right\}$ is a partition of $\mathcal{H}$, $\textrm{Pr}\left(\mathcal{H}\right) = 1$,
and $E$ is some specific event. The axioms of probability imply:
**Rule of total probability**: $$\sum_{k=1}^{K} \pr\left(H_k\right) = 1$$
**Rule of marginal probability**:
\begin{align}
\pr\left(E\right) &= \sum_{k=1}^K \pr\left(E \cap H_k\right) \\
&= \sum_{k=1}^K \pr\left(E \given H_k\right)\pr\left(H_k\right)
\end{align}
**Bayes' rule**:
\begin{align}
\pr\left(H_j \given E\right) &= \frac{\pr\l(E \given H_j\r)\pr\l(H_j\r)}{\pr\l(E\r)} \\
&= \frac{\pr\l(E \given H_j\r)\pr\l(H_j\r)}{\sum_{k=1}^K \pr\l(E \given H_k\r)\pr\l(H_k\r)}
\end{align}
We would say that $H_k$ has been marginalized out in the denominator.
I think it's worth looking at the formula for Bayes' rule here. The left hand side,
$\pr\left(H_j \given E\right)$, represents that probability that $H_j$ is true given that
event $E$ happened. For us, $E$ is generally data and $H_k$ will be a parameter we are
interested in estimating. On the right side, we have $\pr\l(E \given H_k\r)$
which is the probability of observing the event/data under the parameterization $H_k$.
This is multiplied by $\pr\l(H_j\r)$ which is our prior distribution on $H_j$.
The denominator is a little more tricky. $\pr\l(E\r)$ doesn't mean much on its own. However,
we can rewrite the denominator using the rule of marginal probability as
$$\pr\l(E\r) = \sum_{k=1}^K\pr\l(E \cap H_k\r) = \sum_{k=1}^K \pr\l(E \given H_k\r)\pr\l(H_k\r)$$
Since we think $E$ depends on the $H_i$, we can treat $\pr\l(E\r)$ as a
**marginal density** and rewrite it as a joint distribution (see 2.5 below for more information).
We know $\pr\l(E \given H_k\r)$ and $\pr\l(H_k\r)$ so we can evaluate this expression.
I kind of think of
the numerator as the strength of my belief and the denominator acts to normalize that value
based on how strongly I believe all of the $H_k$.
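To make this concrete, here is a tiny numerical example of the formula (the prior and likelihood numbers are made up):
```python
import numpy as np

# Bayes' rule over a partition {H_1, H_2, H_3}; the numbers are illustrative.
prior = np.array([0.5, 0.3, 0.2])          # Pr(H_k), sums to 1 (rule of total probability)
likelihood = np.array([0.10, 0.40, 0.80])  # Pr(E | H_k)

evidence = np.sum(likelihood * prior)      # Pr(E), via the rule of marginal probability
posterior = likelihood * prior / evidence  # Pr(H_k | E)

print(posterior, posterior.sum())          # the posterior also sums to 1
```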
**Bayes factors**: $\l\{H_1, \cdots, H_k\r\}$ often refer to disjoint hypotheses and $E$ refers
to data. To compare hypotheses post-experimentally, we often calculate the ratio:
\begin{align}
\frac{\pr\l(H_i \given E\r)}{\pr\l(H_j \given E\r)} &=
\frac{\pr\l(E \given H_i\r)}{\pr\l(E \given H_j\r)} \times \frac{\pr\l(H_i\r)}{\pr\l(H_j\r)} \\
&= \textrm{"Bayes factor"}\times \textrm{"prior beliefs"}
\end{align}
This quantity reminds us that Bayes' rule does not determine what our beliefs should be after we
see data, it only tells us how they should change.
# 2.3 Independence
**Definition 2 (Independence)**: Two events $F$ and $G$ are conditionally independent given
$H$ if $\pr\l(F \cap G \given H\r) = \pr\l(F \given H\r)\pr\l(G \given H\r)$.
If $F$ and $G$ are conditionally independent given $H$, then knowing $G$ (in addition to $H$) tells us nothing more about $F$.
# 2.4 Random variables
A random variable is an unknown numerical quantity about which we make probability statements.
## 2.4.1 Discrete random variables
Let $Y$ be a random variable and let $\mathcal{Y}$ be the set of all possible values of $Y$.
We say $Y$ is discrete if $\mathcal{Y}$ is countable: $\mathcal{Y} = \l\{y_1, y_2, \cdots\r\}$.
For short, we will write $\pr\l(Y=y\r) = p\l(y\r)$ where $p$ is the probability density function (pdf).
The pdf has the following properties:
1. $0 \leq p\l(y\r) \leq 1$ for all $y \in \mathcal{Y}$
2. $\sum_{y \in \mathcal{Y}} p\left(y\right) = 1$
## 2.4.2 Continuous random variables
If the sample space $\mathcal{Y}$ is roughly equal to $\mathbb{R}$, then we often define
probability distributions for random variables in terms of a cumulative distribution function
(cdf): $F\l(y\r) = \pr\l(Y \leq y\r)$. Note that
* $F\l(\infty\r) = 1$
* $F\l(-\infty\r) = 0$
* $F\l(b\r) \leq F\l(a\r)$ if $b<a$
* $\pr\l(Y > a\r) = 1-F\l(a\r)$
* $\pr\l(a < Y < b\r) = F\l(b\r) - F\l(a\r)$
If $Y$ is a continuous random variable, then there exists a pdf $p$ such that
$$F\l(a\r) = \int_{-\infty}^a p\l(y\r)dy$$
The continuous pdf has analogous characteristics to the discrete pdf:
1. $p\l(y\r) \geq 0$ for all $y \in \mathcal{Y}$ (unlike the discrete case, a continuous pdf can take values greater than 1)
2. $\int_{y \in \mathbb{R}} p\left(y\right)dy = 1$
## 2.4.3 Descriptions of distributions
### Mean, mode, median
The **mean** or **expectation** of an unknown quantity $Y$ is given by
* $E\l[Y\r] = \sum_{y\in Y}yp\l(y\r)$ if $Y$ is discrete
* $E\l[Y\r] = \int_{y\in \mathbb{R}}yp\l(y\r)dy$ if $Y$ is continuous
The mean is the center of mass of the distribution. It is generally not equal to
* the **mode**: the most probable value of $Y$
* the **median**: the value of $Y$ in the middle of the distribution
The mean is a good quantity to look at because
1. The mean is a scaled value of the total of $\l\{Y_1, \cdots, Y_n\r\}$, and the total
is often a quantity of interest.
2. If you were forced to guess the value of $Y$, guessing the mean would minimize your error
if it was measured as $\l(Y - y_{guess}\r)^2$.
3. In some simple models, the mean contains all of the information about the population that
can be obtained from the data.
### Variance
The variance is a measure of the spread:
\begin{align}
\var\l[Y\r] &= E\l[\l(Y - E\l[Y\r]\r)^2\r] \\
&= E\l[Y^2\r] - E\l[Y\r]^2
\end{align}
The variance is the average squared distance that a sample value $Y$ will be from
the population mean $E\l[Y\r]$. The standard deviation is the square root of the variance
and is on the same scale as the mean.
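A quick numerical check that the two variance formulas agree, using a made-up discrete pdf:
```python
import numpy as np

# values y and probabilities p of a small discrete distribution (illustrative numbers)
y = np.array([0.0, 1.0, 2.0, 3.0])
p = np.array([0.1, 0.2, 0.3, 0.4])       # sums to 1

mean = np.sum(y * p)                      # E[Y]
var = np.sum((y - mean) ** 2 * p)         # E[(Y - E[Y])^2]
var_alt = np.sum(y ** 2 * p) - mean ** 2  # E[Y^2] - E[Y]^2

print(mean, var, var_alt)                 # 2.0 1.0 1.0
```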
For a continuous, increasing cdf $F$, the $\alpha$ **quantile** is the $y_\alpha$ such that
$F\left(y_\alpha\right) \equiv \pr\l(Y \leq y_\alpha\r) = \alpha$. The **interquartile** range
is the interval $\l(y_{0.25}, y_{0.75}\r)$ which contains 50% of the mass of the distribution.
## 2.5 Joint distributions
### Discrete distributions
Let
* $\mathcal{Y}_1$, $\mathcal{Y}_2$ be two countable samples spaces
* $Y_1$, $Y_2$, be two random variables, taking values in $\mathcal{Y}_1$, $\mathcal{Y}_2$ respectively
The **joint pdf** or **joint density** of $Y_1$ and $Y_2$ is defined as
$$p_{Y_1 Y_2}\l(y_1, y_2\r) = \pr\l(\l\{Y_1 = y_1\r\} \cap \l\{Y_2 = y_2\r\} \r)$$
for $y_1 \in \mc{Y}_1$, $y_2 \in \mc{Y}_2$.
The **marginal density** can be computed from the joint density:
\begin{align}
p_{Y_1}\l(y_1\r) &\equiv \pr\l(Y_1 = y_1\r) \\
&= \sum_{y_2 \in \mc{Y}_2} \pr\l(\l\{Y_1 = y_1\r\} \cap \l\{Y_2 = y_2\r\} \r) \\
&\equiv \sum_{y_2 \in \mc{Y}_2}p_{Y_1Y_2} \l(y_1, y_2\r)
\end{align}
The **conditional density** of $Y_2$ given $\l\{Y_1 = y_1\r\}$ can be computed from the
joint density and the marginal density of $Y_1$:
\begin{align}
p_{Y_2 \given Y_1}\l(y_2 \given y_1\r) &= \frac{\pr\l(\l\{Y_1 = y_1\r\} \cap \l\{Y_2 = y_2\r\}\r)}{\pr\l(Y_1 = y_1\r)} \\
&= \frac{p_{Y_1Y_2}\l(y_1, y_2\r)}{p_{Y_1}\l(y_1\r)} \\
&= \frac{p_{Y_1Y_2}\l(y_1, y_2\r)}{\sum_{y_2 \in \mc{Y}_2}p_{Y_1Y_2} \l(y_1, y_2\r)}
\end{align}
We often drop the subscripts on the pdf's such that $p_{Y_1}\l(y_1\r)$ becomes $p\l(y_1\r)$ etc.
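These relationships are easy to verify numerically. In the sketch below the joint pmf is made up, with rows indexing $y_1$ and columns indexing $y_2$:
```python
import numpy as np

# made-up 2x3 joint pmf p(y1, y2); all entries sum to 1
p_joint = np.array([[0.10, 0.20, 0.10],
                    [0.25, 0.15, 0.20]])

p_y1 = p_joint.sum(axis=1)               # marginal p(y1): sum over y2
p_y2_given_y1 = p_joint / p_y1[:, None]  # conditional p(y2 | y1) = joint / marginal

print(p_y1)                              # [0.4 0.6]
print(p_y2_given_y1.sum(axis=1))         # each conditional distribution sums to 1
```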
### Continuous joint distributions
If $Y_1$ and $Y_2$ are continuous, we have a joint cdf $F_{Y_1Y_2}\l(a, b\r) \equiv
\pr\l(\l\{Y_1 \leq a\r\} \cap \l\{Y_2 \leq b\r\}\r)$, there is a function $p_{Y_1Y_2}$ such that
$$F_{Y_1Y_2}\l(a,b\r) = \int_{-\infty}^a \int_{-\infty}^b p_{Y_1Y_2}\l(y_1, y_2\r)dy_2dy_1$$
The function $p_{Y_1Y_2}$ is the joint density of $Y_1$ and $Y_2$. As in the discrete case,
we have
* $p_{Y_1}\l(y_1\r) = \int_{-\infty}^\infty p_{Y_1Y_2}\l(y_1,y_2\r)dy_2$
* $p_{Y_2 \given Y_1}\l(y_2 \given y_1\r) = p_{Y_1Y_2}\l(y_1, y_2\r) / p_{Y_1}\l(y_1\r)$
### Mixed continuous and discrete variables
Let $Y_1$ be discrete and $Y_2$ be continuous. Suppose we have
* a marginal density $p_{Y_1}$ from our beliefs $\pr\l(Y_1=y_1\r)$
* a conditional density $p_{Y_2 \given Y_1}\left(y_2\given y_1\r)$ from
$\pr\l(Y_2 \leq y_2 \given Y_1 = y_1\r) \equiv F_{Y_2 \given Y_1}\l(y_2 \given y_1\r)$
The joint density of $Y_1$ and $Y_2$ is then
$$p_{Y_1Y_2}\l(y_1, y_2\r) = p_{Y_1}\l(y_1\r) \times p_{Y_2 \given Y_1}\l(y_2 \given y_1\r)$$
and has the property that
$$\pr\l(Y_1 \in A, Y_2 \in B\r) = \int_{y_2 \in B} \left\{\sum_{y_1 \in A} p_{Y_1Y_2}\l(y_1, y_2\r)\r\}dy_2$$
In other words, we can use summation and integration to calculate the joint density.
### Bayes rule and parameter estimation
Let $\theta$ be a continuous parameter we want to estimate and let $Y$ be a discrete
data measurement. Bayesian estimation of $\theta$ derives from the calculation of
$p\l(\theta \given y\r)$, where $y$ is the observed value of $Y$. This calculation
first requires the joint density of $\theta$ and $Y$. We can construct the joint density from
* $p\l(\theta\r)$, beliefs about $\theta$
* $p\l(y \given \theta\r)$, beliefs about $Y$ for each value of $\theta$
Having observed $\l\{Y = y\r\}$, we need to compute our updated beliefs about $\theta$:
$$p\l(\theta | y\r) = p\l(\theta, y\r) / p\lp y\rp = p\lp \theta \rp p\lp y \given \theta \rp / p\lp y \rp$$
**This conditional density is called the posterior density of $\theta$.** If $\theta_a$ and $\theta_b$
are two estimates of $\theta$, the posterior probability (density) of $\theta_a$ relative to $\theta_b$,
conditional on $Y=y$, is
\begin{align}
\frac{p\lp \theta_a \given y \rp}{p \lp \theta_b \given y \rp} &=
\frac{p\lp \theta_a \rp p\lp y | \theta_a \rp / p\lp y \rp}{p\lp \theta_b \rp p\lp y | \theta_b \rp / p\lp y \rp} \\
&= \frac{p\lp \theta_a \rp p\lp y | \theta_a \rp}{p\lp \theta_b \rp p\lp y | \theta_b \rp}
\end{align}
To evaluate the **relative** posterior odds of $\theta_a$ and $\theta_b$, we do not need to evaluate
the marginal density $p\lp y \rp$. We see that
$$p\lp \theta | y \rp \propto p\lp \theta \rp p\lp y \given \theta \rp$$
The constant of proportionality is $1 / p\lp y \rp$ which we *could* calculate using
$$p\lp y \rp = \int_\Theta p\lp y, \theta \rp d\theta = \int_\Theta p\lp y \given \theta \rp p\lp \theta \rp d\theta$$
Later we will see that the numerator is more important than this denominator.
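As a concrete illustration of computing $p\lp y \rp$ by integration, here is a grid approximation for a binomial sampling model with a uniform prior (the data $n=10$, $y=3$ are illustrative choices, not from the text):
```python
import numpy as np
from scipy import stats, integrate

n, y = 10, 3
theta = np.linspace(0, 1, 1001)

prior = stats.uniform.pdf(theta)                           # p(theta), uniform on [0, 1]
likelihood = stats.binom.pmf(y, n, theta)                  # p(y | theta)
marginal = integrate.trapezoid(likelihood * prior, theta)  # p(y) = integral of p(y|theta) p(theta)

posterior = likelihood * prior / marginal                  # p(theta | y)
print(integrate.trapezoid(posterior, theta))               # close to 1, as a density should be
```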
# 2.6 Independent random variables
Suppose $Y_1, \cdots, Y_n$ are random variables and that $\theta$ is a parameter describing
the conditions under which the random variables are generated. We say that $Y_1, \cdots, Y_n$
are **conditionally independent** given $\theta$ if for every collection of $n$ sets $\lb A_1, \cdots, A_n \rb$
we have
$$\pr\lp Y_1 \in A_1, \cdots, Y_n \in A_n \given \theta \rp =
\pr\lp Y_1 \in A_1 \given \theta\rp \times \cdots \times \pr \lp Y_n \in A_n \given \theta \rp$$
This tells us that knowing $Y_j$ gives us no further information about $Y_i$ beyond what $\theta$
gives us. Under independence, the joint density of the $Y_i$ condtioned on $\theta$ is
$$p\lp y_1, y_2, \cdots, y_n \given \theta \rp = \prod_{i=1}^n p_{Y_i} \lp y_i \given \theta \rp$$
In this case, we say that $Y_1, \cdots, Y_n$ are **conditionally independent and identically
distributed (i.i.d.)**: $Y_1, \cdots, Y_n \sim \iid p\lp y \given \theta \rp$.
# 2.7 Exchangeability
**Definition 3 (Exchangeable)**: Let $p\lp y_1, \cdots, y_n\rp$ be the joint density of $Y_1, \cdots, Y_n$.
If $p\lp y_1, \cdots, y_n\rp = p\lp y_{\pi_1}, \cdots, y_{\pi_n}\rp$ for all permutations $\pi$ of
$\lb 1, \cdots, n \rb$, then $Y_1, \cdots, Y_N$ are exchangeable. This basically means that
$Y_1, \cdots, Y_N$ are exchangeable if the subscript labels convey no information about the outcomes.
Theorem: If $\theta \sim p\lp \theta \rp$ and $Y_1, \cdots, Y_N$ are conditionally i.i.d. given $\theta$,
then marginally (unconditionally on $\theta$), $Y_1, \cdots, Y_N$ are exchangeable.
# 2.8 de Finetti's theorem
We know that if $Y_1, \cdots, Y_n \given \theta$ are i.i.d. and $\theta \sim p\lp \theta \rp$, then
$Y_1, \cdots, Y_N$ are exchangeable. Can we say something about the other direction?
**Theorem 1 (de Finetti)**: Let $Y_1, Y_2, \cdots$ be a potentially
infinite sequence of random variables all having a common sample space $\mc{Y}$,
so that $Y_i \in \mc{Y}$ for all $i \in \lb 1, 2, \cdots \rb$. Suppose that for
any $n$ our belief model for $Y_1, \cdots, Y_n$ is exchangeable
$$p \lp y_1, \cdots, y_n \rp = p\lp y_{\pi_1}, \cdots, y_{\pi_n} \rp$$
for all permutations of $\lb 1, \cdots, n \rb$. Then our model can be written as
$$p\lp y_1, \cdots, y_n \rp = \int \lb \prod_1^np \lp y_i \given \theta \rp \rb p \lp \theta \rp d \theta$$
for some parameter $\theta$, some prior distribution on $\theta$ ($p \lp \theta \rp$), and some sampling
model $p\lp y \given \theta \rp$. The prior and sampling model depend on the form of the belief
model $p\lp y_1, \cdots, y_n \rp$.
The probability distribution $p\lp \theta \rp$ represents our beliefs about the outcomes
of $\lb Y_1, \cdots, Y_n \rb$, induced by our belief model $p\lp y_1, y_2, \cdots \rp$.
More precisely,
* $p\lp \theta \rp$ represents our beliefs about $\lim_{n\rightarrow \infty} \sum_{i=1}^n Y_i / n$ in the binary case
* $p\lp \theta \rp$ represents our beliefs about $\lim_{n\rightarrow \infty} \sum_{i=1}^n \mathbb{1}\lp Y_i \leq c \rp / n$
for each $c$ in the general case
We can summarize as
$$Y_1, \cdots, Y_n \given \theta \textrm{ are i.i.d. and }\theta \sim p\lp \theta \rp \iff
Y_1, \cdots, Y_N \textrm{ are exchangeable for all } n$$
The question is when are $Y_1, \cdots, Y_n$ exchangeable for all $n$? For this to be true,
we need both exchangeability and repeatability. Exchangeability is true when the labels
have no meaning. Repeatability is true when
* $Y_1, \cdots, Y_n$ are outcomes of a repeatable experiment
* $Y_1, \cdots, Y_n$ are sampled from a finite population *with* replacement
* $Y_1, \cdots, Y_n$ are sampled from an infinite population *without* replacement
If $Y_1, \cdots, Y_n$ are exchangeable and sampled from a finite population *without* replacement
of size $N >> n$, then they can be modeled as approximately conditionally i.i.d.
```python
```
| e6adbdf0e08ae1da7cb269b001fe95201accbe70 | 19,780 | ipynb | Jupyter Notebook | readings/A First Course in Bayesian Statistical Methods/2. Belief, probability, and exchangeability (long).ipynb | rivas-lab/presentations-readings | 3d9af7fba9c01ec67e80c12468106ab2cbd0ab24 | [
"MIT"
] | 2 | 2017-02-09T19:23:06.000Z | 2017-03-09T18:23:37.000Z | readings/A First Course in Bayesian Statistical Methods/2. Belief, probability, and exchangeability (long).ipynb | rivas-lab/reading | 3d9af7fba9c01ec67e80c12468106ab2cbd0ab24 | [
"MIT"
] | null | null | null | readings/A First Course in Bayesian Statistical Methods/2. Belief, probability, and exchangeability (long).ipynb | rivas-lab/reading | 3d9af7fba9c01ec67e80c12468106ab2cbd0ab24 | [
"MIT"
] | null | null | null | 53.315364 | 152 | 0.554297 | true | 5,566 | Qwen/Qwen-72B | 1. YES
2. YES | 0.847968 | 0.865224 | 0.733682 | __label__eng_Latn | 0.968673 | 0.542921 |
## Symbolic analysis of models
In this notebook, we use symbolic mathematics to study an energy-based PySB model. We derive analytical steady-state solutions in order to study reaction networks without costly ODE simulations. First, we load the previously developed model for RAF and RAF inhibition:
```python
import toy_example_RAF_RAFi as model
from pysb.bng import generate_equations
from util_display import display_model_info
#generate the model equations
model=model.model
generate_equations(model)
#display model informations
display_model_info(model)
```
Model information
Species: 6
Parameters: 12
Expressions: 8
Observables: 8
Total Rules: 2
Energy Rules: 2
Non-energy Rules: 0
Energy Patterns: 4
Reactions: 12
Next, we define the sympy system of equations corresponding to the ODE system, but with the left-hand side (the time derivatives) set to zero, meaning that the system is considered to be at steady state:
```python
import sympy
import scipy
#create a list of expressions to be substituted into kinetic rates of reactions
species = sympy.Matrix([sympy.Symbol(f"s_{i}", nonnegative=True) for i in range(len(model.species))])
subs = {e: e.expand_expr() for e in model.expressions | model._derived_expressions}
subs.update({sympy.Symbol(f"__s{i}"): s for i, s in enumerate(species)})
kinetics = sympy.Matrix([r['rate'] for r in model.reactions]).xreplace(subs)
#simplyfy kinetic
kinetics.simplify()
sm = sympy.SparseMatrix(*model.stoichiometry_matrix.shape, model.stoichiometry_matrix.todok())
obs_matrix = scipy.sparse.lil_matrix(
(len(model.observables), len(model.species)), dtype=int
)
for i, obs in enumerate(model.observables):
obs_matrix[i, obs.species] = obs.coefficients
om = sympy.SparseMatrix(*obs_matrix.shape, obs_matrix.todok())
odes = sm * kinetics
observables = om * species
```
The following cell currently needs to be customized to your specific model. Define conservation of mass expressions for all monomers, and an expression you would like to solve for.
```python
# Define conservation of mass expressions (each equal to zero).
conservation = sympy.Matrix([
model.parameters["R_0"] - observables[0],
model.parameters["I_0"] - observables[1],
])
system = sympy.Matrix.vstack(odes)
# This is just R_BRAFmut_active_obs, but it could be any expression.
#R_active = sympy.Symbol('R_active')
#expression = sympy.Matrix([R_active - observables[2]])
system = sympy.Matrix.vstack(odes, conservation)
display(system)
```
$\displaystyle \left[\begin{matrix}f^{1 - \phi_{RR}} kr_{RR} s_{4} - kf_{RI} s_{0} s_{1} - 1.0 kf_{RR} s_{0}^{2} + kr_{RI} s_{3} + 2 kr_{RR} s_{2} - f^{- \phi_{RR}} kf_{RR} s_{0} s_{3}\\f^{1 - \phi_{RI}} kr_{RI} s_{4} + 2.0 g^{1 - \phi_{RI}} kr_{RI} s_{5} - kf_{RI} s_{0} s_{1} + kr_{RI} s_{3} - g^{- \phi_{RI}} kf_{RI} s_{1} s_{4} - 2.0 f^{- \phi_{RI}} kf_{RI} s_{1} s_{2}\\f^{1 - \phi_{RI}} kr_{RI} s_{4} + 0.5 kf_{RR} s_{0}^{2} - kr_{RR} s_{2} - 2.0 f^{- \phi_{RI}} kf_{RI} s_{1} s_{2}\\f^{1 - \phi_{RR}} kr_{RR} s_{4} + kf_{RI} s_{0} s_{1} - 1.0 kf_{RR} s_{3}^{2} \left(\frac{1}{f g}\right)^{\phi_{RR}} - kr_{RI} s_{3} + 2 kr_{RR} s_{5} \left(f g\right)^{1 - \phi_{RR}} - f^{- \phi_{RR}} kf_{RR} s_{0} s_{3}\\- f^{1 - \phi_{RI}} kr_{RI} s_{4} - f^{1 - \phi_{RR}} kr_{RR} s_{4} + 2.0 g^{1 - \phi_{RI}} kr_{RI} s_{5} - g^{- \phi_{RI}} kf_{RI} s_{1} s_{4} + f^{- \phi_{RR}} kf_{RR} s_{0} s_{3} + 2.0 f^{- \phi_{RI}} kf_{RI} s_{1} s_{2}\\- 2.0 g^{1 - \phi_{RI}} kr_{RI} s_{5} + 0.5 kf_{RR} s_{3}^{2} \left(\frac{1}{f g}\right)^{\phi_{RR}} - kr_{RR} s_{5} \left(f g\right)^{1 - \phi_{RR}} + g^{- \phi_{RI}} kf_{RI} s_{1} s_{4}\\R_{0} - s_{0} - 2 s_{2} - s_{3} - 2 s_{4} - 2 s_{5}\\I_{0} - s_{1} - s_{3} - s_{4} - 2 s_{5}\end{matrix}\right]$
Solve the combined system of the ODEs and conservation expressions for the list of symbols used in our desired expression. There may be multiple solutions.
```python
#solve the symbolic systems
#solutions = sympy.solve(system, list(species), force=True, manual=True)
#unfortunately the sympy solver seems unable to solve even this simple system
#working on implementing a new approach tailored to mass-action kinetic systems with specific characteristics
```
We then evaluate the expressions or observables as functions of the steady-state species concentrations ($s_x$) to calculate the amount of active RAF:
```python
# to be done
```
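One hedged workaround while the symbolic solution is being worked out: lambdify the steady-state system with numeric parameter values and hand it to a numerical root finder. The sketch below assumes the `model`, `system`, and `species` objects defined above, assumes all remaining free symbols are model parameters, and uses an arbitrary initial guess.
```python
import numpy as np
from scipy.optimize import least_squares

# substitute numeric parameter values (PySB parameters are sympy symbols)
param_subs = {p: p.value for p in model.parameters}
numeric_system = system.subs(param_subs)

# turn the symbolic residuals into a numeric function of the species concentrations
f = sympy.lambdify(list(species), numeric_system, modules="numpy")
residual = lambda x: np.asarray(f(*x), dtype=float).ravel()

x0 = np.full(len(species), 1.0)                        # arbitrary nonnegative initial guess
sol = least_squares(residual, x0, bounds=(0, np.inf))  # concentrations are nonnegative
print(dict(zip(list(species), sol.x)))
```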
| 4c62d9ffbba7d989d2a2ea3659a22ac7f22f668b | 8,429 | ipynb | Jupyter Notebook | symbolic_analysis_models.ipynb | himoto/rosa_webinar_20Jan2021 | d55a1a15100eaf837abca5e1454ef318c9947771 | [
"MIT"
] | 5 | 2021-01-20T17:45:44.000Z | 2021-09-23T10:58:33.000Z | symbolic_analysis_models.ipynb | himoto/rosa_webinar_20Jan2021 | d55a1a15100eaf837abca5e1454ef318c9947771 | [
"MIT"
] | null | null | null | symbolic_analysis_models.ipynb | himoto/rosa_webinar_20Jan2021 | d55a1a15100eaf837abca5e1454ef318c9947771 | [
"MIT"
] | 2 | 2021-09-23T19:04:50.000Z | 2021-11-06T05:39:50.000Z | 42.145 | 1,299 | 0.53482 | true | 1,427 | Qwen/Qwen-72B | 1. YES
2. YES | 0.83762 | 0.79053 | 0.662164 | __label__eng_Latn | 0.775081 | 0.37676 |