# Gaussian Process Distribution of Relaxation Times
## In this tutorial we will reproduce Figure 7 of the article https://doi.org/10.1016/j.electacta.2019.135316
GP-DRT is our newly developed approach that can be used to obtain both the mean and covariance of the DRT from EIS data by assuming that the DRT is a Gaussian process (GP). The GP-DRT can predict the DRT and the imaginary part of the impedance at frequencies that were not previously measured.
To obtain the DRT from the impedance, we assume that $\gamma(\xi)$ is a GP, where $f$ is the frequency and $\xi=\log f$. Under the DRT model, and considering that GPs are closed under linear transformations, it follows that $Z^{\rm DRT}_{\rm im}\left(\xi\right)$ is also a GP.
In other words, we can write
$$\begin{pmatrix}
\gamma(\xi) \\
Z^{\rm DRT}_{\rm im}\left(\xi\right)
\end{pmatrix}\sim \mathcal{GP}\left(\mathbf 0, \begin{pmatrix}
k(\xi, \xi^\prime) & \mathcal L^{\rm im}_{\xi^\prime} \left(k(\xi, \xi^\prime)\right)\\
\mathcal L^{\rm im}_{\xi} k(\xi, \xi^\prime) & \mathcal L^{\rm im}_{\xi^\prime}\left(\mathcal L^{\rm im}_{\xi} \left(k(\xi, \xi^\prime)\right)\right)
\end{pmatrix}\right)$$
where
$$\mathcal L^{\rm im}_\xi \left(\cdot\right) = -\displaystyle \int_{-\infty}^\infty \frac{2\pi \displaystyle e^{\xi-\hat \xi}}{1+\left(2\pi \displaystyle e^{\xi-\hat \xi}\right)^2} \left(\cdot\right) d \hat \xi$$
is a linear functional. This functional maps the DRT to the imaginary part of the impedance.
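As an aside (not part of the original article's code), this functional can be approximated numerically on a discrete grid of log-frequencies; the helper below is a minimal sketch using the trapezoidal rule, where the grid `xi_hat_vec` and the DRT samples `gamma_hat` are assumed to be supplied by the user.
```python
import numpy as np

def L_im_approx(xi, xi_hat_vec, gamma_hat):
    """Approximate the linear functional applied to a DRT sampled as gamma_hat(xi_hat_vec)."""
    x = 2. * np.pi * np.exp(xi - xi_hat_vec)
    integrand = -x / (1. + x**2) * gamma_hat
    return np.trapz(integrand, xi_hat_vec)
```
Evaluated on a sufficiently fine and wide grid, this should approximately reproduce $Z^{\rm DRT}_{\rm im}(\xi)$ for a given DRT.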
Assuming $N$ observations, we can set $\left(\mathbf Z^{\rm exp}_{\rm im}\right)_n = Z^{\rm exp}_{\rm im}(\xi_n)$ with $\xi_n =\log f_n$ and $n =1, 2, \ldots N $. The corresponding multivariate Gaussian random variable can be written as
$$\begin{pmatrix}
\boldsymbol{\gamma} \\
\mathbf Z^{\rm exp}_{\rm im}
\end{pmatrix}\sim \mathcal{N}\left(\mathbf 0, \begin{pmatrix}
\mathbf K & \mathcal L_{\rm im} \mathbf K\\
\mathcal L_{\rm im}^\sharp \mathbf K & \mathcal L^2_{\rm im} \mathbf K + \sigma_n^2 \mathbf I
\end{pmatrix}\right)$$
where
$$\begin{align}
(\mathbf K)_{nm} &= k(\xi_n, \xi_m)\\
(\mathcal L_{\rm im} \mathbf K)_{nm} &= \left. \mathcal L^{\rm im}_{\xi^\prime} \left(k(\xi, \xi^\prime)\right) \right |_{\xi_n, \xi_m}\\
(\mathcal L_{\rm im}^\sharp \mathbf K)_{nm} &= \left.\mathcal L^{\rm im}_{\xi} \left(k(\xi, \xi^\prime)\right) \right|_{\xi_n, \xi_m}\\
(\mathcal L^2_{\rm im} \mathbf K)_{nm} &= \left.\mathcal L^{\rm im}_{\xi^\prime}\left(\mathcal L^{\rm im}_{\xi} \left(k(\xi, \xi^\prime)\right)\right) \right|_{\xi_n, \xi_m}
\end{align}$$
and $\left(\mathcal L_{\rm im} \mathbf K\right)^\top = \mathcal L_{\rm im}^\sharp \mathbf K$.
To obtain the DRT from impedance, the distribution of $\mathbf{\gamma}$ conditioned on $\mathbf Z^{\rm exp}_{\rm im}$ can be written as
$$\boldsymbol{\gamma}|\mathbf Z^{\rm exp}_{\rm im}\sim \mathcal N\left( \mathbf \mu_{\gamma|Z^{\rm exp}_{\rm im}}, \mathbf\Sigma_{\gamma| Z^{\rm exp}_{\rm im}}\right)$$
with
$$\begin{align}
\mathbf \mu_{\gamma|Z^{\rm exp}_{\rm im}} &= \mathcal L_{\rm im} \mathbf K \left(\mathcal L^2_{\rm im} \mathbf K + \sigma_n^2 \mathbf I \right)^{-1} \mathbf Z^{\rm exp}_{\rm im} \\
\mathbf \Sigma_{\gamma| Z^{\rm exp}_{\rm im}} &= \mathbf K- \mathcal L_{\rm im} \mathbf K \left(\mathcal L^2_{\rm im} \mathbf K + \sigma_n^2 \mathbf I \right)^{-1}\mathcal L_{\rm im} \mathbf K^\top
\end{align}$$
The above formulas depend on 1) the kernel, $k(\xi, \xi^\prime)$; 2) the noise level, $\sigma_n$; and 3) the experimental data, $\mathbf Z^{\rm exp}_{\rm im}$ (at the log-frequencies $\mathbf \xi$).
```python
import numpy as np
import matplotlib.pyplot as plt
from math import sin, cos, pi
import GP_DRT
from scipy.optimize import minimize
%matplotlib inline
```
## 1) Define parameters of the ZARC circuit which will be used for the synthetic experiment generation
The impedance of a ZARC can be written as
$$
Z^{\rm exact}(f) = R_\infty + \displaystyle \frac{1}{\displaystyle \frac{1}{R_{\rm ct}}+C \left(i 2\pi f\right)^\phi}
$$
where $\displaystyle C = \frac{\tau_0^\phi}{R_{\rm ct}}$.
The corresponding DRT is given by
$$
\gamma(\log \tau) = \displaystyle \frac{\displaystyle R_{\rm ct}}{\displaystyle 2\pi} \displaystyle \frac{\displaystyle \sin\left((1-\phi)\pi\right)}{\displaystyle \cosh(\phi \log(\tau/\tau_0))-\cos(\pi(1-\phi))}
$$
```python
# define the frequency range
N_freqs = 81
freq_vec = np.logspace(-4., 4., num=N_freqs, endpoint=True)
xi_vec = np.log(freq_vec)
tau = 1/freq_vec
# define the frequency range used for prediction
# note: we could have used other values
freq_vec_star = np.logspace(-4., 4., num=81, endpoint=True)
xi_vec_star = np.log(freq_vec_star)
# parameters for ZARC model, the impedance and analytical DRT are calculated as the above equations
R_inf = 10
R_ct = 50
phi = 0.8
tau_0 = 1.
C = tau_0**phi/R_ct
Z_exact = R_inf+1./(1./R_ct+C*(1j*2.*pi*freq_vec)**phi)
gamma_fct = (R_ct)/(2.*pi)*sin((1.-phi)*pi)/(np.cosh(phi*np.log(tau/tau_0))-cos((1.-phi)*pi))
# we will use a finer mesh for plotting the results
freq_vec_plot = np.logspace(-4., 4., num=10*(N_freqs-1), endpoint=True)
tau_plot = 1/freq_vec_plot
# for plotting only
gamma_fct_plot = (R_ct)/(2.*pi)*sin((1.-phi)*pi)/(np.cosh(phi*np.log(tau_plot/tau_0))-cos((1.-phi)*pi))
# we will add noise to the impedance computed analytically
np.random.seed(214975)  # fix the random seed for reproducibility
sigma_n_exp = 1.
Z_exp = Z_exact + sigma_n_exp*(np.random.normal(0, 1, N_freqs)+1j*np.random.normal(0, 1, N_freqs))
```
## 2) Show the synthetic impedance in the Nyquist plot - this is similar to Figure 7 (a)
```python
plt.rc('text', usetex=True)
plt.rc('font', family='serif', size=15)
plt.rc('xtick', labelsize=15)
plt.rc('ytick', labelsize=15)
# Nyquist plot of the impedance
plt.plot(np.real(Z_exact), -np.imag(Z_exact), linewidth=4, color="black", label="exact")
plt.plot(np.real(Z_exp), -np.imag(Z_exp), "o", markersize=10, color="red", label="synth exp")
plt.plot(np.real(Z_exp[20:60:10]), -np.imag(Z_exp[20:60:10]), 's', markersize=10, color="black")
plt.legend(frameon=False, fontsize = 15)
plt.axis('scaled')
plt.xticks(range(10, 70, 10))
plt.yticks(range(0, 60, 10))
plt.gca().set_aspect('equal', adjustable='box')
plt.xlabel(r'$Z_{\rm re}/\Omega$', fontsize = 20)
plt.ylabel(r'$-Z_{\rm im}/\Omega$', fontsize = 20)
# label the frequency points
plt.annotate(r'$10^{-2}$', xy=(np.real(Z_exp[20]), -np.imag(Z_exp[20])),
xytext=(np.real(Z_exp[20])-2, 10-np.imag(Z_exp[20])),
arrowprops=dict(arrowstyle="-",connectionstyle="arc"))
plt.annotate(r'$10^{-1}$', xy=(np.real(Z_exp[30]), -np.imag(Z_exp[30])),
xytext=(np.real(Z_exp[30])-2, 6-np.imag(Z_exp[30])),
arrowprops=dict(arrowstyle="-",connectionstyle="arc"))
plt.annotate(r'$1$', xy=(np.real(Z_exp[40]), -np.imag(Z_exp[40])),
xytext=(np.real(Z_exp[40]), 10-np.imag(Z_exp[40])),
arrowprops=dict(arrowstyle="-",connectionstyle="arc"))
plt.annotate(r'$10$', xy=(np.real(Z_exp[50]), -np.imag(Z_exp[50])),
xytext=(np.real(Z_exp[50])-1, 10-np.imag(Z_exp[50])),
arrowprops=dict(arrowstyle="-",connectionstyle="arc"))
plt.show()
```
## 3) Obtain the optimal hyperparameters of the GP-DRT model by minimizing the negative marginal log-likelihood (NMLL)
We constrain the kernel to be a squared exponential, _i.e._,
$$
k(\xi, \xi^\prime) = \sigma_f^2 \exp\left(-\frac{1}{2 \ell^2}\left(\xi-\xi^\prime\right)^2 \right)
$$
and optimize its two parameters, $\sigma_f$ and $\ell$, as well as the noise level $\sigma_n$. Therefore, the vector of GP-DRT hyperparameters is $\boldsymbol \theta = \begin{pmatrix} \sigma_n, \sigma_f, \ell \end{pmatrix}^\top$.
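For reference, the squared-exponential kernel itself is simple to evaluate; the snippet below is a minimal sketch of ours (the notebook relies on the `GP_DRT` module to build the actual kernel matrices):
```python
def sq_exp_kernel(xi, xi_prime, sigma_f, ell):
    """Squared-exponential kernel matrix between two vectors of log-frequencies."""
    return sigma_f**2 * np.exp(-0.5 * ((xi[:, None] - xi_prime[None, :]) / ell)**2)
```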
Following the article, we can write that
$$
\log p(\mathbf Z^{\rm exp}_{\rm im}|\boldsymbol \xi, \boldsymbol \theta)= - \frac{1}{2} {\mathbf Z^{\rm exp}_{\rm im}}^\top \left(\mathcal L^2_{\rm im} \mathbf K +\sigma_n^2\mathbf I \right)^{-1} \mathbf Z^{\rm exp}_{\rm im} -\frac{1}{2} \log \left| \mathcal L^2_{\rm im} \mathbf K+\sigma_n^2\mathbf I \right| - \frac{N}{2} \log 2\pi
$$
We will call $L(\boldsymbol \theta)$ the negative (and shifted) MLL (NMLL):
$$
L(\boldsymbol \theta) = - \log p(\mathbf Z^{\rm exp}_{\rm im}|\boldsymbol \xi, \boldsymbol \theta) - \frac{N}{2} \log 2\pi
$$
The experimental evidence is maximized at
$$
\boldsymbol \theta = \arg \min_{\boldsymbol \theta^\prime}L(\boldsymbol \theta^\prime)
$$
The above minimization problem is solved using the `minimize` function from `scipy.optimize`.
```python
# initialize the parameters for the minimization of the NMLL, see (31) in the manuscript
sigma_n = sigma_n_exp
sigma_f = 5.
ell = 1.
theta_0 = np.array([sigma_n, sigma_f, ell])
seq_theta = np.copy(theta_0)
def print_results(theta):
    global seq_theta
    seq_theta = np.vstack((seq_theta, theta))
    print('{0:.7f} {1:.7f} {2:.7f}'.format(theta[0], theta[1], theta[2]))
print('sigma_n, sigma_f, ell')
# minimize the NMLL L(\theta) w.r.t sigma_n, sigma_f, ell using the Newton-CG method as implemented in scipy
res = minimize(GP_DRT.NMLL_fct, theta_0, args=(Z_exp, xi_vec), method='Newton-CG', \
jac=GP_DRT.grad_NMLL_fct, callback=print_results, options={'disp': True})
# collect the optimized parameters
sigma_n, sigma_f, ell = res.x
```
sigma_n, sigma_f, ell
0.8903599 5.0014151 1.0120875
0.8136354 5.0035337 1.0291912
0.8291863 5.0357867 1.2588673
0.8303934 5.0832372 1.2117784
0.8304464 5.2060761 1.2283664
0.8305219 5.3874435 1.2524151
0.8305286 5.4068909 1.2546651
0.8305276 5.4070863 1.2546870
0.8305265 5.4070865 1.2546866
Optimization terminated successfully.
Current function value: 53.657989
Iterations: 9
Function evaluations: 11
Gradient evaluations: 54
Hessian evaluations: 0
## 4) Core of the GP-DRT
### 4a) Compute matrices
Once we have identified the optimized parameters we can compute $\mathbf K$, $\mathcal L_{\rm im} \mathbf K$, and $\mathcal L^2_{\rm im} \mathbf K$, which are given in equation (18) in the article
```python
K = GP_DRT.matrix_K(xi_vec, xi_vec, sigma_f, ell)
L_im_K = GP_DRT.matrix_L_im_K(xi_vec, xi_vec, sigma_f, ell)
L2_im_K = GP_DRT.matrix_L2_im_K(xi_vec, xi_vec, sigma_f, ell)
Sigma = (sigma_n**2)*np.eye(N_freqs)
```
### 4b) Factorize the matrices and solve the linear equations
We are computing
$$
\boldsymbol{\gamma}|\mathbf Z^{\rm exp}_{\rm im}\sim \mathcal N\left( \boldsymbol \mu_{\gamma|Z^{\rm exp}_{\rm im}}, \boldsymbol \Sigma_{\gamma| Z^{\rm exp}_{\rm im}}\right)
$$
using
$$
\begin{align}
\boldsymbol \mu_{\gamma|Z^{\rm exp}_{\rm im}} &= \mathcal L_{\rm im} \mathbf K\left(\mathcal L^2_{\rm im} \mathbf K+\sigma_n^2\mathbf I\right)^{-1}\mathbf Z^{\rm exp}_{\rm im} \\
\boldsymbol \Sigma_{\gamma| Z^{\rm exp}_{\rm im}} &= \mathbf K-\mathcal L_{\rm im} \mathbf K\left(\mathcal L^2_{\rm im} \mathbf K+\sigma_n^2\mathbf I\right)^{-1}\mathcal L_{\rm im} \mathbf K^\top
\end{align}
$$
The key step is the Cholesky factorization of $\mathcal L^2_{\rm im} \mathbf K+\sigma_n^2\mathbf I$, _i.e._, the matrix `K_im_full` in the code below.
```python
# the matrix $\mathcal L^2_{\rm im} \mathbf K + \sigma_n^2 \mathbf I$ whose inverse is needed
K_im_full = L2_im_K + Sigma
# check that K_im_full is positive definite; otherwise, replace it with the nearest positive definite matrix
if not GP_DRT.is_PD(K_im_full):
    K_im_full = GP_DRT.nearest_PD(K_im_full)
# Cholesky factorization, L is a lower-triangular matrix
L = np.linalg.cholesky(K_im_full)
# solve for alpha
alpha = np.linalg.solve(L, Z_exp.imag)
alpha = np.linalg.solve(L.T, alpha)
# estimate the gamma of eq (21a)
gamma_fct_est = np.dot(L_im_K, alpha)
# covariance matrix
inv_L = np.linalg.inv(L)
inv_K_im_full = np.dot(inv_L.T, inv_L)
# estimate the sigma of gamma for eq (21b)
cov_gamma_fct_est = K - np.dot(L_im_K, np.dot(inv_K_im_full, L_im_K.T))
sigma_gamma_fct_est = np.sqrt(np.diag(cov_gamma_fct_est))
```
### 4c) Plot the obtained DRT against the analytical DRT
```python
# plot the DRT and its confidence region
plt.semilogx(freq_vec_plot, gamma_fct_plot, linewidth=4, color="black", label="exact")
plt.semilogx(freq_vec, gamma_fct_est, linewidth=4, color="red", label="GP-DRT")
plt.fill_between(freq_vec, gamma_fct_est-3*sigma_gamma_fct_est, gamma_fct_est+3*sigma_gamma_fct_est, color="0.4", alpha=0.3)
plt.rc('text', usetex=True)
plt.rc('font', family='serif', size=15)
plt.rc('xtick', labelsize=15)
plt.rc('ytick', labelsize=15)
plt.axis([1E-4,1E4,-5,25])
plt.legend(frameon=False, fontsize = 15)
plt.xlabel(r'$f/{\rm Hz}$', fontsize = 20)
plt.ylabel(r'$\gamma/\Omega$', fontsize = 20)
plt.show()
```
### 4d) Predict the $\gamma$ and the imaginary part of the GP-DRT impedance
#### This part is explained in Section 2.3.3 of the main article
```python
# initialize the imaginary part of impedance vector
Z_im_vec_star = np.empty_like(xi_vec_star)
Sigma_Z_im_vec_star = np.empty_like(xi_vec_star)
gamma_vec_star = np.empty_like(xi_vec_star)
Sigma_gamma_vec_star = np.empty_like(xi_vec_star)
# calculate the imaginary part of impedance at each $\xi$ point for the plot
for index, val in enumerate(xi_vec_star):
    xi_star = np.array([val])
    # compute matrices shown in eq (23), xi_star corresponds to a new point
    k_star = GP_DRT.matrix_K(xi_vec, xi_star, sigma_f, ell)
    L_im_k_star_up = GP_DRT.matrix_L_im_K(xi_star, xi_vec, sigma_f, ell)
    L2_im_k_star = GP_DRT.matrix_L2_im_K(xi_vec, xi_star, sigma_f, ell)
    k_star_star = GP_DRT.matrix_K(xi_star, xi_star, sigma_f, ell)
    L_im_k_star_star = GP_DRT.matrix_L_im_K(xi_star, xi_star, sigma_f, ell)
    L2_im_k_star_star = GP_DRT.matrix_L2_im_K(xi_star, xi_star, sigma_f, ell)
    # compute Z_im_star mean and standard deviation using eq (26)
    Z_im_vec_star[index] = np.dot(L2_im_k_star.T, np.dot(inv_K_im_full, Z_exp.imag))
    Sigma_Z_im_vec_star[index] = L2_im_k_star_star - np.dot(L2_im_k_star.T, np.dot(inv_K_im_full, L2_im_k_star))
    # compute gamma_star mean and standard deviation using eq (29)
    gamma_vec_star[index] = np.dot(L_im_k_star_up, np.dot(inv_K_im_full, Z_exp.imag))
    Sigma_gamma_vec_star[index] = k_star_star - np.dot(L_im_k_star_up, np.dot(inv_K_im_full, L_im_k_star_up.T))
```
### 4e) Plot the imaginary part of the GP-DRT impedance together with the exact one and the synthetic experiment
```python
plt.semilogx(freq_vec_star, -np.imag(Z_exact), ":", linewidth=4, color="blue", label="exact")
plt.semilogx(freq_vec, -Z_exp.imag, "o", markersize=10, color="black", label="synth exp")
plt.semilogx(freq_vec_star, -Z_im_vec_star, linewidth=4, color="red", label="GP-DRT")
plt.fill_between(freq_vec_star, -Z_im_vec_star-3*np.sqrt(abs(Sigma_Z_im_vec_star)), -Z_im_vec_star+3*np.sqrt(abs(Sigma_Z_im_vec_star)), alpha=0.3)
plt.rc('text', usetex=True)
plt.rc('font', family='serif', size=15)
plt.rc('xtick', labelsize=15)
plt.rc('ytick', labelsize=15)
plt.axis([1E-4,1E4,-5,25])
plt.legend(frameon=False, fontsize = 15)
plt.xlabel(r'$f/{\rm Hz}$', fontsize = 20)
plt.ylabel(r'$-Z_{\rm im}/\Omega$', fontsize = 20)
plt.show()
```
# Linear Algebra with Python
## Introduction
One of the mathematical tools most heavily used in [machine learning](http://es.wikipedia.org/wiki/Machine_learning) and [data mining](http://es.wikipedia.org/wiki/Miner%C3%ADa_de_datos) is [linear algebra](http://es.wikipedia.org/wiki/%C3%81lgebra_lineal); therefore, if we want to venture into the fascinating world of machine learning and data analysis, it is important to reinforce the concepts that form its foundations.
[Linear algebra](http://es.wikipedia.org/wiki/%C3%81lgebra_lineal) is a branch of [mathematics](http://es.wikipedia.org/wiki/Matem%C3%A1ticas) that is widely used in a great variety of disciplines, such as engineering, finance and operations research, among others. It is an extension of the [algebra](http://es.wikipedia.org/wiki/%C3%81lgebra) we learn in high school to a larger number of dimensions; instead of working with unknowns at the level of <a href="http://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">scalars</a>, we start working with <a href="http://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrices</a> and [vectors](http://es.wikipedia.org/wiki/Vector).
The study of [linear algebra](http://es.wikipedia.org/wiki/%C3%81lgebra_lineal) involves working with several mathematical objects, such as:
* **<a href="http://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">Scalars</a>**: A *scalar* is a single number, in contrast with most of the other objects studied in linear algebra, which are usually collections of multiple numbers.
* **[Vectors](http://es.wikipedia.org/wiki/Vector)**: A *vector* is a sequence of numbers. The numbers have a preset order, and we can identify each individual number by its index in that order. We can think of *vectors* as identifying points in space, with each element giving the coordinate along a different axis. There are two kinds of *vectors*, *row vectors* and *column vectors*. We can represent them as follows, where *f* is a row vector and *c* is a column vector:
$$f=\begin{bmatrix}0&1&-1\end{bmatrix} ; c=\begin{bmatrix}0\\1\\-1\end{bmatrix}$$
* **<a href="http://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">Matrices</a>**: A *matrix* is a two-dimensional array of numbers (called the entries of the matrix) arranged in rows and columns, where a row is each of the horizontal lines of the matrix and a column is each of the vertical lines. In a *matrix*, each element can be identified using two indices, one for the row and one for the column in which it is located. We can represent them as follows, where *A* is a 3x2 matrix.
$$A=\begin{bmatrix}0 & 1& \\-1 & 2 \\ -2 & 3\end{bmatrix}$$
* **[Tensors](http://es.wikipedia.org/wiki/C%C3%A1lculo_tensorial)**: In some cases we will need an array with more than two axes. In general, an array of numbers arranged on a regular grid with a variable number of axes is known as a *tensor*.
On these objects we can perform the basic mathematical operations, such as [addition](http://es.wikipedia.org/wiki/Adici%C3%B3n), [multiplication](http://es.wikipedia.org/wiki/Multiplicaci%C3%B3n), [subtraction](http://es.wikipedia.org/wiki/Sustracci%C3%B3n) and <a href="http://es.wikipedia.org/wiki/Divisi%C3%B3n_(matem%C3%A1tica)">division</a>; that is, we will be able to add [vectors](http://es.wikipedia.org/wiki/Vector) and <a href="http://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrices</a>, multiply <a href="http://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">scalars</a> by [vectors](http://es.wikipedia.org/wiki/Vector), and so on.
## Python libraries for linear algebra
The main modules that [Python](http://python.org/) offers for performing [linear algebra](http://es.wikipedia.org/wiki/%C3%81lgebra_lineal) operations are the following:
* **[Numpy](http://www.numpy.org/)**: The popular mathematical package for [Python](http://python.org/); it lets us create *[vectors](http://es.wikipedia.org/wiki/Vector)*, *<a href="http://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrices</a>* and *[tensors](http://es.wikipedia.org/wiki/C%C3%A1lculo_tensorial)* with great ease.
* **[numpy.linalg](http://docs.scipy.org/doc/numpy/reference/routines.linalg.html)**: A submodule of [Numpy](http://www.numpy.org/) with a large number of functions for solving [linear algebra](http://es.wikipedia.org/wiki/%C3%81lgebra_lineal) problems.
* **[scipy.linalg](http://docs.scipy.org/doc/scipy/reference/tutorial/linalg.html)**: This submodule of the scientific package [Scipy](http://docs.scipy.org/doc/scipy/reference/index.html) is very similar to the previous one, but with a few more functions and optimizations.
* **[Sympy](http://www.sympy.org/es/)**: This library lets us work with symbolic mathematics; it turns [Python](http://python.org/) into a [computer algebra system](http://es.wikipedia.org/wiki/Sistema_algebraico_computacional). It allows us to work with equations and formulas symbolically instead of numerically.
* **[CVXOPT](http://cvxopt.org/)**: This module lets us solve [linear programming](http://es.wikipedia.org/wiki/Programaci%C3%B3n_lineal) optimization problems.
* **[PuLP](http://pythonhosted.org//PuLP/)**: This library lets us build [linear programming](http://es.wikipedia.org/wiki/Programaci%C3%B3n_lineal) models very easily with [Python](http://python.org/).
## Basic operations
### Vectors
A *[vector](http://es.wikipedia.org/wiki/Vector)* of length `n` is a sequence (or *array*, or *tuple*) of `n` numbers. We usually write it as x=(x1,...,xn) or x=[x1,...,xn].
In [Python](http://python.org/), a *[vector](http://es.wikipedia.org/wiki/Vector)* can be represented with a plain *list* or with a [Numpy](http://www.numpy.org/) *array*; the latter option is preferable.
```python
# Vector as a Python list
v1 = [2, 4, 6]
v1
```
[2, 4, 6]
```python
# Vectors with numpy
import numpy as np
v2 = np.ones(3) # vector of all ones
v2
```
array([1., 1., 1.])
```python
v3 = np.array([1, 3, 5]) # passing a list to a numpy array
v3
```
array([1, 3, 5])
```python
v4 = np.arange(1, 8) # using numpy's arange function
v4
```
array([1, 2, 3, 4, 5, 6, 7])
### Graphical representation
Traditionally, *[vectors](http://es.wikipedia.org/wiki/Vector)* are represented visually as arrows that start at the origin and point towards a given point.
For example, to plot the vectors v1=[2, 4], v2=[-3, 3] and v3=[-4, -3.5], we could proceed as follows.
```python
import matplotlib.pyplot as plt
from warnings import filterwarnings
%matplotlib inline
filterwarnings('ignore') # ignore warnings
```
```python
def move_spines():
    """Create the pyplot figure and axes. Move the left and bottom spines
    so that they intersect the origin. Remove the right and top spines.
    Return the axes."""
    fix, ax = plt.subplots()
    for spine in ["left", "bottom"]:
        ax.spines[spine].set_position("zero")
    for spine in ["right", "top"]:
        ax.spines[spine].set_color("none")
    return ax

def vect_fig():
    """Plot the vectors in the plane."""
    ax = move_spines()
    ax.set_xlim(-5, 5)
    ax.set_ylim(-5, 5)
    ax.grid()
    vecs = [[2, 4], [-3, 3], [-4, -3.5]]  # list of vectors
    for v in vecs:
        ax.annotate(" ", xy=v, xytext=[0, 0],
                    arrowprops=dict(facecolor="blue",
                                    shrink=0,
                                    alpha=0.7,
                                    width=0.5))
        ax.text(1.1 * v[0], 1.1 * v[1], v)
```
```python
vect_fig() # create the plot
```
### Operations with vectors
The most common operations when working with *[vectors](http://es.wikipedia.org/wiki/Vector)* are *addition*, *subtraction* and *multiplication by <a href="http://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">scalars</a>*.
When we *add* two *[vectors](http://es.wikipedia.org/wiki/Vector)*, we add them element by element.
$$ \begin{split}x + y
=
\left[
\begin{array}{c}
x_1 \\
x_2 \\
\vdots \\
x_n
\end{array}
\right]
+
\left[
\begin{array}{c}
y_1 \\
y_2 \\
\vdots \\
y_n
\end{array}
\right]
:=
\left[
\begin{array}{c}
x_1 + y_1 \\
x_2 + y_2 \\
\vdots \\
x_n + y_n
\end{array}
\right]\end{split}$$
Subtraction works in a similar way.
$$ \begin{split}x - y
=
\left[
\begin{array}{c}
x_1 \\
x_2 \\
\vdots \\
x_n
\end{array}
\right]
-
\left[
\begin{array}{c}
y_1 \\
y_2 \\
\vdots \\
y_n
\end{array}
\right]
:=
\left[
\begin{array}{c}
x_1 - y_1 \\
x_2 - y_2 \\
\vdots \\
x_n - y_n
\end{array}
\right]\end{split}$$
*Multiplication by <a href="http://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">scalars</a>* is an operation that takes a number $\gamma$ and a *[vector](http://es.wikipedia.org/wiki/Vector)* $x$ and produces a new *[vector](http://es.wikipedia.org/wiki/Vector)* in which each element of $x$ is multiplied by the number $\gamma$.
$$\begin{split}\gamma x
:=
\left[
\begin{array}{c}
\gamma x_1 \\
\gamma x_2 \\
\vdots \\
\gamma x_n
\end{array}
\right]\end{split}$$
In [Python](http://python.org/) we can perform these operations very easily:
```python
# Ejemplo en Python
x = np.arange(1, 5)
y = np.array([2, 4, 6, 8])
x, y
```
(array([1, 2, 3, 4]), array([2, 4, 6, 8]))
```python
# adding two numpy vectors
x + y
```
array([ 3, 6, 9, 12])
```python
# subtracting two vectors
x - y
```
array([-1, -2, -3, -4])
```python
# multiplying by a scalar
x * 2
```
array([2, 4, 6, 8])
```python
y * 3
```
array([ 6, 12, 18, 24])
#### Dot (inner) product
The [dot product](https://es.wikipedia.org/wiki/Producto_escalar) of two *[vectors](http://es.wikipedia.org/wiki/Vector)* is defined as the sum of the products of their elements; it is usually written mathematically as < x, y > or x'y, where x and y are two vectors.
$$< x, y > := \sum_{i=1}^n x_i y_i$$
Two *[vectors](http://es.wikipedia.org/wiki/Vector)* are <a href="https://es.wikipedia.org/wiki/Ortogonalidad_(matem%C3%A1ticas)">orthogonal</a>, or perpendicular, when they form a right angle with each other. If the dot product of two vectors is zero, the two vectors are <a href="https://es.wikipedia.org/wiki/Ortogonalidad_(matem%C3%A1ticas)">orthogonal</a>.
Additionally, every [dot product](https://es.wikipedia.org/wiki/Producto_escalar) induces a [norm](https://es.wikipedia.org/wiki/Norma_vectorial) on the space in which it is defined, as follows:
$$\| x \| := \sqrt{< x, x>} := \left( \sum_{i=1}^n x_i^2 \right)^{1/2}$$
In [Python](http://python.org/) we can compute it as follows:
```python
# Computing the dot product of the vectors x and y
np.dot(x, y)
```
60
```python
# or, equivalently:
sum(x * y)
```
60
```python
# Computing the norm of the vector x
np.linalg.norm(x)
```
5.477225575051661
```python
# another way to compute the norm of x
np.sqrt(np.dot(x, x))
```
5.477225575051661
```python
# orthogonal vectors
v1 = np.array([3, 4])
v2 = np.array([4, -3])
np.dot(v1, v2)
```
0
### Matrices
<a href="http://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">Matrices</a> are a clear and simple way of organizing data for use in linear operations.
An `n × k` <a href="http://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrix</a> is a rectangular arrangement of numbers with n rows and k columns; it is represented as follows:
$$\begin{split}A =
\left[
\begin{array}{cccc}
a_{11} & a_{12} & \cdots & a_{1k} \\
a_{21} & a_{22} & \cdots & a_{2k} \\
\vdots & \vdots & & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nk}
\end{array}
\right]\end{split}$$
In the <a href="http://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrix</a> A, the symbol $a_{nk}$ represents the element in the n-th row and the k-th column. The matrix A can also be called a *[vector](http://es.wikipedia.org/wiki/Vector)* if either n or k equals 1. In the case n=1, A is called a *row [vector](http://es.wikipedia.org/wiki/Vector)*, while in the case k=1 it is called a *column [vector](http://es.wikipedia.org/wiki/Vector)*.
<a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">Matrices</a> are used in many applications and serve, in particular, to represent the coefficients of systems of linear equations or to represent linear transformations with respect to a given basis. They can be added, multiplied and decomposed in several ways.
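To make the indexing concrete, here is a small example (the matrix values are arbitrary, chosen just for illustration); note that NumPy uses 0-based indices, so the entry written $a_{23}$ in mathematical notation corresponds to `A[1, 2]` in code.
```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])   # a 2x3 matrix (n=2 rows, k=3 columns)

A.shape    # (2, 3)
A[0, 0]    # a_11 -> 1
A[1, 2]    # a_23 -> 6 (row and column indices start at 0 in NumPy)
```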
### Operations with matrices
Just like *[vectors](http://es.wikipedia.org/wiki/Vector)*, which are nothing more than a particular case, <a href="http://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrices</a> can be *added*, *subtracted* and *multiplied by <a href="http://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">scalars</a>*.
Multiplication by scalars:
$$\begin{split}\gamma A
=
\gamma
\left[
\begin{array}{ccc}
a_{11} & \cdots & a_{1k} \\
\vdots & \vdots & \vdots \\
a_{n1} & \cdots & a_{nk} \\
\end{array}
\right]
:=
\left[
\begin{array}{ccc}
\gamma a_{11} & \cdots & \gamma a_{1k} \\
\vdots & \vdots & \vdots \\
\gamma a_{n1} & \cdots & \gamma a_{nk} \\
\end{array}
\right]\end{split}$$
Matrix addition:
$$\begin{split}A + B =
\left[
\begin{array}{ccc}
a_{11} & \cdots & a_{1k} \\
\vdots & \vdots & \vdots \\
a_{n1} & \cdots & a_{nk} \\
\end{array}
\right]
+
\left[
\begin{array}{ccc}
b_{11} & \cdots & b_{1k} \\
\vdots & \vdots & \vdots \\
b_{n1} & \cdots & b_{nk} \\
\end{array}
\right]
:=
\left[
\begin{array}{ccc}
a_{11} + b_{11} & \cdots & a_{1k} + b_{1k} \\
\vdots & \vdots & \vdots \\
a_{n1} + b_{n1} & \cdots & a_{nk} + b_{nk} \\
\end{array}
\right]\end{split}$$
Matrix subtraction:
$$\begin{split}A - B =
\left[
\begin{array}{ccc}
a_{11} & \cdots & a_{1k} \\
\vdots & \vdots & \vdots \\
a_{n1} & \cdots & a_{nk} \\
\end{array}
\right]-
\left[
\begin{array}{ccc}
b_{11} & \cdots & b_{1k} \\
\vdots & \vdots & \vdots \\
b_{n1} & \cdots & b_{nk} \\
\end{array}
\right]
:=
\left[
\begin{array}{ccc}
a_{11} - b_{11} & \cdots & a_{1k} - b_{1k} \\
\vdots & \vdots & \vdots \\
a_{n1} - b_{n1} & \cdots & a_{nk} - b_{nk} \\
\end{array}
\right]\end{split}$$
For addition and subtraction, keep in mind that we can only add or subtract <a href="http://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrices</a> with the same dimensions; that is, if I have a 3x2 <a href="http://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrix</a> A (3 rows and 2 columns), I can only add or subtract a <a href="http://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrix</a> B if it also has 3 rows and 2 columns.
```python
# Example in Python
A = np.array([[1, 3, 2],
[1, 0, 0],
[1, 2, 2]])
B = np.array([[1, 0, 5],
[7, 5, 0],
[2, 1, 1]])
```
```python
# adding the matrices A and B
A + B
```
array([[2, 3, 7],
[8, 5, 0],
[3, 3, 3]])
```python
# subtracting matrices
A - B
```
array([[ 0, 3, -3],
[-6, -5, 0],
[-1, 1, 1]])
```python
# multiplying matrices by scalars
A * 2
```
array([[2, 6, 4],
[2, 0, 0],
[2, 4, 4]])
```python
B * 3
```
array([[ 3, 0, 15],
[21, 15, 0],
[ 6, 3, 3]])
```python
# get the dimensions of a matrix
A.shape
```
(3, 3)
```python
# get the number of elements of a matrix
A.size
```
9
#### Multiplication (product) of matrices
The rule for [matrix multiplication](https://es.wikipedia.org/wiki/Multiplicaci%C3%B3n_de_matrices) generalizes the idea of the [inner product](https://es.wikipedia.org/wiki/Producto_escalar) that we saw with [vectors](http://es.wikipedia.org/wiki/Vector), and it is designed to make the basic linear operations easy to express.
When we [multiply matrices](https://es.wikipedia.org/wiki/Multiplicaci%C3%B3n_de_matrices), the number of columns of the first <a href="http://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrix</a> must equal the number of rows of the second <a href="http://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrix</a>; the result of the multiplication has the same number of rows as the first matrix and the same number of columns as the second. That is, if I have a 3x4 matrix A and multiply it by a 4x2 matrix B, the result is a 3x2 matrix C.
Something to keep in mind when [multiplying matrices](https://es.wikipedia.org/wiki/Multiplicaci%C3%B3n_de_matrices) is that the [commutative property](https://es.wikipedia.org/wiki/Conmutatividad) does not hold: AxB is not the same as BxA.
Let's look at the examples in [Python](http://python.org/).
```python
# Example: matrix multiplication
A = np.arange(1, 13).reshape(3, 4) # 3x4 matrix
A
```
array([[ 1, 2, 3, 4],
[ 5, 6, 7, 8],
[ 9, 10, 11, 12]])
```python
B = np.arange(8).reshape(4,2) # 4x2 matrix
B
```
array([[0, 1],
[2, 3],
[4, 5],
[6, 7]])
```python
# Multiplying A x B
A.dot(B) # results in a 3x2 matrix
```
array([[ 40, 50],
[ 88, 114],
[136, 178]])
```python
# Multiplying B x A (this raises an error)
B.dot(A)
```
In this last example we see that the commutative property does not hold; moreover, [Python](http://python.org/) raises an error, since the number of columns of B does not match the number of rows of A, so the multiplication B x A cannot even be carried out.
For a more detailed explanation of the [matrix multiplication](https://es.wikipedia.org/wiki/Multiplicaci%C3%B3n_de_matrices) procedure, you can consult the following [tutorial](http://www.mathsisfun.com/algebra/matrix-multiplying.html).
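As a quick sanity check before multiplying (a small illustrative sketch of ours, using the matrices A and B defined in the cells above), we can compare the shapes explicitly:
```python
# A is 3x4 and B is 4x2 (from the cells above)
print(A.shape, B.shape)          # (3, 4) (4, 2)
print(A.shape[1] == B.shape[0])  # True  -> A.dot(B) is defined
print(B.shape[1] == A.shape[0])  # False -> B.dot(A) raises a ValueError
```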
#### The identity matrix, the inverse matrix, the transpose and the determinant
The [identity matrix](https://es.wikipedia.org/wiki/Matriz_identidad) is the neutral element of [matrix multiplication](https://es.wikipedia.org/wiki/Multiplicaci%C3%B3n_de_matrices); it is the equivalent of the number 1. Any matrix multiplied by the [identity matrix](https://es.wikipedia.org/wiki/Matriz_identidad) gives back the same matrix. The [identity matrix](https://es.wikipedia.org/wiki/Matriz_identidad) is a [square matrix](https://es.wikipedia.org/wiki/Matriz_cuadrada) (it always has the same number of rows as columns); its main diagonal consists of 1s and the remaining elements are 0. It is usually denoted by the letter I.
For example, the 3x3 identity matrix would be the following:
$$I=\begin{bmatrix}1 & 0 & 0 & \\0 & 1 & 0\\ 0 & 0 & 1\end{bmatrix}$$
Now that we know the concept of the [identity matrix](https://es.wikipedia.org/wiki/Matriz_identidad), we can move on to the concept of the [inverse matrix](https://es.wikipedia.org/wiki/Matriz_invertible). If we have a matrix A, the [inverse matrix](https://es.wikipedia.org/wiki/Matriz_invertible) of A, written $A^{-1}$, is the [square matrix](https://es.wikipedia.org/wiki/Matriz_cuadrada) such that the product $A$x$A^{-1}$ equals the [identity matrix](https://es.wikipedia.org/wiki/Matriz_identidad) I. In other words, it is the reciprocal <a href="http://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrix</a> of A.
$$A × A^{-1} = A^{-1} × A = I$$
Keep in mind that this [inverse matrix](https://es.wikipedia.org/wiki/Matriz_invertible) may not exist in many cases. In that case we say the matrix is singular or degenerate. A matrix is singular if and only if its <a href="https://es.wikipedia.org/wiki/Determinante_(matem%C3%A1tica)">determinant</a> is zero.
The <a href="https://es.wikipedia.org/wiki/Determinante_(matem%C3%A1tica)">determinant</a> is a special number that can be computed for [square matrices](https://es.wikipedia.org/wiki/Matriz_cuadrada). For a 3x3 matrix, it can be computed as the sum of the products of the diagonals of the matrix in one direction minus the sum of the products of the diagonals in the other direction. It is denoted by the symbol |A|.
$$A=\begin{bmatrix}a_{11} & a_{12} & a_{13} & \\a_{21} & a_{22} & a_{23} & \\ a_{31} & a_{32} & a_{33} & \end{bmatrix}$$
$$|A| =
(a_{11} a_{22} a_{33}
+ a_{12} a_{23} a_{31}
+ a_{13} a_{21} a_{32} )
- (a_{31} a_{22} a_{13}
+ a_{32} a_{23} a_{11}
+ a_{33} a_{21} a_{12})
$$
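As a quick check of the formula above (an illustrative addition; the matrix values are arbitrary), we can evaluate it entry by entry and compare against `np.linalg.det`:
```python
import numpy as np

M = np.array([[2., 0., 1.],
              [3., 0., 0.],
              [5., 1., 1.]])

# determinant via the diagonal (Sarrus) formula written above
det_manual = (M[0,0]*M[1,1]*M[2,2] + M[0,1]*M[1,2]*M[2,0] + M[0,2]*M[1,0]*M[2,1]) \
           - (M[2,0]*M[1,1]*M[0,2] + M[2,1]*M[1,2]*M[0,0] + M[2,2]*M[1,0]*M[0,1])

print(det_manual)        # 3.0
print(np.linalg.det(M))  # ~3.0 (up to floating point)
```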
Finally, the [transpose matrix](http://es.wikipedia.org/wiki/Matriz_transpuesta) is the one in which the rows become columns and the columns become rows. It is denoted by the symbol $A^\intercal$
$$\begin{bmatrix}a & b & \\c & d & \\ e & f & \end{bmatrix}^T:=\begin{bmatrix}a & c & e &\\b & d & f & \end{bmatrix}$$
Examples in [Python](http://python.org/):
```python
# Creating a 2x2 identity matrix
I = np.eye(2)
I
```
array([[1., 0.],
[0., 1.]])
```python
# Multiplying a matrix by the identity gives back the same matrix
A = np.array([[4, 7],
[2, 6]])
A
```
array([[4, 7],
[2, 6]])
```python
A.dot(I) # AxI = A
```
array([[4., 7.],
[2., 6.]])
```python
# Computing the determinant of the matrix A
np.linalg.det(A)
```
10.000000000000002
```python
# Computing the inverse of A
A_inv = np.linalg.inv(A)
A_inv
```
array([[ 0.6, -0.7],
[-0.2, 0.4]])
```python
# A x A_inv gives I as a result
A.dot(A_inv)
```
array([[ 1.00000000e+00, -1.11022302e-16],
[ 1.11022302e-16, 1.00000000e+00]])
```python
# Transposing a matrix
A = np.arange(6).reshape(3, 2)
A
```
array([[0, 1],
[2, 3],
[4, 5]])
```python
np.transpose(A)
```
array([[0, 2, 4],
[1, 3, 5]])
### Systems of linear equations
One of the main applications of [linear algebra](http://es.wikipedia.org/wiki/%C3%81lgebra_lineal) is solving systems of linear equations.
A [linear equation](https://es.wikipedia.org/wiki/Ecuaci%C3%B3n_de_primer_grado) is an equation that involves only sums and differences of one or more variables raised to the first power; it is the equation of a straight line. When our problem is described by more than one [linear equation](https://es.wikipedia.org/wiki/Ecuaci%C3%B3n_de_primer_grado), we speak of a [system of linear equations](http://es.wikipedia.org/wiki/Sistema_de_ecuaciones_lineales). For example, we could have a system of two equations in two unknowns such as the following:
$$ x - 2y = 1$$
$$3x + 2y = 11$$
The idea is to find the values of $x$ and $y$ that satisfy both equations. One way to do this is to plot both lines and look for the point where they cross.
In [Python](http://python.org/) this can be done very easily with the help of [matplotlib](http://matplotlib.org/).
```python
# plotting the system of equations
x_vals = np.linspace(0, 5, 50) # create 50 values between 0 and 5
plt.plot(x_vals, (1 - x_vals)/-2) # plot x - 2y = 1
plt.plot(x_vals, (11 - (3*x_vals))/2) # plot 3x + 2y = 11
plt.axis(ymin = 0)
```
To see where the expression `(1 - x_vals)/-2` comes from, solve the first equation for $y$:
$$x - 2y = 1 \;\Rightarrow\; -2y = 1 - x \;\Rightarrow\; y = \frac{1-x}{-2} = \frac{x-1}{2}$$
After plotting the two functions, we can see that both lines cross at the point (3, 1); that is, the solution of our system is $x=3$ and $y=1$. In this case, since the system is simple and has only two unknowns, the graphical solution can be useful, but for more complicated systems a numerical solution is needed, and this is where <a href="http://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrices</a> come into play.
That same system could be written as a matrix equation as follows:
$$\begin{bmatrix}1 & -2 & \\3 & 2 & \end{bmatrix} \begin{bmatrix}x & \\y & \end{bmatrix} = \begin{bmatrix}1 & \\11 & \end{bmatrix}$$
Which is the same as saying that the <a href="http://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrix</a> A times the <a href="http://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrix</a> $x$ gives the [vector](http://es.wikipedia.org/wiki/Vector) b as a result.
$$ Ax = b$$
In this case we already know the value of $x$, so we can check that our solution is correct by carrying out the [matrix multiplication](https://es.wikipedia.org/wiki/Multiplicaci%C3%B3n_de_matrices).
```python
# Verifying the solution with matrix multiplication
A = np.array([[1., -2.],
[3., 2.]])
x = np.array([[3.],[1.]])
A.dot(x)
```
array([[ 1.],
[11.]])
To solve a [system of equations](http://es.wikipedia.org/wiki/Sistema_de_ecuaciones_lineales) numerically, there are several methods:
* **Substitution**: isolate any unknown in one of the equations, preferably the one with the smallest coefficient, and then substitute its value into another equation.
* **Equalization**: can be seen as a particular case of substitution, in which the same unknown is isolated in two equations and then the right-hand sides of both equations are set equal to each other.
* **Reduction (elimination)**: transform one of the equations (generally by multiplying it by constants) so that a chosen unknown appears with the same coefficient and opposite sign in two equations. Adding both equations then cancels that unknown, leaving an equation in a single unknown that is simple to solve.
* **Graphical method**: plot each equation of the system. This method (applied by hand) is only practical in the Cartesian plane (only two unknowns).
* **Gaussian elimination**: the elimination method of Gauss, or simply the Gauss method, converts a linear system of n equations in n unknowns into a triangular (echelon) one, in which the first equation has n unknowns, the second has n - 1 unknowns, ..., and the last equation has 1 unknown. It is then easy to start from the last equation and work upwards to compute the values of the remaining unknowns.
* **Gauss-Jordan elimination**: a variant of the previous method, consisting of reducing the augmented matrix of the system by elementary row operations until equations in a single unknown are obtained.
* **Cramer's method**: apply [Cramer's rule](http://es.wikipedia.org/wiki/Regla_de_Cramer) to solve the system. This method can only be applied when the coefficient matrix of the system is square and has a nonzero determinant (a small sketch is shown right after this list).
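Since the coefficient matrix of the 2x2 example above is square with a nonzero determinant, Cramer's rule applies to it; the function below is an illustrative sketch of ours (in practice `np.linalg.solve` is the tool to use):
```python
# A minimal sketch of Cramer's rule; assumes A is square with nonzero determinant
import numpy as np

def cramer(A, b):
    det_A = np.linalg.det(A)
    x = np.zeros(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b          # replace column i with the vector of constants
        x[i] = np.linalg.det(Ai) / det_A
    return x

A = np.array([[1., -2.],
              [3., 2.]])
b = np.array([1., 11.])
cramer(A, b)                  # array([3., 1.]) -> x=3, y=1
```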
The idea here is not to explain each of these methods, but to know that they exist and that [Python](http://python.org/) makes our life much easier, since to solve a [system of equations](http://es.wikipedia.org/wiki/Sistema_de_ecuaciones_lineales) we simply call the `solve()` function.
For example, to solve this system of 3 equations in 3 unknowns:
$$ x + 2y + 3z = 6$$
$$ 2x + 5y + 2z = 4$$
$$ 6x - 3y + z = 2$$
We first build the coefficient <a href="http://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrix</a> A and the right-hand-side <a href="http://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrix</a> b, and then use `solve()` to solve the system.
```python
# Creating the coefficient matrix
A = np.array([[1, 2, 3],
[2, 5, 2],
[6, -3, 1]])
A
```
array([[ 1, 2, 3],
[ 2, 5, 2],
[ 6, -3, 1]])
```python
# Creating the right-hand-side vector
b = np.array([6, 4, 2])
b
```
array([6, 4, 2])
```python
# Solving the system of equations
x = np.linalg.solve(A, b)
x
```
array([-1.48029737e-16, -1.48029737e-16, 2.00000000e+00])
```python
# Verifying the solution
A.dot(x) == b
```
array([False, True, True])
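Because the computed solution carries tiny floating-point residues (the first two components are on the order of 1e-16 rather than exactly 0), the element-wise comparison above returns `False` for the first entry. A more robust check (an illustrative addition of ours) is to compare with a tolerance:
```python
# compare A.dot(x) and b up to floating-point tolerance
np.allclose(A.dot(x), b)   # True
```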
### Linear programming
[Linear programming](http://es.wikipedia.org/wiki/Programaci%C3%B3n_lineal) studies situations in which we must maximize or minimize functions that are subject to certain constraints.
It consists of optimizing (minimizing or maximizing) a linear function, called the objective function, whose variables are subject to a series of constraints that we express as a [system of linear inequalities](http://es.wikipedia.org/wiki/Inecuaci%C3%B3n#Sistema_de_inecuaciones).
To solve a linear programming problem, we follow these steps:
1. Choose the unknowns.
2. Write the objective function in terms of the data of the problem.
3. Write the constraints as a system of inequalities.
4. Find the set of feasible solutions by graphing the constraints.
5. Compute the coordinates of the vertices of the feasible region (if there are few of them).
6. Evaluate the objective function at each vertex to see which one gives the maximum or minimum value, depending on what the problem asks for (keeping in mind that a solution may not exist).
Let's look at an example and at how [Python](http://python.org/) helps us solve it easily.
Suppose we have the following *objective function*:
$$f(x_{1},x_{2})= 50x_{1} + 40x_{2}$$
and the following *constraints*:
$$x_{1} + 1.5x_{2} \leq 750$$
$$2x_{1} + x_{2} \leq 1000$$
$$x_{1} \geq 0$$
$$x_{2} \geq 0$$
We can solve it using [PuLP](http://pythonhosted.org//PuLP/), [CVXOPT](http://cvxopt.org/), or graphically (with [matplotlib](http://matplotlib.org/)), as follows.
```python
!python -m pip install pulp
```
Requirement already satisfied: pulp in c:\users\admin\appdata\local\programs\python\python37\lib\site-packages (2.6.0)
```python
!python -m pip install --upgrade pip
```
Requirement already satisfied: pip in c:\users\admin\appdata\local\programs\python\python37\lib\site-packages (21.3.1)
```python
!pip install PyHamcrest
```
Requirement already satisfied: PyHamcrest in c:\users\admin\appdata\local\programs\python\python37\lib\site-packages (2.0.2)
```python
# Solving the optimization with pulp
from pulp import *
# declaring the variables
x1 = LpVariable("x1", 0, 800) # 0 <= x1 <= 800
x2 = LpVariable("x2", 0, 1000) # 0 <= x2 <= 1000
# defining the problem
prob = LpProblem("problem", LpMaximize)
# defining the constraints
prob += x1+1.5*x2 <= 750
prob += 2*x1+x2 <= 1000
prob += x1>=0
prob += x2>=0
# defining the objective function to maximize
prob += 50*x1+40*x2
# solving the problem
status = prob.solve(use_mps=False)
LpStatus[status]
# printing the results
(value(x1), value(x2))
```
(375.0, 250.0)
```python
!pip install cvxopt
```
Requirement already satisfied: cvxopt in c:\users\admin\appdata\local\programs\python\python37\lib\site-packages (1.2.7)
```python
# Solving the problem with cvxopt
from cvxopt import matrix, solvers
A = matrix([[-1., -2., 1., 0.], # column for x1
            [-1.5, -1., 0., 1.]]) # column for x2
b = matrix([750., 1000., 0., 0.]) # right-hand sides
c = matrix([50., 40.]) # objective function
# solving the problem
sol=solvers.lp(c,A,b)
```
pcost dcost gap pres dres k/t
0: -2.5472e+04 -3.6797e+04 5e+03 0e+00 3e-01 1e+00
1: -2.8720e+04 -2.9111e+04 1e+02 5e-16 9e-03 2e+01
2: -2.8750e+04 -2.8754e+04 1e+00 3e-16 9e-05 2e-01
3: -2.8750e+04 -2.8750e+04 1e-02 4e-17 9e-07 2e-03
4: -2.8750e+04 -2.8750e+04 1e-04 2e-16 9e-09 2e-05
Optimal solution found.
```python
# printing the solution
print('{0:.2f}, {1:.2f}'.format(sol['x'][0]*-1, sol['x'][1]*-1))
```
375.00, 250.00
```python
# Solving the optimization graphically
x_vals = np.linspace(0, 800, 10) # 10 values between 0 and 800
plt.plot(x_vals, ((750 - x_vals)/1.5)) # plot x1 + 1.5x2 = 750
plt.plot(x_vals, (1000 - 2*x_vals)) # plot 2x1 + x2 = 1000
plt.axis(ymin = 0)
```
As we can see in the plot, both lines cross at the optimal solution, x1=375 and x2=250.
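As a further alternative (an illustrative addition of ours, assuming SciPy is installed), the same problem can be solved with `scipy.optimize.linprog`, which minimizes by default, so we pass the negated objective:
```python
from scipy.optimize import linprog

# maximize 50*x1 + 40*x2  ->  minimize -50*x1 - 40*x2
res = linprog(c=[-50, -40],
              A_ub=[[1, 1.5],
                    [2, 1]],
              b_ub=[750, 1000],
              bounds=[(0, None), (0, None)])
print(res.x)      # approximately [375. 250.]
print(-res.fun)   # approximately 28750.0
```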
This concludes our introduction to [linear algebra](http://es.wikipedia.org/wiki/%C3%81lgebra_lineal) with [Python](http://python.org/).
## Fields
A <a href="https://es.wikipedia.org/wiki/Cuerpo_(matem%C3%A1ticas)">field</a>, $F$, is an [algebraic structure](https://es.wikipedia.org/wiki/Estructura_algebraica) in which the operations of <a href="https://es.wikipedia.org/wiki/Adici%C3%B3n_(matem%C3%A1ticas)">addition</a> and [multiplication](https://es.wikipedia.org/wiki/Multiplicaci%C3%B3n) can be performed and satisfy the following properties:
1. The [commutative property](https://es.wikipedia.org/wiki/Conmutatividad) of both addition and multiplication; that is: $a + b = b + a$ and $a \cdot b = b \cdot a$, for all $a, b \in F$.
2. The <a href="https://es.wikipedia.org/wiki/Asociatividad_(%C3%A1lgebra)">associative property</a> of both addition and multiplication; that is: $(a + b) + c = a + (b + c)$ and $(a \cdot b) \cdot c = a \cdot (b \cdot c)$, for all $a, b, c \in F$.
3. The [distributive property](https://es.wikipedia.org/wiki/Distributividad) of multiplication over addition; that is: $a \cdot (b + c) = a \cdot b + a \cdot c$, for all $a, b, c \in F$.
4. The existence of an *[identity element](https://es.wikipedia.org/wiki/Elemento_neutro)* for both addition and multiplication; that is: $a + 0 = a$ and $a \cdot 1 = a$, for all $a \in F$.
5. The existence of an *[inverse element](https://es.wikipedia.org/wiki/Elemento_sim%C3%A9trico)* for both addition and multiplication; that is: $a + (-a) = 0$, and $a \cdot a^{-1} = 1$ for all $a \in F$ with $a \ne 0$.
Two of the most common <a href="https://es.wikipedia.org/wiki/Cuerpo_(matem%C3%A1ticas)">fields</a> we will encounter when working on [linear algebra](http://relopezbriega.github.io/tag/algebra.html) problems are the [set](http://relopezbriega.github.io/blog/2015/10/11/conjuntos-con-python/) of [real numbers](https://es.wikipedia.org/wiki/N%C3%BAmero_real), $\mathbb{R}$, and the [set](http://relopezbriega.github.io/blog/2015/10/11/conjuntos-con-python/) of [complex numbers](http://relopezbriega.github.io/blog/2015/10/12/numeros-complejos-con-python/), $\mathbb{C}$.
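As a tiny illustration (an addition of ours, not in the original text), the rational numbers $\mathbb{Q}$ also form a field, and a few of these axioms can be checked exactly with Python's `fractions` module:
```python
from fractions import Fraction

a, b, c = Fraction(1, 3), Fraction(2, 5), Fraction(-7, 4)

print(a + b == b + a)                # commutativity of addition
print((a * b) * c == a * (b * c))    # associativity of multiplication
print(a * (b + c) == a * b + a * c)  # distributivity
print(a * a**-1 == 1)                # multiplicative inverse
```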
## Vectors
Many physical notions, such as forces, velocities and accelerations, involve a magnitude (the value of the force, velocity or acceleration) and a direction. Any entity that involves magnitude and direction is called a [vector](http://es.wikipedia.org/wiki/Vector). [Vectors](http://es.wikipedia.org/wiki/Vector) are drawn as arrows, in which the length defines the magnitude and the direction of the arrow represents the direction of the [vector](http://es.wikipedia.org/wiki/Vector). We can think of [vectors](http://es.wikipedia.org/wiki/Vector) as a sequence of numbers. These numbers have a preset order, and we can identify each individual number by its index in that order. [Vectors](http://es.wikipedia.org/wiki/Vector) identify points in space, where each element represents a coordinate along an axis. The typical way of writing them is the following:
$$v = \left[ \begin{array}{c} x_1 \\ x_2 \\ \vdots \\ x_n \end{array} \right]$$
Geometrically, we can represent them in the 2-dimensional plane as follows:
```python
!pip install scipy
```
Requirement already satisfied: scipy in c:\users\admin\appdata\local\programs\python\python37\lib\site-packages (1.7.3)
Requirement already satisfied: numpy<1.23.0,>=1.16.5 in c:\users\admin\appdata\local\programs\python\python37\lib\site-packages (from scipy) (1.21.4)
```python
!pip install sympy
```
Requirement already satisfied: sympy in c:\users\admin\appdata\local\programs\python\python37\lib\site-packages (1.9)
Requirement already satisfied: mpmath>=0.19 in c:\users\admin\appdata\local\programs\python\python37\lib\site-packages (from sympy) (1.2.1)
```python
# <!-- collapse=True -->
# importing the necessary modules
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg
import scipy.linalg as la
import sympy
# print using mathematical notation
sympy.init_printing(use_latex='mathjax')
```
```python
# <!-- collapse=True -->
# plotting the vector [2, 4] in R^2
def move_spines():
    """Create the pyplot figure and axes. Move the left and bottom spines
    so that they intersect the origin. Remove the right and top spines.
    Return the axes."""
    fix, ax = plt.subplots()
    for spine in ["left", "bottom"]:
        ax.spines[spine].set_position("zero")
    for spine in ["right", "top"]:
        ax.spines[spine].set_color("none")
    return ax

def vect_fig(vector, color):
    """Plot a single vector in the plane."""
    v = vector
    ax.annotate(" ", xy=v, xytext=[0, 0], color=color,
                arrowprops=dict(facecolor=color,
                                shrink=0,
                                alpha=0.7,
                                width=0.5))
    ax.text(1.1 * v[0], 1.1 * v[1], v)
ax = move_spines()
ax.set_xlim(-5, 5)
ax.set_ylim(-5, 5)
ax.grid()
vect_fig([2, 4], "blue")
```
## Linear combinations
When working with [vectors](http://es.wikipedia.org/wiki/Vector), we come across two fundamental operations: *addition*, and *multiplication by <a href="http://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">scalars</a>*. When we *add* two vectors $v$ and $w$, we add them element by element, as follows:
$$v + w
=
\left[
\begin{array}{c}
v_1 \\
v_2 \\
\vdots \\
v_n
\end{array}
\right]
+
\left[
\begin{array}{c}
w_1 \\
w_2 \\
\vdots \\
w_n
\end{array}
\right] =
\left[
\begin{array}{c}
v_1 + w_1 \\
v_2 + w_2 \\
\vdots \\
v_n + w_n
\end{array}
\right]$$
Geometrically, we can picture this as follows:
```python
# <!-- collapse=True -->
# plotting vector addition in R^2
# [2, 4] + [2, -2]
ax = move_spines()
ax.set_xlim(-5, 5)
ax.set_ylim(-5, 5)
ax.grid()
vecs = [[2, 4], [2, -2]] # list of vectors
for v in vecs:
    vect_fig(v, "blue")
v = np.array([2, 4]) + np.array([2, -2])
vect_fig(v, "red")
ax.plot([2, 4], [-2, 2], linestyle='--')
a = ax.plot([2, 4], [4, 2], linestyle='--')
```
When we *multiply [vectors](http://es.wikipedia.org/wiki/Vector) by <a href="http://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">scalars</a>*, we take a number $\alpha$ and a [vector](http://es.wikipedia.org/wiki/Vector) $v$, and create a new [vector](http://es.wikipedia.org/wiki/Vector) $w$ in which each element of $v$ is *multiplied* by $\alpha$, as follows:
$$\begin{split}\alpha v
=
\left[
\begin{array}{c}
\alpha v_1 \\
\alpha v_2 \\
\vdots \\
\alpha v_n
\end{array}
\right]\end{split}$$
Geometrically, we can represent this operation in the 2-dimensional plane as follows:
```python
# <!-- collapse=True -->
# plotting multiplication by a scalar in R^2
# [2, 3] * 2
ax = move_spines()
ax.set_xlim(-6, 6)
ax.set_ylim(-6, 6)
ax.grid()
v = np.array([2, 3])
vect_fig(v, "blue")
v = v * 2
vect_fig(v, "red")
```
When we combine these two operations, we form what is known in [Linear Algebra](http://relopezbriega.github.io/tag/algebra.html) as [linear combinations](https://es.wikipedia.org/wiki/Combinaci%C3%B3n_lineal). That is, a linear combination is a mathematical expression built on a set of [vectors](http://es.wikipedia.org/wiki/Vector), in which each vector is *multiplied by a <a href="http://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">scalar</a>* and the results are then *added together*. Mathematically, we can express it as follows:
$$w = \alpha_1 v_1 + \alpha_2 v_2 + \dots + \alpha_n v_n = \sum_{i=1}^n \alpha_i v_i
$$
where the $v_i$ are [vectors](http://es.wikipedia.org/wiki/Vector) and the $\alpha_i$ are <a href="http://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">scalars</a>.
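To make this concrete, here is a small sketch (an addition to the original post, using arbitrary illustrative vectors and scalars) that evaluates a linear combination with NumPy:
```python
# linear combination w = 2*v1 - 1*v2 + 3*v3, computed element-wise with NumPy
import numpy as np

v1 = np.array([1, 0, 2])
v2 = np.array([0, 1, 1])
v3 = np.array([1, 1, 0])
alphas = [2, -1, 3]

w = alphas[0] * v1 + alphas[1] * v2 + alphas[2] * v3
print(w)  # [5 2 3]
```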
## Matrices, linear combinations and Ax = b
A <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrix</a> is a two-dimensional array of numbers arranged in rows and columns, where a row is each of the horizontal lines of the matrix and a column is each of the vertical lines. In a matrix, every element can be identified by two indices, one for the row and one for the column in which it sits. We can represent them in the following way:
$$A=\begin{bmatrix}a_{11} & a_{12} & \dots & a_{1n}\\a_{21} & a_{22} & \dots & a_{2n}
\\ \vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \dots & a_{nn}\end{bmatrix}$$
<a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">Matrices</a> have many applications and serve, in particular, to represent the coefficients of [systems of linear equations](https://es.wikipedia.org/wiki/Sistema_de_ecuaciones_lineales) or to represent [linear combinations](https://es.wikipedia.org/wiki/Combinaci%C3%B3n_lineal).
Suppose we have the following 3 vectors:
$$x_1
=
\left[
\begin{array}{c}
1 \\
-1 \\
0
\end{array}
\right]
x_2 =
\left[
\begin{array}{c}
0 \\
1 \\
-1
\end{array}
\right] \
x_3 =
\left[
\begin{array}{c}
0 \\
0 \\
1
\end{array}
\right]$$
their [linear combination](https://es.wikipedia.org/wiki/Combinaci%C3%B3n_lineal) in 3-dimensional space equals $\alpha_1 x_1 + \alpha_2 x_2 + \alpha_3 x_3$; which is the same as saying:
$$\alpha_1
\left[
\begin{array}{c}
1 \\
-1 \\
0
\end{array}
\right]
+ \alpha_2
\left[
\begin{array}{c}
0 \\
1 \\
-1
\end{array}
\right] + \alpha_3
\left[
\begin{array}{c}
0 \\
0 \\
1
\end{array}
\right] = \left[
\begin{array}{c}
\alpha_1 \\
\alpha_2 - \alpha_1 \\
\alpha_3 - \alpha_2
\end{array}
\right]$$
We could now rewrite this [linear combination](https://es.wikipedia.org/wiki/Combinaci%C3%B3n_lineal) in matrix form. The vectors $x_1, x_2$ and $x_3$ become the columns of the <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrix</a> $A$, and the <a href="http://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">scalars</a> $\alpha_1, \alpha_2$ and $\alpha_3$ become the components of the [vector](http://es.wikipedia.org/wiki/Vector) $x$, as follows:
$$\begin{bmatrix}1 & 0 & 0\\-1 & 1 & 0
\\ 0 & -1 & 1\end{bmatrix}\begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \alpha_3\end{bmatrix}=
\begin{bmatrix}\alpha_1 \\ \alpha_2 - \alpha_1 \\ \alpha_3 - \alpha_2 \end{bmatrix}$$
In this way, the <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrix</a> $A$ multiplied by the [vector](http://es.wikipedia.org/wiki/Vector) $x$ gives as a result the same [linear combination](https://es.wikipedia.org/wiki/Combinaci%C3%B3n_lineal) $b$. We thus arrive at one of the most fundamental equations of [Linear Algebra](http://relopezbriega.github.io/tag/algebra.html):
$$Ax = b$$
This equation is not only useful for expressing [linear combinations](https://es.wikipedia.org/wiki/Combinaci%C3%B3n_lineal); it also becomes extremely important when solving [systems of linear equations](https://es.wikipedia.org/wiki/Sistema_de_ecuaciones_lineales), where $b$ is known and the unknown is $x$. For example, suppose we want to solve the following [system of equations](https://es.wikipedia.org/wiki/Sistema_de_ecuaciones_lineales) with 3 unknowns:
$$ 2x_1 + 3x_2 + 5x_3 = 52 \\
3x_1 + 6x_2 + 2x_3 = 61 \\
8x_1 + 3x_2 + 6x_3 = 75
$$
We can use [SymPy](http://www.sympy.org/es/) to express the <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrix</a> $A$ and the vector $b$, and then arrive at the solution for the [vector](http://es.wikipedia.org/wiki/Vector) $x$.
```python
# Solving a system of equations with SymPy
A = sympy.Matrix(( (2, 3, 5), (3, 6, 2), (8, 3, 6) ))
A
```
$\displaystyle \left[\begin{matrix}2 & 3 & 5\\3 & 6 & 2\\8 & 3 & 6\end{matrix}\right]$
```python
b = sympy.Matrix(3,1,(52,61,75))
b
```
$\displaystyle \left[\begin{matrix}52\\61\\75\end{matrix}\right]$
```python
# Solving Ax = b
x = A.LUsolve(b)
x
```
$\displaystyle \left[\begin{matrix}3\\7\\5\end{matrix}\right]$
```python
# Checking the solution
A*x
```
$\displaystyle \left[\begin{matrix}52\\61\\75\end{matrix}\right]$
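As a cross-check (this cell is an addition), the same system can also be solved numerically with NumPy; the matrix and right-hand side repeat the values used above:
```python
# solving the same 3x3 system Ax = b with NumPy
import numpy as np

A = np.array([[2., 3., 5.],
              [3., 6., 2.],
              [8., 3., 6.]])
b = np.array([52., 61., 75.])

x = np.linalg.solve(A, b)
print(x)                      # approximately [3. 7. 5.]
print(np.allclose(A @ x, b))  # True
```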
## The identity matrix, the transpose matrix and the invertible matrix
Three <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrices</a> of great importance in [Linear Algebra](http://relopezbriega.github.io/tag/algebra.html) problems are the [identity matrix](https://es.wikipedia.org/wiki/Matriz_identidad), the [transpose matrix](http://es.wikipedia.org/wiki/Matriz_transpuesta) and the [invertible matrix](https://es.wikipedia.org/wiki/Matriz_invertible).
The [identity matrix](https://es.wikipedia.org/wiki/Matriz_identidad) is the neutral element of [matrix multiplication](https://es.wikipedia.org/wiki/Multiplicaci%C3%B3n_de_matrices); it is the equivalent of the number 1. Any matrix multiplied by the identity matrix gives back the same matrix. The identity matrix is a [square matrix](https://es.wikipedia.org/wiki/Matriz_cuadrada) (it always has the same number of rows as columns); its main diagonal consists entirely of 1s and the remaining elements are 0. It is usually denoted by the letter $I$.
For example, the 3x3 [identity matrix](https://es.wikipedia.org/wiki/Matriz_identidad) would be the following:
$$I=\begin{bmatrix}1 & 0 & 0\\0 & 1 & 0\\ 0 & 0 & 1\end{bmatrix}$$
The [transpose matrix](http://es.wikipedia.org/wiki/Matriz_transpuesta) of an $m \times n$ <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrix</a> $A$ is the $n \times m$ matrix $A^T$, obtained by turning the rows into columns and the columns into rows, as follows:
$$\begin{bmatrix}a & b \\c & d \\ e & f \end{bmatrix}^T=
\begin{bmatrix}a & c & e \\b & d & f \end{bmatrix}$$
A [square matrix](https://es.wikipedia.org/wiki/Matriz_cuadrada) is *[symmetric](https://es.wikipedia.org/wiki/Matriz_sim%C3%A9trica)* if $A^T = A$, that is, if $A$ equals its own [transpose matrix](http://es.wikipedia.org/wiki/Matriz_transpuesta).
Some of the properties of [transpose matrices](http://es.wikipedia.org/wiki/Matriz_transpuesta) are:
a. $(A^T)^T = A$
b. $(A + B)^T = A^T + B^T$
c. $k(A)^T = k(A^T)$
d. $(AB)^T = B^T A^T$
e. $(A^r)^T = (A^T)^r$ for every non-negative $r$.
f. If $A$ is a [square matrix](https://es.wikipedia.org/wiki/Matriz_cuadrada), then $A + A^T$ is a [symmetric matrix](https://es.wikipedia.org/wiki/Matriz_sim%C3%A9trica).
g. For any <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrix</a> $A$, both $A A^T$ and $A^T A$ are [symmetric matrices](https://es.wikipedia.org/wiki/Matriz_sim%C3%A9trica).
Let's see some examples in [Python](http://python.org/)
```python
# Transpose matrix
A = sympy.Matrix( [[ 2,-3,-8, 7],
[-2,-1, 2,-7],
[ 1, 0,-3, 6]] )
A
```
$\displaystyle \left[\begin{matrix}2 & -3 & -8 & 7\\-2 & -1 & 2 & -7\\1 & 0 & -3 & 6\end{matrix}\right]$
```python
A.transpose()
```
$\displaystyle \left[\begin{matrix}2 & -2 & 1\\-3 & -1 & 0\\-8 & 2 & -3\\7 & -7 & 6\end{matrix}\right]$
```python
# the transpose of the transpose returns A.
A.transpose().transpose()
```
$\displaystyle \left[\begin{matrix}2 & -3 & -8 & 7\\-2 & -1 & 2 & -7\\1 & 0 & -3 & 6\end{matrix}\right]$
```python
# creating a symmetric matrix
As = A*A.transpose()
As
```
$\displaystyle \left[\begin{matrix}126 & -66 & 68\\-66 & 58 & -50\\68 & -50 & 46\end{matrix}\right]$
```python
# checking symmetry.
As.transpose()
```
$\displaystyle \left[\begin{matrix}126 & -66 & 68\\-66 & 58 & -50\\68 & -50 & 46\end{matrix}\right]$
The [invertible matrix](https://es.wikipedia.org/wiki/Matriz_invertible) is very important, since it is related to the equation $Ax = b$. If we have an $n \times n$ [square matrix](https://es.wikipedia.org/wiki/Matriz_cuadrada) $A$, then the [inverse matrix](https://es.wikipedia.org/wiki/Matriz_invertible) of $A$ is an $n \times n$ <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrix</a> $A'$ or $A^{-1}$ such that the product $A A^{-1}$ equals the [identity matrix](https://es.wikipedia.org/wiki/Matriz_identidad) $I$. In other words, it is the reciprocal matrix of $A$.
$A A^{-1} = I$ or $A^{-1} A = I$
If these conditions hold, we say that the [matrix is invertible](https://es.wikipedia.org/wiki/Matriz_invertible).
A <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrix</a> being [invertible](https://es.wikipedia.org/wiki/Matriz_invertible) has important implications, such as:
a. If $A$ is an [invertible matrix](https://es.wikipedia.org/wiki/Matriz_invertible), then its [inverse matrix](https://es.wikipedia.org/wiki/Matriz_invertible) is unique.
b. If $A$ is an $n \times n$ [invertible matrix](https://es.wikipedia.org/wiki/Matriz_invertible), then the [system of linear equations](https://es.wikipedia.org/wiki/Sistema_de_ecuaciones_lineales) given by $Ax = b$ has a unique solution $x = A^{-1}b$ for every $b$ in $\mathbb{R}^n$.
c. A <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrix</a> is [invertible](https://es.wikipedia.org/wiki/Matriz_invertible) if and only if its <a href="https://es.wikipedia.org/wiki/Determinante_(matem%C3%A1tica)">determinant</a> is non-zero. When the determinant is zero, the matrix is said to be singular.
d. If $A$ is an [invertible matrix](https://es.wikipedia.org/wiki/Matriz_invertible), then the [system](https://es.wikipedia.org/wiki/Sistema_de_ecuaciones_lineales) $Ax = 0$ has only the *trivial* solution, that is, the one in which all the unknowns are zero.
e. If $A$ is an [invertible matrix](https://es.wikipedia.org/wiki/Matriz_invertible), then its [row echelon form](https://es.wikipedia.org/wiki/Matriz_escalonada) equals the [identity matrix](https://es.wikipedia.org/wiki/Matriz_identidad).
f. If $A$ is an [invertible matrix](https://es.wikipedia.org/wiki/Matriz_invertible), then $A^{-1}$ is [invertible](https://es.wikipedia.org/wiki/Matriz_invertible) and:
$$(A^{-1})^{-1} = A$$
g. If $A$ is an [invertible matrix](https://es.wikipedia.org/wiki/Matriz_invertible) and $\alpha$ is a non-zero <a href="http://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">scalar</a>, then $\alpha A$ is [invertible](https://es.wikipedia.org/wiki/Matriz_invertible) and:
$$(\alpha A)^{-1} = \frac{1}{\alpha}A^{-1}$$
h. If $A$ and $B$ are [invertible matrices](https://es.wikipedia.org/wiki/Matriz_invertible) of the same size, then $AB$ is [invertible](https://es.wikipedia.org/wiki/Matriz_invertible) and:
$$(AB)^{-1} = B^{-1} A^{-1}$$
i. If $A$ is an [invertible matrix](https://es.wikipedia.org/wiki/Matriz_invertible), then $A^T$ is [invertible](https://es.wikipedia.org/wiki/Matriz_invertible) and:
$$(A^T)^{-1} = (A^{-1})^T$$
With [SymPy](http://www.sympy.org/es/) we can work with [invertible matrices](https://es.wikipedia.org/wiki/Matriz_invertible) as follows:
```python
# Invertible matrix
A = sympy.Matrix( [[1,2],
[3,9]] )
A
```
$\displaystyle \left[\begin{matrix}1 & 2\\3 & 9\end{matrix}\right]$
```python
A_inv = A.inv()
A_inv
```
$\displaystyle \left[\begin{matrix}3 & - \frac{2}{3}\\-1 & \frac{1}{3}\end{matrix}\right]$
```python
# A * A_inv = I
A*A_inv
```
$\displaystyle \left[\begin{matrix}1 & 0\\0 & 1\end{matrix}\right]$
```python
# the reduced row echelon form equals the identity.
A.rref()
```
$\displaystyle \left( \left[\begin{matrix}1 & 0\\0 & 1\end{matrix}\right], \ \left( 0, \ 1\right)\right)$
```python
# the inverse of A_inv is A
A_inv.inv()
```
$\displaystyle \left[\begin{matrix}1 & 2\\3 & 9\end{matrix}\right]$
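As a quick numerical sketch (added here, with an arbitrary second matrix B) we can also verify properties h. and i. from the list above:
```python
# checking (AB)^-1 = B^-1 A^-1 and (A^T)^-1 = (A^-1)^T numerically
import numpy as np

A = np.array([[1., 2.],
              [3., 9.]])
B = np.array([[2., 1.],
              [0., 4.]])

print(np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ np.linalg.inv(A)))  # True
print(np.allclose(np.linalg.inv(A.T), np.linalg.inv(A).T))                     # True
```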
## Vector spaces
Mathematics derives much of its power from its ability to find the common features of diverse problems and to study them in an abstract way. There are many problems that involve the related concepts of *<a href="https://es.wikipedia.org/wiki/Adici%C3%B3n_(matem%C3%A1ticas)">addition</a>*, *multiplication by <a href="http://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">scalars</a>*, and [linearity](https://es.wikipedia.org/wiki/Lineal). To study these properties abstractly, we must introduce the notion of a [vector space](https://es.wikipedia.org/wiki/Espacio_vectorial).
To reach the definition of a [vector space](https://es.wikipedia.org/wiki/Espacio_vectorial), we must combine the concepts we have seen so far: <a href="https://es.wikipedia.org/wiki/Cuerpo_(matem%C3%A1ticas)">field</a>, [vector](http://es.wikipedia.org/wiki/Vector), and the operations of *<a href="https://es.wikipedia.org/wiki/Adici%C3%B3n_(matem%C3%A1ticas)">addition</a>* and *multiplication by <a href="http://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">scalars</a>*. In this way a [vector space](https://es.wikipedia.org/wiki/Espacio_vectorial) $V$ over a <a href="https://es.wikipedia.org/wiki/Cuerpo_(matem%C3%A1ticas)">field</a> $F$ is a [set](http://relopezbriega.github.io/blog/2015/10/11/conjuntos-con-python/) on which the operations of *addition* and *multiplication by scalars* are defined, such that for every pair of elements $x$ and $y$ in $V$ there exists a unique element $x + y$ in $V$, and for every element $\alpha$ in $F$ and every element $x$ in $V$ there exists a unique element $\alpha x$ in $V$, so that the following conditions hold:
1. For all $x, y$ in $V$, $x + y = y + x$ ([commutativity](https://es.wikipedia.org/wiki/Conmutatividad) of addition).
2. For all $x, y, z$ in $V$, $(x + y) + z = x + (y + z)$ (<a href="https://es.wikipedia.org/wiki/Asociatividad_(%C3%A1lgebra)">associativity</a> of addition).
3. There exists an element in $V$, called $0$, such that $x + 0 = x$ for every $x$ in $V$.
4. For each element $x$ in $V$, there exists an element $y$ in $V$ such that $x + y = 0$.
5. For each element $x$ in $V$, $1 x = x$.
6. For each pair $\alpha, \beta$ in $F$ and each element $x$ in $V$, $(\alpha \beta) x = \alpha (\beta x)$.
7. For each element $\alpha$ in $F$ and each pair of elements $x, y$ in $V$, $\alpha(x + y) = \alpha x + \alpha y$.
8. For each pair of elements $\alpha, \beta$ in $F$ and each element $x$ in $V$, $(\alpha + \beta)x = \alpha x + \beta x$.
The most common [vector spaces](https://es.wikipedia.org/wiki/Espacio_vectorial) are $\mathbb{R}^2$, which represents the 2-dimensional plane and consists of all ordered pairs of [real numbers](https://es.wikipedia.org/wiki/N%C3%BAmero_real):
$$\mathbb{R}^2 = \{(x, y): x, y \in \mathbb{R}\}$$
and $\mathbb{R}^3$, which represents ordinary 3-dimensional space and consists of all ordered triples of [real numbers](https://es.wikipedia.org/wiki/N%C3%BAmero_real):
$$\mathbb{R}^3 = \{(x, y, z): x, y, z \in \mathbb{R}\}$$
One of the great beauties of [Linear Algebra](http://relopezbriega.github.io/tag/algebra.html) is that we can easily move on to working with spaces of $n$ dimensions, $\mathbb{R}^n$!
Nor do we have to restrict ourselves to the [real numbers](https://es.wikipedia.org/wiki/N%C3%BAmero_real): the definition we gave of a [vector space](https://es.wikipedia.org/wiki/Espacio_vectorial) rests on a <a href="https://es.wikipedia.org/wiki/Cuerpo_(matem%C3%A1ticas)">field</a>, and <a href="https://es.wikipedia.org/wiki/Cuerpo_(matem%C3%A1ticas)">fields</a> can also be given by the [complex numbers](http://relopezbriega.github.io/blog/2015/10/12/numeros-complejos-con-python/). We can therefore also have the [vector spaces](https://es.wikipedia.org/wiki/Espacio_vectorial) $\mathbb{C}^2, \mathbb{C}^3, \dots, \mathbb{C}^n$.
### Subspaces
Usually, in the study of any algebraic structure it is interesting to examine subsets that share the structure of the [set](http://relopezbriega.github.io/blog/2015/10/11/conjuntos-con-python/) under consideration. Thus, within [vector spaces](https://es.wikipedia.org/wiki/Espacio_vectorial) we can have [vector subspaces](https://es.wikipedia.org/wiki/Subespacio_vectorial), which are subsets that satisfy the same *properties* as the [vector space](https://es.wikipedia.org/wiki/Espacio_vectorial) that contains them. In this way, $\mathbb{R}^3$ can be viewed as a [subspace](https://es.wikipedia.org/wiki/Subespacio_vectorial) of the [vector space](https://es.wikipedia.org/wiki/Espacio_vectorial) $\mathbb{R}^n$ (for $n \ge 3$).
## Linear independence
[Linear independence](https://es.wikipedia.org/wiki/Dependencia_e_independencia_lineal) is an apparently simple concept with consequences that reach deep into many aspects of analysis. If we want to understand when a matrix can be [invertible](https://es.wikipedia.org/wiki/Matriz_invertible), or when a [system of linear equations](https://es.wikipedia.org/wiki/Sistema_de_ecuaciones_lineales) has a unique solution, or when a [least squares](https://es.wikipedia.org/wiki/M%C3%ADnimos_cuadrados) estimate is uniquely defined, the single most important fundamental idea is that of [linear independence](https://es.wikipedia.org/wiki/Dependencia_e_independencia_lineal) of [vectors](http://es.wikipedia.org/wiki/Vector).
Given a finite set of [vectors](http://es.wikipedia.org/wiki/Vector) $x_1, x_2, \dots, x_n$, they are said to be *[linearly independent](https://es.wikipedia.org/wiki/Dependencia_e_independencia_lineal)* if and only if the only <a href="http://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">scalars</a> $\alpha_1, \alpha_2, \dots, \alpha_n$ that satisfy the equation:
$$\alpha_1 x_1 + \alpha_2 x_2 + \dots + \alpha_n x_n = 0$$
are all zero, $\alpha_1 = \alpha_2 = \dots = \alpha_n = 0$.
If this does not hold, that is, if there exists a solution of the equation above in which not all the <a href="http://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">scalars</a> are zero, that solution is called *non-trivial* and the [vectors](http://es.wikipedia.org/wiki/Vector) are said to be *[linearly dependent](https://es.wikipedia.org/wiki/Dependencia_e_independencia_lineal)*.
To illustrate the definition and make it clearer, let's look at some examples. Suppose we want to determine whether the following [vectors](http://es.wikipedia.org/wiki/Vector) are *[linearly independent](https://es.wikipedia.org/wiki/Dependencia_e_independencia_lineal)*:
$$\begin{split}x_1
=
\left[
\begin{array}{c}
1.2 \\
1.1 \\
\end{array}
\right] \ \ \ x_2 =
\left[
\begin{array}{c}
-2.2 \\
1.4 \\
\end{array}
\right]\end{split}$$
To do this, we should solve the following [system of equations](https://es.wikipedia.org/wiki/Sistema_de_ecuaciones_lineales) and check whether the only solution is the one in which the <a href="http://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">scalars</a> are zero.
$$\begin{split}\alpha_1
\left[
\begin{array}{c}
1.2 \\
1.1 \\
\end{array}
\right] + \alpha_2
\left[
\begin{array}{c}
-2.2 \\
1.4 \\
\end{array}
\right]\end{split} = 0
$$
To solve this [system of equations](https://es.wikipedia.org/wiki/Sistema_de_ecuaciones_lineales), we can turn to [Python](http://python.org/) for help.
```python
# Solving the system of equations.
A = np.array([[1.2, -2.2],
[1.1, 1.4]])
b = np.array([0., 0.])
x = np.linalg.solve(A, b)
x
```
array([0., 0.])
```python
# <!-- collapse=True -->
# Graphical solution.
x_vals = np.linspace(-5, 5, 50) # creates 50 values between -5 and 5
ax = move_spines()
ax.set_xlim(-5, 5)
ax.set_ylim(-5, 5)
ax.grid()
ax.plot(x_vals, (1.2 * x_vals) / -2.2) # plots 1.2x_1 - 2.2x_2 = 0
a = ax.plot(x_vals, (1.1 * x_vals) / 1.4) # plots 1.1x_1 + 1.4x_2 = 0
```
As we can see, both from the numerical solution and from the graphical solution, these vectors are *[linearly independent](https://es.wikipedia.org/wiki/Dependencia_e_independencia_lineal)*, since the only solution of the equation $\alpha_1 x_1 + \alpha_2 x_2 + \dots + \alpha_n x_n = 0$ is the one in which the <a href="http://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">scalars</a> are zero.
Let us now determine whether, for example, the following [vectors](http://es.wikipedia.org/wiki/Vector) in $\mathbb{R}^4$ are *[linearly independent](https://es.wikipedia.org/wiki/Dependencia_e_independencia_lineal)*: $\{(3, 2, 2, 3), (3, 2, 1, 2), (3, 2, 0, 1)\}$. Now we should solve the following equation:
$$\alpha_1 (3, 2, 2, 3) +\alpha_2 (3, 2, 1, 2) + \alpha_3 (3, 2, 0, 1) = (0, 0, 0, 0)$$
To solve this system of equations, which is not square (it has 4 equations and only 3 unknowns), we can use [SymPy](http://www.sympy.org/es/).
```python
# SymPy to solve the system of linear equations
a1, a2, a3 = sympy.symbols('a1, a2, a3')
A = sympy.Matrix(( (3, 3, 3, 0), (2, 2, 2, 0), (2, 1, 0, 0), (3, 2, 1, 0) ))
A
```
$\displaystyle \left[\begin{matrix}3 & 3 & 3 & 0\\2 & 2 & 2 & 0\\2 & 1 & 0 & 0\\3 & 2 & 1 & 0\end{matrix}\right]$
```python
sympy.solve_linear_system(A, a1, a2, a3)
```
$\displaystyle \left\{ a_{1} : a_{3}, \ a_{2} : - 2 a_{3}\right\}$
As we can see, this solution is *non-trivial*: for example, the solution $\alpha_1 = 1, \ \alpha_2 = -2 , \ \alpha_3 = 1$ exists, in which the <a href="http://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">scalars</a> are not all zero. Therefore this set is *[linearly dependent](https://es.wikipedia.org/wiki/Dependencia_e_independencia_lineal)*.
Finally, we can consider whether the following [polynomials](https://es.wikipedia.org/wiki/Polinomio) are *[linearly independent](https://es.wikipedia.org/wiki/Dependencia_e_independencia_lineal)*: $1 -2x -x^2$, $1 + x$, $1 + x + 2x^2$. In this case, we should solve the following equation:
$$\alpha_1 (1 − 2x − x^2) + \alpha_2 (1 + x) + \alpha_3 (1 + x + 2x^2) = 0$$
and this equation is equivalent to the following:
$$(\alpha_1 + \alpha_2 + \alpha_3 ) + (−2 \alpha_1 + \alpha_2 + \alpha_3 )x + (−\alpha_1 + 2 \alpha_3 )x^2 = 0$$
Therefore, we can set up the following [system of equations](https://es.wikipedia.org/wiki/Sistema_de_ecuaciones_lineales):
$$\alpha_1 + \alpha_2 + \alpha_3 = 0, \\
-2 \alpha_1 + \alpha_2 + \alpha_3 = 0, \\
-\alpha_1 + 2 \alpha_3 = 0.
$$
which we can again solve with the help of [SymPy](http://www.sympy.org/es/).
```python
A = sympy.Matrix(( (1, 1, 1, 0), (-2, 1, 1, 0), (-1, 0, 2, 0) ))
A
```
$\displaystyle \left[\begin{matrix}1 & 1 & 1 & 0\\-2 & 1 & 1 & 0\\-1 & 0 & 2 & 0\end{matrix}\right]$
```python
sympy.solve_linear_system(A, a1, a2, a3)
```
$\displaystyle \left\{ a_{1} : 0, \ a_{2} : 0, \ a_{3} : 0\right\}$
As we can see, all the <a href="http://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">scalars</a> are zero, therefore these [polynomials](https://es.wikipedia.org/wiki/Polinomio) are *[linearly independent](https://es.wikipedia.org/wiki/Dependencia_e_independencia_lineal)*.
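The same conclusion can be reached with a shortcut (this check is an addition): the homogeneous system above has only the trivial solution exactly when the determinant of its coefficient matrix is non-zero:
```python
# coefficient matrix of the homogeneous system for the three polynomials
import numpy as np

C = np.array([[ 1, 1, 1],
              [-2, 1, 1],
              [-1, 0, 2]])

print(np.linalg.det(C))  # non-zero (about 6), so the polynomials are linearly independent
```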
### Null space, column space and row space
A term particularly related to [linear independence](https://es.wikipedia.org/wiki/Dependencia_e_independencia_lineal) is the <a href="https://es.wikipedia.org/wiki/N%C3%BAcleo_(matem%C3%A1tica)">null space or kernel</a>. The <a href="https://es.wikipedia.org/wiki/N%C3%BAcleo_(matem%C3%A1tica)">null space</a> of a <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrix</a> $A$, which we will denote $N(A)$, consists of all the solutions of the fundamental equation $Ax = 0$. Of course, an immediate solution of this equation is the case $x = 0$, which, as we saw, establishes [linear independence](https://es.wikipedia.org/wiki/Dependencia_e_independencia_lineal). This is the only solution precisely in the case of [invertible matrices](https://es.wikipedia.org/wiki/Matriz_invertible). For singular matrices (those that are not [invertible](https://es.wikipedia.org/wiki/Matriz_invertible), whose <a href="https://es.wikipedia.org/wiki/Determinante_(matem%C3%A1tica)">determinant</a> equals zero), there exist non-zero solutions of the equation $Ax = 0$. The set of all these solutions represents the <a href="https://es.wikipedia.org/wiki/N%C3%BAcleo_(matem%C3%A1tica)">null space</a>.
To find the <a href="https://es.wikipedia.org/wiki/N%C3%BAcleo_(matem%C3%A1tica)">null space</a> we can also get help from [SymPy](http://www.sympy.org/es/).
```python
# Null space of a matrix
A = sympy.Matrix(((1, 5, 7), (0, 0, 9)))
A
```
$\displaystyle \left[\begin{matrix}1 & 5 & 7\\0 & 0 & 9\end{matrix}\right]$
```python
# Computing the null space
x = A.nullspace()
x
```
$\displaystyle \left[ \left[\begin{matrix}-5\\1\\0\end{matrix}\right]\right]$
```python
# Checking the solution
A_aum = sympy.Matrix(((1, 5, 7, 0), (0, 0, 9, 0)))
sympy.solve_linear_system(A_aum, a1, a2, a3)
```
$\displaystyle \left\{ a_{1} : - 5 a_{2}, \ a_{3} : 0\right\}$
```python
# Verification with NumPy
A = np.array([[1, 5, 7],
[0, 0, 9]])
x = np.array([[-5],
[1],
[0]])
A.dot(x)
```
array([[0],
[0]])
Another space of great importance is the [column space](https://es.wikipedia.org/wiki/Subespacios_fundamentales_de_una_matriz). The [column space](https://es.wikipedia.org/wiki/Subespacios_fundamentales_de_una_matriz), $C(A)$, consists of all the [linear combinations](https://es.wikipedia.org/wiki/Combinaci%C3%B3n_lineal) of the columns of a <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrix</a> $A$. These combinations are the possible vectors $Ax$. This space is fundamental for solving the equation $Ax = b$, since to solve that equation we must express $b$ as a combination of the columns. The system $Ax = b$ has a solution only if $b$ lies in the column space of $A$. Since <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrices</a> have the form $m \times n$, their columns have $m$ components ($m$ being the number of rows). The [column space](https://es.wikipedia.org/wiki/Subespacios_fundamentales_de_una_matriz) is therefore a *subspace* of $\mathbb{R}^m$, not of $\mathbb{R}^n$.
Finally, the other space that makes up the [fundamental subspaces](https://es.wikipedia.org/wiki/Subespacios_fundamentales_de_una_matriz) of a <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrix</a> is the [row space](https://es.wikipedia.org/wiki/Subespacios_fundamentales_de_una_matriz), which is made up of the [linear combinations](https://es.wikipedia.org/wiki/Combinaci%C3%B3n_lineal) of the rows of a <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrix</a>.
To obtain these spaces, we can again turn to [SymPy](http://www.sympy.org/es/). First we have to obtain the [row echelon form](https://es.wikipedia.org/wiki/Matriz_escalonada) of the <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrix</a>, which is the form we arrive at after the process of [elimination](https://es.wikipedia.org/wiki/Eliminaci%C3%B3n_de_Gauss-Jordan).
```python
# A.rref() reduced row echelon form.
A = sympy.Matrix( [[2,-3,-8, 7],
[-2,-1,2,-7],
[1 ,0,-3, 6]])
A.rref() # (0, 1, 2) are the positions of the pivot columns.
```
$\displaystyle \left( \left[\begin{matrix}1 & 0 & 0 & 0\\0 & 1 & 0 & 3\\0 & 0 & 1 & -2\end{matrix}\right], \ \left( 0, \ 1, \ 2\right)\right)$
```python
# Column space
[ A[:,c] for c in A.rref()[1] ]
```
$\displaystyle \left[ \left[\begin{matrix}2\\-2\\1\end{matrix}\right], \ \left[\begin{matrix}-3\\-1\\0\end{matrix}\right], \ \left[\begin{matrix}-8\\2\\-3\end{matrix}\right]\right]$
```python
# Row space
[ A.rref()[0][r,:] for r in A.rref()[1] ]
```
$\displaystyle \left[ \left[\begin{matrix}1 & 0 & 0 & 0\end{matrix}\right], \ \left[\begin{matrix}0 & 1 & 0 & 3\end{matrix}\right], \ \left[\begin{matrix}0 & 0 & 1 & -2\end{matrix}\right]\right]$
## Rank
Another concept that is also linked to [linear independence](https://es.wikipedia.org/wiki/Dependencia_e_independencia_lineal) is the <a href="https://es.wikipedia.org/wiki/Rango_(%C3%A1lgebra_lineal)">rank</a>. The numbers of rows and columns give us the size of a <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrix</a>, but this does not necessarily represent the true size of the [linear system](https://es.wikipedia.org/wiki/Sistema_de_ecuaciones_lineales): for example, if a <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrix</a> $A$ has two equal rows, the second row disappears during the [elimination](https://es.wikipedia.org/wiki/Eliminaci%C3%B3n_de_Gauss-Jordan) process. The true size of $A$ is given by its <a href="https://es.wikipedia.org/wiki/Rango_(%C3%A1lgebra_lineal)">rank</a>. The <a href="https://es.wikipedia.org/wiki/Rango_(%C3%A1lgebra_lineal)">rank</a> of a <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrix</a> is the maximum number of columns (respectively, rows) that are [linearly independent](https://es.wikipedia.org/wiki/Dependencia_e_independencia_lineal). For example, if we have the following 3 x 4 <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrix</a>:
$$A = \begin{bmatrix}1 & 1 & 2 & 4\\1 & 2 & 2 & 5
\\ 1 & 3 & 2 & 6\end{bmatrix}$$
We can see that the third column $(2, 2, 2)$ is a multiple of the first, and that the fourth column $(4, 5, 6)$ is the sum of the first 3 columns. Therefore the <a href="https://es.wikipedia.org/wiki/Rango_(%C3%A1lgebra_lineal)">rank</a> of $A$ equals 2, since the third and fourth columns can be eliminated.
Obviously, we can also compute the <a href="https://es.wikipedia.org/wiki/Rango_(%C3%A1lgebra_lineal)">rank</a> with the help of [Python](http://python.org/).
```python
# Computing the rank with SymPy
A = sympy.Matrix([[1, 1, 2, 4],
[1, 2, 2, 5],
[1, 3, 2, 6]])
A
```
$\displaystyle \left[\begin{matrix}1 & 1 & 2 & 4\\1 & 2 & 2 & 5\\1 & 3 & 2 & 6\end{matrix}\right]$
```python
# Rank with SymPy
A.rank()
```
$\displaystyle 2$
```python
# Rank with NumPy
A = np.array([[1, 1, 2, 4],
[1, 2, 2, 5],
[1, 3, 2, 6]])
np.linalg.matrix_rank(A)
```
2
A useful application of computing the <a href="https://es.wikipedia.org/wiki/Rango_(%C3%A1lgebra_lineal)">rank</a> of a <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrix</a> is determining the number of solutions of a [system of linear equations](https://es.wikipedia.org/wiki/Sistema_de_ecuaciones_lineales), according to the [Rouché–Frobenius theorem](https://es.wikipedia.org/wiki/Teorema_de_Rouch%C3%A9%E2%80%93Frobenius). The system has at least one solution if the rank of the coefficient matrix equals the rank of the [augmented matrix](https://es.wikipedia.org/wiki/Matriz_aumentada). In that case, it has exactly one solution if that rank also equals the number of unknowns.
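A small sketch (added; it reuses the system solved earlier as the example) of how this criterion can be checked in code, by comparing the rank of the coefficient matrix with the rank of the augmented matrix:
```python
# Rouché–Frobenius: compare rank(A) with the rank of the augmented matrix [A|b]
import numpy as np

A = np.array([[2., 3., 5.],
              [3., 6., 2.],
              [8., 3., 6.]])
b = np.array([[52.], [61.], [75.]])

rank_A  = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.hstack([A, b]))

if rank_A == rank_Ab == A.shape[1]:
    print("unique solution")
elif rank_A == rank_Ab:
    print("infinitely many solutions")
else:
    print("no solution")
```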
## The norm and orthogonality
If we want to know the *length* of a [vector](http://es.wikipedia.org/wiki/Vector), all we need is the famous [Pythagorean theorem](https://es.wikipedia.org/wiki/Teorema_de_Pit%C3%A1goras). In the plane $\mathbb{R}^2$, the *length* of a [vector](http://es.wikipedia.org/wiki/Vector) $v=\begin{bmatrix}a \\ b \end{bmatrix}$ equals the distance from the origin $(0, 0)$ to the point $(a, b)$. This distance can easily be computed thanks to the [Pythagorean theorem](https://es.wikipedia.org/wiki/Teorema_de_Pit%C3%A1goras) and equals $\sqrt{a^2 + b^2}$, as can be seen in the following figure:
```python
# <!-- collapse=True -->
# Computing the length of a vector
# it forms a right triangle
ax = move_spines()
ax.set_xlim(-6, 6)
ax.set_ylim(-6, 6)
ax.grid()
v = np.array([4, 6])
vect_fig(v, "blue")
a = ax.vlines(x=v[0], ymin=0, ymax = 6, linestyle='--', color='g')
```
From this definition we can see that $a^2 + b^2 = v \cdot v$, so we are now in a position to define what in [Linear Algebra](http://relopezbriega.github.io/tag/algebra.html) is known as the [norm](https://es.wikipedia.org/wiki/Norma_vectorial).
The *length* or [norm](https://es.wikipedia.org/wiki/Norma_vectorial) of a [vector](http://es.wikipedia.org/wiki/Vector) $v = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix}$ in $\mathbb{R}^n$ is the non-negative number $||v||$ defined by:
$$||v|| = \sqrt{v \cdot v} = \sqrt{v_1^2 + v_2^2 + \dots + v_n^2}$$
That is, the [norm](https://es.wikipedia.org/wiki/Norma_vectorial) of a [vector](http://es.wikipedia.org/wiki/Vector) equals the square root of the sum of the squares of its components.
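A short sketch (added, with an arbitrary vector) showing the norm computed both from the definition and with `np.linalg.norm`:
```python
# norm of a vector: from the definition and with np.linalg.norm
import numpy as np

v = np.array([4., 6.])

norm_from_definition = np.sqrt(v.dot(v))   # sqrt(v . v)
norm_numpy = np.linalg.norm(v)

print(norm_from_definition, norm_numpy)    # both approximately 7.2111
```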
### Orthogonality
The concept of [perpendicularity](https://es.wikipedia.org/wiki/Perpendicularidad) is fundamental in [geometry](https://es.wikipedia.org/wiki/Geometr%C3%ADa). Carried over to [vectors](http://es.wikipedia.org/wiki/Vector) in $\mathbb{R}^n$, this concept is called <a href="https://es.wikipedia.org/wiki/Ortogonalidad_(matem%C3%A1ticas)">orthogonality</a>.
Two [vectors](http://es.wikipedia.org/wiki/Vector) $v$ and $w$ in $\mathbb{R}^n$ are <a href="https://es.wikipedia.org/wiki/Ortogonalidad_(matem%C3%A1ticas)">orthogonal</a> to each other if their [inner product](https://es.wikipedia.org/wiki/Producto_escalar) equals zero, that is, $v \cdot w = 0$.
Geometrically, we can see it as follows:
```python
# <!-- collapse=True -->
# Orthogonal vectors
ax = move_spines()
ax.set_xlim(-6, 6)
ax.set_ylim(-6, 6)
ax.grid()
vecs = [np.array([4, 6]), np.array([-3, 2])]
for v in vecs:
vect_fig(v, "blue")
a = ax.plot([-3, 4], [2, 6], linestyle='--', color='g')
```
```python
# checking their inner product.
v = np.array([4, 6])
w = np.array([-3, 2])
v.dot(w)
```
0
A [set](http://relopezbriega.github.io/blog/2015/10/11/conjuntos-con-python/) of [vectors](http://es.wikipedia.org/wiki/Vector) in $\mathbb{R}^n$ is <a href="https://es.wikipedia.org/wiki/Ortogonalidad_(matem%C3%A1ticas)">orthogonal</a> if every pair of distinct [vectors](http://es.wikipedia.org/wiki/Vector) in the [set](http://relopezbriega.github.io/blog/2015/10/11/conjuntos-con-python/) is <a href="https://es.wikipedia.org/wiki/Ortogonalidad_(matem%C3%A1ticas)">orthogonal</a>. That is:
$v_i \cdot v_j = 0$ for all $i, j = 1, 2, \dots, k$ with $i \ne j$.
For example, if we have the following [set](http://relopezbriega.github.io/blog/2015/10/11/conjuntos-con-python/) of [vectors](http://es.wikipedia.org/wiki/Vector) in $\mathbb{R}^3$:
$$v1=\begin{bmatrix} 2 \\ 1 \\ -1\end{bmatrix} \
v2=\begin{bmatrix} 0 \\ 1 \\ 1\end{bmatrix}
v3=\begin{bmatrix} 1 \\ -1 \\ 1\end{bmatrix}$$
In this case, we should check that:
$$v1 \cdot v2 = 0 \\
v2 \cdot v3 = 0 \\
v1 \cdot v3 = 0 $$
```python
# checking the orthogonality of the set
v1 = np.array([2, 1, -1])
v2 = np.array([0, 1, 1])
v3 = np.array([1, -1, 1])
v1.dot(v2), v2.dot(v3), v1.dot(v3)
```
(0, 0, 0)
As we can see, this set is <a href="https://es.wikipedia.org/wiki/Ortogonalidad_(matem%C3%A1ticas)">orthogonal</a>. One of the main advantages of working with <a href="https://es.wikipedia.org/wiki/Ortogonalidad_(matem%C3%A1ticas)">orthogonal</a> [sets](http://relopezbriega.github.io/blog/2015/10/11/conjuntos-con-python/) of [vectors](http://es.wikipedia.org/wiki/Vector) is that they are necessarily [linearly independent](https://es.wikipedia.org/wiki/Dependencia_e_independencia_lineal).
The concept of <a href="https://es.wikipedia.org/wiki/Ortogonalidad_(matem%C3%A1ticas)">orthogonality</a> is one of the most important and useful in [Linear Algebra](http://relopezbriega.github.io/tag/algebra.html) and arises in many practical situations, especially when we want to compute distances.
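We can also check (an added sketch) that the orthogonal set above is indeed linearly independent, by verifying that the matrix whose columns are these vectors has full rank:
```python
# an orthogonal set of non-zero vectors is linearly independent: rank check
import numpy as np

v1 = np.array([2, 1, -1])
v2 = np.array([0, 1, 1])
v3 = np.array([1, -1, 1])

M = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(M))  # 3, so the three vectors are linearly independent
```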
## Determinant
The <a href="https://es.wikipedia.org/wiki/Determinante_(matem%C3%A1tica)">determinant</a> is a special number that can be computed for [square matrices](https://es.wikipedia.org/wiki/Matriz_cuadrada). This number tells us many things about the <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrix</a>. For example, it tells us whether the matrix is [invertible](https://es.wikipedia.org/wiki/Matriz_invertible) or not: if the determinant equals zero, the matrix is not invertible. When the matrix is invertible, the determinant of $A^{-1}$ equals $1/(\det \ A)$. The determinant can also be useful for computing areas.
To obtain the <a href="https://es.wikipedia.org/wiki/Determinante_(matem%C3%A1tica)">determinant</a> of a small <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrix</a> (2x2 or 3x3), we compute the sum of the products of the diagonals of the matrix in one direction minus the sum of the products of the diagonals in the other direction. It is denoted by the symbol $|A|$ or $\det A$.
Some of its properties to keep in mind are:
a. The <a href="https://es.wikipedia.org/wiki/Determinante_(matem%C3%A1tica)">determinant</a> of the [identity matrix](https://es.wikipedia.org/wiki/Matriz_identidad) equals 1: $\det I = 1$.
b. A <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrix</a> $A$ is *singular* (has no [inverse](https://es.wikipedia.org/wiki/Matriz_invertible)) if its <a href="https://es.wikipedia.org/wiki/Determinante_(matem%C3%A1tica)">determinant</a> equals zero.
c. The <a href="https://es.wikipedia.org/wiki/Determinante_(matem%C3%A1tica)">determinant</a> changes sign when two columns (or rows) are exchanged.
d. If two rows of a <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrix</a> $A$ are equal, then the <a href="https://es.wikipedia.org/wiki/Determinante_(matem%C3%A1tica)">determinant</a> is zero.
e. If a row of the <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrix</a> $A$ consists entirely of zeros, then the <a href="https://es.wikipedia.org/wiki/Determinante_(matem%C3%A1tica)">determinant</a> is zero.
f. The [transpose matrix](http://es.wikipedia.org/wiki/Matriz_transpuesta) $A^T$ has the same <a href="https://es.wikipedia.org/wiki/Determinante_(matem%C3%A1tica)">determinant</a> as $A$.
g. The <a href="https://es.wikipedia.org/wiki/Determinante_(matem%C3%A1tica)">determinant</a> of $AB$ equals the determinant of $A$ multiplied by the determinant of $B$: $\det (AB) = \det A \cdot \det B$.
h. The <a href="https://es.wikipedia.org/wiki/Determinante_(matem%C3%A1tica)">determinant</a> is a [linear function](https://es.wikipedia.org/wiki/Funci%C3%B3n_lineal) of each row separately. If we multiply a single row by $\alpha$, the determinant is also multiplied by $\alpha$.
Let's see how we can obtain the <a href="https://es.wikipedia.org/wiki/Determinante_(matem%C3%A1tica)">determinant</a> with the help of [Python](http://python.org/)
```python
# Determinant with SymPy
A = sympy.Matrix( [[1, 2, 3],
[2,-2, 4],
[2, 2, 5]] )
A.det()
```
$\displaystyle 2$
```python
# Determinant with NumPy
A = np.array([[1, 2, 3],
[2,-2, 4],
[2, 2, 5]] )
np.linalg.det(A)
```
$\displaystyle 2.0$
```python
# Determinant as a linear function of a row
A[0] = A[0:1]*5
np.linalg.det(A)
```
$\displaystyle 9.99999999999998$
```python
# sign change of the determinant (two rows swapped)
A = sympy.Matrix( [[2,-2, 4],
[1, 2, 3],
[2, 2, 5]] )
A.det()
```
$\displaystyle -2$
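To round off the list of properties (this verification is an addition), properties f. and g. can be checked numerically for the 3x3 matrix from the SymPy example together with an arbitrary second matrix B:
```python
# checking det(A^T) = det(A) and det(AB) = det(A) * det(B)
import numpy as np

A = np.array([[1., 2., 3.],
              [2., -2., 4.],
              [2., 2., 5.]])
B = np.array([[2., 0., 1.],
              [1., 3., 0.],
              [0., 1., 4.]])

print(np.isclose(np.linalg.det(A.T), np.linalg.det(A)))                        # True
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))   # True
```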
*This notebook was originally created as a blog post by [Raúl E. López Briega](http://relopezbriega.com.ar/) on [Mi blog sobre Python](http://relopezbriega.github.io). The content is released under the BSD license.*
*This post was written using IPython notebook. You can download the [notebook](https://github.com/relopezbriega/relopezbriega.github.io/blob/master/downloads/LinearAlgebraPython.ipynb) or view its static version on [nbviewer](http://nbviewer.ipython.org/github/relopezbriega/relopezbriega.github.io/blob/master/downloads/LinearAlgebraPython.ipynb).*
# Modeling Magnetic Anomalies
In this section we discuss how to solve the PDE for magnetic field anomalies due to a variation of
the magnetic susceptibility in the subsurface using `esys.escript`. It is assumed that you have
worked through the [introduction section on `esys.escript`](escriptBasics.ipynb).
First we will provide the basic theory:
## Theoretical Background
The observed magnetic field $\mathbf{B}_t$ results from the interaction
between the Earth's background field $\mathbf{B}_b$ and the magnetisation $\mathbf{M}$.
Under the assumption of a small magnetisation, the
magnetisation is given as
\begin{equation}
\mathbf{M} = k \mathbf{B}_b
\end{equation}
where $k \ge 0$ is the magnetic susceptibility.
The magnetisation induces a total magnetic field $\mathbf{B}_t$ which can be decomposed in the background field $\mathbf{B}_b$ and
the magnetic anomaly field $\mathbf{B}_a$:
\begin{align}
\mathbf{B}_t & = \mathbf{B}_b + \mathbf{B}_a
\end{align}
In the SI system $\mathbf{B}$ has units of Tesla $T$. The Earth's background field $\mathbf{B}_b$ varies between $25,000$ and $65,000\,nT$ ($nT = 10^{-9}\,T$ refers to *nano Tesla*).
The total magnetic field anomaly $b_a$ is the difference between the intensity of the
total magnetic field $\mathbf{B}_t$ and that of the background field $\mathbf{B}_b$:
\begin{equation}
b_a=|\mathbf{B}_t|-|\mathbf{B}_b|
\end{equation}
where in 2D
\begin{equation}
|\mathbf{B}| = \sqrt{B_x^2 + B_z^2} \mbox{ if } \mathbf{B}=(B_x, B_z)
\end{equation}
The total magnetic field anomaly $b_a$ is what is measured in the field.
The Gauss's law for magnetism states that the magnetic flux $\mathbf{B_F}=\mathbf{B}_t+\mathbf{M}$
is divergence free:
\begin{equation}
\nabla^t \mathbf{B_F} = 0
\end{equation}
Assuming that $\nabla^t \mathbf{B}_b=0$ this simplifies to
\begin{equation} \label{EQGAUSS}
\nabla^t (\mathbf{B}_a + k \mathbf{B}_b ) = 0
\end{equation}
Analogously to gravity one can introduce a scalar potential $U$ with
\begin{equation} \label{EQPOTENTIAL}
\mathbf{B}_a = - \nabla U = (-\frac{\partial U}{\partial x}, -\frac{\partial U}{\partial z})
\end{equation}
in the 2D case. Equations \eqref{EQGAUSS} and \eqref{EQPOTENTIAL} form a PDE for the scalar potential $U$.
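Substituting \eqref{EQPOTENTIAL} into \eqref{EQGAUSS} shows explicitly (this restatement is added for clarity only) that the scalar potential satisfies a Poisson-type equation:
\begin{equation}
\nabla^t \left( - \nabla U + k \, \mathbf{B}_b \right) = 0
\quad \mbox{ or equivalently } \quad
\nabla^2 U = \nabla^t \left( k \, \mathbf{B}_b \right)
\end{equation}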
## In the `esys.escript` form
Recall the `esys.escript` PDE template: When $u$ is the unknown
the `flux` vector $\mathbf{F}$ is defined as
\begin{equation} \label{EQFLUX}
\mathbf{F} = - \mathbf{A} \mathbf{\nabla} u +\mathbf{X}
\end{equation}
with some matrix $\mathbf{A}$ and some vector $\mathbf{X}$. Then the flux vector $\mathbf{F}$ needs to fulfill the conservation equation :
\begin{equation}\label{EQCONSERVATION}
\mathbf{\nabla}^t \; \mathbf{F} + D \; u = Y
\end{equation}
where $D$ is a scalar and $Y$ is the right hand side.
We identify the flux $\mathbf{F}$ from \eqref{EQGAUSS} as $\mathbf{B}_a + k \mathbf{B}_b$, which gives
with \eqref{EQPOTENTIAL}
\begin{equation} \label{EQFLUX2}
\mathbf{F} = - \mathbf{\nabla} U +k \mathbf{B}_b
\end{equation}
from which we see that we need to choose
$\mathbf{A}$ as the identity matrix and $\mathbf{X}=k \mathbf{B}_b$ (that is the magnetization).
In this case it is $D=0$ and $Y=0$.
###### Problem
For a domain of $2 \times 2 km$ we want to model the total magnetic field anomaly $b_a$ over a horizontal
transect. The transect is located at a height of $H_0=1200m$ above the bottom edge of the domain and
is assumed to define the surface of the Earth.
The magnetic anomaly is produced by a vertical dyke of width $w=60m$ whose top end
is located at a depth $D=100m$ below the surface, along the vertical center line of the domain
(at $x_0=\frac{L_x}{2}=1000m$). It is assumed that a 2D model is sufficient.
Of particular interest is the question of how the total magnetic field response of the dyke
would be different at the North Pole, the South Pole and the equator.
```python
dx=8 # grid line spacing in [m]
NEx=250
NEz=250
H0=1200 # height [m] of transect above bottom of domain (will be locked to grid)
b_b=45000.0 # intensity of the background magnetic field in nT
ksi=0.015 # assumed susceptibility
D=100
w=60
%matplotlib notebook
```
## Domain and Transect Set-up
Before we can do any modeling we need to set up the domain. This is done analogously to the gravity case.
```python
from esys.escript import *
from esys.finley import Rectangle
Lx=dx*NEx
Lz=dx*NEz
print("Domain dimension = %g x %g m"%(Lx, Lz))
domain=Rectangle(n0=NEx, n1=NEz, l0=Lx, l1=Lz)
```
Domain dimension = 2000 x 2000 m
Here we also define the `Locator` to pick the values for the total magnetic field anomaly $b_a$ from a
`Data` object. Again we first create the offsets along the transect.
```python
x0_transect=np.linspace(0., Lx, NEx)
```
We then add the appropriate $x_1$ coordinate (=`H0`) to build up the 2D coordinates in the domain:
```python
x_transect=[ (x0, H0) for x0 in x0_transect]
```
Now we are ready to create the `Locator` named `transect_locator`, which we will later use to fetch the values
of the total magnetic field anomaly $b_a$ along the transect. We will calculate $b_a$ at element centers,
indicated by the argument `where=ReducedFunction(domain)`:
```python
from esys.escript.pdetools import Locator
transect_locator=Locator(where=ReducedFunction(domain), x=x_transect )
```
Because `transect_locator` may have moved the requested `x_transect` locations toward
element centers, and we want to use these locations later for plotting $b_a$ over the profile, we
retrieve the $x_0$-coordinates that were actually used (= offset within the transect):
```python
x0_transect=[ x[0] for x in transect_locator.getX()]
```
## Setup the PDE
Now we are ready to set up the PDE we need to solve to obtain the scalar potential $U$ for the
magnetic anomaly field $\mathbf{B}_a$:
```python
from esys.escript.linearPDEs import LinearSinglePDE
model=LinearSinglePDE(domain)
```
The easy bit is the coefficient `A`. It is the same as in the gravity case:
```python
model.setValue(A=identityTensor(domain))
```
Again we fix the potential to zero at the top and the bottom of the domain:
```python
x=domain.getX()
q_bottom=whereZero(x[1]) # 1 for face x_1=0
q_top=whereZero(x[1]-Lz) # 1 for face x_1=Lz
model.setValue(q=q_bottom+q_top)
```
The magnetization is the product of the
background magnetic field $\mathbf{B}_b$
and the susceptibility $k$; it is this magnetization that becomes the value of $\mathbf{X}$
in the PDE template.
We start with the susceptibility $k$. We want the value `ksi`
at locations with $x_1$ lower than $H_0-D$, where $H_0$ is the surface location
above the bottom of the domain and $D$ is the depth of the top edge of the dyke below the surface.
Again we use `whereNegative` to set this up. This time we define the susceptibility at the centers
of elements:
```python
X=ReducedFunction(domain).getX()
m1=whereNegative(X[1]-(H0-D))
```
As a second condition we want a positive susceptibility $k$ only at those locations that are
less than $\frac{w}{2}$ away from the central vertical line at $x_0=\frac{L_x}{2}$, that means
that
\begin{equation}
| x_0 - \frac{Lx}{2} | < \frac{w}{2}
\end{equation}
which is implemented using the `whereNegative` once again:
```python
m2=whereNegative(abs(X[0]-Lx/2)-w/2)
```
The susceptibility $k$ is then set to:
```python
k=ksi*m1*m2
```
Let's quickly check if we have done the right thing and plot the distribution of `k` with
`matplotlib`:
```python
k_np=convertToNumpy(k)
x_np=convertToNumpy(k.getFunctionSpace().getX())
import matplotlib.pyplot as plt
plt.clf()
plt.figure(figsize=(5,5))
plt.tricontourf(x_np[0], x_np[1], k_np[0], 15)
plt.xlabel('x [m]')
plt.ylabel('y [m]')
plt.title("susceptibility distribution")
```
[figure: susceptibility distribution]
At the equator the background magnetic field is oriented horizontally, parallel to the surface of the Earth:
\begin{equation}\label{EQBBEquator}
\mathbf{B}_b=
\begin{bmatrix}
-b_b \\
0
\end{bmatrix}
\end{equation}
With this we can now set the coefficient `X`:
```python
B_b=[-b_b, 0.]
model.setValue(X=k*B_b)
```
and get the scalar potential
```python
u=model.getSolution()
```
Now we can get the magnetic anomaly field $\mathbf{B}_a$ as the negative gradient of the scalar potential. Here we calculate
the gradient at the centers of elements:
```python
B_a=-grad(u, where=ReducedFunction(domain))
```
The total magnetic field anomaly $b_a$ is calculated and its values are picked along the transect:
```python
b_a=length(B_a+B_b)-b_b
b_a_transect=transect_locator.getValue(b_a)
```
```python
plt.figure()#figsize=(12,5))
plt.plot(x0_transect, b_a_transect)
plt.xlabel('offset [m]')
plt.ylabel('total anomaly [nT]')
plt.title("total magnetic field anomaly for dyke at the equator")
plt.show()
```
[figure: total magnetic field anomaly for dyke at the equator]
## Total Magnetic Anomaly at the Poles
We want to compare the total magnetic field anomaly $b_a$ we have just calculated
for the equator with the corresponding anomalies at the poles.
To simplify the coding we write a function `getTotalMagneticFieldAnomaly` that takes a
background magnetic field and returns the total magnetic field anomaly along the transect.
```python
def getTotalMagneticFieldAnomaly(B_b):
model.setValue(X=k*B_b)
u=model.getSolution()
B_a=-grad(u, where=ReducedFunction(domain))
b_a=length(B_a+B_b)-sqrt(B_b[0]**2+B_b[1]**2)
b_a_transect=transect_locator.getValue(b_a)
return b_a_transect
```
```python
plt.figure()#figsize=(12,5))
plt.plot(x0_transect, getTotalMagneticFieldAnomaly(B_b=[0, -b_b]), label="north pole=south")
plt.plot(x0_transect, getTotalMagneticFieldAnomaly(B_b=[-b_b, 0.]), label="equator")
plt.plot(x0_transect, getTotalMagneticFieldAnomaly(B_b=[0, b_b]), label="south pole=north")
plt.xlabel('offset [m]')
plt.ylabel('total anomaly [nT]')
plt.title("total magnetic field anomaly for dyke")
plt.legend()
plt.show()
```
[figure: total magnetic field anomaly for dyke at the poles and at the equator]
| |Pierre Proulx, Eng., Professor|
|:---|:---|
|Département de génie chimique et de génie biotechnologique |**GCH200 - Phénomènes d'échanges I (Transport Phenomena I)**|
## Example 7-6.1
#### Solution
##### Mass balance
$\begin{equation*}
\boxed{\rho v_1 S_1 - \rho v_2 S_2 = 0}
\end{equation*}$ (1)
##### Linear momentum balance in the z direction
$\begin{equation*}
\boxed{ ( v_1w_1 + p_1 S_1)-( v_2w_2 + p_2 S_2) =
\vec F_{fluid \rightarrow surface}}
\end{equation*}$ (2)
but the force that the fluid exerts on the pipe can be evaluated by recognizing that the force exerted by the fluid on the pipe in the z direction is essentially the pressure exerted by the fluid on the annular surface just after the expansion, so $F =-p_1 (S_2-S_1)$. Substituting into (2):
$\begin{equation*}
\boxed{ ( v_1w + p_1 S_1)-( v_2w + p_2 S_2) =
-p_1 (S_2-S_1)}
\end{equation*}$
Recall that $w_1=w_2=w$ and that $\rho v_1S_1=\rho v_2 S_2$
$\begin{equation*}
\boxed{ S_1( \rho v_1^2 + p_1) - S_2( \rho v_2^2 + p_2) =
-p_1 (S_2-S_1)}
\end{equation*}$
or
$\begin{equation*}
\boxed{ S_1 \rho v_1^2 - S_2 \rho v_2^2 = -S_2(p_1-p_2 )}
\end{equation*}$ (3)
## 4. Mechanical energy balance
$\begin{equation*}
\boxed{ \bigg ( \frac {1}{2} v_1^2 + \frac {p_1}{\rho_1} \bigg)-
\bigg ( \frac {1}{2} v_2^2 + \frac {p_2}{\rho_2} \bigg )=
E_v}
\end{equation*}$
```python
# Setting up the display and the symbolic computation tools
#
import sympy as sp
from IPython.display import *
sp.init_printing(use_latex=True)
%matplotlib inline
#
v_1,v_2,p_1,p_2,S_1,S_2,rho,E_v=sp.symbols('v_1,v_2,p_1,p_2,S_1,S_2,rho,E_v')
eq1=sp.Eq(v_1*S_1-v_2*S_2, 0)                              # mass balance (1), the density cancels
eq2=sp.Eq(v_1**2*S_1*rho-v_2**2*S_2*rho+S_2*(p_1-p_2), 0)  # momentum balance (3)
eq3=sp.Eq(sp.Rational(1,2)*v_1**2+p_1/rho-(sp.Rational(1,2)*v_2**2+p_2/rho)-E_v, 0)  # mechanical energy balance (sign of p_2/rho corrected)
solution=sp.solve((eq1,eq2,eq3),v_2,p_2,E_v)
display(solution[0][2].simplify())
```
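As a quick sanity check (this cell is an addition to the original example; it reuses the symbols and `solution` from the cell above), the friction loss should reduce to the classical Borda–Carnot expression $E_v=\frac{1}{2}\left(v_1-v_2\right)^2=\frac{1}{2}v_1^2\left(1-\frac{S_1}{S_2}\right)^2$:
```python
# compare the symbolic friction loss with the Borda-Carnot expression 1/2*(v_1 - v_1*S_1/S_2)**2
E_v_solution = solution[0][2]
borda_carnot = sp.Rational(1, 2)*(v_1 - v_1*S_1/S_2)**2
display(sp.simplify(E_v_solution - borda_carnot))   # expected to simplify to 0
```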
# Solving the Solow model with R&D
## Intro
In this project we will try to solve the solow model with R&D and furthermore extend the model by including human capital.
Our goal will be to dig into the following aspects:
* Finding the steady state rates by solveing for several transition equation
* Finding numeric values for the steady state rates
* Visualizing the transition towards steady state
The realistic parameters, which will be used to find a numeric value for the steady state rates can be found in the textbook "*Introducing Advanced Macroeconomics*" by Peter Birch Sørensen and Hans Jørgen Whitta-Jacobsen.
# Post peer feedback changes
* Improved both transition plots - More interactive features, made arrow follow the equilibrium etc.
* Some people had trouble running the plots - these errors should be fixed now
* Fixed typos and mislabels
### More feedback ideas, which were taken into consideration but ultimately not implemented
* Direction fields
* Calculating speed of convergence
```python
import numpy as np
import matplotlib.pyplot as plt
import ipywidgets as widgets
import sympy as sm
sm.init_printing(use_unicode=True)
```
We consider the following model for a closed economy, where the following equations are given:
1. \\[Y_{t}=K_{t}^{\alpha}\left(A_{t}L_{Y_{t}}\right)^{1-\alpha}, 0<\alpha<1\\]
2. \\[K_{t+1}-K_{t}=s_{K}Y_{t}-\delta K_{t},0<\delta<1,0<s_{k}<1,K_{0}>0 \text{ provided} \\]
3. \\[A_{t+1}-A_{t}=\rho A_{t}^{\phi}L_{A_{t}}^{\lambda},\rho>0,\phi>0,0<\lambda<1 \\]
4. \\[L_{A_{t}}=s_{R}L_{t},0<s_{R}<1 \\]
5. \\[L_{t}=L_{A_{t}}+L_{Y_{t}} \\]
6. \\[L_{t+1}=\left(1+n\right)L_{t},n>0 \\]
Equation (1) is a Cobb-Douglas production function, which describes the aggregated production, $Y_{t}$, as a function of financial capital, $K_{t}$, production workers, $L_{Y_{t}}$ and the knowledge level, $A_{t}$, which determines the productivity of the workers.
Equation (2) describes how financial capital develops over time, where $s_{K}$ is the saving rate.
Equation (3) shows the development in the knowledge level, where $A_{t}$ is an expression for the knowledge at the time $t$ and $L_{A_{t}}$ is the amount of researchers. Equation (4) shows that the latter is the product of the population, $L_{t}$, and the share of researchers, $s_{R}$.
The following definitions will also be used throughout this project:
7. \\[ y_{t}=\frac{Y_{t}}{L_{Y_{t}}};k_{t}=\frac{K_{t}}{L_{Y_{t}}} \\]
8. \\[\tilde{y}=\frac{y_{t}}{A_{t}};\tilde{k}=\frac{k_{t}}{A_{t}} \\]
Where **7.** is the definition of the capital-labor ratio and **8.** is the definition of the technology-adjusted capital-labor ratio.
# The growth in technology
Our first goal in this model is to find the steady state value of the growth rate of technology.
The following transition equation is given for the growth in technology:
9. \\[g_{t+1}=\left(1+n\right)^{\lambda}g_{t}\left(1+g_{t}\right)^{\phi-1}\\]
We then wish to find the steady state for this equation. That is to solve for $g^*$ the following equation:
10. \\[g^*=\left(1+n\right)^{\lambda}g^*\left(1+g^*\right)^{\phi-1}\\]
```python
# Define parameters as symbols
g = sm.symbols('g')
n = sm.symbols('n')
l = sm.symbols('lambda')
phi = sm.symbols('phi')
```
```python
# Define the transition equation
trans_g = sm.Eq(1,(1+n)**(l)*1*(1+g)**(phi-1))
```
```python
# Solve for g
ss_g = sm.solve(trans_g,g)[0]
ss_g
```
Next we will estimate the numeric value of the steady state growth rate of technology by using realistic parameters.
$n = 0.01$
$\lambda = 1$
$\phi = 0.50$
```python
# Define function and print with given parameters
sol_func_g = sm.lambdify((n,l,phi),ss_g)
sol_num_g = sol_func_g(0.01,1,0.5)
f"The steady state growth rate of technology is {sol_num_g:.2f}"
```
'The steady state growth rate of technology is 0.02'
By using these given parameters, the steady state rate of growth in technology is estimated to be 2%.
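As a quick numerical check (a small sketch added here, not part of the original analysis), we can iterate transition equation (9) with these parameters and see the growth rate converge towards $g^* \approx 0.02$ from an arbitrary starting value:
```python
# iterate g_{t+1} = (1+n)^lambda * g_t * (1+g_t)^(phi-1) with n=0.01, lambda=1, phi=0.5
g_path = [0.10]  # arbitrary initial growth rate
for _ in range(2000):
    g_path.append((1 + 0.01)**1 * g_path[-1] * (1 + g_path[-1])**(0.5 - 1))
print(f"g after 2000 periods: {g_path[-1]:.4f}")
```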
# Solving the "simple" model with R&D
Our next step is to solve the simple model with R&D.
The transition equation for capital accumulation can then be expressed as:
11. \\[ {\tilde{k}_{t+1}=\frac{1}{\left(1+n\right)\left(1+g_{t}\right)}\left(s_{k}\tilde{k}_{t}^{\alpha}+\left(1-\delta\right)\tilde{k}_{t}\right)} \\]
To find the solution for this model, we will have to find the steady state rate. That is to solve for $\tilde{k}$ in the following equation:
12. \\[{\tilde{k}^{*}=\frac{1}{\left(1+n\right)\left(1+g_{t}\right)}\left(s_{k}\tilde{k}^{\alpha*}+\left(1-\delta\right)\tilde{k}^{*}\right)} \\]
But first, we will have to define the following parameters and variables as symbols
```python
# Define variables and parameters as symbols
k = sm.symbols('k')
alpha = sm.symbols('alpha')
delta = sm.symbols('delta')
s_k = sm.symbols('s_K')
```
```python
# Define equation and solve for k
trans_k = sm.Eq(k,(s_k*k**alpha+(1-delta)*k)/((1+n)*(1+g)))
ss_k = sm.solve(trans_k,k)[0]
ss_k
```
Let's try to find the numeric value of the steady state level of capital per efficient worker by using realistic parameters:
$$s_K = 0.2$$
$$g = 0.02$$
$$n = 0.01$$
$$\delta = 0.06$$
$$\alpha = 1/3 $$
```python
# Define function and print with parameters
sol_func_k = sm.lambdify((s_k,g,n,delta,alpha),ss_k)
sol_num_k = sol_func_k(0.2,0.02,0.01,0.06,1/3)
f"The Adjusted capital-labor ratio SS is {sol_num_k:.2f}"
```
'The Adjusted capital-labor ratio SS is 3.30'
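As with the growth rate of technology, this value can be verified by iterating transition equation (11) directly (a small sketch added for verification, using the same parameters):
```python
# iterate equation (11) with s_K=0.2, alpha=1/3, delta=0.06, n=0.01, g=0.02
k_t = 1.0  # arbitrary starting value
for _ in range(500):
    k_t = (0.2*k_t**(1/3) + (1 - 0.06)*k_t) / ((1 + 0.01)*(1 + 0.02))
print(f"k after 500 periods: {k_t:.2f}")
```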
We can also see how the steady state changes when some parameters are varied while the others are held fixed. Hence, we create a slider:
```python
# Define function for the solution
def sol_num_k_func(s_k,g,n,delta,alpha):
return f"The steady state growth rate of capital per effecient worker is {((delta + g * n + g + n)/s_k)**(1/(alpha-1)):.3f}"
```
```python
# Create slider
slider_g_r = widgets.interact(sol_num_k_func,
s_k=widgets.FloatSlider(description="$s_k$", min=0.0, max=0.5, step=0.05, value=0.2),
g=widgets.FloatSlider(description="$g$", min=0.0, max=0.1, step=0.01, value=0.02),
n=widgets.FloatSlider(description="$n$", min=0.0, max=0.1, step=0.01, value=0.01),
delta=widgets.FloatSlider(description="$\delta$", min=0.0, max=0.2, step=0.02, value=0.06),
alpha=widgets.FloatSlider(description="$a$", min=0.1, max=0.67, step=0.03, value=1/3) )
```
interactive(children=(FloatSlider(value=0.2, description='$s_k$', max=0.5, step=0.05), FloatSlider(value=0.02,…
## Visualization of the path towards steady state - The transition diagram
Our next goal is to create a plot for the transition diagram.
```python
# Define functions of the transition equation and the numeric SS solutions.
def trans_k_func(k, s_k, alpha, delta, n, g):
return (s_k*k**alpha+(1-delta)*k)/((1+n)*(1+g))
def sol_val_k_func(s_k,g,n,delta,alpha):
return ((delta + g * n + g + n)/s_k)**(1/(alpha-1))
```
## Changing the value of the savings ratio
What happens to our plot if the savings ratio is changed? We will explore this by creating a simple interactive slider.
**Please run the cell below twice to resize the plot properly**
```python
from ipywidgets import interactive
def slider(s_k):
plt.figure(dpi = 100)
plt.rc('text', usetex=True)
plt.rc('font', family='sans-serif')
#range for k
k = np.arange(0.0, 15, 0.005)
# paramters:
s_K = 0.2
g = 0.02
n = 0.01
delta = 0.06
alpha = 1/3
# label and title
plt.xlabel(r'$k_t$',fontsize=20 )
plt.ylabel(r'$k_{t+1}$', fontsize=20)
plt.title(r'The transition diagram', fontsize=20)
# plot the transition equation and 45-degree equation
plt.plot(k, trans_k_func(k, s_k, alpha, delta, n, g))
plt.plot(k, k)
sol_val_k = sol_val_k_func(
delta = delta, g = g, n = n, s_k = s_k, alpha = alpha)
# Legend
plt.legend((r'$\frac{1}{\left(1+n\right)\left(1+g\right)}\left(s\tilde{k}_{t}^{\alpha}+\left(1-\delta\right)\tilde{k}_{t}\right)$',
r'$\tilde{k}_{t+1}=\tilde{k}_{t}$'),
shadow=True, handlelength=1.4, fontsize=12)
# Create arrow, which points at SS
plt.annotate( f'Adjusted capital-labor ratio SS {sol_val_k:.3f}',
xy=(sol_val_k, sol_val_k),
xytext=(sol_val_k-1, sol_val_k-3),
arrowprops=dict(facecolor='black', shrink=0.01),
)
# plot the figure and interactive slider
interactive_plot = interactive(slider,
s_k=widgets.FloatSlider(description="$s_k$",
min=0.0,
max=0.5,
step=0.04,
value=0.20,
continuous_update=True)
)
interactive_plot
```
interactive(children=(FloatSlider(value=0.2, description='$s_k$', max=0.5, step=0.04), Output()), _dom_classes…
We see that as the savings ratio increases, so does the steady state value.
# Extending the model with human capital
We now want to extend the model and therefore adds *Human capital*.
Equation 1. will be swapped by:
1'. \\[Y_{t}=K_{t}^{\alpha}H_{t}^{\varphi}\left(A_{t}L_{Y_{t}}\right)^{1-\alpha-\varphi} \\]
And a new human capital accumulation equation is given as:
13. \\[H_{t+1}-H_{t}=s_{H}Y_{t}-\delta H_{t},0<\delta<1,0<s_{H}<1,H_{0}>0 \text{ provided} \\]
This replaces equation (1), while equation (13) describes how human capital develops over time. Here $s_{H}$ is the saving rate in human capital and $\delta$ is the depreciation rate.
The per-production-worker and per-effective-production-worker variables are defined as: $h_{t}=\frac{H_{t}}{L_{Y_{t}}}$ & $\tilde{h}=\frac{h_{t}}{A_{t}}$
With human capital included in the model, new transition equations are given for human capital per efficient worker and capital per efficient worker.
14. \\[ \tilde{h}_{t+1}=\frac{1}{\left(1+n\right)\left(1+g_{t}\right)}\left(s_{H}\tilde{k}_{t}^{\alpha}\tilde{h}_{t}^{\varphi}+\left(1-\delta\right)\tilde{h}_{t}\right) \\]
15. \\[ \tilde{k}_{t+1}=\frac{1}{\left(1+n\right)\left(1+g_{t}\right)}\left(s_{K}\tilde{k}_{t}^{\alpha}\tilde{h}_{t}^{\varphi}+\left(1-\delta\right)\tilde{k}_{t}\right)\\]
Similar to the previous simple model, we will have to solve the following two equations to find the steady state values.
\\[ \tilde{k}^{*}=\frac{1}{\left(1+n\right)\left(1+g_{t}\right)}\left(s_{K}\tilde{k}^{*{\alpha}}\tilde{h}^{*{\varphi}}+\left(1-\delta\right)\tilde{k}^*\right)\\]
\\[ \tilde{h}^{*}=\frac{1}{\left(1+n\right)\left(1+g_{t}\right)}\left(s_{H}\tilde{k}^{*{\alpha}}\tilde{h}^{*{\varphi}}+\left(1-\delta\right)\tilde{h}^*\right)\\]
```python
# Define parameters as symbols
h = sm.symbols('h')
k = sm.symbols('k')
s_h =sm.symbols('s_H')
s_k = sm.symbols('s_K')
g = sm.symbols('g')
n = sm.symbols('n')
delta = sm.symbols('delta')
vphi = sm.symbols('varphi')
alpha = sm.symbols('alpha')
vphi = sm.symbols('varphi')
```
```python
# Define the two new transition equations
trans_k2 = sm.Eq(k,(s_k*k**alpha*h**vphi+(1-delta)*k)/((1+n)*(1+g)))
trans_h = sm.Eq(h,(s_h*k**alpha*h**vphi+(1-delta)*h)/((1+n)*(1+g)))
```
```python
# Solve a system of two equations
ss_kh = sm.solve([trans_k2,trans_h], (k,h))[0]
ss_kh
```
While this might not be the most aesthetically pleasing solution, it is correct.
Next we are able to find the steady state value of GDP per efficient worker, $\tilde{y}^* = (\tilde{k}^*)^\alpha(\tilde{h}^*)^\varphi $.
```python
# Find the steady state rate for GDP per effecient worker
ss_gdp_per_worker = ss_kh[0]**vphi*ss_kh[1]**alpha
ss_gdp_per_worker
```
Next we will estimate the numeric value of the steady state levels of:
* Capital per efficient worker, $k^*$
* Human capital per efficient worker, $h^*$
* GDP per efficient worker, $y^*$
by using the following parameters:
$$s_K = 0.2$$
$$s_H = 0.2$$
$$g = 0.02$$
$$n = 0.01$$
$$\delta = 0.06$$
$$\alpha = \varphi = 1/3 $$
```python
# Define and insert parameters for capital
sol_func_k2 = sm.lambdify((s_k,s_h,g,n,delta,alpha,vphi),ss_kh[0])
sol_num_k2 = sol_func_k2(0.2,0.2,0.02,0.01,0.06,1/3,1/3)
f" The Adjusted capital-labor ratio SS is {sol_num_k2:.3f}"
```
' The Adjusted capital-labor ratio SS is 10.901'
```python
# Define and insert parameters for human capital
sol_func_h = sm.lambdify((s_k,s_h,g,n,delta,alpha,vphi),ss_kh[1])
sol_num_h = sol_func_h(0.2,0.2,0.02,0.01,0.06,1/3,1/3)
f"The Adjusted human capital-labor ratio SS is {sol_num_h:.3f}"
```
'The Adjusted human capital-labor ratio SS is 10.901'
```python
# Define and insert parameters for GDP
sol_func_y = sm.lambdify((s_k,s_h,g,n,delta,alpha,vphi),ss_gdp_per_worker)
sol_num_y = sol_func_y(0.2,0.2,0.02,0.01,0.06,1/3,1/3)
f"The steady state rate for GDP pr. worker is {sol_num_y:.3f}"
```
'The steady state rate for GDP pr. worker is 4.916'
## Visualization of the path towards steady state - The transition diagram
Our next goal is to create a plot for the transition diagram for the model including human capital.
First we have to find the "nullclines". In order to do this, we first have to find the Solow equations. This is done by subtracting $\tilde{k}_{t}$ or $\tilde{h}_{t}$ from the respective transition equation.
16. \\[ \tilde{h}_{t+1}-\tilde{h}_{t}=\frac{1}{\left(1+n\right)\left(1+g_{t}\right)}\left(s_{H}\tilde{k}_{t}^{\alpha}\tilde{h}_{t}^{\varphi}+\left(1-\delta\right)\tilde{h}_{t}\right)-\tilde{h}_{t} \\]
17. \\[ \tilde{k}_{t+1}-\tilde{k}_{t}=\frac{1}{\left(1+n\right)\left(1+g_{t}\right)}\left(s_{K}\tilde{k}_{t}^{\alpha}\tilde{h}_{t}^{\varphi}+\left(1-\delta\right)\tilde{k}_{t}\right)-\tilde{k}_{t} \\]
To find the nullclines, we will have to solve for $\tilde{h}$ in the two solow equations.
```python
# Define the two solow equations
solow_k = sm.Eq(k-k,((s_k*k**alpha*h**vphi+(1-delta)*k)/((1+n)*(1+g))-k))
solow_h = sm.Eq(h-h,((s_h*k**alpha*h**vphi+(1-delta)*h)/((1+n)*(1+g))-h))
solow_k, solow_h
```
```python
# Now solve for each of the equation with respect to h
solow_sol_k = sm.solve(solow_k, h)[0]
solow_sol_h = sm.solve(solow_h, h)[0]
solow_sol_k, solow_sol_h
```
The first nullcline is labeled as ($\Delta\tilde{k}_{t} = 0$) and the second nullcline as ($\Delta\tilde{h}_{t} = 0$).
```python
# Parameters
g = 0.02
n = 0.01
delta = 0.06
alpha = 1/3
vphi = 1/3
# Define the two null clines as functions
def nullcline_1(k, s_k, s_h, alpha, delta, n, g):
return (((k**(-alpha+1)*(delta + g*n + g + n)) / s_k)**(1/vphi))
def nullcline_2(k, s_k, s_h, alpha, delta, n, g):
return (((k**(-alpha)*(delta + g*n + g + n)) / s_h)**(1/(vphi-1)))
```
```python
def slider2(s_k, s_h):
# Range for k
k = np.arange(0.0,80.0,0.1)
plt.figure(dpi=100)
# labels and title
plt.xlabel(r'$\tilde{k_t}$',fontsize=20, color = "black" )
plt.ylabel(r'$\tilde{h_t}$', fontsize=20, color = "black")
plt.title(r'The transition diagram with human capital', fontsize=20)
# Plot the nullclines
plt.plot(k, nullcline_1(k, s_k, s_h, alpha, delta, n, g))
plt.plot(k, nullcline_2(k, s_k, s_h, alpha, delta, n, g))
# Calculate the numeric SS values
sol_val_h = sol_func_h(s_k,s_h,g,n,delta,alpha,vphi)
sol_val_k2 = sol_func_k2(s_k,s_h,g,n,delta,alpha,vphi)
# Dashed lines for SS values
plt.axhline(y=sol_val_h, xmin=0., xmax=1, linestyle = '--', color = "black")
plt.axvline(x=sol_val_k2, ymin=0., ymax=1, linestyle = '--', color = "black")
# Legend
plt.legend((r'$\left[\Delta\tilde{k}_{t}=0\right]$',
r'$\left[\Delta\tilde{h}_{t}=0\right]$'),
shadow=True, handlelength=1.4, fontsize=16)
# labels for dashed lines
plt.annotate(r'$\tilde{k}^{*}$', xy=(0, 0), xytext=(sol_val_k2, -5))
plt.annotate(r'$\tilde{h}^{*}$', xy=(0, 0), xytext=(-5, sol_val_h))
plt.axis([0,80,0,80])
plt.show()
# plot the figure and interactive slider for s_k and s_h
interactive_plot = interactive(slider2,
s_k=widgets.FloatSlider(description="$s_k$",
min=0.0,
max=0.4,
step=0.04,
value=0.20,
continuous_update=True),
s_h=widgets.FloatSlider(description="$s_h$",
min=0.0,
max=0.4,
step=0.04,
value=0.20,
continuous_update=True)
)
interactive_plot
```
interactive(children=(FloatSlider(value=0.2, description='$s_k$', max=0.4, step=0.04), FloatSlider(value=0.2, …
# Finding the steady state growth path for GDP per capita
Under the assumption that $\phi < 1 $, we'll try to derive the steady-state growth path for GDP per capita. \\[\hat{y}_{t}^{*}=\tilde{y}^{*}\left(1-s_{R}\right)\left(\frac{\rho}{g_{se}}\right)^{\frac{1}{1-\phi}}s_{R}^{\frac{\lambda}{1-\phi}}\left(1+g_{se}\right)^{t}L_{0}^{\frac{\lambda}{1-\phi}}\\]
```python
# Define the following parameters as symbols
y_hat = sm.symbols('\hat{y}_{t}^{*}')
y_tilde = sm.symbols(r'\tilde{y}^{*}')
l = sm.symbols('lambda')
phi = sm.symbols('phi')
s_r = sm.symbols('s_{R}')
rho = sm.symbols('rho')
g_se = sm.symbols('g_{se}')
t = sm.symbols('t')
L_0 = sm.symbols('L_{0}')
```
```python
# Defining the equation
ss_gp = y_tilde*(1-s_r)*(rho/g_se)**(1/(1-phi))*s_r**(l/(1-phi))*(1+g_se)**t*L_0**(l/(1-phi))
```
Next we will find the "Golden rule" for $s_R$, which is the optimal share of the population being researchers. This is done by differentiation with respect to $s_R$,$\frac{\partial\hat{y}_{t}^{*}}{\partial s_{R}}$, and then solving for $s_R$.
```python
# Differentiate the equation with respect to s_R
diff_ss_gp = sm.diff(ss_gp,s_r)
diff_ss_gp
```
```python
#Solve for s_R
golden_rule = sm.solve(diff_ss_gp,s_r)[0]
golden_rule
```
As presented above, the optimal share of researchers is $\left[\frac{\lambda}{\lambda-\phi+1}\right]$. Next let's try to find the numerical value by using the following realistic parameters. $\lambda = 1, \phi = 0.5$
```python
# parametrize the golden rule
golden_rule_num = sm.lambdify((l,phi),golden_rule)
g_r = golden_rule_num(1,0.5)
f"The optimal share of researchers are {g_r:.3f}"
```
'The optimal share of researchers is 0.667'
Last, let's create a simple slider for changing values of $\phi$ while holding $\lambda$ fixed.
```python
# Define function of the golden rule
def g_rule(l,phi):
return f"s_r = {l/(1+l-phi):.3f}"
```
```python
# Create slider
slider_g_r = widgets.interact(g_rule,
phi=widgets.FloatSlider(description="$\phi$", min=0.0, max=1.0, step=0.05, value=0.5),
l=widgets.fixed(1)
)
```
interactive(children=(FloatSlider(value=0.5, description='$\\phi$', max=1.0, step=0.05), Output()), _dom_class…
We see that as $\phi$, which measures how strongly the existing knowledge stock contributes to the production of new knowledge, continues to grow, the optimal share of researchers also increases.
# Overall conclusion and further perspectives
In this project, we have succeeded in solving the Solow model with R&D and extended it with human capital. The dynamic paths towards steady state have been visualized by using realistic parameters. We have also seen how the steady state values develop when some of the parameters are changed.
To extend this project, more simulations and visualizations are possible options.
| 74c2486c14e7e2e739875daafd734b038a3851d0 | 76,354 | ipynb | Jupyter Notebook | modelproject/model_project.ipynb | NumEconCopenhagen/projects-2019-pickles | 7f4d66612bf8f575b745a2c1c32477938e3dcbb6 | [
"MIT"
] | null | null | null | modelproject/model_project.ipynb | NumEconCopenhagen/projects-2019-pickles | 7f4d66612bf8f575b745a2c1c32477938e3dcbb6 | [
"MIT"
] | 13 | 2019-04-08T17:31:57.000Z | 2019-05-14T18:47:13.000Z | modelproject/model_project.ipynb | NumEconCopenhagen/projects-2019-pickles | 7f4d66612bf8f575b745a2c1c32477938e3dcbb6 | [
"MIT"
] | 2 | 2019-05-14T08:10:26.000Z | 2019-12-09T09:29:09.000Z | 63.154673 | 8,832 | 0.665571 | true | 6,233 | Qwen/Qwen-72B | 1. YES
2. YES | 0.841826 | 0.859664 | 0.723687 | __label__eng_Latn | 0.931846 | 0.519699 |
# the German tank problem
the Germans have a population of $n$ tanks labeled with serial numbers $1,2,...,n$ on the tanks. The number of tanks $n$ is unknown and of interest to the Allied forces. The Allied forces randomly capture $k$ tanks from the Germans without replacement and observe their serial numbers $\{x_1, x_2, ..., x_k\}$. The goal is to estimate $n$ from the serial numbers observed on this random sample of tanks.
the *estimator* of $n$ maps an outcome of the experiment to an estimate of $n$, $\hat{n}$.
```julia
using StatsBase
using PyPlot
using Statistics
using Printf
PyPlot.matplotlib.style.use("seaborn-pastel")
rcParams = PyPlot.PyDict(PyPlot.matplotlib."rcParams")
rcParams["font.size"] = 16;
```
## data structure for a tank
for elegance
```julia
struct Tank
serial_no::Int
end
tank = Tank(3)
```
Tank(3)
## visualizing the captured tanks and their serial numbers
```julia
function viz_tanks(tanks::Array{Tank}, savename=nothing)
nb_tanks = length(tanks)
img = PyPlot.matplotlib.image.imread("tank.png")
fig, ax = subplots(1, nb_tanks)
for (t, tank) in enumerate(tanks)
ax[t].imshow(img)
ax[t].set_title(tank.serial_no)
ax[t].axis("off")
end
tight_layout()
if ! isnothing(savename)
savefig(savename * ".png", format="png", dpi=300)
# Linux command line tool to trim white space
run(`convert $savename.png -trim $savename.png`)
end
end
n = 7
tanks = [Tank(s) for s in 1:n]
viz_tanks(tanks)
```
## simulating tank capture
write a function `capture_tanks` to simulate the random sampling of `num_captured` tanks from all `num_tanks` tanks the Germans have (without replacement). return a random sample of tanks.
```julia
function capture_tanks(num_captured::Int, num_tanks::Int)
return sample([Tank(i) for i in 1:num_tanks], num_captured, replace=false)
end
tanks = capture_tanks(4, 50)
viz_tanks(tanks)
```
## defining different estimators
an estimator maps an outcome $\{x_1, x_2, ..., x_k\}$ to an estimate for $n$, $\hat{n}$.
### estimator (1): maximum serial number
this is the maximum likelihood estimator.
\begin{equation}
\hat{n} = \max_i x_i
\end{equation}
```julia
function max_serial_no(captured_tanks::Array{Tank})
return maximum([t.serial_no for t in captured_tanks])
end
max_serial_no(tanks)
```
44
### estimator (2): maximum serial number plus initial gap
\begin{equation}
\hat{n} = \max_i x_i + \bigl(\min_i x_i -1\bigr)
\end{equation}
```julia
function max_plus_first_gap(captured_tanks::Array{Tank})
# only need to store this array once
serials = [t.serial_no for t in captured_tanks]
return maximum(serials) + minimum(serials) - 1
end
max_plus_first_gap(tanks)
```
52
### estimator (3): maximum serial number plus gap if samples are evenly spaced
\begin{equation}
\hat{n} = \max_i x_i + \bigl( \max_i x_i / k -1 \bigr)
\end{equation}
```julia
function max_plus_even_gap(captured_tanks::Array{Tank})
serials = [t.serial_no for t in captured_tanks]
max_serial = maximum(serials)
# operations are costly; don't do it this way
# return maximum(serials) * (1 + 1 / length(serials)) - 1
return max_serial + max_serial / length(serials) - 1
end
max_plus_even_gap(tanks)
```
54.0
## assessing the bias and variance of different estimators
say the Germans have `num_tanks` tanks, and we randomly capture `num_tanks_captured` of them. what is the distribution of the estimators (over different outcomes of this random experiment), and how does the distribution compare to the true `num_tanks`?
```julia
function sim_̂n(num_sims::Int, num_tanks::Int, num_tanks_captured::Int, estimator::Function)
return [estimator(capture_tanks(num_tanks_captured, num_tanks)) for i in 1:num_sims]
end
num_tanks = 100
num_captured = 5
num_sims = 1000
mean(sim_̂n(num_sims, num_tanks, num_captured, max_serial_no))
```
84.74
```julia
num_sims = 10000
fig, ax = subplots(1, 1, figsize=(18, 8))
estimators = [max_serial_no, max_plus_first_gap, max_plus_even_gap]
for e in estimators
results = sim_̂n(num_sims, num_tanks, num_captured, e)
est_name = replace(string(e), "_" => " ")
ax.hist(results, label=est_name, alpha=0.7)
println(est_name)
println(" (̂n):", mean(results))
println(" std(̂n):", std(results), "\n")
end
ax.axvline(num_tanks, color="k", ls="--", linewidth=3)
ax.legend()
```
```julia
print(max_plus_even_gap)
```
max_plus_even_gap
notes:
* **efficiency**: small variance
* **consistency**: as number of samples in the estimate increases, the estimator converges
* **unbiasedness**: average estimator is correct
## what happens as we capture more and more tanks, i.e. increase $k$?
assess estimator (3).
```julia
```
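a minimal sketch (the cell above was left empty; this is one possible way to do it) that re-runs the simulation for increasing $k$ and prints the mean and standard deviation of estimator (3):
```julia
# bias and spread of estimator (3) as the number of captured tanks k grows
num_tanks = 100
for k in [2, 5, 10, 25, 50]
    estimates = [max_plus_even_gap(capture_tanks(k, num_tanks)) for _ in 1:10000]
    println("k = ", k, ": mean(n̂) = ", round(mean(estimates), digits=2),
            ", std(n̂) = ", round(std(estimates), digits=2))
end
```
as $k$ grows the estimates should concentrate around the true $n$, illustrating consistency.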
## one-sided confidence interval
how confident are we that the Germans don't have *more* tanks?
significance level: $\alpha$
test statistic = estimator (3) = $\hat{n} = \max_i x_i + \bigl( \max_i x_i / k -1 \bigr)$
**null hypothesis**: the number of tanks is $n=n_0$<br>
**alternative hypothesis**: the number of tanks is less than $n_0$
we reject the null hypothesis (say, "the data does not support the null hypothesis") that the number of tanks is $n=n_0$ if the p-value is less than $\alpha$. the p-value is the probability that, if the null hypothesis is true, we get a test statistic equal to or smaller than we observed.
we want to find the highest $n_0$ such that we have statistical power to reject the null hypothesis in favor of the alternative hypothesis. this is the upper bound on the confidence interval!
then the idea is that, if the null hypothesis is that the number of tanks is absurdly large compared to the largest serial number we saw in our sample, it would be very unlikely that we would see such small serial numbers compared to the number of tanks, so we'd reject the null hypothesis.
```julia
```
say $\alpha=0.05$ and we seek a 95% one-sided confidence interval
```julia
```
```julia
```
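one possible sketch (an assumption about the intended approach, not the original solution): invert the test by simulation, i.e. find the largest $n_0$ that is not rejected at $\alpha=0.05$ when the observed value of estimator (3) is used as the test statistic.
```julia
k = 5
captured = capture_tanks(k, 100)        # one observed experiment (the true n is hidden from the analyst)
observed = max_plus_even_gap(captured)

function upper_bound(observed, k; α=0.05, nsim=10_000)
    n₀ = Int(ceil(observed))
    while true
        sims = [max_plus_even_gap(capture_tanks(k, n₀)) for _ in 1:nsim]
        p = mean(sims .<= observed)     # p-value of H₀: n = n₀
        p < α && return n₀ - 1          # first rejected n₀ ⇒ previous value is the upper bound
        n₀ += 1
    end
end

println("95% one-sided interval: n ≤ ", upper_bound(observed, k))
```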
| a6f97a896d5a264829a39d1db70792df9ef864d5 | 85,428 | ipynb | Jupyter Notebook | In-Class Notes/German Tank Problem/German tank problem_sparse.ipynb | cartemic/CHE-599-intro-to-data-science | a2afe72b51a3b9e844de94d59961bedc3534a405 | [
"MIT"
] | null | null | null | In-Class Notes/German Tank Problem/German tank problem_sparse.ipynb | cartemic/CHE-599-intro-to-data-science | a2afe72b51a3b9e844de94d59961bedc3534a405 | [
"MIT"
] | null | null | null | In-Class Notes/German Tank Problem/German tank problem_sparse.ipynb | cartemic/CHE-599-intro-to-data-science | a2afe72b51a3b9e844de94d59961bedc3534a405 | [
"MIT"
] | 2 | 2019-10-02T16:11:36.000Z | 2019-10-15T20:10:40.000Z | 159.380597 | 53,726 | 0.894426 | true | 1,695 | Qwen/Qwen-72B | 1. YES
2. YES | 0.913677 | 0.867036 | 0.79219 | __label__eng_Latn | 0.965016 | 0.678856 |
# Cluster Analysis
## The k-means method
Given a data matrix $X$ and a number $k$ of assumed clusters, the goal of clustering is to represent the data as a set of clusters $C=\{C_1, C_2, \ldots, C_k\}$. Each cluster has its own center:
\begin{equation}
\mu_i = \frac{1}{n_i} \sum \limits_{x_j \in C_i} x_j
\end{equation}
where $n_i = |C_i|$ is the number of points in cluster $C_i$.
Thus, given some clustering $C=\{C_1, C_2, \ldots, C_k\}$, we need to assess the quality of the partition. To do this we compute the sum of squared errors (SSE):
\begin{equation}
SSE(C) = \sum \limits_{i=1}^{k} \sum \limits_{x_j \in C_i} ||x_j - \mu_i||^2
\end{equation}
The goal is to find
\begin{equation}
C^* = arg\min\limits_C \{SSE(C)\}
\end{equation}
### The k-means algorithm
The algorithm takes as input a data matrix $D$, the number of clusters $k$, and a stopping criterion $\epsilon$:
1. t = 0
2. randomly initialize the $k$ cluster centers: $\mu_1^t, \mu_2^t, \ldots, \mu_k^t \in R^d$;
3. repeat
4. $t = t + 1$;
5. $C_j = \emptyset$ for all $j = 1, \ldots, k$
6. for each $x_j \in D$
7. $j^* = arg\min\limits_{i} \{||x_j - \mu_i^{t-1}||^2\}$ \\\ assign $x_j$ to the nearest center
8. $C_{j^*} = C_{j^*} \cup \{x_j\}$
9. for each i = 1 to k
10. $\mu_i = \frac{1}{|C_i|} \sum_{x_j \in C_i} x_j$
11. until $\sum_{i=1}^k ||\mu_i^{t} - \mu_i^{t-1}||^2 \leq \epsilon$
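As a compact illustration of steps 6-10 above (a sketch added for clarity; the full lab implementation is developed below), one iteration of the algorithm can be written with vectorized distance computations:
```python
import numpy as np

def kmeans_step(X, centers):
    # squared distance of every point to every center, shape (n_points, k)
    d2 = ((X[:, None, :] - centers[None, :, :])**2).sum(axis=2)
    labels = d2.argmin(axis=1)  # assignment step (steps 6-8)
    # update step (steps 9-10); assumes no cluster ends up empty
    new_centers = np.array([X[labels == j].mean(axis=0) for j in range(len(centers))])
    return labels, new_centers
```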
## Assignment
1. Write a program implementing the k-means algorithm.
2. Visualize the convergence of the cluster centers.
3. Evaluate $SSE$ for $k = 1, \ldots, 10$ and plot $SSE$ against the number of clusters.
```python
import matplotlib.pyplot as plt
import numpy as np
from scipy.spatial import distance
from sklearn.datasets import make_blobs
X, Y = make_blobs(n_samples = 1000, n_features=2, centers=5, cluster_std = 1.2, random_state=17)
plt.scatter(X[:,0], X[:,1])
plt.show()
```
```python
def get_center(cluster):
return np.array([cluster[:,0].mean(), cluster[:,1].mean()])
def get_dist(point_1, point_2):
return np.linalg.norm(point_1 - point_2)
def get_clusters(X, centers):
clusters = dict()
for point in X:
best_cluster = min([(index, get_dist(point, center)) for index, center in enumerate(centers)],
key=lambda t:t[1])[0]
try:
clusters[best_cluster].append(point)
except KeyError:
clusters[best_cluster] = [point]
return {key: np.array(clusters[key]) for key in clusters}
def get_new_value_centers(clusters):
new_centers = list()
for key in clusters:
new_centers.append(np.mean(clusters[key], axis = 0))
return np.array(new_centers)
def get_cluster_plt(clusters):
for key in clusters:
plt.scatter(clusters[key][0:,0], clusters[key][0:,1])
plt.show()
def k_means(X, k, early_stop, show_clusters=False):
centers = np.array([get_center(cluster) for cluster in np.array_split(X, k)])
eps = float("inf")
while eps >= early_stop:
clusters = get_clusters(X, centers)
if show_clusters: get_cluster_plt(clusters)
new_centers = get_new_value_centers(clusters)
eps = sum([get_dist(centers[index], new_centers[index]) for index, _ in enumerate(new_centers)])
centers = new_centers
return clusters
```
Convergence of the cluster centers
```python
clusters = k_means(X, 4, 0.1, show_clusters=True)
```
Evaluating the SSE
```python
def k_means_centers(X, k, early_stop, show_clusters=False):
centers = np.array([get_center(cluster) for cluster in np.array_split(X, k)])
eps = float("inf")
while eps >= early_stop:
clusters = get_clusters(X, centers)
if show_clusters: get_cluster_plt(clusters)
new_centers = get_new_value_centers(clusters)
eps = sum([get_dist(centers[index], new_centers[index]) for index, _ in enumerate(new_centers)])
centers = new_centers
return centers
```
```python
def get_sse(data,clusters):
error = 0
for j in range(len(data)):
min_cluster_distance = min([distance.euclidean(clusters[i],data[j]) for i in range(len(clusters))])
error = error + min_cluster_distance ** 2
return error
k = list(range(1,11))
k_sse = [get_sse(X,k_means_centers(X,k[i],0.1)) for i in range(len(k))]
plt.plot(k, k_sse)
plt.gcf().set_size_inches(18.5, 10.5)
```
## Real data
Use the KMeans method from sklearn.cluster
1. Choose the optimal number of clusters.
2. Plot.
3. Analyze the resulting clusters:
    1. determine the average year of the cars;
    2. determine the average mileage of the cars;
    3. determine the average power;
    4. determine the average price of the cars;
    5. determine the main car makes in the cluster;
    6. determine the fuel type;
    7. determine the main body type;
    8. determine the main drive type;
    9. determine the main transmission type;
    10. determine the number of owners of the car.
Characterize each cluster.
```python
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
df = pd.read_csv('data.csv', encoding='cp1251')
df = df.drop(columns=['Модель', 'Цвет'])
df.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Марка</th>
<th>Год</th>
<th>Состояние</th>
<th>Пробег</th>
<th>Объем</th>
<th>Топливо</th>
<th>Мощность</th>
<th>Кузов</th>
<th>Привод</th>
<th>КПП</th>
<th>Руль</th>
<th>Хозяев в ПТС</th>
<th>Цена</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Volkswagen</td>
<td>2013.0</td>
<td>БУ</td>
<td>42000.0</td>
<td>1200.0</td>
<td>бензин</td>
<td>105.0</td>
<td>хэтчбек</td>
<td>передний</td>
<td>автомат</td>
<td>левый</td>
<td>1 владелец</td>
<td>689196.0</td>
</tr>
<tr>
<th>1</th>
<td>Skoda</td>
<td>2012.0</td>
<td>БУ</td>
<td>62000.0</td>
<td>1800.0</td>
<td>бензин</td>
<td>152.0</td>
<td>кроссовер</td>
<td>полный</td>
<td>механика</td>
<td>левый</td>
<td>1 владелец</td>
<td>639196.0</td>
</tr>
<tr>
<th>2</th>
<td>Renault</td>
<td>2015.0</td>
<td>БУ</td>
<td>4700.0</td>
<td>1600.0</td>
<td>бензин</td>
<td>106.0</td>
<td>хэтчбек</td>
<td>передний</td>
<td>механика</td>
<td>левый</td>
<td>1 владелец</td>
<td>629196.0</td>
</tr>
<tr>
<th>3</th>
<td>Nissan</td>
<td>2012.0</td>
<td>БУ</td>
<td>70000.0</td>
<td>1600.0</td>
<td>бензин</td>
<td>110.0</td>
<td>хэтчбек</td>
<td>передний</td>
<td>автомат</td>
<td>левый</td>
<td>1 владелец</td>
<td>479196.0</td>
</tr>
<tr>
<th>4</th>
<td>УАЗ</td>
<td>2014.0</td>
<td>БУ</td>
<td>50000.0</td>
<td>2700.0</td>
<td>бензин</td>
<td>128.0</td>
<td>внедорожник</td>
<td>полный</td>
<td>механика</td>
<td>левый</td>
<td>1 владелец</td>
<td>599196.0</td>
</tr>
</tbody>
</table>
</div>
```python
new_df = pd.get_dummies(df)
new_df.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Год</th>
<th>Пробег</th>
<th>Объем</th>
<th>Мощность</th>
<th>Цена</th>
<th>Марка_Acura</th>
<th>Марка_Alfa Romeo</th>
<th>Марка_Audi</th>
<th>Марка_BMW</th>
<th>Марка_BYD</th>
<th>...</th>
<th>Привод_полный</th>
<th>КПП_автомат</th>
<th>КПП_вариатор</th>
<th>КПП_механика</th>
<th>КПП_роботизированная</th>
<th>Руль_левый</th>
<th>Руль_правый</th>
<th>Хозяев в ПТС_1 владелец</th>
<th>Хозяев в ПТС_2 владельца</th>
<th>Хозяев в ПТС_3 и более</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>2013.0</td>
<td>42000.0</td>
<td>1200.0</td>
<td>105.0</td>
<td>689196.0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>...</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>1</th>
<td>2012.0</td>
<td>62000.0</td>
<td>1800.0</td>
<td>152.0</td>
<td>639196.0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>...</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>2</th>
<td>2015.0</td>
<td>4700.0</td>
<td>1600.0</td>
<td>106.0</td>
<td>629196.0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>...</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>3</th>
<td>2012.0</td>
<td>70000.0</td>
<td>1600.0</td>
<td>110.0</td>
<td>479196.0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>...</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>4</th>
<td>2014.0</td>
<td>50000.0</td>
<td>2700.0</td>
<td>128.0</td>
<td>599196.0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>...</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
</tbody>
</table>
<p>5 rows × 126 columns</p>
</div>
```python
from sklearn.preprocessing import StandardScaler
ss = StandardScaler()
new_df[['Год', 'Пробег', 'Объем', 'Мощность', 'Цена']] = ss.fit_transform(new_df[['Год', 'Пробег', 'Объем', 'Мощность', 'Цена']])
new_df.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Год</th>
<th>Пробег</th>
<th>Объем</th>
<th>Мощность</th>
<th>Цена</th>
<th>Марка_Acura</th>
<th>Марка_Alfa Romeo</th>
<th>Марка_Audi</th>
<th>Марка_BMW</th>
<th>Марка_BYD</th>
<th>...</th>
<th>Привод_полный</th>
<th>КПП_автомат</th>
<th>КПП_вариатор</th>
<th>КПП_механика</th>
<th>КПП_роботизированная</th>
<th>Руль_левый</th>
<th>Руль_правый</th>
<th>Хозяев в ПТС_1 владелец</th>
<th>Хозяев в ПТС_2 владельца</th>
<th>Хозяев в ПТС_3 и более</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>1.055883</td>
<td>-1.129295</td>
<td>-1.096928</td>
<td>-0.458895</td>
<td>0.420103</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>...</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>1</th>
<td>0.868335</td>
<td>-0.842782</td>
<td>-0.125615</td>
<td>0.440164</td>
<td>0.315187</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>...</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>2</th>
<td>1.430978</td>
<td>-1.663641</td>
<td>-0.449386</td>
<td>-0.439766</td>
<td>0.294203</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>...</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>3</th>
<td>0.868335</td>
<td>-0.728177</td>
<td>-0.449386</td>
<td>-0.363250</td>
<td>-0.020546</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>...</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>4</th>
<td>1.243431</td>
<td>-1.014690</td>
<td>1.331355</td>
<td>-0.018930</td>
<td>0.231254</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>...</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
</tbody>
</table>
<p>5 rows × 126 columns</p>
</div>
```python
from sklearn.cluster import KMeans, AgglomerativeClustering, DBSCAN, AffinityPropagation
inertia = []
for it in np.arange(1,20,1):
method = KMeans(n_clusters=it)
method.fit(new_df)
inertia.append(method.inertia_)
print(it)
```
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
```python
for i in range(len(inertia)):
print(i+1,inertia[i])
```
1 303666.09226419945
2 238066.35809271294
3 203215.59994571476
4 188096.71848079565
5 176308.26383170072
6 166868.51967859885
7 160779.23246553304
8 155563.4221062198
9 150434.37152236237
10 146538.95858094987
11 143247.76194929198
12 140234.2185084517
13 137206.36955768696
14 134389.44205518265
15 132121.6934500015
16 129743.02831074157
17 127996.209894227
18 126181.09337907878
19 124530.70736049648
The optimal number of clusters is 15
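A small sketch (added, not in the original notebook) to visualize the elbow curve behind this choice:
```python
plt.plot(np.arange(1, 20), inertia, marker='o')
plt.xlabel('number of clusters k')
plt.ylabel('inertia (SSE)')
plt.title('Elbow curve for KMeans')
plt.show()
```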
```python
def cls_info(df, k):
print('------ Кластер ', k, ' -------')
claster = df[method.labels_ == k]
print(claster['Год'].mean())
print(claster['Пробег'].mean())
print(claster['Объем'].mean())
print(claster['Мощность'].mean())
print(claster['Цена'].mean())
print(claster['Привод'].value_counts().head(1))
print(claster['Марка'].value_counts().head(2))
print(claster['Кузов'].value_counts().head(2))
print(claster['КПП'].value_counts().head(2))
print(claster['Хозяев в ПТС'].value_counts().head(2))
print('---------------------------')
```
```python
print('------ Реальность -------')
print(df['Год'].mean())
print(df['Пробег'].mean())
print(df['Объем'].mean())
print(df['Мощность'].mean())
print(df['Цена'].mean())
print(df['Привод'].value_counts().head(1))
print(df['Марка'].value_counts().head(2))
print(df['Кузов'].value_counts().head(2))
print(df['КПП'].value_counts().head(2))
print(df['Хозяев в ПТС'].value_counts().head(2))
print('---------------------------')
```
------ Реальность -------
2007.3700562551758
120830.33855906794
1877.595019846369
128.98960564265113
488987.6818298638
передний 23933
Name: Привод, dtype: int64
ВАЗ 5497
Toyota 2700
Name: Марка, dtype: int64
седан 14241
хэтчбек 8462
Name: Кузов, dtype: int64
механика 19308
автомат 13935
Name: КПП, dtype: int64
3 и более 12613
2 владельца 12070
Name: Хозяев в ПТС, dtype: int64
---------------------------
```python
for it in range(16):
cls_info(df, it)
```
------ Кластер 0 -------
2012.1751925192518
63387.76347634764
1542.7117711771177
104.2871287128713
432276.9573707371
передний 3525
Name: Привод, dtype: int64
ВАЗ 684
Renault 345
Name: Марка, dtype: int64
седан 1675
хэтчбек 1113
Name: Кузов, dtype: int64
механика 3556
роботизированная 60
Name: КПП, dtype: int64
1 владелец 3519
3 и более 75
Name: Хозяев в ПТС, dtype: int64
---------------------------
------ Кластер 1 -------
1998.067221067221
72930.8024948025
1722.3146223146223
84.61607761607762
86363.76507276508
задний 554
Name: Привод, dtype: int64
ВАЗ 889
УАЗ 143
Name: Марка, dtype: int64
седан 730
внедорожник 323
Name: Кузов, dtype: int64
механика 1383
автомат 60
Name: КПП, dtype: int64
3 и более 757
2 владельца 468
Name: Хозяев в ПТС, dtype: int64
---------------------------
------ Кластер 2 -------
2012.8614298323037
60326.43865842895
2132.8773168578996
171.48455428067078
1227472.0706090026
полный 1404
Name: Привод, dtype: int64
Toyota 344
Audi 203
Name: Марка, dtype: int64
кроссовер 1228
седан 552
Name: Кузов, dtype: int64
автомат 1815
вариатор 229
Name: КПП, dtype: int64
1 владелец 1587
2 владельца 590
Name: Хозяев в ПТС, dtype: int64
---------------------------
------ Кластер 3 -------
1999.6247086247085
237500.89393939395
2937.5291375291376
191.7937062937063
407217.94755244756
полный 506
Name: Привод, dtype: int64
Mercedes-Benz 142
BMW 111
Name: Марка, dtype: int64
седан 326
внедорожник 279
Name: Кузов, dtype: int64
автомат 717
механика 133
Name: КПП, dtype: int64
3 и более 563
2 владельца 235
Name: Хозяев в ПТС, dtype: int64
---------------------------
------ Кластер 4 -------
2008.9737206085754
98074.15952051636
1448.6399262332873
95.61180267404333
298626.3042876902
передний 2132
Name: Привод, dtype: int64
ВАЗ 520
Ford 235
Name: Марка, dtype: int64
хэтчбек 2138
внедорожник 7
Name: Кузов, dtype: int64
механика 1736
автомат 294
Name: КПП, dtype: int64
2 владельца 2168
1 владелец 1
Name: Хозяев в ПТС, dtype: int64
---------------------------
------ Кластер 5 -------
2008.9607907742998
108518.84349258649
1605.7001647446457
106.36507413509061
326849.65667215816
передний 2910
Name: Привод, dtype: int64
ВАЗ 495
Chevrolet 268
Name: Марка, dtype: int64
седан 2368
универсал 238
Name: Кузов, dtype: int64
механика 2757
автомат 194
Name: КПП, dtype: int64
2 владельца 3035
Name: Хозяев в ПТС, dtype: int64
---------------------------
------ Кластер 6 -------
2002.2536157024792
193473.40857438016
1476.9111570247933
82.2448347107438
98850.0444214876
передний 1843
Name: Привод, dtype: int64
ВАЗ 1406
Daewoo 62
Name: Марка, dtype: int64
седан 918
хэтчбек 783
Name: Кузов, dtype: int64
механика 1914
автомат 15
Name: КПП, dtype: int64
3 и более 1259
2 владельца 491
Name: Хозяев в ПТС, dtype: int64
---------------------------
------ Кластер 7 -------
2011.2364380757422
91274.27328556807
3325.793244626407
256.89252814738995
1657216.7727737974
полный 909
Name: Привод, dtype: int64
Toyota 183
BMW 150
Name: Марка, dtype: int64
кроссовер 440
внедорожник 345
Name: Кузов, dtype: int64
автомат 931
вариатор 24
Name: КПП, dtype: int64
2 владельца 431
1 владелец 430
Name: Хозяев в ПТС, dtype: int64
---------------------------
------ Кластер 8 -------
2009.939024390244
72631.38455284553
2067.967479674797
105.95365853658537
418875.9130081301
полный 1204
Name: Привод, dtype: int64
УАЗ 344
Chevrolet 289
Name: Марка, dtype: int64
внедорожник 1038
кроссовер 95
Name: Кузов, dtype: int64
механика 1215
автомат 12
Name: КПП, dtype: int64
1 владелец 596
2 владельца 433
Name: Хозяев в ПТС, dtype: int64
---------------------------
------ Кластер 9 -------
2007.0385576579793
136103.9032488397
1629.9892895394503
106.0124955373081
262912.8768297037
передний 2686
Name: Привод, dtype: int64
ВАЗ 370
Ford 294
Name: Марка, dtype: int64
седан 2489
универсал 116
Name: Кузов, dtype: int64
механика 2465
автомат 249
Name: КПП, dtype: int64
3 и более 2475
1 владелец 326
Name: Хозяев в ПТС, dtype: int64
---------------------------
------ Кластер 10 -------
2006.8225130890053
145801.34764397907
2130.418848167539
165.565445026178
513192.24659685866
передний 1478
Name: Привод, dtype: int64
Toyota 243
BMW 167
Name: Марка, dtype: int64
седан 1502
хэтчбек 129
Name: Кузов, dtype: int64
автомат 1589
механика 179
Name: КПП, dtype: int64
3 и более 1009
2 владельца 704
Name: Хозяев в ПТС, dtype: int64
---------------------------
------ Кластер 11 -------
2007.657043305127
138630.5719263315
2279.5420607267297
162.04828272772522
671646.3285216526
полный 1892
Name: Привод, dtype: int64
Nissan 256
Hyundai 249
Name: Марка, dtype: int64
кроссовер 1178
внедорожник 450
Name: Кузов, dtype: int64
автомат 1238
механика 627
Name: КПП, dtype: int64
2 владельца 967
3 и более 709
Name: Хозяев в ПТС, dtype: int64
---------------------------
------ Кластер 12 -------
1998.7640871525168
258800.57325319308
1972.1262208865514
123.88354620586026
209069.97595792636
передний 854
Name: Привод, dtype: int64
Volkswagen 117
Mercedes-Benz 107
Name: Марка, dtype: int64
седан 813
хэтчбек 132
Name: Кузов, dtype: int64
механика 1069
автомат 245
Name: КПП, dtype: int64
3 и более 949
2 владельца 282
Name: Хозяев в ПТС, dtype: int64
---------------------------
------ Кластер 13 -------
2006.2400306748466
153372.99386503067
3458.128834355828
258.6296012269939
755522.2369631901
полный 1070
Name: Привод, dtype: int64
BMW 201
Mercedes-Benz 143
Name: Марка, dtype: int64
кроссовер 613
седан 320
Name: Кузов, dtype: int64
автомат 1210
вариатор 68
Name: КПП, dtype: int64
3 и более 628
2 владельца 495
Name: Хозяев в ПТС, dtype: int64
---------------------------
------ Кластер 14 -------
1997.5278246205733
249334.54890387857
1879.4266441821248
127.0725126475548
195677.2639123103
передний 771
Name: Привод, dtype: int64
Toyota 464
Nissan 236
Name: Марка, dtype: int64
седан 585
минивэн 231
Name: Кузов, dtype: int64
автомат 988
механика 120
Name: КПП, dtype: int64
3 и более 829
2 владельца 235
Name: Хозяев в ПТС, dtype: int64
---------------------------
------ Кластер 15 -------
2007.2490660024907
118465.71731008717
1459.194686591947
92.75425487754255
233711.92818596927
передний 2355
Name: Привод, dtype: int64
ВАЗ 808
Ford 284
Name: Марка, dtype: int64
хэтчбек 2276
универсал 66
Name: Кузов, dtype: int64
механика 1886
автомат 363
Name: КПП, dtype: int64
3 и более 2108
1 владелец 301
Name: Хозяев в ПТС, dtype: int64
---------------------------
For the task of clustering the original data I considered 15 clusters to be optimal. Each cluster describes a distinct group of cars, but starting from 15 the differences between clusters become less obvious (while clusters 4 and 9 differ quite strongly, clusters 9 and 15 are much more similar).
## Work completed by
---
Student of group **РИМ-181226**
Кабанов Евгений Алексеевич
| b03c91f3ad2e1a72afd558af796601be1e215645 | 169,524 | ipynb | Jupyter Notebook | DA-LR6-Kabanov.ipynb | ghspbravo/Data-Analysis | 25a0102378bf73bfd775ab15a2ad64fcd658ad73 | [
"MIT"
] | null | null | null | DA-LR6-Kabanov.ipynb | ghspbravo/Data-Analysis | 25a0102378bf73bfd775ab15a2ad64fcd658ad73 | [
"MIT"
] | null | null | null | DA-LR6-Kabanov.ipynb | ghspbravo/Data-Analysis | 25a0102378bf73bfd775ab15a2ad64fcd658ad73 | [
"MIT"
] | null | null | null | 117.317647 | 23,816 | 0.815047 | true | 10,291 | Qwen/Qwen-72B | 1. YES
2. YES | 0.932453 | 0.815232 | 0.760166 | __label__kor_Hang | 0.079642 | 0.604453 |
# Physics 420/580 Midterm Exam
## October 19, 2017 1pm-2pm
Do the following problems. Use the Jupyter notebook, inserting your code and any textual answers/explanations in cells between the questions. (Feel free to add additional cells!) Marks will be given based on how clearly you demonstrate your understanding.
There are no restrictions on downloading from the internet, eclass, or the use of books, notes, or any other widely available computing resources. However, **you are not allowed** to communicate with each other or collaborate in any way and uploading to the internet or sending or receiving direct communications is not appropriate.
When you are finished, upload the jupyter notebook to eclass. Eclass times out after 2:05 so make sure that you upload things before then. Also be careful to save the notebook periodically and that you upload your final exam file.
```python
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import scipy
from scipy.integrate import odeint
from scipy.integrate import quad
from scipy.optimize import minimize
from scipy.optimize import fsolve
from scipy.optimize import least_squares
from scipy.interpolate import interp1d
from scipy.interpolate import CubicSpline
```
```python
mpl.rc('figure',dpi=250)
mpl.rc('text',usetex=True)
```
```python
def add_labels(xlabel, ylabel, title):
plt.xlabel(xlabel)
plt.ylabel(ylabel)
plt.title(title)
plt.legend()
```
## Graphics
Plot the two curves:
$$\begin{align} y&=4x^3-3x-2\\
x&=\sin(\frac{y^4}{4}-2y^2+2)
\end{align}
$$
How many intersections are there? Read the x,y value corresponding to the intersections from the plots.
```python
fsolve(lambda x: x**3 - 1, 9)
```
array([1.])
```python
"""
Plan:
-Initialize an array of xdata, ydata.
-Create two functions and then plot the two functions.
-Find the intersections by inspection, or if time allows, using fsolve/brute force.
- Read off the x, y values.
"""
```
```python
y1 = lambda x: 4*x**3 - 3*x - 2
xdata = np.linspace(-2, 2, 1000)
plt.plot(xdata, y1(xdata))
x1 = lambda y: np.sin(y**4/4 - 2*y**2 + 2)
ydata = np.linspace(-10, 10,1000)
plt.plot(x1(ydata), ydata)
curve1_x = xdata
curve1_y = y1(xdata)
curve2_x = x1(ydata)
curve2_y = ydata
tol = 1e-4
plt.ylim(-5, 2.5)
plt.xlim(-1.5, 1.5)
```
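The intersections read off the plot can also be refined numerically with `fsolve` on the coupled system (a sketch beyond what the question requires; the initial guesses below are assumptions taken from the figure):
```python
def intersection_system(p):
    x, y = p
    return [y - (4*x**3 - 3*x - 2),
            x - np.sin(y**4/4 - 2*y**2 + 2)]

# initial guesses read off the plot (assumed values)
for guess in [(-0.9, -2.2), (0.2, -2.5), (0.9, -1.8)]:
    root, info, ier, msg = fsolve(intersection_system, guess, full_output=True)
    if ier == 1:  # keep only converged roots
        print(f"x = {root[0]:.3f}, y = {root[1]:.3f}")
```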
```python
from ipywidgets import interact, interact_manual
def plot(a):
x = np.linspace(0, 5, 50)
plt.plot(x, np.exp(a*x))
plt.ylim(0, 5)
interact_manual(plot, a=5)
```
# Reconstruction
A instantaneous flash of light occurs in a large tank of water at time $t_0$ and at position $\vec{x_0}$
The group velocity of light in water is about $2.2 \times 10^8$ m/s. Four sensors detect the light flash, and report a measurement of the time at which light strikes the sensor. The locations of the sensors and the time which each sensor was hit is recorded in the table below:
|Sensor #| x| y| z| Time|
|--------|------------|------------|------------|------------|
| |[m]| [m]| [m]| [s]|
|1 |0 |0| 10 | 6.3859E-08|
|2 |8.66025404| 0| -5| 1.1032E-07|
|3 |-4.33012702| 7.5| -5| 7.9394E-08|
|4 |-4.33012702| -7.5| -5| 1.0759E-07|
Calculate the initial time $t_0$ and the location of the flash.
Here is a sketch of the general geometry. Note that the gray region is not different from the rest of the water- it is just to show the tetrahedral arrangement of the light sensors. The flash, shown as the yellow star is in an arbitrary location which you want to find.
```python
"""
Plan:
- Pick a random point called position. Calculate the distance to the each sensors, call it d
- The optimal point minimizes d - v*t for each sensor.
- Calculate the residuals
- Run least_squares on the residuals
- Find the optimal point and print it.
"""
```
```python
"cost function solution"
sen1 = np.array([0, 0, 10, 6.3859E-08])
sen2 = np.array([8.66025404, 0, -5, 1.1032E-07])
sen3 = np.array([-4.33012702, 7.5, -5, 7.9394E-08])
sen4 = np.array([-4.33012702, -7.5, -5, 1.0759E-07])
v = 2.2*10**8
def cost(pos):
error = 0
for sen in [sen1, sen2, sen3, sen4]:
error += (np.linalg.norm(pos - sen[0:3]) - v*sen[3])**2
return error
minimize(cost, np.random.randn(3))
```
fun: 2.889558609330956
hess_inv: array([[ 2.7050527 , 1.63590179, 0.27960445],
[ 1.63590179, 1.60674196, -0.18068716],
[ 0.27960445, -0.18068716, 0.51680401]])
jac: array([ 1.04308128e-06, 8.94069672e-08, -1.07288361e-06])
message: 'Optimization terminated successfully.'
nfev: 85
nit: 12
njev: 17
status: 0
success: True
x: array([-7.8833527 , 10.57964442, 10.80804805])
```python
from scipy.optimize import least_squares
"residuals solution"
sen1 = np.array([0, 0, 10, 6.3859E-08])
sen2 = np.array([8.66025404, 0, -5, 1.1032E-07])
sen3 = np.array([-4.33012702, 7.5, -5, 7.9394E-08])
sen4 = np.array([-4.33012702, -7.5, -5, 1.0759E-07])
def cost(pos):
error = np.zeros(4)
for i, sen in enumerate([sen1, sen2, sen3, sen4]):
error[i] = (np.linalg.norm(pos - sen[0:3]) - v*sen[3])
return error
least_squares(cost, np.random.randn(3))
```
active_mask: array([0., 0., 0.])
cost: 1.4447793059407599
fun: array([-0.83046438, 0.93904664, -0.97413042, 0.6075762 ])
grad: array([-1.42800140e-05, -1.02340696e-05, -8.34587431e-07])
jac: array([[-0.59639614, 0.80035917, 0.06112968],
[-0.65625106, 0.41966651, 0.62706827],
[-0.21545151, 0.1867243 , 0.95849604],
[-0.14636443, 0.74470816, 0.651143 ]])
message: '`ftol` termination condition is satisfied.'
nfev: 15
njev: 15
optimality: 1.4280013981682327e-05
status: 2
success: True
x: array([-7.88347184, 10.57956012, 10.80804357])
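The problem also asks for the emission time $t_0$; both fits above implicitly set $t_0 = 0$. A sketch (an extension, not part of the original solutions) that adds $t_0$ as a fourth unknown:
```python
def residuals_with_t0(p):
    pos, t0 = p[:3], p[3]
    # distance to each sensor should equal the light path travelled in (t_hit - t0)
    return [np.linalg.norm(pos - s[0:3]) - v*(s[3] - t0)
            for s in (sen1, sen2, sen3, sen4)]

sol = least_squares(residuals_with_t0, np.append(np.random.randn(3), 0.0))
print("flash position:", sol.x[:3], "  t0:", sol.x[3])
```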
```python
'''
Alternative solution
'''
class Sensor():
def __init__(self, position, time):
self.position = position
self.time = time
self.velocity = 2.2*10**8
def draw_radius(self):
r = self.velocity*self.time
possible_vals = []
for phi in np.linspace(0, 2*np.pi):
for theta in np.linspace(0, np.pi):
x = np.array([np.sin(theta)*np.cos(phi), np.sin(theta)*np.sin(phi), np.cos(theta)])
possible_vals.append(self.position + r*x)
return np.array(possible_vals)
```
```python
sen1 = Sensor(np.array([0, 0, 10]), 6.3859E-08)
sen2 = Sensor(np.array([8.66025404, 0, -5]), 1.1032E-07)
sen3 = Sensor(np.array([4.33012702, 7.5, -5]), 7.9394E-08)
sen4 = Sensor(np.array([-4.33012702, -7.5, -5]), 1.0759E-07)
circ = []
for i in [sen1, sen2, sen3, sen4]:
circ.append(i.draw_radius())
```
```python
tol = 18
mask = np.abs(circ[0]-circ[1]-circ[2]-circ[3]) < tol
for i, j in enumerate(mask):
if j.all() == True:
print(i, j)
print(circ[0][i])
```
1010 [ True True True]
[-7.04232267 4.58404406 21.25904395]
1460 [ True True True]
[-7.04232267 -4.58404406 21.25904395]
## Trajectories
An alpha-particle is a helium nucleus, with mass 6.644×10−27 kg or 4.002 u, and charge equal to twice the charge of an electron (but positive). Calculate the trajectory of a 5MeV=(5*1.609 e-13 J) alpha particle as it moves by a gold nucleus (mass 196.966 u), with charge 79e, as a function of the impact parameter b, below. Assume both the alpha and the gold are point particles, and ignore special relativity. Plot the scattering angle $\theta$ and energy loss of the alpha as a function of $b$, for values of b between 1e-16 and 1e-9m.
The force between two charge particles is given by the Coulomb potential:
$$\vec{F}=\frac{1}{4\pi\epsilon_0}\frac{q_1q_2 (\vec{r_2}-\vec{r_1})}{|\vec{r_2}-\vec{r_1}|^3}$$ with $\epsilon_0=8.85\times10^{-12}\frac{\rm{C}^2}{\rm{N\cdot m^2}}$
```python
"""
Plan:
- Write NEwtons law as system of coupled ODE and integrate.
- F is couloumbs law, so the acceleration is F/m
-
"""
```
```python
eps = 8.85*10**-12
elementary_charge = 1.60*10**-19
q1 = 2*elementary_charge
q2 = 79*elementary_charge
u = 1.66*10**-27
m1 = 4*u
m2 = 196.966*u
def CouloubsLaw(r1, r2, q1, q2):
distance = np.linalg.norm(r2-r1)
direction = (r2-r1)/distance
    F = (1/(4*np.pi*eps))*(q1*q2/distance**2)*direction  # include both charges, per the Coulomb formula above
return F
```
```python
kinetic = 5*1.609e-13
velocity = np.sqrt(2*kinetic/m1)
velocity
```
15566607.758546297
```python
import numpy as np
from scipy.integrate import odeint
def main(y, t, m1 = m1, m2 = m2):
'''
Main function to be integrated with odeint.
    Input: y (array of 4 elements), t (scalar)
    Output: dxdt (array of 4 elements)
'''
particle1_x = y[0:2]
particle1_dx = y[2:4]
particle2_x = np.array([0, velocity])
F = CouloubsLaw(particle1_x, particle2_x, q1, q2)
dxdt = np.zeros(4)
dxdt[0:2] = particle1_dx
dxdt[2:4] = F/m1
#print(y[0:2])
return dxdt
b = 1e-16
y0 = np.array([0, b, velocity, 0])
t = np.linspace(0, 3, num=500)
y = odeint(main, y0, t)
```
```python
y
plt.plot(y[:,0], y[:,1])
```
```python
energy = 1/2*m1*y[:, 2]**2 + 1/2*m1*y[:, 3]**2
plt.plot(t, energy)
```
```python
for b in np.logspace(-16, -9, num=10):
y0 = np.array([0, b, velocity, 0])
t = np.linspace(0, 3, num=500)
y = odeint(main, y0, t)
plt.plot(y[:,0], y[:,1], label=b)
add_labels('x', 'y', 'Scattering for various b')
```
```python
for b in np.logspace(-16, -9, num=10):
y0 = np.array([0, b, velocity, 0])
t = np.linspace(0, 3, num=500)
y = odeint(main, y0, t)
energy = 1/2*m1*y[:, 2]**2 + 1/2*m1*y[:, 3]**2
plt.plot(t, energy, label = b)
add_labels('x', 'y', 'Scattering for various b')
```
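The problem statement also asks for the scattering angle $\theta(b)$; a sketch (added here, reusing the integration setup above) that reads the angle off the final velocity direction:
```python
b_values = np.logspace(-16, -9, num=20)
t = np.linspace(0, 3, num=500)
angles = []
for b in b_values:
    y0 = np.array([0, b, velocity, 0])
    y = odeint(main, y0, t)
    angles.append(np.degrees(np.arctan2(y[-1, 3], y[-1, 2])))  # final velocity angle w.r.t. x axis
plt.semilogx(b_values, angles, label='numerical')
add_labels('$b$ in m', r'$\theta$ in degrees', 'Scattering angle vs impact parameter')
```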
```python
```
```python
def f(x):
return np.array([x**2, np.sin(x)])
xdata = np.linspace(0, 10, num=10)
ydata = f(xdata)
g = scipy.interpolate.interp1d(xdata, ydata, kind='cubic')
```
```python
x = np.linspace(0, 10)
plt.plot(x, g(x)[0])
plt.plot(x, g(x)[1])
```
```python
```
# Don't forget to upload your work to eclass!
```python
```
| 56722069cd1d6e945bac1f368991e53a8823f819 | 298,937 | ipynb | Jupyter Notebook | Assignments/Finished/.ipynb_checkpoints/Midterm 2018 (1)-checkpoint.ipynb | hanzhihua72/phys-420 | 748d29b55d57680212b15bb70879a24b79cb16a9 | [
"MIT"
] | null | null | null | Assignments/Finished/.ipynb_checkpoints/Midterm 2018 (1)-checkpoint.ipynb | hanzhihua72/phys-420 | 748d29b55d57680212b15bb70879a24b79cb16a9 | [
"MIT"
] | null | null | null | Assignments/Finished/.ipynb_checkpoints/Midterm 2018 (1)-checkpoint.ipynb | hanzhihua72/phys-420 | 748d29b55d57680212b15bb70879a24b79cb16a9 | [
"MIT"
] | null | null | null | 406.164402 | 90,132 | 0.936428 | true | 3,522 | Qwen/Qwen-72B | 1. YES
2. YES | 0.831143 | 0.901921 | 0.749625 | __label__eng_Latn | 0.856859 | 0.579962 |
# Sweep Signals and their Spectra
*This Jupyter notebook is part of a [collection of notebooks](../index.ipynb) in the masters module Selected Topics in Audio Signal Processing, Communications Engineering, Universität Rostock. Please direct questions and suggestions to [Sascha.Spors@uni-rostock.de](mailto:Sascha.Spors@uni-rostock.de).*
## The Linear Sweep
A [linear sweep](https://en.wikipedia.org/wiki/Chirp#Linear) is an exponential signal with linear increase in its instantaneous frequency. It is defined as
\begin{equation}
x(t) = e^{j \omega(t) t}
\end{equation}
with
\begin{equation}
\omega(t) = \omega_\text{l} + \frac{\omega_\text{u} - \omega_\text{l}}{2 T} t
\end{equation}
where $\omega_\text{l}$ and $\omega_\text{u}$ denote its lower and upper frequency limit, and $T$ its total duration. The linear sweep is generated in the following by sampling the continuous time.
```python
import numpy as np
import matplotlib.pyplot as plt
import soundfile as sf
fs = 48000 # sampling frequency
om_l = 2*np.pi*200 # lower angular frequency of sweep
om_u = 2*np.pi*18000 # upper angular frequency of sweep
T = 5 # duration of sweep
k = (om_u - om_l)/(2*T)
t = np.linspace(0, T, fs*T)
x = np.exp(1j*(om_l + k*t)*t)
```
A short section of the linear sweep signal is plotted for illustration
```python
idx = range(5000) # portion of the signal to show
plt.figure(figsize=(10, 5))
plt.plot(t[idx], np.real(x[idx]))
plt.xlabel(r'$t$ in s')
plt.ylabel(r'$x(t)$')
plt.grid()
```
### Auralization
Let's listen to the linear sweep. Please be careful with the volume of your speakers or headphones. Start with a very low volume and increase if necessary. This holds especially for the low and high frequencies which can damage your speakers at high levels.
```python
sf.write('linear_sweep.wav', np.real(x), fs)
```
<audio src="linear_sweep.wav" controls>Your browser does not support the audio element.</audio>
[linear_sweep.wav](linear_sweep.wav)
### Spectrogram
The spectrogram of the linear sweep is computed and plotted
```python
plt.figure(figsize=(8, 6))
plt.specgram(x, Fs=fs, sides='onesided')
plt.xlabel('$t$ in s')
plt.ylabel('$f$ in Hz');
```
### Spectrum of a Linear Sweep (Analytic Solution)
The analytic solution of the Fourier transform of the linear sweep signal is used to compute and plot its overall magnitude spectrum $|X(j \omega)|$
```python
from scipy.special import fresnel
f = np.linspace(10, 20000, 1000)
om = 2*np.pi*f
om_s = 2*np.pi*200 # lower angular frequency of sweep
om_e = 2*np.pi*18000 # upper angular frequency of sweep
T = 5 # duration of sweep
k = (om_e - om_s)/(2*T)
a = (om-om_s)/np.sqrt(2*np.pi*k)
b = (2*k*T-(om-om_s))/np.sqrt(2*np.pi*k)
Sa, Ca = fresnel(a)
Sb, Cb = fresnel(b)
X = np.sqrt(np.pi/(2*k)) * np.sqrt((Ca + Cb)**2 + (Sa + Sb)**2)
plt.figure(figsize=(8, 5))
plt.plot(f, X)
plt.xlabel('$f$ in Hz')
plt.ylabel(r'$|X(f)|$')
plt.grid()
```
### Crest Factor
The [Crest factor](https://en.wikipedia.org/wiki/Crest_factor) of the sweep is computed
```python
xrms = np.sqrt(1/T * np.sum(np.real(x)**2) * 1/fs)
C = np.max(np.real(x)) / xrms
print('Crest factor C = {:<1.5f}'.format(C))
```
Crest factor C = 1.41421
## Exponential Sweep
An [exponential sweep](https://en.wikipedia.org/wiki/Chirp#Exponential) is an exponential signal with an exponential increase in its instantaneous frequency. It is defined as
\begin{equation}
x(t) = e^{j \frac{\omega_l}{\ln(k)} (k^t - 1)}
\end{equation}
with
\begin{equation}
k = \left( \frac{\omega_\text{u}}{\omega_\text{l}} \right)^\frac{1}{T}
\end{equation}
where $\omega_\text{l}$ and $\omega_\text{u}$ denote its lower and upper frequency limit, and $T$ its total duration. The exponential sweep is generated in the following by sampling the continuous time.
```python
om_l = 2*np.pi*100  # lower angular frequency of sweep
om_u = 2*np.pi*18000  # upper angular frequency of sweep
T = 5  # duration of sweep
k = (om_u / om_l)**(1/T)
t = np.linspace(0, T, fs*T)
x = np.exp(1j*om_l * (k**(t) - 1) / np.log(k))
```
A short section of the exponential sweep signal is plotted for illustration
```python
idx = range(5000) # portion of the signal to show
plt.figure(figsize=(10, 5))
plt.plot(t[idx], np.real(x[idx]))
plt.xlabel(r'$t$ in s')
plt.ylabel(r'$x(t)$')
plt.grid()
```
### Spectrogram
The spectrogram of the logarithmic sweep is computed and plotted
```python
plt.figure(figsize=(8, 6))
plt.specgram(x, Fs=fs, sides='onesided')
plt.xlabel('$t$ in s')
plt.ylabel('$f$ in Hz');
```
### Spectrum
The discrete Fourier transform of the logarithmic sweep is computed and its magnitude spectrum is plotted.
```python
X = np.fft.rfft(np.real(x))
f = np.linspace(0, fs/2, len(X))
plt.figure(figsize=(8, 6))
plt.plot(f, 20*np.log10(np.abs(X)))
plt.xlabel(r'$f$ in Hz')
plt.ylabel(r'$|X(f)|$ in dB')
plt.axis([20, 20000, 40, 70])
plt.grid()
```
### Auralization
Let's listen to the exponential sweep. Please be careful with the volume of your speakers or headphones. Start with a very low volume and increase if necessary. This holds especially for the low and high frequencies which can damage your speakers at high levels.
```python
sf.write('exponential_sweep.wav', np.real(x), fs)
```
<audio src="exponential_sweep.wav" controls>Your browser does not support the audio element.</audio>
[exponential_sweep.wav](exponential_sweep.wav)
**Copyright**
This notebook is provided as [Open Educational Resources](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text/images/data are licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Sascha Spors, Selected Topics in Audio Signal Processing - Supplementary Material*.
| fde5143298e1d254127b57b95b908e62597dbbb9 | 491,829 | ipynb | Jupyter Notebook | electroacoustics/sweep_spectrum.ipynb | spatialaudio/-selected-topics-in-audio-signal-processing-lecture- | d56c54401ad15f72042baeba88a22809c6c9f85c | [
"MIT"
] | 14 | 2017-10-19T14:54:02.000Z | 2021-12-30T12:39:02.000Z | electroacoustics/sweep_spectrum.ipynb | spatialaudio/-selected-topics-in-audio-signal-processing-lecture- | d56c54401ad15f72042baeba88a22809c6c9f85c | [
"MIT"
] | null | null | null | electroacoustics/sweep_spectrum.ipynb | spatialaudio/-selected-topics-in-audio-signal-processing-lecture- | d56c54401ad15f72042baeba88a22809c6c9f85c | [
"MIT"
] | null | null | null | 1,083.323789 | 146,024 | 0.957579 | true | 1,672 | Qwen/Qwen-72B | 1. YES
2. YES | 0.941654 | 0.884039 | 0.832459 | __label__eng_Latn | 0.966119 | 0.772415 |
<b>Sketch the graph and find the focus and an equation of the directrix.</b>
<b>4. $x^2 + y = 0$</b>
<b>Rearranging the equation</b><br><br>
$x^2 = -y$<br><br>
$2p = -1$,<b> hence</b><br><br>
$p = -\frac{1}{2}$<br><br><br>
<b>Finding the focus</b><br><br>
$F = \frac{p}{2}$<br><br>
$F = \frac{-\frac{1}{2}}{2}$<br><br>
$F = -\frac{1}{2}\cdot \frac{1}{2}$<br><br>
$F = -\frac{1}{4}$<br><br>
$F(0,-\frac{1}{4})$<br><br><br>
<b>Finding the directrix</b><br><br>
$d = -\frac{p}{2}$<br><br>
$d = -(-\frac{1}{4})$<br><br>
$d : y = \frac{1}{4}$<br><br>
$V(0,0)$<br><br>
$F(0,-\frac{1}{4})$
<b>Plot of the parabola</b>
```python
from sympy import *
from sympy.plotting import plot_implicit
x, y = symbols("x y")
plot_implicit(Eq((x-0)**2, -1*(y+0)), (x,-3,3), (y,-3,3),
              title=u'Parabola plot', xlabel='x', ylabel='y');
```
| b25aa0622b84fb0a6385a733ac4343d9173e4313 | 12,111 | ipynb | Jupyter Notebook | Problemas Propostos. Pag. 172 - 175/04.ipynb | mateuschaves/GEOMETRIA-ANALITICA | bc47ece7ebab154e2894226c6d939b7e7f332878 | [
"MIT"
] | 1 | 2020-02-03T16:40:45.000Z | 2020-02-03T16:40:45.000Z | Problemas Propostos. Pag. 172 - 175/04.ipynb | mateuschaves/GEOMETRIA-ANALITICA | bc47ece7ebab154e2894226c6d939b7e7f332878 | [
"MIT"
] | null | null | null | Problemas Propostos. Pag. 172 - 175/04.ipynb | mateuschaves/GEOMETRIA-ANALITICA | bc47ece7ebab154e2894226c6d939b7e7f332878 | [
"MIT"
] | null | null | null | 130.225806 | 10,060 | 0.878623 | true | 393 | Qwen/Qwen-72B | 1. YES
2. YES | 0.896251 | 0.800692 | 0.717621 | __label__por_Latn | 0.361745 | 0.505606 |
# GPS
Based on D. Kalman's method, I expand the four constraint equations as follows:
$
\begin{align}
2.4x + 4.6y + 0.4z - 2(0.047^2)9.9999 t &= 1.2^2 + 2.3^2 + 0.2^2 + x^2 + y^2 + z^2 - 0.047^2 t^2 \\
-1x + 3y + 3.6z - 2(0.047^2)13.0681 t &= 0.5^2 + 1.5^2 + 1.8^2 + x^2 + y^2 + z^2 - 0.047^2 t^2 \\
-3.4x + 1.6y + 2.6z - 2(0.047^2)2.0251 t &= 1.7^2 + 0.8^2 + 1.3^2 + x^2 + y^2 + z^2 - 0.047^2 t^2 \\
3.4x + 2.8y - 1z - 2(0.047^2)10.5317 t &= 1.7^2 + 1.4^2 + 0.5^2 + x^2 + y^2 + z^2 - 0.047^2 t^2
\end{align},
$
And subtract the first equation from the rest.
$
\begin{align}
-3.4x - 1.6y + 3.2z - 2(0.047^2)3.0682 t &= -1.0299 \\
-5.8x - 3y + 2.2z + 2(0.047^2)7.9750 t &= -1.5499 \\
1x - 1.8y - 1.4z - 2(0.047^2)0.5318 t &= -1.6699
\end{align},
$
Converting this to matrix form:
$
\begin{equation}
\begin{bmatrix}
-3.4 & -1.6 & 3.2 & -0.01355 \\
-5.8 & -3 & 2.2 & 0.0352 \\
1 & -1.8 & -1.4 & -0.00235
\end{bmatrix}
\begin{bmatrix}
x \\ y \\ z \\ t
\end{bmatrix}
=
\begin{bmatrix}
-1.0299 \\ -1.5499 \\ -1.6699
\end{bmatrix}
\end{equation}.
$
# Newton's Method
To obtain $f(x)=0$ in terms of the state variables $x=(x, y, z, t)$, we can write the nonlinear distance equations as follows:
$
\begin{equation}
f(x) = \frac{1}{2}
\begin{bmatrix}
(x-1.2)^2+(y-2.3)^2+(z-0.2)^2-(0.047*(t-09.9999))^2 \\
(x+0.5)^2+(y-1.5)^2+(z-1.8)^2-(0.047*(t-13.0681))^2 \\
(x+1.7)^2+(y-0.8)^2+(z-1.3)^2-(0.047*(t-02.0251))^2 \\
(x-1.7)^2+(y-1.4)^2+(z+0.5)^2-(0.047*(t-10.5317))^2
\end{bmatrix}
= 0.
\end{equation}
$
```julia
f(x) = [(x[1]-1.2)^2+(x[2]-2.3)^2+(x[3]-0.2)^2-0.047*(x[4]-09.9999)^2,
(x[1]+0.5)^2+(x[2]-1.5)^2+(x[3]-1.8)^2-0.047*(x[4]-13.0681)^2,
(x[1]+1.7)^2+(x[2]-0.8)^2+(x[3]-1.3)^2-0.047*(x[4]-02.0251)^2,
(x[1]-1.7)^2+(x[2]-1.4)^2+(x[3]+0.5)^2-0.047*(x[4]-10.5317)^2]./2
f([1.2 2.3 0.2 9.9999])
```
4-element Array{Float64,1}:
0.0
2.82377449586
4.4404602765600005
0.7683539358599999
## Jacobian
$
\begin{equation}
Df(x_0) =
\begin{bmatrix}
(x-1.2) & (y-2.3) & (z-0.2) & -0.047(t-09.9999)\\
(x+0.5) & (y-1.5) & (z-1.8) & -0.047(t-13.0681)\\
(x+1.7) & (y-0.8) & (z-1.3) & -0.047(t-02.0251)\\
(x-1.7) & (y-1.4) & (z+0.5) & -0.047(t-10.5317)
\end{bmatrix}.
\end{equation}
$
```julia
D(x) = [x[1]-1.2 x[2]-2.3 x[3]-0.2 -0.047*(x[4]-09.9999);
x[1]+0.5 x[2]-1.5 x[3]-1.8 -0.047*(x[4]-13.0681);
x[1]+1.7 x[2]-0.8 x[3]-1.3 -0.047*(x[4]-02.0251);
x[1]-1.7 x[2]-1.4 x[3]+0.5 -0.047*(x[4]-10.5317)]/2
D([1.7 1.4 -0.5 10.5317])
```
4×4 Array{Float64,2}:
0.25 -0.45 -0.35 -0.0124973
1.1 -0.05 -1.15 0.0596054
1.7 0.3 -0.9 -0.199905
0.0 0.0 0.0 -0.0
# Solutions
```julia
using LinearAlgebra
function gps(x)
t = 1 # tolerance of answers
for i=1:1e5
d = f(x)\D(x)
x -= d
if norm(d)<1e-15 return x end
end
return x
end
s = gps([0 0 0 0])
```
1×4 Array{Float64,2}:
-0.152014 1.16284 0.239069 3.61735
```julia
b = [-1.0299;-1.5499;-1.6699]
A = [-3.4 1.6 3.2 -0.01355 ;
-5.8 -3 2.2 0.0352 ;
1 -1.8 -1.4 -0.00235]
s = b\A
```
1×4 Transpose{Float64,Array{Float64,1}}:
1.73099 0.961006 -0.698654 -0.00586697
## Adjourn
```julia
using Dates
println("mahdiar")
Dates.format(now(), "Y/U/d HH:MM")
```
mahdiar
"2021/March/19 11:13"
| 3a8bc4e4aa901efc53c60b136a6fc6843ffb08c1 | 6,698 | ipynb | Jupyter Notebook | HW08/3.ipynb | mahdiarsadeghi/NumericalAnalysis | 95a0914c06963b0510971388f006a6b2fc0c4ef9 | [
"MIT"
] | null | null | null | HW08/3.ipynb | mahdiarsadeghi/NumericalAnalysis | 95a0914c06963b0510971388f006a6b2fc0c4ef9 | [
"MIT"
] | null | null | null | HW08/3.ipynb | mahdiarsadeghi/NumericalAnalysis | 95a0914c06963b0510971388f006a6b2fc0c4ef9 | [
"MIT"
] | null | null | null | 23.584507 | 132 | 0.415796 | true | 1,884 | Qwen/Qwen-72B | 1. YES
2. YES | 0.826712 | 0.727975 | 0.601826 | __label__yue_Hant | 0.1684 | 0.236573 |
<a href="https://colab.research.google.com/github/MathewsJosh/mecanica-estruturas-ufjf/blob/main/%5BMAC023%5D_Trabalho_02.ipynb" target="_parent"></a>
# **MAC023 - Structural Mechanics (Mecânica das Estruturas)**
# ME-02 - Second Knowledge Assessment
Students:
Brian Luis Coimbra Maia
Mathews Edwirds Gomes Almeida
# General Conditions
This assessment aims to evaluate the knowledge acquired in the first part of the Structural Mechanics course.
---
The conditions below must be observed:
1. Teams will be formed, each with a minimum of **2** and a maximum of **3** members.
2. The assessment consists of submitting a copy of this notebook with the developed solutions by the stipulated delivery date.
3. On the submission of the assessment.
 * The documents required for submission are (1) the code developed by the team and (2) a video describing the solution.
 * The team must use this notebook template to develop the code.
 * The code may combine LaTeX and symbolic computation, or numerical computation when necessary.
 * The plots needed to present the solution must be embedded in the notebook.
4. On the distribution of the questions.
 * The number of questions will be the same for each group.
 * The same questions will be assigned to all groups.
 * Each question carries equal weight and the total score of the assessment is 100 points.
5. Teams must be formed by **23:59 on 18/01/2022** by filling in the spreadsheet [[MAC023] Formação das Equipes](https://docs.google.com/spreadsheets/d/1Dlftymao970nnrE4mu958iP8nMqKqSuhHiiLH91BKpQ/edit#gid=153704268).
6. Team formation can be tracked in the file indicated above. Each team will be identified by a letter in alphabetical order followed by the number 2 (A2, B2, C2, and so on). The file is open for editing and may be changed by the students until the stipulated date.
7. Teams formed after the date established for team formation will have their assessment grade multiplied by a coefficient of **0.80**.
8. The team must indicate, in the team registration file, the member responsible for submitting the project.
 * Only the designated member should upload the file to the platform.
9. Projects must be submitted by **23:59 on 04/02/2021** on the course platform by the designated member.
 * If the submission is made by a member other than the one indicated by the team, the assessment will be disregarded and will not be graded until the submission condition is satisfied.
10. Any questions or requests for clarification should be sent through the virtual classroom.
## (Q1) For the structure below, find: (a) the maximum deflection and (b) the maximum angle with respect to the horizontal.
---
A simply supported beam AB is subjected to a distributed load of intensity $w(x) = w_0 \sin(\frac{\pi x}{L})$, where $w_0$ is the maximum load intensity.
We integrate the equation for $w(x)$ successively in order to obtain the corresponding deflection, *slope*, moment, and shear equations.
$EIv^{(iv)} = -w_0 * sin(\frac{πx}{L}) \\
EIv''' = w_0 * \frac{L}{π} * cos(\frac{πx}{L}) + c1 \\
EIv'' = w_0 * (\frac{L}{π})^2 * sin(\frac{πx}{L}) + c1*x + c2 \\
EIv' = - w_0 * (\frac{L}{π})^3 * cos(\frac{πx}{L}) + c1*(\frac{x^2}{2}) + c2*x + c3 \\
EIv = - w_0 * (\frac{L}{π})^4 * sin(\frac{πx}{L}) + c1*(\frac{x^3}{6}) + c2*(\frac{x^2}{2}) + c3*x + c4$ \\
Setting the integration constants to zero:
$C1 = C2 = C3 = C4 = 0 ⇒ \\
EIv = - w_0 * (\frac{L}{π})^4 * sin(\frac{πx}{L})$
(a) Thus, the deflection is found by integrating the shear and load equations:
$v = - \frac{w_o * L^4}{π^4*EI} * sin(\frac{πx}{L})$
or, since the maximum deflection occurs at mid-span ($\frac{L}{2}$), we have:
$v = - \frac{w_o * L^4}{π^4*EI} * sin(\frac{πx}{L}) ⇒ \\
v(\frac{L}{2}) = - \frac{w_o * L^4}{π^4*EI} * sin(\frac{π*L}{L*2}) ⇒ \\
v(\frac{L}{2}) = - \frac{w_o * L^4}{π^4*EI} * sin(\frac{π}{2}) ⇒ \\
v(\frac{L}{2}) = - \frac{w_o * L^4}{π^4*EI} * 1
$
<br><br>
Thus, the maximum deflection is:
$δ_{max} = v(\frac{L}{2}) = + \frac{w_0 * L^4}{π^4 * EI}$
(b) To find the maximum rotation angle with respect to the horizontal axis, we substitute the positions of supports A($0$) and B($L$) into the slope equation obtained above:
For support A: \\
$θ_A (0) = v' = - \frac{w_0 * L^3}{π^3 * EI} * cos(\frac{π0}{L}) ⇒ \\
θ_A (0) = - \frac{w_0 * L^3}{π^3 * EI} * cos(0) ⇒ \\
θ_A (0) = - \frac{w_0 * L^3}{π^3 * EI} * 1
$
<br><br>
For support B:
$θ_B (L) = v' = - \frac{w_0 * L^3}{π^3 * EI} * cos(\frac{πL}{L}) ⇒ \\
θ_B (L) = v' = - \frac{w_0 * L^3}{π^3 * EI} * cos(π) ⇒ \\
θ_B (L) = v' = - \frac{w_0 * L^3}{π^3 * EI} * (-1) ⇒ \\
θ_B (L) = v' = + \frac{w_0 * L^3}{π^3 * EI}$
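The successive integration above can be checked symbolically; a minimal sympy sketch (repeated indefinite integration reproduces the particular solution with all integration constants equal to zero, as argued above):
```python
import sympy as sp

x, L, w0, E, I = sp.symbols('x L w_0 E I', positive=True)

load = -w0*sp.sin(sp.pi*x/L)        # EI*v'''' = -w(x)
EIv = load
for _ in range(4):                  # integrate four times; constants are zero as argued above
    EIv = sp.integrate(EIv, x)
v = EIv/(E*I)

print(sp.simplify(v.subs(x, L/2)))         # maximum deflection -> -L**4*w_0/(pi**4*E*I)
print(sp.simplify(v.diff(x).subs(x, 0)))   # slope at support A -> -L**3*w_0/(pi**3*E*I)
print(sp.simplify(v.diff(x).subs(x, L)))   # slope at support B ->  L**3*w_0/(pi**3*E*I)
```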
## (Q2) Determine the ratio $P/Q$ such that (a) the angle at $C$ is zero and (b) the deflection at $B$ is zero
---
First, we determine the bending moment and the partial derivatives for segment BC:
$ M_{BC} = - Px ,\textrm{ for }(0 ≤ x ≤ L) $ \\
$\frac{∂M_{BC}}{∂Q} = 0$ \\
$\frac{∂M_{BC}}{∂P} = -x$ \\
Next, we determine the bending moment and the partial derivatives for segment AB:
$ M_{AB} = Q(L - x) - Px ,\textrm{ for }(0 ≤ x ≤ L) $ \\
$\frac{∂M_{AB}}{∂Q} = L - x $ \\
$\frac{∂M_{AB}}{∂P} = -x $ \\
(a) First, we compute the deflection at C:
$δ_C = \frac{1}{EI} \displaystyle \int_{0}^{L} (M_{BC})(\frac{∂M_{BC}}{∂P})dx + \frac{1}{EI}
\displaystyle \int_{0}^{L} (M_{AB})(\frac{∂M_{AB}}{∂P})dx$ \\
$δ_C = \frac{1}{EI} \displaystyle \int_{0}^{L} (-Px)(-x)dx + \frac{1}{EI}
\displaystyle \int_{0}^{L} (Q*(L - x) - Px)(-x)dx$ \\
$δ_C = L^3 * (\frac{2 P}{3 EI} - \frac{Q}{6 EI}) = \frac{L^3 * (4 P - Q)}{6 EI}$ <br><br>
Then we differentiate the deflection to find the angle at C: \\
$δ_C' = θ_C ⇒ \frac{\partial f}{\partial L} \frac{L^3 (4 P - Q)}{6 EI} = \frac{(4 P - Q) L^2}{2 EI}$ \\
$θ_C = \frac{L^2 * (4 P - Q)}{2 EI}$ <br><br>
Therefore, to find the ratio $\frac{P}{Q}$ at C, we set $θ_C$ equal to 0:
$θ_C = \frac{L^2 * (4 P - Q)}{2 EI} = 0$ \\
$(4 P - Q) = 0$ \\
$4 P = Q$ \\
$\frac{P}{Q} = \frac{1}{4} \textrm{ or } 0.25$
(b) Now we compute the deflection at B:
$δ_B = \frac{1}{EI} \displaystyle \int_{0}^{L} (M_{AB})(\frac{∂M_{AB}}{∂Q})dx$
$δ_B = \frac{1}{EI} \displaystyle \int_{0}^{L} (Q*(L - x) - Px)(L-x)dx$
$δ_B = \frac{L^3 (2Q - P)}{6 EI}$ <br><br>
Therefore, to find the ratio $\frac{P}{Q}$, we set the deflection at B to 0 ($δ_B = 0$):
$δ_B = \frac{L^3 (2Q - P)}{6 EI} = 0$ \\
$(2 Q - P) = 0$ \\
$2Q = P $ \\
$\frac{P}{Q} = 2$
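A quick symbolic check of the two Castigliano integrals and of the resulting ratios; a sketch with sympy, reusing the moment expressions above:
```python
import sympy as sp

x, L, P, Q, EI = sp.symbols('x L P Q EI', positive=True)

M_BC = -P*x                  # segment BC, 0 <= x <= L
M_AB = Q*(L - x) - P*x       # segment AB, 0 <= x <= L

delta_C = (sp.integrate(M_BC*sp.diff(M_BC, P), (x, 0, L))
           + sp.integrate(M_AB*sp.diff(M_AB, P), (x, 0, L)))/EI
delta_B = sp.integrate(M_AB*sp.diff(M_AB, Q), (x, 0, L))/EI

print(sp.factor(delta_C))    # expected: L**3*(4*P - Q)/(6*EI)
print(sp.factor(delta_B))    # expected: L**3*(2*Q - P)/(6*EI)

# (a) zero angle at C      -> 4P - Q = 0 -> P/Q = 1/4
# (b) zero deflection at B -> 2Q - P = 0 -> P/Q = 2
```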
## (Q3) Determine the displacement at mid-span
---
```
# Import libraries and initialize parameters
import sympy as sp
from sympy import N
sp.init_printing(use_unicode=False, wrap_line=False, no_global=True)
x, l, w, E, I, a, b = sp.var('x L w E I a b', real=True, positive=True)
# Real bending moment equations, by segment
MR = [(w/8)*(3*l*x - 4*x**2), (w/8)*(l**2-l*x)] # AC and CB
print("Real bending moment equations by segment:")
display(MR)
# Virtual bending moment equations, by segment
MV = [x/2, x/2-(x-l/2)]
print("\n")
print("Virtual bending moment equations by segment:")
display(MV)
# Integrate over each segment
delta1 = sp.integrate(MR[0]*MV[0]/(2*E*I),(x,0,l/2))
delta2 = sp.integrate(MR[1]*MV[1]/(E*I),(x,l/2,l))
print("\n")
print("The displacement at mid-span is the sum of the two integrals:")
delta1+delta2
```
Another solution is to split the figure into two segments, AC and CB, and then assemble their deflection equations:
$y_{AC} = - \frac{wx}{384*2*EI}*(9L^3 -24Lx^2 + 16x^3)$ \\
$y_{CB} = - \frac{wx}{384*EI}*(8x^3 -24Lx^2 + 17L^2x - L^3)$ \\
Differentiating these equations, we obtain the *slopes* for segments AC and CB:
$Θ_{AC} = - \frac{w}{384*2*EI}*(9L^3 -72Lx^2 + 64x^3)$ \\
$Θ_{CB} = - \frac{wL}{384*EI}*(24x^2 -48Lx + 17L^2)$ \\
Thus, we simply substitute the position ($L$) into the $Θ_{CB}$ equation to obtain the displacement at mid-span.
$Θ_{B} (x)= - \frac{wL}{384*EI}*(24x^2 -48Lx + 17L^2)$ \\
$Θ_{B} (L)= \frac{7wL^4}{1536*EI}$ \\
## (Q4) Find an expression for (a) the **vertical** displacement at the point of application of the load, **which is applied at one third of the horizontal length from point $B$**, and (b) the horizontal displacement of point $C$
---
##A)
```
# -- real bending moment equations, by segment
MR=[
    0, # segment AB, origin at A
    w*x*(2/3)/l, # segment BW, origin at B
    w*(1/3)*(l-x)/l, # segment WC, origin at W
    0, # segment CD, origin at C
]
# Virtual bending moment equations, by segment
MV=[
    x, # segment AB, origin at A
    x, # segment BW, origin at B
    x, # segment WC, origin at W
    x, # segment CD, origin at C
]
# Integrate over each segment
delta1 = sp.integrate(MR[0]*MV[0]/(E*I),(x,0,l))
delta2 = sp.integrate(MR[1]*MV[1]/(E*I),(x,0,l/3))
print("The displacement at the point of load application is the sum of the integrals over AB and BW:")
desloc_W = delta1+delta2
N(desloc_W, 4)
```
##B)
```
delta1 = sp.integrate(MR[0]*MV[0]/(E*I),(x,0,l))
delta2 = sp.integrate(MR[1]*MV[1]/(E*I),(x,0,l/3))
delta3 = sp.integrate(MR[2]*MV[2]/(E*I),(x,l/3,l))
print("O deslocamento horizontal do ponto C é dado pela soma dos resultados das integrações envolvendo AB, BW e WC:")
desloc_C = delta1+delta2+delta3
N(desloc_C, 4)
```
## (Q5) Determine the vertical displacement at the free end of the cantilever beam. Note that the bar has a linearly varying height, which affects its geometric properties. Modify the unit-load method formulation to accommodate the height variation along the length.
Data:
tf - tonne-force
```
print("Primeiro equacionamos os momentos reais de cada uma das barras da figura")
MR = 2*x - 2*l
display(MR)
print("Agora equacionamos os momentos virtuais em cada uma das barra da figura")
MV = x - l
display(MV)
print("Assim temos:")
I = -1.575E-3*x/l + 1.8E-3
display(I)
```
```
delta_B = sp.integrate(MR*MV/(E*I),(x,0,l))
display(delta_B)
```
Formula dos Trapézios (Δx=L/m):
$\int_{x_0}^{x_n} y(x) dx ≈ (\frac{Δx}{2})(y_0+2y_1+2y_2+...+2y_{m-1}+y_m)$
Fórmula de Simpson (Δx=L/2m):
$\int_{x_0}^{x_n} y(x)dx ≈ (\frac{Δx}{3})(y_0+4y_1+2y_2+4y_3+...+4y_{2m-3}+2y_{2m-2}+4y_{2m-1}+y_{2m})$
Fórmula de Simpson (Δx=L/3):
$\int_{x_0}^{x_n} y(x)dx ≈ (\frac{3Δx}{8})(y_0+3y_1+3y_2+2y_3+...+2y_{3m-3}+3y_{3m-2}+3y_{3m-1}+y_{3m})$
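For reference, a minimal sketch of the first two composite rules above, written for a generic integrand `y(x)` (the beam integrand itself is not repeated here):
```python
import numpy as np

def trapezoid(y, a, b, m):
    """Composite trapezoidal rule with m intervals (dx = (b-a)/m)."""
    xs = np.linspace(a, b, m + 1)
    dx = (b - a)/m
    ys = y(xs)
    return dx/2*(ys[0] + 2*ys[1:-1].sum() + ys[-1])

def simpson(y, a, b, m):
    """Composite Simpson rule with 2m intervals (dx = (b-a)/(2m))."""
    xs = np.linspace(a, b, 2*m + 1)
    dx = (b - a)/(2*m)
    ys = y(xs)
    return dx/3*(ys[0] + 4*ys[1:-1:2].sum() + 2*ys[2:-1:2].sum() + ys[-1])

# quick sanity check: both rules approach 2 for the half sine wave
print(trapezoid(np.sin, 0, np.pi, 24), simpson(np.sin, 0, np.pi, 12))
```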
```
# Import libraries and initialize parameters
import pandas as pd
# Define the initial data
h1, h2, L, b = 0.3, 0.6, 6, 0.1 #m
P = 2 #tf
E = 2100000 #tf/m^2
divisoes = ['6 (Δx=1)', '12 (Δx=1/2)', '18 (Δx=1/3)', '24 (Δx=1/4)']
trapezios = [6.2421, 6.2333, 6.2320, 6.2314]
simpson_l_2 = [6.2365, 6.2310, 6.2307, 6.2306]
simpson_l_3 = [6.2420, 6.2315, 6.2308, 6.2307]
print("A partir dos valores dados, podemos encontrar os deslocamentos verticais dos pontos para diferentes divisões da viga, usando as formulas dos Trápezios (Δx=L/m), Simpson(Δx=L/2m) e Simpson(Δx=l/3m)\n")
d = {'Intervalos': divisoes, 'Trápezios (Δx=L/m)': trapezios, 'Simpson(Δx=L/2m)':simpson_l_2, 'Simpson(Δx=l/3m)':simpson_l_3}
df = pd.DataFrame(data=d)
print(df.to_string(index=False))
```
    From the given values, we can find the vertical displacements of the points for different subdivisions of the beam, using the trapezoidal rule (Δx=L/m), Simpson's rule (Δx=L/2m) and Simpson's 3/8 rule (Δx=l/3m)
Intervalos Trápezios (Δx=L/m) Simpson(Δx=L/2m) Simpson(Δx=l/3m)
6 (Δx=1) 6.2421 6.2365 6.2420
12 (Δx=1/2) 6.2333 6.2310 6.2315
18 (Δx=1/3) 6.2320 6.2307 6.2308
24 (Δx=1/4) 6.2314 6.2306 6.2307
| 30445e2e790d54efda02f55fa75417c733cd9f67 | 251,590 | ipynb | Jupyter Notebook | [MAC023]_Trabalho_02.ipynb | MathewsJosh/mecanica-estruturas-ufjf | 7f94b9a7cdacdb5a22b8fc491959f309be65acfb | [
"MIT"
] | null | null | null | [MAC023]_Trabalho_02.ipynb | MathewsJosh/mecanica-estruturas-ufjf | 7f94b9a7cdacdb5a22b8fc491959f309be65acfb | [
"MIT"
] | null | null | null | [MAC023]_Trabalho_02.ipynb | MathewsJosh/mecanica-estruturas-ufjf | 7f94b9a7cdacdb5a22b8fc491959f309be65acfb | [
"MIT"
] | null | null | null | 323.796654 | 56,162 | 0.895894 | true | 4,776 | Qwen/Qwen-72B | 1. YES
2. YES | 0.754915 | 0.812867 | 0.613646 | __label__por_Latn | 0.994021 | 0.264035 |
# Direct Optimal Control Plotting
```python
import sys; sys.path.append(2*'../') # go n dirs back
from src import *
# Change device according to your configuration
# device = torch.device('cuda:1') if torch.cuda.is_available() else torch.device('cpu')
device = torch.device('cpu') # feel free to change :)
```
```python
import time
import torch
import torch.nn as nn
import matplotlib.pyplot as plt
from torchdyn.core import NeuralODE
from torchdyn.datasets import *
from torchdyn.numerics import odeint, Euler, HyperEuler
```
## Optimal Control
We want to control an inverted pendulum and stabilize it in the upright position. The equations in Hamiltonian form describing an inverted pendulum with a torsional spring are as follows:
$$\begin{equation}
\begin{bmatrix} \dot{q}\\ \dot{p}\\ \end{bmatrix} =
\begin{bmatrix}
0& 1/m \\
-k& -\beta/m\\
\end{bmatrix}
\begin{bmatrix} q\\ p\\ \end{bmatrix} -
\begin{bmatrix}
0\\
mgl \sin{q}\\
\end{bmatrix}+
\begin{bmatrix}
0\\
1\\
\end{bmatrix} u
\end{equation}$$
```python
class ControlledPendulum(nn.Module):
"""
Inverted pendulum with torsional spring
"""
def __init__(self, u, m=1., k=.5, l=1., qr=0., β=.01, g=9.81):
super().__init__()
self.u = u # controller (nn.Module)
self.nfe = 0 # number of function evaluations
self.cur_f = None # current function evaluation
self.cur_u = None # current controller evaluation
self.m, self.k, self.l, self.qr, self.β, self.g = m, k, l, qr, β, g # physics
def forward(self, t, x):
self.nfe += 1
q, p = x[..., :1], x[..., 1:]
self.cur_u = self.u(t, x)
dq = p/self.m
dp = -self.k*(q - self.qr) - self.m*self.g*self.l*torch.sin(q) \
-self.β*p/self.m + self.cur_u
self.cur_f = torch.cat([dq, dp], -1)
return self.cur_f
from math import pi as π
x_star = torch.Tensor([0., 0.]).to(device)
cost_func = IntegralCost(x_star, P=1, Q=1) # final position is more important
# Time span
dt = 0.2
t0, tf = 0, 3 # initial and final time for controlling the system
steps = int((tf - t0)/dt) + 1 # so we have a time step of 0.2s
t_span = torch.linspace(t0, tf, steps).to(device)
```
```python
u_euler = torch.load('saved_models/u_euler.pt')
u_hyper = torch.load('saved_models/u_hyper.pt')
u_mp = torch.load('saved_models/u_mp.pt')
u_rk4 = torch.load('saved_models/u_rk4.pt')
```
```python
# Testing the controller on the real system
frac = 0.95
init_dist = torch.distributions.Uniform(torch.Tensor([-frac*π, -frac*π]), torch.Tensor([frac*π, frac*π]))
x0 = init_dist.sample((50,)).to(device)
t0, tf, steps = 0, 5, 10*10 + 1 # nominal trajectory
t_span_fine = torch.linspace(t0, tf, steps).to(device)
sys = ControlledPendulum(None)
sys.u = u_euler; _, traj_eu = odeint(sys, x0, t_span_fine, solver='tsit5', atol=1e-5, rtol=1e-5)
sys.u = u_hyper; _, traj_hyper = odeint(sys, x0, t_span_fine, solver='tsit5', atol=1e-5, rtol=1e-5)
sys.u = u_mp; _, traj_mp = odeint(sys, x0, t_span_fine, solver='tsit5', atol=1e-5, rtol=1e-5)
sys.u = u_rk4; _, traj_rk4 = odeint(sys, x0, t_span_fine, solver='tsit5', atol=1e-5, rtol=1e-5)
print(cost_func(traj_eu), cost_func(traj_hyper), cost_func(traj_mp), cost_func(traj_rk4))
t_span_fine = t_span_fine.detach().cpu()
traj_eu, traj_hyper, traj_mp, traj_rk4 = traj_eu.detach().cpu(), traj_hyper.detach().cpu(), traj_mp.detach().cpu(), traj_rk4.detach().cpu()
# Plotting
fig, axs = plt.subplots(4, 2, figsize=(16,8))
for i in range(len(x0)):
axs[0,0].plot(t_span_fine, traj_eu[:,i,0], 'r', alpha=.3, label='p' if i==0 else None)
axs[0,1].plot(t_span_fine, traj_eu[:,i,1], 'r-.', alpha=.3, label='q' if i==0 else None)
axs[1,0].plot(t_span_fine, traj_hyper[:,i,0], color='orange', alpha=.3, label='p' if i==0 else None)
axs[1,1].plot(t_span_fine, traj_hyper[:,i,1], color='orange', linestyle='-.', alpha=.3, label='q' if i==0 else None)
axs[2,0].plot(t_span_fine, traj_mp[:,i,0], color='green', alpha=.3, label='p' if i==0 else None)
axs[2,1].plot(t_span_fine, traj_mp[:,i,1], color='green', linestyle='-.', alpha=.3, label='q' if i==0 else None)
axs[3,0].plot(t_span_fine, traj_rk4[:,i,0], color='purple', alpha=.3, label='p' if i==0 else None)
axs[3,1].plot(t_span_fine, traj_rk4[:,i,1], color='purple', linestyle='-.', alpha=.3, label='q' if i==0 else None)
# ax.legend()
# ax.set_title('Controlled trajectories')
# ax.set_xlabel(r"$t~[s]$")
```
Great 🎉
Training the controller with `HyperEuler` resulted in a working controller that stabilizes the system in the given time. More importantly, while in the first part we used a high-accuracy solver to obtain similar results, training for the same number of epochs with the same hyperparameters but using the hypersolver took less time than the higher-order solvers! Now, let's see the results
```python
# plt.rcParams.update({
# "text.usetex": True,
# "font.family": "serif",
# "font.serif": ["Palatino"],
# })
```
```python
# Cool plots
from matplotlib import colors, cm
import matplotlib.gridspec as gridspec
norm = colors.Normalize(vmin=0, vmax=16)
controller = [u_hyper, u_euler, u_mp, u_rk4]
colors = ['tab:orange', 'tab:red', 'tab:green', 'tab:purple']
labels = ['HyperEuler', 'Euler', 'Midpoint', 'RK4']
cmap='bone'
titles = ['HyperEuler', 'Euler', 'Midpoint', 'RK4']
sys = []
fig = plt.figure(figsize=(11,5), constrained_layout=False)
spec = gridspec.GridSpec(ncols=4, nrows=8, figure=fig)
# Time span
t0, tf = 0, 5 # initial and final time for controlling the system
steps = int((tf - t0)/dt) + 1 # so we have a time step of 0.2s
t_span = torch.linspace(t0, tf, steps*20).to(device) # nominal
lim = π
graph_lim = lim*1.6
# x0 = torch.Tensor(50, 2).uniform_(-lim, lim).to(device)
x0_test = init_dist.sample((1000,)).to(device) # test on 1000 trajectories from random points
axs = []
axs_q, axs_p = [], []
for i in range(4):
axs.append(fig.add_subplot(spec[:4, i]))
axs_q.append(fig.add_subplot(spec[4:6, i]))
axs_p.append(fig.add_subplot(spec[6:8, i])) #, sharex=axs_q[-1]
for u, ax, i in zip(controller, axs, range(len(controller))):
sys.append(ControlledPendulum(u).to(device))
n_grid = 50
x = torch.linspace(-graph_lim, graph_lim, n_grid).to(device)
Q, P = torch.meshgrid(x, x) ; z = torch.cat([Q.reshape(-1, 1), P.reshape(-1, 1)], 1)
f = sys[-1](0, z).detach().cpu()
Fq, Fp = f[:,0].reshape(n_grid, n_grid), f[:,1].reshape(n_grid, n_grid)
val = sys[-1].u(0, z).detach().cpu()
U = val.reshape(n_grid, n_grid)
ax.streamplot(Q.T.detach().cpu().numpy(), P.T.detach().cpu().numpy(),
Fq.T.detach().cpu().numpy(), Fp.T.detach().cpu().numpy(), color='black', density=0.6, linewidth=0.5)
ax.set_xlim([-graph_lim*0.7, graph_lim*0.7]) ; ax.set_ylim([-graph_lim, graph_lim])
traj = odeint(sys[-1], x0, t_span, solver='dopri5')[1].detach().cpu()
for j in range(traj.shape[1]):
ax.plot(traj[:,j,0], traj[:,j,1], c=colors[i], alpha=.4, label=labels[i] if j==0 else "")
axs_q[i].plot(t_span, traj[:,j,0], c=colors[i], alpha=.4)
axs_p[i].plot(t_span, traj[:,j,1], c=colors[i], alpha=.4)
traj_test = odeint(sys[-1], x0_test, t_span, solver='dopri5')[1].detach().cpu()
print(titles[i], ' loss:\n', cost_func(traj.to(device)).item())
distance_mean, distance_std = traj_test[-1,:,0].mean(), traj_test[-1,...,0].std()
# Not actually "error", this the mean value of positions and std
print(titles[i], ' error mean:', distance_mean, ' error_std: ', distance_std)
ax.set_xlabel(r'$q$', labelpad=-10)
ax.set_ylabel(r'$p$', labelpad=0)
ax.set_xticks([-3, 3])
ax.set_yticks([-5, 0, 5])
ax.set_title(titles[i], family='cursive')
plt.suptitle('Pendulum Controlled Trajectories',fontsize=16, weight='bold', y=0.97)
axs_q[0].set_ylabel(r'$q(t)$', labelpad=0)
axs_p[0].set_ylabel(r'$p(t)$', labelpad=0)
for axq, axp in zip(axs_q, axs_p):
axq.set_xticks([0, 1, 2, 3, 4, 5])
axp.set_xticks([0, 1, 2, 3, 4, 5])
axp.set_xlabel(r'$t$', labelpad=0)
axq.tick_params(labelbottom=False)
axq.set_yticks([-4, 0, 4])
axp.set_yticks([-5, 0, 5])
axq.set_ylim([-4, 4])
axp.set_ylim([-5, 5])
axs_q[0].set_yticks([-4, 0, 4])
axs_p[0].set_yticks([-5, 0, 5])
for i in range(1, 4, 1):
axs_q[i].tick_params(labelleft=False)
axs_p[i].tick_params(labelleft=False)
pad = 3
axs[0].annotate("Phase Space", xy=(0, 0.5), xytext=(-axs[0].yaxis.labelpad - pad, 0),
xycoords=axs[0].yaxis.label, textcoords='offset points',
size='large', ha='right', va='center', rotation='vertical', fontweight='bold')
axs_q[0].annotate("Position", xy=(0, 0.5), xytext=(-axs_q[0].yaxis.labelpad - pad, 0),
xycoords=axs_q[0].yaxis.label, textcoords='offset points',
size='large', ha='right', va='center', rotation='vertical', fontweight='bold')
axs_p[0].annotate("Momentum", xy=(0, 0.5), xytext=(-axs_p[0].yaxis.labelpad - pad, 0),
xycoords=axs_p[0].yaxis.label, textcoords='offset points',
size='large', ha='right', va='center', rotation='vertical', fontweight='bold')
fig.tight_layout(h_pad=0, w_pad=-0.1)
# Saving
# import tikzplotlib
fig.savefig('media/pendulum_control.pdf', bbox_inches = 'tight')
# tikzplotlib.save("media/pendulum_control.tex")
```
## Calculating MACs and plotting losses
We can use the number of MACs (multiply-accumulate operations) instead of NFEs, so we obtain a simpler, hardware-independent measure of the computational cost of each solver.
```python
from ptflops import get_model_complexity_info
bs = 1024 # batch size we used in training
def get_macs(net:nn.Module):
params = []
for p in net.parameters(): params.append(p.shape)
with torch.cuda.device(0):
macs, _ = get_model_complexity_info(net, (bs, params[0][1]), as_strings=False)
return int(macs)
```
```python
controller = nn.Sequential(
nn.Linear(2, 64),
nn.Softplus(),
nn.Linear(64, 64),
nn.Softplus(),
nn.Linear(64, 64),
nn.Tanh(),
nn.Linear(64, 1))
hypersolver = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1)).to(device)
hs_macs = get_macs(hypersolver)
u_macs = get_macs(controller)
print('Controller MACs per NFE:', u_macs, '\nHypersolver MACs per NFE:', hs_macs)
```
```python
unit_mac_hyper = u_macs + hs_macs
unit_mac_euler = u_macs
unit_mac_midpoint = u_macs*2 # midpoint has to evaluate the vector field twice per NFE
unit_mac_rk4 = u_macs*4 # rk4 has to do it 4 times
```
```python
multiplier = 2 # mac to flops
macs_hyper = unit_mac_hyper*multiplier
macs_euler = unit_mac_euler*multiplier
macs_midpoint = unit_mac_midpoint*multiplier
macs_rk4 = unit_mac_rk4*multiplier
```
## Getting FLOPs per each evaluation
```python
# # losses from above
# # Flops are 2*GMACs
# # https://github.com/sovrasov/flops-counter.pytorch/issues/16
print('HyperEuler FLOPs:', macs_hyper)
print('Euler FLOPs:', macs_euler)
print('Midpoint FLOPs:', macs_midpoint)
print('RK4 FLOPs:', macs_rk4)
```
HyperEuler FLOPs: 17367492
Euler FLOPs: 17170818
    Midpoint FLOPs: 34341636
RK4 FLOPs: 68683272
```python
# Ratio
print('Hyper vs euler:', macs_hyper/macs_euler*100, '%')
print('Hyper vs midpoint:', 1-(macs_hyper/macs_midpoint)*100, '%')
```
Hyper vs euler: 101.14539680054845 %
Hyper vs midpoint: -49.57269840027423 %
| d52e17236757a27674e2d9f16da41d778c784fa0 | 503,325 | ipynb | Jupyter Notebook | hypersolvers-control/experiments/pendulum/03c_plot.ipynb | Juju-botu/diffeqml-research | aa796c87447e5299ec4f25a07fc4d032afb1f63e | [
"Apache-2.0"
] | null | null | null | hypersolvers-control/experiments/pendulum/03c_plot.ipynb | Juju-botu/diffeqml-research | aa796c87447e5299ec4f25a07fc4d032afb1f63e | [
"Apache-2.0"
] | null | null | null | hypersolvers-control/experiments/pendulum/03c_plot.ipynb | Juju-botu/diffeqml-research | aa796c87447e5299ec4f25a07fc4d032afb1f63e | [
"Apache-2.0"
] | null | null | null | 1,014.768145 | 245,130 | 0.950142 | true | 3,768 | Qwen/Qwen-72B | 1. YES
2. YES | 0.824462 | 0.798187 | 0.658075 | __label__eng_Latn | 0.357218 | 0.367259 |
Before you turn this problem in, make sure everything runs as expected. First, **restart the kernel** (in the menubar, select Kernel$\rightarrow$Restart) and then **run all cells** (in the menubar, select Cell$\rightarrow$Run All).
Make sure you fill in any place that says `YOUR CODE HERE` or "YOUR ANSWER HERE", as well as your name and collaborators below:
```python
NAME = "Prabal CHowdhury"
COLLABORATORS = ""
```
---
# Differentiation: Forward, Backward, And Central
---
## Task 1: Differentiation
We have already learnt about *forward differentiation*, *backward differentiation* and *central differentiation*. In this part of the assignment we will write methods to calculate these values and check how they perform.
The equations are as follows,
\begin{align}
\text{forward differentiation}, f^\prime(x) \simeq \frac{f(x+h)-f(x)}{h} \tag{4.6} \\
\text{backward differentiation}, f^\prime(x) \simeq \frac{f(x)-f(x-h)}{h} \tag{4.7} \\
\text{central differentiation}, f^\prime(x) \simeq \frac{f(x+h)-f(x-h)}{2h} \tag{4.8}
\end{align}
## Importing libraries
```python
import numpy as np
import matplotlib.pyplot as plt
from numpy.polynomial import Polynomial
```
Here, `forward_diff(f, h, x)`, `backward_diff(f, h, x)`, and `central_diff(f, h, x)` calculate the *forward*, *backward*, and *central* difference approximations, respectively. Finally, the `error(f, f_prime, h, x)` method evaluates the three approximations for various $h$ and returns the errors.
Later we will run some code to test out performance. The first one is done for you.
```python
def forward_diff(f, h, x):
return (f(x+h) - f(x)) / h
```
```python
def backward_diff(f, h, x):
# --------------------------------------------
# YOUR CODE HERE
    return (f(x) - f(x-h)) / h
# --------------------------------------------
```
```python
def central_diff(f, h, x):
# --------------------------------------------
# YOUR CODE HERE
return (f(x+h) - f(x-h)) / (2*h)
# --------------------------------------------
```
```python
def error(f, f_prime, h, x):
Y_correct = f_prime(x)
f_error = np.array([])
b_error = np.array([])
c_error = np.array([])
for h_i in h:
# for different values of h (h_i)
# calculate the error at the point x for (i) forward method
# (ii) backward method
# (ii) central method
# the first one is done for you
f_error_h_i = forward_diff(f, h_i, x) - Y_correct
f_error = np.append(f_error, f_error_h_i)
b_error_h_i = backward_diff(f, h_i, x) - Y_correct
b_error = np.append(b_error, b_error_h_i)
c_error_h_i = central_diff(f, h_i, x) - Y_correct
c_error = np.append(c_error, c_error_h_i)
return f_error, b_error, c_error
```
## Plot1
Polynomial and Actual Derivative Function
```python
fig, ax = plt.subplots()
ax.axhline(y=0, color='k')
p = Polynomial([2.0, 1.0, -6.0, -2.0, 2.5, 1.0])
data = p.linspace(domain=[-2.4, 1.5])
ax.plot(data[0], data[1], label='Function')
p_prime = p.deriv(1)
data2 = p_prime.linspace(domain=[-2.4, 1.5])
ax.plot(data2[0], data2[1], label='Derivative')
ax.legend()
```
```python
h = 1
fig, bx = plt.subplots()
bx.axhline(y=0, color='k')
x = np.linspace(-2.0, 1.3, 50, endpoint=True)
y = forward_diff(p, h, x)
bx.plot(x, y, label='Forward; h=1')
y = backward_diff(p, h, x)
bx.plot(x, y, label='Backward; h=1')
y = central_diff(p, h, x)
bx.plot(x, y, label='Central; h=1')
data2 = p_prime.linspace(domain=[-2.0, 1.3])
bx.plot(data2[0], data2[1], label='actual')
bx.legend()
```
```python
h = 0.1
fig, bx = plt.subplots()
bx.axhline(y=0, color='k')
x = np.linspace(-2.2, 1.3, 50, endpoint=True)
y = forward_diff(p, h, x)
bx.plot(x, y, label='Forward; h=0.1')
y = backward_diff(p, h, x)
bx.plot(x, y, label='Backward; h=0.1')
y = central_diff(p, h, x)
bx.plot(x, y, label='Central; h=0.1')
data2 = p_prime.linspace(domain=[-2.2, 1.3])
bx.plot(data2[0], data2[1], label='actual')
bx.legend()
```
```python
h = 0.01
fig, bx = plt.subplots()
bx.axhline(y=0, color='k')
x = np.linspace(-2.2, 1.3, 50, endpoint=True)
y = forward_diff(p, h, x)
bx.plot(x, y, label='Forward; h=0.01')
y = backward_diff(p, h, x)
bx.plot(x, y, label='Backward; h=0.01')
y = central_diff(p, h, x)
bx.plot(x, y, label='Central; h=0.01')
data2 = p_prime.linspace(domain=[-2.2, 1.3])
bx.plot(data2[0], data2[1], label='actual')
bx.legend()
```
```python
fig, bx = plt.subplots()
bx.axhline(y=0, color='k')
h = np.array([1., 0.55, 0.3, .17, 0.1, 0.055, 0.03, 0.017, 0.01])
err = error(p, p_prime, h, 2.0)
bx.plot(h, err[0], label='Forward')
bx.plot(h, err[1], label='Backward')
bx.plot(h, err[2], label='Central')
bx.legend()
```
```python
```
| 2cfc6b18a9465c463bf20d6d07db11d4c4509b8a | 136,578 | ipynb | Jupyter Notebook | Forward_Backward_and_Central_Differentiation.ipynb | PrabalChowdhury/CSE330-NUMERICAL-METHODS | aabfea01f4ceaecfbb50d771ee990777d6e1122c | [
"MIT"
] | null | null | null | Forward_Backward_and_Central_Differentiation.ipynb | PrabalChowdhury/CSE330-NUMERICAL-METHODS | aabfea01f4ceaecfbb50d771ee990777d6e1122c | [
"MIT"
] | null | null | null | Forward_Backward_and_Central_Differentiation.ipynb | PrabalChowdhury/CSE330-NUMERICAL-METHODS | aabfea01f4ceaecfbb50d771ee990777d6e1122c | [
"MIT"
] | null | null | null | 213.737089 | 31,666 | 0.894244 | true | 1,522 | Qwen/Qwen-72B | 1. YES
2. YES | 0.907312 | 0.91118 | 0.826724 | __label__eng_Latn | 0.580442 | 0.759091 |
# Artificial Intelligence in Finance
## Normative Finance
Dr Yves J Hilpisch | The AI Machine
http://aimachine.io | http://twitter.com/dyjh
## Uncertainty and Risk
```python
import numpy as np
```
```python
S0 = 10
B0 = 10
```
```python
S1 = np.array((20, 5))
B1 = np.array((11, 11))
```
```python
M0 = np.array((S0, B0))
M0
```
```python
M1 = np.array((S1, B1)).T
M1
```
```python
K = 14.5
```
```python
C1 = np.maximum(S1 - K, 0)
C1
```
```python
phi = np.linalg.solve(M1, C1)
phi
```
```python
np.allclose(C1, np.dot(M1, phi))
```
```python
C0 = np.dot(M0, phi)
C0
```
## Expected Utility Theory
```python
def u(x):
return np.sqrt(x)
```
```python
phi_A = np.array((0.75, 0.25))
phi_D = np.array((0.25, 0.75))
```
```python
np.dot(M0, phi_A) == np.dot(M0, phi_D)
```
```python
A1 = np.dot(M1, phi_A)
A1
```
```python
D1 = np.dot(M1, phi_D)
D1
```
```python
P = np.array((0.5, 0.5))
```
```python
def EUT(x):
return np.dot(P, u(x))
```
```python
EUT(A1)
```
```python
EUT(D1)
```
```python
from scipy.optimize import minimize
```
```python
w = 10
```
```python
cons = {'type': 'eq', 'fun': lambda phi: np.dot(M0, phi) - w}
```
```python
def EUT_(phi):
x = np.dot(M1, phi)
return EUT(x)
```
```python
opt = minimize(lambda phi: -EUT_(phi),
x0=phi_A,
constraints=cons)
```
```python
opt
```
```python
EUT_(opt['x'])
```
```python
np.dot(M0, opt['x'])
```
## Mean-Variance Portfolio Theory
```python
rS = S1 / S0 - 1
rS
```
```python
rB = B1 / B0 - 1
rB
```
```python
def mu(rX):
return np.dot(P, rX)
```
```python
mu(rS)
```
```python
mu(rB)
```
```python
rM = M1 / M0 - 1
rM
```
```python
mu(rM)
```
```python
def var(rX):
return ((rX - mu(rX)) ** 2).mean()
```
```python
var(rS)
```
```python
var(rB)
```
```python
def sigma(rX):
return np.sqrt(var(rX))
```
```python
sigma(rS)
```
```python
sigma(rB)
```
```python
np.cov(rM.T, aweights=P, ddof=0)
```
```python
phi = np.array((0.5, 0.5))
```
```python
def mu_phi(phi):
return np.dot(phi, mu(rM))
```
```python
mu_phi(phi)
```
```python
def var_phi(phi):
cv = np.cov(rM.T, aweights=P, ddof=0)
return np.dot(phi, np.dot(cv, phi))
```
```python
var_phi(phi)
```
```python
def sigma_phi(phi):
return var_phi(phi) ** 0.5
```
```python
sigma_phi(phi)
```
```python
from pylab import plt, mpl
plt.style.use('seaborn')
mpl.rcParams['savefig.dpi'] = 300
mpl.rcParams['font.family'] = 'serif'
```
```python
phi_mcs = np.random.random((2, 200))
```
```python
phi_mcs = (phi_mcs / phi_mcs.sum(axis=0)).T
```
```python
mcs = np.array([(sigma_phi(phi), mu_phi(phi))
for phi in phi_mcs])
```
```python
plt.figure(figsize=(10, 6))
plt.plot(mcs[:, 0], mcs[:, 1], 'ro')
plt.xlabel('expected volatility')
plt.ylabel('expected return');
```
```python
P = np.ones(3) / 3
P
```
```python
S1 = np.array((20, 10, 5))
```
```python
T0 = 10
T1 = np.array((1, 12, 13))
```
```python
M0 = np.array((S0, T0))
M0
```
```python
M1 = np.array((S1, T1)).T
M1
```
```python
rM = M1 / M0 - 1
rM
```
```python
mcs = np.array([(sigma_phi(phi), mu_phi(phi))
for phi in phi_mcs])
```
```python
plt.figure(figsize=(10, 6))
plt.plot(mcs[:, 0], mcs[:, 1], 'ro')
plt.xlabel('expected volatility')
plt.ylabel('expected return');
```
```python
cons = {'type': 'eq', 'fun': lambda phi: np.sum(phi) - 1}
```
```python
bnds = ((0, 1), (0, 1))
```
```python
min_var = minimize(sigma_phi, (0.5, 0.5),
constraints=cons, bounds=bnds)
```
```python
min_var
```
```python
def sharpe(phi):
return mu_phi(phi) / sigma_phi(phi)
```
```python
max_sharpe = minimize(lambda phi: -sharpe(phi), (0.5, 0.5),
constraints=cons, bounds=bnds)
```
```python
max_sharpe
```
```python
plt.figure(figsize=(10, 6))
plt.plot(mcs[:, 0], mcs[:, 1], 'ro', ms=5)
plt.plot(sigma_phi(min_var['x']), mu_phi(min_var['x']),
'^', ms=12.5, label='minimum volatility')
plt.plot(sigma_phi(max_sharpe['x']), mu_phi(max_sharpe['x']),
'v', ms=12.5, label='maximum Sharpe ratio')
plt.xlabel('expected volatility')
plt.ylabel('expected return')
plt.legend();
```
```python
cons = [{'type': 'eq', 'fun': lambda phi: np.sum(phi) - 1},
{'type': 'eq', 'fun': lambda phi: mu_phi(phi) - target}]
```
```python
bnds = ((0, 1), (0, 1))
```
```python
targets = np.linspace(mu_phi(min_var['x']), 0.16)
```
```python
frontier = []
for target in targets:
phi_eff = minimize(sigma_phi, (0.5, 0.5),
constraints=cons, bounds=bnds)['x']
frontier.append((sigma_phi(phi_eff), mu_phi(phi_eff)))
frontier = np.array(frontier)
```
```python
plt.figure(figsize=(10, 6))
plt.plot(frontier[:, 0], frontier[:, 1], 'mo', ms=5,
label='efficient frontier')
plt.plot(sigma_phi(min_var['x']), mu_phi(min_var['x']),
'^', ms=12.5, label='minimum volatility')
plt.plot(sigma_phi(max_sharpe['x']), mu_phi(max_sharpe['x']),
'v', ms=12.5, label='maximum Sharpe ratio')
plt.xlabel('expected volatility')
plt.ylabel('expected return')
plt.legend();
```
## Capital Asset Pricing Model
```python
plt.figure(figsize=(10, 6))
plt.plot((0, 0.3), (0.01, 0.22), label='capital market line')
plt.plot(0, 0.01, 'o', ms=9, label='risk-less asset')
plt.plot(0.2, 0.15, '^', ms=9, label='market portfolio')
plt.annotate('$(0, \\bar{r})$', (0, 0.01), (-0.01, 0.02))
plt.annotate('$(\sigma_M, \mu_M)$', (0.2, 0.15), (0.19, 0.16))
plt.xlabel('expected volatility')
plt.ylabel('expected return')
plt.legend();
```
```python
phi_M = np.array((0.8, 0.2))
```
```python
mu_M = mu_phi(phi_M)
mu_M
```
```python
sigma_M = sigma_phi(phi_M)
sigma_M
```
```python
r = 0.0025
```
```python
plt.figure(figsize=(10, 6))
plt.plot(frontier[:, 0], frontier[:, 1], 'm.', ms=5,
label='efficient frontier')
plt.plot(0, r, 'o', ms=9, label='risk-less asset')
plt.plot(sigma_M, mu_M, '^', ms=9, label='market portfolio')
plt.plot((0, 0.6), (r, r + ((mu_M - r) / sigma_M) * 0.6),
'r', label='capital market line', lw=2.0)
plt.annotate('$(0, \\bar{r})$', (0, r), (-0.015, r + 0.01))
plt.annotate('$(\sigma_M, \mu_M)$', (sigma_M, mu_M),
(sigma_M - 0.025, mu_M + 0.01))
plt.xlabel('expected volatility')
plt.ylabel('expected return')
plt.legend();
```
```python
def U(p):
mu, sigma = p
return mu - 1 / 2 * (sigma ** 2 + mu ** 2)
```
```python
cons = {'type': 'eq',
'fun': lambda p: p[0] - (r + (mu_M - r) / sigma_M * p[1])}
```
```python
opt = minimize(lambda p: -U(p), (0.1, 0.3), constraints=cons)
```
```python
opt
```
```python
from sympy import *
init_printing(use_unicode=False, use_latex=False)
```
```python
mu, sigma, b, v = symbols('mu sigma b v')
```
```python
sol = solve('mu - b / 2 * (sigma ** 2 + mu ** 2) - v', mu)
```
```python
sol
```
```python
u1 = sol[0].subs({'b': 1, 'v': 0.1})
u1
```
```python
u2 = sol[0].subs({'b': 1, 'v': 0.125})
u2
```
```python
f1 = lambdify(sigma, u1)
f2 = lambdify(sigma, u2)
```
```python
sigma_ = np.linspace(0.0, 0.5)
u1_ = f1(sigma_)
u2_ = f2(sigma_)
```
```python
plt.figure(figsize=(10, 6))
plt.plot(sigma_, u1_, label='$v=0.1$')
plt.plot(sigma_, u2_, '--', label='$v=0.125$')
plt.xlabel('expected volatility')
plt.ylabel('expected return')
plt.legend();
```
```python
u = sol[0].subs({'b': 1, 'v': -opt['fun']})
u
```
```python
f = lambdify(sigma, u)
```
```python
u_ = f(sigma_)
```
```python
plt.figure(figsize=(10, 6))
plt.plot(0, r, 'o', ms=9, label='risk-less asset')
plt.plot(sigma_M, mu_M, '^', ms=9, label='market portfolio')
plt.plot(opt['x'][1], opt['x'][0], 'v', ms=9, label='optimal portfolio')
plt.plot((0, 0.5), (r, r + (mu_M - r) / sigma_M * 0.5),
label='capital market line', lw=2.0)
plt.plot(sigma_, u_, '--', label='$v={}$'.format(-round(opt['fun'], 3)))
plt.xlabel('expected volatility')
plt.ylabel('expected return')
plt.legend();
```
## Arbitrage Pricing Theory
```python
M1
```
```python
M0
```
```python
V1 = np.array((12, 15, 7))
```
```python
reg = np.linalg.lstsq(M1, V1, rcond=-1)[0]
reg
```
```python
np.dot(M1, reg)
```
```python
np.dot(M1, reg) - V1
```
```python
V0 = np.dot(M0, reg)
V0
```
```python
U0 = 10
U1 = np.array((12, 5, 11))
```
```python
M0_ = np.array((S0, T0, U0))
```
```python
M1_ = np.concatenate((M1.T, np.array([U1,]))).T
```
```python
M1_
```
```python
np.linalg.matrix_rank(M1_)
```
```python
reg = np.linalg.lstsq(M1_, V1, rcond=-1)[0]
reg
```
```python
np.allclose(np.dot(M1_, reg), V1)
```
```python
V0_ = np.dot(M0_, reg)
V0_
```
<br><br><br><a href="http://tpq.io" target="_blank">http://tpq.io</a> | <a href="http://twitter.com/dyjh" target="_blank">@dyjh</a> | <a href="mailto:ai@tpq.io">ai@tpq.io</a>
| a811b7554bd4caa263f653a260bb437e5977626b | 24,674 | ipynb | Jupyter Notebook | 03_normative_finance.ipynb | pepelawycliffe/AI_in_Finance | 5eb29afed137c809955d116e7a7764b5914add96 | [
"MIT"
] | 3 | 2021-03-15T05:30:50.000Z | 2021-12-14T07:28:44.000Z | 03_normative_finance.ipynb | pepelawycliffe/AI_in_Finance | 5eb29afed137c809955d116e7a7764b5914add96 | [
"MIT"
] | null | null | null | 03_normative_finance.ipynb | pepelawycliffe/AI_in_Finance | 5eb29afed137c809955d116e7a7764b5914add96 | [
"MIT"
] | null | null | null | 20.109209 | 191 | 0.445043 | true | 3,159 | Qwen/Qwen-72B | 1. YES
2. YES | 0.877477 | 0.841826 | 0.738682 | __label__eng_Latn | 0.187549 | 0.554539 |
```python
%matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import pathlib
import os
import pwd
import figformat
fig_width,fig_height,params=figformat.figure_format(fig_width=3.4,fig_height=3.4)
mpl.rcParams.update(params)
#mpl.rcParams.keys()
#help(figformat)
def get_username():
return pwd.getpwuid(os.getuid())[0]
user = get_username()
run_dir = pathlib.Path(rf"/Users/{user}/GitHub/LEC/examples/") #path to TABLES
print(f"run_dir: {run_dir}")
```
run_dir: /Users/StevE/GitHub/LEC/examples
### The function $q(\chi_e)$ for $\chi_e\ll 1$
\begin{equation}
q(\chi_e\ll 1)\approx 1-\frac{55}{16}\sqrt{3}\chi + 48\chi^2 \nonumber
\end{equation}
```python
def func(x):
f=1-(55/16)*np.sqrt(3)*x + 48*x**2
return f
x=np.arange(0.001,10,0.001)
```
### The function $q(\chi_e)$ for $\chi_e\gg 1$
\begin{equation}
q(\chi_e\gg 1)\approx\frac{48}{243}\Gamma(\frac{2}{3})\chi^{-4/3}
\left[ 1 -\frac{81}{16\Gamma(2/3)}(3\chi)^{-2/3} \right] \nonumber
\end{equation}
```python
def func2(y):
f2=(48/243)*1.354*2.08*y**(-4/3)*(1-(81/16)*0.73855*(3*y)**(-2/3))
return f2
y=np.arange(0.1,100,0.001)
```
### Extract $\chi_e$, $P_\mathrm{Q}$, $P_\mathrm{C}$, and $q(\chi_e)$
```python
Xi,Prad,Pc,g = np.loadtxt('P_rad.dat',unpack=True,usecols=[0,1,2,3],dtype=np.float)
```
```python
fig, ax = plt.subplots()
ax.plot(x,func(x), "b", ls='--')
ax.plot(y,func2(y),"g", ls='-.')
ax.plot(Xi, g, color='black')
ax.set_xlim(0.001,100)
ax.set_ylim(0.001,3)
ax.set_yscale('log')
ax.set_xscale('log')
ax.set_xlabel("$\chi_e$")
ax.set_ylabel("$q(\chi_e)$")
ax.minorticks_on()
ax2 = ax.twinx()
ax2.plot(Xi,Prad,'-y',color='red',label='$P_Q$')
ax2.plot(Xi,Pc,'--',color='red',label='$P_C$')
ax2.set_ylim(1,1e7)
ax2.set_yscale('log')
ax2.set_ylabel("$P_\mathrm{rad} (W)$",color='red')
ax2.minorticks_on()
ax2.tick_params(axis='y',labelcolor='red')
ax2.tick_params(which='major',color='red')
ax2.tick_params(which='minor',color='red')
ax2.spines['right'].set_color('red')
eq1 = r"\begin{eqnarray*}" + \
r"g(\chi_e\ll 1)&\approx&1-\frac{55}{16}\sqrt{3}\chi \\&+& 48\chi^2" + \
r"\end{eqnarray*}"
#ax.text(0.35, 0.4, eq1, {'color': 'b', 'fontsize': 15}, va="top", ha="right")
eq2 = r"\begin{eqnarray*}" + \
r"&&g(\chi_e\gg 1)\approx\frac{48}{243}\Gamma(\frac{2}{3})\chi^{-4/3} \\" + \
r"&& \times \{ 1 -\frac{81}{16\Gamma(2/3)}(3\chi)^{-2/3} \}" + \
r"\end{eqnarray*}"
#ax.text(1, 0.01, eq2, {'color': 'g', 'fontsize': 15}, va="top", ha="right")
fig = plt.gcf()
fig.set_size_inches(fig_width, fig_width/1.618)
fig.tight_layout()
plt.show()
```
<IPython.core.display.Javascript object>
```python
fig.savefig(rf"/Users/StevE/GitHub/LEC/docs/source/figures/qchi.png",format='png',dpi=600,transparent=True, bbox_inches='tight')
```
```python
```
| 50e2344d3b2074c3bc38239697b38779643e966c | 51,502 | ipynb | Jupyter Notebook | examples/01_Python_emission_qfactor.ipynb | StevE-Ong/LEC | df479b38de83f6629ad5453760c166f9289b38e6 | [
"BSD-2-Clause"
] | 3 | 2020-07-02T14:57:31.000Z | 2021-11-19T09:44:40.000Z | examples/01_Python_emission_qfactor.ipynb | StevE-Ong/LEC | df479b38de83f6629ad5453760c166f9289b38e6 | [
"BSD-2-Clause"
] | 2 | 2020-07-23T15:05:27.000Z | 2021-11-19T12:12:53.000Z | examples/01_Python_emission_qfactor.ipynb | StevE-Ong/LEC | df479b38de83f6629ad5453760c166f9289b38e6 | [
"BSD-2-Clause"
] | 4 | 2020-07-24T08:11:32.000Z | 2022-01-10T02:58:14.000Z | 52.876797 | 11,603 | 0.585336 | true | 1,077 | Qwen/Qwen-72B | 1. YES
2. YES
| 0.839734 | 0.712232 | 0.598086 | __label__yue_Hant | 0.159924 | 0.227883 |
# Anomaly Detection with LSTM in Keras
Predict Anomalies using Confidence Intervals
https://towardsdatascience.com/anomaly-detection-with-lstm-in-keras-8d8d7e50ab1b
Anomaly detection is useful in every business, and the difficulty of detecting anomalous observations depends on the field of application. If you are working on an anomaly detection problem that involves human activity (like prediction of sales or demand), you can take advantage of fundamental assumptions about human behavior and design a more effective solution.
We try to predict the taxi demand in NYC in a critical time period. We formulate simple but important assumptions about human behavior, which allow us to devise a straightforward solution to forecast anomalies. All the heavy lifting is done by a trusty LSTM, developed in Keras, which makes predictions and detects anomalies at the same time!
```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import re
import os
import tqdm
import random
import datetime
from sklearn.metrics import mean_squared_log_error
import tensorflow as tf
from tensorflow.keras.layers import *
from tensorflow.keras.models import *
import tensorflow.keras.backend as K
import statsmodels.api as sm
import statsmodels.formula.api as smf
%matplotlib inline
```
## THE DATASET
I took the dataset for our analysis from [Numenta](https://numenta.org/) community. In particular I chose the [NYC Taxi Dataset](https://github.com/numenta/NAB/blob/master/data/realKnownCause/nyc_taxi.csv). This dataset shows the NYC taxi demand from 2014–07–01 to 2015–01–31 with an observation every half hour.
```python
### READ DATA
df = pd.read_csv('../dataset/nyc_taxi.csv',index_col='timestamp')
# Converting the index as date
df.index = pd.to_datetime(df.index)
### CREATE FEATURES FOR year, month, day, hour ###
df['yr'] = df.index.year
df['mt'] = df.index.month
df['d'] = df.index.day
df['H'] = df.index.hour
print(df.shape)
df.head()
```
(10320, 5)
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>value</th>
<th>yr</th>
<th>mt</th>
<th>d</th>
<th>H</th>
</tr>
<tr>
<th>timestamp</th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<th>2014-07-01 00:00:00</th>
<td>10844</td>
<td>2014</td>
<td>7</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<th>2014-07-01 00:30:00</th>
<td>8127</td>
<td>2014</td>
<td>7</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<th>2014-07-01 01:00:00</th>
<td>6210</td>
<td>2014</td>
<td>7</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<th>2014-07-01 01:30:00</th>
<td>4656</td>
<td>2014</td>
<td>7</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<th>2014-07-01 02:00:00</th>
<td>3820</td>
<td>2014</td>
<td>7</td>
<td>1</td>
<td>2</td>
</tr>
</tbody>
</table>
</div>
In this period 5 anomalies are present, in terms of deviation from normal behavior. They occur respectively during the
- NYC marathon,
- Thanksgiving,
- Christmas,
- New Year’s day, and a
- snowstorm.
### Example of Weekly NORMAL observations
```python
### PLOT SAMPLE OF DATA ###
df.iloc[4000:4000+7*48,:].value.plot(title='NORMAL observations');
```
### NYC Marathon
```python
### PLOT SAMPLE OF DATA ###
df.iloc[4000+37*48:4000+37*48+7*48,:].value.plot(title='NYC Marathon anomaly');
```
### Christmas
```python
### PLOT SAMPLE OF DATA ###
df.iloc[8400:8400+7*48,:].value.plot(title='Christmas anomaly');
```
### Anomaly (outlier) detection:
The objective is to learn what "normal" data look like, and then use that to detect abnormal instances, such as new trends in the time series.
Our purpose is to detect these abnormal observations in advance!
The first thing we notice, looking at the data, is the presence of an obvious daily pattern (demand is higher during the day than at night). Taxi demand also seems to be driven by a weekly trend: on certain days of the week demand is higher than on others. We verify this by computing the autocorrelation.
```python
### WEEKLY AUTOCORR PLOT (10 WEEKS DEPTH) ###
timeLags = np.arange(1,10*48*7)
autoCorr = [df.value.autocorr(lag=dt) for dt in timeLags]
plt.plot(1.0/(48*7)*timeLags, autoCorr);
plt.xlabel('time lag [weeks]'); plt.ylabel('correlation coeff', fontsize=12);
plt.title('AutoCorrelation 10 weeks depth')
plt.show()
```
What we can do now is take note of these important behaviours for our further analysis. I compute and store the mean for each day of the week at every hour. This will be useful when we standardize the data to build our model, in order to reduce every kind of temporal dependency (I compute the means on the first 5000 observations, which will become our future train set).
## THE MODEL
We need a strategy to detect outliers in advance. To do this, we focus on taxi demand predictions: we want to develop a model which is able to forecast demand while taking uncertainty into account. One way to do this is [quantile regression](https://en.wikipedia.org/wiki/Quantile_regression).
### Quantile Regression
Let’s examine the python [statsmodels example for QuantReg](https://www.statsmodels.org/dev/examples/notebooks/generated/quantile_regression.html), which takes a look at the relationship between income and expenditures on food for a sample of working class Belgian households in 1857 [Engel], and see what kind of statistical analysis we can do.
We first need to load some modules and to retrieve the data. Conveniently, the Engel dataset is shipped with statsmodels.
```python
data = sm.datasets.engel.load_pandas().data
data.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>income</th>
<th>foodexp</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>420.157651</td>
<td>255.839425</td>
</tr>
<tr>
<th>1</th>
<td>541.411707</td>
<td>310.958667</td>
</tr>
<tr>
<th>2</th>
<td>901.157457</td>
<td>485.680014</td>
</tr>
<tr>
<th>3</th>
<td>639.080229</td>
<td>402.997356</td>
</tr>
<tr>
<th>4</th>
<td>750.875606</td>
<td>495.560775</td>
</tr>
</tbody>
</table>
</div>
```python
data.plot.scatter(x='income',y='foodexp');
```
#### Least Absolute Deviation (LAD)
The LAD model is a special case of quantile regression where $q=0.5$.
```python
mod = smf.quantreg('foodexp ~ income', data)
res = mod.fit(q=.5)
print(res.summary())
print('Parameters: ', res.params)
print('R2: ', res.rsquared)
# plot
x = np.arange(data.income.min(), data.income.max(), 50)
get_y = lambda a, b: a + b * x
y = get_y(res.params['Intercept'], res.params['income'])
fig, ax = plt.subplots(figsize=(8, 6))
ax.plot(x, y, color='red', label='LAD $p=0.5$')
ax.legend()
data.plot.scatter(x='income',y='foodexp',ax=ax);
plt.show()
```
We estimate the quantile regression model for many quantiles between .05 and .95, and compare the best-fit line from each of these models to the Ordinary Least Squares results.
```python
quantiles = np.arange(.05, .96, .15)
def fit_model(q):
res = mod.fit(q=q)
return [q, res.params['Intercept'], res.params['income']] + \
res.conf_int().loc['income'].tolist()
models = [fit_model(x) for x in quantiles]
models = pd.DataFrame(models, columns=['q', 'a', 'b', 'lb', 'ub'])
models = models.set_index('q')
print(models)
# plot
x = np.arange(data.income.min(), data.income.max(), 50)
get_y = lambda a, b: a + b * x
fig, ax = plt.subplots(figsize=(8, 6))
for i in quantiles:
y = get_y(models.a[i], models.b[i])
ax.plot(x, y, linestyle='dotted', label='LAD $q={:.2f}$'.format(i))
ax.legend()
data.plot.scatter(x='income',y='foodexp',ax=ax);
plt.show()
```
#### Ordinary Least Squares (OLS)
[OLS](https://en.wikipedia.org/wiki/Ordinary_least_squares) is a type of linear least squares method for estimating the unknown parameters in a linear regression model.
**Principle of least squares**: minimizing the sum of the squares of the differences between the observed dependent variable $Y$ and those predicted by the linear function of the independent variable $X$.
```python
mod=smf.ols('foodexp ~ income', data)
res=mod.fit()
print(res.summary())
print('Parameters: ', res.params)
print('Limits: ', res.conf_int().loc['income'].tolist())
print('R2: ', res.rsquared)
# plot
x = np.arange(data.income.min(), data.income.max(), 50)
get_y = lambda a, b: a + b * x
fig, ax = plt.subplots(figsize=(8, 6))
for i in quantiles:
y = get_y(models.a[i], models.b[i])
ax.plot(x, y, linestyle='dotted', label='LAD $q={:.2f}$'.format(i))
y = get_y(res.params['Intercept'], res.params['income'])
ax.plot(x, y, color='red', label='OLS')
ax.legend()
data.plot.scatter(x='income',y='foodexp',ax=ax);
plt.show()
```
**note**: LAD for $q=0.5$ and OLS are not the same. The first can be interpreted as the median and the second as the mean.
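A quick numerical illustration of that note, on an arbitrary skewed toy sample (the numbers below are made up for illustration): the constant prediction that minimises the absolute error is the sample median, while the one that minimises the squared error is the sample mean.
```python
import numpy as np

# Skewed toy sample: mostly small values plus a couple of large outliers
x = np.array([1., 1., 2., 2., 3., 20., 50.])

grid = np.linspace(0, 60, 6001)
mae = [np.mean(np.abs(x - c)) for c in grid]  # L1 loss of a constant prediction c
mse = [np.mean((x - c)**2) for c in grid]     # L2 loss of a constant prediction c

print(grid[np.argmin(mae)], np.median(x))     # ~2.0  -> the median
print(grid[np.argmin(mse)], np.mean(x))       # ~11.3 -> the mean
```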
```python
n = models.shape[0]
p1 = plt.plot(models.index, models.b, color='black', label='Quantile Reg.')
p2 = plt.plot(models.index, models.ub, linestyle='dotted', color='black')
p3 = plt.plot(models.index, models.lb, linestyle='dotted', color='black')
p4 = plt.plot(models.index, [res.params['income']] * n, color='red', label='OLS')
p5 = plt.plot(models.index, [res.conf_int().loc['income'].tolist()[0]] * n, linestyle='dotted', color='red')
p6 = plt.plot(models.index, [res.conf_int().loc['income'].tolist()[1]] * n, linestyle='dotted', color='red')
plt.ylabel(r'$\beta_{income}$')
plt.xlabel('Quantiles of the conditional food expenditure distribution')
plt.legend()
plt.show()
```
The dotted black lines form a 95% point-wise confidence band around the quantile regression estimates (solid black line). The red lines represent the OLS regression estimate along with its 95% confidence interval.
In most cases, the quantile regression point estimates lie outside the OLS confidence interval, which suggests that the effect of income on food expenditure may not be constant across the distribution.
#### [Deep Quantile Regression](https://github.com/sachinruk/KerasQuantileModel/blob/master/Keras%20Quantile%20Model.ipynb)
One area that deep learning has not explored extensively is the uncertainty in its estimates. However, as far as decision making goes, most people actually require quantiles as opposed to true uncertainty in an estimate. E.g. for a given age, the weight of an individual will vary; what would be interesting is (for argument's sake) the 10th and 90th percentile. The uncertainty of the estimate of an individual's weight is less interesting.
Standardise the inputs and outputs so that it is easier to train. I haven't saved the means and standard deviations, but you should.
```python
data['income_st']= (data.income - data.income.mean())/data.income.std()
data['foodexp_st']= (data.foodexp - data.foodexp.mean())/data.foodexp.std()
data
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>income</th>
<th>foodexp</th>
<th>income_st</th>
<th>foodexp_st</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>420.157651</td>
<td>255.839425</td>
<td>-1.082978</td>
<td>-1.332253</td>
</tr>
<tr>
<th>1</th>
<td>541.411707</td>
<td>310.958667</td>
<td>-0.849451</td>
<td>-1.132876</td>
</tr>
<tr>
<th>2</th>
<td>901.157457</td>
<td>485.680014</td>
<td>-0.156608</td>
<td>-0.500874</td>
</tr>
<tr>
<th>3</th>
<td>639.080229</td>
<td>402.997356</td>
<td>-0.661349</td>
<td>-0.799954</td>
</tr>
<tr>
<th>4</th>
<td>750.875606</td>
<td>495.560775</td>
<td>-0.446039</td>
<td>-0.465133</td>
</tr>
<tr>
<th>...</th>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<th>230</th>
<td>440.517424</td>
<td>306.519079</td>
<td>-1.043766</td>
<td>-1.148935</td>
</tr>
<tr>
<th>231</th>
<td>541.200597</td>
<td>299.199328</td>
<td>-0.849858</td>
<td>-1.175412</td>
</tr>
<tr>
<th>232</th>
<td>581.359892</td>
<td>468.000798</td>
<td>-0.772514</td>
<td>-0.564823</td>
</tr>
<tr>
<th>233</th>
<td>743.077243</td>
<td>522.601906</td>
<td>-0.461058</td>
<td>-0.367320</td>
</tr>
<tr>
<th>234</th>
<td>1057.676711</td>
<td>750.320163</td>
<td>0.144837</td>
<td>0.456382</td>
</tr>
</tbody>
</table>
<p>235 rows × 4 columns</p>
</div>
```python
model = Sequential()
model.add(Dense(units=10, input_dim=1,activation='relu'))
model.add(Dense(units=10, input_dim=1,activation='relu'))
model.add(Dense(1))
model.compile(loss='mae', optimizer='adadelta')
model.fit(data.income_st.values, data.foodexp_st.values, epochs=50000, batch_size=32, verbose=0)
model.evaluate(data.income_st.values, data.foodexp_st.values)
```
8/8 [==============================] - 0s 4ms/step - loss: 0.2606
0.2605687081813812
```python
t_test = np.linspace(data.income_st.min(),data.income_st.max(),200)
y_test = model.predict(t_test)
plt.scatter(data.income_st,data.foodexp_st)
plt.plot(t_test, y_test,'r')
plt.show()
```
##### Quantile Regression Loss function
In regression the most commonly used loss function is the mean squared error. If we were to take the negative of this loss and exponentiate it, the result would correspond to a Gaussian distribution. The mode of this distribution (the peak) corresponds to the mean parameter. Hence, when we predict using a neural net that minimised this loss, we are predicting the mean value of the output, which may have been noisy in the training set.
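In symbols, up to constants, minimising the squared error for one point is the same as maximising a Gaussian likelihood centred at the prediction:
$$
\exp\left(-\tfrac{1}{2}\left(y_i - f(\mathbf{x}_i)\right)^2\right) \propto \mathcal{N}\left(y_i \,;\, f(\mathbf{x}_i),\, 1\right),
$$
so the fitted network estimates the conditional mean of the output.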
The loss in Quantile Regression for an individual data point is defined as:
$$
\begin{align}
\mathcal{L}(\xi_i|\alpha)=\begin{cases}
\alpha \xi_i &\text{if }\xi_i\ge 0, \\
(\alpha-1) \xi_i &\text{if }\xi_i<0.
\end{cases}
\end{align}
$$
where $\alpha$ is the required quantile (a value between 0 and 1), $\xi_i = y_i - f(\mathbf{x}_i)$, $f(\mathbf{x}_i)$ is the predicted (quantile) model and $y_i$ is the observed value for the corresponding input $\mathbf{x}_i$. The final overall loss is defined as:
$$
\mathcal{L}(\mathbf{y},\mathbf{f}|\alpha)=\frac{1}{N} \sum_{i=1}^N \mathcal{L}(y_i-f(\mathbf{x}_i)|\alpha)
$$
If we were to take the negative of the individual loss and exponentiate it, we would get the distribution known as the Asymmetric Laplace distribution, shown below. The reason this loss function works is that the area under the graph to the left of zero is exactly alpha, the required quantile.
https://miro.medium.com/max/700/1*gUrYM90-7NNKwYO6-ONc5A.png
The following function defines the loss function for a quantile model.
Note: the following few lines are ALL that you change in comparison to a normal deep learning method, i.e. the loss function is all that changes.
```python
def tilted_loss(q,y,f):
e = (y-f)
return K.mean(K.maximum(q*e, (q-1)*e), axis=-1)
```
```python
def engelModel():
model = Sequential()
model.add(Dense(units=10, input_dim=1,activation='relu'))
model.add(Dense(units=10, input_dim=1,activation='relu'))
model.add(Dense(1))
return model
```
```python
qs = [0.1, 0.5, 0.9]
t_test = np.linspace(data.income_st.min(),data.income_st.max(),200)
plt.scatter(data.income_st,data.foodexp_st)
for q in qs:
model = engelModel()
model.compile(loss=lambda y,f: tilted_loss(q,y,f), optimizer='adadelta')
model.fit(data.income_st.values, data.foodexp_st.values, epochs=50000, batch_size=32, verbose=0)
# Predict the quantile
y_test = model.predict(t_test)
plt.plot(t_test, y_test, label=q) # plot out this quantile
plt.legend()
plt.show()
```
##### Final Notes
1. Note that the 0.5 quantile is the same as the median, which you can obtain by minimising the Mean Absolute Error; in Keras you can do this directly with loss='mae'.
2. Uncertainty and quantiles are not the same thing. But most of the time you care about quantiles and not uncertainty.
3. If you really do want uncertainty with deep nets checkout http://mlg.eng.cam.ac.uk/yarin/blog_3d801aa532c1ce.html
*** back to the outlier detection model ***
We focus on predicting extreme values: the lower (10th) quantile, the upper (90th) quantile and the classical 50th quantile. By also computing the 90th and 10th quantiles we cover the most likely values that reality can assume. The width of this range can vary widely: we know that it is small when our model is sure about the future, and it can be huge when our model isn't able to see important changes in the domain of interest. We take advantage of this behaviour and let our model say something about outlier detection in the field of taxi demand prediction. We expect a tiny interval (90th-10th quantile range) when our model is sure about the future because it has everything under control; on the other hand, we expect an anomaly when the interval becomes bigger. This works because our model isn't trained to handle this kind of scenario, which therefore shows up as an anomaly.
We make all this magic a reality by building a simple LSTM neural network in Keras. Our model receives the past observations as input. We reshape our data to feed the LSTM with a daily window size (48 observations: one observation every half hour). When generating the data, as mentioned above, we apply a logarithmic transformation and standardize by subtracting the mean value for each weekday/hour, so that each observation is seen as the logarithmic variation from its daily mean hourly value. We build our target variable in the same way with a half-hour shift (we want to predict the demand values for the next thirty minutes).
```python
### CREATE WEEKDAY FEATURE AND COMPUTE THE MEAN FOR WEEKDAYS AT EVERY HOURS ###
df['weekday'] = df.index.weekday
df['weekday_hour'] = df.weekday.astype(str) +' '+ df.H.astype(str)
df['m_weekday'] = df.weekday_hour.replace(df[:5000].groupby('weekday_hour')['value'].mean().to_dict())
df
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>value</th>
<th>yr</th>
<th>mt</th>
<th>d</th>
<th>H</th>
<th>weekday</th>
<th>weekday_hour</th>
<th>m_weekday</th>
</tr>
<tr>
<th>timestamp</th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<th>2014-07-01 00:00:00</th>
<td>10844</td>
<td>2014</td>
<td>7</td>
<td>1</td>
<td>0</td>
<td>1</td>
<td>1 0</td>
<td>8774.433333</td>
</tr>
<tr>
<th>2014-07-01 00:30:00</th>
<td>8127</td>
<td>2014</td>
<td>7</td>
<td>1</td>
<td>0</td>
<td>1</td>
<td>1 0</td>
<td>8774.433333</td>
</tr>
<tr>
<th>2014-07-01 01:00:00</th>
<td>6210</td>
<td>2014</td>
<td>7</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1 1</td>
<td>5242.933333</td>
</tr>
<tr>
<th>2014-07-01 01:30:00</th>
<td>4656</td>
<td>2014</td>
<td>7</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1 1</td>
<td>5242.933333</td>
</tr>
<tr>
<th>2014-07-01 02:00:00</th>
<td>3820</td>
<td>2014</td>
<td>7</td>
<td>1</td>
<td>2</td>
<td>1</td>
<td>1 2</td>
<td>3170.433333</td>
</tr>
<tr>
<th>...</th>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<th>2015-01-31 21:30:00</th>
<td>24670</td>
<td>2015</td>
<td>1</td>
<td>31</td>
<td>21</td>
<td>5</td>
<td>5 21</td>
<td>21298.033333</td>
</tr>
<tr>
<th>2015-01-31 22:00:00</th>
<td>25721</td>
<td>2015</td>
<td>1</td>
<td>31</td>
<td>22</td>
<td>5</td>
<td>5 22</td>
<td>23126.666667</td>
</tr>
<tr>
<th>2015-01-31 22:30:00</th>
<td>27309</td>
<td>2015</td>
<td>1</td>
<td>31</td>
<td>22</td>
<td>5</td>
<td>5 22</td>
<td>23126.666667</td>
</tr>
<tr>
<th>2015-01-31 23:00:00</th>
<td>26591</td>
<td>2015</td>
<td>1</td>
<td>31</td>
<td>23</td>
<td>5</td>
<td>5 23</td>
<td>24726.433333</td>
</tr>
<tr>
<th>2015-01-31 23:30:00</th>
<td>26288</td>
<td>2015</td>
<td>1</td>
<td>31</td>
<td>23</td>
<td>5</td>
<td>5 23</td>
<td>24726.433333</td>
</tr>
</tbody>
</table>
<p>10320 rows × 8 columns</p>
</div>
```python
### CREATE GENERATOR FOR LSTM ###
sequence_length = 48
def gen_index(id_df, seq_length, seq_cols):
data_matrix = id_df[seq_cols]
num_elements = data_matrix.shape[0]
for start, stop in zip(range(0, num_elements-seq_length, 1), range(seq_length, num_elements, 1)):
yield data_matrix[stop-sequence_length:stop].values.reshape((-1,len(seq_cols)))
```
```python
### CREATE AND STANDARDIZE DATA FOR LSTM ###
cnt, mean = [], []
for sequence in gen_index(df, sequence_length, ['value']):
cnt.append(sequence)
for sequence in gen_index(df, sequence_length, ['m_weekday']):
mean.append(sequence)
cnt, mean = np.log(cnt), np.log(mean)
cnt = cnt - mean
cnt.shape
```
(10272, 48, 1)
```python
### CREATE AND STANDARDIZE LABEL FOR LSTM ###
init = df.m_weekday[sequence_length:].apply(np.log).values
label = df.value[sequence_length:].apply(np.log).values - init
label.shape
```
(10272,)
Implementing quantile regression in Keras is very simple (I took inspiration from [this post](https://towardsdatascience.com/deep-quantile-regression-c85481548b5a)). We define a custom quantile loss function which penalizes errors based on the quantile and on whether the error was positive (actual > predicted) or negative (actual < predicted). Our network has 3 outputs and 3 losses, one for every quantile we try to predict.
```python
### DEFINE QUANTILE LOSS ###
def q_loss(q,y,f):
e = (y-f)
return K.mean(K.maximum(q*e, (q-1)*e), axis=-1)
```
```python
### TRAIN TEST SPLIT ###
X_train, X_test = cnt[:5000], cnt[5000:]
y_train, y_test = label[:5000], label[5000:]
train_date, test_date = df.index.values[sequence_length:5000+sequence_length], df.index.values[5000+sequence_length:]
```
```python
tf.random.set_seed(33)
os.environ['PYTHONHASHSEED'] = str(33)
np.random.seed(33)
random.seed(33)
session_conf = tf.compat.v1.ConfigProto(
intra_op_parallelism_threads=1,
inter_op_parallelism_threads=1
)
sess = tf.compat.v1.Session(
graph=tf.compat.v1.get_default_graph(),
config=session_conf
)
tf.compat.v1.keras.backend.set_session(sess)
### CREATE MODEL ###
losses = [lambda y,f: q_loss(0.1,y,f), lambda y,f: q_loss(0.5,y,f), lambda y,f: q_loss(0.9,y,f)]
inputs = Input(shape=(X_train.shape[1], X_train.shape[2]))
lstm = Bidirectional(LSTM(64, return_sequences=True, dropout=0.3))(inputs, training = True)
lstm = Bidirectional(LSTM(16, return_sequences=False, dropout=0.3))(lstm, training = True)
dense = Dense(50)(lstm)
out10 = Dense(1)(dense)
out50 = Dense(1)(dense)
out90 = Dense(1)(dense)
model = Model(inputs, [out10,out50,out90])
model.compile(loss=losses, optimizer='adam', loss_weights = [0.3,0.3,0.3])
model.fit(X_train, [y_train,y_train,y_train], epochs=50, batch_size=128, verbose=2)
```
Epoch 1/50
WARNING:tensorflow:AutoGraph could not transform <function <lambda> at 0x14cd1a680> and will run it as-is.
Cause: could not parse the source code of <function <lambda> at 0x14cd1a680>: found multiple definitions with identical signatures at the location. This error may be avoided by defining each lambda on a single line and with unique argument names.
Match 0:
(lambda y, f: q_loss(0.1, y, f))
Match 1:
(lambda y, f: q_loss(0.5, y, f))
Match 2:
(lambda y, f: q_loss(0.9, y, f))
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING: AutoGraph could not transform <function <lambda> at 0x14cd1a680> and will run it as-is.
Cause: could not parse the source code of <function <lambda> at 0x14cd1a680>: found multiple definitions with identical signatures at the location. This error may be avoided by defining each lambda on a single line and with unique argument names.
Match 0:
(lambda y, f: q_loss(0.1, y, f))
Match 1:
(lambda y, f: q_loss(0.5, y, f))
Match 2:
(lambda y, f: q_loss(0.9, y, f))
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING:tensorflow:AutoGraph could not transform <function <lambda> at 0x14cd1a290> and will run it as-is.
Cause: could not parse the source code of <function <lambda> at 0x14cd1a290>: found multiple definitions with identical signatures at the location. This error may be avoided by defining each lambda on a single line and with unique argument names.
Match 0:
(lambda y, f: q_loss(0.1, y, f))
Match 1:
(lambda y, f: q_loss(0.5, y, f))
Match 2:
(lambda y, f: q_loss(0.9, y, f))
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING: AutoGraph could not transform <function <lambda> at 0x14cd1a290> and will run it as-is.
Cause: could not parse the source code of <function <lambda> at 0x14cd1a290>: found multiple definitions with identical signatures at the location. This error may be avoided by defining each lambda on a single line and with unique argument names.
Match 0:
(lambda y, f: q_loss(0.1, y, f))
Match 1:
(lambda y, f: q_loss(0.5, y, f))
Match 2:
(lambda y, f: q_loss(0.9, y, f))
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING:tensorflow:AutoGraph could not transform <function <lambda> at 0x14cb12c20> and will run it as-is.
Cause: could not parse the source code of <function <lambda> at 0x14cb12c20>: found multiple definitions with identical signatures at the location. This error may be avoided by defining each lambda on a single line and with unique argument names.
Match 0:
(lambda y, f: q_loss(0.1, y, f))
Match 1:
(lambda y, f: q_loss(0.5, y, f))
Match 2:
(lambda y, f: q_loss(0.9, y, f))
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING: AutoGraph could not transform <function <lambda> at 0x14cb12c20> and will run it as-is.
Cause: could not parse the source code of <function <lambda> at 0x14cb12c20>: found multiple definitions with identical signatures at the location. This error may be avoided by defining each lambda on a single line and with unique argument names.
Match 0:
(lambda y, f: q_loss(0.1, y, f))
Match 1:
(lambda y, f: q_loss(0.5, y, f))
Match 2:
(lambda y, f: q_loss(0.9, y, f))
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
40/40 - 8s - loss: 0.0285 - dense_13_loss: 0.0274 - dense_14_loss: 0.0438 - dense_15_loss: 0.0238
Epoch 2/50
40/40 - 3s - loss: 0.0259 - dense_13_loss: 0.0233 - dense_14_loss: 0.0413 - dense_15_loss: 0.0216
Epoch 3/50
40/40 - 3s - loss: 0.0252 - dense_13_loss: 0.0223 - dense_14_loss: 0.0405 - dense_15_loss: 0.0213
Epoch 4/50
40/40 - 3s - loss: 0.0248 - dense_13_loss: 0.0221 - dense_14_loss: 0.0398 - dense_15_loss: 0.0207
Epoch 5/50
40/40 - 3s - loss: 0.0241 - dense_13_loss: 0.0210 - dense_14_loss: 0.0390 - dense_15_loss: 0.0205
Epoch 6/50
40/40 - 3s - loss: 0.0235 - dense_13_loss: 0.0203 - dense_14_loss: 0.0383 - dense_15_loss: 0.0198
Epoch 7/50
40/40 - 3s - loss: 0.0233 - dense_13_loss: 0.0200 - dense_14_loss: 0.0379 - dense_15_loss: 0.0198
Epoch 8/50
40/40 - 3s - loss: 0.0224 - dense_13_loss: 0.0191 - dense_14_loss: 0.0368 - dense_15_loss: 0.0189
Epoch 9/50
40/40 - 3s - loss: 0.0219 - dense_13_loss: 0.0186 - dense_14_loss: 0.0360 - dense_15_loss: 0.0184
Epoch 10/50
40/40 - 3s - loss: 0.0211 - dense_13_loss: 0.0177 - dense_14_loss: 0.0347 - dense_15_loss: 0.0180
Epoch 11/50
40/40 - 3s - loss: 0.0207 - dense_13_loss: 0.0176 - dense_14_loss: 0.0340 - dense_15_loss: 0.0175
Epoch 12/50
40/40 - 3s - loss: 0.0205 - dense_13_loss: 0.0176 - dense_14_loss: 0.0336 - dense_15_loss: 0.0173
Epoch 13/50
40/40 - 3s - loss: 0.0199 - dense_13_loss: 0.0171 - dense_14_loss: 0.0327 - dense_15_loss: 0.0165
Epoch 14/50
40/40 - 3s - loss: 0.0198 - dense_13_loss: 0.0170 - dense_14_loss: 0.0326 - dense_15_loss: 0.0163
Epoch 15/50
40/40 - 3s - loss: 0.0195 - dense_13_loss: 0.0169 - dense_14_loss: 0.0320 - dense_15_loss: 0.0161
Epoch 16/50
40/40 - 3s - loss: 0.0194 - dense_13_loss: 0.0164 - dense_14_loss: 0.0322 - dense_15_loss: 0.0162
Epoch 17/50
40/40 - 3s - loss: 0.0189 - dense_13_loss: 0.0163 - dense_14_loss: 0.0312 - dense_15_loss: 0.0155
Epoch 18/50
40/40 - 3s - loss: 0.0189 - dense_13_loss: 0.0161 - dense_14_loss: 0.0312 - dense_15_loss: 0.0157
Epoch 19/50
40/40 - 3s - loss: 0.0188 - dense_13_loss: 0.0160 - dense_14_loss: 0.0312 - dense_15_loss: 0.0153
Epoch 20/50
40/40 - 3s - loss: 0.0184 - dense_13_loss: 0.0157 - dense_14_loss: 0.0308 - dense_15_loss: 0.0150
Epoch 21/50
40/40 - 3s - loss: 0.0184 - dense_13_loss: 0.0155 - dense_14_loss: 0.0310 - dense_15_loss: 0.0147
Epoch 22/50
40/40 - 3s - loss: 0.0182 - dense_13_loss: 0.0152 - dense_14_loss: 0.0305 - dense_15_loss: 0.0149
Epoch 23/50
40/40 - 3s - loss: 0.0178 - dense_13_loss: 0.0152 - dense_14_loss: 0.0299 - dense_15_loss: 0.0142
Epoch 24/50
40/40 - 3s - loss: 0.0177 - dense_13_loss: 0.0151 - dense_14_loss: 0.0299 - dense_15_loss: 0.0140
Epoch 25/50
40/40 - 3s - loss: 0.0174 - dense_13_loss: 0.0148 - dense_14_loss: 0.0293 - dense_15_loss: 0.0140
Epoch 26/50
40/40 - 3s - loss: 0.0180 - dense_13_loss: 0.0151 - dense_14_loss: 0.0305 - dense_15_loss: 0.0145
Epoch 27/50
40/40 - 3s - loss: 0.0174 - dense_13_loss: 0.0147 - dense_14_loss: 0.0294 - dense_15_loss: 0.0140
Epoch 28/50
40/40 - 3s - loss: 0.0173 - dense_13_loss: 0.0147 - dense_14_loss: 0.0293 - dense_15_loss: 0.0136
Epoch 29/50
40/40 - 3s - loss: 0.0172 - dense_13_loss: 0.0145 - dense_14_loss: 0.0292 - dense_15_loss: 0.0135
Epoch 30/50
40/40 - 3s - loss: 0.0171 - dense_13_loss: 0.0143 - dense_14_loss: 0.0294 - dense_15_loss: 0.0135
Epoch 31/50
40/40 - 3s - loss: 0.0170 - dense_13_loss: 0.0142 - dense_14_loss: 0.0288 - dense_15_loss: 0.0137
Epoch 32/50
40/40 - 3s - loss: 0.0171 - dense_13_loss: 0.0145 - dense_14_loss: 0.0291 - dense_15_loss: 0.0133
Epoch 33/50
40/40 - 3s - loss: 0.0171 - dense_13_loss: 0.0146 - dense_14_loss: 0.0293 - dense_15_loss: 0.0132
Epoch 34/50
40/40 - 3s - loss: 0.0167 - dense_13_loss: 0.0142 - dense_14_loss: 0.0285 - dense_15_loss: 0.0131
Epoch 35/50
40/40 - 3s - loss: 0.0169 - dense_13_loss: 0.0144 - dense_14_loss: 0.0287 - dense_15_loss: 0.0132
Epoch 36/50
40/40 - 3s - loss: 0.0169 - dense_13_loss: 0.0142 - dense_14_loss: 0.0288 - dense_15_loss: 0.0135
Epoch 37/50
40/40 - 3s - loss: 0.0168 - dense_13_loss: 0.0139 - dense_14_loss: 0.0287 - dense_15_loss: 0.0136
Epoch 38/50
40/40 - 3s - loss: 0.0170 - dense_13_loss: 0.0141 - dense_14_loss: 0.0288 - dense_15_loss: 0.0136
Epoch 39/50
40/40 - 3s - loss: 0.0169 - dense_13_loss: 0.0140 - dense_14_loss: 0.0288 - dense_15_loss: 0.0134
Epoch 40/50
40/40 - 3s - loss: 0.0167 - dense_13_loss: 0.0141 - dense_14_loss: 0.0283 - dense_15_loss: 0.0133
Epoch 41/50
40/40 - 3s - loss: 0.0171 - dense_13_loss: 0.0142 - dense_14_loss: 0.0296 - dense_15_loss: 0.0132
Epoch 42/50
40/40 - 3s - loss: 0.0166 - dense_13_loss: 0.0138 - dense_14_loss: 0.0284 - dense_15_loss: 0.0131
Epoch 43/50
40/40 - 3s - loss: 0.0167 - dense_13_loss: 0.0137 - dense_14_loss: 0.0286 - dense_15_loss: 0.0132
Epoch 44/50
40/40 - 3s - loss: 0.0167 - dense_13_loss: 0.0139 - dense_14_loss: 0.0284 - dense_15_loss: 0.0134
Epoch 45/50
40/40 - 3s - loss: 0.0169 - dense_13_loss: 0.0141 - dense_14_loss: 0.0289 - dense_15_loss: 0.0134
Epoch 46/50
40/40 - 3s - loss: 0.0169 - dense_13_loss: 0.0140 - dense_14_loss: 0.0288 - dense_15_loss: 0.0135
Epoch 47/50
40/40 - 3s - loss: 0.0165 - dense_13_loss: 0.0138 - dense_14_loss: 0.0284 - dense_15_loss: 0.0130
Epoch 48/50
40/40 - 3s - loss: 0.0167 - dense_13_loss: 0.0141 - dense_14_loss: 0.0286 - dense_15_loss: 0.0130
Epoch 49/50
40/40 - 3s - loss: 0.0166 - dense_13_loss: 0.0138 - dense_14_loss: 0.0285 - dense_15_loss: 0.0131
Epoch 50/50
40/40 - 3s - loss: 0.0168 - dense_13_loss: 0.0140 - dense_14_loss: 0.0286 - dense_15_loss: 0.0133
<tensorflow.python.keras.callbacks.History at 0x14bd0bc50>
## CROSSOVER PROBLEM
When dealing with neural networks in Keras, one tedious problem is the variability of results due to the internal weight initialization. Given its formulation, our problem seems to suffer particularly from this issue: when computing quantile predictions we can't allow the quantiles to overlap, as this makes no sense! To avoid this pitfall I make use of bootstrapping in the prediction phase: I keep dropout active at prediction time (training=True in the model), repeat the prediction 100 times, store the results and finally compute the desired quantiles (I also make use of this clever technique in another post).
```python
### QUANTILEs BOOTSTRAPPING ###
pred_10, pred_50, pred_90 = [], [], []
for i in tqdm.tqdm(range(0,100)):
predd = model.predict(X_test)
pred_10.append(predd[0])
pred_50.append(predd[1])
pred_90.append(predd[2])
pred_10 = np.asarray(pred_10)[:,:,0]
pred_50 = np.asarray(pred_50)[:,:,0]
pred_90 = np.asarray(pred_90)[:,:,0]
```
100%|██████████| 100/100 [02:43<00:00, 1.64s/it]
Given the bootstrapped quantile predictions, we compute summary measures of them (the empirical 0.9 and 0.1 quantiles of the bootstrap samples, and the mean for the median output), which avoids crossover.
```python
### REVERSE TRANSFORM PREDICTIONS ###
pred_90_m = np.exp(np.quantile(pred_90,0.9,axis=0) + init[5000:])
pred_50_m = np.exp(pred_50.mean(axis=0) + init[5000:])
pred_10_m = np.exp(np.quantile(pred_10,0.1,axis=0) + init[5000:])
```
```python
### EVALUATION METRIC ###
mean_squared_log_error(np.exp(y_test + init[5000:]), pred_50_m)
```
0.06907365954001285
## RESULTS
As mentioned previously, I used the first 5000 observations for training and the remaining (around 5000) for testing.
Our model achieves great performance forecasting taxi demand with the 50th quantile. A Mean Squared Log Error of around 0.07 is a brilliant result! This means that the LSTM network is able to understand the underlying rules that drive taxi demand. So our approach to anomaly detection sounds promising… We compute the difference between the 90th quantile predictions and the 10th quantile predictions and see what happens.
```python
### PLOT QUANTILE PREDICTIONS ###
plt.figure(figsize=(16,8))
plt.plot(test_date, pred_90_m, color='cyan')
plt.plot(test_date, pred_50_m, color='blue')
plt.plot(test_date, pred_10_m, color='green')
### CROSSOVER CHECK ###
plt.scatter(np.where(np.logical_or(pred_50_m>pred_90_m, pred_50_m<pred_10_m))[0],
pred_50_m[np.logical_or(pred_50_m>pred_90_m, pred_50_m<pred_10_m)], c='red', s=50)
```
The quantile interval range (blue dots) is wider in periods of uncertainty. In the other cases, the model tends to generalize well, as we expected. Going deeper, we investigate these periods of high uncertainty and notice that they coincide with our initial assumptions. The periods highlighted below are, respectively: the NYC marathon, Thanksgiving, Christmas, New Year's day, and a snowstorm.
```python
### PLOT UNCERTAINTY INTERVAL LENGTH WITH REAL DATA ###
plt.figure(figsize=(16,8))
plt.plot(test_date, np.exp(y_test + init[5000:]), color='red', alpha=0.4)
plt.scatter(test_date, pred_90_m - pred_10_m)
```
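To close the loop, we can turn the interval width into an explicit anomaly flag. Below is a minimal sketch that reuses the `pred_90_m`, `pred_10_m`, `test_date`, `y_test` and `init` arrays computed above; the 95th-percentile threshold is an arbitrary choice for illustration, not a tuned value.
```python
### FLAG ANOMALIES FROM THE QUANTILE INTERVAL WIDTH (illustrative threshold) ###
interval = pred_90_m - pred_10_m            # 90th-10th quantile spread per test point
threshold = np.quantile(interval, 0.95)     # flag the widest 5% of intervals (assumption)
anomaly_mask = interval > threshold

plt.figure(figsize=(16,8))
plt.plot(test_date, np.exp(y_test + init[5000:]), color='red', alpha=0.4)
plt.scatter(test_date[anomaly_mask], interval[anomaly_mask], c='orange', s=50)
plt.title('Test points flagged as anomalous (widest quantile intervals)')
```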
We can conclude that we reached our initial targets: achieving great forecasting power and exploiting the strengths of our model to identify uncertainty. We also make use of this to say something about anomaly detection.
## SUMMARY
We reproduce a good solution for anomaly detection and forecasting. We make use of an LSTM network to learn the behaviour of taxi demand in NYC. We use what it learns to make predictions and estimate uncertainty at the same time. We implicitly define an anomaly as an unpredictable observation, i.e. one with a great amount of uncertainty. This simple assumption lets our LSTM do all the work for us.
| a3ef2227b3f473ac04f3771bce6927d07a372ba4 | 851,419 | ipynb | Jupyter Notebook | misc/Anomaly-Detection-LSTM.ipynb | OleBo/Stock-Prediction-Models | 3abd726d57b5d588d560c6a27db19b589cdae52f | [
"Apache-2.0"
] | null | null | null | misc/Anomaly-Detection-LSTM.ipynb | OleBo/Stock-Prediction-Models | 3abd726d57b5d588d560c6a27db19b589cdae52f | [
"Apache-2.0"
] | null | null | null | misc/Anomaly-Detection-LSTM.ipynb | OleBo/Stock-Prediction-Models | 3abd726d57b5d588d560c6a27db19b589cdae52f | [
"Apache-2.0"
] | 2 | 2020-03-27T15:25:41.000Z | 2020-12-17T10:51:15.000Z | 460.226486 | 215,116 | 0.930659 | true | 12,551 | Qwen/Qwen-72B | 1. YES
2. YES | 0.826712 | 0.787931 | 0.651392 | __label__eng_Latn | 0.764459 | 0.351733 |
# Euler angle worksheet
This is a [jupyter notebook](https://jupyter.org/). Jupyter notebooks allow you to combine notes, code, and output into a single document. You can even export your document as a webpage or presentation.
In this worksheet, we will use the python3 SymPy package to derive expressions for converting between euler angles and matrices.
If you would like more information about jupyter notebook features:
* [Getting started tutorial](https://realpython.com/jupyter-notebook-introduction/#creating-a-notebook)
* [Reference on markdown text](https://help.github.com/articles/markdown-basics/)
For this assignment, you do not need to run this notebook. It has been compiled for you and saved as a webpage. However, if you would like to play with it, start by running all the cells (from the menu: goto 'Cell' -> 'Run All').
```python
# python3
from sympy import *
init_printing(use_latex='mathjax')
import math
```
```python
# Define symbols
cx,sx = symbols('cx sx')
cy,sy = symbols('cy sy')
cz,sz = symbols('cz sz')
Rx = Matrix([
[1, 0, 0],
[0, cx,-sx],
[0, sx, cx]])
Ry = Matrix([
[ cy, 0, sy],
[ 0, 1, 0],
[-sy, 0, cy]])
Rz = Matrix([
[cz, -sz, 0],
[sz, cz, 0],
[0, 0, 1]])
```
# Convert from ZYX euler angles to a matrix
We can compute the matrix Rzyx by multiplying matrixes corresponding to each consecutive rotation, e.g.
$$
R_{zyx}(\theta_x, \theta_y, \theta_z) = R_z(\theta_z) * R_y(\theta_y) * R_x(\theta_x)
$$
In this file, we will use the [SymPy](https://www.sympy.org/en/index.html) to compute algebraic expressions for euler angle matrices. Using these expressions, we will be able to derive formulas for converting from matrices to euler angles.
In the following example, let
* cx = $cos(\theta_x)$
* sx = $sin(\theta_x)$
* cy = $cos(\theta_y)$
* sy = $sin(\theta_y)$
* cz = $cos(\theta_z)$
* sz = $sin(\theta_z)$
```python
Rzyx = Rz * Ry * Rx
pprint(Rzyx)
```
⎡cy⋅cz -cx⋅sz + cz⋅sx⋅sy cx⋅cz⋅sy + sx⋅sz⎤
⎢ ⎥
⎢cy⋅sz cx⋅cz + sx⋅sy⋅sz cx⋅sy⋅sz - cz⋅sx⎥
⎢ ⎥
⎣ -sy cy⋅sx cx⋅cy ⎦
Now that we have a matrix expression for the ZYX euler angles, we have formulas which describe how matrices and euler angles relate to each. Specifically, suppose we have a 3x3 rotation matrix R with the following elements
$$
R =
\begin{bmatrix}
r_{00} & r_{01} & r_{02} \\
r_{10} & r_{11} & r_{12} \\
r_{20} & r_{21} & r_{22} \\
\end{bmatrix}
$$
where each $r_{ij}$ represents a scalar value in $\mathbb{R}$. Usually math texts will use indexing at 1, but here let's use 0-based indexing so that it will be easier to implement these formulas later.
Now suppose we wish to extract the euler angles from this matrix. We can get the Y rotation back from the term $r_{20}$.
$$
r_{20} = -\sin(\theta_y) \\
=> \theta_y = \arcsin(-r_{20})
$$
What about the rotations around X and Z? We can obtain these similarly using the terms from the first column and last row. A robust method involves using the fact that
$$
\tan(\theta) = \frac{\sin(\theta)}{\cos(\theta)}
$$
to form the following expression for obtaining $\theta_x$
$$
\frac{r_{21}}{r_{22}} = \frac{\sin(\theta_x)}{\cos(\theta_x)} = \tan(\theta_x) \\
=> \theta_x = \text{atan2}(r_{21}, r_{22})
$$
The expression for $\theta_z$ can be obtained similarly
$$
\frac{r_{10}}{r_{00}} = \frac{\sin(\theta_z)}{\cos(\theta_z)} = \tan(\theta_z) \\
=> \theta_z = \text{atan2}(r_{10}, r_{00})
$$
Using atan2 makes it easier to handle the cases when $\theta$ is near 0, 90, or 180 degrees, which makes sine and cosine close to zero and 1. Be careful when using acos and asin because values even *slightly* out of the range [-1,1] can lead to NaNs. The computer will not tolerate nansense!
## What happens to Rzyx when y is +/- 90 degrees?
When the middle euler angle is 90 degrees, we need to look at the remaining non-zero terms to get values for the first and last angles. For example, for ZYX euler angles, we need to handle the case when Y is either positive or negative 90 degrees.
```python
# Compute Rzyx when y is +90
Ry90 = Matrix([
[ 0, 0, 1],
[ 0, 1, 0],
[-1, 0, 0]])
Rzyx = Rz * Ry90 * Rx
pprint(Rzyx)
```
⎡0 -cx⋅sz + cz⋅sx cx⋅cz + sx⋅sz⎤
⎢ ⎥
⎢0 cx⋅cz + sx⋅sz cx⋅sz - cz⋅sx⎥
⎢ ⎥
⎣-1 0 0 ⎦
So now we have the above expression. We know that the Y rotation is 90 but what about the X and Z rotations? We need to look at the upper part of the matrix to figure these out.
Let's apply the sine and cosine [addition rules](https://en.wikipedia.org/wiki/List_of_trigonometric_identities)
$$
\sin(z + x) = \sin(z) \cos(x) + \cos(z) \sin(x) \\
\sin(z - x) = \sin(z) \cos(x) - \cos(z) \sin(x) \\
\cos(z + x) = \cos(z) \cos(x) - \sin(z) \sin(x) \\
\cos(z - x) = \cos(z) \cos(x) + \sin(z) \sin(x) \\
$$
Another useful property of sine and cosine is the following
$$
\sin(-\theta) = -\sin(\theta) \\
\cos(-\theta) = \cos(\theta)
$$
Let's try to simplify the above matrix using these rules. For example, the term in position $r_{12}$ has two terms, each containing both a sine and a cosine, so it corresponds to one of the sine rules. It also has a negative sign, so it's the difference between the two angles X and Z.
$$
\begin{bmatrix}
0 & s(x-z) & c(x-z) \\
0 & c(x-z) & s(z-x) \\
-1 & 0 & 0
\end{bmatrix}
$$
which can be rewritten so every term has angle $x-z$
$$
\begin{bmatrix}
0 & s(x-z) & c(x-z) \\
0 & c(x-z) & -s(x-z) \\
-1 & 0 & 0
\end{bmatrix}
$$
Therefore, we can use atan2($r_{01}$, $r_{02}$) to get the angle $\theta$ corresponding to the difference $x-z$. Many values of X and Z could combine to give $\theta$. Let's choose one of X or Z to be zero and then the other can be $\theta$.
```python
# Compute Rzyx when y is -90
Ry90_Minus = Ry90.T
Rzyx = Rz * Ry90_Minus * Rx
pprint(Rzyx)
```
⎡0 -cx⋅sz - cz⋅sx -cx⋅cz + sx⋅sz⎤
⎢ ⎥
⎢0 cx⋅cz - sx⋅sz -cx⋅sz - cz⋅sx⎥
⎢ ⎥
⎣1 0 0 ⎦
$$
\begin{bmatrix}
0 & -s(x+z) & -c(x+z) \\
0 & c(x+z) & -s(x+z) \\
1 & 0 & 0
\end{bmatrix}
$$
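Putting the regular case and the two locked cases together, here is a small NumPy sketch of the ZYX extraction (following the convention above of setting Z to zero when Y is at plus or minus 90 degrees):
```python
import numpy as np

def euler_zyx_from_matrix(R):
    """Recover (theta_x, theta_y, theta_z) in radians from R = Rz @ Ry @ Rx."""
    if abs(R[2, 0]) < 1.0 - 1e-8:                # regular case
        theta_y = np.arcsin(-R[2, 0])
        theta_x = np.arctan2(R[2, 1], R[2, 2])
        theta_z = np.arctan2(R[1, 0], R[0, 0])
    elif R[2, 0] < 0:                            # locked case: Y = +90 degrees
        theta_y, theta_z = np.pi / 2, 0.0
        theta_x = np.arctan2(R[0, 1], R[0, 2])   # r01 = s(x-z), r02 = c(x-z)
    else:                                        # locked case: Y = -90 degrees
        theta_y, theta_z = -np.pi / 2, 0.0
        theta_x = np.arctan2(-R[0, 1], -R[0, 2]) # r01 = -s(x+z), r02 = -c(x+z)
    return theta_x, theta_y, theta_z
```
A quick round trip with numerical Rz, Ry, Rx matrices built from known angles is an easy way to sanity-check it.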
# Convert from all euler angles to a matrix
The other five euler angle combinations can be derived similarly.
# XYZ
```python
print("Rxyz")
pprint(Rx * Ry * Rz)
print()
print()
print("Y = 90")
pprint(Rx * Ry90 * Rz)
print()
print()
print("Y = -90")
pprint(Rx * Ry90_Minus * Rz)
print()
print()
```
Rxyz
⎡ cy⋅cz -cy⋅sz sy ⎤
⎢ ⎥
⎢cx⋅sz + cz⋅sx⋅sy cx⋅cz - sx⋅sy⋅sz -cy⋅sx⎥
⎢ ⎥
⎣-cx⋅cz⋅sy + sx⋅sz cx⋅sy⋅sz + cz⋅sx cx⋅cy ⎦
Y = 90
⎡ 0 0 1⎤
⎢ ⎥
⎢cx⋅sz + cz⋅sx cx⋅cz - sx⋅sz 0⎥
⎢ ⎥
⎣-cx⋅cz + sx⋅sz cx⋅sz + cz⋅sx 0⎦
Y = -90
⎡ 0 0 -1⎤
⎢ ⎥
⎢cx⋅sz - cz⋅sx cx⋅cz + sx⋅sz 0 ⎥
⎢ ⎥
⎣cx⋅cz + sx⋅sz -cx⋅sz + cz⋅sx 0 ⎦
Y = 90
$$
\begin{bmatrix}
0 & 0 & 1 \\
s(x+z) & c(x+z) & 0\\
-c(x+z) & s(x+z) & 0 \\
\end{bmatrix}
$$
Y = -90
$$
\begin{bmatrix}
0 & 0 & -1 \\
s(z-x) & c(z-x) & 0 \\
c(z-x) & -s(z-x) & 0 \\
\end{bmatrix}
$$
# YXZ
```python
print("Ryxz")
pprint(Ry * Rx * Rz)
print()
print()
Rx90 = Matrix([
[1, 0, 0],
[0, 0,-1],
[0, 1, 0]])
print("+90")
pprint(Ry * Rx90 * Rz)
print()
print()
Rx90_Minus = Rx90.T
print("-90")
pprint(Ry * Rx90.T * Rz)
print()
print()
```
Ryxz
⎡cy⋅cz + sx⋅sy⋅sz -cy⋅sz + cz⋅sx⋅sy cx⋅sy⎤
⎢ ⎥
⎢ cx⋅sz cx⋅cz -sx ⎥
⎢ ⎥
⎣cy⋅sx⋅sz - cz⋅sy cy⋅cz⋅sx + sy⋅sz cx⋅cy⎦
+90
⎡cy⋅cz + sy⋅sz -cy⋅sz + cz⋅sy 0 ⎤
⎢ ⎥
⎢ 0 0 -1⎥
⎢ ⎥
⎣cy⋅sz - cz⋅sy cy⋅cz + sy⋅sz 0 ⎦
-90
⎡cy⋅cz - sy⋅sz -cy⋅sz - cz⋅sy 0⎤
⎢ ⎥
⎢ 0 0 1⎥
⎢ ⎥
⎣-cy⋅sz - cz⋅sy -cy⋅cz + sy⋅sz 0⎦
X = 90
$$
\begin{bmatrix}
c(y-z) & s(y-z) & 0\\
0 & 0 & -1 \\
-s(y-z) & c(y-z) & 0 \\
\end{bmatrix}
$$
X = -90
$$
\begin{bmatrix}
c(y+z) & -s(y+z) & 0 \\
0 & 0 & 1 \\
-s(y+z) & -c(y+z) & 0 \\
\end{bmatrix}
$$
# ZXY
```python
print("Rzxy")
pprint(Rz * Rx * Ry)
print()
print()
print("+90")
pprint(Rz * Rx90 * Ry)
print()
print()
print("-90")
pprint(Rz * Rx90.T * Ry)
print()
print()
```
Rzxy
⎡cy⋅cz - sx⋅sy⋅sz -cx⋅sz cy⋅sx⋅sz + cz⋅sy ⎤
⎢ ⎥
⎢cy⋅sz + cz⋅sx⋅sy cx⋅cz -cy⋅cz⋅sx + sy⋅sz⎥
⎢ ⎥
⎣ -cx⋅sy sx cx⋅cy ⎦
+90
⎡cy⋅cz - sy⋅sz 0 cy⋅sz + cz⋅sy ⎤
⎢ ⎥
⎢cy⋅sz + cz⋅sy 0 -cy⋅cz + sy⋅sz⎥
⎢ ⎥
⎣ 0 1 0 ⎦
-90
⎡cy⋅cz + sy⋅sz 0 -cy⋅sz + cz⋅sy⎤
⎢ ⎥
⎢cy⋅sz - cz⋅sy 0 cy⋅cz + sy⋅sz ⎥
⎢ ⎥
⎣ 0 -1 0 ⎦
X = 90
$$
\begin{bmatrix}
c(y+z) & 0 & s(y+z) \\
s(y+z) & 0 & -c(y+z) \\
0 & 1 & 0 \\
\end{bmatrix}
$$
X = -90
$$
\begin{bmatrix}
c(y-z) & 0 & s(y-z) \\
-s(y-z) & 0 & c(y-z) \\
0 & -1 & 0 \\
\end{bmatrix}
$$
# XZY
```python
print("Rxzy")
pprint(Rx * Rz * Ry)
print()
print()
Rz90 = Matrix([
[0, -1, 0],
[1, 0, 0],
[0, 0, 1]])
print("+90")
pprint(Rx * Rz90 * Ry)
print()
print()
print("-90")
pprint(Rx * Rz90.T * Ry)
print()
print()
```
Rxzy
⎡ cy⋅cz -sz cz⋅sy ⎤
⎢ ⎥
⎢cx⋅cy⋅sz + sx⋅sy cx⋅cz cx⋅sy⋅sz - cy⋅sx⎥
⎢ ⎥
⎣-cx⋅sy + cy⋅sx⋅sz cz⋅sx cx⋅cy + sx⋅sy⋅sz⎦
+90
⎡ 0 -1 0 ⎤
⎢ ⎥
⎢cx⋅cy + sx⋅sy 0 cx⋅sy - cy⋅sx⎥
⎢ ⎥
⎣-cx⋅sy + cy⋅sx 0 cx⋅cy + sx⋅sy⎦
-90
⎡ 0 1 0 ⎤
⎢ ⎥
⎢-cx⋅cy + sx⋅sy 0 -cx⋅sy - cy⋅sx⎥
⎢ ⎥
⎣-cx⋅sy - cy⋅sx 0 cx⋅cy - sx⋅sy ⎦
Z = 90
$$
\begin{bmatrix}
0 & -1 & 0 \\
c(x-y) & 0 & -s(x-y) \\
s(x-y) & 0 & c(x-y) \\
\end{bmatrix}
$$
Z = -90
$$
\begin{bmatrix}
0 & 1 & 0 \\
-c(x+y) & 0 & -s(x+y) \\
-s(x+y) & 0 & c(x+y) \\
\end{bmatrix}
$$
# YZX
```python
print("Ryzx")
pprint(Ry * Rz * Rx)
print()
print()
print("+90")
pprint(Ry * Rz90 * Rx)
print()
print()
print("-90")
pprint(Ry * Rz90.T * Rx)
print()
print()
```
Ryzx
⎡cy⋅cz -cx⋅cy⋅sz + sx⋅sy cx⋅sy + cy⋅sx⋅sz⎤
⎢ ⎥
⎢ sz cx⋅cz -cz⋅sx ⎥
⎢ ⎥
⎣-cz⋅sy cx⋅sy⋅sz + cy⋅sx cx⋅cy - sx⋅sy⋅sz⎦
+90
⎡0 -cx⋅cy + sx⋅sy cx⋅sy + cy⋅sx⎤
⎢ ⎥
⎢1 0 0 ⎥
⎢ ⎥
⎣0 cx⋅sy + cy⋅sx cx⋅cy - sx⋅sy⎦
-90
⎡0 cx⋅cy + sx⋅sy cx⋅sy - cy⋅sx⎤
⎢ ⎥
⎢-1 0 0 ⎥
⎢ ⎥
⎣0 -cx⋅sy + cy⋅sx cx⋅cy + sx⋅sy⎦
Z = 90
$$
\begin{bmatrix}
0 & -c(x+y) & s(x+y) \\
1 & 0 & 0 \\
0 & s(x+y) & c(x+y) \\
\end{bmatrix}
$$
Z = -90
$$
\begin{bmatrix}
0 & c(y-x) & s(y-x) \\
-1 & 0 & 0 \\
0 & -s(y-x) & c(y-x) \\
\end{bmatrix}
$$
| e6851d2a32d1cee8e3520944e094d933b8bfb427 | 19,607 | ipynb | Jupyter Notebook | Labs/EulerAngles.ipynb | isaacwasserman/website | c052e1e8b28b9a600623589768691585eeda774d | [
"MIT"
] | null | null | null | Labs/EulerAngles.ipynb | isaacwasserman/website | c052e1e8b28b9a600623589768691585eeda774d | [
"MIT"
] | null | null | null | Labs/EulerAngles.ipynb | isaacwasserman/website | c052e1e8b28b9a600623589768691585eeda774d | [
"MIT"
] | 1 | 2021-09-28T20:41:54.000Z | 2021-09-28T20:41:54.000Z | 26.895748 | 298 | 0.387209 | true | 4,905 | Qwen/Qwen-72B | 1. YES
2. YES | 0.930458 | 0.919643 | 0.855689 | __label__eng_Latn | 0.73384 | 0.826386 |
# How to compute the inverse of a matrix
- `np.linalg` provides an `inv` function
```python
import numpy as np
```
```python
a = np.array([[3, 1, 1], [1, 2, 1], [0, -1, 1]])
```
```python
np.linalg.inv(a)
```
array([[ 0.42857143, -0.28571429, -0.14285714],
[-0.14285714, 0.42857143, -0.28571429],
[-0.14285714, 0.42857143, 0.71428571]])
# Solving a single system of linear equations
To solve the system of equations below, it is better to use the `solve` function than to compute the inverse matrix (it relies on a faster and more numerically stable algorithm under the hood).
\begin{equation}
\begin{pmatrix}
3& 1& 1\\
1& 2& 1\\
0& -1& 1
\end{pmatrix}
\begin{pmatrix}
x \\ y\\ z
\end{pmatrix}
=
\begin{pmatrix}
1 \\ 2 \\ 3
\end{pmatrix}
\end{equation}
```python
b = np.array([[3, 1, 1], [1, 2, 1], [0, -1, 1]])
```
```python
c = np.array([1, 2, 3])
```
```python
np.linalg.solve(b, c)
```
array([-0.57142857, -0.14285714, 2.85714286])
# Solving multiple systems of equations that share the same coefficient matrix
\begin{equation}
Ax=b_1, Ax=b_2, \dots, Ax=b_m
\end{equation}
When we have systems of equations of this form, computing $A^{-1}$ lets us obtain the solutions as
\begin{equation}
A^{-1}b_1, A^{-1}b_2, \dots, A^{-1}b_m
\end{equation}
However, there is a better way.
## LU decomposition
By factoring $A=PLU$, we can solve the systems quickly and in a numerically stable way.
Here $L$ is a lower triangular matrix with ones on the diagonal, $U$ is an upper triangular matrix, and $P$ is a matrix with exactly one entry equal to $1$ in each row and all other entries $0$ (a permutation matrix).
\begin{equation}
PLUx = b
\end{equation}
The system above can be solved for $x$ by solving the following three equations in sequence.
\begin{align}
Pz &= b \\
Ly &= z \\
Ux &= y
\end{align}
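For example, with `scipy.linalg.lu` we can obtain the explicit $P, L, U$ factors and carry out the three solves by hand (just as an illustration; in practice `lu_factor` / `lu_solve` below are preferred):
```python
import numpy as np
from scipy import linalg

a = np.array([[3, 1, 1], [1, 2, 1], [0, -1, 1]], dtype=float)
b = np.array([1, 2, 3], dtype=float)

p, l, u = linalg.lu(a)                            # a = p @ l @ u
z = p.T @ b                                       # solve P z = b (P is a permutation, so P^{-1} = P^T)
y = linalg.solve_triangular(l, z, lower=True)     # solve L y = z (forward substitution)
x = linalg.solve_triangular(u, y, lower=False)    # solve U x = y (back substitution)
print(x)                                          # [-0.57142857 -0.14285714  2.85714286]
```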
```python
# use scipy
from scipy import linalg
```
```python
a = np.array([[3, 1, 1], [1, 2, 1], [0, -1, 1]])
b = np.array([1, 2, 3])
```
```python
lu, p = linalg.lu_factor(a)
linalg.lu_solve((lu, p), b)
```
array([-0.57142857, -0.14285714, 2.85714286])
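The point of this section is that the factorisation can be reused for many right-hand sides; for example (the extra vectors are arbitrary):
```python
# Factor once, then solve for several right-hand sides
lu, p = linalg.lu_factor(a)
bs = [np.array([1, 2, 3]), np.array([4, 5, 6]), np.array([-1, 0, 2])]
for bi in bs:
    print(linalg.lu_solve((lu, p), bi))
```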
```python
```
| 3c7d876a06b6931b103f45efa16f6673a3092cdc | 4,575 | ipynb | Jupyter Notebook | notebooks/linear_equations.ipynb | 515hikaru/essence-of-machine-learning | 7f46be9316d227626f27a06deac64b43191cb4d7 | [
"MIT"
] | null | null | null | notebooks/linear_equations.ipynb | 515hikaru/essence-of-machine-learning | 7f46be9316d227626f27a06deac64b43191cb4d7 | [
"MIT"
] | 11 | 2018-10-04T14:33:15.000Z | 2018-10-09T13:40:35.000Z | notebooks/linear_equations.ipynb | 515hikaru/essence-of-machine-learning | 7f46be9316d227626f27a06deac64b43191cb4d7 | [
"MIT"
] | null | null | null | 18.983402 | 99 | 0.460109 | true | 913 | Qwen/Qwen-72B | 1. YES
2. YES | 0.959154 | 0.887205 | 0.850966 | __label__yue_Hant | 0.494295 | 0.815413 |
# Matérn Spectral Mixture (MSM) kernel
## Gaussian process priors for pitch detection in polyphonic music
### Learning kernels in frequency domain
#### Written by Pablo A. Alvarado, Centre for Digital Music, Queen Mary University of London.
*Last updated Friday, 12 May 2017.*
The aim of this notebook is to illustrate the approach proposed in (link) for pitch detection in polyphonic signals, applying Gaussian process (GP) models in Python using [GPflow](https://github.com/GPflow/GPflow). We first outline the mathematical formulation of the proposed model. Then we introduce how to learn hyperparameters in the frequency domain. Finally we present an example of detecting two pitches in a polyphonic music signal.
The dataset used in this tutorial corresponds to the electric guitar mixture signal in http://winnie.kuis.kyoto-u.ac.jp/~yoshii/psdtf/. The first 4 seconds of the data were used for training; this segment corresponds to 2 isolated sound events, with pitches C4 and E4 respectively. The test data consisted of three sound events with pitches C4, E4 and C4+E4, i.e. the training data plus an event with both pitches.
## GP additive model for pitch detection
Automatic music transcription aims to infer a symbolic representation, such as piano-roll or score, given an observed audio recording. From a Bayesian latent variable perspective, transcription consists in updating our beliefs about the symbolic description for a certain piece of music, after observing a corresponding audio recording.
We approach the transcription problem from a time-domain source separation perspective. That is, given an audio recording $\mathcal{D}=\left\lbrace y_n, t_n \right\rbrace_{n=1}^{N}$, we seek to formulate a generative probabilistic model that describes how the observed polyphonic signal (mixture of sources) was generated and, moreover, that allows us to infer the latent variables associated with the piano-roll representation. To do so, we use the regression model
\begin{align}
y_n = f(t_n) + \epsilon,
\end{align}
where $y_n$ is the value of the analysed polyphonic signal at time $t_n$, the noise follows a normal distribution $\epsilon \sim \mathcal{N}(0,\sigma^2)$, and the function $f(t)$ is a random process composed by a linear combination of $M$ sources $\left\lbrace f_m(t) \right\rbrace _{m=1}^{M} $, that is
\begin{align}
f(t) = \sum_{m=1}^{M} f_{m}(t).
\end{align}
Each source is decomposed into the product of two factors, an amplitude-envelope or activation function $\phi_m(t)$, and a quasi-periodic or component function $w_{m}(t)$. The overall model is then
\begin{align}
y(t) = \sum_{m=1}^{M} \phi_{m}(t) w_{m}(t) + \epsilon.
\end{align}
We can interpret the set $\left\lbrace w_{m}(t)\right\rbrace_{m=1}^{M}$ as a dictionary where each component $ w_{m}(t)$ is a quasi-periodic stochastic function with a defined pitch. Likewise, each stochastic function in $\left\lbrace \phi_{m}(t)\right\rbrace_{m=1}^{M}$ represents a row of the piano roll-matrix, i.e the time dependent non-negative activation of a specific pitch throughout the analysed piece of music.
Components $\left\lbrace w_{m}(t)\right\rbrace_{m=1}^{M}$ follow
\begin{align}
w_m(t) \sim \mathcal{GP}(0,k_m(t,t')),
\end{align}
where the covariance $k_m(t,t')$ reflects the frequency content of the $m^{\text{th}}$ component, and has the form of a Matérn spectral mixture (MSM) kernel.
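Concretely, the MSM kernel can be written as a sum of Matérn-1/2 (exponential) kernels modulated by cosines, one term per partial of the pitch; the notation below ($I_m$ partials with variances $\sigma_{m,i}^2$, decays $\lambda_{m,i}$ and centre frequencies $\omega_{m,i}$) is ours for this note:
$$
k_m(t,t') = \sum_{i=1}^{I_m} \sigma_{m,i}^{2}\, \exp\left(-\lambda_{m,i}\left|t - t'\right|\right)\cos\left(\omega_{m,i}\left(t - t'\right)\right).
$$
Its spectral density is a mixture of Lorentzian peaks centred at the $\omega_{m,i}$, which is exactly what we fit to the observed spectra in the next section.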
To guarantee the activations to be non-negative we apply non-linear transformations to GPs. To do so, we use the sigmoid function
\begin{align}
\sigma(x) = \left[ 1 + \exp(-x) \right]^{-1},
\end{align}
Then, activations are defined as
\begin{align*}
\phi_m(t) = \sigma( {g_{m}(t)} ),
\end{align*}
where $ \left\lbrace g_{m}(t)\right\rbrace_{m=1}^{M} $ are GPs. The sigmoid model follows
\begin{align}
y(t)=
\sum_{m = 1}^{M}
\sigma( {g_{m}(t)} )
\
w_{m}(t)
+ \epsilon.
\end{align}
## Learning hyperparameters in frequency domain
In this section we describe how to learn the hyperparameters for the components $\left\lbrace w_{m}(t)\right\rbrace_{m=1}^{M}$, and the activations $\left\lbrace g_{m}(t)\right\rbrace_{m=1}^{M}$.
```python
%matplotlib inline
```
```python
import matplotlib.pyplot as plt
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = (16, 4)
import numpy as np
import scipy as sp
import scipy.io as sio
import scipy.io.wavfile as wav
from scipy import signal
from scipy.fftpack import fft
import gpflow
import GPitch
```
```python
sf, y = wav.read('guitar.wav') #loading dataset
y = y.astype(np.float64)
yaudio = y / np.max(np.abs(y))
N = np.size(yaudio)
Xaudio = np.linspace(0, (N-1.)/sf, N)
X1, y1 = Xaudio[0:2*sf], yaudio[0:2*sf] # define training data for component 1 and 2
X2, y2 = Xaudio[2*sf:4*sf], yaudio[2*sf:4*sf]
y1f, y2f = sp.fftpack.fft(y1), sp.fftpack.fft(y2) # get spectral density for each isolated sound event
N1 = y1.size # Number of sample points
T = 1.0 / sf # sample period
F = np.linspace(0.0, 1.0/(2.0*T), N1//2)
S1 = 2.0/N1 * np.abs(y1f[0:N1//2])
S2 = 2.0/N1 * np.abs(y2f[0:N1//2])
```
```python
plt.figure()
plt.subplot(1,2,1), plt.title('Waveform sound event with pitch C4'), plt.xlabel('Time (s)'),
plt.plot(X1, y1),
plt.subplot(1,2,2), plt.title('Waveform sound event with pitch E4'), plt.xlabel('Time (s)'),
plt.plot(X2, y2)
plt.figure()
plt.subplot(1,2,1), plt.title('Spectral density sound event with pitch C4'), plt.xlabel('Frequency (Hz)'),
plt.plot(F, S1, lw=2), plt.xlim([0, 5000])
plt.subplot(1,2,2), plt.title('Spectral density sound event with pitch E4'), plt.xlabel('Frequency (Hz)'),
plt.plot(F, S2, lw=2), plt.xlim([0, 5000]);
```
### Learning frequency content of each component process $\left\lbrace w_{m}(t)\right\rbrace_{m=1}^{M}$
#### Example fitting one frequency component
```python
# example fitting one harmonic
idx = np.argmax(S1)
a, b = idx - 50, idx + 50
X, y = F[a: b,].reshape(-1,), S1[a: b,].reshape(-1,)
p0 = np.array([1., 1., 2*np.pi*F[idx]])
phat = sp.optimize.minimize(GPitch.Lloss, p0, method='L-BFGS-B', args=(X, y), tol = 1e-10, options={'disp': True})
pstar = phat.x
Xaux = np.linspace(X.min(), X.max(), 1000)
L = GPitch.Lorentzian(pstar,Xaux)
plt.figure(), plt.xlim([X.min(), X.max()])
plt.plot(Xaux, L, lw=2)
plt.plot(X, y, '.', ms=8);
```
```python
Nh = 15 # maximun number of harmonics
s_1, l_1, f_1 = GPitch.learnparams(F, S1, Nh)
Nh1 = s_1.size
s_2, l_2, f_2 = GPitch.learnparams(F, S2, Nh)
Nh2 = s_2.size
```
```python
plt.figure()
for i in range(0, Nh1):
idx = np.argmin(np.abs(F - f_1[i]))
a = idx - 50
b = idx + 50
pstar = np.array([s_1[i], 1./l_1[i], 2.*np.pi*f_1[i]])
learntfun = GPitch.Lorentzian(pstar, F)
plt.subplot(1,Nh,i+1)
plt.plot(F[a:b],learntfun[a:b],'', lw = 2)
plt.plot(F[a:b],S1[a:b],'.', ms = 3)
plt.axis('off')
plt.ylim([S1.min(), S1.max()])
plt.figure()
for i in range(0, Nh2):
idx = np.argmin(np.abs(F - f_2[i]))
a = idx - 50
b = idx + 50
pstar = np.array([s_2[i], 1./l_2[i], 2.*np.pi*f_2[i]])
learntfun = GPitch.Lorentzian(pstar, F)
plt.subplot(1,Nh,i+1)
plt.plot(F[a:b],learntfun[a:b],'', lw = 2)
plt.plot(F[a:b],S2[a:b],'.', ms = 3)
plt.axis('off')
plt.ylim([S2.min(), S2.max()])
```
```python
S1k = GPitch.LorM(F, s=s_1, l=1./l_1, f=2*np.pi*f_1)
S2k = GPitch.LorM(F, s=s_2, l=1./l_2, f=2*np.pi*f_2)
plt.figure(), plt.title('Approximate spectrum using Lorentzian mixture')
plt.subplot(1,2,1), plt.plot(F, S1, lw=2), plt.plot(F, S1k, lw=2)
plt.legend(['FT source 1', 'S kernel 1']), plt.xlabel('Frequency (Hz)')
plt.subplot(1,2,2), plt.plot(F, S2, lw=2), plt.plot(F, S2k, lw=2)
plt.legend(['FT source 2', 'S kernel 2']), plt.xlabel('Frequency (Hz)');
```
```python
Xk = np.linspace(-1.,1.,2*16000).reshape(-1,1)
IFT_y1 = np.fft.ifft(np.abs(y1f))
IFT_y2 = np.fft.ifft(np.abs(y2f))
k1 = GPitch.MaternSM(Xk, s=s_1, l=1./l_1, f=2*np.pi*f_1)
k2 = GPitch.MaternSM(Xk, s=s_2, l=1./l_2, f=2*np.pi*f_2)
plt.figure()
plt.subplot(1,2,1), plt.plot(Xk, k1, lw=2), plt.xlim([-0.005, 0.005])
plt.subplot(1,2,2), plt.plot(Xk, k2, lw=2), plt.xlim([-0.005, 0.005])
plt.figure()
plt.subplot(1,2,1), plt.plot(Xk, k1)
plt.subplot(1,2,2), plt.plot(Xk, k2)
plt.figure(),
plt.subplot(1,2,1), plt.plot(X1[0:16000],IFT_y1[0:16000], lw=1), plt.plot(X1[0:16000],k1[16000:32000], lw=1)
plt.xlim([0, 0.03])
plt.subplot(1,2,2), plt.plot(X1[0:16000],IFT_y2[0:16000], lw=1), plt.plot(X1[0:16000],k2[16000:32000], lw=1)
plt.xlim([0, 0.03]);
```
### Learning hyperparameters of activation processes $\left\lbrace \phi_{m}(t)\right\rbrace_{m=1}^{M}$
So far we have learnt the harmonic content of the isolated events. Now we try to learn the parameters of the GP that describes the amplitude envelope.
```python
win = signal.hann(200)
env1 = signal.convolve(np.abs(y1), win, mode='same') / sum(win)
env1 = np.max(np.abs(y1))*(env1 / np.max(env1))
env1 = env1.reshape(-1,)
env2 = signal.convolve(np.abs(y2), win, mode='same') / sum(win)
env2 = np.max(np.abs(y2))*(env2 / np.max(env2))
env2 = env2.reshape(-1,)
plt.figure()
plt.subplot(1,2,1), plt.plot(X1, y1, ''), plt.plot(X1, env1, '', lw = 2)
plt.subplot(1,2,2), plt.plot(X2, y2, ''), plt.plot(X2, env2, '', lw = 2);
```
```python
yf = fft(env1)
xf = np.linspace(-1.0/(2.0*T), 1.0/(2.0*T), N1)
S = 1.0/N1 * np.abs(yf)
sht = np.fft.fftshift(S)
a = np.argmin(np.abs(xf - (-10.)))
b = np.argmin(np.abs(xf - ( 10.)))
X, y = xf[a:b].reshape(-1,), sht[a:b].reshape(-1,)
p0 = np.array([1., 1., 0.])
phat = sp.optimize.minimize(GPitch.Lloss, p0, method='L-BFGS-B', args=(X, y), tol = 1e-10, options={'disp': True})
X2 = np.linspace(X.min(), X.max(), 1000)
L = GPitch.Lorentzian(phat.x, X2)
plt.figure()
plt.subplot(1,2,1), plt.plot(X2, L), plt.plot(X, y, '.', ms=8)
plt.subplot(1,2,2), plt.plot(X2, L), plt.plot(X, y, '.', ms=8)
s_env, l_env, f_env = np.hsplit(phat.x, 3)
```
# Using the learnt MSM kernel for pitch detection
We keep the same form for the learnt variances $\sigma^{2}$, but we invert the learnt lengthscale parameter, i.e. we use $l = \lambda^{-1}$. Also, the frequency vector was learnt in Hz, which is why we convert it to radians (the $2\pi f$ factor in the code).
```python
# To run the pitch detection change: RunExperiment = True
RunExperiment = False
if RunExperiment == True:
GPitch.pitch('guitar.wav', windowsize=16000)
else:
results = np.load('SIG_FL_results.npz')
X = results['X']
g1 = results['mu_g1']
g2 = results['mu_g2']
phi1 = GPitch.logistic(g1)
phi2 = GPitch.logistic(g2)
w1 = results['mu_f1']
w2 = results['mu_f2']
f1 = phi1*w1
f2 = phi2*w2
Xtest1, ytest1 = Xaudio[0:4*sf], yaudio[0:4*sf]
Xtest2, ytest2 = Xaudio[6*sf:8*sf], yaudio[6*sf:8*sf]
Xtest = np.hstack((Xtest1, Xtest2)).reshape(-1,1)
ytest = np.hstack((ytest1, ytest2)).reshape(-1,1)
sf, sourceC = wav.read('source_C.wav')
sourceC = sourceC.astype(np.float64)
sourceC = sourceC / np.max(np.abs(sourceC))
sf, sourceE = wav.read('source_E.wav')
sourceE = sourceE.astype(np.float64)
sourceE = sourceE / np.max(np.abs(sourceE))
plt.figure()
plt.plot(Xtest, ytest)
```
```python
plt.figure()
plt.subplot(1,2,1), plt.plot(Xaudio, sourceC)
plt.xlim([X.min(), X.max()])
plt.subplot(1,2,2), plt.plot(Xaudio, sourceE)
plt.xlim([X.min(), X.max()])
plt.figure()
plt.subplot(1,2,1), plt.plot(X, f1)
plt.subplot(1,2,2), plt.plot(X, f2)
plt.figure()
plt.subplot(1,2,1), plt.plot(X, phi1)
plt.subplot(1,2,2), plt.plot(X, phi2)
plt.figure()
plt.subplot(1,2,1), plt.plot(X, w1)
plt.subplot(1,2,2), plt.plot(X, w2)
```
## Piano-roll
```python
#%% generate piano-roll ground truth
jump = 100 #downsample
Xsubset = X[::jump]
oct1 = 24
oct2 = 84
Y = np.arange(oct1,oct2).reshape(-1,1)
Ns = Xsubset.size
Phi = np.zeros((Y.size, Ns))
#Phi[47-oct1, (Xsubset> 0.050 and Xsubset< 1.973)] = 1.
C4_a1 = np.argmin(np.abs(Xsubset-0.050))
C4_b1 = np.argmin(np.abs(Xsubset-1.973))
C4_a2 = np.argmin(np.abs(Xsubset-6.050))
C4_b2 = np.argmin(np.abs(Xsubset-7.979))
Phi[47-oct1, C4_a1:C4_b1 ] = 1.
Phi[47-oct1, C4_a2:C4_b2 ] = 1.
E4_a1 = np.argmin(np.abs(Xsubset-2.050))
E4_b1 = np.argmin(np.abs(Xsubset-3.979))
E4_a2 = np.argmin(np.abs(Xsubset-6.050))
E4_b2 = np.argmin(np.abs(Xsubset-7.979))
Phi[51-oct1, E4_a1:E4_b1 ] = 1.
Phi[51-oct1, E4_a2:E4_b2 ] = 1.
Phi = np.abs(Phi-1)
[Xm, Ym] = np.meshgrid(Xsubset,Y)
#infered piano roll
Phi_i = np.zeros((Y.size, Ns))
Phi_i[47-oct1, :] = phi1[::jump].reshape(-1,)
Phi_i[51-oct1, :] = phi2[::jump].reshape(-1,)
Phi_i = np.abs(Phi_i-1)
```
```python
plt.figure()
plt.ylabel('')
plt.xlabel('Time (s)')
plt.pcolormesh(Xm, Ym, Phi, cmap='gray')
plt.ylim([oct1, oct2])
plt.xlim([0, 8])
plt.xlabel('Time (seconds)')
plt.figure()
plt.ylabel('')
plt.xlabel('Time (s)')
plt.pcolormesh(Xm, Ym, Phi_i, cmap='gray')
plt.ylim([oct1, oct2])
plt.xlim([0, 8])
plt.xlabel('Time (seconds)')
```
| ec4ec47787a11f33be7f9b33cdd5982e2033c3be | 422,921 | ipynb | Jupyter Notebook | demo_MSMK.ipynb | PabloAlvarado/MSMK | 9976429cf0a7f38e52d890e1cf6e6fc982fe2882 | [
"Apache-2.0"
] | 3 | 2018-02-20T20:38:40.000Z | 2021-12-12T14:26:12.000Z | demo_MSMK.ipynb | PabloAlvarado/MSMK | 9976429cf0a7f38e52d890e1cf6e6fc982fe2882 | [
"Apache-2.0"
] | 1 | 2018-11-16T15:00:03.000Z | 2019-04-08T08:27:10.000Z | demo_MSMK.ipynb | PabloAlvarado/MSMK | 9976429cf0a7f38e52d890e1cf6e6fc982fe2882 | [
"Apache-2.0"
] | 1 | 2020-06-01T07:21:59.000Z | 2020-06-01T07:21:59.000Z | 624.698671 | 79,588 | 0.942817 | true | 4,512 | Qwen/Qwen-72B | 1. YES
2. YES | 0.808067 | 0.695958 | 0.562381 | __label__eng_Latn | 0.717111 | 0.144929 |
# Music Machine Learning - Probability distributions
### Author: Philippe Esling (esling@ircam.fr)
In this course we will cover
1. A [quick recap](#recap) on simple probability concepts
2. An introduction to [probability distributions](#distributions)
3. An explanation on how to [sample](#sampling) from distributions ourselves
<a id="recap"> </a>
## Quick recap on probability
The field of probability aims to model random or uncertain events through random variables $X$ denoting a quantity that is uncertain, which take values $\omega$ in a sample space $\Omega$. If we observe several occurrences of this variable $\{\mathbf{x}_{i}\}_{i=1}^{N}$, we might try to model the _probability distribution_ $p(\mathbf{x})$ of that variable. We recall here that the probability of an event $a$ is a real number $p(a)$, with $0 \leq p(a) \leq 1$, knowing that $p(\Omega)=1$ and $p(\{\})=0$. The probability of two events occurring simultaneously is defined as $p\left(a \cap b \right)$. Therefore, the probability of one event **or** the other occurring is defined as
$
\begin{equation}
p\left(a \cup b \right) = p(a) + p(b) - p\left(a \cap b \right)
\end{equation}
$
The *conditional probability* of an event $a$ occurring *given* another event $b$ is denoted $p \left(a \mid b \right)$ and is defined as
$$
\begin{equation}
p \left(a \mid b \right) = \frac{p \left(a , b \right)}{p \left(b \right)}
\end{equation}
$$
This can be understood as the probability of event $a$ to occur if we restrict the world of possibilities to event $b$. The *chain rule* defines the probabilities of a set of events to co-occur simultaneously
$$
\begin{equation}
p \left(x_{1},...,x_{n} \right) = \prod_{i=1}^{n}{p \left(x_{i}\mid x_{i-1},..., x_{1} \right)}
\end{equation}
$$
Finally, we say that two events are independent if $p(a\mid b) = p(a)$.
To understand these concepts graphically, we will rely on `PyTorch` and specifically the `distributions` package.
```python
import torch.distributions as distribution
import torch.distributions.transforms as transform
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from helper_plot import hdr_plot_style
hdr_plot_style()
```
<a id="distributions"> </a>
### Probability distributions
Let $X$ be a random variable associated with a *probability distribution function* that assigns probabilities to the different outcomes $X$ can take in the sample space. We can divide random variables into three different types:
- **$X$ is discrete**: Discrete random variables may only assume values on a specified list.
- **$X$ is continuous**: Continuous random variables can take on any value within a continuous range.
- **$X$ is mixed**: Mixed random variables assign probabilities to both discrete and continuous outcomes (i.e. a combination of the above two categories).
**Expected Value**
The expected value $\mathbb{E}\left[X\right]$ for a given probability distribution can be described as _"the mean expected value for many repeated samples from that distribution."_ As the number of repeated observations goes to infinity, the difference between the average outcome and the expected value becomes arbitrarily small. Formally, it is defined for discrete variables as
$$
\mathbb{E}\left[X\right] = \sum\limits_{i} x_i \, p(X=x_i)
$$
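As a quick sanity check of this law-of-large-numbers behaviour, here is a small sketch (the fair six-sided die is our own illustrative choice):
```python
# Sketch: estimate E[X] for a fair six-sided die, whose true expected value is 3.5
rolls = np.random.randint(1, 7, size=100000)   # 100,000 simulated rolls
print(rolls.mean())                            # should be close to 3.5
```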
#### Discrete distributions
If $X$ is discrete, then its distribution is called a *probability mass function* (pmf), which measures the probability $X$ takes on the value $x_{i}$, denoted $P(X=x_{i})$. Let $\mathbf{x}$ be a discrete random variable with range $R_{X}=\{x_1,\cdots,x_n\}$ (finite or countably infinite). The function
$$
p_{X}(x_{i})=p(X=x_{i}), \forall i\in\{1,\cdots,n\}
$$
is called the probability mass function (PMF) of $X$.
Hence, the PMF defines the probabilities of all possible values for a random variable. The above notation allows us to express that the PMF is defined for the random variable $X$, so that $p_{X}(1)$ gives the probability that $X=1$. For discrete random variables, the PMF is also called the *probability distribution*. The PMF is a probability measure, therefore it satisfies all the corresponding properties
- $0 \leq p_{X}(x_i) < 1, \forall x_i$
- $\sum_{x_i\in R_{X}} p_{X}(x_i) = 1$
- $\forall A \subset R_{X}, p(X \in A)=\sum_{x_a \in A}p_{X}(x_a)$
A very simple example of a discrete distribution is the `Bernoulli` distribution. With this distribution, _we can model a coin flip_: a fair coin assigns equal probability to both outcomes. More formally, a Bernoulli distribution is defined as
$$
Bernoulli(x)= p^x (1-p)^{(1-x)}
$$
with $p$ controlling the probability of the two classes. Hence, a fair coin should have $p=0.5$, and if we throw the coin a very large number of times, we hope to see on average an equal amount of _heads_ and _tails_.
```python
bernoulli = distribution.Bernoulli(0.5)
samples = bernoulli.sample((10000,))
plt.figure(figsize=(10,8))
sns.distplot(samples)
plt.title("Samples from a Bernoulli (coin toss)")
plt.show()
```
However, we can also _sample_ from the distribution to have individual values of a single throw. In that case, we obtain a series of separate events that _follow_ the distribution
```python
vals = ['heads', 'tails']
samples = bernoulli.sample((10, ))
for s in samples:
print('Coin is tossed on ' + vals[int(s)])
```
Now, we can change the probability parameter to model an unfair (loaded) coin, as shown in the following example (where we use a biased coin that should give us a lot more _heads_ than _tails_)
```python
bernoulli = distribution.Bernoulli(0.3)
samples = bernoulli.sample((10000,))
plt.figure(figsize=(10, 8)); sns.distplot(samples)
plt.title("Samples from an unfair coin"); plt.show()
vals = ['heads', 'tails']
samples = bernoulli.sample((10, ))
for s in samples:
print('Coin is tossed on ' + vals[int(s)])
```
#### Poisson distribution
Let's introduce one of the (many) useful probability mass functions. We say $Z$ is *Poisson*-distributed if:
$$
\begin{equation}
P\left(Z = k\right) =\frac{ \lambda^k e^{-\lambda} }{k!}, \; \; k = 0, 1, 2, \ldots
\end{equation}
$$
$\lambda \in \mathbb{R}^{+}$ is a parameter of the distribution that controls its shape, usually termed the *intensity* of the Poisson distribution. By increasing $\lambda$, we add more probability to larger values. If a random variable $Z$ has a Poisson mass distribution, we denote it by
$$
\begin{equation}
Z \sim \text{Poi}(\lambda)
\end{equation}
$$
One useful property of the Poisson distribution is that its expected value is equal to its parameter, i.e.:
$$E\large[ \;Z\; | \; \lambda \;\large] = \lambda $$
Let's plot the probability mass function for different $\lambda$ values.
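The following sketch uses `scipy.stats` for the PMF (our choice here; any implementation of the Poisson PMF would do):
```python
from scipy.stats import poisson

ks = np.arange(0, 16)
plt.figure(figsize=(10, 6))
for lam in [1, 4, 8]:
    plt.plot(ks, poisson.pmf(ks, lam), 'o--', label=r'$\lambda = %d$' % lam)
plt.xlabel('$k$')
plt.ylabel('$P(Z=k)$')
plt.title('Poisson PMF for different intensities')
plt.legend()
plt.show()
```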
### Continuous distributions
The same ideas apply to _continuous_ random variables, which can model for instance the height of human beings. If we try to guess the height of someone that we do not know, there is a higher probability that this person will be around 1m70, instead of 20cm or 3m. For the rest of this course, we will use the shorthand notation $p(\mathbf{x})$ for the distribution $p(\mathbf{x}=x_{i})$, which expresses for a real-valued random variable $\mathbf{x}$, evaluated at $x_{i}$, the probability that $\mathbf{x}$ takes the value $x_i$.
One notorious example of such distributions is the Gaussian (or Normal) distribution, which is defined as
\begin{equation}
p(x)=\mathcal{N}(\mu,\sigma)=\frac{1}{\sqrt{2\pi\sigma^{2}}}e^{-\frac{(x-\mu)^{2}}{2\sigma^{2}}}
\end{equation}
Similarly as before, we can observe the behavior of this distribution with the following code (in our height example)
```python
normal = distribution.Normal(150., 30.)
samples = normal.sample((10000, ))
sns.distplot(samples)
plt.title("Samples from a standard Normal")
plt.show()
```
If we have access to this complete probability distribution (its exact parameterization and function), we can generate samples (in this case "new humans") that follow the correct distribution. You can experiment with this in the following code space.
```python
######################
# YOUR CODE GOES HERE
######################
```
An example of continuous random variable is a random variable with *exponential density*
$$
\begin{equation}
f_Z(z | \lambda) = \lambda e^{-\lambda z }, \;\; z\ge 0
\end{equation}
$$
When a random variable $Z$ has an exponential distribution with parameter $\lambda$, we say *$Z$ is exponential*
$$Z \sim \text{Exp}(\lambda)$$
Given a specific $\lambda$, the expected value of an exponential random variable is equal to the inverse of $\lambda$, that is
$$E[\; Z \;|\; \lambda \;] = \frac{1}{\lambda}$$
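A quick numerical check of this property (a sketch using the `torch.distributions` alias imported above; the rate value is arbitrary):
```python
# Sketch: the sample mean of an exponential variable approaches 1/lambda
lam = 2.0
exp_dist = distribution.Exponential(lam)
samples = exp_dist.sample((100000,))
print(samples.mean().item(), 'vs expected', 1.0/lam)
```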
<a id="distribs"></a>
### PyTorch distributions
Here, we rely on the [PyTorch distributions module](https://pytorch.org/docs/stable/_modules/torch/distributions/), which is defined in `torch.distributions`. Most notably, we are going to rely both on the `Distribution` and `Transform` objects.
```python
# Imports for plotting
import numpy as np
import matplotlib.pyplot as plt
# Define grids of points (for later plots)
x = np.linspace(-4, 4, 1000)
z = np.array(np.meshgrid(x, x)).transpose(1, 2, 0)
z = np.reshape(z, [z.shape[0] * z.shape[1], -1])
```
Inside this toolbox, we can already find some of the major probability distributions that we are used to deal with
```python
import torch
import torch.distributions as distrib

p = distrib.Normal(loc=0, scale=1)
p = distrib.Bernoulli(probs=torch.tensor([0.5]))
p = distrib.Beta(concentration1=torch.tensor([0.5]), concentration0=torch.tensor([0.5]))
p = distrib.Gamma(concentration=torch.tensor([1.0]), rate=torch.tensor([1.0]))
p = distrib.Pareto(alpha=torch.tensor([1.0]), scale=torch.tensor([1.0]))
```
The interesting aspect of these `Distribution` objects is that we can both obtain some samples from it through the `sample` (or `sample_n`) function, but we can also obtain the analytical density at any given point through the `log_prob` function
```python
# Based on a normal
n = distrib.Normal(0, 1)
# Obtain some samples
samples = n.sample((1000, ))
# Evaluate true density at given points
density = torch.exp(n.log_prob(torch.Tensor(x))).numpy()
# Plot both samples and density
fig, (ax1, ax2) = plt.subplots(1, 2, sharex=True, figsize=(15, 4))
ax1.hist(samples, 50, alpha=0.8);
ax1.set_title('Empirical samples', fontsize=18);
ax2.plot(x, density); ax2.fill_between(x, density, 0, alpha=0.5)
ax2.set_title('True density', fontsize=18);
```
Here you can experiment with different distributions, and try to compare how they behave depending on their parameters and also on how much _samples_ you draw from these.
```python
######################
# YOUR CODE GOES HERE
######################
```
<a id="sampling"></a>
## Sampling from distributions
The advantage of using probability distributions is that we can *sample* from these to obtain examples that follow the distribution. For instance, if we perform sampling repeatedly (up to infinity) from a Gaussian PDF, the different values will be distributed following the exact Gaussian distribution. However, although we know the PDF, we need to compute the *Cumulative Distribution Function* (CDF), and then its inverse to obtain the sampling function. Therefore, if we denote the PDF as $f_{X}(x)$, we need to compute the CDF
$$
\begin{equation}
F_{X}\left(x\right)=\int_{-\infty}^{x}f_{X}\left(t\right)dt
\end{equation}
$$
Then, the *inverse sampling method* consists in solving and applying the inverse CDF $F_{X}^{-1}\left(x\right)$. Here, we recall that the Exponential probability is defined with the following function.
$$
p_{\lambda}(x) = \lambda e^{-\lambda x}
$$
with $\lambda$ defining the _rate_ parameter. Therefore, to be able to define our own `sample` method, we need to compute the CDF
$$
F_Y(x) = \int_0^x \lambda e^{-\lambda y}\, dy
$$
and then apply its inverse $F^{-1}_Y$ to uniformly distributed samples.
***
**Exercise**
1. Code the exponential probability density functions
2. Perform the inverse transform method on the exponential distribution PDF
3. Code the `sample_exponential` function to sample from the exponential
4. (optional) Perform the same operations for sampling from the Beta distribution
***
```python
from scipy.stats import expon
nb_samples = 500
nb_bins = 50
def sample_exponential(mu, n, m):
######################
# YOUR CODE GOES HERE
######################
######################
# Solution
samples = np.random.uniform(0, 1, size=(m,n))
samples = - np.log(1 - samples)
samples *= mu
######################
return samples
# Exponential distribution
mu = 2
samples = np.random.exponential(mu, nb_samples)
samples_ex = sample_exponential(mu, nb_samples, 1)
# Compute the PDF
X = np.linspace(0, np.max(samples), int(np.max(samples)) * 100)
y1 = expon.pdf(X) #* (nb_samples / nb_bins) * int(np.max(samples) * 1.5)
# Display both
plt.figure(figsize=(12, 8))
plt.subplot(2,1,1)
plt.hist(samples, 50, label='Samples')
plt.plot(X,y1,ls='--',c='r',linewidth=2, label='Exponential PDF')
plt.legend(loc=1)
plt.subplot(2,1,2)
plt.hist(samples_ex, 50, label='Samples')
plt.plot(X,y1,ls='--',c='r',linewidth=2, label='Our exponential samples')
plt.legend(loc=1)
```
```python
def sampleBeta(a, b, M, N):
######################
# YOUR CODE GOES HERE
######################
######################
# Solution
samples = np.random.beta(a, b, size=(M, N))
######################
return samples
from scipy.stats import beta
# Beta distribution
a = 0.6
b = 1.5
samples = np.random.beta(a, b, nb_samples)
samplesBeta = sampleBeta(a, b, nb_samples, 1)
# Compute the PDF
X = np.linspace(0, 1, 100)
y1 = beta.pdf(X, a, b) * (nb_samples / nb_bins) * (np.max(samples) * 1.5)
# Display both
plt.figure(figsize=(12, 8))
plt.subplot(2,1,1)
plt.hist(samples, 50, label='Samples')
plt.plot(X,y1,ls='--',c='r',linewidth=2, label='Beta PDF')
plt.legend(loc=1)
plt.subplot(2,1,2)
plt.hist(samplesBeta, 50, label='Samples')
plt.plot(X,y1,ls='--',c='r',linewidth=2, label='Beta PDF')
plt.legend(loc=1)
```
### Comparing distributions (KL divergence)
$
\newcommand{\R}{\mathbb{R}}
\newcommand{\bb}[1]{\mathbf{#1}}
\newcommand{\bx}{\bb{x}}
\newcommand{\by}{\bb{y}}
\newcommand{\bz}{\bb{z}}
\newcommand{\KL}[2]{\mathcal{D}_{\text{KL}}\left[#1 \| #2\right]}$
Originally defined in the field of information theory, the _Kullback-Leibler (KL) divergence_ (usually denoted $\KL{p(\bx)}{q(\bx)}$) is a dissimilarity measure between two probability distributions $p(\bx)$ and $q(\bx)$. In the view of information theory, it can be understood as the cost in number of bits necessary for coding samples from $p(\bx)$ by using a code optimized for $q(\bx)$ rather than the code optimized for $p(\bx)$. In the view of probability theory, it represents the amount of information lost when we use $q(\bx)$ to approximate the true distribution $p(\bx)$.
Given two probability distributions $p(\bx)$ and $q(\bx)$, the Kullback-Leibler divergence of $q(\bx)$ _from_ $p(\bx)$ is defined to be
\begin{equation}
\KL{p(\bx)}{q(\bx)}=\int_{\R} p(\bx) \log \frac{p(\bx)}{q(\bx)}d\bx
\end{equation}
Note that this dissimilarity measure is _asymmetric_, therefore, we have
\begin{equation}
\KL{p(\bx)}{q(\bx)}\neq \KL{q(\bx)}{p(\bx)}
\end{equation}
This asymmetry also describes an interesting behavior of the KL divergence, depending on the order to which it is evaluated. The KL divergence can either be a _mode-seeking_ or _mode-coverage_ measure (we will come back to these notions in the _approximate inference_ course)
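As a quick numerical illustration of this asymmetry, the sketch below compares both orderings for two Gaussians (the specific parameters are arbitrary):
```python
import torch
from torch.distributions import Normal, kl_divergence

p = Normal(0.0, 1.0)
q = Normal(1.0, 2.0)
print(kl_divergence(p, q))  # D_KL[p || q]
print(kl_divergence(q, p))  # D_KL[q || p] -- generally a different value
```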
```python
```
| 8cd934b6febdda2186cdda4118dea1246ccf472a | 22,251 | ipynb | Jupyter Notebook | 04b_distributions.ipynb | piptouque/atiam_ml | 9da637eae179237d30a15dd9ce3e95a2a956c385 | [
"MIT"
] | null | null | null | 04b_distributions.ipynb | piptouque/atiam_ml | 9da637eae179237d30a15dd9ce3e95a2a956c385 | [
"MIT"
] | null | null | null | 04b_distributions.ipynb | piptouque/atiam_ml | 9da637eae179237d30a15dd9ce3e95a2a956c385 | [
"MIT"
] | null | null | null | 39.312721 | 713 | 0.58694 | true | 4,281 | Qwen/Qwen-72B | 1. YES
2. YES | 0.893309 | 0.855851 | 0.76454 | __label__eng_Latn | 0.986619 | 0.614614 |
```python
from IPython.display import Image
from IPython.core.display import HTML
from sympy import *; x,h,y = symbols("x h y")
Image(url= "https://i.imgur.com/bNY57ZF.png")
```
```python
expr = 4/(x-2)
def F(x):
return expr
F(x)
#first step is to find dF(x)
#then we do slope point form with dF(x) as m so y = y0 + dF(x)(x-x0)
```
$\displaystyle \frac{4}{x - 2}$
```python
dF = diff(F(x))
dF
```
$\displaystyle - \frac{4}{\left(x - 2\right)^{2}}$
```python
print(dF.subs(x,4))
```
-1
```python
MofT = (dF.subs(x,4))
y = 2 + MofT*(x - 4)
y
```
$\displaystyle 6 - x$
```python
Image(url= "https://i.imgur.com/MJ6qRQd.png")
```
```python
```
| c415478cb7a1e1ff5835b65d17226060fe43e693 | 3,291 | ipynb | Jupyter Notebook | Calculus_Homework/WWB08.16.ipynb | NSC9/Sample_of_Work | 8f8160fbf0aa4fd514d4a5046668a194997aade6 | [
"MIT"
] | null | null | null | Calculus_Homework/WWB08.16.ipynb | NSC9/Sample_of_Work | 8f8160fbf0aa4fd514d4a5046668a194997aade6 | [
"MIT"
] | null | null | null | Calculus_Homework/WWB08.16.ipynb | NSC9/Sample_of_Work | 8f8160fbf0aa4fd514d4a5046668a194997aade6 | [
"MIT"
] | null | null | null | 18.385475 | 76 | 0.449711 | true | 256 | Qwen/Qwen-72B | 1. YES
2. YES | 0.891811 | 0.771843 | 0.688339 | __label__yue_Hant | 0.310087 | 0.437572 |
# Custom Gradient
This is a brief demonstration of tensorflow [custom gradients](https://www.tensorflow.org/api_docs/python/tf/custom_gradient)
## Chain rule
Let's say we have a function $f(x) = x^2$. If we now compose this function such that $y = f(f(f(x)))$, we want to find the gradient $\frac{dy}{dx}$.
We first decompose this into
\begin{align*}
y &= x_0^2\\
x_0 &= x_1^2\\
x_1 &= x^2
\end{align*}
On taking first order derivative, we get
\begin{align*}
\frac{dy}{dx_0} &= 2x_0\\
\frac{dx_0}{dx_1} &= 2x_1\\
\frac{dx_1}{dx} &= 2x
\end{align*}
Using chain rule
\begin{equation*}
\frac{dy}{dx} = \frac{dy}{dx_0} \frac{dx_0}{dx_1} \frac{dx_1}{dx}
\end{equation*}
To generalize
\begin{equation*}
\frac{dy}{dx} = \frac{dy}{dx_{0}} ... \frac{dx_i}{dx_{i+1}} ... \frac{dx_n}{dx}
\end{equation*}
In TensorFlow the `upstream` gradient is passed as an argument to the inner function `grad`.
\begin{equation*}
upstream = \frac{dy}{dx_{0}} ... \frac{dx_{i-1}}{dx_{i}} = \frac{dy}{dx_{i}}
\end{equation*}
Now we can multiply this upstream gradient by the gradient of the current function $\frac{dx_{i}}{dx_{i+1}}$ and pass the product downstream.
\begin{equation*}
downstream = \frac{dx_{i}}{dx_{i+1}} * upstream = \frac{dy}{dx_{i+1}}
\end{equation*}
```python
import tensorflow as tf
```
```python
# @tf.function
@tf.custom_gradient
def foo(x):
tf.debugging.assert_rank(x, 0)
def grad(dy_dx_upstream):
dy_dx = 2 * x
dy_dx_downstream = dy_dx * dy_dx_upstream
tf.print(f'x={x}\tupstream={dy_dx_upstream}\tcurrent={dy_dx}\t\tdownstream={dy_dx_downstream}')
return dy_dx_downstream
y = x ** 2
tf.print(f'x={x}\ty={y}')
return y, grad
x = tf.constant(2.0, dtype=tf.float32)
with tf.GradientTape(persistent=True) as tape:
tape.watch(x)
y = foo(foo(foo(x))) # y = x ** 8
tf.print(f'\nfinal dy/dx={tape.gradient(y, x)}')
```
x=2.0 y=4.0
x=4.0 y=16.0
x=16.0 y=256.0
x=16.0 upstream=1.0 current=32.0 downstream=32.0
x=4.0 upstream=32.0 current=8.0 downstream=256.0
x=2.0 upstream=256.0 current=4.0 downstream=1024.0
final dy/dx=1024.0
### Gradients with multiple variables
If the function takes multiple variables, then the gradient for each variable has to be returned as demonstrated in the example.
```python
@tf.custom_gradient
def bar(x, y):
tf.debugging.assert_rank(x, 0)
tf.debugging.assert_rank(y, 0)
def grad(upstream):
dz_dx = y
dz_dy = x
return upstream * dz_dx, upstream * dz_dy
z = x * y
return z, grad
x = tf.constant(2.0, dtype=tf.float32)
y = tf.constant(3.0, dtype=tf.float32)
with tf.GradientTape(persistent=True) as tape:
tape.watch(x)
tape.watch(y)
z = bar(x, y)
tf.print(z)
tf.print(tape.gradient(z, x))
tf.print(tape.gradient(z, y))
tf.print(tape.gradient(x, y))
```
6
3
2
None
## Application of custom gradients
### Toy example: Differentiable approximation of non-differentiable functions
We take the sign function as an example
\begin{equation}
sign(x)= \\
\begin{cases}
-1, & \text{if}\ x<0 \\
0, & \text{if}\ x=0 \\
1, & \text{if}\ x>0 \\
\end{cases}\end{equation}
By implementing a custom gradient, we can continue to have the $sign(x)$ function in forward pass but a differentiable approximation in the backward pass. In this case we approximate $sign(x)$ with the sigmoid function $ \sigma(x)$
\begin{equation}
\frac{dsign_{approx}(x)}{dx} = \sigma(x) (1 - \sigma(x)) \\
sign_{approx}(x) = \sigma(x) + C \\
\end{equation}
```python
# @tf.function
@tf.custom_gradient
def differentiable_sign(x):
tf.debugging.assert_rank(x, 0)
def grad(upstream):
dy_dx = tf.math.sigmoid(x) * (1 - tf.math.sigmoid(x))
return upstream * dy_dx
if x > tf.constant(0.0):
return tf.constant(1.0), grad
else:
return tf.constant(-1.0), grad
x = tf.constant(3.0, dtype=tf.float32)
with tf.GradientTape(persistent=True) as tape:
tape.watch(x)
y = differentiable_sign(x)
loss = tf.nn.l2_loss(y - tf.constant(-1.0))
tf.print(y)
tf.print(tape.gradient(y, x))
tf.print(loss)
tf.print(tape.gradient(loss, x))
```
1
0.0451766551
2
0.0903533101
```python
x = tf.Variable(-1.0)
opt = tf.keras.optimizers.Adam(1e-1)
# opt = tf.keras.optimizers.SGD(1)
def train_step():
with tf.GradientTape() as tape:
y = differentiable_sign(x)
loss = tf.nn.l2_loss(y - tf.constant(1.0))
grads = tape.gradient(loss, x)
opt.apply_gradients(zip([grads], [x]))
return loss, y, grads
for i in range(100):
loss, y, grads = train_step()
if i % 10 == 0:
tf.print(i, loss, grads, x, y)
```
0 2 -0.393223882 -0.89999783 -1
10 0 0 0.0995881185 1
20 0 0 0.6450876 1
30 0 0 0.855591536 1
40 0 0 0.938390434 1
50 0 0 0.970632374 1
60 0 0 0.983009696 1
70 0 0 0.987700462 1
80 0 0 0.989459515 1
90 0 0 0.990113616 1
```python
```
| 714b44be1b5c9e4648a47909c84381abb78f802d | 8,436 | ipynb | Jupyter Notebook | notebooks/custom-gradient.ipynb | Ghost---Shadow/gradient-tape-experiments | 1ded25074defe2d5936c97d2b8d570a3d3614a18 | [
"MIT"
] | 4 | 2020-10-24T19:07:45.000Z | 2021-12-23T20:23:43.000Z | notebooks/custom-gradient.ipynb | Ghost---Shadow/gradient-tape-experiments | 1ded25074defe2d5936c97d2b8d570a3d3614a18 | [
"MIT"
] | null | null | null | notebooks/custom-gradient.ipynb | Ghost---Shadow/gradient-tape-experiments | 1ded25074defe2d5936c97d2b8d570a3d3614a18 | [
"MIT"
] | 1 | 2021-04-06T10:51:35.000Z | 2021-04-06T10:51:35.000Z | 26.528302 | 241 | 0.491465 | true | 1,724 | Qwen/Qwen-72B | 1. YES
2. YES | 0.926304 | 0.870597 | 0.806437 | __label__eng_Latn | 0.542879 | 0.711957 |
# Extension of DV Formula
$\def\abs#1{\left\lvert #1 \right\rvert}
\def\Set#1{\left\{ #1 \right\}}
\def\mc#1{\mathcal{#1}}
\def\M#1{\boldsymbol{#1}}
\def\R#1{\mathsf{#1}}
\def\RM#1{\boldsymbol{\mathsf{#1}}}
\def\op#1{\operatorname{#1}}
\def\E{\op{E}}
\def\d{\mathrm{\mathstrut d}}$
## $f$-divergence
Consider the more general problem of estimating the $f$-divergence in {eq}`f-D`:
$$
D_f(P_{\R{Z}}\|P_{\R{Z}'}) = E\left[f\left(\frac{dP_{\R{Z}}(\R{Z}')}{dP_{\R{Z}'}(\R{Z}')}\right)\right].
$$
**Exercise**
How to estimate $f$-divergence using the DV formula?
YOUR ANSWER HERE
Instead of using the DV bound, it is desirable to train a network to optimize a tight bound on the $f$-divergence because:
- *Estimating KL divergence well does not imply the underlying neural network approximates the density ratio well*:
- While KL divergence is just a non-negative real number,
- the density ratio is in a high dimensional function space.
- DV formula does not directly maximizes a bound on $f$-divergence, i.e.
it does not directly minimize the error in estimating $f$-divergence.
- $f$-divergence may have bounds that are easier/more stable for training a neural network.
**How to extend the DV formula to $f$-divergence?**
The idea is to think of the $f$-divergence as a convex *function(al)* evaluated at the density ratio:
---
**Proposition**
:label: D->F
$f$-divergence {eq}`f-D` is
$$
\begin{align}
D_f(P_{\R{Z}}\|P_{\R{Z}'}) = F\left[ \frac{P_{\R{Z}}}{P_{\R{Z}'}}\right]
\end{align}
$$ (D->F)
where
$$
\begin{align}
F[r] := E [ \underbrace{(f \circ r)(\R{Z}')}_{f(r(\R{Z}'))}]
\end{align}
$$ (F)
for any function $r:\mc{Z} \to \mathbb{R}$.
---
This is more like a re-definition than a proposition as the proof is immediate:
{eq}`f-D` is obtained from {eq}`D->F` by substituting $r=\frac{dP_{\R{Z}}}{dP_{\R{Z}'}}$.
As mentioned before, the KL divergence $D(P_{\R{Z}}\|P_{\R{Z}'})$ is a special case of $f$-divergence:
$$
D(P_{\R{Z}}\|P_{\R{Z}'}) = F\left[r\right]
$$
where
$$
\begin{align*}
F[r] &:= E\left[ r(\R{Z}')\log r(\R{Z}')\right].
\end{align*}
$$ (KL:F)
**Exercise** When is $F[r]$ equal to $0$?
YOUR ANSWER HERE
**Exercise**
Show using the properties of $f$ that $F$ is strictly convex.
**Solution**
For $\lambda\in [0,1]$ and functions $r_1, r_2\in \Set{r:\mc{Z}\to \mathbb{R}}$,
$$
\begin{align*}
F[\lambda r_1 + (1-\lambda) r_2]
&= E[\underbrace{ f(\lambda r_1(\R{Z}') + (1-\lambda) r_2(\R{Z}'))}_{\stackrel{\text{(a)}}{\leq} \lambda f(r_1(\R{Z}'))+(1-\lambda) f(r_2(\R{Z}'))}]\\
&\stackrel{\text{(b)}}{\leq} \lambda E[f(r_1(\R{Z}'))] + (1-\lambda) E[f(r_2(\R{Z}'))]
\end{align*}
$$
where (a) is by the convexity of $f$, and (b) follows by the monotonicity and linearity of expectation. $F$ is strictly convex because (b) holds with equality iff (a) holds with equality almost surely, which, by the strict convexity of $f$, requires $r_1(\R{Z}')=r_2(\R{Z}')$ almost surely.
For a clearer understanding, consider a different choice of $F$ for the KL divergence:
$$
D(P_{\R{Z}}\|P_{\R{Z}'}) = F'\left[r\right]
$$
where
$$
\begin{align*}
F'[r] &:= E\left[ \log r(\R{Z})\right].
\end{align*}
$$ (rev-KL:F)
Note that $F'$ in {eq}`rev-KL:F` defined above is concave in $r$. In other words, {eq}`D2` in {prf:ref}`DV2`
$$
\begin{align*}
D(P_{\R{Z}}\|P_{\R{Z}'}) & = \sup_{\substack{r:\mc{Z}\to \mathbb{R}_+\\ E[r(\R{Z}')]=1}} E \left[ \log r(\R{Z}) \right]
\end{align*}
$$
is maximizing a concave function and therefore has a unique solution, namely, $r=\frac{dP_{\R{Z}}}{dP_{\R{Z}'}}$. Here comes the tricky question:
**Exercise**
Is KL divergence concave or convex in the density ratio $\frac{dP_{\R{Z}}}{dP_{\R{Z}'}}$? Note that $F$ defined in {eq}`KL:F` is convex in $r$.
YOUR ANSWER HERE
## Convex conjugation
Given $P_{\R{Z}'}\in \mc{P}(\mc{Z})$, consider
- a function space $\mc{R}$,
$$
\begin{align}
\mc{R} &\supseteq \Set{r:\mathcal{Z}\to \mathbb{R}_+\mid E\left[r(\R{Z}')\right] = 1},
\end{align}
$$ (R)
- a dual space $\mc{T}$, and
$$
\begin{align}
\mc{T} &\subseteq \Set{t:\mc{Z} \to \mathbb{R}}
\end{align}
$$ (T)
- the corresponding inner product $\langle\cdot,\cdot \rangle$:
$$
\begin{align}
\langle t,r \rangle &= \int_{z\in \mc{Z}} t(z) r(z) dP_{\R{Z}'}(z) = E\left[ t(\R{Z}') r(\R{Z}') \right].
\end{align}
$$ (inner-prod)
The following is a generalization of the DV formula for estimating the $f$-divergence {cite}`nguyen2010estimating`{cite}`ruderman2012tighter`:
---
**Proposition**
:label: convex-conjugate
$$
\begin{align}
D_{f}(P_{\R{Z}} \| P_{\R{Z}'}) = \sup _{t\in \mc{T}} E[t(\R{Z})] - F^*[t],
\end{align}
$$ (convex-conjugate2)
where
$$
\begin{align}
F^*[t] = \sup_{r\in \mc{R}} E[t(\R{Z}') r(\R{Z}')] - F[r].
\end{align}
$$ (convex-conjugate1)
---
---
**Proof**
Note that the supremums in {eq}`convex-conjugate1` and {eq}`convex-conjugate2` are [Fenchel-Legendre transforms][FL]. Denoting the transform as $[\cdot]^*$,
$$\underbrace{[[F]^*]^*}_{=F}\left[\frac{dP_{\R{Z}}}{dP_{\R{Z}'}}\right]$$
gives {eq}`convex-conjugate2` by expanding the outer/later transform. The equality is by the property that Fenchel-Legendre transform is its own inverse for strictly convex functional $F$. This completes the proof by {eq}`D->F`.
[FL]: https://en.wikipedia.org/wiki/Convex_conjugate
---
The proof is illustrated in the following figure:
Let's breakdown the details:
**Step 1**
For the purpose of the illustration, visualize the convex functional $F$ simply as a curve in 2D.
The $f$-divergence is then the $y$-coordinate of a point on the curve indicated above, with $r$ being the density ratio $\frac{dP_{\R{Z}}}{dP_{\R{Z}'}}$.
**Step 2**
To obtain a lower bound on $F$, consider any tangent of the curve with an arbitrary slope $t\cdot dP_{\R{Z}'}$
The lower bound is given by the $y$-coordinate of a point on the tangent with $r$ being the density ratio.
**Exercise**
Why is the $y$-coordinate of the tangent a lower bound on the $f$-divergence?
**Solution**
By the convexity of $F$, the tangent must be below $F$.
**Step 3**
To calculate the lower bound, denote the $y$-intercept as $-F^*[t]$:
Thinking of a function as nothing but a vector, the displacement from the $y$-intercept to the lower bound is given by the inner product of the slope and the density ratio.
**Step 4**
To make the bound tight, maximize the bound over the choice of the slope or $t$:
This gives the bound in {eq}`convex-conjugate2`. It remains to show {eq}`convex-conjugate1`.
**Step 5**
To compute the $y$-intercept or $F^*[t]$, let $r^*$ be the value of $r$ where the tangent touches the convex curve:
The displacement from the point at $r^*$ to the $y$-intercept can be computed as the inner product of the slope and $r^*$.
**Exercise**
Show that for the functional $F$ {eq}`KL:F` defined for KL divergence,
$$F^*[t]=\log E[e^{t(\R{Z}')}]$$
with $\mc{R}=\Set{r:\mc{Z}\to \mathbb{R}_+\mid E[r(\R{Z}')]=1}$ and so {eq}`convex-conjugate2` gives the DV formula {eq}`DV` as a special case.
YOUR ANSWER HERE
| e3a873e943c12d2450ba903fe091342920d33afb | 133,110 | ipynb | Jupyter Notebook | part1/f-Divergence.ipynb | ccha23/cscit21 | 87de8c48c640406d9a9d282fc10a238122814f53 | [
"MIT"
] | null | null | null | part1/f-Divergence.ipynb | ccha23/cscit21 | 87de8c48c640406d9a9d282fc10a238122814f53 | [
"MIT"
] | null | null | null | part1/f-Divergence.ipynb | ccha23/cscit21 | 87de8c48c640406d9a9d282fc10a238122814f53 | [
"MIT"
] | null | null | null | 179.393531 | 69,604 | 0.901149 | true | 2,490 | Qwen/Qwen-72B | 1. YES
2. YES | 0.782662 | 0.849971 | 0.665241 | __label__eng_Latn | 0.954804 | 0.383908 |
### SETUP
Paste the code from [coupled_euler.py](../coupled_euler.py) into a Python file in the working directory, and create another new .py file for the script below.
Now, use this command to import all the functions and class definitions in coupled_euler.py
```python
from coupled_euler import *
```
Also make the following imports,
```python
import numpy as np
import matplotlib.pyplot as plt
```
### THE PROBLEM
#### CIRCUIT DIAGRAM
#### MATH
Applying KVL in two inner loops,
$$
\begin{equation}
L\frac{dI_a}{dt} = C^{-1}Q_1 - C'^{-1}Q_2 - R I_a
\end{equation}
$$
$$
\begin{equation}
L\frac{dI_b}{dt} = C'^{-1}Q_2 - C^{-1}Q_3 - R I_b
\end{equation}
$$
Now, applying KCL at node b,
$$
\begin{equation}
I_a = I_b + I_c
\end{equation}
$$
In our case we set the internal resistance R of the inductors to zero. The charges then evolve as
$$
\begin{equation*}
\frac{dQ_1}{dt} = -I_a, \frac{dQ_2}{dt} = -I_a - I_b, \frac{dQ_3}{dt} = I_b
\end{equation*}
$$
#### CODE
Now, to play with the initial value problem, study and use the following code, which can also be found [here](../coupled_oscillators.py).
```python
#l stands for inductance L , c1, c2 corresponds to C and C'
l,c1,c2 = 0.1,1,1.3
#resistance
r=0
ivp = set_problem(
f=[lambda t,x,y,q1,q2,q3 : (q1/c1 - q2/c2 -r*x)/l, #Function associated with (1)
lambda t,x,y,q1,q2,q3 :(q2/c2 - q3/c1 -r*y)/l, #Function associated with (2)
lambda t,x,y,q1,q2,q3 : -x , #Function associated with dq1/dt
lambda t,x,y,q1,q2,q3 : -x-y, #Function associated with dq2/dt
lambda t,x,y,q1,q2,q3 : y], #Function associated with dq3/dt
dom=(0,15), # Time Domain
ini=(0,10000,10000,1,0,-1), #initial conditions in ordered tuple(t,I_a,I_b,q1,q2,q3)
N=int(20000),# No. of nodes/ control step size
vars = ("t","$I_a$","$I_b$","q1","q2","q3") # var names for labels
)
d=ivp.rk4() # rk4 called to solve the ivp problem
fig,ax = plt.subplots(1,1)
ivp.jt_plot(ax,1) # plots I_a vs t on ax
ivp.jt_plot(ax,2) # plots I_b vs t on ax
#ivp.jt_plot(ax,3) # plots q1 vs t
#ivp.jt_plot(ax,5) # plots q3 vs t
#ivp.kj_plot(ax,3,5) # plots q3 vs q1
#ivp.kj_plot(ax,1,2) # plots I_b vs I_a on ax
plt.legend()
plt.show()
```
The result that I obtain for the above initial value problem is clearly not what one expects.
| 38f152cd41d70e26d632540e0cd1ee47af39eec9 | 4,153 | ipynb | Jupyter Notebook | coupled_DE/.ipynb_checkpoints/LC_oscillations-checkpoint.ipynb | plancky/mathematical_physics_II | c912dca1a58c218ddb06dc6cbca021b03a703540 | [
"CC0-1.0"
] | null | null | null | coupled_DE/.ipynb_checkpoints/LC_oscillations-checkpoint.ipynb | plancky/mathematical_physics_II | c912dca1a58c218ddb06dc6cbca021b03a703540 | [
"CC0-1.0"
] | null | null | null | coupled_DE/.ipynb_checkpoints/LC_oscillations-checkpoint.ipynb | plancky/mathematical_physics_II | c912dca1a58c218ddb06dc6cbca021b03a703540 | [
"CC0-1.0"
] | null | null | null | 27.143791 | 144 | 0.521551 | true | 828 | Qwen/Qwen-72B | 1. YES
2. YES | 0.868827 | 0.763484 | 0.663335 | __label__eng_Latn | 0.850675 | 0.379481 |
# Stochastic Processes: <br>Data Analysis and Computer Simulation
<br>
# Brownian motion 2: computer simulation
<br>
# 3. Simulations with on-the-fly animation
<br>
# 3.1. Simulation code with on-the-fly animation
## Import libraries
```python
% matplotlib nbagg
import numpy as np # import numpy library as np
import matplotlib.pyplot as plt # import pyplot library as plt
import matplotlib.mlab as mlab # import mlab module to use MATLAB commands with the same names
import matplotlib.animation as animation # import animation modules from matplotlib
from mpl_toolkits.mplot3d import Axes3D # import Axes3D from mpl_toolkits.mplot3d
plt.style.use('ggplot') # use "ggplot" style for graphs
```
## Define `init` function for `FuncAnimation`
```python
def init():
global R,V,W,Rs,Vs,Ws,time
R[:,:] = 0.0 # initialize all the variables to zero
V[:,:] = 0.0 # initialize all the variables to zero
W[:,:] = 0.0 # initialize all the variables to zero
Rs[:,:,:] = 0.0 # initialize all the variables to zero
Vs[:,:,:] = 0.0 # initialize all the variables to zero
Ws[:,:,:] = 0.0 # initialize all the variables to zero
time[:] = 0.0 # initialize all the variables to zero
title.set_text(r'') # empty title
line.set_data([],[]) # set line data to show the trajectory of particle n in 2d (x,y)
line.set_3d_properties([]) # add z-data separately for 3d plot
particles.set_data([],[]) # set position current (x,y) position data for all particles
particles.set_3d_properties([]) # add current z data of particles to get 3d plot
return particles,title,line # return listed objects that will be drawn by FuncAnimation
```
## Define `animate` function for `FuncAnimation`
```python
def animate(i):
global R,V,W,Rs,Vs,Ws,time # define global variables
time[i]=i*dt # store time in each step in an array time
W = std*np.random.randn(nump,dim) # generate an array of random forces accordingly to Eqs.(F10) and (F11)
R, V = R + V*dt, V*(1-zeta/m*dt)+W/m # update R & V via Eqs.(F5)&(F9)
Rs[i,:,:]=R # accumulate particle positions at each step in an array Rs
Vs[i,:,:]=V # accumulate particle velocitys at each step in an array Vs
Ws[i,:,:]=W # accumulate random forces at each step in an array Ws
title.set_text(r"t = "+str(time[i])) # set the title to display the current time
line.set_data(Rs[:i+1,n,0],Rs[:i+1,n,1]) # set the line in 2D (x,y)
line.set_3d_properties(Rs[:i+1,n,2]) # add z axis to set the line in 3D
particles.set_data(R[:,0],R[:,1]) # set the current position of all the particles in 2d (x,y)
particles.set_3d_properties(R[:,2]) # add z axis to set the particle in 3D
return particles,title,line # return listed objects that will be drawn by FuncAnimation
```
## Set parameters and initialize variables
```python
dim = 3 # system dimension (x,y,z)
nump = 1000 # number of independent Brownian particles to simulate
nums = 1024 # number of simulation steps
dt = 0.05 # set time increment, \Delta t
zeta = 1.0 # set friction constant, \zeta
m = 1.0 # set particle mass, m
kBT = 1.0 # set temperatute, k_B T
std = np.sqrt(2*kBT*zeta*dt) # calculate std for \Delta W via Eq.(F11)
np.random.seed(0) # initialize random number generator with a seed=0
R = np.zeros([nump,dim]) # array to store current positions and set initial condition Eq.(F12)
V = np.zeros([nump,dim]) # array to store current velocities and set initial condition Eq.(F12)
W = np.zeros([nump,dim]) # array to store current random forcces
Rs = np.zeros([nums,nump,dim]) # array to store positions at all steps
Vs = np.zeros([nums,nump,dim]) # array to store velocities at all steps
Ws = np.zeros([nums,nump,dim]) # array to store random forces at all steps
time = np.zeros([nums]) # an array to store time at all steps
```
## Perform and animate the simulation using `FuncAnimation`
```python
fig = plt.figure(figsize=(10,10)) # set fig with its size 10 x 10 inch
ax = fig.add_subplot(111,projection='3d') # creates an additional axis to the standard 2D axes
box = 40 # set draw area as box^3
ax.set_xlim(-box/2,box/2) # set x-range
ax.set_ylim(-box/2,box/2) # set y-range
ax.set_zlim(-box/2,box/2) # set z-range
ax.set_xlabel(r"x",fontsize=20) # set x-lavel
ax.set_ylabel(r"y",fontsize=20) # set y-lavel
ax.set_zlabel(r"z",fontsize=20) # set z-lavel
ax.view_init(elev=12,azim=120) # set view point
particles, = ax.plot([],[],[],'ro',ms=8,alpha=0.5) # define object particles
title = ax.text(-180.,0.,250.,r'',transform=ax.transAxes,va='center') # define object title
line, = ax.plot([],[],[],'b',lw=1,alpha=0.8) # define object line
n = 0 # trajectry line is plotted for the n-th particle
anim = animation.FuncAnimation(fig,func=animate,init_func=init,
frames=nums,interval=5,blit=True,repeat=False)
## If you have ffmpeg installed on your machine
## you can save the animation by uncomment the last line
## You may install ffmpeg by typing the following command in command prompt
## conda install -c menpo ffmpeg
##
# anim.save('movie.mp4',fps=50,dpi=100)
```
<IPython.core.display.Javascript object>
## Summary of simulation methods
### Original differential equation
\begin{equation}
\frac{d\mathbf{R}(t)}{dt}=\mathbf{V}(t)\tag{F1}
\end{equation}
\begin{equation}
m\frac{d\mathbf{V}(t)}{dt}=\color{black}{-\zeta\mathbf{V}(t)}+\color{black}{\mathbf{F}(t)}
\tag{F2}
\end{equation}
$\hspace{80mm}$with
\begin{equation}
\langle \mathbf{F}(t)\rangle=\mathbf{0}
\tag{F3}
\end{equation}
\begin{equation}
\langle \mathbf{F}(t)\mathbf{F}(0)\rangle = {2k_B T\zeta}\mathbf{I}\delta(t)
\tag{F4}
\end{equation}
### Euler method
$$
\mathbf{V}_{i+1}
=\left(1-\frac{\zeta}{m}\Delta t\right)\mathbf{V}_i + \frac{1}{m} {\Delta \mathbf{W}_i}
\tag{F9}
$$
$$
\mathbf{R}_{i+1}=\mathbf{R}_i+\mathbf{V}_i \Delta t \hspace{15mm}\tag{B3}
$$
$\hspace{80mm}$with
\begin{equation}
\langle \Delta \mathbf{W}_i\rangle=\mathbf{0}
\tag{F10}
\end{equation}
\begin{equation}
\langle \Delta \mathbf{W}_i\Delta \mathbf{W}_j\rangle = {2k_B T\zeta}\Delta t\mathbf{I}\delta_{ij}
\tag{F11}
\end{equation}
### 2nd order Runge-Kutta method
$$
\mathbf{V}'_{i+\frac{1}{2}}
=\mathbf{V}_i-\frac{\zeta}{m}\frac{\Delta t}{2}\mathbf{V}_{i}
=\left(1-\frac{\zeta}{m}\frac{\Delta t}{2}\right)\mathbf{V}_{i}
\tag{F12}
$$
$$
\mathbf{V}_{i+1}
=\mathbf{V}_i-\frac{\zeta}{m}\Delta t\mathbf{V}'_{i+\frac{1}{2}} + \frac{1}{m} {\Delta \mathbf{W}_i}
\tag{F13}
$$
$$
\mathbf{R}_{i+1}=\mathbf{R}_i+\mathbf{V}'_{i+\frac{1}{2}} \Delta t \hspace{15mm}
\tag{F14}
$$
### 4th order Runge-Kutta method
$$
\mathbf{V}'_{i+\frac{1}{2}}
=\mathbf{V}_i-\frac{\zeta}{m}\frac{\Delta t}{2}\mathbf{V}_{i}
\tag{F15}
$$
$$
\mathbf{V}''_{i+\frac{1}{2}}
=\mathbf{V}_i-\frac{\zeta}{m}\frac{\Delta t}{2}\mathbf{V}'_{i+\frac{1}{2}}
\tag{F16}
$$
$$
\mathbf{V}'''_{i+1}
=\mathbf{V}_i-\frac{\zeta}{m}{\Delta t}\mathbf{V}''_{i+\frac{1}{2}}
\tag{F17}
$$
$$
\mathbf{V}_{i+1}
=\mathbf{V}_i-\frac{\zeta}{m}\frac{\Delta t}{6}\left(\mathbf{V}_i+2\mathbf{V}'_{i+\frac{1}{2}}+2\mathbf{V}''_{i+\frac{1}{2}}+\mathbf{V}'''_{i+1}\right) + \frac{1}{m} {\Delta \mathbf{W}_i}
\tag{F18}
$$
$$
\mathbf{R}_{i+1}=\mathbf{R}_i+
\frac{\Delta t}{6}\left(\mathbf{V}_i+2\mathbf{V}'_{i+\frac{1}{2}}+2\mathbf{V}''_{i+\frac{1}{2}}+\mathbf{V}'''_{i+1}\right)
\hspace{15mm}
\tag{F19}
$$
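For reference, a minimal sketch of one step of this 4th-order update, reusing the variable names (`R`, `V`, `W`, `zeta`, `m`, `dt`) from the simulation above and the standard $1,2,2,1$ Runge-Kutta weights:
```python
def rk4_step(R, V, W, zeta, m, dt):
    # stage velocities, Eqs.(F15)-(F17)
    V1 = V - zeta/m*dt/2*V    # V'_{i+1/2}
    V2 = V - zeta/m*dt/2*V1   # V''_{i+1/2}
    V3 = V - zeta/m*dt*V2     # V'''_{i+1}
    # combined update, Eqs.(F18)-(F19)
    V_new = V - zeta/m*dt/6*(V + 2*V1 + 2*V2 + V3) + W/m
    R_new = R + dt/6*(V + 2*V1 + 2*V2 + V3)
    return R_new, V_new
```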
| 1c5c3ebc46d1f53a3c42e0ef78db157173c0a84c | 1,003,781 | ipynb | Jupyter Notebook | edx-stochastic-data-analysis/downloaded_files/04/009x_43.ipynb | mirandagil/extra-courses | 51858f5089b10b070de43ea3809697760aa261ec | [
"MIT"
] | null | null | null | edx-stochastic-data-analysis/downloaded_files/04/009x_43.ipynb | mirandagil/extra-courses | 51858f5089b10b070de43ea3809697760aa261ec | [
"MIT"
] | null | null | null | edx-stochastic-data-analysis/downloaded_files/04/009x_43.ipynb | mirandagil/extra-courses | 51858f5089b10b070de43ea3809697760aa261ec | [
"MIT"
] | null | null | null | 856.46843 | 957,092 | 0.942863 | true | 2,525 | Qwen/Qwen-72B | 1. YES
2. YES | 0.896251 | 0.721743 | 0.646863 | __label__eng_Latn | 0.65105 | 0.341211 |
###### Content under Creative Commons Attribution license CC-BY 4.0, code under MIT license (c)2014 L.A. Barba, G.F. Forsyth.
# Relax and hold steady
Ready for more relaxing? This is the third lesson of **Module 5** of the course, exploring solutions to elliptic PDEs.
In [Lesson 1](http://nbviewer.ipython.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/05_relax/05_01_2D.Laplace.Equation.ipynb) and [Lesson 2](http://nbviewer.ipython.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/05_relax/05_02_2D.Poisson.Equation.ipynb) of this module we used the Jacobi method (a relaxation scheme) to iteratively find solutions to Laplace and Poisson equations.
And it worked, so why are we still talking about it? Because the Jacobi method is slow, very slow to converge. It might not have seemed that way in the first two notebooks because we were using small grids, but we did need more than 3,000 iterations to reach the exit criterion while solving the Poisson equation on a $41\times 41$ grid.
You can confirm this below: using `nx,ny=` $128$ on the Laplace problem of Lesson 1, the Jacobi method requires nearly *20,000* iterations before we reach $10^{-8}$ for the L2-norm of the difference between two iterates. That's a *lot* of iterations!
Now, consider this application: an incompressible Navier-Stokes solver has to ensure that the velocity field is divergence-free at every timestep. One of the most common ways to ensure this is to solve a Poisson equation for the pressure field. In fact, the pressure Poisson equation is responsible for the majority of the computational expense of an incompressible Navier-Stokes solver. Imagine having to do 20,000 Jacobi iterations for *every* time step in a fluid-flow problem with many thousands or perhaps millions of grid points!
The Jacobi method is the slowest of all relaxation schemes, so let's learn how to improve on it. In this lesson, we'll study the Gauss-Seidel method—twice as fast as Jacobi, in theory—and the successive over-relaxation (SOR) method. We also have some neat Python tricks lined up for you to get to the solution even faster. Let's go!
### Test problem
Let's use the same example problem as in [Lesson 1](./05_01_2D.Laplace.Equation.ipynb): Laplace's equation with boundary conditions
\begin{equation}
\begin{gathered}
p=0 \text{ at } x=0\\
\frac{\partial p}{\partial x} = 0 \text{ at } x = L\\
p = 0 \text{ at }y = 0 \\
p = \sin \left( \frac{\frac{3}{2}\pi x}{L} \right) \text{ at } y = H
\end{gathered}
\end{equation}
We import our favorite Python libraries, and also some custom functions that we wrote in [Lesson 1](./05_01_2D.Laplace.Equation.ipynb), which we have saved in a 'helper' Python file for re-use.
```python
import numpy
from matplotlib import pyplot, cm
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline
from matplotlib import rcParams
rcParams['font.family'] = 'serif'
rcParams['font.size'] = 16
```
```python
from laplace_helper import p_analytical, plot_3D, L2_rel_error
```
We now have the analytical solution in the array `p_analytical`, and we have the functions `plot_3D` and `L2_rel_error` in our namespace. If you can't remember how they work, just use `help()` and take advantage of the docstrings. It's a good habit to always write docstrings in your functions, and now you see why!
In this notebook, we are going to use larger grids than before, to better illustrate the speed increases we achieve with different iterative methods. Let's create a $128\times128$ grid and initialize.
```python
nx = 128
ny = 128
L = 5
H = 5
x = numpy.linspace(0,L,nx)
y = numpy.linspace(0,H,ny)
dx = L/(nx-1)
dy = H/(ny-1)
p0 = numpy.zeros((ny, nx))
p0[-1,:] = numpy.sin(1.5*numpy.pi*x/x[-1])
```
We said above that the Jacobi method takes nearly 20,000 iterations before it satisfies our exit criterion of $10^{-8}$ (L2-norm difference between two consecutive iterations). You'll just have to confirm that now. Have a seat!
```python
def laplace2d(p, l2_target):
'''Solves the Laplace equation using the Jacobi method
with a 5-point stencil
Parameters:
----------
p: 2D array of float
Initial potential distribution
l2_target: float
Stopping criterion
Returns:
-------
p: 2D array of float
Potential distribution after relaxation
'''
l2norm = 1
pn = numpy.empty_like(p)
iterations = 0
while l2norm > l2_target:
pn = p.copy()
p[1:-1,1:-1] = .25 * (pn[1:-1,2:] + pn[1:-1,:-2] +\
pn[2:,1:-1] + pn[:-2,1:-1])
##Neumann B.C. along x = L
p[1:-1,-1] = .25 * (2*pn[1:-1,-2] + pn[2:,-1] + pn[:-2, -1])
l2norm = numpy.sqrt(numpy.sum((p - pn)**2)/numpy.sum(pn**2))
iterations += 1
return p, iterations
```
```python
l2_target = 1e-8
p, iterations = laplace2d(p0.copy(), l2_target)
print ("Jacobi method took {} iterations at tolerance {}".\
format(iterations, l2_target))
```
Jacobi method took 19993 iterations at tolerance 1e-08
Would we lie to you? 19,993 iterations before we reach the exit criterion of $10^{-8}$. Yikes!
We can also time how long the Jacobi method takes using the `%%timeit` cell-magic. Go make some tea, because this can take a while—the `%%timeit` magic runs the function a few times and then averages their runtimes to give a more accurate result.
- - -
##### Notes
1. When using `%%timeit`, the return values of a function (`p` and `iterations` in this case) *won't* be saved.
2. We document our timings below, but your timings can vary quite a lot, depending on your hardware. In fact, you may not even see the same trends (some recent hardware can play some fancy tricks with optimizations that you have no control over).
- - -
With those caveats, let's give it a shot:
```python
%%timeit
laplace2d(p0.copy(), l2_target)
```
1 loops, best of 3: 5.92 s per loop
The printed result above (and others to come later) is from a mid-2007 Mac Pro, powered by two 3-GHz quad-core Intel Xeon X5364 (Clovertown). We tried also on more modern machines, and got conflicting results—like the Gauss-Seidel method being slightly slower than Jacobi, even though it required fewer iterations. Don't get too hung up on this: the hardware optimizations applied by more modern CPUs are varied and make a big difference sometimes.
Meanwhile, let's check the overall accuracy of the numerical calculation by comparing it to the analytical solution.
```python
pan = p_analytical(x,y)
```
```python
L2_rel_error(p,pan)
```
6.1735513352884566e-05
That's a pretty small error. Let's assume it is good enough and focus on speeding up the process.
## Gauss-Seidel
You will recall from [Lesson 1](./05_01_2D.Laplace_Equation.ipynb) that a single Jacobi iteration is written as:
\begin{equation}
p^{k+1}_{i,j} = \frac{1}{4} \left(p^{k}_{i,j-1} + p^k_{i,j+1} + p^{k}_{i-1,j} + p^k_{i+1,j} \right)
\end{equation}
The Gauss-Seidel method is a simple tweak to this idea: use updated values of the solution as soon as they are available, instead of waiting for the values in the whole grid to be updated.
If you imagine that we progress through the grid points in the order shown by the arrow in Figure 1, then you can see that the updated values $p^{k+1}_{i-1,j}$ and $p^{k+1}_{i,j-1}$ can be used to calculate $p^{k+1}_{i,j}$.
#### Figure 1. Assumed order of updates on a grid.
The iteration formula for Gauss-Seidel is thus:
\begin{equation}
p^{k+1}_{i,j} = \frac{1}{4} \left(p^{k+1}_{i,j-1} + p^k_{i,j+1} + p^{k+1}_{i-1,j} + p^k_{i+1,j} \right)
\end{equation}
There's now a problem for the Python implementation. You can no longer use NumPy's array operations to evaluate the solution updates. Since Gauss-Seidel requires using values immediately after they're updated, we have to abandon our beloved array operations and return to nested `for` loops. Ugh.
We don't like it, but if it saves us a bunch of time, then we can manage. But does it?
Here's a function to compute the Gauss-Seidel updates using a double loop.
```python
def laplace2d_gauss_seidel(p, nx, ny, l2_target):
iterations = 0
iter_diff = l2_target+1 #init iter_diff to be larger than l2_target
while iter_diff > l2_target:
pn = p.copy()
iter_diff = 0.0
for j in range(1,ny-1):
for i in range(1,nx-1):
p[j,i] = .25 * (p[j,i-1] + p[j,i+1] + p[j-1,i] + p[j+1,i])
iter_diff += (p[j,i] - pn[j,i])**2
#Neumann 2nd-order BC
for j in range(1,ny-1):
p[j,-1] = .25 * (2*p[j,-2] + p[j+1,-1] + p[j-1, -1])
iter_diff = numpy.sqrt(iter_diff/numpy.sum(pn**2))
iterations += 1
return p, iterations
```
We would then run this with the following function call:
```Python
p, iterations = laplace2d_gauss_seidel(p, nx, ny, 1e-8)
```
<br>
But **don't do it**. We did it so that you don't have to!
The solution of our test problem with the Gauss-Seidel method required several thousand fewer iterations than the Jacobi method, but it took nearly *10 minutes* to run on our machine.
##### What happened?
If you think back to the far off days when you first learned about array operations, you might recall that we discovered that NumPy array operations could drastically improve code performance compared with nested `for` loops. NumPy operations are written in C and pre-compiled, so they are *much* faster than vanilla Python.
But the Jacobi method is not algorithmically optimal, giving slow convergence. We want to take advantage of the faster-converging iterative methods, yet unpacking the array operations into nested loops destroys performance. *What can we do?*
## Use Numba!
[Numba](http://numba.pydata.org) is an open-source optimizing compiler for Python. It works by reading Python functions that you give it, and generating a compiled version for you—also called Just-In-Time (JIT) compilation. You can then use the function at performance levels that are close to what you can get with compiled languages (like C, C++ and fortran).
It can massively speed up performance, especially when dealing with loops. Plus, it's pretty easy to use. Like we overheard at a conference: [*Numba is a Big Deal.*](http://twitter.com/lorenaabarba/status/625383941453656065)
##### Caveat
We encourage everyone following the course to use the [Anaconda Python](https://www.continuum.io/downloads) distribution because it's well put-together and simple to use. If you *haven't* been using Anaconda, that's fine, but let us **strongly** suggest that you take the plunge now. Numba is great and easy to use, but it is **not** easy to install without help. Those of you using Anaconda can install it by running <br><br>
`conda install numba`<br><br>
If you *really* don't want to use Anaconda, you will have to [compile all of Numba's dependencies](https://pypi.python.org/pypi/numba).
- - -
### Intro to Numba
Let's dive in! Numba is great and easy to use. We're going to first walk you through a simple example to give you a taste of Numba's abilities.
After installing Numba (see above), we can use it by adding a line to `import numba` and another to import `jit` from it (more on this in a bit).
```python
import numba
from numba import jit
```
You tell Numba which functions you want to accelerate by using a [Python decorator](http://www.learnpython.org/en/Decorators), a special type of command that tells the Python interpreter to modify a callable object (like a function). For example, let's write a quick function to calculate the $n^{\text{th}}$ number in the Fibonacci sequence:
```python
def fib_it(n):
a = 1
b = 1
for i in range(n-2):
a, b = b, a+b
return b
```
There are several faster ways to program the Fibonacci sequence, but that's not a concern right now (but if you're curious, [check them out](http://mathworld.wolfram.com/BinetsFibonacciNumberFormula.html)). Let's use `%%timeit` and see how long this simple function takes to find the 500,000-th Fibonacci number.
```python
%%timeit
fib_it(500000)
```
1 loops, best of 3: 4.22 s per loop
Now let's try Numba! Just add the `@jit` decorator above the function name and let's see what happens!
```python
@jit
def fib_it(n):
a = 1
b = 1
for i in range(n-2):
a, b = b, a+b
return b
```
```python
%%timeit
fib_it(500000)
```
The slowest run took 138.50 times longer than the fastest. This could mean that an intermediate result is being cached
1000 loops, best of 3: 499 µs per loop
*Holy cow!* In our machine, that's more than 8,000 times faster!
That warning from `%%timeit` is due to the compilation overhead for Numba. The very first time that it executes the function, it has to compile it, then it caches that code for reuse without extra compiling. That's the 'Just-In-Time' bit. You'll see it disappear if we run `%%timeit` again.
```python
%%timeit
fib_it(500000)
```
1000 loops, best of 3: 499 µs per loop
We would agree if you think that this is a rather artificial example, but the speed-up is very impressive indeed. Just adding the one-word decorator!
##### Running in `nopython` mode
Numba is very clever, but it can't optimize everything. When it can't, rather than failing to run, it will fall back to the regular Python, resulting in poor performance again. This can be confusing and frustrating, since you might not know ahead of time which bits of code will speed up and which bits won't.
To avoid this particular annoyance, you can tell Numba to use `nopython` mode. In this case, your code will simply fail if the "jitted" function can't be optimized. It's simply an option to give you "fast or nothing."
Use `nopython` mode by adding the following line above the function that you want to JIT-compile:
```Python
@jit(nopython=True)
```
- - -
##### Numba version check
In these examples, we are using the latest (as of publication) version of Numba: 0.22.1. Make sure to upgrade or some of the code examples below may not run.
- - -
```python
print(numba.__version__)
```
0.22.1
## Back to Jacobi
We want to compare the performance of different iterative methods under the same conditions. Because the Gauss-Seidel method forces us to unpack the array operations into nested loops (which are very slow in Python), we use Numba to get the code to perform well. Thus, we need to write a new Jacobi method using for-loops and Numba (instead of NumPy), so we can make meaningful comparisons.
Let's write a "jitted" Jacobi with loops.
```python
@jit(nopython=True)
def laplace2d_jacobi(p, pn, l2_target):
'''Solves the Laplace equation using the Jacobi method
with a 5-point stencil
Parameters:
----------
p: 2D array of float
Initial potential distribution
pn: 2D array of float
Allocated array for previous potential distribution
l2_target: float
Stopping criterion
Returns:
-------
p: 2D array of float
Potential distribution after relaxation
'''
iterations = 0
iter_diff = l2_target+1 #init iter_diff to be larger than l2_target
denominator = 0.0
ny, nx = p.shape
l2_diff = numpy.zeros(20000)
while iter_diff > l2_target:
for j in range(ny):
for i in range(nx):
pn[j,i] = p[j,i]
iter_diff = 0.0
denominator = 0.0
for j in range(1,ny-1):
for i in range(1,nx-1):
p[j,i] = .25 * (pn[j,i-1] + pn[j,i+1] + pn[j-1,i] + pn[j+1,i])
#Neumann 2nd-order BC
for j in range(1,ny-1):
p[j,-1] = .25 * (2*pn[j,-2] + pn[j+1,-1] + pn[j-1, -1])
for j in range(ny):
for i in range(nx):
iter_diff += (p[j,i] - pn[j,i])**2
denominator += (pn[j,i]*pn[j,i])
iter_diff /= denominator
iter_diff = iter_diff**0.5
l2_diff[iterations] = iter_diff
iterations += 1
return p, iterations, l2_diff
```
```python
p, iterations, l2_diffJ = laplace2d_jacobi(p0.copy(), p0.copy(), 1e-8)
print("Numba Jacobi method took {} iterations at tolerance {}".format(iterations, l2_target))
```
Numba Jacobi method took 19993 iterations at tolerance 1e-08
```python
%%timeit
laplace2d_jacobi(p0.copy(), p0.copy(), 1e-8)
```
1 loops, best of 3: 2.41 s per loop
On our old machine, that's faster than the NumPy version of Jacobi, but on some newer machines it might not be. Don't obsess over this: there is much hardware black magic that we cannot control.
Remember that NumPy is a highly optimized library. The fact that we can get competitive execution times with this JIT-compiled code is kind of amazing. Plus(!) now we get to try out those techniques that aren't possible with NumPy array operations.
##### Note
We're also saving the history of the L2-norm of the difference between consecutive iterations. We'll take a look at that once we have a few more methods to compare.
- - -
##### Another Note
Why did we use
```Python
l2_diff = numpy.zeros(20000)
```
Where did the `20000` come from?
We cheated a little bit. Numba doesn't handle _mutable_ objects well in `nopython` mode, which means we can't use a *list* and append each iteration's value of the L2-norm. So we need to define an array big enough to hold all of them and we know from the first run that Jacobi converges in fewer than 20,000 iterations.
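As a small illustration (not part of the original note, with made-up values), the zero padding left in the preallocated array can be stripped off afterwards, which is what the plotting code further down does with `numpy.trim_zeros`:
```python
# A minimal sketch: trailing zeros are trimmed off before plotting.
padded = numpy.zeros(6)
padded[:3] = [1e-1, 1e-2, 1e-3]
print(numpy.trim_zeros(padded, 'b'))   # 'b' trims zeros from the back only
```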
- - -
##### Challenge task
It is possible to get a good estimate of the number of iterations needed by the Jacobi method to reduce the initial error by a factor $10^{-m}$, for given $m$. The formula depends on the largest eigenvalue of the coefficient matrix, which is known for the discrete Poisson problem on a square domain. See Parviz Moin, *"Fundamentals of Engineering Numerical Analysis"* (2nd ed., pp.141–143).
* Find the estimated number of iterations to reduce the initial error by $10^{-8}$ when using the grids listed below, in the section on grid convergence, with $11$, $21$, $41$ and $81$ grid points on each coordinate axis.
## Back to Gauss-Seidel
If you recall, the reason we got into this Numba sidetrack was to try out Gauss-Seidel and compare the performance with Jacobi. Recall from above that the formula for Gauss-Seidel is as follows:
\begin{equation}
p^{k+1}_{i,j} = \frac{1}{4} \left(p^{k+1}_{i,j-1} + p^k_{i,j+1} + p^{k+1}_{i-1,j} + p^k_{i+1,j} \right)
\end{equation}
We only need to slightly tweak the Jacobi function to get one for Gauss-Seidel. Instead of updating `p` in terms of `pn`, we just update `p` using `p`!
```python
@jit(nopython=True)
def laplace2d_gauss_seidel(p, pn, l2_target):
'''Solves the Laplace equation using Gauss-Seidel method
with a 5-point stencil
Parameters:
----------
p: 2D array of float
Initial potential distribution
pn: 2D array of float
Allocated array for previous potential distribution
l2_target: float
Stopping criterion
Returns:
-------
p: 2D array of float
Potential distribution after relaxation
'''
iterations = 0
iter_diff = l2_target + 1 #initialize iter_diff to be larger than l2_target
denominator = 0.0
ny, nx = p.shape
l2_diff = numpy.zeros(20000)
while iter_diff > l2_target:
for j in range(ny):
for i in range(nx):
pn[j,i] = p[j,i]
iter_diff = 0.0
denominator = 0.0
for j in range(1,ny-1):
for i in range(1,nx-1):
p[j,i] = .25 * (p[j,i-1] + p[j,i+1] + p[j-1,i] + p[j+1,i])
#Neumann 2nd-order BC
for j in range(1,ny-1):
p[j,-1] = .25 * (2*p[j,-2] + p[j+1,-1] + p[j-1, -1])
for j in range(ny):
for i in range(nx):
iter_diff += (p[j,i] - pn[j,i])**2
denominator += (pn[j,i]*pn[j,i])
iter_diff /= denominator
iter_diff = iter_diff**0.5
l2_diff[iterations] = iter_diff
iterations += 1
return p, iterations, l2_diff
```
```python
p, iterations, l2_diffGS = laplace2d_gauss_seidel(p0.copy(), p0.copy(), 1e-8)
print("Numba Gauss-Seidel method took {} iterations at tolerance {}".format(iterations, l2_target))
```
Numba Gauss-Seidel method took 13939 iterations at tolerance 1e-08
Cool! Using the most recently updated values of the solution in the Gauss-Seidel method saved 6,000 iterations! Now we can see how much faster than Jacobi this is, because both methods are implemented the same way:
```python
%%timeit
laplace2d_gauss_seidel(p0.copy(), p0.copy(), 1e-8)
```
1 loops, best of 3: 2.09 s per loop
We get some speed-up over the Numba version of Jacobi, but not a lot. And you may see quite different results: on some of the machines we tried, we could still not beat the NumPy version of Jacobi. This can be confusing, and hard to explain without getting into the nitty-gritty of hardware optimizations.
Don't lose hope! We have another trick up our sleeve!
## Successive Over-Relaxation (SOR)
Successive over-relaxation is able to improve on the Gauss-Seidel method by using in the update a linear combination of the previous and the current solution, as follows:
\begin{equation}
p^{k+1}_{i,j} = (1 - \omega)p^k_{i,j} + \frac{\omega}{4} \left(p^{k+1}_{i,j-1} + p^k_{i,j+1} + p^{k+1}_{i-1,j} + p^k_{i+1,j} \right)
\end{equation}
The relaxation parameter $\omega$ will determine how much faster SOR will be than Gauss-Seidel. SOR iterations are only stable for $0 < \omega < 2$. Note that for $\omega = 1$, SOR reduces to the Gauss-Seidel method.
If $\omega < 1$, that is technically an "under-relaxation" and it will be slower than Gauss-Seidel.
If $\omega > 1$, that's the over-relaxation and it should converge faster than Gauss-Seidel.
Let's write a function for SOR iterations of the Laplace equation, using Numba to get high performance.
```python
@jit(nopython=True)
def laplace2d_SOR(p, pn, l2_target, omega):
'''Solves the Laplace equation using SOR with a 5-point stencil
Parameters:
----------
p: 2D array of float
Initial potential distribution
pn: 2D array of float
Allocated array for previous potential distribution
l2_target: float
Stopping criterion
omega: float
Relaxation parameter
Returns:
-------
p: 2D array of float
Potential distribution after relaxation
'''
iterations = 0
iter_diff = l2_target + 1 #initialize iter_diff to be larger than l2_target
denominator = 0.0
ny, nx = p.shape
l2_diff = numpy.zeros(20000)
while iter_diff > l2_target:
for j in range(ny):
for i in range(nx):
pn[j,i] = p[j,i]
iter_diff = 0.0
denominator = 0.0
for j in range(1,ny-1):
for i in range(1,nx-1):
p[j,i] = (1-omega)*p[j,i] + omega*.25 * (p[j,i-1] + p[j,i+1] + p[j-1,i] + p[j+1,i])
#Neumann 2nd-order BC
for j in range(1,ny-1):
p[j,-1] = .25 * (2*p[j,-2] + p[j+1,-1] + p[j-1, -1])
for j in range(ny):
for i in range(nx):
iter_diff += (p[j,i] - pn[j,i])**2
denominator += (pn[j,i]*pn[j,i])
iter_diff /= denominator
iter_diff = iter_diff**0.5
l2_diff[iterations] = iter_diff
iterations += 1
return p, iterations, l2_diff
```
That wasn't too bad at all. Let's try this out first with $\omega = 1$ and check that it matches the Gauss-Seidel results from above.
```python
l2_target = 1e-8
omega = 1
p, iterations, l2_diffSOR = laplace2d_SOR(p0.copy(), p0.copy(), l2_target, omega)
print("Numba SOR method took {} iterations\
at tolerance {} with omega = {}".format(iterations, l2_target, omega))
```
Numba SOR method took 13939 iterations at tolerance 1e-08 with omega = 1
We have the exact same number of iterations as Gauss-Seidel. That's a good sign that things are working as expected.
Now let's try to over-relax the solution and see what happens. To start, let's try $\omega = 1.5$.
```python
l2_target = 1e-8
omega = 1.5
p, iterations, l2_diffSOR = laplace2d_SOR(p0.copy(), p0.copy(), l2_target, omega)
print("Numba SOR method took {} iterations\
at tolerance {} with omega = {}".format(iterations, l2_target, omega))
```
Numba SOR method took 7108 iterations at tolerance 1e-08 with omega = 1.5
Wow! That really did the trick! We dropped from 13939 iterations down to 7108. Now we're really cooking! Let's try `%%timeit` on SOR.
```python
%%timeit
laplace2d_SOR(p0.copy(), p0.copy(), l2_target, omega)
```
1 loops, best of 3: 1.18 s per loop
Things continue to speed up. But we can do even better!
### Tuned SOR
Above, we picked $\omega=1.5$ arbitrarily, but we would like to over-relax the solution as much as possible without introducing instability, as that will result in the fewest number of iterations.
For square domains, it turns out that the ideal factor $\omega$ can be computed as a function of the number of nodes in one direction, e.g., `nx`.
\begin{equation}
\omega \approx \frac{2}{1+\frac{\pi}{nx}}
\end{equation}
This is not some arbitrary formula, but its derivation lies outside the scope of this course. (If you're curious and have some serious math chops, you can check out Reference 3 for more information). For now, let's try it out and see how it works.
```python
l2_target = 1e-8
omega = 2./(1 + numpy.pi/nx)
p, iterations, l2_diffSORopt = laplace2d_SOR(p0.copy(), p0.copy(), l2_target, omega)
print("Numba SOR method took {} iterations\
at tolerance {} with omega = {:.4f}".format(iterations, l2_target, omega))
```
Numba SOR method took 1110 iterations at tolerance 1e-08 with omega = 1.9521
Wow! That's *very* fast. Also, $\omega$ is very close to the upper limit of 2. SOR tends to work fastest when $\omega$ approaches 2, but don't be tempted to push it. Set $\omega = 2$ and the walls will come crumbling down.
Let's see what `%%timeit` has for us now.
```python
%%timeit
laplace2d_SOR(p0.copy(), p0.copy(), l2_target, omega)
```
10 loops, best of 3: 184 ms per loop
Regardless of the hardware on which we tried this, the tuned SOR gave *big* speed-ups, compared to the Jacobi method (whether implemented with NumPy or Numba). Now you know why we told you at the end of [Lesson 1](http://nbviewer.ipython.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/05_relax/05_01_2D.Laplace.Equation.ipynb) that the Jacobi method is the *worst* iterative solver and almost never used.
Just to convince ourselves that everything is OK, let's check the error after the 1,110 iterations of tuned SOR:
```python
L2_rel_error(p,pan)
```
7.7927433550683514e-05
Looking very good, indeed.
We didn't explain it in any detail, but notice the very interesting implication of Equation $(6)$: the ideal relaxation factor is a function of the grid size.
Also keep in mind that the formula only works for square domains with uniform grids. If your problem has an irregular geometry, you will need to find a good value of $\omega$ by numerical experiments.
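A rough sketch of such a numerical experiment (not part of the lesson, and slow, since it re-runs the solver several times): scan a handful of $\omega$ values with the `laplace2d_SOR` function defined above and keep the one that converges in the fewest iterations.
```python
# Sketch only: brute-force search for a good relaxation parameter.
best = None
for omega_try in numpy.linspace(1.0, 1.95, 8):
    _, its, _ = laplace2d_SOR(p0.copy(), p0.copy(), l2_target, omega_try)
    if best is None or its < best[1]:
        best = (omega_try, its)
print("best omega found: {:.3f} ({} iterations)".format(best[0], best[1]))
```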
## Decay of the difference between iterates
In the [Poisson Equation notebook](./05_02_2D.Poisson.Equation.ipynb), we noticed how the norm of the difference between consecutive iterations first dropped quite fast, then settled for a more moderate decay rate. With Gauss-Seidel, SOR and tuned SOR, we reduced the number of iterations required to reach the stopping criterion. Let's see how that reflects on the time history of the difference between consecutive solutions.
```python
pyplot.figure(figsize=(8,8))
pyplot.xlabel(r'iterations', fontsize=18)
pyplot.ylabel(r'$L_2$-norm', fontsize=18)
pyplot.semilogy(numpy.trim_zeros(l2_diffJ,'b'),
'k-', lw=2, label='Jacobi')
pyplot.semilogy(numpy.trim_zeros(l2_diffGS,'b'),
'k--', lw=2, label='Gauss-Seidel')
pyplot.semilogy(numpy.trim_zeros(l2_diffSOR,'b'),
'g-', lw=2, label='SOR')
pyplot.semilogy(numpy.trim_zeros(l2_diffSORopt,'b'),
'g--', lw=2, label='Optimized SOR')
pyplot.legend(fontsize=16);
```
The Jacobi method starts out with very fast convergence, but then it settles into a slower rate. Gauss-Seidel shows a faster rate in the first few thousand iterations, but it seems to be slowing down towards the end. SOR is a lot faster to converge, though, and optimized SOR just plunges down!
## References
1. [Gonsalves, Richard J. Computational Physics I. State University of New York, Buffalo: (2011): Section 3.1 ](http://www.physics.buffalo.edu/phy410-505/2011/index.html)
2. Moin, Parviz, "Fundamentals of Engineering Numerical Analysis," Cambridge University Press, 2nd edition (2010).
3. Young, David M. "A bound for the optimum relaxation factor for the successive overrelaxation method." Numerische Mathematik 16.5 (1971): 408-413.
```python
from IPython.core.display import HTML
css_file = '../../styles/numericalmoocstyle.css'
HTML(open(css_file, "r").read())
```
<link href='http://fonts.googleapis.com/css?family=Alegreya+Sans:100,300,400,500,700,800,900,100italic,300italic,400italic,500italic,700italic,800italic,900italic' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Arvo:400,700,400italic' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=PT+Mono' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Shadows+Into+Light' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Nixie+One' rel='stylesheet' type='text/css'>
<link href='https://fonts.googleapis.com/css?family=Source+Code+Pro' rel='stylesheet' type='text/css'>
<style>
@font-face {
font-family: "Computer Modern";
src: url('http://mirrors.ctan.org/fonts/cm-unicode/fonts/otf/cmunss.otf');
}
#notebook_panel { /* main background */
background: rgb(245,245,245);
}
div.cell { /* set cell width */
width: 750px;
}
div #notebook { /* centre the content */
background: #fff; /* white background for content */
width: 1000px;
margin: auto;
padding-left: 0em;
}
#notebook li { /* More space between bullet points */
margin-top:0.8em;
}
/* draw border around running cells */
div.cell.border-box-sizing.code_cell.running {
border: 1px solid #111;
}
/* Put a solid color box around each cell and its output, visually linking them*/
div.cell.code_cell {
background-color: rgb(256,256,256);
border-radius: 0px;
padding: 0.5em;
margin-left:1em;
margin-top: 1em;
}
div.text_cell_render{
font-family: 'Alegreya Sans' sans-serif;
line-height: 140%;
font-size: 125%;
font-weight: 400;
width:600px;
margin-left:auto;
margin-right:auto;
}
/* Formatting for header cells */
.text_cell_render h1 {
font-family: 'Nixie One', serif;
font-style:regular;
font-weight: 400;
font-size: 45pt;
line-height: 100%;
color: rgb(0,51,102);
margin-bottom: 0.5em;
margin-top: 0.5em;
display: block;
}
.text_cell_render h2 {
font-family: 'Nixie One', serif;
font-weight: 400;
font-size: 30pt;
line-height: 100%;
color: rgb(0,51,102);
margin-bottom: 0.1em;
margin-top: 0.3em;
display: block;
}
.text_cell_render h3 {
font-family: 'Nixie One', serif;
margin-top:16px;
font-size: 22pt;
font-weight: 600;
margin-bottom: 3px;
font-style: regular;
color: rgb(102,102,0);
}
.text_cell_render h4 { /*Use this for captions*/
font-family: 'Nixie One', serif;
font-size: 14pt;
text-align: center;
margin-top: 0em;
margin-bottom: 2em;
font-style: regular;
}
.text_cell_render h5 { /*Use this for small titles*/
font-family: 'Nixie One', sans-serif;
font-weight: 400;
font-size: 16pt;
color: rgb(163,0,0);
font-style: italic;
margin-bottom: .1em;
margin-top: 0.8em;
display: block;
}
.text_cell_render h6 { /*use this for copyright note*/
font-family: 'PT Mono', sans-serif;
font-weight: 300;
font-size: 9pt;
line-height: 100%;
color: grey;
margin-bottom: 1px;
margin-top: 1px;
}
.CodeMirror{
font-family: "Source Code Pro";
font-size: 90%;
}
.alert-box {
padding:10px 10px 10px 36px;
margin:5px;
}
.success {
color:#666600;
background:rgb(240,242,229);
}
</style>
| 4180c554be123d4a53ce2a397bd1fb65d4d0fa9d | 102,331 | ipynb | Jupyter Notebook | lessons/05_relax/05_03_Iterate.This.ipynb | sergiommr/numerical-mooc | b088e9d205f15dbc22f83e45c2181a2c5809365f | [
"CC-BY-3.0"
] | 11 | 2018-04-12T08:05:58.000Z | 2022-03-31T17:24:14.000Z | lessons/05_relax/05_03_Iterate.This.ipynb | sergiommr/numerical-mooc | b088e9d205f15dbc22f83e45c2181a2c5809365f | [
"CC-BY-3.0"
] | 1 | 2017-01-16T20:53:59.000Z | 2017-01-16T20:53:59.000Z | lessons/05_relax/05_03_Iterate.This.ipynb | sergiommr/numerical-mooc | b088e9d205f15dbc22f83e45c2181a2c5809365f | [
"CC-BY-3.0"
] | 12 | 2016-05-02T16:47:17.000Z | 2020-03-24T16:24:16.000Z | 66.839321 | 51,376 | 0.751278 | true | 9,168 | Qwen/Qwen-72B | 1. YES
2. YES | 0.805632 | 0.740174 | 0.596308 | __label__eng_Latn | 0.980821 | 0.223754 |
```python
%matplotlib inline
```
Deep Learning with PyTorch
**************************
Translator: http://www.studyai.com/antares
Deep Learning Building Blocks: Affine Maps, Non-Linearities and Objective Functions
==========================================================================
Deep learning consists of composing linear and non-linear operations in clever ways. The introduction of non-linearities is what makes models powerful.
In this section we will work with these core components, construct an objective function, and see how a model is trained.
Affine Maps
~~~~~~~~~~~
One of the core workhorses of deep learning is the affine map, which is a function $f(x)$ where
\begin{align}f(x) = Ax + b\end{align}
for a matrix $A$ and vectors $x, b$. The parameters to be learned here are $A$ and $b$.
Often, $b$ is referred to as the *bias* term.
PyTorch and most other deep learning frameworks do things a little differently from traditional linear algebra: they map the rows of the input instead of the columns.
That is, the $i$'th row of the output below is the mapping of the $i$'th row of the input, plus the bias term. Look at the example below.
```python
# Author: Robert Guthrie
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
torch.manual_seed(1)
```
```python
lin = nn.Linear(5, 3) # maps from R^5 to R^3, parameters A, b
# data is 2x5. A maps from 5 to 3... can we map "data" under A?
data = torch.randn(2, 5)
print(lin(data)) # yes
```
Non-Linearities
~~~~~~~~~~~~~~~
First, note the following fact, which explains why we need non-linearities in the first place. Suppose we have two affine maps
$f(x) = Ax + b$ and $g(x) = Cx + d$. What is $f(g(x))$?
\begin{align}f(g(x)) = A(Cx + d) + b = ACx + (Ad + b)\end{align}
$AC$ is a matrix and $Ad + b$ is a vector, so we see that composing affine maps gives you another affine map.
From this you can see that if you want your neural network to be a long chain of affine transformations, this adds no new power to your model:
in the end it is still doing nothing more than a single affine map.
If we introduce non-linearities between the affine layers, this is no longer the case, and we can build much more powerful models.
There are a few core non-linearities: $\tanh(x), \sigma(x), \text{ReLU}(x)$ are the most common.
You are probably wondering: "why these particular functions? I can think of plenty of other non-linearities."
The main reason is that their gradients are very easy to compute, and computing gradients plays an essential role in learning.
For example:
\begin{align}\frac{d\sigma}{dx} = \sigma(x)(1 - \sigma(x))\end{align}
Note: although you may have learned about some networks in an introductory AI class in which $\sigma(x)$ was the default non-linearity,
in practice people usually avoid it. This is because the gradient vanishes very quickly as the absolute value of the argument grows.
Small gradients mean it is hard to learn. Most people default to tanh or ReLU.
```python
# In pytorch, most non-linearities are in torch.nn.functional (which we import as F)
# Note that non-linearities typically don't have learnable parameters like affine maps do.
# That is, they don't have weights that are updated during training.
data = torch.randn(2, 2)
print(data)
print(F.relu(data))
```
Softmax and Probability Distributions
~~~~~~~~~~~~~~~~~~~~~~~~~
The function $\text{Softmax}(x)$ is also just a non-linearity, but it is special in that it is usually the last operation done in a network.
This is because it takes in a vector of real numbers and returns a probability distribution.
Its definition is as follows. Let $x$ be a vector of real numbers (positive, negative, whatever, there are no constraints).
Then the i'th component of $\text{Softmax}(x)$ is:
\begin{align}\frac{\exp(x_i)}{\sum_j \exp(x_j)}\end{align}
It should be clear that the output is a probability distribution: each element is non-negative and the sum over all components is 1.
You could also think of it as just applying an element-wise exponentiation to the input to make everything non-negative, and then dividing by the normalization constant.
```python
# Softmax is also in torch.nn.functional
data = torch.randn(5)
print(data)
print(F.softmax(data, dim=0))
print(F.softmax(data, dim=0).sum()) # Sums to 1 because it is a distribution!
print(F.log_softmax(data, dim=0)) # theres also log_softmax
```
Objective Functions
~~~~~~~~~~~~~~~~~~~
The objective function is the function that your network is being trained to minimize (in which case it is often called a *loss function*
or *cost function*).
Training proceeds by first choosing a training instance, running it through your neural network, and then computing the loss of the output.
The parameters of the model are then updated using the derivative of the loss function. Intuitively, if your model is completely confident
in its answer, and its answer is wrong, your loss will be high. If it is very confident
in its answer, and its answer is correct, the loss will be low.
The idea behind minimizing the loss function on your training examples is that your network will hopefully generalize well
and have small loss on unseen examples in your dev set, test set, or in production.
A typical loss function is the *negative log likelihood loss*,
which is a very common objective for multi-class classification. For supervised multi-class classification,
this means training the network to minimize the negative log probability of the correct output (or equivalently, maximize the log probability of the correct output).
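A minimal sketch (not part of the original tutorial) of what the negative log likelihood loss computes: it picks out $-\log$ of the probability assigned to the correct class, averaged over the batch. The tensors below are made up for illustration.
```python
log_probs = F.log_softmax(torch.randn(3, 4), dim=1)   # 3 samples, 4 classes
targets = torch.tensor([0, 2, 1])                     # correct class indices
loss_fn = nn.NLLLoss()
print(loss_fn(log_probs, targets))                    # mean over the batch
print(-log_probs[torch.arange(3), targets].mean())    # same value, computed by hand
```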
Optimization and Training
=========================
So we can compute a loss function for an instance. What do we do with it? We saw earlier that tensors know how to compute gradients
with respect to the things that were used to compute them. Since our loss is a tensor, we can compute gradients with
respect to all of the parameters used to compute it! Then we can perform the standard gradient update. Let $\theta$ be our parameters, $L(\theta)$ the loss function,
and $\eta$ a positive learning rate. Then:
\begin{align}\theta^{(t+1)} = \theta^{(t)} - \eta \nabla_\theta L(\theta)\end{align}
There is a huge collection of algorithms, and a lot of active research, attempting to do something more than just this vanilla gradient update. Many of them attempt to vary the learning rate based on what is happening at train time.
You don't need to worry about what these algorithms specifically do unless you are really interested. Torch provides many in the torch.optim package, and they are all completely transparent:
using the simplest gradient update looks the same as using the more complicated algorithms. Trying different update algorithms and different parameters for those algorithms
(like different initial learning rates) is important in optimizing your network's performance.
Often, just replacing vanilla SGD with an optimizer like Adam or RMSProp will boost performance noticeably.
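As a minimal sketch (illustrative only, on a throwaway parameter), here is the vanilla update above done once by hand and once with ``torch.optim.SGD``:
```python
theta = torch.randn(3, requires_grad=True)
loss = (theta ** 2).sum()
loss.backward()
with torch.no_grad():
    theta -= 0.1 * theta.grad          # theta <- theta - eta * grad
theta.grad.zero_()

optimizer = optim.SGD([theta], lr=0.1)  # the optimizer performs the same update for us
loss = (theta ** 2).sum()
loss.backward()
optimizer.step()
optimizer.zero_grad()
```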
Creating Network Components in PyTorch
======================================
Before we move on to focusing on NLP, let's do an annotated example of building a network in PyTorch using affine maps and non-linearities.
We will also see how to compute a loss function, using PyTorch's negative log likelihood, and update parameters by backpropagation.
All network components should inherit from nn.Module and override the forward() method. That is about it as far as the boilerplate goes.
Inheriting from nn.Module provides a lot of functionality to your component. For example, it makes it keep track of its trainable parameters,
and you can swap it between CPU and GPU with the ``.to(device)`` method,
where device can be a CPU device ``torch.device("cpu")`` or a CUDA device ``torch.device("cuda:0")``.
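For example, a minimal sketch (illustrative only) of moving a module onto the GPU when one is available:
```python
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
lin_gpu = nn.Linear(5, 3).to(device)   # the parameters now live on `device`
```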
Let's write an annotated example of a network that takes in a sparse bag-of-words representation
and outputs a probability distribution over two labels: "English" and "Spanish". This model is just logistic regression.
Example: Logistic Regression Bag-of-Words Classifier
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Our model will map a sparse bag-of-words (BoW) representation to probabilities over labels. We assign each word in the vocab an index.
For example, say our entire vocab is two words, "hello" and "world", with indices 0 and 1 respectively.
The BoW vector for the sentence "hello hello hello hello" is
\begin{align}\left[ 4, 0 \right]\end{align}
For the sentence "hello world world hello", it is
\begin{align}\left[ 2, 2 \right]\end{align}
etc. In general, it is
\begin{align}\left[ \text{Count}(\text{hello}), \text{Count}(\text{world}) \right]\end{align}
Denote this BoW vector as $x$. The output of our network is then:
\begin{align}\log \text{Softmax}(Ax + b)\end{align}
That is, we pass the input through an affine map and then do log softmax.
```python
data = [("me gusta comer en la cafeteria".split(), "SPANISH"),
("Give it to me".split(), "ENGLISH"),
("No creo que sea una buena idea".split(), "SPANISH"),
("No it is not a good idea to get lost at sea".split(), "ENGLISH")]
test_data = [("Yo creo que si".split(), "SPANISH"),
("it is lost on me".split(), "ENGLISH")]
# word_to_ix maps each word in the vocab to a unique integer, which will be its
# index into the Bag of words vector
word_to_ix = {}
for sent, _ in data + test_data:
for word in sent:
if word not in word_to_ix:
word_to_ix[word] = len(word_to_ix)
print(word_to_ix)
VOCAB_SIZE = len(word_to_ix)
NUM_LABELS = 2
class BoWClassifier(nn.Module): # inheriting from nn.Module!
def __init__(self, num_labels, vocab_size):
# calls the init function of nn.Module. Dont get confused by syntax,
# just always do it in an nn.Module
super(BoWClassifier, self).__init__()
# Define the parameters that you will need. In this case, we need A and b,
# the parameters of the affine mapping.
# Torch defines nn.Linear(), which provides the affine map.
# Make sure you understand why the input dimension is vocab_size
# and the output is num_labels!
self.linear = nn.Linear(vocab_size, num_labels)
# NOTE! The non-linearity log softmax does not have parameters! So we don't need
# to worry about that here
def forward(self, bow_vec):
# Pass the input through the linear layer,
# then pass that through log_softmax.
# Many non-linearities and other functions are in torch.nn.functional
return F.log_softmax(self.linear(bow_vec), dim=1)
def make_bow_vector(sentence, word_to_ix):
vec = torch.zeros(len(word_to_ix))
for word in sentence:
vec[word_to_ix[word]] += 1
return vec.view(1, -1)
def make_target(label, label_to_ix):
return torch.LongTensor([label_to_ix[label]])
model = BoWClassifier(NUM_LABELS, VOCAB_SIZE)
# The model knows its parameters. The first output below is A, the second is b.
# Whenever you assign a component to a class variable in the __init__() function of a module,
# which was done with the line: self.linear = nn.Linear(...)
# then through some Python magic your module (here, BoWClassifier) will store knowledge of the nn.Linear's parameters
for param in model.parameters():
print(param)
# To run the model, pass in a BoW vector
# Here we don't need to train, so the code is wrapped in torch.no_grad():
with torch.no_grad():
sample = data[0]
bow_vector = make_bow_vector(sample[0], word_to_ix)
log_probs = model(bow_vector)
print(log_probs)
```
Which of the above values corresponds to the log probability of ENGLISH, and which to SPANISH? We never defined it,
but we need to if we want to train the model.
```python
label_to_ix = {"SPANISH": 0, "ENGLISH": 1}
```
So let's train! To do this, we pass instances through the network to get log probabilities, compute the loss function and its gradient, and then update the parameters with a gradient step.
Loss functions are provided by PyTorch's nn package. nn.NLLLoss() is the negative log likelihood loss we want.
Optimization functions are defined in the torch.optim package. Here, we will just use SGD.
Note that the *input* to NLLLoss is a vector of log probabilities, plus a target label vector. It doesn't compute the log probabilities for us.
This is why the last layer of our network is log softmax. The loss function nn.CrossEntropyLoss() is the same as NLLLoss(),
except it does the log softmax for you.
```python
# Run on the test data before we train, just to see a before-and-after comparison
with torch.no_grad():
for instance, label in test_data:
bow_vec = make_bow_vector(instance, word_to_ix)
log_probs = model(bow_vec)
print(log_probs)
# Print the matrix column corresponding to "creo"
print(next(model.parameters())[:, word_to_ix["creo"]])
loss_function = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)
# Usually you want to pass over the training data several times. 100 epochs is much bigger than
# on a real data set, but real data sets have more than two instances.
# Usually, somewhere between 5 and 30 epochs is reasonable.
for epoch in range(100):
for instance, label in data:
        # Step 1. Remember that PyTorch accumulates gradients. We need to clear them out before each instance (or batch)
model.zero_grad()
        # Step 2. Make our BOW vector and also wrap the target in an integer tensor.
        # For example, if the target is SPANISH, then we wrap the integer 0. The loss function
        # then knows that the 0th element of the log probability vector is the log probability corresponding to SPANISH
bow_vec = make_bow_vector(instance, word_to_ix)
target = make_target(label, label_to_ix)
        # Step 3. Run our forward pass.
log_probs = model(bow_vec)
        # Step 4. Compute the loss, gradients, and update the parameters by calling optimizer.step()
loss = loss_function(log_probs, target)
loss.backward()
optimizer.step()
with torch.no_grad():
for instance, label in test_data:
bow_vec = make_bow_vector(instance, word_to_ix)
log_probs = model(bow_vec)
print(log_probs)
# Index corresponding to Spanish goes up, English goes down!
print(next(model.parameters())[:, word_to_ix["creo"]])
```
We got the right answer! You can see that the log probability for Spanish is much higher for the first example,
and the log probability for English is much higher for the second test example, as it should be.
Now you have seen how to make a PyTorch component, pass some data through it and do gradient updates.
We are ready to dig deeper into what deep NLP has to offer.
| 7afa85ed19e52ac2795c3f25b5a250923b348144 | 25,408 | ipynb | Jupyter Notebook | build/_downloads/97d5fed33a2c5bb8f1875babdea02f4c/deep_learning_tutorial.ipynb | ScorpioDoctor/antares02 | 631b817d2e98f351d1173b620d15c4a5efed11da | [
"BSD-3-Clause"
] | null | null | null | build/_downloads/97d5fed33a2c5bb8f1875babdea02f4c/deep_learning_tutorial.ipynb | ScorpioDoctor/antares02 | 631b817d2e98f351d1173b620d15c4a5efed11da | [
"BSD-3-Clause"
] | null | null | null | build/_downloads/97d5fed33a2c5bb8f1875babdea02f4c/deep_learning_tutorial.ipynb | ScorpioDoctor/antares02 | 631b817d2e98f351d1173b620d15c4a5efed11da | [
"BSD-3-Clause"
] | null | null | null | 138.84153 | 3,203 | 0.721662 | true | 4,639 | Qwen/Qwen-72B | 1. YES
2. YES | 0.839734 | 0.774583 | 0.650444 | __label__yue_Hant | 0.490528 | 0.34953 |
```python
from IPython.display import HTML
tag = HTML('''
Promijeni vidljivost <a href="javascript:code_toggle()">ovdje</a>.''')
display(tag)
```
Promijeni vidljivost <a href="javascript:code_toggle()">ovdje</a>.
```python
# Erasmus+ ICCT project (2018-1-SI01-KA203-047081)
%matplotlib notebook
import numpy as np
import math
import matplotlib.pyplot as plt
from scipy import signal
import ipywidgets as widgets
import control as c
import sympy as sym
from IPython.display import Latex, display, Markdown # For displaying Markdown and LaTeX code
from fractions import Fraction
import matplotlib.patches as patches
```
## Dominant pole approximation
When studying the behaviour of a system, it is often approximated by a dominant pole or by a pair of dominant complex poles. This example demonstrates that property.
A second-order system is defined by the following transfer function:
\begin{equation}
G(s)=\frac{\alpha\beta}{(s+\alpha)(s+\beta)}=\frac{1}{(\frac{1}{\alpha}s+1)(\frac{1}{\beta}s+1)},
\end{equation}
where $\beta=1$ and $\alpha$ can be varied.
A third-order system is defined by the following transfer function:
\begin{equation}
G(s)=\frac{\alpha{\omega_0}^2}{\big(s+\alpha\big)\big(s^2+2\zeta\omega_0s+\omega_0^2\big)}=\frac{1}{(\frac{1}{\alpha}s+1)(\frac{1}{\omega_0^2}s^2+\frac{2\zeta}{\omega_0}s+1)},
\end{equation}
where $\beta=1$, $\omega_0=4.1$, $\zeta=0.24$ and $\alpha$ can be varied.
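A quick numerical check of the same idea, separate from the interactive widget below (a sketch, assuming $\alpha=0.2$ and $\beta=1$ for the second-order case): the step response of the full system stays close to that of the first-order model built from the dominant (slower) pole alone.
```python
alpha, beta = 0.2, 1.0
G_full = c.TransferFunction(alpha * beta, [1, alpha + beta, alpha * beta])
G_dom = c.TransferFunction(alpha, [1, alpha])       # dominant pole at s = -alpha only
t = np.linspace(0, 50, 500)
_, y_full = c.step_response(G_full, t)
_, y_dom = c.step_response(G_dom, t)
plt.figure()
plt.plot(t, y_full, label='full second-order system')
plt.plot(t, y_dom, '--', label='dominant-pole approximation')
plt.legend()
plt.show()
```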
---
### How to use this interactive example?
Select between the second- and third-order system and move the slider to change the position of the movable pole $\alpha$.
<sub>This interactive example is based on the following [tutorial](https://lpsa.swarthmore.edu/PZXferStepBode/DomPole.html "The Dominant Pole Approximation") by Prof. Erik Cheever.
```python
# System selector buttons
style = {'description_width': 'initial'}
typeSelect = widgets.ToggleButtons(
options=[('sustav drugog reda', 0), ('sustav trećeg reda', 1),],
description='Select: ',style=style)
```
```python
display(typeSelect)
continuous_update=False
# set up plot
fig, ax = plt.subplots(2,1,figsize=[9.8,7],num='Aproksimacija dominantnim polom')
plt.subplots_adjust(hspace=0.35)
ax[0].grid(True)
ax[1].grid(True)
# ax[2].grid(which='both', axis='both', color='lightgray')
ax[0].axhline(y=0,color='k',lw=.8)
ax[1].axhline(y=0,color='k',lw=.8)
ax[0].axvline(x=0,color='k',lw=.8)
ax[1].axvline(x=0,color='k',lw=.8)
ax[0].set_xlabel('Re')
ax[0].set_ylabel('Im')
ax[0].set_xlim([-10,0.5])
ax[1].set_xlim([-0.5,20])
ax[1].set_xlabel('$t$ [s]')
ax[1].set_ylabel('ulaz, izlaz')
ax[0].set_title('Dijagram polova i nula')
ax[1].set_title('Vremenski odziv')
plotzero, = ax[0].plot([], [])
response, = ax[1].plot([], [])
responseAdom, = ax[1].plot([], [])
responseBdom, = ax[1].plot([], [])
ax[1].step([0,50],[0,1],color='C0',label='ulaz')
# generate x values
def response_func(a,index):
global plotzero, response, responseAdom, responseBdom
# global bodePlot, bodePlotAdom, bodePlotBdom
t = np.linspace(0, 50, 1000)
if index==0:
b=1
num=a*b
den=([1,a+b,a*b])
tf_sys=c.TransferFunction(num,den)
poles_sys,zeros_sys=c.pzmap(tf_sys, Plot=False)
tout, yout = c.step_response(tf_sys,t)
den1=([1,a])
tf_sys1=c.TransferFunction(a,den1)
toutA, youtA = c.step_response(tf_sys1,t)
den2=([1,b])
tf_sys2=c.TransferFunction(b,den2)
toutB, youtB = c.step_response(tf_sys2,t)
mag, phase, omega = c.bode_plot(tf_sys, Plot=False) # Bode-plot
magA, phase, omegaA = c.bode_plot(tf_sys1, Plot=False) # Bode-plot
magB, phase, omegaB = c.bode_plot(tf_sys2, Plot=False) # Bode-plot
s=sym.Symbol('s')
eq=(a*b/((s+a)*(s+b)))
eq1=1/(((1/a)*s+1)*((1/b)*s+1))
display(Markdown('Pomični pol (ljubičasta krivulja) $\\alpha$ je jednak %.1f, fiksni pol (crvena krivulja) $b$ je jednak %i; Prijenosna funkcija je:'%(a,1)))
display(eq),display(Markdown('or')),display(eq1)
elif index==1:
omega0=4.1
zeta=0.24
num=a*omega0**2
den=([1,2*zeta*omega0+a,omega0**2+2*zeta*omega0*a,a*omega0**2])
tf_sys=c.TransferFunction(num,den)
poles_sys,zeros_sys=c.pzmap(tf_sys, Plot=False)
tout, yout = c.step_response(tf_sys,t)
den1=([1,a])
tf_sys1=c.TransferFunction(a,den1)
toutA, youtA = c.step_response(tf_sys1,t)
den2=([1,2*zeta*omega0,omega0**2])
tf_sys2=c.TransferFunction(omega0**2,den2)
toutB, youtB = c.step_response(tf_sys2,t)
mag, phase, omega = c.bode_plot(tf_sys, Plot=False) # Bode-plot
magA, phase, omegaA = c.bode_plot(tf_sys1, Plot=False) # Bode-plot
magB, phase, omegaB = c.bode_plot(tf_sys2, Plot=False) # Bode-plot
s=sym.Symbol('s')
eq=(a*omega0**2/((s+a)*(s**2+2*zeta*omega0*s+omega0*omega0)))
eq1=1/(((1/a)*s+1)*((1/(omega0*omega0))*s*s+(2*zeta*a/omega0)*s+1))
display(Markdown('Pomični pol (ljubičasta krivulja) $\\alpha$ je jednak %.1f, fiksni pol (crvena krivulja) $\\beta$ je jednak $1\pm4j$ ($\omega_0$ je postavljen na 4.1, $\zeta$ je postavljen na 0.24). Prijenosna funkcija je:'%(a)))
display(eq),display(Markdown('ili')),display(eq1)
ax[0].lines.remove(plotzero)
ax[1].lines.remove(response)
ax[1].lines.remove(responseAdom)
ax[1].lines.remove(responseBdom)
plotzero, = ax[0].plot(np.real(poles_sys), np.imag(poles_sys), 'xg', markersize=10, label = 'pol')
response, = ax[1].plot(tout,yout,color='C1',label='odziv sustava',lw=3)
responseAdom, = ax[1].plot(toutA,youtA,color='C4',label='odziv zasnovan samo na pomičnom polu (paru polova)')
responseBdom, = ax[1].plot(toutB,youtB,color='C3',label='odziv zasnovan samo na fiksnom polu')
ax[0].legend()
ax[1].legend()
a_slider=widgets.FloatSlider(value=0.1, min=0.1, max=10, step=.1,
description='$\\alpha$:',disabled=False,continuous_update=False,
orientation='horizontal',readout=True,readout_format='.2f',)
input_data=widgets.interactive_output(response_func,{'a':a_slider,'index':typeSelect})
def update_slider(index):
global a_slider
aval=[0.1,0.1]
a_slider.value=aval[index]
input_data2=widgets.interactive_output(update_slider,{'index':typeSelect})
display(a_slider,input_data)
```
ToggleButtons(description='Select: ', options=(('sustav drugog reda', 0), ('sustav trećeg reda', 1)), style=To…
<IPython.core.display.Javascript object>
FloatSlider(value=0.1, continuous_update=False, description='$\\alpha$:', max=10.0, min=0.1)
Output()
```python
```
| e1970d321f3e7c1d3aa28440562909c8832dc37b | 142,447 | ipynb | Jupyter Notebook | ICCT_hr/examples/02/TD-12-Aproksimacija_dominantnim_polom.ipynb | ICCTerasmus/ICCT | fcd56ab6b5fddc00f72521cc87accfdbec6068f6 | [
"BSD-3-Clause"
] | 6 | 2021-05-22T18:42:14.000Z | 2021-10-03T14:10:22.000Z | ICCT_hr/examples/02/.ipynb_checkpoints/TD-12-Aproksimacija_dominantnim_polom-checkpoint.ipynb | ICCTerasmus/ICCT | fcd56ab6b5fddc00f72521cc87accfdbec6068f6 | [
"BSD-3-Clause"
] | null | null | null | ICCT_hr/examples/02/.ipynb_checkpoints/TD-12-Aproksimacija_dominantnim_polom-checkpoint.ipynb | ICCTerasmus/ICCT | fcd56ab6b5fddc00f72521cc87accfdbec6068f6 | [
"BSD-3-Clause"
] | 2 | 2021-05-24T11:40:09.000Z | 2021-08-29T16:36:18.000Z | 128.794756 | 96,483 | 0.810659 | true | 2,304 | Qwen/Qwen-72B | 1. YES
2. YES | 0.766294 | 0.70253 | 0.538344 | __label__hrv_Latn | 0.178864 | 0.089084 |
# Using DQN to cross a bridge in World of Warcraft
## The bridge
The bridge can be found at coordinates X:1669.62, Y:-3731.47, Z:148.3 in the zone Howling Fjord. The bridge has no rails, so the agent can easily fall off. The goal is to cross the bridge without falling down, with the help of DQN. A trained agent should be able to cross. The bridge is also slightly tilted from the north.
For this task, Keras will be used for modeling the network.
```python
from keras.models import Sequential, load_model
from keras.optimizers import Adam, RMSprop
from keras.layers import Dense, Flatten, Dropout
from keras.callbacks import TensorBoard, EarlyStopping
from keras.initializers import RandomUniform
from collections import deque
import random
import numpy as np
import matplotlib.pyplot as plt
from shapely.geometry import Point
from shapely.geometry.polygon import Polygon
from ctypes import cdll, POINTER, c_float, c_int
from random import randint
from time import sleep
from IPython.display import clear_output
```
## Measured constants
The first step was to measure the bridge as a polygon enclosed by four points. These four points are initialized below and were measured from the World of Warcraft client. The initial start position x and y is halfway between the eastern and western starting points.
The variables k and m define the linear equation through the end points on the other side of the bridge. This linear equation is used to determine whether the agent has passed the end of the bridge.
The step size is equal to half the width of the bridge and determines how many units the agent moves in every step. The bridge is contained within a polygon, and the ray-casting algorithm is used to decide whether the agent is inside or outside that polygon, i.e. still on the bridge or not.
The image visualizes the polygon that surrounds the bridge in the simulation.
```python
bridge_start_west = np.array([-3728.34, 1668.16])
bridge_start_east = np.array([-3734.89, 1668.64])
bridge_end_west = np.array([-3735.31, 1580.07])
bridge_end_east = np.array([-3741.63, 1580.53])
init_pos = np.sum([bridge_start_west, bridge_start_east], axis=0) / 2
k = (bridge_end_west[1] - bridge_end_east[1]) / (bridge_end_west[0] - bridge_end_east[0])
m = bridge_end_west[1] -(k * bridge_end_west[0])
step_size = np.abs((bridge_start_west[0] - bridge_start_east[0]) / 2)
current_angle = 0
polygon = Polygon([bridge_start_west, bridge_start_east, bridge_end_east, bridge_end_west])
```
## Determine goal
The goal line can be defined as:
\begin{align}
y = kx + m
\end{align}
If the agent's current y position is less than the goal's y, the agent has crossed the bridge. The agent's y has to be smaller because it moves in a north-to-south direction.
```python
def is_goal(x, y):
goal_y = (k*x) + m
return y < goal_y
```
## Is within bridge
This is calculated with the Shapely library, which implements the ray-casting algorithm.
```python
def is_within_bridge(x, y):
point = Point(x, y)
return polygon.contains(point)
```
## Calculate distance from the right edge
To determine the distance to the right edge, the linear equation of the right-hand edge line is calculated first. This is then used to construct the perpendicular line through the agent's position. The x position relative to the edge is found as the x at which the two lines intersect. Once we have the x and y on the right edge line, the Euclidean distance to the agent's current position gives the distance.
The first two methods below are only used to plot how far the agent reaches after each episode.
```python
def get_start_edge():
k_s = (bridge_start_west[1] - bridge_start_east[1]) / (bridge_start_west[0] - bridge_start_east[0])
m_s = bridge_start_west[1] -(k_s * bridge_start_west[0])
return k_s, m_s
k_s, m_s = get_start_edge()
```
```python
def get_start_edge_dist(x, y):
k_p = -1 / k_s
m_p = y - (k_p * x);
x_new = (m_p - m_s) / (k_s - k_p);
y_new = (k_s*x_new) + m_s;
a = (x_new - x);
b = (y_new - y);
d = np.sqrt(a**2 + b**2);
return d
```
```python
def get_right_edge():
k_r = (bridge_start_west[1] - bridge_end_west[1]) / (bridge_start_west[0] - bridge_end_west[0])
m_r = bridge_end_west[1] -(k_r * bridge_end_west[0])
return k_r, m_r
k_r, m_r = get_right_edge()
```
```python
def get_edge_dist(x, y):
k_p = -1 / k_r
m_p = y - (k_p * x);
x_new = (m_p - m_r) / (k_r - k_p);
y_new = (k_r*x_new) + m_r;
a = (x_new - x);
b = (y_new - y);
d = np.sqrt(a**2 + b**2);
return d
```
## Available actions
The agent has five available actions, as follows:
<ol>
<li>Adjust angle $+\frac{\pi}{8}$ radians</li>
<li>Adjust angle $+\frac{\pi}{4}$ radians</li>
<li>Adjust angle 0 radians</li>
<li>Adjust angle $-\frac{\pi}{8}$ radians</li>
<li>Adjust angle $-\frac{\pi}{4}$ radians</li>
</ol>
```python
def get_new_angle(action):
v = np.pi / 8
return{
0 : current_angle + v,
1 : current_angle + v*2,
2 : current_angle,
3 : current_angle - v,
4 : current_angle - v*2,
}[action]
```
## New position from predicted angle
Once the new angle is predicted, the new position can be determined from the current position. The figure below visualizes how the triangle can look before a step where the current angle is equal to $\alpha$. Since $\sin(\alpha) = \frac{a}{c}$ and $\cos(\alpha) = \frac{b}{c}$, $a$ and $b$ can be calculated from this. The new position $x'$ and $y'$ then evaluates to $x' = x - a$, $y' = y - b$, with the subtraction matching the code below, since the agent walks towards decreasing coordinates.
```python
def do_action(action, x, y):
v = get_new_angle(action)
current_angle = v
a = step_size * np.sin(v);
b = step_size * np.cos(v);
x_new = x - a;
y_new = y - b;
return x_new, y_new
```
## Rewards
These are the rewards that will determine how the agent performs.
```python
terminate_reward = 100
step_reward = -1
stuck_reward = -100
state_size = 2
action_size = 5
```
## Step
The state contains two variables: the current angle and the distance from the agent to the right edge.
```python
def step(action, p_x, p_y):
tmp_player_x = p_x
tmp_player_y = p_y
new_pos = do_action(action, p_x, p_y)
p_x = new_pos[0]
p_y = new_pos[1]
reward = 0
done = False
if is_goal(p_x, p_y):
done = True
reward = terminate_reward
elif is_within_bridge(p_x, p_y):
reward = step_reward
else:
p_x = tmp_player_x
p_y = tmp_player_y
reward = stuck_reward;
done = True
state = np.reshape([current_angle, get_edge_dist(p_x, p_y)], [1, state_size])
return state, reward, done, p_x, p_y
```
```python
def reset():
p_x = init_pos[0]
p_y = init_pos[1]
current_angle = 0
state = np.reshape([current_angle, get_edge_dist(p_x, p_y)], [1, state_size])
return state, p_x, p_y
```
## The DQN
The agent class contains the logic for the DQN. The network consists of two hidden layers with 40 neurons each. The neural net is visualized below.
```python
model_path = "my_model.h5"
class Agent:
def __init__(self, state_size, action_size, discount, eps, eps_decay, eps_min, l_rate, decay_linear):
self.path = model_path
self.state_size = state_size
self.action_size = action_size
self.mem = deque(maxlen=2000)
self.discount = discount
self.eps = eps
self.eps_decay = eps_decay
self.decay_linear = decay_linear
self.eps_min = eps_min
self.l_rate = l_rate
self.model = self.init_model()
def load_model(self):
return load_model(self.path)
def save_model(self):
self.model.save(self.path)
def init_model(self):
model = Sequential()
model.add(Dense(40, input_dim=self.state_size, activation='relu'))
model.add(Dense(40, activation='relu'))
model.add(Dense(self.action_size, activation='linear'))
model.compile(loss='mse', optimizer=Adam(lr=self.l_rate))
return model
def action(self, state):
if np.random.rand() <= self.eps:
return random.randrange(self.action_size)
actions = self.model.predict(state)
return np.argmax(actions[0])
def remember(self, state, action, reward, next_state, terminal):
self.mem.append((state, action, reward, next_state, terminal))
def replay(self, batch_size):
if len(self.mem) < batch_size:
batch = self.mem
else:
batch = random.sample(self.mem, batch_size)
for state, action, reward, next_state, terminal in batch:
target = reward
if not terminal:
target += self.discount * np.amax(self.model.predict(next_state)[0])
target_f = self.model.predict(state)
target_f[0][action] = target
self.model.fit(state, target_f, epochs=1, verbose=0)
self.decay()
def decay(self):
if self.eps > self.eps_min:
if self.decay_linear:
self.eps -= self.eps_decay
else:
self.eps *= self.eps_decay
```
```python
agent = Agent(
state_size = state_size,
action_size = action_size,
discount = 0.98,
eps = 1,
eps_decay = 0.001,
eps_min = 0.001,
l_rate = 0.001,
decay_linear = True
)
#Train agent
episodes = 2000
steps = 100
print_freq = 100
goalCounter = 0
goalAvg = -1
stepCounter = 0
epGoalCounter = 0
epGoalAvg = -1
epStepCounter = 0
total_ep = []
total_rewards = []
total_distance = []
for ep in range(1, episodes + 1):
state, p_x, p_y = reset()
player_x = p_x
player_y = p_y
total_reward = 0
for st in range(1, steps):
action = agent.action(state)
next_state, reward, done, p_x, p_y = step(action, player_x, player_y)
total_reward += reward
player_x = p_x
player_y = p_y
agent.remember(state, action, reward, next_state, done)
state = next_state
if done:
if reward > 1:
goalCounter += 1
epGoalCounter += 1
epStepCounter += st
epGoalAvg = epStepCounter / epGoalCounter
break
agent.replay(32)
if ep % print_freq == 0 or ep == 1:
total_ep.append(ep)
total_rewards.append(total_reward)
total_distance.append(get_start_edge_dist(player_x, player_y))
print("episode {} done, found goal {} times with and avg step of {}, total goals: {}".format(ep, epGoalCounter, epGoalAvg, goalCounter))
epGoalCounter = 0
epGoalAvg = -1
epStepCounter = 0
agent.save_model()
print("summary: total goals found: {}, an avg of {} episodes reached the goal".format(goalCounter, goalCounter/episodes))
```
episode 1 done, found goal 0 times with and avg step of -1, total goals: 0
episode 100 done, found goal 0 times with and avg step of -1, total goals: 0
episode 200 done, found goal 0 times with and avg step of -1, total goals: 0
episode 300 done, found goal 2 times with and avg step of 30.5, total goals: 2
episode 400 done, found goal 8 times with and avg step of 30.375, total goals: 10
episode 500 done, found goal 13 times with and avg step of 30.53846153846154, total goals: 23
episode 600 done, found goal 25 times with and avg step of 30.96, total goals: 48
episode 700 done, found goal 28 times with and avg step of 30.392857142857142, total goals: 76
episode 800 done, found goal 33 times with and avg step of 30.393939393939394, total goals: 109
episode 900 done, found goal 71 times with and avg step of 30.718309859154928, total goals: 180
episode 1000 done, found goal 90 times with and avg step of 30.68888888888889, total goals: 270
episode 1100 done, found goal 100 times with and avg step of 29.45, total goals: 370
episode 1200 done, found goal 100 times with and avg step of 29.06, total goals: 470
episode 1300 done, found goal 100 times with and avg step of 28.42, total goals: 570
episode 1400 done, found goal 100 times with and avg step of 28.85, total goals: 670
episode 1500 done, found goal 100 times with and avg step of 28.69, total goals: 770
episode 1600 done, found goal 100 times with and avg step of 30.03, total goals: 870
episode 1700 done, found goal 100 times with and avg step of 29.4, total goals: 970
episode 1800 done, found goal 100 times with and avg step of 29.81, total goals: 1070
episode 1900 done, found goal 100 times with and avg step of 29.14, total goals: 1170
episode 2000 done, found goal 100 times with and avg step of 29.25, total goals: 1270
summary: total goals found: 1270, an avg of 0.635 episodes reached the goal
```python
plt.plot(total_ep, total_rewards)
plt.title('Total reward from start')
plt.ylabel('Reward')
plt.xlabel('Episode')
plt.legend(['Reward'], loc='upper left')
plt.show()
```
```python
plt.plot(total_ep, total_distance, color='orange')
plt.title('Distance from start over time')
plt.ylabel('Distance')
plt.xlabel('Episode')
plt.legend(['Distance'], loc='upper left')
plt.show()
```
## Testing the trained network in the client
The last step is to confirm that the trained network works in the World of Warcraft client. To do this, Click to Move was used to walk towards a given x and y. This is a built-in function of World of Warcraft that can be accessed by manipulating memory variables with a language such as C++. The Python module ctypes allows calls into compiled C++ code. libenv.so was compiled with MinGW and the source code for this library can be found at: https://github.com/Jacobth/wow_rl_environment.
```python
lib = cdll.LoadLibrary('libenv.so')
class Env(object):
def __init__(self):
self.obj = lib.Env()
def reset(self, init_x, init_y):
lib.ResetTest.argtypes = [c_float, c_float]
lib.ResetTest(self.obj, init_x, init_y)
def step(self, new_x, new_y):
lib.StepTest.argtypes = [c_float, c_float]
lib.StepTest(self.obj, new_x, new_y)
def get_pos(self):
lib.GetPos.restype = POINTER(c_float * 2)
values = lib.GetPos(self.obj).contents
x = values[0]
y = values[1]
return x, y
```
```python
def test_model() :
env = Env()
model = load_model(model_path)
state, p_x, p_y = reset()
env.reset(c_float(p_x), c_float(p_y))
pl_x = p_x
pl_y = p_y
total_reward = 0
total_steps = 0
while True:
action = np.argmax(model.predict(state)[0])
next_state, reward, done, p_x, p_y = step(action, pl_x, pl_y)
env.step(c_float(p_x), c_float(p_y))
state = next_state
pl_x = p_x
pl_y = p_y
total_reward += reward
total_steps += 1
if done:
print("completed with reward: {} and {} steps.".format(total_reward, total_steps))
break;
test_model()
```
completed with reward: 72 and 29 steps.
This method can be used to test whether the right-edge distance behaves correctly in the client.
```python
def test_dist() :
env = Env()
while True:
x, y = env.get_pos()
print(get_edge_dist(x, y))
sleep(0.5)
clear_output()
#test_dist()
```
| 2bbd43c3db81ee7eefdd67a20d59d7f62052634d | 58,385 | ipynb | Jupyter Notebook | wow_rl_sim.ipynb | Jacobth/wow_sim_notebook | bf398442e2f6d9ddf7ea8ae02ebe563252db4c2d | [
"MIT"
] | null | null | null | wow_rl_sim.ipynb | Jacobth/wow_sim_notebook | bf398442e2f6d9ddf7ea8ae02ebe563252db4c2d | [
"MIT"
] | null | null | null | wow_rl_sim.ipynb | Jacobth/wow_sim_notebook | bf398442e2f6d9ddf7ea8ae02ebe563252db4c2d | [
"MIT"
] | null | null | null | 75.530401 | 18,104 | 0.780235 | true | 4,269 | Qwen/Qwen-72B | 1. YES
2. YES | 0.828939 | 0.66888 | 0.554461 | __label__eng_Latn | 0.958315 | 0.126528 |
# Data Fitting Exercises
# 2. Linear Least-Squares
### How to define a line of “best fit”
In the previous exercise, you used the `linregress` function to calculate a line of best fit to your experimental data. But what is this function actually doing, and how do we define the quality of fit for a particular straight line?
In this exercise, you will work through the maths behind linear regression, and then learn how to implement this using functions in Python as a more general optimisation problem.
### The maths
Linear regression addresses the problem of how to find a line of “best fit” for some data that is approximately described by a straight line. For example, we might have the data shown in the figure below:
and we want to find the straight line that best describes the relationship between $x$ and $y$:
How do we find the “best” line for this particular data set?
The equation of a straight line is
\begin{equation}
y=mx+c.
\end{equation}
This is a mathematical function that defines a set of $y$ values from a set of $x$ values, for any pair or parameters, $m$ and $c$.
More generally, we can say that $y$ is a function of some model parameters, $P$, and the input $x$:
\begin{equation}
y = f(P,x).
\end{equation}
In our example of a straight line, the parameter set $P$ contains the slope and intercept of the line:
\begin{equation}
P=\left\{m,c\right\},
\end{equation}
and the function $f$ is
\begin{equation}
f(P,x) = P_0 x + P_1.
\end{equation}
Now $P_0$ is the slope, and $P_1$ is the intercept.
In principle, we could choose any combination of $P_0$ and $P_1$ for our model parameters. Every combination gives a different straight line, which might be a better, or worse, fit to our experimental data.
To decide which set of model parameters best describes our real data, we need a way to score the agreement between the model $f(P,x)$ and the data. A common way to quantify the error between the model and the real data is to calculate the **sum of squared errors**. For every $x$ value in our input data set, we can calculate the $y_\mathrm{predicted}$ value predicted by the model. The difference between this predicted value and the real $y$ value for this data point is the **error** for that data point.
To quantify the overall error of this particular model, we can add up the *squares* of all the error terms:
\begin{equation}
\mathrm{error} = \sum_i\left[y_i - f(P,x_i)\right]^2
\end{equation}
Adding *squares* means that positive and negative errors contribute equally to the total error.
We can now determine mathematically whether one model is a better “fit” to the data than another. A better quality of fit will give us a smaller error. The **best** fit to the data (for this particular model function) is given by the set of parameters that **minimises** the error. The procedure of finding a set of model parameters that minimises the sum of squared errors is often called **least-squares** fitting.
### The code
In Python the model straight-line function can be expressed using the following function:
>```python
def model_function( P, x ):
m = P[0]
c = P[1]
y = m * x + c
return y
```
If you compare this to the mathematical function definition above, you should see the same structure in both the mathematical description and the Python representation.
The function `model_function()` also looks similar to the `line()` function you used in Exercise 1.
```python
def line( m, c, x ):
y = m * x + c
return y
```
The only difference is that in `model_function()` we pass in the model **parameters** as a **list**. This is then unpacked inside the function, instead of passing in $m$ and $c$ separately.
<div class="alert alert-success">
In the code cell below, define the functions <span style='font-family:monospace'>model_function()</span> and <span style='font-family:monospace'>line()</span>.
Using <span style='font-family:monospace'>x = 2</span>, check that both these functions return the same $y$ values for pairs of parameters, $m$ and $c$. For the <span style='font-family:monospace'>model_function()</span> function, remember that the parameters need to be passed in as a **list**.
</div>
```python
```
We can also write a Python function that calculates the error between the predictions of our model function and the real $y$ values:
>```python
def error_function( P, x, y ):
y_predicted = model_function( P, x )
error_terms = y - y_predicted
total_error = np.sum( error_terms**2 )
return total_error
```
* The first line calls the `model_function()` function with the list of model parameters, $P$, and the input $x$ values. This returns a set of *predicted* $y$ values for this particular model.
* The second line calculates the error terms, i.e. the differences between the predicted and actual $y$ values.
* The third line calculates the sum of squared errors.
* The fourth line returns the error.
<div class="alert alert-success">
By copying the appropriate code from Exercise 1, or writing it from scratch, read in the experimental data from <span style='font-family:monospace'>'data/equilibrium_constant.dat'</span> and plot it as points. <br/><br/>
Using the <span style='font-family:monospace'>model_function()</span> function, generate a series of lines with different parameters, and plot these against the experimental data. <br/><br/>
Define the function <span style='font-family:monospace'>error_function()</span>. For each of your guessed model parameter sets, calculate the error. You should see that models that appear closer to the experimental data give you lower errors.
</div>
```python
```
You should have seen how changing the parameters for your model lets you generate different lines, that each do better or worse jobs of describing the experimental data. In each case, the quality of fit between your model and the data is given by the error, calculated with your error function.
The final step in the problem is to find the **least-squares** solution. This corresponds to the set of model parameters that **minimise** the result from your error function.
Finding the minimum or maximum of a function is a common mathematical problem, and there are a large number of algorithms for doing this numerically, using computers.
The `scipy` module contains functions for doing exactly this, in `scipy.optimize`. For this exercise, you will import the `minimize()` function, and use this to find the parameters that minimise the output of your error function. You can import the `minimize()` function as follows:
>```python
from scipy.optimize import minimize
```
The `minimize()` function can be given a [large number of options](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html), but the simplest usage requires three arguments:
>```python
minimize( function_to_minimise, initial_guess, other_arguments )
```
- The first argument, `function_to_minimise`, is the name of the function to minimise. For this exercise, this is `error_function`.
- The second argument, `initial_guess`, is a starting guess for the parameters that you want to fit. e.g. you might guess that `P=[1.0,1.0]`.
- The third argument is a list of any other arguments that need to be passed into the function you are minimising. In this exercise, your function `error_function` takes three arguments: `error_function( P, x, y )`: The first is the set of model parameters, which you want to change during the fitting. The second and third are the **observed** $x$ and $y$ values (here these are $1/T$ and $\ln{K}$) for each data point. These do not change during the fit, and get passed into `minimize()` as a list in the `other_arguments` position.
e.g. if you have stored the $1/T$ ($x$ values) and $\ln{K}$ ($y$ values) for the experimental data in numpy arrays called, respectively, `inverse_temperature` and `ln_K` then you could write
>```python
P_initial = [ 1.0, 1.0 ]
other_args = inverse_temperature, ln_K
minimize( error_function, P_initial, other_args )
```
The `other_arg = inverse_temperature, ln_K` command stores the $x$ and $y$ data values in a data type called a **tuple**. From the perspective of this exercise, a tuple is like a list, in that it can store multiple values (here you store two `numpy` arrays).
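As a tiny illustration (with made-up arrays, and assuming `numpy` is imported as `np` as in the earlier snippets), packing and unpacking a tuple looks like this:
```python
a = np.array([1.0, 2.0, 3.0])
b = np.array([10.0, 20.0, 30.0])
other_args = a, b              # a tuple holding both arrays
first, second = other_args     # unpacking recovers them unchanged
```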
<div class="alert alert-success">
In the following code cell, import the <span style='font-family:monospace'>minimize</span> function, and use it to minimise your error function, with the experimental data you loaded previously.
</div>
```python
```
Your output should look like:
```
fun: 0.026018593683863903
hess_inv: array([[ 1.53892987e+06, -4.95784961e+03],
[ -4.95784961e+03, 1.60346929e+01]])
jac: array([ 1.53901055e-07, -1.45286322e-07])
message: 'Optimization terminated successfully.'
nfev: 60
nit: 4
njev: 15
status: 0
success: True
x: array([ 6917.390252 , -21.05540855])
```
You will see that a lot of information about the fit is produced. For this exercise, the important parts of the output are the `message`, which tells you whether the optimisation was successful, and the result labelled `x`, which gives the input parameters that minimise your error function. The value of the error function with these optimised model parameters is given by the `fun` result.
You can access specific parts of the output by appending the variable names to the end of your `minimize()` call, e.g. to get the optimal parameters you can write
>```python
minimize( error_function, P_initial, other_args ).x
```
<div class="alert alert-success">
Repeat your optimisation, and store the optimised model parameters in a variable <span style='font-family:monospace'>P_opt</span>. Check these values against the fitted slope and intercept you found in Exercise 1.
</div>
| 21e4f0482649d0a6c83292947cdc588e651fc1b3 | 12,854 | ipynb | Jupyter Notebook | Data_Fitting_Exercise_S2_2.ipynb | pythoninchemistry/chem_data_analysis_jupyter | 4af545f1a8acdded28d96508bb5adc8929da92cc | ["CC-BY-4.0"] | 3 | 2019-05-05T00:21:55.000Z | 2021-09-16T14:15:15.000Z | Data_Fitting_Exercise_S2_2.ipynb | pythoninchemistry/chem_data_analysis_jupyter | 4af545f1a8acdded28d96508bb5adc8929da92cc | ["CC-BY-4.0"] | null | null | null | Data_Fitting_Exercise_S2_2.ipynb | pythoninchemistry/chem_data_analysis_jupyter | 4af545f1a8acdded28d96508bb5adc8929da92cc | ["CC-BY-4.0"] | null | null | null | 50.015564 | 544 | 0.642213 | true | 2,435 | Qwen/Qwen-72B | 1. YES 2. YES | 0.939025 | 0.851953 | 0.800005 | __label__eng_Latn | 0.998579 | 0.697012 |
# Decision Trees
Alice observes the tennis players on the tennis court in front of her house. She wants to find out under which weather conditions the players decide to play tennis and when they do not. She makes the following observations:
Create a decision tree based on these training data. At each node, note the number of yes and no instances. Use the information entropy as the measure of node purity.
# K-Nearest-Neighbor Classifier
Consider the following subset of the Iris dataset:
| Sepal.Length | Sepal.Width | Petal.Length | Petal.Width | Species|
| ------------- |-------------| -----|------|------|
| 5.1 | 3.5 | 1.4 | 0.2 | setosa
| 4.9 | 3.0 | 1.4 | 0.2 | setosa
| 7.0 | 3.2 | 4.7 | 1.4 | versicolor
| 6.4 | 3.2 | 4.5 | 1.5 | versicolor
Classify the sample
| 6.9 | 3.0 | 3.5 | 1 |
|-|-|-|-|
using a k-nearest-neighbor classifier. Use k=1 and the Euclidean distance.
# Kernel Density Estimation
In the lecture you learned about kernel density estimators as a nonparametric method for representing a distribution. The distribution at a point $t$ is represented as follows:
\begin{equation}
p(t) = \frac{1}{n \, h} \sum_{i=1}^n \varphi\left(\frac{t-x_i}{h}\right)
\end{equation}
Here $\varphi$ is a window function, e.g. the Gaussian window
\begin{equation}
\varphi(u) = \frac{1}{\sqrt{2\pi}} e^{-u^2/2}
\end{equation}
## Exercise 1
Implement the function `kde(t,h,x)` which, for a point $t$, a window width $h$, and an array of training points $x$, computes the kernel density estimate of $p(t)$.
```python
import math
import numpy as np
import matplotlib.pyplot as plt
def kde(t,h,x):
#TODO
return None
def k1(t):
return 1/math.sqrt(2*math.pi) * math.exp(-1/2 * t**2)
example = np.concatenate((np.random.normal(0,1,100),np.random.normal(5,1,100)))
dens = [kde(t,0.5,example) for t in np.arange(-2,8,0.05)]
plt.plot(dens)
```
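For reference, here is one possible way the TODO above could be filled in (a sketch only, not the official solution). It reuses the Gaussian window `k1` and the `example` data defined in the cell above and simply evaluates the sum in the definition of $p(t)$:
```python
# Possible solution sketch for Exercise 1 (assumes the cell above has been run,
# so that k1, example, np and plt are available).
def kde_sketch(t, h, x):
    # p(t) = 1/(n*h) * sum_i phi((t - x_i)/h)
    return sum(k1((t - xi) / h) for xi in x) / (len(x) * h)

dens_sketch = [kde_sketch(t, 0.5, example) for t in np.arange(-2, 8, 0.05)]
plt.plot(dens_sketch)
plt.show()
```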
## Exercise 2
Implement the function `classify_kde(xnew,x,classes)` which performs a classification using the kernel density estimator. That is, it is a Bayes classifier in which the likelihood is estimated with the kernel density estimator.
```python
import pandas as pd
from scipy.io import arff
def classify_kde(xnew,x,classes):
#TODO
return None
data = arff.loadarff('features1.arff')
df = pd.DataFrame(data[0])
feat = df["AccX_mean"]
cl = df["class"]
p = [classify_kde(x,feat,cl) for x in feat]
np.mean(p == cl)
```
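Again for reference only, a sketch of how such a classifier could be built on top of the estimator above: for each class the likelihood is a kernel density estimate over the training points of that class, multiplied by the relative class frequency as prior. The bandwidth `h = 0.5` is an arbitrary choice and not part of the exercise.
```python
# Possible solution sketch for Exercise 2 (assumes kde_sketch/k1 from above).
def classify_kde_sketch(xnew, x, classes, h=0.5):
    best_class, best_score = None, -np.inf
    for c in np.unique(classes):
        xc = x[classes == c]                  # training points belonging to class c
        prior = len(xc) / len(x)              # relative class frequency
        likelihood = kde_sketch(xnew, h, xc)  # KDE estimate of p(xnew | class c)
        score = prior * likelihood            # unnormalised posterior
        if score > best_score:
            best_class, best_score = c, score
    return best_class
```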
| 096a43efe076eb1af87aa754c3eaf94665f43ae3 | 122,819 | ipynb | Jupyter Notebook | 05-Weitere-Klassifikatoren.ipynb | stefanluedtke/AI-II-Exercises | a1e816375c58f52609c3f7683b17a35fed64bc1d | ["MIT"] | null | null | null | 05-Weitere-Klassifikatoren.ipynb | stefanluedtke/AI-II-Exercises | a1e816375c58f52609c3f7683b17a35fed64bc1d | ["MIT"] | null | null | null | 05-Weitere-Klassifikatoren.ipynb | stefanluedtke/AI-II-Exercises | a1e816375c58f52609c3f7683b17a35fed64bc1d | ["MIT"] | null | null | null | 871.056738 | 118,424 | 0.950024 | true | 904 | Qwen/Qwen-72B | 1. YES 2. YES | 0.893309 | 0.845942 | 0.755688 | __label__deu_Latn | 0.955825 | 0.594049 |
<div style='background-image: url("title01.png") ; padding: 0px ; background-size: cover ; border-radius: 5px ; height: 200px'>
<div style="float: right ; margin: 50px ; padding: 20px ; background: rgba(255 , 255 , 255 , 0.7) ; width: 50% ; height: 150px">
<div style="position: relative ; top: 50% ; transform: translatey(-50%)">
<div style="font-size: xx-large ; font-weight: 900 ; color: rgba(0 , 0 , 0 , 0.8) ; line-height: 100%">Computers, Waves, Simulations</div>
<div style="font-size: large ; padding-top: 20px ; color: rgba(0 , 0 , 0 , 0.5)">Finite-Difference Method - High-Order Taylor Operators</div>
</div>
</div>
</div>
#### This exercise covers the following aspects
* Learn how to define high-order central finite-difference operators
* Investigate the behaviour of the operators with increasing length
#### Basic Equations
The Taylor expansion of $f(x + dx)$ around $x$ is defined as
$$
f(x+dx)=\sum_{n=0}^\infty \frac{f^{(n)}(x)}{n!}dx^{n}
$$
Finite-difference operators can be calculated by seeking weights (here: $a$, $b$, $c$) with which function values have to be multiplied to obtain an interpolation or a derivative. Example:
$$
\begin{align}
a ~ f(x + dx) & \ = \ a ~ \left[ ~ f(x) + f^{'} (x) dx + \frac{1}{2!} f^{''} (x) dx^2 + \dotsc ~ \right] \\
b ~ f(x) & \ = \ b ~ \left[ ~ f(x) ~ \right] \\
c ~ f(x - dx) & \ = \ c ~ \left[ ~ f(x) - f^{'} (x) dx + \frac{1}{2!} f^{''} (x) dx^2 - \dotsc ~ \right]
\end{align}
$$
This can be expressed in matrix form by comparing coefficients, here seeking a 2nd derivative
$$
\begin{align}
&a ~~+~~ ~~~~b &+~~ c & = & 0 \\
&a ~~\phantom{+}~~ \phantom{b} &-~~ c & = & 0 \\
&a ~~\phantom{+}~~ \phantom{b} &+~~ c & = & \frac{2!}{\mathrm{d}x^2}
\end{align}
$$
which leads to
$$
\begin{pmatrix}
1 & 1 & 1 \\
1 & 0 & -1 \\
1 & 0 & 1
\end{pmatrix}
\begin{pmatrix}
a\\
b \\
c
\end{pmatrix}
=
\begin{pmatrix}
0\\
0 \\
\frac{2!}{dx^2}
\end{pmatrix}
$$
and using matrix inversion we obtain
$$
\begin{pmatrix}
a \\
b\\
c
\end{pmatrix}
=
\begin{pmatrix}
\frac{1}{\mathrm{d}x^2} \\
- \frac{2}{\mathrm{d}x^2} \\
\frac{1}{\mathrm{d}x^2}
\end{pmatrix}
$$
This is the well-known 3-point operator for the 2nd derivative. This can easily be generalized to higher point operators and higher order derivatives. Below you will find a routine that initializes the system matrix and solves for the Taylor operator.
#### Calculating the Taylor operator
The subroutine `central_difference_coefficients()` initializes the system matrix and solves for the difference weights assuming $dx=1$. It calculates the centered differences using an arbitrary number of coefficients, also for higher derivatives. The weights are defined at $x\pm i dx$ and $i=0,..,(nop-1)/2$, where $nop$ is the length of the operator. Careful! Because it is centered $nop$ has to be an odd number (3,5,...)!
It returns a central finite difference stencil (a vector of length $nop$) for the `n`th derivative.
```python
# Import libaries
import math
import numpy as np
import matplotlib.pyplot as plt
```
```python
# Define function to calculate Taylor operators
def central_difference_coefficients(nop, n):
"""
Calculate the central finite difference stencil for an arbitrary number
of points and an arbitrary order derivative.
:param nop: The number of points for the stencil. Must be
an odd number.
:param n: The derivative order. Must be a positive number.
"""
m = np.zeros((nop, nop))
for i in range(nop):
for j in range(nop):
dx = j - nop // 2
m[i, j] = dx ** i
s = np.zeros(nop)
s[n] = math.factorial(n)
    # The following statement returns oper = inv(m) * s
oper = np.linalg.solve(m, s)
# Calculate operator
return oper
```
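As a quick sanity check (a sketch, assuming the cell above has been executed), the routine should reproduce the classical 3-point operators derived earlier for $dx=1$:
```python
# Expect the 2nd-derivative stencil [1, -2, 1] and the 1st-derivative stencil [-0.5, 0, 0.5].
print(central_difference_coefficients(3, 2))   # approximately [ 1. -2.  1.]
print(central_difference_coefficients(3, 1))   # approximately [-0.5  0.  0.5]
```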
#### Plot Taylor operators
Investigate graphically the Taylor operators. Increase $nop$ for the first $n=1$ or higher order derivatives. Discuss the results and try to understand the interpolation operator (derivative order $n=0$).
```python
# Calculate and plot Taylor operator
# Give length of operator (odd)
nop = 25
# Give order of derivative (0 - interpolation, 1 - first derivative, 2 - second derivative)
n = 1
# Get operator from routine 'central_difference_coefficients'
oper = central_difference_coefficients(nop, n)
```
```python
# Plot operator
x = np.linspace(-(nop - 1) / 2, (nop - 1) / 2, nop)
# Simple plot with operator
plt.figure(figsize=(10, 4))
plt.plot(x, oper,lw=2,color='blue')
plt.plot(x, oper,lw=2,marker='o',color='blue')
plt.plot(0, 0,lw=2,marker='o',color='red')
#plt.plot (x, nder5-ader, label="Difference", lw=2, ls=":")
plt.title("Taylor Operator with nop = %i " % nop )
plt.xlabel('x')
plt.ylabel('Operator')
plt.grid()
plt.show()
```
#### Conclusions
* The Taylor operator weights decrease rapidly with distance from the central point
* In practice often 4th order operators are used to calculate space derivatives
| ea6fc4c8cf9813f58ddace380cb6266864a22b93 | 30,711 | ipynb | Jupyter Notebook | PDE's/Using Taylor.ipynb | MonitSharma/Computational-Methods-in-Physics | e3b2db36c37dd5f64b9a37ba39e9bb267ba27d85 | ["MIT"] | null | null | null | PDE's/Using Taylor.ipynb | MonitSharma/Computational-Methods-in-Physics | e3b2db36c37dd5f64b9a37ba39e9bb267ba27d85 | ["MIT"] | null | null | null | PDE's/Using Taylor.ipynb | MonitSharma/Computational-Methods-in-Physics | e3b2db36c37dd5f64b9a37ba39e9bb267ba27d85 | ["MIT"] | null | null | null | 119.034884 | 22,469 | 0.837387 | true | 1,534 | Qwen/Qwen-72B | 1. YES 2. YES | 0.904651 | 0.880797 | 0.796814 | __label__eng_Latn | 0.959784 | 0.689597 |
<a href="https://colab.research.google.com/github/anathnath/EDA/blob/master/CBCS_V_SchrodingerEquationPartI.ipynb" target="_parent"></a>
# $ \color{green}{ CoreP11-Quantum~ Mechanics~ and~ Application ~lab} $
## $ \color{green}{West~Bengal ~ State~ University} $
# Quantum Mechanics and Application: 60 class Hours 2 credits
# Dr. Anathnath Ghosh
# Dum dum Motijheel College
# email: anathnath.rivu@gmail.com
# References: 1. Physics in Laboratory including Python
# by
# Dr. Pradipta Kumar Mandal
# 2. Google master
## Elementary Mathematical idea:
## Curve Fitting
```python
import numpy as np
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
x=np.array([0,10,20,30,40,50,60,70,80,90])
y=np.array([76,92,106,123,132,151,179,203,227,249])
def f(x,a,b,c):
return a+b*x+c*x**2
par,var=curve_fit(f,x,y)
a,b,c=par
plt.plot(x,f(x,a,b,c),'r',label='Fitted Function')
plt.scatter(x,y,label='Data Points')
plt.xlabel('x')
plt.ylabel('f(x)')
plt.title("Curve Fitting")
plt.legend()
plt.show()
```
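As a small addition (a sketch, not part of the original exercise): `curve_fit` also returns the covariance matrix `var`, whose diagonal contains the parameter variances, so one-sigma uncertainties of the fitted coefficients can be printed as follows (assumes `par` and `var` from the cell above).
```python
# One-sigma uncertainties of the fitted parameters from the covariance matrix.
perr = np.sqrt(np.diag(var))
for name, value, err in zip(['a', 'b', 'c'], par, perr):
    print("{} = {:8.4f} +/- {:.4f}".format(name, value, err))
```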
```python
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
x=2*np.random.randint(0,2,size=1000)-1
x
np.cumsum(x)
plt.plot(np.cumsum(x))
plt.show()
```
```python
import numpy as np
import matplotlib.pyplot as plt
import numpy.polynomial.polynomial as poly
plt.style.use('fivethirtyeight')
x=np.array([0,10,20,30,40,50,60,70,80,90])
y=np.array([76,92,106,123,132,151,179,203,227,249])
coeffs=poly.polyfit(x,y,2)
yfit=poly.polyval(x,coeffs)
plt.plot(x,y,'o',x,yfit)
```
```python
import numpy as np
from scipy import constants as const
from scipy import sparse
from scipy.sparse.linalg import eigs
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
hbar=const.hbar
e=const.e
m_e=const.m_e
pi=const.pi
epsilon_0=const.epsilon_0
l=0
n=1000
r,h=np.linspace(1.0e-9,0.0,n,endpoint=False,retstep=True)
coef1=-hbar**2/(2*m_e*h**2)
coef2=lambda r: -e**2/(4*pi*epsilon_0*r)+hbar**2/(2*m_e)*l*(l+1)/r**2
D=-2*np.ones(n)
D_off=np.ones(n-1)
M1=coef1*sparse.diags([D,D_off,D_off],(0,-1,1))
M2=coef2(r)*np.diag(np.ones(n))
H=M1+M2
n_eig=20
eig_val,eig_vec=eigs(H,k=n_eig,which='SM')
indx=np.argsort(eig_val)
eig_val=eig_val[indx]
eig_vec=eig_vec[:,indx]
```
```python
eig_val
```
```python
v=eig_vec.T
P=[np.abs(v[i])**2 for i in range(n_eig)]
plt.figure(figsize=(12,8))
r=r*1e+10
engy=['E={:5.2f} eV'.format(eig_val[i].real/e) for i in range(3)]
engy
plt.plot(r,P[0],'-',lw=2,label=engy[0])
plt.plot(r,P[1],'--',lw=2,label=engy[1])
plt.plot(r,P[2],'-.',lw=2,label=engy[2])
plt.legend()
plt.xlabel('r $\AA$',size=18)
plt.ylabel('Probability density($\AA^{-1})$',size=18)
plt.show()
```
# NUMERICAL SOLUTION OF SCHRODINGER WAVE EQUATION 1D
## Method
The method we will use here is called the ["Finite Difference Method"](https://en.wikipedia.org/wiki/Finite_difference_method) (see the linked Wikipedia article). In this method we will turn the function $\psi(x)$ into a vector, which is a list in Python, and the operator of the differential equation into a *matrix*. We then end up with a matrix eigenequation, which we can diagonalize to get our answer.
### Discretization
The process of discretization is simply turning our continuous space $x$, into a discrete number of steps, $N$, and our function $\psi(x)$ into an array of size $N$. We thus have $N$ values $x_i$, which have a stepsize $h = \Delta x = x_{i+1} - x_i$. Our choice of the *size* of our space, $N$, turns out to be important. Too large a number will slow down our computation and require too much computer memory, too small a number and the answers we compute will not be sufficiently accurate. A common practice is to start with a small number $N$ and then increase it until the accuracy is acceptable. The actual value you obtain in the end will depend on the problem you are studying.
### Forward and Backward first order differential
We first need to develop how we will take the derivative of our function. Going back to our introduction to calculus, we remember that the derivative was defined as:
$$
\frac{d}{dx} f(x) = \lim_{\Delta x \rightarrow 0} \frac{f(x+\Delta x) - f(x)}{\Delta x} \approx \frac{f(x+h) - f(x)}{h} + \mathrm{O}(h)
$$
When we take a *finite difference*, we simply not take the limit all the way down to 0, but stop at $\Delta x = h$. Note that for this equation we evaluate the point just *after* $x$, which we call the *forward difference*. If you actually let $\Delta x \rightarrow 0$, then this does not matter, but if you do a finite difference, you can also do:
$$
\frac{d}{dx} f(x) = \lim_{\Delta x \rightarrow 0} \frac{f(x) - f(x-\Delta x)}{\Delta x} \approx \frac{f(x) - f(x-h)}{h} + \mathrm{O}(h)
$$
which is known as the *backward difference*.
You can also compute a *central difference*, but cannot use steps of $\frac{1}{2}\Delta x$, since that does not exist in our space. The central difference is then a combination of the previous two:
$$
\frac{d}{dx} f(x) = \lim_{\Delta x \rightarrow 0} \frac{f(x+\Delta x) - f(x-\Delta x)}{2\Delta x}\approx \frac{f(x+h) - f(x-h)}{2h} + \mathrm{O}(h^2)
$$
This last one is a little more accurate than the first two.
Note that for any of these approximations to a derivative, we have a problem at the edges of our space. (In Python, C and Java, our space goes from $n=0$ to $n=N-1$, in Fortran $n=1$ to $N$.) Either on one end or the other, there is no $x-\Delta x$ or $x+\Delta x$. Here, we are just not going to worry about this detail.
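A tiny numerical sketch of these error estimates (independent of the matrices built below): for a smooth function the forward difference error shrinks roughly like $h$, while the central difference error shrinks like $h^2$.
```python
import numpy as np

h = 0.01
x0 = 1.0
exact = np.cos(x0)                                    # d/dx sin(x) = cos(x)
forward = (np.sin(x0 + h) - np.sin(x0)) / h
central = (np.sin(x0 + h) - np.sin(x0 - h)) / (2*h)
print("forward difference error:", abs(forward - exact))   # O(h)
print("central difference error:", abs(central - exact))   # O(h^2)
```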
## Time dependent Schrodinger Equation:
## $i\hbar \frac{\partial \psi(x,t)}{\partial t}=-\frac{\hbar^2}{2m}\frac{\partial^2\psi(x,t)}{\partial x^2}+V(x)\psi(x,t)$
```python
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
def step(N,v0):
v=np.zeros(N)
v[int(N/2):]=v0
return v
m=1.0
hbar=1.0
N=1000
T=5000
X,dx=np.linspace(0,N-1,N,retstep=True)
v0=.019
V=step(N,v0)
plt.plot(X,V)
Vmax=V.max()
dt=hbar/(2*hbar**2/(m*dx**2)+Vmax)
c1=hbar*dt/(m*dx**2)
c2=2*dt/hbar
sigma=40
x0=round(N/2)-5*sigma
k0=np.pi/20
E=(hbar**2/2.0/m)*(k0**2+.5/sigma**2)
print('Wavepacket energy',E)
def gauss(x,x0,sigma):
return np.exp(-(x-x0)**2/(2*sigma**2))
n=int(N/2)
x=X[:n]
gg=gauss(x,x0,sigma)
cx=np.cos(k0*x)
sx=np.sin(k0*x)
R=np.zeros((3,N))
I=np.zeros((3,N))
R[1,:n]=gg*cx
I[1,:n]=gg*sx
R[0,:n]=gg*cx
I[0,:n]=gg*sx
P=dx*np.sum(R[1]**2+I[1]**2)
nrm=np.sqrt(P)
R=R/nrm
I=I/nrm   # normalise the imaginary part with the same factor so that the total probability is 1
```
```python
for t in range(T+1):
R0=R[1]
I0=I[1]
I[2,1:-1]=I[0,1:-1]+c1*(R0[2:]-2*R0[1:-1]+R0[:-2])-c2*V[1:-1]*R0[1:-1]
R[2,1:-1]=R[0,1:-1]-c1*(I0[2:]-2*I0[1:-1]+I0[:-2])+c2*V[1:-1]*I0[1:-1]
R[0]=R0
R[1]=R[2]
I[0]=I0
I[1]=I[2]
plt.figure(figsize=(10,8))
prob=R[1]**2+I[1]**2
plt.plot(X,R[1],'-',label='Real')
plt.plot(X,I[1],'--',label='Img')
plt.plot(X,6*prob,lw=2,label='Prob')
plt.plot(X,19*V,'k',lw=3)
plt.legend()
plt.show()
```
```python
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
# Define the number of points in our space, N
N = 128
a = 2*np.pi
# Define the x space from 0 to a with N-1 divisions.
x = np.linspace(0,2*np.pi,N)
# We want to store step size, this is the reliable way:
h = x[1]-x[0] # Should be equal to 2*np.pi/(N-1)
# Compute the function, y = sin(x). With numpy this is easy:
y = np.sin(x)
# We compute the matrix using the np.diag(np.ones(N),0) which creates a
# diagonal matrix of 1 of NxN size. Multiply by -1 to get -1 diagonal array.
# You get an +1 off-diagonal array of ones, with np.diag(np.ones(N-1),1)
# Note that you need N-1 for an NxN array, since the off diagonal is one smaller.
# Add the two together and normalize by 1/h
Md = 1./h*(np.diag( -1.*np.ones(N),0) + np.diag(np.ones(N-1),1))
# Compute the derivative of y into yp by matrix multiplication:
yp = Md.dot(y)
# Plot the results.
plt.figure(figsize=(10,7))
plt.plot(x,y,label='f(x)')
plt.plot(x[:-1],yp[:-1],label='Derivative of f(x)') # Don't plot last value, which is invalid
plt.xlabel('x',size=20)
plt.ylabel('f(x),df(x)/dx',size=30)
plt.legend()
plt.show()
```
```python
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
# Define the number of points in our space, N
N = 128
#a = 2*np.pi
a=5
# Define the x space from 0 to a with N-1 divisions.
x = np.linspace(-5,5,N)
# We want to store step size, this is the reliable way:
h = x[1]-x[0] # Should be equal to 2*np.pi/(N-1)
# Compute the function, y = sin(x). With numpy this is easy:
y = x**2*np.exp(-x**2/2)
# We compute the matrix using the np.diag(np.ones(N),0) which creates a
# diagonal matrix of 1 of NxN size. Multiply by -1 to get -1 diagonal array.
# You get an +1 off-diagonal array of ones, with np.diag(np.ones(N-1),1)
# Note that you need N-1 for an NxN array, since the off diagonal is one smaller.
# Add the two together and normalize by 1/h
Md = 1./h*(np.diag( -1.*np.ones(N),0) + np.diag(np.ones(N-1),1))
# Compute the derivative of y into yp by matrix multiplication:
yp = Md.dot(y)
# Plot the results.
plt.figure(figsize=(10,7))
plt.plot(x,y,label='f(x)')
plt.plot(x[:-1],yp[:-1],label='Derivative of f(x)') # Don't plot last value, which is invalid
plt.xlabel('x',size=20)
plt.ylabel('f(x),df/dx',size=20)
plt.legend()
plt.show()
```
## Second order Differential
We can now extend this method to the second order differential. If we take the backward differential of the result of a forward differential, we get:
$$
\frac{d^2}{dx^2}f(x) = \lim_{\Delta x \rightarrow 0} \frac{f'(x)-f'(x-\Delta x)}{\Delta x} = \lim_{\Delta x \rightarrow 0} \frac{f(x+\Delta x) - f(x) - (f(x) - f(x-\Delta x))}{\Delta x^2} \\
\frac{d^2}{dx^2}f(x) = \lim_{\Delta x \rightarrow 0} \frac{f(x+\Delta x) - 2f(x) + f(x-\Delta x))}{\Delta x^2} \approx \frac{f(x+h) - 2f(x) + f(x-h))}{h^2}
$$
So in the discrete space we can write this as:
$$ f''_i = (f_{i+1} - 2f_i + f_{i-1})/h^2 $$
And finally, as a matrix equation, the second derivative is then:
$$
\begin{pmatrix}f''_0 \\ f''_1 \\ f''_2 \\\vdots \\ f''_{N-1}\end{pmatrix} = \frac{1}{h^2}
\begin{pmatrix} -2 & 1 & 0 & 0 & \\ 1 & -2 & 1 & 0 & \\
0& 1 & -2 & 1 & \\ & & \ddots & \ddots & \ddots &\\
& & & 1 & -2 \end{pmatrix}\begin{pmatrix}f_0 \\ f_1 \\ f_2 \\\vdots \\ f_{N-1}\end{pmatrix}
$$
Where now we note that at both ends of our array we will get an inaccurate answer unless we do some fixup. The fixup in this case is to use the same elements as the row below (at the start) or the row above (at the end), so we get $f''_0 = f''_1$ and $f''_{N-1} = f''_{N-2}$, which is not great but better than the alternative.
We can now try this matrix in Python and compute the second derivative of our $y(x)$ array.
```python
Mdd = 1./(h*h)*(np.diag(np.ones(N-1),-1) + np.diag( -2.*np.ones(N),0) + np.diag(np.ones(N-1),1))
print(Mdd)
ypp = Mdd.dot(y)
plt.figure(figsize=(10,7))
plt.plot(x,y,label='y=f(x)')
plt.plot(x[:-1],yp[:-1],label='Derivative of f(x)') # Last value is invalid, don't plot
plt.plot(x[1:-1],ypp[1:-1],label='Exclude 1st,last value') # First and last value is invalid.
plt.xlabel('x',size=24)
plt.ylabel('f(x),df/dx',size=25)
plt.legend()
plt.show()
```
## Solving the Schrödinger Equation
We can now setup the Schrodinger Equation as a matrix equation:
$$
\hat H = -\frac{\hbar^2}{2m}\frac{d^2}{d x^2} + V \\
\hat H \psi(x) = E \psi(x)
$$
We now know the matrix for taking the second order derivative. The matrix for the potential is simply the values of the potential on the diagonal of the matrix: $\mathbf{V}_{i=j} = V_i$.
Writing out the matrix for $\mathbf{H}$ we get:
$$
\mathbf{H} = \frac{-\hbar^2}{2 m h^2} \begin{pmatrix} -2 & 1 & 0 & 0 & \\ 1 & -2 & 1 & 0 & \\
0& 1 & -2 & 1 & \\ & & \ddots & \ddots & \ddots &\\
& & & 1 & -2 \end{pmatrix} +
\begin{pmatrix} V_0 & 0 & 0 & & \\ 0 & V_1 & 0 & & \\ 0 & 0 & V_2 & & \\ & & &\ddots & \\ &&&&V_{N-1}\end{pmatrix}
$$
It is worth looking at the matrix of the Hamiltonian and notice the symmetry: $\mathbf{H}^T = \mathbf{H}$, so the transpose of the matrix is identical to the matrix. Since the matrix is *real* everywhere, the complex conjugate is also the same: $\mathbf{H}^*=\mathbf{H}$. Combining these two statements, we can say that the Hamiltonian is Hermetian: $\mathbf{H}^\dagger = \mathbf{H}$. We will come back to this later in the course.
### Infinite Square Well
The very simplest system to solve for is the infinite square well, for which $V=0$. We will readily recognize the results as alternating $\cos(x)$ and $\sin(x)$ functions, and the energy levels are:
$$
E_n = \frac{n^2\pi^2\hbar^2}{2ma^2}
$$
First, we need to discuss a subtlety. The Infinite Square Well from $-a/2$ to $a/2$ has $V=\infty$ *at* these points. We get into trouble trying to enter $\infty$ in our potential, so what we need to do is just limit the computational space from $-a/2+h$ to $a/2-h$, where $h$ is our step size. That way we force the wavefunction to zero at the end points.
We compute this in the next box. For simplicity the code below just defines the grid from $-a/2$ to $a/2$ with $N$ points and sets $V=0$ everywhere on it; the finite-difference matrix then implicitly assumes $\psi=0$ one step outside the first and last grid points (Dirichlet boundary conditions), which is exactly what the infinite walls require.
Note I again import everything and setup all the definitions, so this block is stand-alone, and can be copy-pasted into another notebook.
```python
# Infinite square well 16/02/2021
import numpy as np
import matplotlib.pyplot as plt
import scipy.linalg as scl
plt.style.use('fivethirtyeight')
hbar=1
m=1
N = 512
a = 1.0
x = np.linspace(-a/2.,a/2.,N)
# We want to store step size, this is the reliable way:
h = x[1]-x[0] # Should be equal to 2*np.pi/(N-1)
V = 0.*x
Mdd = 1./(h*h)*(np.diag(np.ones(N-1),-1) -2* np.diag(np.ones(N),0) + np.diag(np.ones(N-1),1))
H = -(hbar*hbar)/(2.0*m)*Mdd + np.diag(V)
E,psiT = np.linalg.eigh(H) # This computes the eigen values and eigenvectors
psi = np.transpose(psiT) # We take the transpose of psiT to the wavefunction vectors can accessed as psi[n]
```
```python
plt.figure(figsize=(10,7))
for i in range(5):
if psi[i][N-10] < 0: # Flip the wavefunctions if it is negative at large x, so plots are more consistent.
plt.plot(x,-psi[i]/np.sqrt(h),label="$E_{}$={:>8.3f}".format(i,E[i]))
else:
plt.plot(x,psi[i]/np.sqrt(h),label="$E_{}$={:>8.3f}".format(i,E[i]))
plt.title("Wavefunctions for the Infinite Square Well")
plt.xlabel('x',size=25)
plt.ylabel('$\psi_n(x)$',size=30)
plt.legend()
plt.savefig("Infinite_Square_Well_WaveFunctions.pdf")
plt.show()
```
```python
for i in range(7):
n = i+1
print("E[{}] = {:9.4f}, E_{} ={:9.4f}".format(n,E[i],n, n*n*np.pi**2*hbar*hbar/(2*m*a*a)))
```
```python
# Accuracy check
for j in range(5):
for i in range(5):
print("{:16.9e}".format(np.sum(psi[j]*psi[i])))
```
# For Radial equation of Hydrogen atom
## $\frac{d}{dr}(r^2\frac{dR}{dr})-\frac{2mr^2}{\hbar^2}(V(r)-E)R=l(l+1) R$
## If we put $ u(r)=r R(r) $
## We get $-\frac{\hbar^2}{2m}\frac{d^2u}{dr^2}+(-\frac{e^2}{4\pi \epsilon_0 r}+\frac{\hbar^2}{2m r^2}l(l+1))u=E u$
## $\psi_{n,l,m}(r,\theta,\phi) = \sqrt{\left(\frac{2}{na_0}\right)^{3}\frac{(n-l-1)!}{2n\,[(n+l)!]^{3}}}\; e^{-r/(na_0)}\left(\frac{2r}{na_0}\right)^{l} L_{n-l-1}^{2l+1}\!\left(\frac{2r}{na_0}\right) Y_{l}^{m}(\theta,\phi)$
where $a_0$ is the Bohr radius, the $L$ are the generalized Laguerre polynomials, and $n$, $l$, and $m$ are the principal, azimuthal, and magnetic quantum numbers, respectively.
```python
# Check the result
# This is more or less accurate at lower l value
# Hydrogen atom 16/02/2021
# hbar=1, m=1,e^2/4 pi epsilon=1
# Based on our taken parameter values
# Theoretical value of ground state E0=-.5, but here it comes -.407
import numpy as np
import matplotlib.pyplot as plt
import scipy.linalg as scl
plt.style.use('fivethirtyeight')
hbar=1
m=1
N = 1001
a = 25.0
l=0
x = np.linspace(0.1,a,N)
# We want to store step size, this is the reliable way:
h = x[1]-x[0] # Should be equal to 2*np.pi/(N-1)
V = -1/x + l*(l+1)/(2*x**2)   # Coulomb term plus centrifugal term l(l+1)/(2 r^2) in units hbar = m = 1
Mdd = 1./(h*h)*(np.diag(np.ones(N-1),-1) -2* np.diag(np.ones(N),0) + np.diag(np.ones(N-1),1))
H = -(hbar*hbar)/(2.0*m)*Mdd + np.diag(V)
E,psiT = np.linalg.eigh(H) # This computes the eigen values and eigenvectors
psi = np.transpose(psiT)/x # We take the transpose of psiT to the wavefunction vectors can accessed as psi[n]
plt.figure(figsize=(12,6))
for i in range(3):
plt.plot(x,psi[i]/np.sqrt(h),label="$E_{}$={:>8.3f}".format(i,E[i]))
plt.title("Wavefunctions for Hydrogen Atom: Radial Part")
plt.xlim(0,25)
plt.ylim(-.1,.1)
plt.xlabel('r')
plt.ylabel('$R_{nl}$')
plt.legend()
plt.savefig("HydrogenAtom.pdf")
plt.show()
```
```python
E
```
```python
# BETTER CODE TO PLOT THE HYDROGEN RADIAL WAVE FUNCTIONs
import numpy as np
from scipy import constants as const
from scipy import sparse as sparse
from scipy.sparse.linalg import eigs
from matplotlib import pyplot as plt
plt.style.use('fivethirtyeight')
hbar = const.hbar
e = const.e
m_e = const.m_e
pi = const.pi
epsilon_0 = const.epsilon_0
joul_to_eV = e
def calculate_potential_term(r):
potential = e**2 / (4.0 * pi * epsilon_0) / r
potential_term = sparse.diags((potential))
return potential_term
def calculate_angular_term(r):
angular = l * (l + 1) / r**2
angular_term = sparse.diags((angular))
return angular_term
def calculate_laplace_three_point(r):
h = r[1] - r[0]
main_diag = -2.0 / h**2 * np.ones(N)
off_diag = 1.0 / h**2 * np.ones(N - 1)
laplace_term = sparse.diags([main_diag, off_diag, off_diag], (0, -1, 1))
return laplace_term
def build_hamiltonian(r):
laplace_term = calculate_laplace_three_point(r)
angular_term = calculate_angular_term(r)
potential_term = calculate_potential_term(r)
hamiltonian = -hbar**2 / (2.0 * m_e) * (laplace_term - angular_term) - potential_term
return hamiltonian
N = 2000
l = 0
r = np.linspace(2e-9, 0.0, N, endpoint=False)
hamiltonian = build_hamiltonian(r)
""" solve eigenproblem """
number_of_eigenvalues = 30
eigenvalues, eigenvectors = eigs(hamiltonian, k=number_of_eigenvalues, which='SM')
""" sort eigenvalue and eigenvectors """
eigenvectors = np.array([x for _, x in sorted(zip(eigenvalues, eigenvectors.T), key=lambda pair: pair[0])])
eigenvalues = np.sort(eigenvalues)
""" compute probability density for each eigenvector """
densities = [np.absolute(eigenvectors[i, :])**2 for i in range(len(eigenvalues))]
plt.figure(figsize=(12,6))
def plot(r, densities, eigenvalues):
plt.xlabel('r ($\\mathrm{\AA}$)')
plt.ylabel('probability density ($\\mathrm{\AA}^{-1}$)')
energies = ['E = {: >5.2f} eV'.format(eigenvalues[i].real / e) for i in range(3)]
plt.plot(r * 1e+10, densities[0], color='blue', label=energies[0])
plt.plot(r * 1e+10, densities[1], color='green', label=energies[1])
plt.plot(r * 1e+10, densities[2], color='red', label=energies[2])
plt.legend()
plt.show()
return
""" plot results """
plot(r, densities, eigenvalues)
```
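As a consistency check (a sketch; it assumes `eigenvalues` and `e` from the cell above are still in memory), the lowest $l=0$ levels should follow the Bohr formula $E_n = -13.6\,\mathrm{eV}/n^2$, up to the finite-box and finite-difference errors:
```python
# Compare the three lowest numerical l = 0 levels with the Bohr formula.
for n_q in range(1, 4):
    E_num = eigenvalues[n_q - 1].real / e        # numerical eigenvalue in eV
    E_bohr = -13.6057 / n_q**2                   # Bohr formula in eV
    print("n = {}: numerical {:8.3f} eV, Bohr {:8.3f} eV".format(n_q, E_num, E_bohr))
```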
```python
# BETTER CODE TO PLOT THE HYDROGEN RADIAL WAVE FUNCTIONs
# 21/03/2021
import numpy as np
from scipy import constants as const
from scipy import sparse as sparse
from scipy.sparse.linalg import eigs
from matplotlib import pyplot as plt
plt.style.use('fivethirtyeight')
hbar = const.hbar
e = const.e
m_e = const.m_e
pi = const.pi
epsilon_0 = const.epsilon_0
joul_to_eV = e
def calculate_potential_term(r):
a=1e-8
potential = np.exp(-a*r)*e**2 / (4.0 * pi * epsilon_0) / r
potential_term = sparse.diags((potential))
return potential_term
def calculate_angular_term(r):
angular = l * (l + 1) / r**2
angular_term = sparse.diags((angular))
return angular_term
def calculate_laplace_three_point(r):
h = r[1] - r[0]
main_diag = -2.0 / h**2 * np.ones(N)
off_diag = 1.0 / h**2 * np.ones(N - 1)
laplace_term = sparse.diags([main_diag, off_diag, off_diag], (0, -1, 1))
return laplace_term
def build_hamiltonian(r):
laplace_term = calculate_laplace_three_point(r)
angular_term = calculate_angular_term(r)
potential_term = calculate_potential_term(r)
hamiltonian = -hbar**2 / (2.0 * m_e) * (laplace_term - angular_term) - potential_term
return hamiltonian
N = 2000
l = 0
r = np.linspace(2e-9, 0.0, N, endpoint=False)
hamiltonian = build_hamiltonian(r)
""" solve eigenproblem """
number_of_eigenvalues = 30
eigenvalues, eigenvectors = eigs(hamiltonian, k=number_of_eigenvalues, which='SM')
""" sort eigenvalue and eigenvectors """
eigenvectors = np.array([x for _, x in sorted(zip(eigenvalues, eigenvectors.T), key=lambda pair: pair[0])])
eigenvalues = np.sort(eigenvalues)
""" compute probability density for each eigenvector """
densities = [np.absolute(eigenvectors[i, :])**2 for i in range(len(eigenvalues))]
plt.figure(figsize=(12,6))
def plot(r, densities, eigenvalues):
plt.xlabel('r ($\\mathrm{\AA}$)')
plt.ylabel('probability density ($\\mathrm{\AA}^{-1}$)')
energies = ['E = {: >5.2f} eV'.format(eigenvalues[i].real / e) for i in range(3)]
plt.plot(r * 1e+10, densities[0], color='blue', label=energies[0])
plt.plot(r * 1e+10, densities[1], color='green', label=energies[1])
plt.plot(r * 1e+10, densities[2], color='red', label=energies[2])
plt.legend()
plt.show()
return
""" plot results """
plot(r, densities, eigenvalues)
```
```python
import matplotlib as mpl # matplotlib library for plotting and visualization
import matplotlib.pylab as plt # matplotlib library for plotting and visualization
import numpy as np #numpy library for numerical manipulation, especially suited for data arrays
import warnings
warnings.filterwarnings('ignore')
import sys # checking the version of Python
import IPython # checking the version of IPython
print("Python version = {}".format(sys.version))
print("IPython version = {}".format(IPython.__version__))
print("Matplotlib version = {}".format(plt.__version__))
print("Numpy version = {}".format(np.__version__))
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
def wavefn(mhdx2,psi,vi,E):
N=len(psi)
    psiE=[psi[i] for i in range(N)]
for i in range(2,N):
psiE[i]=2*(mhdx2*(vi[i]-E)+1)*psiE[i-1]-psiE[i-2]
return psiE
def Sch(mhdx2,vi,psi0,psi1,psiN,nodes,mxItr):
N=len(vi)-1
Emx=max(vi)
Emn=min(vi)
psiIn=[0 for i in range(N+1)]
psiIn[0],psiIn[1],psiIn[N]=psi0,psi1,psiN
itr=0
while abs(Emx-Emn)>1e-6 and itr<mxItr:
E=.5*(Emx+Emn)
psi=wavefn(mhdx2,psiIn,vi,E)
cnt=0
for i in range(1,N-2):
if psi[i]*psi[i+1]<0:
cnt+=1
if cnt>nodes:
Emx=E
elif cnt<nodes:
Emn=E
else:
if psi[N-1]>psi[N]:
Emn=E
elif psi[N-1]<psi[N]:
Emx=E
itr+=1
if itr<mxItr:
return E,psi
else:
return None,None
# Simpson 1/3 rule for discrete function
def simp13dis(h,fx):
n=len(fx)
I=0
for i in range(n):
        if i==0 or i==n-1:   # endpoints carry weight 1 in Simpson's rule
I+=fx[i]
elif i%2!=0:
I+=4*fx[i]
else:
I+=2*fx[i]
I=I*h/3
return I
# Normalisation of discrete wavefunction
def norm(psi,dx):
N=len(psi)
psi2=[psi[i]**2 for i in range(N)]
psimod2=simp13dis(dx,psi2)
normpsi=[psi[i]/psimod2**0.5 for i in range(N)]
return normpsi
```
## Matrix Method
## 1D Harmonic Oscillator
```python
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
x=np.linspace(3,4.5,100)
y=(2*x+7)/(x-4)
plt.figure(figsize=(12,6))
plt.plot(x,y)
```
```python
# Matrix method to solve 1D Harmonic Oscillator schrodinger equation
# units used here m (mas)=1,hbar=1
# Date 15/02/2021
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
def Vpot(x):
return x**2
a = float(input('enter lower limit of the domain: '))
b = float(input('enter upper limit of the domain: '))
N = int(input('enter number of grid points: '))
x = np.linspace(a,b,N)
h = x[1]-x[0]
T = np.zeros((N-2)**2).reshape(N-2,N-2)
for i in range(N-2):
for j in range(N-2):
if i==j:
T[i,j]= -2
elif np.abs(i-j)==1:
T[i,j]=1
else:
T[i,j]=0
V = np.zeros((N-2)**2).reshape(N-2,N-2)
for i in range(N-2):
for j in range(N-2):
if i==j:
V[i,j]= Vpot(x[i+1])
else:
V[i,j]=0
# To create hamiltonian matrix
H = -T/(2*h**2) + V
# To find eigenvalues
val,vec=np.linalg.eig(H)
z = np.argsort(val)
z = z[0:4]
energies=(val[z]/val[z][0])
print(energies)
plt.figure(figsize=(12,10))
for i in range(len(z)):
y = []
y = np.append(y,vec[:,z[i]])
y = np.append(y,0)
y = np.insert(y,0,0)
plt.plot(x,y,lw=3, label="$E_{}={} $".format(i,energies[i]))
plt.xlabel('x', size=24)
plt.ylabel('$\psi$(x)',size=24)
plt.legend()
plt.title('Normalized Wavefunctions for a harmonic oscillator',size=14)
plt.show()
```
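A quick check (a sketch, relying on `val` from the cell above and a reasonably wide grid such as $-5 \le x \le 5$): with $V(x)=x^2$ and $\hbar=m=1$ the Hamiltonian built above is $H=-\tfrac{1}{2}\frac{d^2}{dx^2}+x^2$, i.e. an oscillator with $\omega=\sqrt{2}$, so the exact levels are $E_n=\sqrt{2}\,(n+\tfrac{1}{2})$ and the printed ratios should approach 1, 3, 5, 7.
```python
# Compare the four lowest numerical eigenvalues with sqrt(2)*(n + 1/2).
lowest = np.sort(val)[:4]
exact = [np.sqrt(2.0)*(n + 0.5) for n in range(4)]
for n, (num, ex) in enumerate(zip(lowest, exact)):
    print("n = {}: numerical {:8.4f}, exact {:8.4f}".format(n, num, ex))
```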
```python
plt.figure(figsize=(12,10))
for i in range(len(z)):
y = []
y = np.append(y,vec[:,z[i]])
y = np.append(y,0)
y = np.insert(y,0,0)
plt.plot(x,y,lw=3, label="$E_{}={} $".format(i,energies[i]))
plt.xlabel('x', size=24)
plt.ylabel('$\psi_n$(x)',size=24)
plt.legend()
plt.title('Normalized Wavefunctions for a harmonic oscillator',size=14)
plt.show()
```
```python
```
```python
# Matrix method to solve finite potential well schrodinger equation
# units used here m (mas)=1,hbar=1
# Date 15/02/2021
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
def Vpot(pr,x):
L0,L1,V0=pr
if x>L0 and x<L1:
pot=V0
else:
pot=0
return pot
a = float(input('enter lower limit of the domain: '))
b = float(input('enter upper limit of the domain: '))
N = int(input('enter number of grid points: '))
x = np.linspace(a,b,N)
h = x[1]-x[0]
T = np.zeros((N-2)**2).reshape(N-2,N-2)
for i in range(N-2):
for j in range(N-2):
if i==j:
T[i,j]= -2
elif np.abs(i-j)==1:
T[i,j]=1
else:
T[i,j]=0
V = np.zeros((N-2)**2).reshape(N-2,N-2)
pr=[-1,1,-1]
for i in range(N-2):
for j in range(N-2):
if i==j:
V[i,j]= Vpot(pr,x[i+1])
else:
V[i,j]=0
# To create hamiltonian matrix
H = -T/(2*h**2) + V
# To find eigenvalues
val,vec=np.linalg.eig(H)
z = np.argsort(val)
z = z[0:4]
energies=(val[z]/val[z][0])
print(energies)
plt.figure(figsize=(12,10))
for i in range(len(z)):
y = []
y = np.append(y,vec[:,z[i]])
y = np.append(y,0)
y = np.insert(y,0,0)
plt.plot(x,y,lw=3, label="$E_{} ={}$".format(i,energies[i]))
plt.xlabel('x', size=24)
plt.ylabel('$\psi$(x)',size=24)
plt.legend()
plt.title('Normalized WaveFunctions for a finite well potential',size=14)
plt.show()
```
```python
# Anharmonic Oscilator
# units used here m (mas)=1,hbar=1
# Date 15/02/2021
import matplotlib.pyplot as plt
plt.style.use('seaborn-dark')
def Vpot(x):
return x**2+(1/4)*x**4
a = float(input('enter lower limit of the domain: '))
b = float(input('enter upper limit of the domain: '))
N = int(input('enter number of grid points: '))
x = np.linspace(a,b,N)
h = x[1]-x[0]
T = np.zeros((N-2)**2).reshape(N-2,N-2)
for i in range(N-2):
for j in range(N-2):
if i==j:
T[i,j]= -2
elif np.abs(i-j)==1:
T[i,j]=1
else:
T[i,j]=0
V = np.zeros((N-2)**2).reshape(N-2,N-2)
for i in range(N-2):
for j in range(N-2):
if i==j:
V[i,j]= Vpot(x[i+1])
else:
V[i,j]=0
# To create hamiltonian matrix
H = -T/(2*h**2) + V
# To find eigenvalues
val,vec=np.linalg.eig(H)
z = np.argsort(val)
z = z[0:4]
energies=(val[z]/val[z][0])
print(energies)
plt.figure(figsize=(12,10))
for i in range(len(z)):
y = []
y = np.append(y,vec[:,z[i]])
y = np.append(y,0)
y = np.insert(y,0,0)
plt.plot(x,y,lw=3, label="$E_{}={} $".format(i,energies[i]))
plt.xlabel('x', size=24)
plt.ylabel('$\psi_n$(x)',size=24)
plt.legend()
plt.title('Normalized Wavefunctions for a anharmonic oscillator',size=14)
plt.show()
```
### Linear Potential
#### Symmetric Case
For the symmetric case, we have the potential:
$$
V(x) = \lambda \left| x \right| = \left\{\begin{matrix} \lambda x,& \mathrm{if\ }& x\ge 0 \\ -\lambda x,& \mathrm{if\ }& x<0 \end{matrix} \right.
$$
For the one-sided case, we have the potential:
$$
V(x) = \left\{\begin{matrix} \lambda x,& \mathrm{if\ }& x\ge 0 \\ \infty,& \mathrm{if\ }& x<0 \end{matrix} \right.
$$
We set $\lambda = 1$ when we do the calculations. Note that to get the potential to go to $\infty$, we need to set the $x$-axis to go from 0. to $a/2$ with $N/2$ points, instead of from $-a/2$ to $a/2$ with $N$ points.
```python
# Application to linear potential
import numpy as np
import matplotlib.pyplot as plt
import scipy.linalg as scl
plt.style.use('fivethirtyeight')
hbar=1
m=1
N = 4096
a = 15.0
```
```python
# This is for the symmetric linear potential
xs = np.linspace(-a/2.,a/2.,N)
Vs = np.abs(xs)
# This is for the one-sided linear potential
xo = np.linspace(0.,a/2.,int(N/2))
Vo = np.abs(xo)
# Make Plots
fig1 = plt.figure(figsize=(8,6))
# plt.xkcd() # Set hand drawn looking style
#plt.xticks([]) # And remove x and y ticks.
#plt.yticks([]) # For plotting.
plt.plot([0,0],[-2,a/2.],color="blue") # Draw the axes in blue.
plt.plot([-a/2.,a/2.],[0,0],color="blue")
plt.plot(xs,Vs,color="green") # Plot the potential in green
plt.title("Symmetric Linear Potential")
plt.savefig("Symmetric_Linear_potential.pdf")
plt.xlabel('x',size=24)
plt.ylabel('V(x)',size=24)
plt.show()
#
# Now plot the one-sided case
#
fig1 = plt.figure(figsize=(8,6))
#plt.xticks([])
#plt.yticks([])
plt.plot([0,0],[-2,a/2.],color="blue")
plt.plot([0,a/2.],[0,0],color="blue")
plt.plot([0,0],[0,a/2.],color="green") # Plot the infinity side.
plt.plot(xo,Vo,color="green")
plt.title("One Sided Linear Potential")
plt.savefig("Onesided_Linear_potential.pdf")
plt.xlabel('x',size=24)
plt.ylabel('V(x)',size=24)
plt.show()
```
```python
# This is for the Symmetric linear potential case.
hs = xs[1]-xs[0] # Should be equal to 2*np.pi/(N-1)
Mdds = 1./(hs*hs)*(np.diag(np.ones(N-1),-1) -2* np.diag(np.ones(N),0) + np.diag(np.ones(N-1),1))
Hs = -(hbar*hbar)/(2.0*m)*Mdds + np.diag(Vs)
Es,psiTs = np.linalg.eigh(Hs) # This computes the eigen values and eigenvectors
psis = np.transpose(psiTs)
# We now have the eigen vectors as psi(i), where i is the energy level.
print(np.sum(psis[0]*psis[0])) # Check. Yes these are normalized already.
```
```python
# This is for the One sided case.
ho = xo[1]-xo[0] # Should be equal to 2*np.pi/(N-1)
Mddo = 1./(ho*ho)*(np.diag(np.ones(int(N/2)-1),-1) -2* np.diag(np.ones(int(N/2)),0) + np.diag(np.ones(int(N/2)-1),1))
Ho = -(hbar*hbar)/(2.0*m)*Mddo + np.diag(Vo)
Eo,psiTo = np.linalg.eigh(Ho) # This computes the eigen values and eigenvectors
psio = np.transpose(psiTo)
# We now have the eigen vectors as psi(i), where i is the energy level.
print(np.sum(psio[0]*psio[0])) # Check. Yes these are normalized already.
# print psiT[0] # Uncomment to see the values printed for Psi_0
```
```python
plt.figure(figsize=(10,6))
plt.plot(xs,0.1*Vs,color="grey",label="Potential: 0.1V(x)")
plt.ylim((-0.9,0.9))
for i in range(6):
if psis[i,N-10]<0:
plt.plot(xs,-np.real(psis[i])/np.sqrt(hs),label="$E_{}={:8.4f}$".format(i,Es[i]))
else:
plt.plot(xs,np.real(psis[i])/np.sqrt(hs),label="$E_{}={:8.4f}$".format(i,Es[i]))
plt.legend()
plt.title("Wavefunctions for the Symmetric Linear Potential")
plt.xlabel("x",size=24)
plt.ylabel("$\psi_n(x)$",size=24)
plt.savefig("Linear_Potential_Wavefunctions.pdf")
plt.show()
```
```python
print("Symmetric Case \t One-sided Case")
for n in range(12):
if n%2==1:
no = int((n-1)/2)
print("Es[{}] = {:9.4f}\t Eo[{}] ={:9.4f}".format(n,Es[n],no, Eo[no]))
else:
print("Es[{}] = {:9.4f} ".format(n,Es[n]))
```
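A useful benchmark (a sketch; it assumes `Eo` from the cells above): for the one-sided linear potential the exact spectrum is $E_n = |a_n|\,(\hbar^2\lambda^2/2m)^{1/3}$, where $a_n$ are the zeros of the Airy function $\mathrm{Ai}$; with $\hbar=m=\lambda=1$ this reduces to $E_n = |a_n|/2^{1/3}$.
```python
from scipy.special import ai_zeros

a_n = ai_zeros(5)[0]                    # first five zeros of Ai (all negative)
E_exact = -a_n / 2.0**(1.0/3.0)
for i in range(5):
    print("n = {}: exact {:8.4f}, numerical {:8.4f}".format(i, E_exact[i], Eo[i]))
```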
```python
# First six wavefunctions
plt.figure(figsize=(10,6))
plt.plot(xo,0.1*Vo,color="grey",label="Potential: 0.1V(x)")
plt.ylim((-0.9,0.9))
for i in range(6):
if psio[i,int(N/2)-10]<0:
plt.plot(xo,-psio[i]/np.sqrt(ho),label="$E_{}={}$".format(i,Eo[i]))
else:
plt.plot(xo,psio[i]/np.sqrt(ho),label="$E_{}={}$".format(i,Eo[i]))
plt.legend()
plt.title("Wavefunctions for the One Sided Linear Potential")
plt.xlabel("x")
plt.ylabel("$\psi_n(x)$")
plt.savefig("One sided Wavefunctions.pdf")
plt.show()
```
```python
# v(r)=-e^2/r*(exp(-r/a))
```
```python
# Matrix method to solve 1D schrodinger equation
# units used here m (mas)=1,hbar=1
# v(r)=-e^2/r*(exp(-r/a))
# Date 15/02/2021
import matplotlib.pyplot as plt
import numpy as np
plt.style.use('fivethirtyeight')
def Vpot(x):
a=7.5
e2=1
return (-e2/x)*(np.exp(-x/a))
a = float(input('enter lower limit of the domain: '))
b = float(input('enter upper limit of the domain: '))
N = int(input('enter number of grid points: '))
x = np.linspace(a,b,N)
h = x[1]-x[0]
T = np.zeros((N-2)**2).reshape(N-2,N-2)
for i in range(N-2):
for j in range(N-2):
if i==j:
T[i,j]= -2
elif np.abs(i-j)==1:
T[i,j]=1
else:
T[i,j]=0
V = np.zeros((N-2)**2).reshape(N-2,N-2)
for i in range(N-2):
for j in range(N-2):
if i==j:
V[i,j]= Vpot(x[i+1])
else:
V[i,j]=0
# To create hamiltonian matrix
H = -T/(2*h**2) + V
# To find eigenvalues
val,vec=np.linalg.eig(H)
z = np.argsort(val)
z = z[0:4]
energies=(val[z]/val[z][0])
print(energies)
plt.figure(figsize=(12,10))
for i in range(len(z)):
y = []
y = np.append(y,vec[:,z[i]])
y = np.append(y,0)
y = np.insert(y,0,0)
plt.plot(x,y,lw=3, label="$E_{}={} $".format(i,energies[i]))
plt.xlabel('x', size=24)
plt.ylabel('$\psi$(x)',size=24)
plt.legend()
plt.title('Wavefunctions for Spherically Symmetric Potential',size=14)
plt.show()
```
```python
# Matrix method to solve 1D spherically symmetric potential
# units used here m (mas)=1,hbar=1
# Date 15/02/2021
import matplotlib.pyplot as plt
import numpy as np
plt.style.use('fivethirtyeight')
def Vpot(x):
a=7.5
e2=1
return (-e2/x)*(np.exp(-x/a))
a = float(input('enter lower limit of the domain: '))
b = float(input('enter upper limit of the domain: '))
N = int(input('enter number of grid points: '))
x = np.linspace(a,b,N)
h = x[1]-x[0]
T = np.zeros((N-2)**2).reshape(N-2,N-2)
for i in range(N-2):
for j in range(N-2):
if i==j:
T[i,j]= -2
elif np.abs(i-j)==1:
T[i,j]=1
else:
T[i,j]=0
V = np.zeros((N-2)**2).reshape(N-2,N-2)
for i in range(N-2):
for j in range(N-2):
if i==j:
V[i,j]= Vpot(x[i+1])
else:
V[i,j]=0
# To create hamiltonian matrix
H = -T/(2*h**2) + V
# To find eigenvalues
val,vec=np.linalg.eig(H)
z = np.argsort(val)
z = z[0:4]
energies=(val[z]/val[z][0])
print(energies)
plt.figure(figsize=(12,10))
for i in range(len(z)):
y = []
y = np.append(y,vec[:,z[i]])
y = np.append(y,0)
y = np.insert(y,0,0)
plt.plot(x,y,lw=3, label="{} ".format(i))
plt.xlabel('x', size=14)
plt.ylabel('$\psi$(x)',size=14)
plt.legend()
plt.title('Normalized Wavefunctions for spherically symmetric potential',size=14)
plt.show()
```
```python
# Single codes for solving many problems
import numpy as np
import matplotlib.pyplot as plt
from scipy import sparse
from scipy.sparse import linalg as sla
plt.style.use('fivethirtyeight')
def schrodinger1D(xmin, xmax, Nx, Vfun, params, neigs=20, findpsi=False):
# for this code we are using Dirichlet Boundary Conditions
x = np.linspace(xmin, xmax, Nx) # x axis grid
dx = x[1] - x[0] # x axis step size
# Obtain the potential function values:
V = Vfun(x, params)
# create the Hamiltonian Operator matrix:
H = sparse.eye(Nx, Nx, format='lil') * 2
for i in range(Nx - 1):
#H[i, i] = 2
H[i, i + 1] = -1
H[i + 1, i] = -1
#H[-1, -1] = 2
H = H / (dx ** 2)
# Add the potential into the Hamiltonian
for i in range(Nx):
H[i, i] = H[i, i] + V[i]
# convert to csc matrix format
H = H.tocsc()
# obtain neigs solutions from the sparse matrix
[evl, evt] = sla.eigs(H, k=neigs, which='SM')
for i in range(neigs):
# normalize the eigen vectors
evt[:, i] = evt[:, i] / np.sqrt(
np.trapz(np.conj(
evt[:, i]) * evt[:, i], x))
# eigen values MUST be real:
evl = np.real(evl)
if findpsi == False:
return evl
else:
return evl, evt, x
def eval_wavefunctions(xmin,xmax,Nx,Vfun,params,neigs,findpsi=True):
"""
Evaluates the wavefunctions given a particular potential energy function Vfun
Inputs
------
xmin: float
minimum value of the x axis
xmax: float
maximum value of the x axis
Nx: int
number of finite elements in the x axis
Vfun: function
potential energy function
params: list
list containing the parameters of Vfun
neigs: int
number of eigenvalues to find
findpsi: bool
If True, the eigen wavefunctions will be calculated and returned.
If False, only the eigen energies will be found.
"""
H = schrodinger1D(xmin, xmax, Nx, Vfun, params, neigs, findpsi)
evl = H[0] # energy eigen values
indices = np.argsort(evl)
print("Energy eigenvalues in units of E0:")
for i,j in enumerate(evl[indices]):
print("{}: {:.2f}".format(i+1,j))
evt = H[1] # eigen vectors
x = H[2] # x dimensions
i = 0
plt.figure(figsize=(8,8))
while i < neigs:
n = indices[i]
y = np.real(np.conj(evt[:, n]) * evt[:, n])
plt.subplot(neigs, 1, i+1)
plt.plot(x, y)
plt.axis('off')
i = i + 1
plt.show()
def sho_wavefunctions_plot(xmin=-10, xmax=10, Nx=500, neigs=20, params=[1]):
"""
Plots the 1D quantum harmonic oscillator wavefunctions.
Inputs
------
xmin: float
minimum value of the x axis
xmax: float
maximum value of the x axis
Nx: int
number of finite elements in the x axis
neigs: int
number of eigenvalues to find
params: list
list containing the parameters of Vfun
"""
def Vfun(x, params):
V = params[0] * x**2
return V
eval_wavefunctions(xmin,xmax,Nx,Vfun,params,neigs,True)
def infinite_well_wavefunctions_plot(xmin=-10, xmax=10, Nx=500, neigs=20, params=1e10):
"""
Plots the 1D infinite well wavefunctions.
Inputs
------
xmin: float
minimum value of the x axis
xmax: float
maximum value of the x axis
Nx: int
number of finite elements in the x axis
neigs: int
number of eigenvalues to find
params: float
parameter of Vfun
"""
def Vfun(x, params):
V = x*0
V[:100]=params
V[-100:]=params
return V
eval_wavefunctions(xmin,xmax,Nx,Vfun,params,neigs,True)
def double_well_wavefunctions_plot(xmin=-10, xmax=10, Nx=500, neigs=20, params=[-0.5,0.01]):
"""
Plots the 1D double well wavefunctions.
Inputs
------
xmin: float
minimum value of the x axis
xmax: float
maximum value of the x axis
Nx: int
number of finite elements in the x axis
neigs: int
number of eigenvalues to find
params: list
list of parameters of Vfun
"""
def Vfun(x, params):
A = params[0]
B = params[1]
V = A * x ** 2 + B * x ** 4
return V
eval_wavefunctions(xmin,xmax,Nx,Vfun,params,neigs,True)
plt.style.use('fivethirtyeight')
infinite_well_wavefunctions_plot(xmin=-10, xmax=10, Nx=500, neigs=2, params=1e10)
```
```python
plt.style.use('fivethirtyeight')
infinite_well_wavefunctions_plot(xmin=-10, xmax=10, Nx=500, neigs=3, params=1e10)
```
```python
double_well_wavefunctions_plot(xmin=-10, xmax=10, Nx=500, neigs=3, params=[-0.5,0.01])
```
```python
# Harmonic Oscillator
sho_wavefunctions_plot(xmin=-10, xmax=10, Nx=500, neigs=5, params=[1])
```
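One more cross-check (a sketch): `schrodinger1D` above uses the kinetic operator $-\frac{d^2}{dx^2}$ without the factor $\tfrac{1}{2}$, so with $V(x)=x^2$ its exact eigenvalues are $E_n = 2n+1$.
```python
# The five lowest eigenvalues should be close to 1, 3, 5, 7, 9.
evl_check = schrodinger1D(-10, 10, 500, lambda x, p: p[0]*x**2, [1], neigs=5)
print(np.sort(np.real(evl_check)))
```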
# NUMERICAL SOLUTION OF SCHRODINGER WAVE EQUATION 1D
# $-\frac{\hbar^2}{2m} \frac{d^2\psi}{dx^2}+V(x)\psi =E\psi $
## Finite Potential well
```python
# Normalisation of discrete wavefunction
# Finite Potential well
def norm(psi,dx):
N=len(psi)
psi2=[psi[i]**2 for i in range(N)]
psimod2=simp13dis(dx,psi2)
normpsi=[psi[i]/psimod2**0.5 for i in range(N)]
return normpsi
import matplotlib.pyplot as plt
# Solution of Schrodinger equation:
def wavefn(mhdx2,psi,vi,E):
N=len(psi)
psiE=[psi[i] for i in range(N)]
for i in range(2,N):
psiE[i]=2*(mhdx2*(vi[i]-E)+1)*psiE[i-1]-psiE[i-2]
return psiE
def Sch(mhdx2,vi,psi0,psi1,psiN,nodes,mxItr):
N=len(vi)-1
Emx=max(vi)
Emn=min(vi)
psiIn=[0 for i in range(N+1)]
psiIn[0],psiIn[1],psiIn[N]=psi0,psi1,psiN
itr=0
while abs(Emx-Emn)>1e-6 and itr<mxItr:
E=.5*(Emx+Emn)
psi=wavefn(mhdx2,psiIn,vi,E)
#For Plotting
plt.title(r'$E_{mx}=%.3f,E_{mn}=%.3f$'%(Emx,Emn))
plt.ylim(-1.02,0.7)
plt.plot(x,psi,'k',label=r'$\psi(x)$ for E=%.4f'%E)
plt.plot(x,vi,'k:',label='Potential')
plt.plot(x[-1],psiN,'k.',label='Far end boundary Point')
plt.xlabel('x')
plt.legend(loc='upper left')
plt.pause(1.0)
plt.clf()
# Node counting
cnt=0
for i in range(1,N-2):
if psi[i]*psi[i+1]<0:
cnt+=1
if cnt>nodes:
Emx=E
elif cnt<nodes:
Emn=E
else:
if psi[N-1]>psi[N]:
Emn=E
elif psi[N-1]<psi[N]:
Emx=E
itr+=1
if itr<mxItr:
return E,psi
else:
return None,None
# Define Square wave potential
def V(pr,x):
L0,L1,V0=pr
if x>L0 and x<L1:
pot=V0
else:
pot=0
return pot
hbar,m=.1,1
x0,xN=-1.4,1.4
dx=.01
mxItr=100
psi0,psiN=0,0
L0,L1,V0=-1,1,-1
prV=[L0,L1,V0]
mhdx2=m*dx**2/hbar**2
N=int((xN-x0)/dx)
dx=(xN-x0)/N
x=[x0+i*dx for i in range(N+1)]
Vi=[V(prV,x[i]) for i in range(N+1)]
nodes=2
psi1=(-1)**nodes*1e-4
E,psi=Sch(mhdx2,Vi,psi0,psi1,psiN,nodes,mxItr)
```
```python
# Finite Well potential
# Simpson 1/3 rule for discrete function
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
def simp13dis(h,fx):
n=len(fx)
I=0
for i in range(n):
        if i==0 or i==n-1:   # endpoints carry weight 1 in Simpson's rule
I+=fx[i]
elif i%2!=0:
I+=4*fx[i]
else:
I+=2*fx[i]
I=I*h/3
return I
# Normalisation of discrete wavefunction
def norm(psi,dx):
N=len(psi)
psi2=[psi[i]**2 for i in range(N)]
psimod2=simp13dis(dx,psi2)
normpsi=[psi[i]/psimod2**0.5 for i in range(N)]
return normpsi
import matplotlib.pyplot as plt
# Solution of Schrodinger equation:
def wavefn(mhdx2,psi,vi,E):
N=len(psi)
psiE=[psi[i] for i in range(N)]
for i in range(2,N):
psiE[i]=2*(mhdx2*(vi[i]-E)+1)*psiE[i-1]-psiE[i-2]
return psiE
def Sch(mhdx2,vi,psi0,psi1,psiN,nodes,mxItr):
N=len(vi)-1
Emx=max(vi)
Emn=min(vi)
psiIn=[0 for i in range(N+1)]
psiIn[0],psiIn[1],psiIn[N]=psi0,psi1,psiN
itr=0
while abs(Emx-Emn)>1e-6 and itr<mxItr:
E=.5*(Emx+Emn)
psi=wavefn(mhdx2,psiIn,vi,E)
# Node counting
cnt=0
for i in range(1,N-2):
if psi[i]*psi[i+1]<0:
cnt+=1
if cnt>nodes:
Emx=E
elif cnt<nodes:
Emn=E
else:
if psi[N-1]>psi[N]:
Emn=E
elif psi[N-1]<psi[N]:
Emx=E
itr+=1
if itr<mxItr:
return E,psi
else:
return None,None
# Define Square wave potential
def V(pr,x):
L0,L1,V0=pr
if x>L0 and x<L1:
pot=V0
else:
pot=0
return pot
hbar,m=.1,1
x0,xN=-1.4,1.4
dx=.01
mxItr=100
psi0,psiN=0,0
L0,L1,V0=-1,1,-1
prV=[L0,L1,V0]
mhdx2=m*dx**2/hbar**2
N=int((xN-x0)/dx)
dx=(xN-x0)/N
x=[x0+i*dx for i in range(N+1)]
Vi=[V(prV,x[i]) for i in range(N+1)]
stln=['k','k--','k:','k.-']
for nodes in range(4):
psi1=(-1)**nodes*1e-4
E,psi=Sch(mhdx2,Vi,psi0,psi1,psiN,nodes,mxItr)
if E!=None:
psi=norm(psi,dx)
#for Plot
#plt.figure(figsize=(12,6))
plt.plot(x,psi,stln[nodes],label=r'E=%.3f $\psi_%d(x)$'%(E,nodes))
#plt.figure(figsize=(12,6))
plt.plot(x,Vi,'k',label='Potential')
plt.legend()
plt.xlabel('x',size=24)
plt.ylabel('$\psi(x)$',size=24)
plt.show()
```
## Theory of Finite Potential Well:
## Behaviour inside the box
## $\dfrac{d^2\psi(x)}{d x^2} = -\dfrac{2mE}{\hbar^2} \psi(x) =-k^2 \psi(x)$
## $\psi(x) = e^{\pm ikx}$
## $\psi(x) = q_1 \cos(kx) + q_2 \sin(kx)$
## Behavior outside the Box
$\dfrac{d^2\psi(x)}{d x^2} = -\dfrac{2m(E-V_o)}{\hbar^2}\psi(x)$
## Connecting the two Behaviors
For $E<V_o$, it is really interesting to look at what happens outside the walls. When $E<V_o$, the Schrödinger equation becomes
$\large{\frac{\partial^2\psi(x)}{\partial x^2}} = \alpha_o^2\cdot \psi(x)$
since
$\alpha_o^2 = \frac{2m(V_o-E)}{\hbar^2} >0.$
The solutions to this equation have the general form
$\psi(x) =e^{\pm\alpha_o x}.$
Note that the exponential function does not require an imaginary component, thus it is not oscillatory.
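For completeness, and because the code below plots exactly these two curves, matching $\psi$ and $\psi'$ at the well edges $x=\pm L/2$ leads to the standard transcendental conditions
$$\sqrt{V_o-E} \;=\; \sqrt{E}\,\tan\!\left(\frac{L\sqrt{2mE}}{2\hbar}\right) \quad\text{(even states)}, \qquad \sqrt{V_o-E} \;=\; -\frac{\sqrt{E}}{\tan\!\left(\frac{L\sqrt{2mE}}{2\hbar}\right)} \quad\text{(odd states)},$$
whose intersections give the bound-state energies found graphically and numerically in the next cells.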
```python
# Finite Potential well
import matplotlib.pyplot as plt
import numpy as np
plt.style.use('fivethirtyeight')
# Reading the input variables from the user
Vo = float(input("Enter the value for Vo (in eV) = "))
L = float(input("Enter the value for L (in Angstroms) = "))
val = np.sqrt(2.0*9.10938356e-31*1.60217662e-19)*1e-10/(2.0*1.05457180013e-34) # equal to sqrt(2m*1eV)*1A/(2*hbar)
# Generating the graph
plt.rcParams.update({'font.size': 18, 'font.family': 'STIXGeneral', 'mathtext.fontset': 'stix'})
fig, axes = plt.subplots(1, 2, figsize=(13,4))
axes[0].axis([0.0,Vo,0.0,np.sqrt(Vo)*1.8])
axes[0].set_xlabel(r'$E$ (eV)')
axes[0].set_ylabel(r'(eV$^{-1}$)')
axes[0].set_title('Even solutions')
axes[1].axis([0.0,Vo,0.0,np.sqrt(Vo)*1.8])
axes[1].set_xlabel(r'$E$ (eV)')
axes[1].set_ylabel(r'')
axes[1].set_title('Odd solutions')
E = np.linspace(0.0, Vo, 10000)
num = int(round((L*np.sqrt(Vo)*val-np.pi/2.0)/np.pi))
# Removing discontinuity points
for n in range(10000):
for m in range(num+2):
if abs(E[n]-((2.0*float(m)+1.0)*np.pi/(2.0*L*val))**2)<0.01: E[n] = np.nan
if abs(E[n]-(float(m)*np.pi/(L*val))**2)<0.01: E[n] = np.nan
# Plotting the curves and setting the labels
axes[0].plot(E, np.sqrt(Vo-E), label=r"$\sqrt{V_o-E}$", color="blue", linewidth=1.8)
axes[0].plot(E, np.sqrt(E)*np.tan(L*np.sqrt(E)*val), label=r"$\sqrt{E}\tan(\frac{L\sqrt{2mE}}{2\hbar})$", color="red", linewidth=1.8)
axes[1].plot(E, np.sqrt(Vo-E), label=r"$\sqrt{V_o-E}$", color="blue", linewidth=1.8)
axes[1].plot(E, -np.sqrt(E)/np.tan(L*np.sqrt(E)*val), label=r"$-\frac{\sqrt{E}}{\tan(\frac{L\sqrt{2mE}}{2\hbar})}$", color="red", linewidth=1.8)
# Chosing the positions of the legends
axes[0].legend(bbox_to_anchor=(0.05, -0.2), loc=2, borderaxespad=0.0)
axes[1].legend(bbox_to_anchor=(0.05, -0.2), loc=2, borderaxespad=0.0)
# Show the plots on the screen once the code reaches this point
plt.show()
```
```python
# Finding eigenvalues
import numpy as np
print ("The allowed bounded energies are:")
# We want to find the values of E in which f_even and f_odd are zero
f_even = lambda E : np.sqrt(Vo-E)-np.sqrt(E)*np.tan(L*np.sqrt(E)*val)
f_odd = lambda E : np.sqrt(Vo-E)+np.sqrt(E)/np.tan(L*np.sqrt(E)*val)
E_old = 0.0
f_even_old = f_even(0.0)
f_odd_old = f_odd(0.0)
n = 1
E_vals = np.zeros(999)
# Here we loop from E = 0 to E = Vo seeking roots
for E in np.linspace(0.0, Vo, 200000):
f_even_now = f_even(E)
# If the difference is zero or if it changes sign then we might have passed through a root
if f_even_now == 0.0 or f_even_now/f_even_old < 0.0:
# If the old values of f are not close to zero, this means we didn't pass through a root but
# through a discontinuity point
if (abs(f_even_now)<1.0 and abs(f_even_old)<1.0):
E_vals[n-1] = (E+E_old)/2.0
print (" State #%3d (Even wavefunction): %9.4f eV, %13.6g J" % (n,E_vals[n-1],E_vals[n-1]*1.60217662e-19))
n += 1
f_odd_now = f_odd(E)
# If the difference is zero or if it changes sign then we might have passed through a root
if f_odd_now == 0.0 or f_odd_now/f_odd_old < 0.0:
# If the old values of f are not close to zero, this means we didn't pass through a root but
# through a discontinuity point
if (abs(f_odd_now)<1.0 and abs(f_odd_old)<1.0):
E_vals[n-1] = (E+E_old)/2.0
print ("State #%3d (Odd wavefunction): %9.4f eV, %13.6g J" % (n,E_vals[n-1],E_vals[n-1]*1.60217662e-19))
n += 1
E_old = E
f_even_old = f_even_now
f_odd_old = f_odd_now
nstates = n-1
print ("\nTHERE ARE %3d POSSIBLE BOUNDED ENERGIES" % nstates)
```
```python
# Energy diagram
import matplotlib.pyplot as plt
# Generating the energy diagram
fig, ax = plt.subplots(figsize=(8,12))
ax.spines['right'].set_color('none')
ax.yaxis.tick_left()
ax.spines['bottom'].set_color('none')
ax.axes.get_xaxis().set_visible(False)
ax.spines['top'].set_color('none')
ax.axis([0.0,10.0,0.0,1.1*Vo])
ax.set_ylabel(r'$E_n$ (eV)')
for n in range(1,nstates+1):
str1="$n = "+str(n)+r"$, $E_"+str(n)+r" = %.3f$ eV"%(E_vals[n-1])
ax.text(6.5, E_vals[n-1]-0.005*Vo, str1, fontsize=16, color="red")
ax.hlines(E_vals[n-1], 0.0, 6.3, linewidth=1.8, linestyle='--', color="red")
str1="$V_o = %.3f$ eV"%(Vo)
ax.text(6.5, Vo-0.01*Vo, str1, fontsize=16, color="blue")
ax.hlines(Vo, 0.0, 6.3, linewidth=3.8, linestyle='-', color="blue")
ax.hlines(0.0, 0.0, 10.3, linewidth=1.8, linestyle='-', color="black")
plt.title("Energy Levels", fontsize=30)
plt.show()
```
```python
# Wavefn
import matplotlib.pyplot as plt
import numpy as np
print ("\nThe bound wavefunctions are:")
fig, ax = plt.subplots(figsize=(12,9))
ax.spines['right'].set_color('none')
ax.xaxis.tick_bottom()
ax.spines['left'].set_color('none')
ax.axes.get_yaxis().set_visible(False)
ax.spines['top'].set_color('none')
X_lef = np.linspace(-L, -L/2.0, 900,endpoint=True)
X_mid = np.linspace(-L/2.0, L/2.0, 900,endpoint=True)
X_rig = np.linspace(L/2.0, L, 900,endpoint=True)
ax.axis([-L,L,0.0,1.1*Vo])
ax.set_xlabel(r'$X$ (Angstroms)')
str1="$V_o = %.3f$ eV"%(Vo)
ax.text(2.0*L/2.0, 1.02*Vo, str1, fontsize=24, color="blue")
# Defining the maximum amplitude of the wavefunction
if (nstates > 1):
amp = np.sqrt((E_vals[1]-E_vals[0])/1.5)
else:
amp = np.sqrt((Vo-E_vals[0])/1.5)
# Plotting the wavefunctions
for n in range(1,nstates+1):
ax.hlines(E_vals[n-1], -L, L, linewidth=1.8, linestyle='--', color="black")
str1="$n = "+str(n)+r"$, $E_"+str(n)+r" = %.3f$ eV"%(E_vals[n-1])
ax.text(2.0*L/2.0, E_vals[n-1]+0.01*Vo, str1, fontsize=16, color="red")
k = 2.0*np.sqrt(E_vals[n-1])*val
a0 = 2.0*np.sqrt(Vo-E_vals[n-1])*val
# Plotting odd wavefunction
if (n%2==0):
ax.plot(X_lef,E_vals[n-1]-amp*np.exp(a0*L/2.0)*np.sin(k*L/2.0)*np.exp(a0*X_lef), color="red", label="", linewidth=2.8)
ax.plot(X_mid,E_vals[n-1]+amp*np.sin(k*X_mid), color="red", label="", linewidth=2.8)
ax.plot(X_rig,E_vals[n-1]+amp*np.exp(a0*L/2.0)*np.sin(k*L/2.0)*np.exp(-a0*X_rig), color="red", label="", linewidth=2.8)
# Plotting even wavefunction
else:
ax.plot(X_lef,E_vals[n-1]+amp*np.exp(a0*L/2.0)*np.cos(k*L/2.0)*np.exp(a0*X_lef), color="red", label="", linewidth=2.8)
ax.plot(X_mid,E_vals[n-1]+amp*np.cos(k*X_mid), color="red", label="", linewidth=2.8)
ax.plot(X_rig,E_vals[n-1]+amp*np.exp(a0*L/2.0)*np.cos(k*L/2.0)*np.exp(-a0*X_rig), color="red", label="", linewidth=2.8)
ax.margins(0.00)
ax.vlines(-L/2.0, 0.0, Vo, linewidth=4.8, color="blue")
ax.vlines(L/2.0, 0.0, Vo, linewidth=4.8, color="blue")
ax.hlines(0.0, -L/2.0, L/2.0, linewidth=4.8, color="blue")
ax.hlines(Vo, -L, -L/2.0, linewidth=4.8, color="blue")
ax.hlines(Vo, L/2.0, L, linewidth=4.8, color="blue")
plt.title('Wavefunctions', fontsize=30)
plt.legend(bbox_to_anchor=(0.8, 1), loc=2, borderaxespad=0.)
plt.show()
```
```python
# Bound Wavefunctions
import matplotlib.pyplot as plt
import numpy as np
print ("\nThe bound Probability Densities are shown below \nwith the areas shaded in green showing the regions where the particle can tunnel outside the box")
fig, ax = plt.subplots(figsize=(12,9))
ax.spines['right'].set_color('none')
ax.xaxis.tick_bottom()
ax.spines['left'].set_color('none')
ax.axes.get_yaxis().set_visible(False)
ax.spines['top'].set_color('none')
X_lef = np.linspace(-L, -L/2.0, 900,endpoint=True)
X_mid = np.linspace(-L/2.0, L/2.0, 900,endpoint=True)
X_rig = np.linspace(L/2.0, L, 900,endpoint=True)
ax.axis([-L,L,0.0,1.1*Vo])
ax.set_xlabel(r'$X$ (Angstroms)')
str1="$V_o = %.3f$ eV"%(Vo)
ax.text(2.05*L/2.0, 1.02*Vo, str1, fontsize=24, color="blue")
# Defining the maximum amplitude of the probability density
if (nstates > 1):
amp = (E_vals[1]-E_vals[0])/1.5
else:
amp = (Vo-E_vals[0])/1.5
# Plotting the probability densities
for n in range(1,nstates+1):
ax.hlines(E_vals[n-1], -L, L, linewidth=1.8, linestyle='--', color="black")
str1="$n = "+str(n)+r"$, $E_"+str(n)+r" = %.3f$ eV"%(E_vals[n-1])
ax.text(2.0*L/2.0, E_vals[n-1]+0.01*Vo, str1, fontsize=16, color="red")
k = 2.0*np.sqrt(E_vals[n-1])*val
a0 = 2.0*np.sqrt(Vo-E_vals[n-1])*val
# Plotting odd probability densities
if (n%2==0):
Y_lef = E_vals[n-1]+amp*(np.exp(a0*L/2.0)*np.sin(k*L/2.0)*np.exp(a0*X_lef))**2
ax.plot(X_lef,Y_lef, color="red", label="", linewidth=2.8)
ax.fill_between(X_lef, E_vals[n-1], Y_lef, color="#3dbb2a")
ax.plot(X_mid,E_vals[n-1]+amp*(np.sin(k*X_mid))**2, color="red", label="", linewidth=2.8)
Y_rig = E_vals[n-1]+amp*(np.exp(a0*L/2.0)*np.sin(k*L/2.0)*np.exp(-a0*X_rig))**2
ax.plot(X_rig,Y_rig, color="red", label="", linewidth=2.8)
ax.fill_between(X_rig, E_vals[n-1], Y_rig, color="#3dbb2a")
# Plotting even probability densities
else:
Y_lef = E_vals[n-1]+amp*(np.exp(a0*L/2.0)*np.cos(k*L/2.0)*np.exp(a0*X_lef))**2
ax.plot(X_lef,Y_lef, color="red", label="", linewidth=2.8)
ax.fill_between(X_lef, E_vals[n-1], Y_lef, color="#3dbb2a")
ax.plot(X_mid,E_vals[n-1]+amp*(np.cos(k*X_mid))**2, color="red", label="", linewidth=2.8)
Y_rig = E_vals[n-1]+amp*(np.exp(a0*L/2.0)*np.cos(k*L/2.0)*np.exp(-a0*X_rig))**2
ax.plot(X_rig,Y_rig, color="red", label="", linewidth=2.8)
ax.fill_between(X_rig, E_vals[n-1], Y_rig, color="#3dbb2a")
ax.margins(0.00)
ax.vlines(-L/2.0, 0.0, Vo, linewidth=4.8, color="blue")
ax.vlines(L/2.0, 0.0, Vo, linewidth=4.8, color="blue")
ax.hlines(0.0, -L/2.0, L/2.0, linewidth=4.8, color="blue")
ax.hlines(Vo, -L, -L/2.0, linewidth=4.8, color="blue")
ax.hlines(Vo, L/2.0, L, linewidth=4.8, color="blue")
plt.title('Probability Densities', fontsize=30)
plt.legend(bbox_to_anchor=(0.8, 1), loc=2, borderaxespad=0.)
plt.show()
```
```python
# Tunneling
import numpy as np
print ("\nThe tunneling probabilities are:")
for n in range(1,nstates+1):
    k = 2.0*np.sqrt(E_vals[n-1])*val
    a0 = 2.0*np.sqrt(Vo-E_vals[n-1])*val
    # For odd solutions the wavefunction inside the well is C*sin(kx)
    if (n%2==0):
        C = 1.0
        D = np.exp(a0*L/2.0)*np.sin(k*L/2.0)*C
        prob = D*D*2.0*k*np.exp(-a0*L)/(C*C*a0*(k*L-np.sin(k*L))+D*D*2.0*k*np.exp(-a0*L))
    # For even solutions the wavefunction inside the well is B*cos(kx)
    else:
        B = 1.0
        D = np.exp(a0*L/2.0)*np.cos(k*L/2.0)*B
        prob = D*D*2.0*k*np.exp(-a0*L)/(B*B*a0*(k*L+np.sin(k*L))+D*D*2.0*k*np.exp(-a0*L))
    print (" State #%3d tunneling probability = %5.2f%%" % (n,100*prob))
```
## $\large{\dfrac{\displaystyle \int^{\frac{-L}{2}}_{-\infty} |\psi(x)|^2\ dx + \displaystyle \int^{+\infty}_{\frac{L}{2}} |\psi(x)|^2\ dx }{\displaystyle \int_{-\infty}^{+\infty} |\psi(x)|^2\ dx }}$
## where the denominator is 1 for a normalized wavefunction.
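As a quick cross-check (added sketch, not part of the original code), the same probabilities can be obtained by integrating $|\psi(x)|^2$ numerically outside the well; this assumes E_vals, nstates, Vo, L and val are still defined from the cells above.
```python
# Numerical cross-check of the analytic tunneling probabilities computed above
import numpy as np
print("Numerical check of the tunneling probabilities:")
for n in range(1, nstates+1):
    k  = 2.0*np.sqrt(E_vals[n-1])*val        # wavenumber inside the well (1/Angstrom)
    a0 = 2.0*np.sqrt(Vo-E_vals[n-1])*val     # decay constant outside the well (1/Angstrom)
    # grids: inside the well, and far enough outside that exp(-a0*x) has decayed away
    x_in  = np.linspace(-L/2.0, L/2.0, 4000)
    x_out = np.linspace(L/2.0, L/2.0 + 40.0/a0, 4000)
    if n % 2 == 0:   # odd wavefunction
        psi_in  = np.sin(k*x_in)
        psi_out = np.exp(a0*L/2.0)*np.sin(k*L/2.0)*np.exp(-a0*x_out)
    else:            # even wavefunction
        psi_in  = np.cos(k*x_in)
        psi_out = np.exp(a0*L/2.0)*np.cos(k*L/2.0)*np.exp(-a0*x_out)
    inside  = np.trapz(psi_in**2,  x_in)
    outside = 2.0*np.trapz(psi_out**2, x_out)   # both tails contribute equally by symmetry
    print(" State #%3d tunneling probability = %5.2f%%" % (n, 100.0*outside/(inside+outside)))
```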
# INFINITE POTENTIAL WELL
```python
# Infinite well potential
# Simpson 1/3 rule for discrete function
def simp13dis(h,fx):
    # Composite Simpson 1/3 rule for equally spaced samples fx (len(fx) should be odd)
    n=len(fx)
    I=0
    for i in range(n):
        if i==0 or i==n-1:   # end points get weight 1
            I+=fx[i]
        elif i%2!=0:
            I+=4*fx[i]
        else:
            I+=2*fx[i]
    I=I*h/3
    return I
# Normalisation of discrete wavefunction
def norm(psi,dx):
N=len(psi)
psi2=[psi[i]**2 for i in range(N)]
psimod2=simp13dis(dx,psi2)
normpsi=[psi[i]/psimod2**0.5 for i in range(N)]
return normpsi
import matplotlib.pyplot as plt
# Solution of Schrodinger equation:
def wavefn(mhdx2,psi,vi,E):
N=len(psi)
psiE=[psi[i] for i in range(N)]
for i in range(2,N):
psiE[i]=2*(mhdx2*(vi[i]-E)+1)*psiE[i-1]-psiE[i-2]
return psiE
def Sch(mhdx2,vi,psi0,psi1,psiN,nodes,mxItr):
N=len(vi)-1
Emx=max(vi)
Emn=min(vi)
psiIn=[0 for i in range(N+1)]
psiIn[0],psiIn[1],psiIn[N]=psi0,psi1,psiN
itr=0
while abs(Emx-Emn)>1e-6 and itr<mxItr:
E=.5*(Emx+Emn)
psi=wavefn(mhdx2,psiIn,vi,E)
# Node counting
cnt=0
for i in range(1,N-2):
if psi[i]*psi[i+1]<0:
cnt+=1
if cnt>nodes:
Emx=E
elif cnt<nodes:
Emn=E
else:
if psi[N-1]>psi[N]:
Emn=E
elif psi[N-1]<psi[N]:
Emx=E
itr+=1
if itr<mxItr:
return E,psi
else:
return None,None
# Define Square wave potential
def V(pr,x):
L0,L1,V0=pr
if x>L0 and x<L1:
pot=0
else:
pot=V0
return pot
hbar,m=.1,1
x0,xN=-1.0,1.0
dx=.01
mxItr=100
psi0,psiN=0,0
L0,L1,V0=-1,1,5000
prV=[L0,L1,V0]
mhdx2=m*dx**2/hbar**2
N=int((xN-x0)/dx)
dx=(xN-x0)/N
x=[x0+i*dx for i in range(N+1)]
Vi=[V(prV,x[i]) for i in range(N+1)]
stln=['k','k--','k:','k.-']
for nodes in range(4):
psi1=(-1)**(nodes)*1e-4
E,psi=Sch(mhdx2,Vi,psi0,psi1,psiN,nodes,mxItr)
if E!=None:
psi=norm(psi,dx)
#for Plot
#plt.figure(figsize=(12,6))
plt.ylim(-5.0,5.0)
plt.plot(x,psi,stln[nodes],label=r'E=%.3f $\psi_%d(x)$'%(E,nodes))
#plt.figure(figsize=(12,6))
#plt.plot(x,Vi,'k',label='Potential')
plt.legend()
plt.xlabel('x')
plt.ylabel('$\psi(x)$')
plt.grid()
plt.show()
```
```python
import matplotlib as mpl # matplotlib library for plotting and visualization
import matplotlib.pylab as plt # matplotlib library for plotting and visualization
import numpy as np #numpy library for numerical manipulation, especially suited for data arrays
import warnings
warnings.filterwarnings('ignore')
import sys # checking the version of Python
import IPython # checking the version of IPython
print("Python version = {}".format(sys.version))
print("IPython version = {}".format(IPython.__version__))
print("Matplotlib version = {}".format(plt.__version__))
print("Numpy version = {}".format(np.__version__))
```
```python
# Wave function and probability
import matplotlib.pyplot as plt
import numpy as np
plt.style.use('fivethirtyeight')
# Defining the wavefunction
def psi(x,n,L): return np.sqrt(2.0/L)*np.sin(float(n)*np.pi*x/L)
# Reading the input variables from the user
n = int(input("Enter the value for the quantum number n = "))
L = float(input("Enter the size of the box in Angstroms = "))
# Generating the wavefunction graph
plt.rcParams.update({'font.size': 18, 'font.family': 'STIXGeneral', 'mathtext.fontset': 'stix'})
x = np.linspace(0, L, 900)
fig, ax = plt.subplots()
lim1=np.sqrt(2.0/L) # Maximum value of the wavefunction
ax.axis([0.0,L,-1.1*lim1,1.1*lim1]) # Defining the limits to be plotted in the graph
str1=r"$n = "+str(n)+r"$"
ax.plot(x, psi(x,n,L), linestyle='--', label=str1, color="orange", linewidth=2.8) # Plotting the wavefunction
ax.hlines(0.0, 0.0, L, linewidth=1.8, linestyle='--', color="black") # Adding a horizontal line at 0
# Now we define labels, legend, etc
ax.legend(loc=2);
ax.set_xlabel(r'$L$')
ax.set_ylabel(r'$\psi_n(x)$')
plt.title('Wavefunction')
plt.legend(bbox_to_anchor=(1.1, 1), loc=2, borderaxespad=0.0)
# Generating the probability density graph
fig, ax = plt.subplots()
ax.axis([0.0,L,0.0,lim1*lim1*1.1])
str1=r"$n = "+str(n)+r"$"
ax.plot(x, psi(x,n,L)*psi(x,n,L), label=str1, linewidth=2.8)
ax.legend(loc=2);
ax.set_xlabel(r'$L$')
ax.set_ylabel(r'$|\psi_n|^2(x)$')
plt.title('Probability Density')
plt.legend(bbox_to_anchor=(1.1, 1), loc=2, borderaxespad=0.0)
# Show the plots on the screen once the code reaches this point
plt.show()
```
```python
# Length Dependence on Wavefunctions
import matplotlib.pyplot as plt
import numpy as np
# Reading the input boxes sizes from the user, and making sure the values are not larger than 20 A
L = 100.0
while(L>20.0):
    L1 = float(input(" To compare wavefunctions for boxes of different lengths \nenter the value of L for the first box (in Angstroms and not larger than 20 A) = "))
    L2 = float(input("Enter the value of L for the second box (in Angstroms and not larger than 20 A) = "))
    L = max(L1,L2)
    if(L>20.0):
        print ("The sizes of the boxes cannot be larger than 20 A. Please enter the values again.\n")
# Generating the wavefunction and probability density graphs
plt.rcParams.update({'font.size': 18, 'font.family': 'STIXGeneral', 'mathtext.fontset': 'stix'})
fig, ax = plt.subplots(figsize=(12,6))
ax.spines['right'].set_color('none')
ax.xaxis.tick_bottom()
ax.spines['left'].set_color('none')
ax.axes.get_yaxis().set_visible(False)
ax.spines['top'].set_color('none')
val = 1.1*max(L1,L2)
X1 = np.linspace(0.0, L1, 900,endpoint=True)
X2 = np.linspace(0.0, L2, 900,endpoint=True)
ax.axis([-0.5*val,1.5*val,-np.sqrt(2.0/L),3*np.sqrt(2.0/L)])
ax.set_xlabel(r'$X$ (Angstroms)')
strA="$\psi_n$"
strB="$|\psi_n|^2$"
ax.text(-0.12*val, 0.0, strA, rotation='vertical', fontsize=30, color="black")
ax.text(-0.12*val, np.sqrt(4.0/L), strB, rotation='vertical', fontsize=30, color="black")
str1=r"$L = "+str(L1)+r"$ A"
str2=r"$L = "+str(L2)+r"$ A"
ax.plot(X1,psi(X1,n,L1)*np.sqrt(L1/L), color="red", label=str1, linewidth=2.8)
ax.plot(X2,psi(X2,n,L2)*np.sqrt(L2/L), color="blue", label=str2, linewidth=2.8)
ax.plot(X1,psi(X1,n,L1)*psi(X1,n,L1)*(L1/L) + np.sqrt(4.0/L), color="red", linewidth=2.8)
ax.plot(X2,psi(X2,n,L2)*psi(X2,n,L2)*(L2/L) + np.sqrt(4.0/L), color="blue", linewidth=2.8)
ax.margins(0.00)
ax.legend(loc=9)
str2="$V = +\infty$"
ax.text(-0.3*val, 0.5*np.sqrt(2.0/L), str2, rotation='vertical', fontsize=40, color="black")
ax.vlines(0.0, -np.sqrt(2.0/L), 2.5*np.sqrt(2.0/L), linewidth=4.8, color="red")
ax.vlines(L1, -np.sqrt(2.0/L), 2.5*np.sqrt(2.0/L), linewidth=4.8, color="red")
ax.vlines(0.0, -np.sqrt(2.0/L), 2.5*np.sqrt(2.0/L), linewidth=4.8, color="blue")
ax.vlines(L2, -np.sqrt(2.0/L), 2.5*np.sqrt(2.0/L), linewidth=4.8, color="blue")
ax.hlines(0.0, 0.0, L, linewidth=1.8, linestyle='--', color="black")
ax.hlines(np.sqrt(4.0/L), 0.0, L, linewidth=1.8, linestyle='--', color="black")
plt.title('Wavefunction and Probability Density', fontsize=30)
str3=r"$n = "+str(n)+r"$"
ax.text(1.1*L,np.sqrt(4.0/L), r"$n = "+str(n)+r"$", fontsize=25, color="black")
plt.legend(bbox_to_anchor=(0.73, 0.95), loc=2, borderaxespad=0.)
# Show the plots on the screen once the code reaches this point
plt.show()
```
```python
# Energy levels
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
#Given the following parameters
h=6.62607e-34 # Planck's constant in J*s
me=9.1093837e-31 # mass of an electron in kg
# (h**2 / (me*8))*(1e10)**2*6.242e+18 is the prefactor when lengths are given in Angstroms, converted into electron volts
# Defining a function to compute the energy
def En(n,L,m): return (h**2 / (m*8))* (1e10)**2 *6.242e+18*((float(n)/L)**2)
# Reading the input variables from the user
L1 = float(input(" To see how the energy levels change for boxes of different lengths, \nenter the value for L for the first box (in Angstroms) = "))
nmax1 = int(input("Enter the number of levels you want to plot for the first box = "))
L2 = float(input("Enter the value for L for the second box (in Angstroms) = "))
nmax2 = int(input("Enter the number of levels you want to plot for the second box = "))
# Generating the graph
plt.rcParams.update({'font.size': 18, 'font.family': 'STIXGeneral', 'mathtext.fontset': 'stix'})
fig, ax = plt.subplots(figsize=(8,12))
ax.spines['right'].set_color('none')
ax.yaxis.tick_left()
ax.spines['bottom'].set_color('none')
ax.axes.get_xaxis().set_visible(False)
ax.spines['top'].set_color('none')
val = 1.1*max(En(nmax1,L1,me),En(nmax2,L2,me))
val2= 1.1*max(L1,L2)
ax.axis([0.0,10.0,0.0,val])
ax.set_ylabel(r'$E_n$ (eV)')
for n in range(1,nmax1+1):
str1="$n = "+str(n)+r"$, $E_{"+str(n)+r"} = %.3f$ eV"%(En(n,L1,me))
ax.text(0.6, En(n,L1,me)+0.01*val, str1, fontsize=16, color="red")
ax.hlines(En(n,L1,me), 0.0, 4.5, linewidth=1.8, linestyle='--', color="red")
for n in range(1,nmax2+1):
str1="$n = "+str(n)+r"$, $E_{"+str(n)+r"} = %.3f$ eV"%(En(n,L2,me))
ax.text(6.2, En(n,L2,me)+0.01*val, str1, fontsize=16, color="blue")
ax.hlines(En(n,L2,me), 5.5, 10.0, linewidth=1.8, linestyle='--', color="blue")
str1=r"$L = "+str(L1)+r"$ A"
plt.title("Energy Levels for a particle of mass = $m_{electron}$ \n ", fontsize=30)
str1=r"$L = "+str(L1)+r"$ A"
str2=r"$L = "+str(L2)+r"$ A"
ax.text(1.5,val, str1, fontsize=25, color="red")
ax.text(6,val, str2, fontsize=25, color="blue")
# Show the plots on the screen once the code reaches this point
plt.show()
```
```python
# Energy mass dependence
import matplotlib.pyplot as plt
# Reading the input variables from the user
L = float(input(" Enter the value for L for both boxes (in Angstroms) = "))
m1 = float(input(" To see how the energy levels change for particles of different mass, \nEnter the value of the mass for the first particle (in units of the mass of 1 electron) = "))
nmax1 = int(input("Enter the number of levels you want to plot for the first box = "))
m2 = float(input("Enter the value of the mass for the second particle (in units of the mass of 1 electron) = "))
nmax2 = int(input("Enter the number of levels you want to plot for the second box = "))
# Generating the graph
plt.rcParams.update({'font.size': 18, 'font.family': 'STIXGeneral', 'mathtext.fontset': 'stix'})
fig, ax = plt.subplots(figsize=(8,12))
ax.spines['right'].set_color('none')
ax.yaxis.tick_left()
ax.spines['bottom'].set_color('none')
ax.axes.get_xaxis().set_visible(False)
ax.spines['top'].set_color('none')
val = 1.1*max(En(nmax1,L,m1*me),En(nmax2,L,m2*me))
val2= 1.1*max(m1,m2)
ax.axis([0.0,10.0,0.0,val])
ax.set_ylabel(r'$E_n$ (eV)')
for n in range(1,nmax1+1):
str1="$n = "+str(n)+r"$, $E_{"+str(n)+r"} = %.3f$ eV"%(En(n,L,m1*me))
ax.text(0.6, En(n,L,m1*me)+0.01*val, str1, fontsize=16, color="green")
ax.hlines(En(n,L,m1*me), 0.0, 4.5, linewidth=1.8, linestyle='--', color="green")
for n in range(1,nmax2+1):
str1="$n = "+str(n)+r"$, $E_{"+str(n)+r"} = %.3f$ eV"%(En(n,L,m2*me))
ax.text(6.2, En(n,L,m2*me)+0.01*val, str1, fontsize=16, color="magenta")
ax.hlines(En(n,L,m2*me), 5.5, 10.0, linewidth=1.8, linestyle='--', color="magenta")
str1=r"$m = "+str(m1)+r"$ A"
plt.title("Energy Levels for two particles with different masses\n ", fontsize=30)
str1=r"$m_1 = "+str(m1)+r"$ $m_e$ "
str2=r"$m_2 = "+str(m2)+r"$ $m_e$ "
ax.text(1.1,val, str1, fontsize=25, color="green")
ax.text(6.5,val, str2, fontsize=25, color="magenta")
# Show the plots on the screen once the code reaches this point
plt.show()
```
```python
# Combined plot
import matplotlib.pyplot as plt
import numpy as np
# Here the users inputs the value of L
L = float(input("Enter the value of L (in Angstroms) = "))
nmax = int(input("Enter the maximum value of n you want to plot = "))
# Generating the wavefunction graph
fig, ax = plt.subplots(figsize=(12,9))
ax.spines['right'].set_color('none')
ax.xaxis.tick_bottom()
ax.spines['left'].set_color('none')
ax.axes.get_yaxis().set_visible(False)
ax.spines['top'].set_color('none')
X3 = np.linspace(0.0, L, 900,endpoint=True)
Emax = En(nmax,L,me)
amp = (En(2,L,me)-En(1,L,me)) *0.9
Etop = (Emax+amp)*1.1
ax.axis([-0.5*L,1.5*L,0.0,Etop])
ax.set_xlabel(r'$X$ (Angstroms)')
for n in range(1,nmax+1):
ax.hlines(En(n,L,me), 0.0, L, linewidth=1.8, linestyle='--', color="black")
str1="$n = "+str(n)+r"$, $E_{"+str(n)+r"} = %.3f$ eV"%(En(n,L,me))
ax.text(1.03*L, En(n,L,me), str1, fontsize=16, color="black")
ax.plot(X3,En(n,L,me)+amp*np.sqrt(L/2.0)*psi(X3,n,L), color="red", label="", linewidth=2.8)
ax.margins(0.00)
ax.vlines(0.0, 0.0, Etop, linewidth=4.8, color="blue")
ax.vlines(L, 0.0, Etop, linewidth=4.8, color="blue")
ax.hlines(0.0, 0.0, L, linewidth=4.8, color="blue")
plt.title('Wavefunctions', fontsize=30)
plt.legend(bbox_to_anchor=(0.8, 1), loc=2, borderaxespad=0.)
str2="$V = +\infty$"
ax.text(-0.15*L, 0.6*Emax, str2, rotation='vertical', fontsize=40, color="black")
# Generating the probability density graph
fig, ax = plt.subplots(figsize=(12,9))
ax.spines['right'].set_color('none')
ax.xaxis.tick_bottom()
ax.spines['left'].set_color('none')
ax.axes.get_yaxis().set_visible(False)
ax.spines['top'].set_color('none')
X3 = np.linspace(0.0, L, 900,endpoint=True)
Emax = En(nmax,L,me)
ax.axis([-0.5*L,1.5*L,0.0,Etop])
ax.set_xlabel(r'$X$ (Angstroms)')
for n in range(1,nmax+1):
ax.hlines(En(n,L,me), 0.0, L, linewidth=1.8, linestyle='--', color="black")
str1="$n = "+str(n)+r"$, $E_{"+str(n)+r"} = %.3f$ eV"%(En(n,L,me))
ax.text(1.03*L, En(n,L,me), str1, fontsize=16, color="black")
ax.plot(X3,En(n,L,me)+ amp*(np.sqrt(L/2.0)*psi(X3,n,L))**2, color="red", label="", linewidth=2.8)
ax.margins(0.00)
ax.vlines(0.0, 0.0, Etop, linewidth=4.8, color="blue")
ax.vlines(L, 0.0, Etop, linewidth=4.8, color="blue")
ax.hlines(0.0, 0.0, L, linewidth=4.8, color="blue")
plt.title('Probability Density', fontsize=30)
plt.legend(bbox_to_anchor=(0.8, 1), loc=2, borderaxespad=0.)
str2="$V = +\infty$"
ax.text(-0.15*L, 0.6*Emax, str2, rotation='vertical', fontsize=40, color="black")
# Show the plots on the screen once the code reaches this point
plt.show()
```
```python
# 2D Box
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
# Defining the wavefunction
def psi2D(x,y): return 2.0*np.sin(n*np.pi*x)*np.sin(m*np.pi*y)
# Here the users inputs the values of n and m
n = int(input("Let's look at the Wavefunction for a 2D box \nEnter the value for n = "))
m = int(input("Enter the value for m = "))
# Generating the wavefunction graph
x = np.linspace(0, 1, 100)
y = np.linspace(0, 1, 100)
X, Y = np.meshgrid(x, y)
fig, axes = plt.subplots(1, 1, figsize=(8,8))
axes.imshow(psi2D(X,Y), origin='lower', extent=[0.0, 1.0, 0.0, 1.0])
axes.set_title(r'Heat plot of $\sqrt{L_xL_y}\Psi_{n,m}(x,y)$ for $n='+str(n)+r'$ and $m='+str(m)+r'$')
axes.set_ylabel(r'$y/L_y$')
axes.set_xlabel(r'$x/L_x$')
# Plotting the colorbar for the density plots
fig = plt.figure(figsize=(10,3))
colbar = fig.add_axes([0.05, 0.80, 0.7, 0.10])
norm = mpl.colors.Normalize(vmin=-2.0, vmax=2.0)
mpl.colorbar.ColorbarBase(colbar, norm=norm, orientation='horizontal')
# Show the plots on the screen once the code reaches this point
plt.show()
```
```python
import matplotlib.pyplot as plt
import numpy as np
# Here the users inputs the values of n and m
yo = float(input("Enter the value of y/L_y for the x-axes slice ="))
xo = float(input("Enter the value of x/L_x for the y-axes slice ="))
# Generating the wavefunction graph
plt.rcParams.update({'font.size': 18, 'font.family': 'STIXGeneral', 'mathtext.fontset': 'stix'})
x = np.linspace(0, 1.0, 900)
fig, ax = plt.subplots()
lim1=2.0 # Maximum value of the wavefunction
ax.axis([0.0,1.0,-1.1*lim1,1.1*lim1]) # Defining the limits to be plotted in the graph
str1=r"$n = "+str(n)+r", m = "+str(m)+r", y_o = "+str(yo)+r"\times L_y$"
ax.plot(x, psi2D(x,yo), linestyle='--', label=str1, color="orange", linewidth=2.8) # Plotting the wavefunction
ax.hlines(0.0, 0.0, 1.0, linewidth=1.8, linestyle='--', color="black") # Adding a horizontal line at 0
# Now we define labels, legend, etc
ax.legend(loc=2);
ax.set_xlabel(r'$x/L_x$')
ax.set_ylabel(r'$\sqrt{L_xL_y}\Psi_{n,m}(x,y_o)$')
plt.legend(bbox_to_anchor=(1.1, 1), loc=2, borderaxespad=0.0)
# Generating the wavefunction graph
plt.rcParams.update({'font.size': 18, 'font.family': 'STIXGeneral', 'mathtext.fontset': 'stix'})
y = np.linspace(0, 1.0, 900)
fig, ax = plt.subplots()
lim1=2.0 # Maximum value of the wavefunction
ax.axis([0.0,1.0,-1.1*lim1,1.1*lim1]) # Defining the limits to be plotted in the graph
str1=r"$n = "+str(n)+r", m = "+str(m)+r", x_o = "+str(xo)+r"\times L_x$"
ax.plot(y, psi2D(xo,y), linestyle='--', label=str1, color="blue", linewidth=2.8) # Plotting the wavefunction
ax.hlines(0.0, 0.0, 1.0, linewidth=1.8, linestyle='--', color="black") # Adding a horizontal line at 0
# Now we define labels, legend, etc
ax.legend(loc=2);
ax.set_xlabel(r'$y/L_y$')
ax.set_ylabel(r'$\sqrt{L_xL_y}\Psi_{n,m}(x_o,y)$')
plt.legend(bbox_to_anchor=(1.1, 1), loc=2, borderaxespad=0.0)
# Show the plots on the screen once the code reaches this point
plt.show()
```
```python
import matplotlib.pyplot as plt
# Defining the energy as a function
def En2D(n,m,L1,L2): return 37.60597*((float(n)/L1)**2+ (float(m)/L2)**2)
# Reading data from the user
L1 = float(input("Can we count DEGENERATE states?\nEnter the value for Lx (in Angstroms) = "))
nmax1 = int(input("Enter the maximum value of n to consider = "))
L2 = float(input("Enter the value for Ly (in Angstroms) = "))
mmax2 = int(input("Enter the maximum value of m to consider = "))
# Plotting the energy levels
plt.rcParams.update({'font.size': 18, 'font.family': 'STIXGeneral', 'mathtext.fontset': 'stix'})
fig, ax = plt.subplots(figsize=(nmax1*2+2,nmax1*3))
ax.spines['right'].set_color('none')
ax.yaxis.tick_left()
ax.spines['bottom'].set_color('none')
ax.axes.get_xaxis().set_visible(False)
ax.spines['top'].set_color('none')
val = 1.1*(En2D(nmax1,mmax2,L1,L2))
val2= 1.1*max(L1,L2)
ax.axis([0.0,3*nmax1,0.0,val])
ax.set_ylabel(r'$E_n$ (eV)')
for n in range(1,nmax1+1):
for m in range(1, mmax2+1):
str1="$"+str(n)+r","+str(m)+r"$"
str2=" $E = %.3f$ eV"%(En2D(n,m,L1,L2))
ax.text(n*2-1.8, En2D(n,m,L1,L2)+ 0.005*val, str1, fontsize=20, color="blue")
ax.hlines(En2D(n,m,L1,L2), n*2-2, n*2-1, linewidth=3.8, color="red")
ax.hlines(En2D(n,m,L1,L2), 0.0, nmax1*2+1, linewidth=1., linestyle='--', color="black")
ax.text(nmax1*2+1, En2D(n,m,L1,L2)+ 0.005*val, str2, fontsize=16, color="blue")
plt.title("Energy Levels for \n ", fontsize=30)
str1=r"$L_x = "+str(L1)+r"$ A, $n_{max} = "+str(nmax1)+r"$ $L_y = "+str(L2)+r"$ A, $m_{max}="+str(mmax2)+r"$"
ax.text(0.1,val, str1, fontsize=25, color="black")
# Show the plots on the screen once the code reaches this point
plt.show()
```
https://scipython.com/blog/the-harmonic-oscillator-wavefunctions/
```python
import numpy as np
from matplotlib import rc
from scipy.special import factorial
import matplotlib.pyplot as plt
rc('font', **{'family': 'serif', 'serif': ['Computer Modern'], 'size': 14})
#rc('text', usetex=True)
# PLOT_PROB=False plots the wavefunction, psi; PLOT_PROB=True plots |psi|^2
PLOT_PROB = False
# Maximum vibrational quantum number to calculate wavefunction for
VMAX = 6
# Some appearance settings
# Pad the q-axis on each side of the maximum turning points by this fraction
QPAD_FRAC = 1.3
# Scale the wavefunctions by this much so they don't overlap
SCALING = 0.7
# Colours of the positive and negative parts of the wavefunction
COLOUR1 = (0.6196, 0.0039, 0.2588, 1.0)
COLOUR2 = (0.3686, 0.3098, 0.6353, 1.0)
# Normalization constant and energy for vibrational state v
N = lambda v: 1./np.sqrt(np.sqrt(np.pi)*2**v*factorial(v))
get_E = lambda v: v + 0.5
def make_Hr():
"""Return a list of np.poly1d objects representing Hermite polynomials."""
# Define the Hermite polynomials up to order VMAX by recursion:
# H_[v] = 2qH_[v-1] - 2(v-1)H_[v-2]
Hr = [None] * (VMAX + 1)
Hr[0] = np.poly1d([1.,])
Hr[1] = np.poly1d([2., 0.])
for v in range(2, VMAX+1):
Hr[v] = Hr[1]*Hr[v-1] - 2*(v-1)*Hr[v-2]
return Hr
Hr = make_Hr()
def get_psi(v, q):
"""Return the harmonic oscillator wavefunction for level v on grid q."""
return N(v)*Hr[v](q)*np.exp(-q*q/2.)
def get_turning_points(v):
"""Return the classical turning points for state v."""
qmax = np.sqrt(2. * get_E(v + 0.5))
return -qmax, qmax
def get_potential(q):
"""Return potential energy on scaled oscillator displacement grid q."""
return q**2 / 2
fig, ax = plt.subplots(figsize=(10,8))
qmin, qmax = get_turning_points(VMAX)
xmin, xmax = QPAD_FRAC * qmin, QPAD_FRAC * qmax
q = np.linspace(qmin, qmax, 500)
V = get_potential(q)
def plot_func(ax, f, scaling=1, yoffset=0):
"""Plot f*scaling with offset yoffset.
The curve above the offset is filled with COLOUR1; the curve below is
filled with COLOUR2.
"""
ax.plot(q, f*scaling + yoffset, color=COLOUR1)
ax.fill_between(q, f*scaling + yoffset, yoffset, f > 0.,
color=COLOUR1, alpha=0.5)
ax.fill_between(q, f*scaling + yoffset, yoffset, f < 0.,
color=COLOUR2, alpha=0.5)
# Plot the potential, V(q).
ax.plot(q, V, color='k', linewidth=1.5)
# Plot each of the wavefunctions (or probability distributions) up to VMAX.
for v in range(VMAX+1):
psi_v = get_psi(v, q)
E_v = get_E(v)
if PLOT_PROB:
plot_func(ax, psi_v**2, scaling=SCALING*1.5, yoffset=E_v)
else:
plot_func(ax, psi_v, scaling=SCALING, yoffset=E_v)
# The energy, E = (v+0.5).hbar.omega.
ax.text(s=r'$\frac{{{}}}{{2}}\hbar\omega$'.format(2*v+1), x=qmax+0.2,
y=E_v, va='center')
# Label the vibrational levels.
ax.text(s=r'$v={}$'.format(v), x=qmin-0.2, y=E_v, va='center', ha='right')
# The top of the plot, plus a bit.
ymax = E_v+0.5
if PLOT_PROB:
ylabel = r'$|\psi(q)|^2$'
else:
ylabel = r'$\psi(q)$'
ax.text(s=ylabel, x=0, y=ymax, va='bottom', ha='center')
ax.set_xlabel('$q$')
ax.set_xlim(xmin, xmax)
ax.set_ylim(0, ymax)
ax.spines['left'].set_position('center')
ax.set_yticks([])
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
plt.savefig('sho-psi{}-{}.png'.format(PLOT_PROB+1, VMAX))
plt.show()
```
```python
```
```python
# Visualizing the solutions of harmonic oscillator problem.
# First load the numpy/scipy/matplotlib
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
#load interactive widgets
import ipywidgets as widgets
from IPython.display import display
#If your screen has retina display this will increase resolution of plots
%config InlineBackend.figure_format = 'retina'
# Import hermite polynomials and factorial to use in normalization factor
from scipy.special import hermite
from math import factorial
#Check to see if they match the table
H=hermite(4)
print(H)
```
```python
```
```python
x=np.linspace(-2,2,1000) # Range needs to be specified for plotting functions of x
for v in range(0,3):
H=hermite(v)
f=H(x)
plt.plot(x,f)
plt.xlabel('x')
plt.ylabel(r'$H_n(x)$')
```
```python
def N(v):
'''Normalization constant '''
return 1./np.sqrt(np.sqrt(np.pi)*2**v*factorial(v))
def psi(v, x):
"""Harmonic oscillator wavefunction for level v computed on grid of points x"""
Hr=hermite(v)
Psix = N(v)*Hr(x)*np.exp(-0.5*x**2)
return Psix
# Normalization is computed by using numerical integration with trapezoidal method:
from scipy.integrate import trapz
# remember that x runs form -inf to +inf so lets use large xmin and xmax
x=np.linspace(-10,10,1000)
psi2=psi(5,x)**2
Integral = trapz(psi2,x)
print(Integral)
@widgets.interact(v=(0,50))
def plot_psi(v=0):
x=np.linspace(-10,10,1000)
y= psi(v,x)**2
plt.plot(x,y,lw=2)
plt.grid('on')
plt.xlabel('x',fontsize=16)
plt.ylabel('$\psi_n(x)$',fontsize=16)
```
```python
```
```python
def E(v):
'''Eigenvalues in units of h'''
return (v + 0.5)
def V(x):
"""Potential energy function"""
return 0.5*x**2
# plot up to level vmax
VMAX=8
# Range of x determined by the classical turning points:
xmin, xmax = -np.sqrt(2*E(VMAX)), np.sqrt(2*E(VMAX))
x = np.linspace(xmin, xmax, 1000)
fig, ax = plt.subplots(figsize=(8,8))
for v in range(8):
# plot potential V(x)
ax.plot(x,V(x),color='black')
# plot psi squared which we shift up by values of energy
ax.plot(x,psi(v,x)**2 + E(v), lw=2)
# add lines and labels
ax.axhline(E(v), color='gray', linestyle='--')
ax.text(xmax, 1.2*E(v), f"v={v}")
ax.set_xlabel('x')
ax.set_ylabel('$\psi^2_n(x)$')
```
```python
```
```python
# First load the libs
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
%matplotlib inline
# and load interactive widgets
import ipywidgets as widgets
from IPython.display import display
L=0.3 # Try different wavelengths
x = np.linspace(0.0, 1.0, 1000)
y = np.sin(2 * np.pi * x/L)
plt.plot(x, y)
```
```python
```
```python
def wave(L):
x = np.linspace(0.0, 1.0, 1000)
y = np.sin(2*np.pi * x/L)
plt.plot(x, y)
# Now try comparing three different wavelengths
wave(0.5)
wave(1.0)
wave(2.0)
```
```python
# Interactive Plot
```
```python
@widgets.interact(L=(0.1,1))
def wavef(L=.1): # By writing wavef(L=0.1) inside function you can specify initial value
x = np.linspace(0, +1., 1000)
y = np.sin(np.pi * x/L)
plt.plot(x, y)
```
```python
```
```python
@widgets.interact(k=(2,20),t=(0,50.0,0.1))
def wavef2(k=10,t=0):
v=1 #velocity of waves
phi = 0.5 # vary initial phase between -2*np.pi and 2*np.pi
x = np.linspace(0, 1., 1000)
wave1 = np.sin(k*(x-v*t))
wave2= np.sin(k*(x-v*t)+phi) #try flipping the direction ofvelocity to get standing wave
plt.figure(figsize=(12,6))
plt.plot(x, wave1,'--', color='blue')
plt.plot(x,wave2,'--', color='green')
plt.plot(x, wave1+wave2,color='red')
plt.ylim([-2.5,2.5])
plt.legend(['Wave1','Wave2','Wave1+Wave2'])
plt.grid('on')
```
```python
```
```python
@widgets.interact(n=(1,10))
def wavef(n=1):
L=1
x = np.linspace(0, +1., 1000)
y = np.sin(n*np.pi * x/L)
plt.plot(x, y, lw=3, color='red')
plt.grid('on')
```
```python
@widgets.interact(n1=(1,5),n2=(1,5),phi=(0,2*np.pi),t=(0,20,0.2))
def wavef(n1=1,n2=1,phi=0,t=0):
L=1
omega=1
x = np.linspace(0, +1., 50)
mode1 = np.cos(omega*t) * np.sin(n1*np.pi * x/L)
mode2 = np.cos(omega*t + phi) * np.sin(n2*np.pi * x/L)
plt.plot(x, mode1+mode2, lw=5, color='orange')
plt.ylim([-2.5,2.5])
plt.grid('on')
```
```python
```
```python
@widgets.interact(n1=(1,10),m1=(1,10))
def membrane(n1=1, m1=1):
t=0
omega=1
Lx,Ly = 1,1 # size of memrbante
N=40 # number of grid points along X and Y
xs = np.linspace(0,Lx,N)
ys = np.linspace(0,Ly,N)
X,Y = np.meshgrid(xs,ys) # create 2D mesh of points along X and Y
mode1 = np.sin(n1*np.pi*X/Lx)*np.sin(m1*np.pi*Y/Ly)
fig, ax =plt.subplots()
ax.contourf(X,Y,mode1,40,cmap='RdBu')
```
```python
```
```python
# Vibrations of square 2D membrane as a linear combination of normal modes
@widgets.interact(n1=(1,10),m1=(1,10),n2=(1,10),m2=(1,10),phi=(0,2*np.pi),t=(0,100))
def membrane(n1=1, m1=1, n2=1, m2=1, phi=0, t=0):
omega=1
L=1 # size of memrbante
N=40 # number of grid points along X and Y
x = np.linspace(0,L,N)
y = np.linspace(0,L,N)
X,Y = np.meshgrid(x,y) # create 2D mesh of points along X and Y
mode1 = np.cos(omega*t) * np.sin(n1*np.pi*X/L) * np.sin(m1*np.pi*Y/L)
mode2 = np.cos(omega*t+phi) * np.sin(n2*np.pi*X/L) * np.sin(m2*np.pi*Y/L)
fig, ax = plt.subplots(figsize=(9,6))
ax = plt.axes(projection='3d') # Making a 3D plot
ax.set_zlim([-2.0,2.0])
ax.plot_surface(X,Y,mode1+mode2,cmap='RdYlBu') #Do the Plot
```
# Solving Time Independent Schrodinger Equation by
# Numerov Method
# SHO (1D Harmonic Quantum Oscillator)
# $ -\frac{\hbar^2}{2m}\frac{d^2\psi(x)}{dx^2}+\frac{1}{2}kx^2\psi(x)=E\psi(x)$
```python
# Hamiltonian matrix method to solve harmonic oscillator 15/02/2021
from matplotlib import pyplot as plt
import numpy as np
plt.style.use('fivethirtyeight')
hbar=1
m=1
omega=1
N = 2014
a = 20.0
x = np.linspace(-a/2.,a/2.,N)
h = x[1]-x[0] # Grid spacing, equal to a/(N-1)
V = .5*m*omega**2*x*x   # Harmonic potential 0.5*m*omega^2*x^2
# V[int(N/2)]=2/h # This would add a "delta" spike in the center.
Mdd = 1./(h*h)*(np.diag(np.ones(N-1),-1) -2* np.diag(np.ones(N),0) + np.diag(np.ones(N-1),1))
H = -(hbar*hbar)/(2.0*m)*Mdd + np.diag(V)
En,psiT = np.linalg.eigh(H) # This computes the eigen values and eigenvectors
psi = np.transpose(psiT)
# The psi array now contains the wave functions, ordered so that psi[n] is the n-th eigenstate.
plt.figure(figsize=(12,6))
plt.plot(x,psi[1],label='$\psi_1(x)$')
plt.plot(x,psi[2],label='$\psi_2(x)$')
plt.xlabel('x')
plt.ylabel('$\psi_n(x)$')
plt.legend(loc='upper left')
plt.show()
```
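As a quick sanity check (added note): with hbar = m = omega = 1 the analytic harmonic-oscillator eigenvalues are $E_n = n + \frac{1}{2}$, so the lowest numerical eigenvalues obtained above should reproduce them closely.
```python
# Added check: compare the lowest numerical eigenvalues (En from the cell above)
# with the analytic E_n = n + 1/2 (valid for hbar = m = omega = 1).
import numpy as np
for n in range(5):
    print("n = %d   E_numeric = %8.5f   E_analytic = %4.1f" % (n, En[n], n + 0.5))
```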
```python
```
```python
# Check the normalization of the wave function arrays.
notok=False
for n in range(len(psi)):
# s = np.sum(psi[n]*psi[n])
s = np.linalg.norm(psi[n]) # This does the same as the line above.
if np.abs(s - 1) > 0.00001: # Check if it is different from one.
print("Wave function {} is not normalized to 1 but {}".format(n,s))
notok=True
if not notok:
print("All the $\psi_n(x)$ are normalized.")
fig2 = plt.figure(figsize=[12,7])
plt.title('Harmonic Oscillator')
plt.ylabel('$\psi(x)$')
plt.xlabel('$x$')
plt.plot([0,0],[-6,V[0]],color="blue")
plt.plot([-a/2.,a/2.],[0,0],color="blue")
plt.plot(x,0.1*V,color="grey",label="V(x) scaled by 0.1")
plt.ylim((-.8,1.))
plt.xlim((-6.,6.))
for i in range(0,5):
if psi[i][int(N/8)] < 0:
plt.plot(x,-psi[i]/np.sqrt(h),label="$E_{}$={:3.1f}".format(i,En[i]))
else:
plt.plot(x,psi[i]/np.sqrt(h),label="$E_{}$={:3.1f}".format(i,En[i]))
plt.title("Solution to harmonic oscillator")
plt.legend()
plt.savefig("Harmonic_Oscillator_WaveFunctions.pdf")
plt.show()
```
```python
```
We now want to plot the initial wave function. We could try to shift the already calculated $\psi_0(x)$ array, but it is better to compute the function from a formula. The *main reason* for making this plot is to make sure the *normalization* of the function is correct. We want the *same normalization* as our $\psi_n(x)$ arrays, so we need to multiply the function by $\sqrt{\Delta x}$, which we then divide out again when we plot. Doing this makes sure that the $c_n$ factors we compute later are correct and have the same normalization.
```python
fig2 = plt.figure(figsize=[10,7])
plt.title('Harmonic Oscillator and displaced ground state.')
plt.ylabel('$\psi(x)$')
plt.xlabel('$x$')
plt.plot([0,0],[-6,V[0]],color="blue")
plt.plot([-a/2.,a/2.],[0,0],color="blue")
plt.plot(x,0.1*V,color="grey",label="V(x) scaled by 0.1")
plt.ylim((-.1,1.))
plt.xlim((-5.,8.))
a0=5.
alpha = (m*omega/(np.pi*hbar))**0.25
psi0 = np.sqrt(h)*alpha*np.exp(-(x-a0)**2*m*omega/(2*hbar)) # This is the formula for the displaced state.
n0 = np.linalg.norm(psi0)
print("Check the normalization of psi0: ",n0)
plt.plot(x,psi0/np.sqrt(h),label="Displaced state $\Psi(x,0)$")
plt.plot(x,-psi[0]/np.sqrt(h),label="Ground state $\psi_0(x)$")
plt.legend()
plt.savefig("Displaced_state.pdf")
plt.show()
```
We can now compute the $c_n$ factors as a simple sum of the product of the initial wave and the $\psi_n(x)$ eigen states. There are many ways to accomplish this, looping over the $N$ eigen states. The way I do this below is particularly efficient, creating a new Numpy array of the $c_n$ factors. The line commented out does the calculation using list comprehension. The second line does this as a matrix multiplication.
We now compute the energy of this initial state, so we can plot the corresponding line on our graph. This was not part of the homework assignment, but it is nice to be able to draw it. We compute the energy as $<E>=\sum_n |c_n|^2 E_n$, and also directly as $E=\int \Psi^*(x,0)\,\hat{\mathrm{H}}\,\Psi(x,0)\,dx$, and compare the two.
Once we have the $c_n$ we can sum them to make sure the normalization is indeed correct: $\sum_n |c_n|^2 =1$.
We can also use them to calculate the expectation value of the energy of the state, $<E>=\sum_n |c_n|^2 E_n$, which we can check against an algebraic computation:
$$ <E> = <T> + <V> = \frac{1}{2}\hbar\omega + \frac{1}{2}m\omega^2 |x_0|^2 = \frac{1}{2} + \frac{1}{2} 5^2 = 13$$
And finally we can also compute it directly from the Hamiltonian:
$$<E> = <\Psi(x,0)| \hat{\mathrm{H}} | \Psi(x,0)> = \int \Psi^*(x,0)H \Psi(x,0) dx$$.
```python
```
```python
# cn=np.array([np.sum(psi[i]*psi0) for i in range(N)],dtype='float')
cn = psi.dot(psi0)
print(cn[0:18])
print("Check sum: {:6.4f}".format(np.sum(cn*cn)))
E = np.sum(np.conjugate(cn)*cn*En)
print ("<E> = {:9.4f}".format(E))
E_check = np.sum( np.conjugate(psi0)*H.dot(psi0))
print("Check E=",E_check)
```
We now want to compute:
$$\Psi(x,t)=\sum_{n=0}^{\infty} c_n \psi_n(x) e^{-i(n+\frac{1}{2})\omega t}$$
For a particular time $t$ we thus want an array of numbers representing $\Psi$ at that time. A technical detail here is that, for each $x_i$, we want to sum the products $c_n\,\psi_n(x_i)\,\phi_n(t)$ over all $n$. Our wavefunctions are arranged in such a way that this would be a sum over the columns instead of over the rows (which is what we did when checking the normalization). We can circumvent this problem by using the original transposed version, psiT, of the wavefunctions. Below are three different implementations of this function: the first one is a bit more straightforward but slow, the second is a bit faster (about a factor of 2.5), and the third is the fastest.
At this point you should start to worry about numerical accuracy. It turns out that for this particular situation the method we are using is accurate enough, and the solution is *stable*, that is, there are no diverging terms in the calculation. This will not be true for all situations where you solve the Schrödinger equation using matrix inversion (i.e. finding the eigen vectors). In many situations, including scattering, you will need to use more sophisticated ways of obtaining solutions.
```python
```
```python
# This version creates an array of zeros, to which it then sequentially adds each of the terms in the sum.
# Note that we use the global psi array.
def psi_xt(t,cn):
out = np.zeros(N,dtype='complex128')
for n in range(N):
out += cn[n]*psi[n]*np.exp(-1j*(n+0.5)*omega*t)
return(out)
# This version uses a matrix-vector product (psiT.dot) to accomplish the same thing as the function above.
def psi_xt2(t,cn):
n = np.arange(len(cn))
times = np.exp(-1j*(n+0.5)*omega*t)
out = psiT.dot(cn*times)
return(out)
# This version also uses a matrix-vector product, together with the previously calculated energies.
# This way, psi_xt3 will work even if the potential is distorted and the energy levels are no longer (n+0.5)*hbar*omega
def psi_xt3(t,cn):
out = psiT.dot(cn*np.exp(-1j*En*t/hbar))
return(out)
```
```python
```
```python
fig2 = plt.figure(figsize=[10,5])
plt.title('Harmonic Oscillator')
plt.ylabel('$\psi(x)$')
plt.xlabel('$x$')
plt.plot([0,0],[-6,V[0]],color="blue")
plt.plot([-a/2.,a/2.+1.],[0,0],color="blue")
plt.plot(x,0.05*V,color="grey",label="V(x) scaled by 0.05")
plt.plot([-a/2.,a/2.],[E*0.05,E*0.05],color="grey",linestyle="dashed",label="<E> scaled by 0.05")
plt.ylim((-.1,0.8))
# plt.plot(x,psi0/np.sqrt(h),color='#dddddd')
for t in [0.,np.pi/4.,np.pi/2.,3.*np.pi/4.,np.pi]: # np.linspace(0,np.pi,8):
print(t)
plt.plot(x,np.abs(psi_xt3(t,cn))**2/h,label="t={:5.3}$\pi$".format(t/np.pi))
plt.legend()
plt.savefig("Displaced_state_vs_time.pdf")
plt.show()
```
```python
```
```python
import matplotlib.animation as animation
from IPython.display import HTML
fig3 = plt.figure(figsize=[10,7])
ax = fig3.add_subplot(111, autoscale_on=False, xlim=(-10, 10), ylim=(-0.1, 1.))
ax.grid()
line, = ax.plot([], [], lw=2,color='red')
time_template = 'time = {:9.2f}s'
time_text = ax.text(0.05, 0.93, '', transform=ax.transAxes)
def init():
plt.title('Harmonic Oscillator')
plt.ylabel('$\psi(x)$')
plt.xlabel('$x$')
plt.plot([0,0],[-6,V[0]],color="blue")
plt.plot([-a/2.,a/2.],[0,0],color="blue")
plt.plot(x,0.05*V,color="grey",label="V(x) scaled by 0.05")
plt.plot([-a/2.,a/2.],[E*0.05,E*0.05],color="grey",linestyle="dashed",label="<E> scaled by 0.05")
line.set_data([], [])
time_text.set_text(time_template.format(0.))
return line, time_text
def animate(t):
#t = (float(i)/100.)*(4.*np.pi/omega)
line.set_data(x,np.abs(psi_xt3(t,cn)/np.sqrt(h)))
time_text.set_text(time_template.format(t))
return line, time_text
frame_rate = 30 # Frame rate in Hz. Make higher for smoother movie, but it takes longer to compute.
time_slowdown = 10 # Run time x times slower than normal. Since omega=1, we want this about 10.
ani = animation.FuncAnimation(fig3, animate, np.linspace(0,2*np.pi/omega,frame_rate*time_slowdown),
interval=1000./frame_rate, blit=True, init_func=init)
HTML(ani.to_html5_video())
```
```python
frame_rate = 30 # Frame rate in Hz. Make higher for smoother movie, but it takes longer to compute.
time_slowdown = 10 # Run time x times slower than normal. Since omega=1, we want this about 10.
ani = animation.FuncAnimation(fig3, animate, np.linspace(0,2*np.pi/omega,frame_rate*time_slowdown),
interval=1000./frame_rate, blit=True, init_func=init)
HTML(ani.to_jshtml())
```
```python
# Hamiltonian matrix method to solve anharmonic oscillator 15/02/2021
# 15/02/2021
from matplotlib import pyplot as plt
import numpy as np
plt.style.use('fivethirtyeight')
hbar=1
m=1
omega=1
N = 2014
a = 10.0
x = np.linspace(-a/2.,a/2.,N)
h = x[1]-x[0] # Grid spacing, equal to a/(N-1)
V = .5*m*omega**2*x*x+.1*x**4   # Anharmonic potential 0.5*m*omega^2*x^2 + 0.1*x^4
#V[int(N/2)]=2/h # This would add a "delta" spike in the center.
Mdd = 1./(h*h)*(np.diag(np.ones(N-1),-1) -2* np.diag(np.ones(N),0) + np.diag(np.ones(N-1),1))
H = -(hbar*hbar)/(2.0*m)*Mdd + np.diag(V)
En,psiT = np.linalg.eigh(H) # This computes the eigen values and eigenvectors
psi = np.transpose(psiT)
# The psi array now contains the wave functions, ordered so that psi[n] is the n-th eigenstate.
plt.figure(figsize=(12,6))
plt.plot(x,psi[1],label='$\psi_1(x)$')
plt.plot(x,psi[2],label='$\psi_2(x)$')
plt.xlabel('x',size=24)
plt.ylabel('$\psi_n(x)$',size=24)
plt.legend(loc='upper left')
plt.show()
```
```python
En[0:4]
```
```python
N=10
(np.diag(np.ones(N-1),-1) -2* np.diag(np.ones(N),0) + np.diag(np.ones(N-1),1))
```
```python
```
```python
# Hamiltonian matrix method to solve inside solution of infinite well 15/02/2021
# 15/02/2021
from matplotlib import pyplot as plt
import numpy as np
plt.style.use('fivethirtyeight')
hbar=1
m=1
omega=1
N = 2014
a = 10.0
x = np.linspace(-a/2.,a/2.,N)
h = x[1]-x[0] # Grid spacing, equal to a/(N-1)
#V = .5*m*omega*x*x+.1*x**4
V = 0*x
#V[int(N/2)]=2/h # This would add a "delta" spike in the center.
Mdd = 1./(h*h)*(np.diag(np.ones(N-1),-1) -2* np.diag(np.ones(N),0) + np.diag(np.ones(N-1),1))
H = -(hbar*hbar)/(2.0*m)*Mdd + np.diag(V)
En,psiT = np.linalg.eigh(H) # This computes the eigen values and eigenvectors
psi = np.transpose(psiT)
# The psi array now contains the wave functions, ordered so that psi[n] is the n-th eigenstate.
plt.figure(figsize=(12,6))
plt.plot(x,psi[1],label='$\psi_1(x)$')
plt.plot(x,psi[2],label='$\psi_2(x)$')
plt.xlabel('x')
plt.ylabel('$\psi_n(x)$')
plt.legend(loc='upper left')
plt.show()
```
```python
En
```
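A short added check: the Dirichlet boundaries of the grid act as infinite walls a distance $a$ apart, so the eigenvalues found above should be close to the particle-in-a-box values $E_n = \dfrac{n^2\pi^2\hbar^2}{2ma^2}$.
```python
# Added check: compare with the infinite-well levels, using a, m, hbar and En from the cell above
import numpy as np
for n in range(1, 5):
    print("n = %d   E_numeric = %8.5f   E_analytic = %8.5f" % (n, En[n-1], (n*np.pi*hbar/a)**2/(2*m)))
```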
```python
import matplotlib.pyplot as plt
def simp13dis(h,fx):
    # Composite Simpson 1/3 rule for equally spaced samples fx (len(fx) should be odd)
    n=len(fx)
    I=0
    for i in range(n):
        if i==0 or i==n-1:   # end points get weight 1
            I+=fx[i]
        elif i%2!=0:
            I+=4*fx[i]
        else:
            I+=2*fx[i]
    I=I*h/3
    return I
# Normalisation of discrete wavefunction
def norm(psi,dx):
N=len(psi)
psi2=[psi[i]**2 for i in range(N)]
psimod2=simp13dis(dx,psi2)
normpsi=[psi[i]/psimod2**0.5 for i in range(N)]
return normpsi
import matplotlib.pyplot as plt
# Solution of Schrodinger equation:
def wavefn(mhdx2,psi,vi,E):
N=len(psi)
psiE=[psi[i] for i in range(N)]
P=[mhdx2*(vi[i]-E) for i in range(N)]
for i in range(2,N):
d=1-1/12*P[i]
a=2*(1+5/12*P[i-1])
b=-(1-1/12*P[i-2])
psiE[i]=(a/d)*psiE[i-1]+(b/d)*psiE[i-2]
return psiE
def NumerovSch(mhdx2,vi,psi0,psi1,psiN,nodes,mxItr):
N=len(vi)-1
Emx=max(vi)
Emn=min(vi)
psiIn=[0 for i in range(N+1)]
psiIn[0],psiIn[1],psiIn[N]=psi0,psi1,psiN
itr=0
while abs(Emx-Emn)>1e-6 and itr<mxItr:
E=.5*(Emx+Emn)
psi=wavefn(mhdx2,psiIn,vi,E)
# Node counting
cnt=0
for i in range(1,N-2):
if psi[i]*psi[i+1]<0:
cnt+=1
if cnt>nodes:
Emx=E
elif cnt<nodes:
Emn=E
else:
if psi[N-1]>psi[N]:
Emn=E
elif psi[N-1]<psi[N]:
Emx=E
itr+=1
if itr<mxItr:
return E,psi
else:
return None,None
def V(k,x):
return .5*k*x*x
hbar,m=.1,1.0
dx=.01
mxItr=100
psi0,psiN=0,0
k=1
stln=['b','r','m','g','c','y']
x0,xN=[-1.2,-1.4,-1.5,-1.6,-1.7,-1.8],[1.2,1.4,1.5,1.6,1.7,1.8]
psi0,psiN=0,0
for nodes in range(4):
N=int((xN[nodes]-x0[nodes])/dx)
dx=(xN[nodes]-x0[nodes])/N
mhdx2=2*m*dx**2/hbar**2
x=[x0[nodes]+i*dx for i in range(N+1)]
Vi=[V(k,x[i]) for i in range(N+1)]
psi1=(-1)**nodes*1e-4
E,psi=NumerovSch(mhdx2,Vi,psi0,psi1,psiN,nodes,mxItr)
if E!=None:
psi=norm(psi,dx)
#plt.figure(figsize=(12,6))
plt.plot(x,psi,stln[nodes],label=r'E=%.4f$\psi_%d(x)$'%(E,nodes))
#plt.plot(x,Vi,'k--',label='Potential')
xax=[0 for i in range(N+1)] # x-axis
#plt.figure(figsize=(12,6))
plt.plot(x,xax,'k')
plt.plot(x,Vi,'k--',label='Potential')
plt.legend(loc='lower left')
plt.xlabel('x',size=20)
plt.ylabel('$\psi(x)$',size=30)
#plt.figure(figsize=(12,6))
plt.show()
```
# The harmonic oscillator wavefunctions:
The harmonic oscillator is often used as an approximate model for the behaviour of some quantum systems, for example the vibrations of a diatomic molecule. The Schrödinger equation for a particle of mass m moving in one dimension in a potential $ V(x)=\frac{1}{2}kx^2$ is
# $ -\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2}+\frac{1}{2}kx^2\psi=E\psi.$
With the change of variable $ q=(mk/\hbar^2)^{1/4}x $, this equation becomes
$-\frac{1}{2}\frac{d^2\psi}{dq^2}+\frac{1}{2}q^2\psi=\frac{E}{\hbar\omega}\psi$
where $\omega=\sqrt{k/m}$. This differential equation has an exact solution in terms of a quantum number $v=0,1,2,\ldots$:
#$\psi(q)=N_vH_v(q)\exp(-q^2/2)$
where $N_v=(\sqrt{\pi}\,2^v\,v!)^{-1/2}$ is a normalization constant and $H_v(q)$ is the Hermite polynomial of order $v$, defined by:
$H_v(q)=(-1)^v e^{q^2}\frac{d^v}{dq^v}\left(e^{-q^2}\right).$
The Hermite polynomials obey a useful recursion formula:
$H_{n+1}(q)=2qH_n(q)-2nH_{n-1}(q)$,
so given the first two, $H_0=1$ and $H_1=2q$, we can calculate all the others.
The code from the scipython page linked above plots the harmonic oscillator wavefunctions (PLOT_PROB = False) or probability densities (PLOT_PROB = True) for vibrational levels up to VMAX, drawn on the corresponding vibrational energy levels within the potential $V=q^2/2$.
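As a small added check (not from the original text), the recursion can be verified against scipy.special.hermite:
```python
# Build H_2 ... H_5 from the recursion H_{n+1}(q) = 2q H_n(q) - 2n H_{n-1}(q)
# and compare with scipy.special.hermite on a few sample points.
import numpy as np
from scipy.special import hermite
q = np.linspace(-2.0, 2.0, 7)
H_prev, H_curr = np.poly1d([1.0]), np.poly1d([2.0, 0.0])   # H_0 = 1, H_1 = 2q
for n in range(1, 5):
    H_next = np.poly1d([2.0, 0.0])*H_curr - 2*n*H_prev     # one recursion step
    H_prev, H_curr = H_curr, H_next
    print("H_%d matches SciPy: %s" % (n+1, np.allclose(H_curr(q), hermite(n+1)(q))))
```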
http://phys.ubbcluj.ro/~tbeu/INP/programs.html
```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.constants import hbar
m= 1e-27
E= 1.5
def numerov_step(psi_1,psi_2,k1,k2,k3,h):
#k1=k_(n-1), k2=k_n, k3=k_(n+1)
#psi_1 = psi_(n-1) and psi_2=psi_n
m = 2*(1-(5/12) * h**2 * k2**2)*psi_2
n = (1+(1/12)*h**2*k1**2)*psi_1
o = 1 + (1/12) *h**2 *k3**2
return (m-n)/o
def numerov(N,x0,xE,a):
    x,dx = np.linspace(x0,xE,N+1,retstep=True)
    def V(x,a):
        #if (np.abs(x)<a):
        return .5*x**2
        #else:
        #    return 0
    k = np.zeros(N+1)
    for i in range(len(k)):
        k[i] = 2*m*(E-V(x[i],a))/hbar**2
    psi= np.zeros(N+1)
    psi[0]=0
    psi[1]=.10
    psi[2]=1.0
    for j in np.arange(2,N):
        # advance using the two previously computed points psi[j-1] and psi[j]
        psi[j+1]= numerov_step(psi[j-1],psi[j],k[j-1],k[j],k[j+1],dx)
    return psi
x0 =-3
xE = 3
N =1000
psi=numerov(N,x0,xE,3)
x = np.linspace(x0,xE,N+1)
y=[psi[i] for i in range(len(x))]
plt.figure()
plt.plot(x,psi)
plt.show()
```
```python
import pylab as lab
import math
N = 60000 # iterations
h = 0.0001
h2 = pow(h,2)
epsilon = 3.5 # n+1/2
y = 0.0
k = 0.0
x = -1*(N-2)*h
k_minus_2 = epsilon + x-2*h # k_0
k_minus_1 = epsilon + x-h # k_1
a = 0.1
y_minus_2 = 0 # y_0
y_minus_1 = a # y_1
x_out = []
y_out = []
n=-1*N+2
while n<N-2:
n+=1
x += h;
k = 2*epsilon - pow(x, 2)
b = h2/12
y = ( 2*(1-5*b*k_minus_1) * y_minus_1 - (1+b*k_minus_2) * y_minus_2 ) / (1 + b * k)
# Save for plotting
x_out.append(x)
y_out.append(y)
# Shift for next iteration
y_minus_2 = y_minus_1
y_minus_1 = y
k_minus_2 = k_minus_1
k_minus_1 = k
# Plot
lab.figure(1)
lab.plot(x_out, y_out, label="$\epsilon = "+repr(epsilon)+"$")
lab.xlabel("x")
lab.ylabel("y")
lab.title("Schroedinger Eqn in Harmonic Potential")
lab.legend(loc=1)
lab.show()
lab.grid()
```
```python
from numpy import *
from scipy import integrate
from scipy import optimize
def Schroed_deriv(y,r,l,En):
"Given y=[u,u'] returns dy/dr=[u',u''] "
(u,up) = y
return array([up, (l*(l+1)/r**2-2/r-En)*u])
R = linspace(1e-10,20,500)
l=0
E0=-1.0
ur = integrate.odeint(Schroed_deriv, [0.0, 1.0], R, args=(l,E0))
from pylab import *
%matplotlib inline
plot(R,ur)
grid()
show()
```
```python
R = linspace(1e-10,20,500)
l=0
E0=-1.0
Rb=R[::-1] # invert the mesh
urb = integrate.odeint(Schroed_deriv, [0.0, -1e-5], Rb, args=(l,E0))
ur = urb[:,0][::-1] # we take u(r) and invert it in R.
norm=integrate.simps(ur**2,x=R)
ur *= 1./sqrt(norm)
```
```python
plot(R,ur)
grid()
show()
plot(R,ur)
xlim(0,0.01)
ylim(0,0.01)
show()
```
```python
# Subroutine
def SolveSchroedinger(En,l,R):
Rb=R[::-1]
du0=-1e-5
urb=integrate.odeint(Schroed_deriv, [0.0,du0], Rb, args=(l,En))
ur=urb[:,0][::-1]
norm=integrate.simps(ur**2,x=R)
ur *= 1./sqrt(norm)
return ur
l=1
En=-1./(2**2) # 2p orbital
l=1
En = -0.25
Ri = linspace(1e-6,20,500) # linear mesh already fails for this case
ui = SolveSchroedinger(En,l,Ri)
R = logspace(-5,2.,500)
ur = SolveSchroedinger(En,l,R)
#ylim([-0.1,0.1])
plot(R,ur,'g-')
#plot(Ri,ui,'s-')
xlim([0,20])
```
```python
def Shoot(En,R,l):
Rb=R[::-1]
du0=-1e-5
ub=integrate.odeint(Schroed_deriv, [0.0,du0], Rb, args=(l,En))
ur=ub[:,0][::-1]
norm=integrate.simps(ur**2,x=R)
ur *= 1./sqrt(norm)
ur = ur/R**l
f0 = ur[0]
f1 = ur[1]
f_at_0 = f0 + (f1-f0)*(0.0-R[0])/(R[1]-R[0])
return f_at_0
```
```python
R = logspace(-5,2.2,500)
Shoot(-1./2**2,R,1)
```
```python
def FindBoundStates(R,l,nmax,Esearch):
n=0
Ebnd=[]
u0 = Shoot(Esearch[0],R,l)
for i in range(1,len(Esearch)):
u1 = Shoot(Esearch[i],R,l)
if u0*u1<0:
Ebound = optimize.brentq(Shoot,Esearch[i-1],Esearch[i],xtol=1e-16,args=(R,l))
Ebnd.append((l,Ebound))
if len(Ebnd)>nmax: break
n+=1
print('Found bound state at E=%14.9f E_exact=%14.9f l=%d' % (Ebound, -1.0/(n+l)**2,l))
u0=u1
return Ebnd
Esearch = -1.2/arange(1,20,0.2)**2
R = logspace(-6,2.2,500)
nmax=7
Bnd=[]
for l in range(nmax-1):
Bnd += FindBoundStates(R,l,nmax-l,Esearch)
```
```python
def cmpE(x,y):
if abs(x[1]-y[1])>1e-4:
return cmp(x[1],y[1])
else:
return cmp(x[0],y[0])
```
```python
Bnd
```
```python
Z=28 # like Ni
N=0
rho=zeros(len(R))
for (l,En) in Bnd:
ur = SolveSchroedinger(En,l,R)
dN = 2*(2*l+1)
if N+dN<=Z:
ferm=1.
else:
ferm=(Z-N)/float(dN)
drho = ur**2 * ferm * dN/(4*pi*R**2)
rho += drho
N += dN
print('adding state (%2d,%14.9f) with fermi=%4.2f and current N=%5.1f' % (l,En,ferm,N))
if N>=Z: break
```
```python
from pylab import *
%matplotlib inline
plot(R,rho*(4*pi*R**2),label='charge density')
xlim([0,25])
show()
```
# Numerov algorithm
The general-purpose integration routine is not the best method for solving the Schroedinger equation, which has no first-derivative term.
The Numerov algorithm is a better fit for such equations, and is summarized below.
The second-order linear differential equation (DE) of the form
$$x''(t)=f(t)\,x(t)+u(t)$$
is the target of the Numerov algorithm.
Due to the special structure of the DE, the fourth-order error cancels, so a second-order integration scheme leads to a sixth-order algorithm.
If we expand $x(t)$ to higher powers and take into account the time-reversal symmetry of the equation, all odd terms cancel:
$$x(h)=x(0)+hx'(0)+\frac{1}{2}h^2x''(0)+\frac{1}{3!}h^3x^{(3)}(0)+\frac{1}{4!}h^4x^{(4)}(0)+\frac{1}{5!}h^5x^{(5)}(0)+\cdots$$
$$x(-h)=x(0)-hx'(0)+\frac{1}{2}h^2x''(0)-\frac{1}{3!}h^3x^{(3)}(0)+\frac{1}{4!}h^4x^{(4)}(0)-\frac{1}{5!}h^5x^{(5)}(0)+\cdots$$
hence
$$x(h)+x(-h)=2x(0)+h^2\left(f(0)x(0)+u(0)\right)+\frac{2}{4!}h^4x^{(4)}(0)+O(h^6).$$
If we are happy with an $O(h^4)$ algorithm, we can neglect the $x^{(4)}$ term and get the recursion relation
$$x_{i+1}-2x_i+x_{i-1}=h^2(f_i x_i+u_i).$$
But we know from the differential equation that
$$x^{(4)}=\frac{d^2x''(t)}{dt^2}=\frac{d^2}{dt^2}\left(f(t)x(t)+u(t)\right),$$
which can be approximated by
$$x^{(4)}\approx\frac{f_{i+1}x_{i+1}+u_{i+1}-2f_i x_i-2u_i+f_{i-1}x_{i-1}+u_{i-1}}{h^2}.$$
Inserting this fourth derivative into the recursion above, we get
$$x_{i+1}-2x_i+x_{i-1}=h^2(f_i x_i+u_i)+\frac{h^2}{12}\left(f_{i+1}x_{i+1}+u_{i+1}-2f_i x_i-2u_i+f_{i-1}x_{i-1}+u_{i-1}\right).$$
If we switch to a new variable $w_i=x_i\left(1-\frac{h^2}{12}f_i\right)-\frac{h^2}{12}u_i$, we are left with the following equation
$$w_{i+1}-2w_i+w_{i-1}=h^2(f_i x_i+u_i)+O(h^6).$$
The variable $x$ needs to be recomputed at each step with $x_i=\left(w_i+\frac{h^2}{12}u_i\right)\big/\left(1-\frac{h^2}{12}f_i\right)$.
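To make the recursion concrete, here is a minimal added example (not from the original text) applying it to $x''=-x$, i.e. $f=-1$ and $u=0$, whose exact solution with $x(0)=0$, $x'(0)=1$ is $\sin t$:
```python
# Minimal illustration of the Numerov recursion derived above, applied to x'' = -x
import numpy as np
h, N = 0.01, 1000
t = np.arange(N)*h
f = -np.ones(N)                        # f(t) = -1, u(t) = 0
x = np.zeros(N)
x[0], x[1] = 0.0, np.sin(h)            # exact values start the three-term recursion
w = x*(1.0 - h*h/12.0*f)               # w_i = x_i*(1 - h^2 f_i/12)
for i in range(1, N-1):
    w[i+1] = 2.0*w[i] - w[i-1] + h*h*f[i]*x[i]
    x[i+1] = w[i+1]/(1.0 - h*h/12.0*f[i+1])
print("max |x - sin(t)| =", np.max(np.abs(x - np.sin(t))))   # should be tiny; the scheme is 4th-order accurate
```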
```python
# NUMEROV Solution Schrodinger Equation: Hydrogen Atom
from pylab import *
%matplotlib inline
#import weave    # scipy.weave (Python 2 only) is no longer available; the same recursion is done in pure Python below
from scipy import integrate, optimize
def Numerovc(f, x0_, dx, dh_):
    """Numerov integration of x'' = f(t)*x on a uniform grid, given x[0] = x0_ and x'(0) = dx."""
    dh = float(dh_)
    h2 = dh*dh
    h12 = h2/12.
    x = zeros(len(f))
    x[0] = x0_
    x[1] = x0_ + dh*dx
    w0 = x[0]*(1-h12*f[0])
    w1 = x[1]*(1-h12*f[1])
    xi = x[1]
    fi = f[1]
    for i in range(2, len(f)):
        w2 = 2*w1 - w0 + h2*fi*xi   # here fi and xi still refer to the previous point
        fi = f[i]
        xi = w2/(1-h12*fi)
        x[i] = xi
        w0, w1 = w1, w2
    return x
def fSchrod(En, l, R):
return l*(l+1.)/R**2-2./R-En
def ComputeSchrod(En,R,l):
"Computes Schrod Eq."
f = fSchrod(En,l,R[::-1])
ur = Numerovc(f,0.0,-1e-7,-R[1]+R[0])[::-1]
norm = integrate.simps(ur**2,x=R)
return ur*1/sqrt(abs(norm))
def Shoot(En,R,l):
ur = ComputeSchrod(En,R,l)
ur = ur/R**l
f0 = ur[0]
f1 = ur[1]
f_at_0 = f0 + (f1-f0)*(0.0-R[0])/(R[1]-R[0])
return f_at_0
def FindBoundStates(R,l,nmax,Esearch):
n=0
Ebnd=[]
u0 = Shoot(Esearch[0],R,l)
for i in range(1,len(Esearch)):
u1 = Shoot(Esearch[i],R,l)
if u0*u1<0:
Ebound = optimize.brentq(Shoot,Esearch[i-1],Esearch[i],xtol=1e-16,args=(R,l))
Ebnd.append((l,Ebound))
if len(Ebnd)>nmax: break
n+=1
print('Found bound state at E=%14.9f E_exact=%14.9f l=%d' % (Ebound, -1.0/(n+l)**2,l))
u0=u1
return Ebnd
def cmpE(x,y):
    # Python 3 has no built-in cmp(); emulate it (use with functools.cmp_to_key when sorting)
    cmp = lambda a, b: (a > b) - (a < b)
    if abs(x[1]-y[1])>1e-4:
        return cmp(x[1],y[1])
    else:
        return cmp(x[0],y[0])
Esearch = -1.2/arange(1,20,0.2)**2
R = linspace(1e-8,100,2000)
nmax=5
Bnd=[]
for l in range(nmax-1):
Bnd += FindBoundStates(R,l,nmax-l,Esearch)
#Bnd.sort(cmpE)
Z=28 # Like Ni ion
N=0
rho=zeros(len(R))
for (l,En) in Bnd:
#ur = SolveSchroedinger(En,l,R)
ur = ComputeSchrod(En,R,l)
dN = 2*(2*l+1)
if N+dN<=Z:
ferm=1.
else:
ferm=(Z-N)/float(dN)
drho = ur**2 * ferm * dN/(4*pi*R**2)
rho += drho
N += dN
print('adding state', (l,En), 'with fermi=', ferm)
plot(R, drho*(4*pi*R**2))
if N>=Z: break
xlim([0,25])
show()
plot(R,rho*(4*pi*R**2),label='charge density')
xlim([0,25])
show()
```
# HYDROGEN ATOM: SCHRODINGER EQUATION
# $ \hat {H} (r , \theta , \varphi ) \psi (r , \theta , \varphi ) = E \psi ( r , \theta , \varphi) $
# $ \hat {V} (r) = - \dfrac {e^2}{4 \pi \epsilon _0 r } $
# $$\left \{ -\dfrac {\hbar ^2}{2 \mu r^2} \left [ \dfrac {\partial}{\partial r} \left (r^2 \dfrac {\partial}{\partial r} \right ) + \dfrac {1}{\sin \theta } \dfrac {\partial}{\partial \theta } \left ( \sin \theta \dfrac {\partial}{\partial \theta} \right ) + \dfrac {1}{\sin ^2 \theta} \dfrac {\partial ^2}{\partial \varphi ^2} \right ] -
\dfrac {e^2}{4 \pi \epsilon _0 r } \right \} \psi (r , \theta , \varphi ) = E \psi (r , \theta , \varphi ) $$
# $\color{red} \psi (r , \theta , \varphi ) =\color{green} R (r) Y (\theta , \varphi ) $
#$ \bbox[pink]{\frac{1}{R}\frac{\rm d}{{\rm d}r}\left(r^2\frac{{\rm d}R}{{\rm d}r}\right)}+\bbox[lightblue]{\frac{1}{Y\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial Y}{\partial\theta}\right)+\frac{1}{Y\sin^2\theta}\frac{\partial^2Y}{\partial\phi^2}}+\bbox[pink]{\frac{2\mu r^2}{\hbar^2}\left(E+\frac{Ze^2}{4\pi\epsilon_0r}\right)}=0\qquad.
$
# Energy eigenvalues of Hydrogen atom:
$ E_n = -\dfrac{\mu e^4}{8 \epsilon_0^2 h^2 n^2} = -\dfrac{13.6\ \mathrm{eV}}{n^2}, \qquad n = 1, 2, 3, \ldots $
# NORMALIZATION CONDITION:
$ \displaystyle\int_0^\infty |R_{nl}(r)|^2\, r^2\, dr = 1 \qquad \text{and} \qquad \int_0^{2\pi}\!\!\int_0^{\pi} |Y_l^m(\theta,\varphi)|^2 \sin\theta\, d\theta\, d\varphi = 1 $
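As a small numerical illustration of these two statements (a sketch added here; it simply re-evaluates the standard radial functions, which are also used later in this notebook):
```python
# Check the radial normalization  int_0^inf |R_nl(r)|^2 r^2 dr = 1  (units a_0 = 1)
# and print the eigenvalue formula E_n = -13.6 eV / n^2 alongside it.
import numpy as np
import scipy.special as spe
from scipy import integrate

def R_nl(r, n=1, l=0):
    coeff = np.sqrt((2.0/n)**3 * spe.factorial(n-l-1) / (2.0*n*spe.factorial(n+l)))
    return coeff * np.exp(-r/n) * (2.0*r/n)**l * spe.assoc_laguerre(2.0*r/n, n-l-1, 2*l+1)

r = np.linspace(1e-6, 400, 40000)
for n, l in [(1, 0), (2, 0), (2, 1), (3, 2)]:
    norm = integrate.simps(R_nl(r, n, l)**2 * r**2, r)
    print('n=%d l=%d  norm=%.6f  E_n=%.3f eV' % (n, l, norm, -13.6/n**2))
```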
```python
# CODE TO PLOT THE ABOVE FUNCTION
import numpy as np
from scipy import constants as const
from scipy import sparse as sparse
from scipy.sparse.linalg import eigs
from matplotlib import pyplot as plt
hbar = const.hbar
e = const.e
m_e = const.m_e
pi = const.pi
epsilon_0 = const.epsilon_0
joul_to_eV = e
def calculate_potential_term(r):
potential = e**2 / (4.0 * pi * epsilon_0) / r
potential_term = sparse.diags((potential))
return potential_term
def calculate_angular_term(r):
angular = l * (l + 1) / r**2
angular_term = sparse.diags((angular))
return angular_term
def calculate_laplace_three_point(r):
h = r[1] - r[0]
main_diag = -2.0 / h**2 * np.ones(N)
off_diag = 1.0 / h**2 * np.ones(N - 1)
laplace_term = sparse.diags([main_diag, off_diag, off_diag], (0, -1, 1))
return laplace_term
def build_hamiltonian(r):
laplace_term = calculate_laplace_three_point(r)
angular_term = calculate_angular_term(r)
potential_term = calculate_potential_term(r)
hamiltonian = -hbar**2 / (2.0 * m_e) * (laplace_term - angular_term) - potential_term
return hamiltonian
N = 2000
l = 0
r = np.linspace(2e-9, 0.0, N, endpoint=False)
hamiltonian = build_hamiltonian(r)
""" solve eigenproblem """
number_of_eigenvalues = 30
eigenvalues, eigenvectors = eigs(hamiltonian, k=number_of_eigenvalues, which='SM')
""" sort eigenvalue and eigenvectors """
eigenvectors = np.array([x for _, x in sorted(zip(eigenvalues, eigenvectors.T), key=lambda pair: pair[0])])
eigenvalues = np.sort(eigenvalues)
""" compute probability density for each eigenvector """
densities = [np.absolute(eigenvectors[i, :])**2 for i in range(len(eigenvalues))]
def plot(r, densities, eigenvalues):
plt.xlabel('x ($\\mathrm{\AA}$)')
plt.ylabel('probability density ($\\mathrm{\AA}^{-1}$)')
energies = ['E = {: >5.2f} eV'.format(eigenvalues[i].real / e) for i in range(3)]
plt.plot(r * 1e+10, densities[0], color='blue', label=energies[0])
plt.plot(r * 1e+10, densities[1], color='green', label=energies[1])
plt.plot(r * 1e+10, densities[2], color='red', label=energies[2])
plt.legend()
plt.show()
return
""" plot results """
plot(r, densities, eigenvalues)
```
## Spherical Harmonics
$l=0:~~~~~~Y_{0}^{0}(\theta,\varphi)={1\over 2}\sqrt{1\over \pi}$
$l=1~~~\begin{align} Y_{1}^{-1}(\theta,\varphi) & = {1\over 2}\sqrt{3\over 2\pi}\cdot e^{-i\varphi}\cdot\sin\theta\quad = {1\over 2}\sqrt{3\over 2\pi}\cdot{(x-iy)\over r} \\ Y_{1}^{0}(\theta,\varphi) & = {1\over 2}\sqrt{3\over \pi}\cdot\cos\theta\quad \quad = {1\over 2}\sqrt{3\over \pi}\cdot{z\over r} \\ Y_{1}^{1}(\theta,\varphi) & = {-1\over 2}\sqrt{3\over 2\pi}\cdot e^{i\varphi}\cdot\sin\theta\quad = {-1\over 2}\sqrt{3\over 2\pi}\cdot{(x+iy)\over r} \end{align}$
# l = 0
\begin{align}Y_{00} & = s = Y_0^0 = \frac{1}{2} \sqrt{\frac{1}{\pi}}\end{align}
$l = 1
\begin{align} Y_{1,-1} & = p_y = i \sqrt{\frac{1}{2}} \left( Y_1^{- 1} + Y_1^1 \right) = \sqrt{\frac{3}{4 \pi}} \cdot \frac{y}{r} \\ Y_{10} & = p_z = Y_1^0 = \sqrt{\frac{3}{4 \pi}} \cdot \frac{z}{r} \\ Y_{11} & = p_x = \sqrt{\frac{1}{2}} \left( Y_1^{- 1} - Y_1^1 \right) = \sqrt{\frac{3}{4 \pi}} \cdot \frac{x}{r} \end{align} $
$
l = 2
\begin{align}Y_{2,-2} & = d_{xy} = i \sqrt{\frac{1}{2}} \left( Y_2^{- 2} - Y_2^2\right) = \frac{1}{2} \sqrt{\frac{15}{\pi}} \cdot \frac{x y}{r^2} \\Y_{2,-1} & = d_{yz} = i \sqrt{\frac{1}{2}} \left( Y_2^{- 1} + Y_2^1 \right) = \frac{1}{2} \sqrt{\frac{15}{\pi}} \cdot \frac{y z}{r^2} \\Y_{20} & = d_{z^2} = Y_2^0 = \frac{1}{4} \sqrt{\frac{5}{\pi}} \cdot \frac{- x^2 - y^2 + 2 z^2}{r^2} \\Y_{21} & = d_{xz} = \sqrt{\frac{1}{2}} \left( Y_2^{- 1} - Y_2^1 \right) = \frac{1}{2} \sqrt{\frac{15}{\pi}} \cdot \frac{z x}{r^2} \\Y_{22} & = d_{x^2-y^2} = \sqrt{\frac{1}{2}} \left( Y_2^{- 2} + Y_2^2 \right) = \frac{1}{4} \sqrt{\frac{15}{\pi}} \cdot \frac{x^2 - y^2 }{r^2}\end{align} $
$
l = 3
\begin{align}Y_{3,-3} & = f_{y(3x^2-y^2)} = i \sqrt{\frac{1}{2}} \left( Y_3^{- 3} + Y_3^3 \right) = \frac{1}{4} \sqrt{\frac{35}{2 \pi}} \cdot \frac{\left( 3 x^2 - y^2 \right) y}{r^3} \\Y_{3,-2} & = f_{xyz} = i \sqrt{\frac{1}{2}} \left( Y_3^{- 2} - Y_3^2 \right) = \frac{1}{2} \sqrt{\frac{105}{\pi}} \cdot \frac{xy z}{r^3} \\Y_{3,-1} & = f_{yz^2} = i \sqrt{\frac{1}{2}} \left( Y_3^{- 1} + Y_3^1 \right) = \frac{1}{4} \sqrt{\frac{21}{2 \pi}} \cdot \frac{y (4 z^2 - x^2 - y^2)}{r^3} \\Y_{30} & = f_{z^3} = Y_3^0 = \frac{1}{4} \sqrt{\frac{7}{\pi}} \cdot \frac{z (2 z^2 - 3 x^2 - 3 y^2)}{r^3} \\Y_{31} & = f_{xz^2} = \sqrt{\frac{1}{2}} \left( Y_3^{- 1} - Y_3^1 \right) = \frac{1}{4} \sqrt{\frac{21}{2 \pi}} \cdot \frac{x (4 z^2 - x^2 - y^2)}{r^3} \\Y_{32} & = f_{z(x^2-y^2)} = \sqrt{\frac{1}{2}} \left( Y_3^{- 2} + Y_3^2 \right) = \frac{1}{4} \sqrt{\frac{105}{\pi}} \cdot \frac{\left( x^2 - y^2 \right) z}{r^3} \\Y_{33} & = f_{x(x^2-3y^2)} = \sqrt{\frac{1}{2}} \left( Y_3^{- 3} - Y_3^3 \right) = \frac{1}{4} \sqrt{\frac{35}{2 \pi}} \cdot \frac{\left( x^2 - 3 y^2 \right) x}{r^3}\end{align}$
$
l = 4
\begin{align}Y_{4,-4} & = g_{xy(x^2-y^2)} = i \sqrt{\frac{1}{2}} \left( Y_4^{- 4} - Y_4^4 \right) = \frac{3}{4} \sqrt{\frac{35}{\pi}} \cdot \frac{xy \left( x^2 - y^2 \right)}{r^4} \\Y_{4,-3} & = g_{zy^3} = i \sqrt{\frac{1}{2}} \left( Y_4^{- 3} + Y_4^3 \right) = \frac{3}{4} \sqrt{\frac{35}{2 \pi}} \cdot \frac{(3 x^2 - y^2) yz}{r^4} \\Y_{4,-2} & = g_{z^2xy} = i \sqrt{\frac{1}{2}} \left( Y_4^{- 2} - Y_4^2 \right) = \frac{3}{4} \sqrt{\frac{5}{\pi}} \cdot \frac{xy \cdot (7 z^2 - r^2)}{r^4} \\Y_{4,-1} & = g_{z^3y} = i \sqrt{\frac{1}{2}} \left( Y_4^{- 1} + Y_4^1\right) = \frac{3}{4} \sqrt{\frac{5}{2 \pi}} \cdot \frac{yz \cdot (7 z^2 - 3 r^2)}{r^4} \\Y_{40} & = g_{z^4} = Y_4^0 = \frac{3}{16} \sqrt{\frac{1}{\pi}} \cdot \frac{(35 z^4 - 30 z^2 r^2 + 3 r^4)}{r^4} \\Y_{41} & = g_{z^3x} = \sqrt{\frac{1}{2}} \left( Y_4^{- 1} - Y_4^1 \right) = \frac{3}{4} \sqrt{\frac{5}{2 \pi}} \cdot \frac{xz \cdot (7 z^2 - 3 r^2)}{r^4} \\Y_{42} & = g_{z^2xy} = \sqrt{\frac{1}{2}} \left( Y_4^{- 2} + Y_4^2 \right) = \frac{3}{8} \sqrt{\frac{5}{\pi}} \cdot \frac{(x^2 - y^2) \cdot (7 z^2 - r^2)}{r^4} \\Y_{43} & = g_{zx^3} = \sqrt{\frac{1}{2}} \left( Y_4^{- 3} - Y_4^3 \right) = \frac{3}{4} \sqrt{\frac{35}{2 \pi}} \cdot \frac{(x^2 - 3 y^2) xz}{r^4} \\Y_{44} & = g_{x^4+y^4} = \sqrt{\frac{1}{2}} \left( Y_4^{- 4} + Y_4^4 \right) = \frac{3}{16} \sqrt{\frac{35}{\pi}} \cdot \frac{x^2 \left( x^2 - 3 y^2 \right) - y^2 \left( 3 x^2 - y^2 \right)}{r^4}\end{align} $
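As a small added check (a sketch, not part of the original notes), the real $p$ orbitals in the table above can be reproduced from SciPy's complex spherical harmonics:
```python
# Build the real combinations p_x, p_y, p_z from scipy's complex Y_l^m and compare
# against sqrt(3/4pi) * {x, y, z}/r at an arbitrary test point on the unit sphere.
import numpy as np
import scipy.special as spe

theta, phi = 0.7, 1.3          # polar and azimuthal angles (arbitrary test point)
# scipy.special.sph_harm(m, l, azimuthal, polar)
Y = lambda m, l: spe.sph_harm(m, l, phi, theta)
x, y, z = np.sin(theta)*np.cos(phi), np.sin(theta)*np.sin(phi), np.cos(theta)

p_z = Y(0, 1).real
p_x = (np.sqrt(0.5)*(Y(-1, 1) - Y(1, 1))).real
p_y = (1j*np.sqrt(0.5)*(Y(-1, 1) + Y(1, 1))).real
print(np.allclose([p_x, p_y, p_z],
                  [np.sqrt(3/(4*np.pi))*x, np.sqrt(3/(4*np.pi))*y, np.sqrt(3/(4*np.pi))*z]))
```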
```python
! pip install ipyvolume
```
```python
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib import cm, colors
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
import scipy.integrate as integrate
# Increase resolution for retina display
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('retina')
# Load interactive widgets
import ipywidgets as widgets
import ipyvolume as ipv
```
```python
# Import special functions
import scipy.special as spe
def psi_R(r,n=1,l=0):
coeff = np.sqrt((2.0/n)**3 * spe.factorial(n-l-1) /(2.0*n*spe.factorial(n+l)))
laguerre = spe.assoc_laguerre(2.0*r/n,n-l-1,2*l+1)
return coeff * np.exp(-r/n) * (2.0*r/n)**l * laguerre
r = np.linspace(0,100,1000)
R = psi_R(r,n=5,l=1)
plt.plot(r, R**2, lw=3)
plt.xlabel('$r [a_0]$',fontsize=20)
plt.ylabel('$R_{nl}(r)$', fontsize=20)
plt.grid('True')
```
```python
nmax=10
@widgets.interact(n = np.arange(1,nmax,1), l = np.arange(0,nmax-1,1))
def plot_radial(n=1,l=0):
r = np.linspace(0,250,10000)
psi2 = psi_R(r,n,l)**2 * (r**2)
plt.plot(r, psi2, lw=2, color='red')
''' Styling the plot'''
plt.xlabel('$r [a_0]$')
plt.ylabel('$R_{nl}(r)$')
rmax = n**2*(1+0.5*(1-l*(l+1)/n**2))
plt.xlim([0, 2*rmax])
```
```python
def psi_ang(phi,theta,l=0,m=0):
sphHarm = spe.sph_harm(m,l,phi,theta)
return sphHarm.real
phi, theta = np.linspace(0, np.pi, 100), np.linspace(0, 2*np.pi, 100)
phi, theta = np.meshgrid(phi, theta)
Ylm = psi_ang(theta,phi,l=2,m=0)
x = np.sin(phi) * np.cos(theta) * abs(Ylm)
y = np.sin(phi) * np.sin(theta) * abs(Ylm)
z = np.cos(phi) * abs(Ylm)
'''Set up the 3D Canvas'''
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111, projection='3d')
''' Normalize color bar to [0,1] scale'''
fcolors = (Ylm - Ylm.min())/(Ylm.max() - Ylm.min())
'''Make 3D plot of real part of spherical harmonic'''
ax.plot_surface(x, y, z, facecolors=cm.seismic(fcolors), alpha=0.3)
''' Project 3D plot onto 2D planes'''
cset = ax.contour(x, y, z,20, zdir='z',offset = -1, cmap='summer')
cset = ax.contour(x, y, z,20, zdir='y',offset = 1, cmap='winter' )
cset = ax.contour(x, y, z,20, zdir='x',offset = -1, cmap='autumn')
''' Set axes limit to keep aspect ratio 1:1:1 '''
ax.set_xlim(-1, 1)
ax.set_ylim(-1, 1)
ax.set_zlim(-1, 1)
```
```python
def HFunc(r,theta,phi,n,l,m):
'''
Hydrogen wavefunction // a_0 = 1
INPUT
r: Radial coordinate
theta: Polar coordinate
phi: Azimuthal coordinate
    n: Principal quantum number
l: Angular momentum quantum number
m: Magnetic quantum number
OUTPUT
Value of wavefunction
'''
return psi_R(r,n,l) * psi_ang(phi,theta,l,m)
nmax = 10
lmax = nmax-1
@widgets.interact(n=np.arange(1,nmax,1), l = np.arange(0,nmax-1,1), m=np.arange(-lmax,lmax+1,1))
def psi_xz_plot(n=1,l=0,m=0):
plt.figure(figsize=(10,8))
limit = 4*(n+l)
x_1d = np.linspace(-limit,limit,500)
z_1d = np.linspace(-limit,limit,500)
x,z = np.meshgrid(x_1d,z_1d)
y = 0
r = np.sqrt(x**2 + y**2 + z**2)
theta = np.arctan2(np.sqrt(x**2+y**2), z )
phi = np.arctan2(y, x)
psi_nlm = HFunc(r,theta,phi,n,l,m)
#plt.pcolormesh(x, z, psi_nlm, cmap='inferno') # Try cmap = inferno, rainbow, autumn, summer,
plt.contourf(x, z, psi_nlm, 20, cmap='seismic', alpha=0.6) # Classic orbitals
plt.colorbar()
plt.title(f"$n,l,m={n,l,m}$",fontsize=20)
plt.xlabel('X',fontsize=20)
plt.ylabel('Z',fontsize=20)
```
```python
import ipyvolume as ipv
#Variables to adjust
maxi = 60
resolution = 160
base = np.linspace(-maxi, maxi, resolution)[:,np.newaxis,np.newaxis]
x2 = np.tile(base, (1,resolution,resolution))
y2 = np.swapaxes(x2,0,1)
z2 = np.swapaxes(x2,0,2)
total = np.concatenate((x2[np.newaxis,:],y2[np.newaxis,:],z2[np.newaxis,:]), axis=0)
r2 = np.linalg.norm(total, axis=0)
#Alternative theta calculation
#theta3 = np.abs(np.arctan2(np.linalg.norm(total[:2], axis=0),-total[2]))
np.seterr(all='ignore')
phi2 = np.arctan(np.divide(total[2],np.linalg.norm(total[:2], axis=0))) + np.pi/2
theta2 = np.arctan2(total[1],total[0])
ipv.figure()
psi = HFunc(r2,theta2,phi2,2,1,1)
ipv.volshow(r2**2 * np.sin(phi2)*psi**2)
ipv.show()
```
## CRANK-NICOLSON METHOD: SCHRODINGER TIME DEPENDENT EQUATION
Since at this point we know everything about the Crank-Nicolson scheme, it is time to get our hands dirty. In this post, the third in the series on how to numerically solve 1D parabolic partial differential equations, I want to show a Python implementation of a Crank-Nicolson scheme for solving a heat diffusion problem.
While Python is certainly not the best choice for scientific computing in terms of performance and optimization, it is a good language for rapid prototyping and scripting (and in many cases even for complex production-level code).
For a problem of this type, Python is more than sufficient to do the job. For more complicated problems involving multiple dimensions, more coupled equations, and many extra terms, other languages are typically preferred (Fortran, C, C++, ...), often with the inclusion of parallel programming using the Message Passing Interface (MPI) paradigm.
The problem that I am going to present is the one proposed in the excellent book “Numerical Solution of Partial Differential Equations” by G.D. Smith (Oxford University Press),
\begin{eqnarray}
\frac{\partial u}{\partial t} \ \ & = & \ \frac{\partial^2 u}{\partial x^2}, \ \ & 0\leq x \leq 1 \nonumber \\
u(t,0) & = & u(t,1)=0 & \ \ \ \ \ \ \forall t\nonumber\\
u(0,x) & = & 2x & \mathrm{if} \ \ x\leq 0.5 \nonumber\\
u(0,x) & = & 2(1-x) & \mathrm{if} \ \ x> 0.5 \nonumber
\end{eqnarray}
\begin{equation}
A\textbf{u}_{j+1} = B\textbf{u}_{j} + \textbf{b}_{j},
\end{equation}
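Before the example below, here is a minimal sketch (added for illustration; the grid size, time step, and number of steps are assumed values) of how the Crank-Nicolson matrices $A$ and $B$ are assembled for the heat problem stated above; the full version of this calculation appears further down in this notebook:
```python
# Minimal Crank-Nicolson sketch for u_t = u_xx on 0 <= x <= 1 with u(t,0) = u(t,1) = 0.
import numpy as np

N, dt = 51, 5.e-4
x = np.linspace(0, 1, N)
dx = x[1] - x[0]
r = dt/dx**2
u = np.where(x <= 0.5, 2*x, 2*(1 - x))      # initial condition from the problem statement

# Tridiagonal A and B acting on the interior points (Dirichlet ends make b = 0)
main = np.ones(N-2)
off = np.ones(N-3)
A = np.diag((2 + 2*r)*main) + np.diag(-r*off, 1) + np.diag(-r*off, -1)
B = np.diag((2 - 2*r)*main) + np.diag(r*off, 1) + np.diag(r*off, -1)

for step in range(100):                      # A u_{j+1} = B u_j  for 100 time steps
    u[1:-1] = np.linalg.solve(A, B.dot(u[1:-1]))
print('peak after 100 steps:', u.max())      # the initial peak decays as heat diffuses out
```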
```python
import scipy as sp
import numpy as np
from scipy import integrate, sparse, linalg
import scipy.sparse.linalg
import pylab as pl
nx = 8000
dx = 0.0025
dt = 0.00002
niter = 20
nonlin = 0.0
gridx = np.zeros(nx)
igridx = np.array(range(nx))
psi = np.zeros(nx)
pot = np.zeros(nx)
depth = 0.01
# Set up grid, potential, and initial state
gridx = dx*(igridx - nx/2)
pot = depth*gridx*gridx
psi = np.pi**(-1/4)*np.exp(-0.5*gridx*gridx)
# Normalize Psi
#psi /= sp.integrate.simps(psi*psi, dx=dx)
# Plot parameters
xlimit = [gridx[0], gridx[-1]]
ylimit = [0, 2*psi[int(nx/2)]]
# Set up diagonal coefficients
Adiag = np.empty(nx)
Asup = np.empty(nx)
Asub = np.empty(nx)
bdiag = np.empty(nx)
bsup = np.empty(nx)
bsub = np.empty(nx)
Adiag.fill(1 - dt/dx**2)
Asup.fill(dt/(2*dx**2))
Asub.fill(dt/(2*dx**2))
bdiag.fill(1 + dt/dx**2)
bsup.fill(-dt/(2*dx**2))
bsub.fill(-dt/(2*dx**2))
# Construct tridiagonal matrix
A = sp.sparse.spdiags([Adiag, Asup, Asub], [0, 1, -1], nx, nx)
b = sp.sparse.spdiags([bdiag, bsup, bsub], [0, 1, -1], nx, nx)
# Loop through time
for t in range(0, niter) :
# Calculate effect of potential and nonlinearity
psi *= np.exp(-dt*(pot + nonlin*psi*psi))
# Calculate spacial derivatives
psi = sp.sparse.linalg.bicg(A, b*psi)[0]
    # Normalize Psi (divide by the square root of the norm integral)
    psi /= np.sqrt(sp.integrate.simps(psi*psi, dx=dx))
# Output figures
pl.plot(gridx, psi)
pl.plot(gridx, psi*psi)
pl.plot(gridx, pot)
pl.xlim(xlimit)
pl.ylim(ylimit)
#pl.savefig('outputla/fig' + str(t))
#pl.clf()
plt.grid()
plt.show()
```
```python
import numpy as np
N=100 # the number of grid points
a=0
b=np.pi
x,h = np.linspace(a,b,N,retstep = True)
y=np.sin(x)*np.sinh(x)
plt.plot(x,y)
plt.grid()
```
```python
# Leapfrog solution of the wave equation  y_tt = c^2 y_xx  on a string of length L.
# The grid, wave speed, and time step below are assumed values (the original cell
# relied on variables defined elsewhere and left the update step as an exercise).
import numpy as np
import matplotlib.pyplot as plt
L = 4
N = 400
x, h = np.linspace(0, L, N, retstep=True)
c = 2.0                    # assumed wave speed
tau = 0.2*h/c              # time step chosen to satisfy the CFL condition c*tau/h <= 1
y = 0.01 * np.exp(-(x-L/2)**2 / 0.02)
yold = np.copy(y)          # start from rest: y(t-tau) = y(t)
ynew = np.zeros_like(y)
j = 0
t = 0
tmax = 2
plt.figure(1) # Open the figure window
# the loop that steps the solution along
while t < tmax:
    j = j+1
    t = t + tau
    # Leapfrog update with fixed ends (y[0] = y[-1] = 0)
    ynew[1:-1] = (2*y[1:-1] - yold[1:-1]
                  + (c*tau/h)**2*(y[2:] - 2*y[1:-1] + y[:-2]))
    ynew[0] = 0.0
    ynew[-1] = 0.0
    # update yold and y for next timestep (np.copy avoids aliasing)
    yold = np.copy(y)
    y = np.copy(ynew)
    # make plots every 50 time steps
    if j % 50 == 0:
        plt.clf() # clear the figure window
        plt.plot(x,y,'b-')
        plt.xlabel('x')
        plt.ylabel('y')
        plt.title('time={:1.3f}'.format(t))
        plt.ylim([-0.03,0.03])
        plt.xlim([0,L])
        plt.draw() # Draw the plot
        plt.pause(0.1) # Give the computer time to draw
plt.show()
```
```python
# Tools for sparse matrices
import scipy.sparse as sparse
import scipy.sparse.linalg
# Numerical tools
from numpy import *
# Plotting library
from matplotlib.pyplot import *
"""Physical constants"""
_E0p = 938.27 # Rest energy for a proton [MeV]
_hbarc = 0.1973 # [MeV pm]
_c = 3.0e2 # Speed of light [pm / as]
def Psi0( x ):
'''
Initial state for a travelling gaussian wave packet.
'''
x0 = -0.100 # [pm]
a = 0.0050 # [pm]
l = 200000.0 # [1 / pm]
A = ( 1. / ( 2 * pi * a**2 ) )**0.25
K1 = exp( - ( x - x0 )**2 / ( 4. * a**2 ) )
K2 = exp( 1j * l * x )
return A * K1 * K2
def deltaPotential( x, height=75 ):
"""
A potential spike or delta potential in the center.
@param height Defines the height of the barrier / spike. This should be
chosen to be high "enough".
"""
# Declare new empty array with same length as x
potential = zeros( len( x ) )
# Middle point has high potential
    potential[ len(potential)//2 ] = height   # integer index (float indices are an error in Python 3)
return potential
if __name__ == '__main__':
nx = 1001 # Number of points in x direction
dx = 0.001 # Distance between x points [pm]
# Use zero as center, same amount of points each side
a = - 0.5 * nx * dx
b = 0.5 * nx * dx
x = linspace( a, b, nx )
# Time parameters
T = 0.005 # How long to run simulation [as]
dt = 1e-5 # The time step [as]
t = 0
time_steps = int( T / dt ) # Number of time steps
# Constants - save time by calculating outside of loop
k1 = - ( 1j * _hbarc * _c) / (2. * _E0p )
k2 = ( 1j * _c ) / _hbarc
# Create the initial state Psi
Psi = Psi0(x)
# Create the matrix containing central differences. It it used to
# approximate the second derivative.
data = ones((3, nx))
data[1] = -2*data[1]
diags = [-1,0,1]
D2 = k1 / dx**2 * sparse.spdiags(data,diags,nx,nx)
# Identity Matrix
I = sparse.identity(nx)
# Create the diagonal matrix containing the potential.
V_data = deltaPotential(x)
V_diags = [0]
V = k2 * sparse.spdiags(V_data, V_diags, nx, nx)
    # Put matplotlib in interactive mode for animation
ion()
# Setup the figure before starting animation
fig = figure() # Create window
ax = fig.add_subplot(111) # Add axes
line, = ax.plot( x, abs(Psi)**2, label='$|\Psi(x,t)|^2$' ) # Fetch the line object
# Also draw a green line illustrating the potential
ax.plot( x, V_data, label='$V(x)$' )
# Add other properties to the plot to make it elegant
fig.suptitle("Solution of Schrodinger's equation with delta potential") # Title of plot
ax.grid('on') # Square grid lines in plot
ax.set_xlabel('$x$ [pm]') # X label of axes
ax.set_ylabel('$|\Psi(x, t)|^2$ [1/pm] and $V(x)$ [MeV]') # Y label of axes
ax.legend(loc='best') # Adds labels of the lines to the window
draw() # Draws first window
# Time loop
while t < T:
"""
For each iteration: Solve the system of linear equations:
(I - k/2*D2) u_new = (I + k/2*D2)*u_old
"""
# Set the elements of the equation
A = I - dt*.5*(D2 + V)
b = (I + dt*.5 * (D2 + V)) * Psi
# Calculate the new Psi
Psi = sparse.linalg.spsolve(A,b)
# Update time
t += dt
# Plot this new state
line.set_ydata( abs(Psi)**2 ) # Update the y values of the Psi line
draw() # Update the plot
# Turn off interactive mode
ioff()
# Add show so that windows do not automatically close
show()
```
```python
# Create a single Gaussian wave packet and follow its evolution as it
# crosses the computational domain and reflects off the boundaries.
import sys
import getopt
import numpy as np
import matplotlib.pyplot as plt
# Physics:
HBAR = 1.0
M = 1.0
DELX = 2.5
XBAR = -30.0
PBAR = 5.0 # wave packet energy is PBAR**2 / 2M
# Range and time span:
XMIN = -40.0
XMAX = 40.0
TMAX = 10.0
# Numerics:
J = 1001
DX = (XMAX-XMIN)/(J-1.0)
DT = 0.001
ALPHA = HBAR*DT/(2*M*DX**2)
DTPLOT = 0.1
which = 0
I = 1j # physicist's square root of -1!
def wavepacket(x, xb, pb):
return np.exp(-(x-xb)**2/(4*DELX**2) + I*pb*x/HBAR) \
/ (2*np.pi*DELX**2)**0.25
def initialize(x):
return wavepacket(x, XBAR, PBAR)
def tridiag(a, b, c, r):
n = len(r)
u = np.zeros(n, dtype=complex)
gam = np.zeros(n, dtype=complex)
bet = b[0]
u[0] = r[0]/bet
for j in range(1,n):
gam[j] = c[j-1]/bet
bet = b[j] - a[j]*gam[j]
u[j] = (r[j]-a[j]*u[j-1])/bet
for j in range(n-2,-1,-1):
u[j] -= gam[j+1]*u[j+1]
return u
def cn_step(a, b, c, r, u):
r[1:-1] = 0.5*I*ALPHA*(u[:-2]+u[2:]) + (1-I*ALPHA)*u[1:-1]
return tridiag(a, b, c, r) # tridiag is fast enough in 1D...
def display_data(x, u, t):
if t > 0.0: plt.cla()
title = 'time = '+'%.2f'%(t)
plt.title(title)
plt.xlabel('x')
if which == 0:
plt.ylim(0.0, 0.25)
plt.ylabel('$|\psi|^2$')
plt.plot(x, np.abs(u)**2)
else:
plt.ylim(-0.5, 0.5)
plt.ylabel('$|\psi|$')
plt.plot(x, np.abs(u))
plt.plot(x, np.real(u))
plt.plot(x, np.imag(u))
plt.pause(0.001)
def prob(u, dx): # 1-d trapezoid
pp = np.abs(u)**2
return dx*(0.5*pp[0]+np.sum(pp[1:-1])+0.5*pp[-1])
def main(argv):
global J, DX, ALPHA
dt = DT
if len(sys.argv) > 1: dt = float(sys.argv[1])
if len(sys.argv) > 2: J = int(sys.argv[2])
DX = (XMAX-XMIN)/(J-1.0)
ALPHA = HBAR*dt/(2*M*DX**2)
dtplot = DTPLOT
if dtplot < dt: dtplot = dt
tplot = dtplot
x = np.linspace(XMIN, XMAX, J)
u = initialize(x)
t = 0.0
int0 = prob(u, DX)
u /= int0**0.5
display_data(x, u, t)
a = -0.5*I*ALPHA*np.ones(J)
b = (1+I*ALPHA)*np.ones(J)
c = a.copy()
r = np.zeros(J, dtype=complex)
# Boundary conditions:
b[0] = 1.0
c[0] = 0.0 # BC b[0]*u[0] + c[0]*u[1]] = r[0]
r[0] = 0.0
a[-1] = 0.0
b[-1] = 1.0 # BC b[-1]*u[-1] + a[-1]*u[-2] = r[-1]
r[-1] = 0.0
# Note that the vectors a, b, and c are constant for fixed time step.
int0 = prob(u, DX)
print('initial integral =', int0)
while t < TMAX-0.5*dt:
u = cn_step(a, b, c, r, u)
t += dt
if t > tplot-0.5*dt:
display_data(x, u, t)
tplot += dtplot
int1 = prob(u, DX)
print('final integral =', int1) # 'error =', int1/int0-1.0)
plt.show()
if __name__ == "__main__" :
main(sys.argv)
```
```python
import numpy as np
import matplotlib.pyplot as plt
tfinal=1000 # Total simulation time
dt=0.001#time step
M=np.round(tfinal/dt)
time_steps=M.astype(int)# number of time steps
b=10.0# final x
a=-10.0# initial x
N=512# num of pieces
dx=(b-a)/N # x-space step size
dk=2*np.pi/(b-a)
n=np.linspace(-N/2,N/2 -1,N)
x=n*dx
k=n*dk
t=0
alpha=0.0005
beta=0.0001
acoef=(alpha-beta*np.cos(t))/(2*alpha**0.5)
bcoef=(alpha-beta*np.cos(t))**0.5 / (2*alpha**0.5)
V=0.5*(acoef*x**4 -bcoef*x**2)
u0=np.exp(-(x+4.72)**2 /2)*(1/np.pi)**0.25
u=u0
for i in range(1,time_steps):
u=np.exp(-1j*dt*V/2)*u
c=np.fft.fftshift(np.fft.fft(u))
c=np.exp(-1j*dt*k**2 /2)*c
u=np.fft.ifft(np.fft.fftshift(c))
u=np.exp(-1j*dt*V/2)*u
t=t+dt
plt.plot(x,u*np.conjugate(u),'r--')
plt.xlim(-10,10)
plt.ylim(0,0.6)
plt.show()
```
```python
plt.figure(1)
for n in range(len(wplot)):
w = wplot[n]
g = l3.steadySol(f,h,w,T,u)
plt.clf() # Clear the previous plot
plt.plot(x,g)
plt.title('$\omega={:1.2e}$'.format(w))
plt.xlabel('x')
plt.ylim([-0.05, 0.05]) # prevent auto-scaling
plt.draw() # Request to draw the plot now
    plt.pause(0.1) # Give the computer time to draw
```
```python
import numpy as np
import matplotlib.pyplot as plt
import os, sys
import matplotlib
matplotlib.rc('font', size=18)
matplotlib.rc('font', family='Arial')
#definition of numerical parameters
N = 51 #number of grid points
dt = 5.e-4 #time step
L = float(1) #size of grid
nsteps = 620 #number of time steps
dx = L/(N-1) #grid spacing
nplot = 20 #number of timesteps before plotting
r = dt/dx**2 #assuming heat diffusion coefficient == 1
#initialize matrices A, B and b array
A = np.zeros((N-2,N-2))
B = np.zeros((N-2,N-2))
b = np.zeros((N-2))
#define matrices A, B and b array
for i in range(N-2):
if i==0:
A[i,:] = [2+2*r if j==0 else (-r) if j==1 else 0 for j in range(N-2)]
B[i,:] = [2-2*r if j==0 else r if j==1 else 0 for j in range(N-2)]
b[i] = 0. #boundary condition at i=1
elif i==N-3:
A[i,:] = [-r if j==N-4 else 2+2*r if j==N-3 else 0 for j in range(N-2)]
B[i,:] = [r if j==N-4 else 2-2*r if j==N-3 else 0 for j in range(N-2)]
b[i] = 0. #boundary condition at i=N
else:
A[i,:] = [-r if j==i-1 or j==i+1 else 2+2*r if j==i else 0 for j in range(N-2)]
B[i,:] = [r if j==i-1 or j==i+1 else 2-2*r if j==i else 0 for j in range(N-2)]
#initialize grid
x = np.linspace(0,1,N)
#initial condition
u = np.asarray([2*xx if xx<=0.5 else 2*(1-xx) for xx in x])
#evaluate right hand side at t=0
bb = B.dot(u[1:-1]) + b
fig = plt.figure()
plt.plot(x,u,linewidth=2)
filename = 'foo000.jpg';
#fig.set_tight_layout(True,"h_pad=1.0");
plt.tight_layout(pad=3.0)
plt.xlabel("x")
plt.ylabel("u")
plt.title("t = 0")
plt.savefig(filename,format="jpg")
plt.clf()
c = 0
for j in range(nsteps):
print(j)
#find solution inside domain
u[1:-1] = np.linalg.solve(A,bb)
#update right hand side
bb = B.dot(u[1:-1]) + b
if(j%nplot==0): #plot results every nplot timesteps
plt.plot(x,u,linewidth=2)
plt.ylim([0,1])
filename = 'foo' + str(c+1).zfill(3) + '.jpg';
plt.xlabel("x")
plt.ylabel("u")
plt.title("t = %2.2f"%(dt*(j+1)))
plt.savefig(filename,format="jpg")
plt.clf()
c += 1
#os.system("ffmpeg -y -i 'foo%03d.jpg' heat_equation.m4v")
#os.system("rm -f *.jpg")
```
```python
import matplotlib.pyplot as plt
import numpy as np
def CN(pr,psiR,psiI,V,dx,dt,mxItr,tol):
hbar,m=pr
Nx=len(psiR)
al=hbar*dt/(4*m*dx**2)
be=dt/(2*hbar)
psiR0=[0 for i in range(Nx)]
psiI0=[0 for i in range(Nx)]
gam=[2*al+be*V[i] for i in range(Nx)]
for i in range(1,Nx-1):
psiR0[i]=psiR[i]-al*(psiI[i-1]+psiI[i+1]+gam[i]*psiI[i])
psiI0[i]=psiI[i]-al*(psiR[i-1]+psiR[i+1]+gam[i]*psiR[i])
for it in range(1,mxItr-1):
err=0
for i in range(1,Nx-1):
psi1=psiR0[i]-al*(psiI[i-1]+psiI[i+1]+gam[i]*psiI[i])
psi2=psiI0[i]-al*(psiR[i-1]+psiR[i+1]+gam[i]*psiR[i])
errf=abs(psi1**2+psi2**2-psiR[i]**2-psiI[i]**2)
if errf>err:
err=errf
psiR[i]=psi1
psiI[i]=psi2
if err<tol:
break
if it<mxItr:
return psiR,psiI
else:
print('Function not Converging')
return None,None
# Normalization
def normalize(psiR,psiI,dx):
Nx=len(psiR)
psi2=[psiR[i]**2+psiI[i]**2 for i in range(Nx)]
psiNorm=0.5*(psi2[0]+psi2[Nx-1])
for i in range(1,Nx-1):
psiNorm+=psi2[i]
psiNorm*=dx
psiR=[psiR[i]/psiNorm for i in range(Nx)]
psiI=[psiI[i]/psiNorm for i in range(Nx)]
return psiR,psiI
def CN_TDSE(pr,wvpk,prWV,Pot,prPt,X0,XN,Nx,ts,T,tps):
hbar,m=pr
dx=(XN-X0)/(Nx-1)
X=[X0+i*dx for i in range(Nx)]
sig,k0=prWV[1],prWV[2]
E=(hbar**2/(2*m))*(k0**2+.5/sig**2)
psi=[wvpk(prWV,x) for x in X]
psiR=[psi[i].real for i in range(Nx)]
psiI=[psi[i].imag for i in range(Nx)]
psiR,psiI=normalize(psiR,psiI,dx)
V=Pot(prPt,Nx)
Vmx=max(max(V),abs(min(V)))
dt=hbar/(hbar**2/(m*dx**2)+Vmx/2)
Xmx,Xmn,Ymx=max(X),min(X),1.5*max(psiR)
if Vmx!=0:
Efac=Ymx/(2*Vmx)
V_plot=[V[i]*Efac for i in range(Nx)]
print('Energy of the particle=',E,'Scaled Energy=',E*Efac)
mxItr,tol=100,1e-5
t,it=0,0
while t<T:
if it%ts==0:
plt.axis([Xmn,Xmx,-Ymx,Ymx])
if Efac!=0:
plt.plot(X,V_plot,':k')
plt.axhline(E*Efac,color='g',label='E')
plt.fill_between(X,V_plot,facecolor='lightgrey')
plt.plot(X,psiR,'b',label=r'$\psi(x,t) $')
plt.plot(X,[psiR[i]**2+psiI[i]**2 for i in range(Nx)],'k',label=r'$\psi(x,t)^2$')
plt.text(.8*Xmx,.9*Ymx,'t=%.2f'%t)
plt.legend(loc='lower left',prop={'size':12})
plt.xlabel('x')
plt.pause(tps)
plt.clf()
psiR,psiI=CN(pr,psiR,psiI,V,dx,dt,mxItr,tol)
it+=1
t+=dt
plt.show()
return None
from math import*
def Pot(V0,Nx):
V=[V0 for i in range(Nx)]
return V
def gauss(pr,x):
x0,sig,k0=pr
a=1/((2*pi)**.5*sig)**.5
b=-1/(4*sig**2)
gs=a*exp(b*(x-x0)**2)
if gs<1e-10:
gs=0
psi=gs*complex(cos(k0*x),sin(k0*x))
return psi
hbar,m=1,1
X0,XN,Nx=-20,40,200
sig=(XN-X0)/40
x0,k0=X0+15*sig,pi
V0=7
pr=[hbar,m]
prWV=[x0,sig,k0]
ts,T,tps=1,10,.05
CN_TDSE(pr,gauss,prWV,Pot,V0,X0,XN,Nx,ts,T,tps)
```
```python
# KRONIG-PENNEY MODEL
#!/usr/bin/python
''' This script is used to analyze the energy levels predicted by the Kronig-
Penney model of electron band structure for a periodic potential. The end
result of this program are plots of the E-K bands of the potential.
/Kronig-Penney Model Potential/
U
^
b | a
<----> | <----->
----+ +----+ +----+ +----+ +----+ +---- Uo
| | | | | | | | | |
... | | | | | | | | | | ...
| | | | | | | | | |
---------------------------------------------------------------> x
x = -b | x = a
|
*** NOTE ***
If 'Uo' is negative (instead of having ridges, there are pits), then the
resulting energy banding is the same and can be obtained by swapping 'a'
with 'b' and by shifting all energy values down by the magnitude of 'Uo'.
'''
from numpy import *
from matplotlib.pyplot import *
#------------------------------------------------------------------------------
# Physical Constants
#------------------------------------------------------------------------------
_c = 2.998e17 # Speed of light in nm/s
_m = 0.511e6/_c**2 # Electron mass in eV/c^2
_hbar = 6.583e-16 # h/(2*pi) in eV*s
_Uo = _hbar**2/(2*_m) # If Uo = _Uo then u = 1
#------------------------------------------------------------------------------
# Model Parameters
# (units: a,b = nm, Uo,dE = eV)
#------------------------------------------------------------------------------
Uo = _Uo
a = pi
b = pi
Emax = 4*Uo
dE = 0.00001*Emax
gap_tol = 1.000001*dE
# Display model parameters
print("----------------------------------------------------------------------")
print("Model Parameters")
print("Uo (eV):", Uo)
print("a (nm):", a)
print("b (nm):", b)
print("dE (eV):", dE)
print("Energy gap tolerance (eV):", gap_tol)
#------------------------------------------------------------------------------
# Function definitions
#------------------------------------------------------------------------------
def f_bound(E, Uo, a, b):
u = sqrt(2*_m*Uo)/_hbar
xi = E/Uo
alpha = a*u*sqrt(xi)
beta = b*u*sqrt(1 - xi)
C = (1 - 2*xi)/(2*sqrt(xi*(1 - xi)))
return cos(alpha)*cosh(beta) + C*sin(alpha)*sinh(beta)
def f_free(E, Uo, a, b):
u = sqrt(2*_m*Uo)/_hbar
xi = E/Uo
alpha = a*u*sqrt(xi)
beta = b*u*sqrt(xi - 1)
C = (1 - 2*xi)/(2*sqrt(xi*(xi - 1)))
return cos(alpha)*cos(beta) + C*sin(alpha)*sin(beta)
def f_top(Uo, a, b):
u = sqrt(2*_m*Uo)/_hbar
alpha = a*u
C = -0.5*b*u
return cos(alpha) + C*sin(alpha)
def f_bot(Uo, a, b):
u = sqrt(2*_m*Uo)/_hbar
beta = b*u
C = 0.5*a*u
return cosh(beta) + C*sinh(beta)
def mass_eff(k,E):
return ((_hbar*_c)**2)/(polyfit(k,E,2)[0])
#==============================================================================
# Perform Calculations - The main part of the script
#==============================================================================
# Assign initial values of E
if Uo > 0:
E = arange(0,Emax,dE)
elif Uo < 0:
print("Uo must be greater than 0!")
exit()
else:
print("Uo cannont equal zero!")
exit()
# Calculate the energy bands
f = []
E_bands = []
f_bands = []
for En in E:
if En == 0:
fn = f_bot(Uo, a, b)
f.append(fn)
if abs(fn) <= 1:
E_bands.append(En)
f_bands.append(fn)
elif En < Uo:
fn = f_bound(En, Uo, a, b)
f.append(fn)
if abs(fn) <= 1:
E_bands.append(En)
f_bands.append(fn)
elif En == Uo:
fn = f_top(Uo, a, b)
f.append(fn)
if abs(fn) <= 1:
E_bands.append(En)
f_bands.append(fn)
elif En > Uo:
fn = f_free(En, Uo, a, b)
f.append(fn)
if abs(fn) <= 1:
E_bands.append(En)
f_bands.append(fn)
f = array(f) # f as a function of E
E = array(E) # Emin to Emax by dE
xi = E/Uo # Emin/Uo to Emax/Uo by dE/Uo
E_bands = array(E_bands) # Allowed Energies
f_bands = array(f_bands) # Allowed values of f
xi_bands = E_bands/Uo # Normalized allowed energies
K = arccos(f_bands)/(a+b) # K values corresponding to energies
Kext = zeros(K.shape) # Extended K values
# Find Band Gaps And Effective Mass At Band Ends
E_gaps = [] # Array of tuples containing band gaps
# (Bottom Energy, Top Energy, Gap Width)
gap_idx = [] # Index of the bottom edge of every band
n = 0
for i in range(1,E_bands.size):
chgE = E_bands[i] - E_bands[i-1]
if chgE > gap_tol:
E_gaps.append((E_bands[i-1], E_bands[i], chgE))
gap_idx.append(i)
n += 1
if n != 0:
if n%2:
Kext[i] = (n + 1)*pi/(a+b) - K[i]
else:
Kext[i] = n*pi/(a+b) + K[i]
else:
Kext[i] = K[i]
del n
# Normalized E_gaps
xi_gaps = [(gap[0]/Uo, gap[1]/Uo, gap[2]/Uo) for gap in E_gaps]
# Maximum value of K
Kmax = max(Kext)
# Energies of a free particle over the range -Kmax to Kmax
fakeK = arange(-Kmax,Kmax,0.001*Kmax)
E_empty = (_hbar*fakeK)**2/(2*_m)
# Calculate the effective masses at each band edge
effMass_gnd = mass_eff(Kext[0:3],E_bands[0:3]) # Ground State Effective Mass
effMass = [] # Array of tuples of effective masses for bands
# [(mass at bottom, mass at top), (..),...]
# ----------"Band 1"-----------, ...
mbot = effMass_gnd
mtop = 0
for i in gap_idx:
mtop = mass_eff(Kext[i-3:i],E_bands[i-3:i])
effMass.append((mbot,mtop))
mbot = mass_eff(Kext[i:i+3],E_bands[i:i+3])
del mbot, mtop
#==============================================================================
#------------------------------------------------------------------------------
# Display Ground State Energy, Energy Gaps, & Effective masses at band ends
#------------------------------------------------------------------------------
print("----------------------------------------------------------------------")
print("Ground State Energy (E/Uo):", E_bands[0]/Uo)
print("Energy Gaps (E/Uo)")
print("[Gap #] (Lower Energy, Upper Energy, Energy Gap)")
for i in range(0,len(E_gaps)):
print("[{0:2d}] ".format(i+1), xi_gaps[i])
print("----------------------------------------------------------------------")
print("Ground State Energy (eV):", E_bands[0])
print("Energy Gaps (eV)")
print("[Gap #] (Lower Energy, Upper Energy, Energy Gap)")
for i in range(0,len(E_gaps)):
print("[{0:2d}] ".format(i+1), E_gaps[i])
print("----------------------------------------------------------------------")
print("Ground State Effective mass (MeV):", effMass_gnd/1e6)
print("Effective Mass At Band Edges (MeV)")
print("[Band #] (Mass At Bottom, Mass At Top)")
for i in range(0,len(effMass)):
print("[{0:2d}] ".format(i+1), (effMass[i][0]/1e6, effMass[i][1]/1e6))
#------------------------------------------------------------------------------
# Display Pretty Figures
#------------------------------------------------------------------------------
# Graphical Solution
'''
f1 = figure(1)
plot(xi, f, 'b',
xi, zeros(xi.shape), 'k',
xi, ones(xi.shape), 'r',
xi, -1*ones(xi.shape),'r',
figure=f1)
grid(True)
ylim([-4,4])
xlabel("E/Uo")
ylabel("f(E)")
'''
# Normalized E-K Plots
f2 = figure(2)
plot(-1*K*(a+b)/pi, xi_bands, 'b', K*(a+b)/pi, xi_bands, 'b',
-1*Kext*(a+b)/pi, xi_bands, 'r', Kext*(a+b)/pi, xi_bands, 'r',
fakeK*(a+b)/pi, E_empty/Uo, 'g--',
figure=f2)
grid(True)
xlim([-1.1*Kmax*(a+b)/pi,1.1*Kmax*(a+b)/pi])
ylim([0,Emax/Uo])
xlabel("K*(a + b)/Pi")
ylabel("E/Uo")
title("EK Plots")
'''
# E-K Plots
f3 = figure(3)
plot(-1*K, E_bands, 'b', K, E_bands, 'b',
-1*Kext, E_bands, 'r', Kext, E_bands, 'r',
fakeK, E_empty, 'g--',
figure=f3)
grid(True)
xlim([-1.1*Kmax, 1.1*Kmax])
ylim([0,Emax])
xlabel("K (1/nm)")
ylabel("E (eV)")
title("EK Plots")
'''
show()
```
```python
# Graphical Solution
f1 = figure(1)
plot(xi, f, 'b',
xi, zeros(xi.shape), 'k',
xi, ones(xi.shape), 'r',
xi, -1*ones(xi.shape),'r',
figure=f1)
grid(True)
ylim([-4,4])
xlabel("E/Uo")
ylabel("f(E)")
```
```python
# Normalized E-K Plots
f2 = figure(2)
plot(-1*K*(a+b)/pi, xi_bands, 'b', K*(a+b)/pi, xi_bands, 'b',
-1*Kext*(a+b)/pi, xi_bands, 'r', Kext*(a+b)/pi, xi_bands, 'r',
fakeK*(a+b)/pi, E_empty/Uo, 'g--',
figure=f2)
grid(True)
xlim([-1.1*Kmax*(a+b)/pi,1.1*Kmax*(a+b)/pi])
ylim([0,Emax/Uo])
xlabel("K*(a + b)/Pi")
ylabel("E/Uo")
title("EK Plots")
```
```python
# E-K Plots
f3 = figure(3)
plot(-1*K, E_bands, 'b', K, E_bands, 'b',
-1*Kext, E_bands, 'r', Kext, E_bands, 'r',
fakeK, E_empty, 'g--',
figure=f3)
grid(True)
xlim([-1.1*Kmax, 1.1*Kmax])
ylim([0,Emax])
xlabel("K (1/nm)")
ylabel("E (eV)")
title("EK Plots")
```
```python
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn-white')
import numpy as np
def f(x, y):
return np.sin(x) ** 10 + np.cos(10 + y * x) * np.cos(x)
x = np.linspace(0, 5, 50)
y = np.linspace(0, 5, 40)
X, Y = np.meshgrid(x, y)
Z = f(X, Y)
plt.contour(X, Y, Z, colors='black');
```
```python
plt.contour(X, Y, Z, 20, cmap='RdGy');
```
```python
plt.imshow(Z, extent=[0, 5, 0, 5], origin='lower',
cmap='RdGy')
plt.colorbar()
plt.axis(aspect='image');
```
```python
contours = plt.contour(X, Y, Z, 3, colors='black')
plt.clabel(contours, inline=True, fontsize=8)
plt.imshow(Z, extent=[0, 5, 0, 5], origin='lower',
cmap='RdGy', alpha=0.5)
plt.colorbar();
```
```python
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
fig = plt.figure()
ax = plt.axes(projection='3d')
# Data for a three-dimensional line
zline = np.linspace(0, 15, 1000)
xline = np.sin(zline)
yline = np.cos(zline)
ax.plot3D(xline, yline, zline, 'gray')
# Data for three-dimensional scattered points
zdata = 15 * np.random.random(100)
xdata = np.sin(zdata) + 0.1 * np.random.randn(100)
ydata = np.cos(zdata) + 0.1 * np.random.randn(100)
ax.scatter3D(xdata, ydata, zdata, c=zdata, cmap='YlGnBu')
```
```python
def f(x, y):
return np.sin(np.sqrt(x ** 2 + y ** 2))
x = np.linspace(-6, 6, 30)
y = np.linspace(-6, 6, 30)
X, Y = np.meshgrid(x, y)
Z = f(X, Y)
fig = plt.figure()
ax = plt.axes(projection='3d')
ax.contour3D(X, Y, Z, 50, cmap='Greens')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z');
```
```python
# DOUBLE PENDULUM
import matplotlib
matplotlib.use('TKAgg') # 'tkAgg' if Qt not present
import matplotlib.pyplot as plt
import numpy as sp   # only trig/array functions are needed; modern SciPy no longer re-exports them
import matplotlib.animation as animation
class Pendulum:
def __init__(self, theta1, theta2, dt):
self.theta1 = theta1
self.theta2 = theta2
self.p1 = 0.0
self.p2 = 0.0
self.dt = dt
self.g = 9.81
self.length = 1.0
self.trajectory = [self.polar_to_cartesian()]
def polar_to_cartesian(self):
x1 = self.length * sp.sin(self.theta1)
y1 = -self.length * sp.cos(self.theta1)
x2 = x1 + self.length * sp.sin(self.theta2)
y2 = y1 - self.length * sp.cos(self.theta2)
print(self.theta1, self.theta2)
return sp.array([[0.0, 0.0], [x1, y1], [x2, y2]])
def evolve(self):
theta1 = self.theta1
theta2 = self.theta2
p1 = self.p1
p2 = self.p2
g = self.g
l = self.length
expr1 = sp.cos(theta1 - theta2)
expr2 = sp.sin(theta1 - theta2)
expr3 = (1 + expr2**2)
expr4 = p1 * p2 * expr2 / expr3
expr5 = (p1**2 + 2 * p2**2 - p1 * p2 * expr1) \
* sp.sin(2 * (theta1 - theta2)) / 2 / expr3**2
expr6 = expr4 - expr5
self.theta1 += self.dt * (p1 - p2 * expr1) / expr3
self.theta2 += self.dt * (2 * p2 - p1 * expr1) / expr3
self.p1 += self.dt * (-2 * g * l * sp.sin(theta1) - expr6)
self.p2 += self.dt * ( -g * l * sp.sin(theta2) + expr6)
new_position = self.polar_to_cartesian()
self.trajectory.append(new_position)
print(new_position)
return new_position
class Animator:
def __init__(self, pendulum, draw_trace=False):
self.pendulum = pendulum
self.draw_trace = draw_trace
self.time = 0.0
# set up the figure
self.fig, self.ax = plt.subplots()
self.ax.set_ylim(-2.5, 2.5)
self.ax.set_xlim(-2.5, 2.5)
# prepare a text window for the timer
self.time_text = self.ax.text(0.05, 0.95, '',
horizontalalignment='left',
verticalalignment='top',
transform=self.ax.transAxes)
# initialize by plotting the last position of the trajectory
self.line, = self.ax.plot(
self.pendulum.trajectory[-1][:, 0],
self.pendulum.trajectory[-1][:, 1],
marker='o')
# trace the whole trajectory of the second pendulum mass
if self.draw_trace:
self.trace, = self.ax.plot(
[a[2, 0] for a in self.pendulum.trajectory],
[a[2, 1] for a in self.pendulum.trajectory])
def advance_time_step(self):
while True:
self.time += self.pendulum.dt
yield self.pendulum.evolve()
def update(self, data):
self.time_text.set_text('Elapsed time: {:6.2f} s'.format(self.time))
self.line.set_ydata(data[:, 1])
self.line.set_xdata(data[:, 0])
if self.draw_trace:
self.trace.set_xdata([a[2, 0] for a in self.pendulum.trajectory])
self.trace.set_ydata([a[2, 1] for a in self.pendulum.trajectory])
return self.line,
def animate(self):
self.animation = animation.FuncAnimation(self.fig, self.update,
self.advance_time_step, interval=25, blit=False)
pendulum = Pendulum(theta1=sp.pi, theta2=sp.pi - 0.01, dt=0.01)
animator = Animator(pendulum=pendulum, draw_trace=True)
animator.animate()
plt.show()
```
```python
# QUANTUM TUNNELING
import matplotlib
import numpy as np
#matplotlib.use('TKAgg')
import matplotlib.pyplot as plt
from scipy.sparse import linalg as ln
from scipy import sparse as sparse
import matplotlib.animation as animation
class Wave_Packet:
def __init__(self, n_points, dt, sigma0=5.0, k0=1.0, x0=-150.0, x_begin=-200.0,
x_end=200.0, barrier_height=1.0, barrier_width=3.0):
self.n_points = n_points
self.sigma0 = sigma0
self.k0 = k0
self.x0 = x0
self.dt = dt
self.prob = np.zeros(n_points)
self.barrier_width = barrier_width
self.barrier_height = barrier_height
""" 1) Space discretization """
self.x, self.dx = np.linspace(x_begin, x_end, n_points, retstep=True)
""" 2) Initialization of the wave function to Gaussian wave packet """
norm = (2.0 * np.pi * sigma0**2)**(-0.25)
self.psi = np.exp(-(self.x - x0)**2 / (4.0 * sigma0**2))
        self.psi *= np.exp(1.0j * k0 * self.x)   # travelling-wave phase factor exp(i k0 x)
self.psi *= (2.0 * np.pi * sigma0**2)**(-0.25)
""" 3) Setting up the potential barrier """
self.potential = np.array(
[barrier_height if 0.0 < x < barrier_width else 0.0 for x in self.x])
""" 4) Creating the Hamiltonian """
h_diag = np.ones(n_points) / self.dx**2 + self.potential
h_non_diag = np.ones(n_points - 1) * (-0.5 / self.dx**2)
hamiltonian = sparse.diags([h_diag, h_non_diag, h_non_diag], [0, 1, -1])
""" 5) Computing the Crank-Nicolson time evolution matrix """
implicit = (sparse.eye(self.n_points) - dt / 2.0j * hamiltonian).tocsc()
explicit = (sparse.eye(self.n_points) + dt / 2.0j * hamiltonian).tocsc()
self.evolution_matrix = ln.inv(implicit).dot(explicit).tocsr()
def evolve(self):
self.psi = self.evolution_matrix.dot(self.psi)
self.prob = abs(self.psi)**2
norm = sum(self.prob)
self.prob /= norm
self.psi /= norm**0.5
return self.prob
class Animator:
def __init__(self, wave_packet):
self.time = 0.0
self.wave_packet = wave_packet
self.fig, self.ax = plt.subplots()
plt.plot(self.wave_packet.x, self.wave_packet.potential * 0.1, color='r')
self.time_text = self.ax.text(0.05, 0.95, '', horizontalalignment='left',
verticalalignment='top', transform=self.ax.transAxes)
self.line, = self.ax.plot(self.wave_packet.x, self.wave_packet.evolve())
self.ax.set_ylim(0, 0.2)
self.ax.set_xlabel('Position (a$_0$)')
self.ax.set_ylabel('Probability density (a$_0$)')
def update(self, data):
self.line.set_ydata(data)
return self.line,
def time_step(self):
while True:
self.time += self.wave_packet.dt
self.time_text.set_text(
'Elapsed time: {:6.2f} fs'.format(self.time * 2.419e-2))
yield self.wave_packet.evolve()
def animate(self):
self.ani = animation.FuncAnimation(
self.fig, self.update, self.time_step, interval=5, blit=False)
wave_packet = Wave_Packet(n_points=500, dt=0.5, barrier_width=10, barrier_height=1)
animator = Animator(wave_packet)
animator.animate()
plt.show()
from IPython.display import HTML
ani=Animator(wave_packet).animate()
ani
```
```python
from __future__ import print_function
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
def D_wrap(A, dX):
d = A - np.roll(A, 1)
d = 0.5 * (d + np.roll(d, -1))
return d
#doesn't wrap over, faster
def D_nowrap(A, dX):
d = A[1:] - A[:-1]
d = np.concatenate(([d[0]], d, [d[-1]]))
d = 0.5 * (d[1:] + d[:-1])
return d
D = D_nowrap #select diff function
N = 400 #number of points
size = 10 #x in [-size, size]
dX = 2.*size/N #x step
dt = 0.0005 #time step
X = np.linspace(-size, size, N)
x0 = -5
A = np.exp(-4*(X-x0)*((X-x0)+1500j)) #initial wave function
A = np.array(A, dtype=complex)  # np.complex was removed from recent NumPy; use the builtin
#V = np.zeros(N)
V = 0.5*(np.abs(X - 2) < 0.5) #potential
fig, ax = plt.subplots()
line, = ax.plot(X, A)
ax.set_ylim([0, 1])
ax.plot(X, V)
def animate(i):
global A
for i in range(1000): #don't show every time step
A += dt*(1j*D(D(A, dX), dX) - 1j * V * A) #schrodinger equation
print(np.sum(np.abs(A**2))) #print norm
line.set_ydata(np.abs(A**2))
return line,
ani = animation.FuncAnimation(fig, animate, np.arange(1, 200), interval=25, blit=True)
plt.show()
ani
```
```python
import numpy as np
from matplotlib import pyplot as pl
from matplotlib import animation
from scipy.fftpack import fft,ifft
class Schrodinger(object):
"""
Class which implements a numerical solution of the time-dependent
Schrodinger equation for an arbitrary potential
"""
def __init__(self, x, psi_x0, V_x,
k0 = None, hbar=1, m=1, t0=0.0):
"""
Parameters
----------
x : array_like, float
length-N array of evenly spaced spatial coordinates
psi_x0 : array_like, complex
length-N array of the initial wave function at time t0
V_x : array_like, float
length-N array giving the potential at each x
k0 : float
the minimum value of k. Note that, because of the workings of the
fast fourier transform, the momentum wave-number will be defined
in the range
k0 < k < 2*pi / dx
where dx = x[1]-x[0]. If you expect nonzero momentum outside this
range, you must modify the inputs accordingly. If not specified,
k0 will be calculated such that the range is [-k0,k0]
hbar : float
value of planck's constant (default = 1)
m : float
particle mass (default = 1)
t0 : float
            initial time (default = 0)
"""
# Validation of array inputs
self.x, psi_x0, self.V_x = map(np.asarray, (x, psi_x0, V_x))
N = self.x.size
assert self.x.shape == (N,)
assert psi_x0.shape == (N,)
assert self.V_x.shape == (N,)
# Set internal parameters
self.hbar = hbar
self.m = m
self.t = t0
self.dt_ = None
self.N = len(x)
self.dx = self.x[1] - self.x[0]
self.dk = 2 * np.pi / (self.N * self.dx)
# set momentum scale
if k0 == None:
self.k0 = -0.5 * self.N * self.dk
else:
self.k0 = k0
self.k = self.k0 + self.dk * np.arange(self.N)
self.psi_x = psi_x0
self.compute_k_from_x()
# variables which hold steps in evolution of the
self.x_evolve_half = None
self.x_evolve = None
self.k_evolve = None
# attributes used for dynamic plotting
self.psi_x_line = None
self.psi_k_line = None
self.V_x_line = None
def _set_psi_x(self, psi_x):
self.psi_mod_x = (psi_x * np.exp(-1j * self.k[0] * self.x)
* self.dx / np.sqrt(2 * np.pi))
def _get_psi_x(self):
return (self.psi_mod_x * np.exp(1j * self.k[0] * self.x)
* np.sqrt(2 * np.pi) / self.dx)
def _set_psi_k(self, psi_k):
self.psi_mod_k = psi_k * np.exp(1j * self.x[0]
* self.dk * np.arange(self.N))
def _get_psi_k(self):
return self.psi_mod_k * np.exp(-1j * self.x[0] *
self.dk * np.arange(self.N))
def _get_dt(self):
return self.dt_
def _set_dt(self, dt):
if dt != self.dt_:
self.dt_ = dt
self.x_evolve_half = np.exp(-0.5 * 1j * self.V_x
/ self.hbar * dt )
self.x_evolve = self.x_evolve_half * self.x_evolve_half
self.k_evolve = np.exp(-0.5 * 1j * self.hbar /
self.m * (self.k * self.k) * dt)
psi_x = property(_get_psi_x, _set_psi_x)
psi_k = property(_get_psi_k, _set_psi_k)
dt = property(_get_dt, _set_dt)
def compute_k_from_x(self):
self.psi_mod_k = fft(self.psi_mod_x)
def compute_x_from_k(self):
self.psi_mod_x = ifft(self.psi_mod_k)
def time_step(self, dt, Nsteps = 1):
"""
Perform a series of time-steps via the time-dependent
Schrodinger Equation.
Parameters
----------
dt : float
the small time interval over which to integrate
Nsteps : float, optional
the number of intervals to compute. The total change
in time at the end of this method will be dt * Nsteps.
default is N = 1
"""
self.dt = dt
if Nsteps > 0:
self.psi_mod_x *= self.x_evolve_half
for i in range(Nsteps - 1):
self.compute_k_from_x()
self.psi_mod_k *= self.k_evolve
self.compute_x_from_k()
self.psi_mod_x *= self.x_evolve
self.compute_k_from_x()
self.psi_mod_k *= self.k_evolve
self.compute_x_from_k()
self.psi_mod_x *= self.x_evolve_half
self.compute_k_from_x()
self.t += dt * Nsteps
######################################################################
# Helper functions for gaussian wave-packets
def gauss_x(x, a, x0, k0):
"""
a gaussian wave packet of width a, centered at x0, with momentum k0
"""
return ((a * np.sqrt(np.pi)) ** (-0.5)
* np.exp(-0.5 * ((x - x0) * 1. / a) ** 2 + 1j * x * k0))
def gauss_k(k,a,x0,k0):
"""
analytical fourier transform of gauss_x(x), above
"""
return ((a / np.sqrt(np.pi))**0.5
* np.exp(-0.5 * (a * (k - k0)) ** 2 - 1j * (k - k0) * x0))
######################################################################
# Utility functions for running the animation
def theta(x):
"""
theta function :
returns 0 if x<=0, and 1 if x>0
"""
x = np.asarray(x)
y = np.zeros(x.shape)
y[x > 0] = 1.0
return y
def square_barrier(x, width, height):
return height * (theta(x) - theta(x - width))
######################################################################
# Create the animation
# specify time steps and duration
dt = 0.01
N_steps = 50
t_max = 120
frames = int(t_max / float(N_steps * dt))
# specify constants
hbar = 1.0 # planck's constant
m = 1.9 # particle mass
# specify range in x coordinate
N = 2 ** 11
dx = 0.1
x = dx * (np.arange(N) - 0.5 * N)
# specify potential
V0 = 1.5
L = hbar / np.sqrt(2 * m * V0)
a = 3 * L
x0 = -60 * L
V_x = square_barrier(x, a, V0)
V_x[x < -98] = 1E6
V_x[x > 98] = 1E6
# specify initial momentum and quantities derived from it
p0 = np.sqrt(2 * m * 0.2 * V0)
dp2 = p0 * p0 * 1./80
d = hbar / np.sqrt(2 * dp2)
k0 = p0 / hbar
v0 = p0 / m
psi_x0 = gauss_x(x, d, x0, k0)
# define the Schrodinger object which performs the calculations
S = Schrodinger(x=x,
psi_x0=psi_x0,
V_x=V_x,
hbar=hbar,
m=m,
k0=-28)
######################################################################
# Set up plot
fig = pl.figure()
# plotting limits
xlim = (-100, 100)
klim = (-5, 5)
# top axes show the x-space data
ymin = 0
ymax = V0
ax1 = fig.add_subplot(211, xlim=xlim,
ylim=(ymin - 0.2 * (ymax - ymin),
ymax + 0.2 * (ymax - ymin)))
psi_x_line, = ax1.plot([], [], c='r', label=r'$|\psi(x)|$')
V_x_line, = ax1.plot([], [], c='k', label=r'$V(x)$')
center_line = ax1.axvline(0, c='k', ls=':',
label = r"$x_0 + v_0t$")
title = ax1.set_title("")
ax1.legend(prop=dict(size=12))
ax1.set_xlabel('$x$')
ax1.set_ylabel(r'$|\psi(x)|$')
# bottom axes show the k-space data
ymin = abs(S.psi_k).min()
ymax = abs(S.psi_k).max()
ax2 = fig.add_subplot(212, xlim=klim,
ylim=(ymin - 0.2 * (ymax - ymin),
ymax + 0.2 * (ymax - ymin)))
psi_k_line, = ax2.plot([], [], c='r', label=r'$|\psi(k)|$')
p0_line1 = ax2.axvline(-p0 / hbar, c='k', ls=':', label=r'$\pm p_0$')
p0_line2 = ax2.axvline(p0 / hbar, c='k', ls=':')
mV_line = ax2.axvline(np.sqrt(2 * V0) / hbar, c='k', ls='--',
label=r'$\sqrt{2mV_0}$')
ax2.legend(prop=dict(size=12))
ax2.set_xlabel('$k$')
ax2.set_ylabel(r'$|\psi(k)|$')
V_x_line.set_data(S.x, S.V_x)
######################################################################
# Animate plot
def init():
psi_x_line.set_data([], [])
V_x_line.set_data([], [])
center_line.set_data([], [])
psi_k_line.set_data([], [])
title.set_text("")
return (psi_x_line, V_x_line, center_line, psi_k_line, title)
def animate(i):
S.time_step(dt, N_steps)
psi_x_line.set_data(S.x, 4 * abs(S.psi_x))
V_x_line.set_data(S.x, S.V_x)
center_line.set_data(2 * [x0 + S.t * p0 / m], [0, 1])
psi_k_line.set_data(S.k, abs(S.psi_k))
title.set_text("t = %.2f" % S.t)
return (psi_x_line, V_x_line, center_line, psi_k_line, title)
# call the animator. blit=True means only re-draw the parts that have changed.
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=frames, interval=30, blit=True)
# uncomment the following line to save the video in mp4 format. This
# requires either mencoder or ffmpeg to be installed on your system
anim.save('schrodinger_barrier.mp4', fps=15, extra_args=['-vcodec', 'libx264'])
pl.show()
```
# Double Finite Square Well
The double finite square well is an interesting problem because it shows, in a very rudimentary way, how you can get binding between two atoms simply because they share a particle. The idea is that the overall energy of the system tends toward its minimum.
We follow the now familiar recipe to get the energy levels of a quantum system.
```python
import numpy as np
import matplotlib.pyplot as plt
import scipy.linalg as scl
plt.style.use('fivethirtyeight')
hbar=1
m=1
N = 4096
x_min = -50.0
x_max = 50
x = np.linspace(x_min,x_max,N)
# We want to store step size, this is the reliable way:
h = x[1]-x[0] # Step size; equal to (x_max-x_min)/(N-1)
a=2.
b=2*a
V0 = -.5
#
#
V=np.zeros(N)
for i in range(N):
if x[i] > -a -b/2. and x[i]< -b/2.:
V[i]= V0
elif x[i] > b/2. and x[i] < b/2. + a :
V[i]= V0
Mdd = 1./(h*h)*(np.diag(np.ones(N-1),-1) -2* np.diag(np.ones(N),0) + np.diag(np.ones(N-1),1))
H = -(hbar*hbar)/(2.0*m)*Mdd + np.diag(V)
E,psiT = np.linalg.eigh(H) # This computes the eigen values and eigenvectors
psi = np.transpose(psiT) # Transpose psiT so the wavefunction vectors can be accessed as psi[n]
#
```
```python
fig = plt.figure(figsize=(10,8))
ax1 = fig.add_subplot(111)
ax1.set_xlabel("x")
ax1.set_ylabel("$\psi_n(x)$")
for i in range(2):
if E[i]<0: # Only plot the bound states. The scattering states are not reliably computed.
if psi[i][int(N/2)+10] < 0: # Flip the wavefunctions if it is negative at large x, so plots are more consistent.
ax1.plot(x,-psi[i]/np.sqrt(h),label="$E_{}$={:>8.3f}".format(i,E[i]))
else:
ax1.plot(x,psi[i]/np.sqrt(h),label="$E_{}$={:>8.3f}".format(i,E[i]))
plt.title("Wavefunctions for the Finite Square Well")
# Plot the potential as well, on a separate y axis
ax2 = ax1.twinx()
ax2.set_ylabel("Energy") # To get separate energy scale
ax2.plot(x,V,color="Gray",label="V(x)")
ax1.set_xlim((-a-b-5,a+b+5))
legendh1,labels1 = ax1.get_legend_handles_labels() # For putting all legends in one box.
legendh2,labels2 = ax2.get_legend_handles_labels()
plt.legend(legendh1+legendh2,labels1+labels2,loc="lower right")
plt.savefig("Double_Finite_Square_Well_WaveFunctions1.pdf")
plt.show()
```
```python
# Change of barrier width
```
```python
import numpy as np
import matplotlib.pyplot as plt
E_dw=[]
psi_dw=[]
b_arr = [0.,0.25*a,0.5*a,0.75*a,1.*a,1.25*a,1.5*a,1.75*a,2.*a,2.5*a,3.*a,4.*a,5.*a]
for b in b_arr:
V0 = -1.
V=np.zeros(N)
for i in range(N):
if x[i] > -a -b/2. and x[i]< -b/2.:
V[i]= V0
elif x[i] > b/2. and x[i] < b/2. + a :
V[i]= V0
Mdd = 1./(h*h)*(np.diag(np.ones(N-1),-1) -2* np.diag(np.ones(N),0) + np.diag(np.ones(N-1),1))
H = -(hbar*hbar)/(2.0*m)*Mdd + np.diag(V)
E,psiT = np.linalg.eigh(H) # This computes the eigen values and eigenvectors
    psi = np.transpose(psiT) # Transpose psiT so the wavefunction vectors can be accessed as psi[n]
E_dw.append(E)
psi_dw.append(psi)
```
```python
import numpy as np
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(10,8))
E_0=[E_dw[i][0] for i in range(len(b_arr))]
E_1=[E_dw[i][1] for i in range(len(b_arr))]
plt.plot(b_arr,E_0,label="$E_1$ Ground state")
plt.plot(b_arr,E_1,label="$E_2$ 1st excited")
plt.legend()
plt.savefig("Double_Finite_Well_E")
plt.show()
```
# Hyperbolic Partial Differential Equation:
## $ \frac{\partial^2\psi}{\partial t^2}=c^2\frac{\partial^2\psi}{\partial x^2}$
```python
!pip install gekko
```
```python
import numpy as np
import matplotlib.pyplot as plt
from gekko import*
from mpl_toolkits.mplot3d.axes3d import Axes3D
tf = .0005
npt = 100
xf = 2*np.pi
npx = 100
time = np.linspace(0,tf,npt)
xpos = np.linspace(0,xf,npx)
m = GEKKO()
m.time = time
def phi(x):
phi = np.cos(x)
return phi
def psi(x):
psi = np.sin(2*x)
return psi
x0 = phi(xpos)
v0 = psi(xpos)
dx = xpos[1]-xpos[0]
a = 18996.06
c = m.Const(value = a)
dx = m.Const(value = dx)
u = [m.Var(value = x0[i]) for i in range(npx)]
v = [m.Var(value = v0[i]) for i in range(npx)]
[m.Equation(u[i].dt()==v[i]) for i in range(npx)]
m.Equation(v[0].dt()==c**2 * \
(u[1] - 2.0*u[0] + u[npx-1])/dx**2 )
[m.Equation(v[i+1].dt()== \
c**2 * (u[i+2] - 2.0*u[i+1] + u[i])/dx**2) \
for i in range(npx-2) ]
m.Equation(v[npx-1].dt()== c**2 * \
(u[npx-2] - 2.0*u[npx-1] + u[0])/dx**2 )
m.options.imode = 4
m.options.solver = 1
m.options.nodes = 3
m.solve()
# re-arrange results for plotting
for i in range(npx):
if i ==0:
ustor = np.array([u[i]])
tstor = np.array([m.time])
else:
ustor = np.vstack([ustor,u[i]])
tstor = np.vstack([tstor,m.time])
for i in range(npt):
if i == 0:
xstor = xpos
else:
xstor = np.vstack([xstor,xpos])
xstor = xstor.T
t = tstor
ustor = np.array(ustor)
fig = plt.figure()
ax = fig.add_subplot(1,1,1,projection='3d')
ax.set_xlabel('Distance (ft)', fontsize = 12)
ax.set_ylabel('Time (seconds)', fontsize = 12)
ax.set_zlabel('Position (ft)', fontsize = 12)
ax.set_zlim((-1,1))
p = ax.plot_wireframe(xstor,tstor,ustor,\
rstride=1,cstride=1)
fig.savefig('wave_3d.png', transparent=True)
plt.figure()
plt.contour(xstor, tstor, ustor, 150)
plt.colorbar()
plt.xlabel('X')
plt.ylabel('Time')
plt.savefig('wave_contour.png', transparent=True)
plt.show()
```
```python
# Solving wave equation by explicit method (stub: only the signature was given)
def PDE(U0, U1, v0, c, hx, ht):
    # A standard explicit (leapfrog) update for interior points i would be:
    # U2[i] = 2*U1[i] - U0[i] + (c*ht/hx)**2 * (U1[i+1] - 2*U1[i] + U1[i-1])
    pass
```
```python
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
from scipy.fftpack import diff as psdiff
N = 100 #no. of mesh points
L = 1.0
x = np.linspace(0, L, N) #mesh points xi, 0 < xi < 1
h = x[1] - x[0]
k = -1.0
def odefunc(u, t):
ux = np.zeros(x.shape)
u[0] = 1 # boundary condition
for i in range(1,N-1):
ux[i] = float(u[i+1] - u[i-1])/(2*h)
# ux[i] = float(u[i] - u[i-1])/h
dudt = -ux
return dudt
init = np.zeros(x.shape, float) #initial condition
tspan = np.linspace(0.0, 2.0, N)
sol = odeint(odefunc, init, tspan, mxstep=5000)
for i in range(0, len(tspan), 2):
    plt.plot(x, sol[i], label='t={0:1.2f}'.format(tspan[i]))
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel('x')
plt.ylabel('u(x,t)')
plt.subplots_adjust(top=0.89, right=0.77)
plt.savefig('pde.png')
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
SX, ST = np.meshgrid(x, tspan)
ax.plot_surface(SX, ST, sol, cmap='jet')
ax.set_xlabel('x')
ax.set_ylabel('t')
ax.set_zlabel('u(x,t)')
ax.view_init(elev=15, azim=-100) # adjust view so it is easy to see
plt.savefig('pde-3d.png')
```
## Heat Conduction equation
Heat Equation
The heat equation in one dimension is a parabolic PDE. The one-dimensional transient heat equation contains a first partial derivative with respect to time and a second partial derivative with respect to distance:
$\rho c_p \frac{\partial T}{\partial t} = \frac{\partial}{\partial x} \left( k \frac{\partial T}{\partial x} \right) + \dot Q$
with $\rho$ as density, $c_p$ as heat capacity, $T$ as the temperature, $k$ as the thermal conductivity, and $\dot Q$ as the heat input rate. The heat equation is discretized in space to give a set of Ordinary Differential Equations (ODEs) in time.
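For an interior segment $i$ of the rod, the spatially discretized energy balance used in the GEKKO script below is (written out here to make the mapping to the code explicit)
$$\rho A \, \Delta x \, c_p \frac{dT_i}{dt} = \frac{k A}{\Delta x}\left(T_{i-1}-T_i\right) - \frac{k A}{\Delta x}\left(T_i-T_{i+1}\right) - h A_s\left(T_i-T_s\right),$$
where $A$ is the cross-sectional area, $A_s$ the segment surface area, $h$ the convective heat transfer coefficient, and $T_s$ the surrounding temperature; these correspond to `A`, `As`, `heff`, and `Ts` in the code, with `keff` for $k$ and `L_seg` for $\Delta x$.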
Heated Rod (Left Boundary Condition)
The following simulation is for a heated rod (10 cm) with the left-side temperature stepped to 100 °C. Heat transfers to the right by conduction and away from the rod by convection. The temperature at the tip is the lowest because of heat lost by convection to the surrounding air.
```python
# The parabolic PDE equation describes the evolution of temperature
# for the interior region of the rod. This model is modified to make
# one end of the rod fixed and the other temperature at the end of the
# rod calculated.
import numpy as np
from gekko import GEKKO
import matplotlib.pyplot as plt
# Steel rod temperature profile
# Diameter = 3 cm
# Length = 10 cm
seg = 100 # number of segments
T_melt = 1426 # melting temperature of H13 steel
pi = 3.14159 # pi
d = 3 / 100 # rod diameter (m)
L = 10 / 100 # rod length (m)
L_seg = L / seg # length of a segment (m)
A = pi * d**2 / 4 # rod cross-sectional area (m^2)
As = pi * d * L_seg # surface heat transfer area (m^2)
heff = 5.8 # heat transfer coeff (W/(m^2*K))
keff = 28.6 # thermal conductivity in H13 steel (W/m-K)
rho = 7760 # density of H13 rod steel (kg/m^3)
cp = 460 # heat capacity of H13 steel (J/kg-K)
Ts = 23 # temperature of the surroundings (°C)
c2k = 273.15 # Celsius to Kelvin
m = GEKKO() # create GEKKO model
tf = 3000
nt = int(tf/30) + 1
m.time = np.linspace(0,tf,nt)
Th = m.MV(ub=T_melt) # heater temperature (°C)
Th.value = np.ones(nt) * 23 # start at room temperature
Th.value[10:] = 100 # step at 300 sec
T = [m.Var(23) for i in range(seg)] # temperature of the segments (°C)
# Energy balance for the rod (segments)
# accumulation =
# (heat gained from upper segment)
# - (heat lost to lower segment)
# - (heat lost to surroundings)
# Units check
# kg/m^3 * m^2 * m * J/kg-K * K/sec =
# W/m-K * m^2 * K / m
# - W/m-K * m^2 * K / m
# - W/m^2-K * m^2 * K
# first segment
m.Equation(rho*A*L_seg*cp*T[0].dt() == \
keff*A*(Th-T[0])/L_seg \
- keff*A*(T[0]-T[1])/L_seg \
- heff*As*(T[0]-Ts))
# middle segments
m.Equations([rho*A*L_seg*cp*T[i].dt() == \
keff*A*(T[i-1]-T[i])/L_seg \
- keff*A*(T[i]-T[i+1])/L_seg \
- heff*As*(T[i]-Ts) for i in range(1,seg-1)])
# last segment
m.Equation(rho*A*L_seg*cp*T[seg-1].dt() == \
keff*A*(T[seg-2]-T[seg-1])/L_seg \
- heff*(As+A)*(T[seg-1]-Ts))
# simulation
m.options.IMODE = 4
m.solve()
# plot results
plt.figure()
tm = m.time / 60.0
plt.plot(tm,Th.value,'r-',label=r'$T_{heater}\,(^oC)$')
plt.plot(tm,T[5].value,'k--',label=r'$T_5\,(^oC)$')
plt.plot(tm,T[15].value,'k:',label=r'$T_{15}\,(^oC)$')
plt.plot(tm,T[25].value,'k:',label=r'$T_{25}\,(^oC)$')
plt.plot(tm,T[45].value,'k-.',label=r'$T_{45}\,(^oC)$')
plt.plot(tm,T[-1].value,'b-',label=r'$T_{tip}\,(^oC)$')
plt.ylabel(r'$T\,(^oC$)')
plt.xlabel('Time (min)')
plt.xlim([0,50])
plt.legend(loc=4)
plt.show()
```
```python
# The parabolic PDE equation describes the evolution of temperature
# for the interior region of the rod. This model is modified to make
# one end of the device fixed and the other temperature at the end of the
# device calculated.
import numpy as np
from gekko import GEKKO
import matplotlib.pyplot as plt
import matplotlib.animation as animation
# Steel temperature profile
# Diameter = 3 cm
# Length = 10 cm
seg = 100 # number of segments
T_melt = 1426 # melting temperature of H13 steel
pi = 3.14159 # pi
d = 3 / 100 # diameter (m)
L = 10 / 100 # length (m)
L_seg = L / seg # length of a segment (m)
A = pi * d**2 / 4 # cross-sectional area (m^2)
As = pi * d * L_seg # surface heat transfer area (m^2)
heff = 5.8 # heat transfer coeff (W/(m^2*K))
keff = 28.6 # thermal conductivity in H13 steel (W/m-K)
rho = 7760 # density of H13 steel (kg/m^3)
cp = 460 # heat capacity of H13 steel (J/kg-K)
Ts = 23 # temperature of the surroundings (°C)
m = GEKKO() # create GEKKO model
tf = 3000
nt = int(tf/30) + 1
dist = np.linspace(0,L,seg+2)
m.time = np.linspace(0,tf,nt)
T1 = m.MV(ub=T_melt) # temperature 1 (°C)
T1.value = np.ones(nt) * 23 # start at room temperature
T1.value[10:] = 80 # step at 300 sec
T = [m.Var(23) for i in range(seg)] # temperature of the segments (°C)
T2 = m.MV(ub=T_melt) # temperature 2 (°C)
T2.value = np.ones(nt) * 23 # start at room temperature
T2.value[50:] = 100 # step at 1500 sec
# Energy balance for the segments
# accumulation =
# (heat gained from upper segment)
# - (heat lost to lower segment)
# - (heat lost to surroundings)
# Units check
# kg/m^3 * m^2 * m * J/kg-K * K/sec =
# W/m-K * m^2 * K / m
# - W/m-K * m^2 * K / m
# - W/m^2-K * m^2 * K
# first segment
m.Equation(rho*A*L_seg*cp*T[0].dt() == \
keff*A*(T1-T[0])/L_seg \
- keff*A*(T[0]-T[1])/L_seg \
- heff*As*(T[0]-Ts))
# middle segments
m.Equations([rho*A*L_seg*cp*T[i].dt() == \
keff*A*(T[i-1]-T[i])/L_seg \
- keff*A*(T[i]-T[i+1])/L_seg \
- heff*As*(T[i]-Ts) for i in range(1,seg-1)])
# last segment
m.Equation(rho*A*L_seg*cp*T[-1].dt() == \
keff*A*(T[-2]-T[-1])/L_seg \
- keff*A*(T[-1]-T2)/L_seg \
- heff*As*(T[-1]-Ts))
# simulation
m.options.IMODE = 4
m.solve()
# plot results
plt.figure()
tm = m.time / 60.0
plt.plot(tm,T1.value,'k-',linewidth=2,label=r'$T_{left}\,(^oC)$')
plt.plot(tm,T[5].value,':',color='yellow',label=r'$T_{5}\,(^oC)$')
plt.plot(tm,T[15].value,'--',color='red',label=r'$T_{15}\,(^oC)$')
plt.plot(tm,T[25].value,'--',color='green',label=r'$T_{25}\,(^oC)$')
plt.plot(tm,T[45].value,'-.',color='gray',label=r'$T_{45}\,(^oC)$')
plt.plot(tm,T[95].value,'-',color='orange',label=r'$T_{95}\,(^oC)$')
plt.plot(tm,T2.value,'b-',linewidth=2,label=r'$T_{right}\,(^oC)$')
plt.ylabel(r'$T\,(^oC$)')
plt.xlabel('Time (min)')
plt.xlim([0,50])
plt.legend(loc=4)
plt.savefig('heat.png')
# create animation as heat.mp4
fig = plt.figure(figsize=(5,4))
fig.set_dpi(300)
ax1 = fig.add_subplot(1,1,1)
# store results in d
d = np.empty((seg+2,len(m.time)))
d[0] = np.array(T1.value)
for i in range(seg):
d[i+1] = np.array(T[i].value)
d[-1] = np.array(T2.value)
d = d.T
k = 0
def animate(i):
global k
k = min(len(m.time)-1,k)
ax1.clear()
plt.plot(dist*100,d[k],color='red',label=r'Temperature ($^oC$)')
plt.text(1,100,'Elapsed time '+str(round(m.time[k]/60,2))+' min')
plt.grid(True)
plt.ylim([20,110])
plt.xlim([0,L*100])
plt.ylabel(r'T ($^oC$)')
plt.xlabel(r'Distance (cm)')
plt.legend(loc=1)
k += 1
anim = animation.FuncAnimation(fig,animate,frames=len(m.time),interval=20)
# requires ffmpeg to save mp4 file
# available from https://ffmpeg.zeranoe.com/builds/
# add ffmpeg.exe to path such as C:\ffmpeg\bin\ in
# environment variables
try:
anim.save('heat.mp4',fps=10)
except:
print('requires ffmpeg to save mp4 file')
plt.show()
```
## SYMPY
https://docs.sympy.org/latest/tutorial/calculus.html
```python
from sympy import *
x, y, z = symbols('x y z')
init_printing(use_unicode=True)
diff(cos(x), x)
```
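A couple of further calls from the same SymPy calculus tutorial are sketched below (an addition for illustration; all calls are standard SymPy API): symbolic integration, a definite integral, and a limit.
```python
from sympy import symbols, sin, exp, integrate, limit, oo

x = symbols('x')
print(integrate(x*sin(x), x))           # -x*cos(x) + sin(x)
print(integrate(exp(-x), (x, 0, oo)))   # 1
print(limit(sin(x)/x, x, 0))            # 1
```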
```python
from numpy import linspace, zeros, asarray
import matplotlib.pyplot as plt
def ode_FE(f, U_0, dt, T):
N_t = int(round(float(T)/dt))
    # Ensure that any list/tuple returned from f is wrapped as an array
f_ = lambda u, t: asarray(f(u, t))
u = zeros((N_t+1, len(U_0)))
t = linspace(0, N_t*dt, len(u))
u[0] = U_0
for n in range(N_t):
u[n+1] = u[n] + dt*f_(u[n], t[n])
return u, t
"""Verify the implementation of the diffusion equation."""
from numpy import linspace, zeros, abs
def rhs(u, t):
N = len(u) - 1
rhs = zeros(N+1)
rhs[0] = dsdt(t)
for i in range(1, N):
rhs[i] = (beta/dx**2)*(u[i+1] - 2*u[i] + u[i-1]) + \
f(x[i], t)
rhs[N] = (beta/dx**2)*(2*u[N-1] + 2*dx*dudx(t) -
2*u[N]) + f(x[N], t)
return rhs
def u_exact(x, t):
return (3*t + 2)*(x - L)
def dudx(t):
return (3*t + 2)
def s(t):
return u_exact(0, t)
def dsdt(t):
return 3*(-L)
def f(x, t):
return 3*(x-L)
def verify_sympy_ForwardEuler():
import sympy as sp
beta, x, t, dx, dt, L = sp.symbols('beta x t dx dt L')
u = lambda x, t: (3*t + 2)*(x - L)**2
f = lambda x, t, beta, L: 3*(x-L)**2 - (3*t + 2)*2*beta
s = lambda t: (3*t + 2)*L**2
N = 4
rhs = [None]*(N+1)
rhs[0] = sp.diff(s(t), t)
for i in range(1, N):
rhs[i] = (beta/dx**2)*(u(x+dx,t) - 2*u(x,t) + u(x-dx,t)) + \
f(x, t, beta, L)
rhs[N] = (beta/dx**2)*(u(x-dx,t) + 2*dx*(3*t+2) -
2*u(x,t) + u(x-dx,t)) + f(x, t, beta, L)
for i in range(len(rhs)):
rhs[i] = sp.simplify(sp.expand(rhs[i])).subs(x, i*dx)
print(rhs[i])
lhs = (u(x, t+dt) - u(x,t))/dt # Forward Euler difference
lhs = sp.simplify(sp.expand(lhs.subs(x, i*dx)))
print(lhs)
print(sp.simplify(lhs - rhs[i]))
print('---')
def test_diffusion_exact_linear():
global beta, dx, L, x # needed in rhs
L = 1.5
beta = 0.5
N = 4
x = linspace(0, L, N+1)
dx = x[1] - x[0]
u = zeros(N+1)
U_0 = zeros(N+1)
U_0[0] = s(0)
U_0[1:] = u_exact(x[1:], 0)
dt = 0.1
print(dt)
u, t = ode_FE(rhs, U_0, dt, T=1.2)
tol = 1E-12
for i in range(0, u.shape[0]):
diff = abs(u_exact(x, t[i]) - u[i,:]).max()
assert diff < tol, 'diff=%.16g' % diff
print('diff=%g at t=%g' % (diff, t[i]))
if __name__ == '__main__':
test_diffusion_exact_linear()
verify_sympy_ForwardEuler()
```
## Application: heat conduction in a rod
Let us return to the case with heat conduction in a rod (120)-(123). Assume that the rod is 50 cm long and made of aluminum alloy 6082. The β parameter equals κ/(ϱc), where κ is the heat conduction coefficient, ϱ is the density, and c is the heat capacity. Proper values for these physical quantities in the case of aluminum alloy 6082 are ϱ = 2.7·10³ kg/m³, κ = 200 W/(m·K), and c = 900 J/(kg·K). This results in β = κ/(ϱc) ≈ 8.2·10⁻⁵ m²/s. Preliminary simulations show that we are close to a constant steady-state temperature after 1 h, i.e., T = 3600 s.
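As a quick sanity check (an added sketch, not part of the original script), β and the Forward Euler stability limit Δt ≤ Δx²/(2β) used below can be computed directly:
```python
# Added sketch: beta for aluminium alloy 6082 and the explicit stability limit.
kappa = 200.0          # heat conduction coefficient, W/(m*K)
rho = 2.7e3            # density, kg/m^3
c = 900.0              # heat capacity, J/(kg*K)
beta = kappa/(rho*c)
print(beta)            # ~8.2e-05 m^2/s

L = 0.5                # rod length, m
N = 40                 # number of mesh intervals (as in rod_FE.py below)
dx = L/N
print(dx**2/(2*beta))  # largest stable time step, ~0.95 s
```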
The rhs function from the previous section can be reused, only the functions s, dsdt, g, and dudx must be changed (see file rod_FE.py):
```python
"""Temperature evolution in a rod, computed by a ForwardEuler method."""
from numpy import linspace, zeros
import time
def rhs(u, t):
N = len(u) - 1
rhs = zeros(N+1)
rhs[0] = dsdt(t)
for i in range(1, N):
rhs[i] = (beta/dx**2)*(u[i+1] - 2*u[i] + u[i-1]) + \
g(x[i], t)
i = N
rhs[i] = (beta/dx**2)*(2*u[i-1] + 2*dx*dudx(t) -
2*u[i]) + g(x[N], t)
return rhs
def dudx(t):
return 0
def s(t):
return 323
def dsdt(t):
return 0
def g(x, t):
return 0
L = 0.5
beta = 8.2E-5
N = 40
x = linspace(0, L, N+1)
dx = x[1] - x[0]
u = zeros(N+1)
U_0 = zeros(N+1)
U_0[0] = s(0)
U_0[1:] = 283
dt = dx**2/(2*beta)
#print('stability limit:', dt)
#dt = 0.00034375
t0 = time.perf_counter()
u, t = ode_FE(rhs, U_0, dt, T=1*60*60)
t1 = time.perf_counter()
#print('CPU time: %.1fs' % (t1 - t0))
# Make movie
import os
os.system('rm tmp_*.png')
import matplotlib.pyplot as plt
plt.ion()
y = u[0,:]
lines = plt.plot(x, y)
plt.axis([x[0], x[-1], 273, s(0)+10])
plt.xlabel('x')
plt.ylabel('u(x,t)')
counter = 0
# Plot each of the first 100 frames, then increase speed by 10x
change_speed = 100
for i in range(0, u.shape[0]):
#print(t[i])
plot = True if i <= change_speed else i % 10 == 0
lines[0].set_ydata(u[i,:])
if i > change_speed:
plt.legend(['t=%.0f 10x' % t[i]])
else:
plt.legend(['t=%.0f' % t[i]])
plt.draw()
if plot:
plt.savefig('tmp_%04d.png' % counter)
counter += 1
time.sleep(0.2)
```
## Numerical Python
http://people.bu.edu/andasari/courses/numericalpython/python.html
```python
import numpy as np
import matplotlib.pyplot as plt
class beamWarming1:
def __init__(self, N, tmax):
self.N = N # number of nodes
self.tmax = tmax
self.xmin = 0
self.xmax = 1
self.dt = 0.009 # timestep
self.v = 1 # velocity
self.xc = 0.25
self.initializeDomain()
self.initializeU()
self.initializeParams()
def initializeDomain(self):
self.dx = (self.xmax - self.xmin)/self.N
self.x = np.arange(self.xmin-self.dx, self.xmax+(2*self.dx), self.dx)
def initializeU(self):
u0 = np.exp(-200*(self.x-self.xc)**2)
self.u = u0.copy()
self.unp1 = u0.copy()
def initializeParams(self):
self.nsteps = round(self.tmax/self.dt)
self.alpha1 = self.v*self.dt/(2*self.dx)
self.alpha2 = self.v**2*self.dt**2/(2*self.dx**2)
def solve_and_plot(self):
tc = 0
for i in range(self.nsteps):
plt.clf()
# The Beam-Warming scheme, Eq. (18.25)
for j in range(self.N+3):
self.unp1[j] = self.u[j] - self.alpha1*(3*self.u[j] - 4*self.u[j-1] + self.u[j-2]) + \
self.alpha2*(self.u[j] - 2*self.u[j-1] + self.u[j-2])
self.u = self.unp1.copy()
# Periodic boundary conditions
self.u[0] = self.u[self.N+1]
self.u[1] = self.u[self.N+2]
uexact = np.exp(-200*(self.x-self.xc-self.v*tc)**2)
plt.plot(self.x, uexact, 'r', label="Exact solution")
plt.plot(self.x, self.u, 'bo-', label="Beam-Warming")
plt.axis((self.xmin-0.15, self.xmax+0.15, -0.2, 1.4))
plt.grid(True)
plt.xlabel("Distance (x)")
plt.ylabel("u")
plt.legend(loc=1, fontsize=12)
plt.suptitle("Time = %1.3f" % (tc+self.dt))
plt.pause(0.01)
tc += self.dt
def main():
sim = beamWarming1(100, 1.5)
sim.solve_and_plot()
plt.show()
if __name__ == "__main__":
main()
#N = 100
#tmax = 2.5 # maximum value of t
```
```python
import matplotlib.pyplot as plt
import numpy as np
def feval(funcName, *args):
return eval(funcName)(*args)
def RKF45(func, yinit, x_range, h):
m = len(yinit)
n = int((x_range[-1] - x_range[0])/h)
x = x_range[0]
y = yinit
xsol = np.empty(0)
xsol = np.append(xsol, x)
ysol = np.empty(0)
ysol = np.append(ysol, y)
for i in range(n):
k1 = feval(func, x, y)
yp2 = y + k1*(h/5)
k2 = feval(func, x+h/5, yp2)
yp3 = y + k1*(3*h/40) + k2*(9*h/40)
k3 = feval(func, x+(3*h/10), yp3)
yp4 = y + k1*(3*h/10) - k2*(9*h/10) + k3*(6*h/5)
k4 = feval(func, x+(3*h/5), yp4)
yp5 = y - k1*(11*h/54) + k2*(5*h/2) - k3*(70*h/27) + k4*(35*h/27)
k5 = feval(func, x+h, yp5)
yp6 = y + k1*(1631*h/55296) + k2*(175*h/512) + k3*(575*h/13824) + k4*(44275*h/110592) + k5*(253*h/4096)
k6 = feval(func, x+(7*h/8), yp6)
for j in range(m):
y[j] = y[j] + h*(37*k1[j]/378 + 250*k3[j]/621 + 125*k4[j]/594 + 512*k6[j]/1771)
x = x + h
xsol = np.append(xsol, x)
for r in range(len(y)):
ysol = np.append(ysol, y[r])
return [xsol, ysol]
def myFunc(x, y):
dy = np.zeros((len(y)))
dy[0] = np.exp(-2*x) - 2*y[0]
return dy
# -----------------------
h = 0.2
x = np.array([0.0, 2.0])
yinit = np.array([1.0/10])
[ts, ys] = RKF45('myFunc', yinit, x, h)
dt = int((x[-1]-x[0])/h)
t = [x[0]+i*h for i in range(dt+1)]
yexact = []
for i in range(dt+1):
ye = (1.0 / 10) * np.exp(-2 * t[i]) + t[i] * np.exp(-2 * t[i])
yexact.append(ye)
y_diff = ys - yexact
print("max diff =", np.max(abs(y_diff)))
plt.plot(ts, ys, 'rs')
plt.plot(t, yexact, 'b')
plt.xlim(x[0], x[1])
plt.legend(["RKF45", "Exact solution"], loc=1)
plt.xlabel('x', fontsize=17)
plt.ylabel('y', fontsize=17)
plt.tight_layout()
plt.show()
# Uncomment the following to print the figure:
#plt.savefig('Fig_ex2_RK4_h0p1.png', dpi=600)
```
```python
import matplotlib.pyplot as plt
import numpy as np
def feval(funcName, *args):
return eval(funcName)(*args)
def RK4thOrder(func, yinit, x_range, h):
m = len(yinit)
n = int((x_range[-1] - x_range[0])/h)
x = x_range[0]
y = yinit
xsol = np.empty(0)
xsol = np.append(xsol, x)
ysol = np.empty(0)
ysol = np.append(ysol, y)
for i in range(n):
k1 = feval(func, x, y)
yp2 = y + k1*(h/2)
k2 = feval(func, x+h/2, yp2)
yp3 = y + k2*(h/2)
k3 = feval(func, x+h/2, yp3)
yp4 = y + k3*h
k4 = feval(func, x+h, yp4)
for j in range(m):
y[j] = y[j] + (h/6)*(k1[j] + 2*k2[j] + 2*k3[j] + k4[j])
x = x + h
xsol = np.append(xsol, x)
for r in range(len(y)):
ysol = np.append(ysol, y[r])
return [xsol, ysol]
def myFunc(x, y):
dy = np.zeros((len(y)))
dy[0] = np.exp(-2*x) - 2*y[0]
return dy
# -----------------------
h = 0.2
x = np.array([0.0, 2.0])
yinit = np.array([1.0/10])
[ts, ys] = RK4thOrder('myFunc', yinit, x, h)
dt = int((x[-1]-x[0])/h)
t = [x[0]+i*h for i in range(dt+1)]
yexact = []
for i in range(dt+1):
ye = (1.0/10)*np.exp(-2*t[i]) + t[i]*np.exp(-2*t[i])
yexact.append(ye)
diff = ys - yexact
print("Maximum difference =", np.max(abs(diff)))
plt.plot(ts, ys, 'r')
plt.plot(t, yexact, 'b')
plt.xlim(x[0], x[1])
plt.legend(["4th Order RK", "Exact solution"], loc=1)
plt.xlabel('x', fontsize=17)
plt.ylabel('y', fontsize=17)
plt.tight_layout()
plt.show()
# Uncomment the following to print the figure:
#plt.savefig('Fig_ex2_RK4_h0p1.png', dpi=600)
```
```python
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from IPython.display import HTML
class WaveEquationFD:
def __init__(self, N, D, Mx, My):
self.N = N
self.D = D
self.Mx = Mx
self.My = My
self.tend = 6
self.xmin = 0
self.xmax = 2
self.ymin = 0
self.ymax = 2
self.initialization()
self.eqnApprox()
def initialization(self):
self.dx = (self.xmax - self.xmin)/self.Mx
self.dy = (self.ymax - self.ymin)/self.My
self.x = np.arange(self.xmin, self.xmax+self.dx, self.dx)
self.y = np.arange(self.ymin, self.ymax+self.dy, self.dy)
#----- Initial condition -----#
self.u0 = lambda r, s: 0.1*np.sin(np.pi*r)*np.sin(np.pi*s/2)
#----- Initial velocity -----#
self.v0 = lambda a, b: 0
#----- Boundary conditions -----#
self.bxyt = lambda left, right, time: 0
self.dt = (self.tend - 0)/self.N
self.t = np.arange(0, self.tend+self.dt/2, self.dt)
# Assertion for the condition of r < 1, for stability
        r = 4*self.D*self.dt**2/(self.dx**2+self.dy**2)
assert r < 1, "r is bigger than 1!"
def eqnApprox(self):
#----- Approximation equation properties -----#
self.rx = self.D*self.dt**2/self.dx**2
self.ry = self.D*self.dt**2/self.dy**2
self.rxy1 = 1 - self.rx - self.ry
self.rxy2 = self.rxy1*2
#----- Initialization matrix u for solution -----#
self.u = np.zeros((self.Mx+1, self.My+1))
self.ut = np.zeros((self.Mx+1, self.My+1))
self.u_1 = self.u.copy()
#----- Fills initial condition and initial velocity -----#
for j in range(1, self.Mx):
for i in range(1, self.My):
self.u[i,j] = self.u0(self.x[i], self.y[j])
self.ut[i,j] = self.v0(self.x[i], self.y[j])
def solve_and_animate(self):
u_2 = np.zeros((self.Mx+1, self.My+1))
xx, yy = np.meshgrid(self.x, self.y)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
wframe = None
k = 0
nsteps = self.N
while k < nsteps:
if wframe:
                wframe.remove()
self.t = k*self.dt
#----- Fills in boundary condition along y-axis (vertical, columns 0 and Mx) -----#
for i in range(self.My+1):
self.u[i, 0] = self.bxyt(self.x[0], self.y[i], self.t)
self.u[i, self.Mx] = self.bxyt(self.x[self.Mx], self.y[i], self.t)
for j in range(self.Mx+1):
self.u[0, j] = self.bxyt(self.x[j], self.y[0], self.t)
self.u[self.My, j] = self.bxyt(self.x[j], self.y[self.My], self.t)
if k == 0:
for j in range(1, self.My):
for i in range(1, self.Mx):
self.u[i,j] = 0.5*(self.rx*(self.u_1[i-1,j] + self.u_1[i+1,j])) \
+ 0.5*(self.ry*(self.u_1[i,j-1] + self.u_1[i,j+1])) \
+ self.rxy1*self.u[i,j] + self.dt*self.ut[i,j]
else:
for j in range(1, self.My):
for i in range(1, self.Mx):
self.u[i,j] = self.rx*(self.u_1[i-1,j] + self.u_1[i+1,j]) \
+ self.ry*(self.u_1[i,j-1] + self.u_1[i,j+1]) \
+ self.rxy2*self.u[i,j] - u_2[i,j]
u_2 = self.u_1.copy()
self.u_1 = self.u.copy()
wframe = ax.plot_surface(xx, yy, self.u, cmap=cm.coolwarm, linewidth=2,
antialiased=False)
ax.set_xlim3d(0, 2.0)
ax.set_ylim3d(0, 2.0)
ax.set_zlim3d(-1.5, 1.5)
ax.set_xticks([0, 0.5, 1.0, 1.5, 2.0])
ax.set_yticks([0, 0.5, 1.0, 1.5, 2.0])
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_zlabel("U")
plt.pause(0.01)
k += 0.5
def main():
simulator = WaveEquationFD(200, 0.25, 50, 50)
simulator.solve_and_animate()
plt.show()
if __name__ == "__main__":
main()
    # Note: solve_and_animate() draws frames with plt.pause() and returns None,
    # so it cannot be saved directly; saving would require returning a FuncAnimation.
    #simulator = WaveEquationFD(200, 0.25, 50, 50)
    #ani = simulator.solve_and_animate()
    #ani.save('movie11.mp4', fps=10)
```
```python
"""
2D wave equation via FFT
u_tt = u_xx + u_yy
on [-1, 1]x[-1, 1], t > 0 and Dirichlet BC u=0
"""
import numpy as np
from scipy import interpolate
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
class WaveEquation:
def __init__(self, N):
self.N = N
self.x0 = -1.0
self.xf = 1.0
self.y0 = -1.0
self.yf = 1.0
self.initialization()
self.initCond()
def initialization(self):
k = np.arange(self.N + 1)
self.x = np.cos(k*np.pi/self.N)
self.y = self.x.copy()
self.xx, self.yy = np.meshgrid(self.x, self.y)
self.dt = 6/self.N**2
self.plotgap = round((1/3)/self.dt)
self.dt = (1/3)/self.plotgap
def initCond(self):
self.vv = np.exp(-40*((self.xx-0.4)**2 + self.yy**2))
self.vvold = self.vv.copy()
def solve_and_animate(self):
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
tc = 0
nstep = round(3*self.plotgap+1)
wframe = None
while tc < nstep:
if wframe:
                wframe.remove()
xxx = np.arange(self.x0, self.xf+1/16, 1/16)
yyy = np.arange(self.y0, self.yf+1/16, 1/16)
vvv = interpolate.interp2d(self.x, self.y, self.vv, kind='cubic')
Z = vvv(xxx, yyy)
xxf, yyf = np.meshgrid(np.arange(self.x0,self.xf+1/16,1/16), np.arange(self.y0,self.yf+1/16,1/16))
wframe = ax.plot_surface(xxf, yyf, Z, cmap=cm.coolwarm, linewidth=0,
antialiased=False)
ax.set_xlim3d(self.x0, self.xf)
ax.set_ylim3d(self.y0, self.yf)
ax.set_zlim3d(-0.15, 1)
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_zlabel("U")
ax.set_xticks([-1.0, -0.5, 0.0, 0.5, 1.0])
ax.set_yticks([-1.0, -0.5, 0.0, 0.5, 1.0])
fig.suptitle("Time = %1.3f" % (tc/(3*self.plotgap-1)-self.dt))
#plt.tight_layout()
ax.view_init(elev=30., azim=-110)
plt.pause(0.01)
uxx = np.zeros((self.N+1, self.N+1))
uyy = np.zeros((self.N+1, self.N+1))
ii = np.arange(1, self.N)
for i in range(1, self.N):
v = self.vv[i,:]
V = np.hstack((v, np.flipud(v[ii])))
U = np.fft.fft(V)
U = U.real
r1 = np.arange(self.N)
r2 = 1j*np.hstack((r1, 0, -r1[:0:-1]))*U
W1 = np.fft.ifft(r2)
W1 = W1.real
s1 = np.arange(self.N+1)
s2 = np.hstack((s1, -s1[self.N-1:0:-1]))
s3 = -s2**2*U
W2 = np.fft.ifft(s3)
W2 = W2.real
uxx[i,ii] = W2[ii]/(1-self.x[ii]**2) - self.x[ii]*W1[ii]/(1-self.x[ii]**2)**(3/2)
for j in range(1, self.N):
v = self.vv[:,j]
V = np.hstack((v, np.flipud(v[ii])))
U = np.fft.fft(V)
U = U.real
r1 = np.arange(self.N)
r2 = 1j*np.hstack((r1, 0, -r1[:0:-1]))*U
W1 = np.fft.ifft(r2)
W1 = W1.real
s1 = np.arange(self.N+1)
s2 = np.hstack((s1, -s1[self.N-1:0:-1]))
s3 = -s2**2*U
W2 = np.fft.ifft(s3)
W2 = W2.real
uyy[ii,j] = W2[ii]/(1-self.y[ii]**2) - self.y[ii]*W1[ii]/(1-self.y[ii]**2)**(3/2)
vvnew = 2*self.vv - self.vvold + self.dt**2*(uxx+uyy)
self.vvold = self.vv.copy()
self.vv = vvnew.copy()
tc += 1
def main():
simulator = WaveEquation(30)
simulator.solve_and_animate()
plt.show()
if __name__ == "__main__":
main()
```
```python
# Animated surface plot
import numpy as np
from scipy import sparse
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
class ADImethod:
def __init__(self, M, maxt, D):
self.M = M
self.x0 = 0
self.xf = 1
self.y0 = 0
self.yf = 1
self.maxt = maxt
self.dt = 0.01
self.D = D
self.h = 1/self.M
self.r = self.D*self.dt/(2*self.h**2)
self.generateGrid()
self.lhsMatrixA()
self.rhsMatrixA()
def generateGrid(self):
self.X, self.Y = np.meshgrid(np.linspace(self.x0, self.xf, self.M), np.linspace(self.y0, self.yf,self. M))
ic01 = np.logical_and(self.X >= 1/4, self.X <= 3/4)
ic02 = np.logical_and(self.Y >= 1/4, self.Y <= 3/4)
ic0 = np.multiply(ic01, ic02)
self.U = ic0*1
def lhsMatrixA(self):
maindiag = (1+2*self.r)*np.ones((1, self.M))
offdiag = -self.r*np.ones((1, self.M-1))
a = maindiag.shape[1]
diagonals = [maindiag, offdiag, offdiag]
Lx = sparse.diags(diagonals, [0, -1, 1], shape=(a, a)).toarray()
Ix = sparse.identity(self.M).toarray()
self.A = sparse.kron(Ix, Lx).toarray()
pos1 = np.arange(0,self.M**2,self.M)
for i in range(len(pos1)):
self.A[pos1[i], pos1[i]] = 1 + self.r
pos2 = np.arange(self.M-1, self.M**2, self.M)
for j in range(len(pos2)):
self.A[pos2[j], pos2[j]] = 1 + self.r
def rhsMatrixA(self):
maindiag = (1-self.r)*np.ones((1, self.M))
offdiag = self.r*np.ones((1, self.M-1))
a = maindiag.shape[1]
diagonals = [maindiag, offdiag, offdiag]
Rx = sparse.diags(diagonals, [0, -1, 1], shape=(a, a)).toarray()
Ix = sparse.identity(self.M).toarray()
self.A_rhs = sparse.kron(Rx, Ix).toarray()
pos3 = np.arange(self.M, self.M**2-self.M)
for k in range(len(pos3)):
self.A_rhs[pos3[k], pos3[k]] = 1 - 2*self.r
def solve_and_animate(self):
fig = plt.figure()
        ax = fig.add_subplot(111, projection='3d')
ax.set_zlim(0, 1)
tc = 0
nstep = round(self.maxt/self.h)
wframe = None
while tc < nstep+1:
if wframe:
                wframe.remove()
b1 = np.flipud(self.U).reshape(self.M**2, 1)
sol = np.linalg.solve(self.A, np.matmul(self.A_rhs, b1))
self.U = np.flipud(sol).reshape(self.M, self.M)
b2 = np.flipud(self.U).reshape(self.M**2, 1)
sol = np.linalg.solve(self.A, np.matmul(self.A_rhs, b2))
self.U = np.flipud(sol).reshape(self.M, self.M)
wframe = ax.plot_surface(self.X, self.Y, self.U, cmap=cm.coolwarm,
linewidth=0, antialiased=False)
fig.suptitle("Time = %s" % tc)
plt.pause(0.001)
tc += 1
def main():
simulator = ADImethod(50, 2.5, 0.01)
simulator.solve_and_animate()
plt.show()
if __name__ == "__main__":
main()
```
http://people.bu.edu/andasari/courses/numericalpython/python.html
```python
def forwardDifference(f, x, h):
return (f(x+h) - f(x))/h
def backwardDifference(f, x, h):
return (f(x) - f(x-h))/h
def centralDifference(f, x, h):
return (f(x+h) - f(x-h))/(2*h)
def myFunc(x):
return 2*x**3 + 4*x**2 - 5*x
x = 1
h = 0.1
dydt_f = forwardDifference(myFunc, x, h)
dydt_b = backwardDifference(myFunc, x, h)
dydt_c = centralDifference(myFunc, x, h)
exact = 6*x**2 + 8*x - 5
print("Forward difference =", dydt_f)
print("Backward difference =", dydt_b)
print("Central difference =", dydt_c)
print("Exact value =", exact)
```
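The truncation order of these formulas can be verified numerically (an added sketch reusing the same myFunc): halving h should roughly halve the error of the one-sided differences, which are O(h), and quarter the error of the central difference, which is O(h²).
```python
# Added sketch: error of the first-derivative formulas versus step size h.
def myFunc(x):
    return 2*x**3 + 4*x**2 - 5*x

def exact_derivative(x):
    return 6*x**2 + 8*x - 5

x = 1
for h in [0.1, 0.05, 0.025]:
    err_forward = abs((myFunc(x+h) - myFunc(x))/h - exact_derivative(x))
    err_central = abs((myFunc(x+h) - myFunc(x-h))/(2*h) - exact_derivative(x))
    print("h = {:6.3f}  forward error = {:.6f}  central error = {:.6f}".format(h, err_forward, err_central))
```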
```python
def forwardDifference(f, x, h):
return (f(x+h+h) - 2*f(x+h) + f(x))/h**2
def backwardDifference(f, x, h):
return (f(x) - 2*f(x-h) + f(x-h-h))/h**2
def centralDifference(f, x, h):
return (f(x+h) - 2*f(x) + f(x-h))/h**2
def myFunc(x):
return 2*x**3 + 4*x**2 - 5*x
x = 1
h = 0.1
dydt_f = forwardDifference(myFunc, x, h)
dydt_b = backwardDifference(myFunc, x, h)
dydt_c = centralDifference(myFunc, x, h)
exact = 12*x + 8
print("Forward difference =", dydt_f)
print("Backward difference =", dydt_b)
print("Central difference =", dydt_c)
print("Exact value =", exact)
```
```python
import numpy as np
def forwardDiff(f, x, h, *fparams):
return (f(x+h, *fparams) - f(x, *fparams))/h
def backwardDiff(f, x, h, *fparams):
return (f(x, *fparams) - f(x-h, *fparams))/h
def centralDiff(f, x, h, *fparams):
return (f(x+h, *fparams) - f(x-h, *fparams))/(2*h)
def y(t, mu, k):
return mu*t + 1.3*k*t**2
def G(x, t, A, r, phi):
return A*np.exp(-r*t**2)*np.sin(phi*x)
dydtf = forwardDiff(y, 0.1, 1e-9, 3, 9.81) # f=y, x=0.1, h=1e-9,
# params[0]=mu=3, params[1]=k=9.81
dydtb = backwardDiff(y, 0.1, 1e-9, 3, 9.81)
dydtc = centralDiff(y, 0.1, 1e-9, 3, 9.81)
print("Forward difference dydt =", dydtf)
print("Backward difference dydt =", dydtb)
print("Central difference dydt =", dydtc)
print("--------------------")
dGdtf = forwardDiff(G, 0.5, 1e-9, 1, 1, 0.6, 100) # f=G, x=0.5, h=1e-9,
                                                   # params[0]=t=1,
# params[1]=A=1,
# params[2]=r=0.6,
# params[3]=phi=100
dGdtb = backwardDiff(G, 0.5, 1e-9, 1, 1, 0.6, 100)
dGdtc = centralDiff(G, 0.5, 1e-9, 1, 1, 0.6, 100)
print("Forward difference dGdt =", dGdtf)
print("Backward difference dGdt =", dGdtb)
print("Central difference dGdt =", dGdtc)
print("--------------------")
```
```python
import numpy as np
import math
def feval(funcName, *args):
'''
This function is similar to "feval" in Matlab.
Example: feval('cos', pi) = -1.
'''
return eval(funcName)(*args)
a = feval('np.cos', np.pi)
print("Cos(pi) =", a)
b = feval('math.log', 1000, 10)
print("Log10(1000) =", round(b))
def myFunc(x):
return x**2
c1 = 9.0
c = feval('myFunc', c1)
print("If x =", str(c1), "then x^2 =", c)
```
```python
import matplotlib.pyplot as plt
import math
def feval(funcName, *args):
return eval(funcName)(*args)
def mult(vector, scalar):
newvector = [0]*len(vector)
for i in range(len(vector)):
newvector[i] = vector[i]*scalar
return newvector
def backwardEuler(func, yinit, x_range, h):
numOfODEs = len(yinit)
sub_intervals = int((x_range[-1] - x_range[0])/h)
x = x_range[0]
y = yinit
xsol = [x]
ysol = [y[0]]
for i in range(sub_intervals):
yprime = feval(func, x+h, y)
yp = mult(yprime, (1/(1+h)))
for j in range(numOfODEs):
y[j] = y[j] + h*yp[j]
x += h
xsol.append(x)
for r in range(len(y)):
ysol.append(y[r]) # Saves all new y's
return [xsol, ysol]
def myFunc(x, y):
'''
We define our ODEs in this function.
'''
dy = [0] * len(y)
dy[0] = 3 * (1 + x) - y[0]
return dy
h = 0.2
x = [1.0, 2.0]
yinit = [4.0]
[ts, ys] = backwardEuler('myFunc', yinit, x, h)
# Calculates the exact solution, for comparison
dt = int((x[-1] - x[0]) / h)
t = [x[0]+i*h for i in range(dt+1)]
yexact = []
for i in range(dt+1):
ye = 3 * t[i] + math.exp(1 - t[i])
yexact.append(ye)
plt.plot(ts, ys, 'r')
plt.plot(t, yexact, 'b')
plt.xlim(x[0], x[1])
plt.legend(["Backward Euler method",
"Exact solution"], loc=2)
plt.xlabel('x', fontsize=17)
plt.ylabel('y', fontsize=17)
plt.tight_layout()
plt.show()
# Uncomment the following to print the figure:
#plt.savefig('Fig1.png', dpi=600)
```
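The factor 1/(1+h) in backwardEuler above is not a general-purpose step: it is the closed-form solution of the implicit update for this particular linear ODE (a clarifying note, not part of the original). For dy/dx = 3(1+x) − y, backward Euler reads
$$y_{n+1} = y_n + h\left[3(1+x_{n+1}) - y_{n+1}\right] \quad\Longrightarrow\quad y_{n+1} = \frac{y_n + 3h\,(1+x_{n+1})}{1+h},$$
which is what the `yprime`/(1+h) step computes. A different right-hand side would require its own algebraic or iterative solve for $y_{n+1}$.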
```python
import matplotlib.pyplot as plt
import numpy as np
def feval(funcName, *args):
return eval(funcName)(*args)
def RK2A(func, yinit, x_range, h):
m = len(yinit)
n = int((x_range[-1] - x_range[0])/h)
x = x_range[0]
y = yinit
xsol = np.empty(0)
xsol = np.append(xsol, x)
ysol = np.empty(0)
ysol = np.append(ysol, y)
for i in range(n):
k1 = feval(func, x, y)
ypredictor = y + k1 * h
k2 = feval(func, x+h, ypredictor)
for j in range(m):
y[j] = y[j] + (h/2)*(k1[j] + k2[j])
x = x + h
xsol = np.append(xsol, x)
for r in range(len(y)):
ysol = np.append(ysol, y[r])
return [xsol, ysol]
def myFunc(x, y):
dy = np.zeros((len(y)))
dy[0] = np.exp(-2 * x) - 2 * y[0]
return dy
# -----------------------
h = 0.2
x = np.array([0, 2])
yinit = np.array([1.0/10])
[ts, ys] = RK2A('myFunc', yinit, x, h)
dt = int((x[-1]-x[0])/h)
t = [x[0]+i*h for i in range(dt+1)]
yexact = []
for i in range(dt+1):
ye = (1.0/10)*np.exp(-2*t[i]) + t[i]*np.exp(-2*t[i])
yexact.append(ye)
plt.figure(figsize=(12,6))
plt.plot(ts, ys, 'r')
plt.plot(t, yexact, 'b')
plt.xlim(x[0], x[1])
plt.legend(["2nd Order RK, A=1/2 ", "Exact solution"], loc=1)
plt.xlabel('x', fontsize=17)
plt.ylabel('y', fontsize=17)
plt.tight_layout()
plt.show()
# Uncomment the following to print the figure:
#plt.savefig('Fig2.png', dpi=600)
```
```python
import numpy as np
from matplotlib import pyplot as pl
from matplotlib import animation
from scipy.fftpack import fft,ifft
from IPython.display import HTML
class Schrodinger(object):
"""
Class which implements a numerical solution of the time-dependent
Schrodinger equation for an arbitrary potential
"""
def __init__(self, x, psi_x0, V_x,
k0 = None, hbar=1, m=1, t0=0.0):
# Validation of array inputs
self.x, psi_x0, self.V_x = map(np.asarray, (x, psi_x0, V_x))
N = self.x.size
assert self.x.shape == (N,)
assert psi_x0.shape == (N,)
assert self.V_x.shape == (N,)
# Set internal parameters
self.hbar = hbar
self.m = m
self.t = t0
self.dt_ = None
self.N = len(x)
self.dx = self.x[1] - self.x[0]
self.dk = 2 * np.pi / (self.N * self.dx)
# set momentum scale
if k0 == None:
self.k0 = -0.5 * self.N * self.dk
else:
self.k0 = k0
self.k = self.k0 + self.dk * np.arange(self.N)
self.psi_x = psi_x0
self.compute_k_from_x()
# variables which hold steps in evolution of the
self.x_evolve_half = None
self.x_evolve = None
self.k_evolve = None
# attributes used for dynamic plotting
self.psi_x_line = None
self.psi_k_line = None
self.V_x_line = None
def _set_psi_x(self, psi_x):
self.psi_mod_x = (psi_x * np.exp(-1j * self.k[0] * self.x)
* self.dx / np.sqrt(2 * np.pi))
def _get_psi_x(self):
return (self.psi_mod_x * np.exp(1j * self.k[0] * self.x)
* np.sqrt(2 * np.pi) / self.dx)
def _set_psi_k(self, psi_k):
self.psi_mod_k = psi_k * np.exp(1j * self.x[0]
* self.dk * np.arange(self.N))
def _get_psi_k(self):
return self.psi_mod_k * np.exp(-1j * self.x[0] *
self.dk * np.arange(self.N))
def _get_dt(self):
return self.dt_
def _set_dt(self, dt):
if dt != self.dt_:
self.dt_ = dt
self.x_evolve_half = np.exp(-0.5 * 1j * self.V_x
/ self.hbar * dt )
self.x_evolve = self.x_evolve_half * self.x_evolve_half
self.k_evolve = np.exp(-0.5 * 1j * self.hbar /
self.m * (self.k * self.k) * dt)
psi_x = property(_get_psi_x, _set_psi_x)
psi_k = property(_get_psi_k, _set_psi_k)
dt = property(_get_dt, _set_dt)
def compute_k_from_x(self):
self.psi_mod_k = fft(self.psi_mod_x)
def compute_x_from_k(self):
self.psi_mod_x = ifft(self.psi_mod_k)
def time_step(self, dt, Nsteps = 1):
self.dt = dt
if Nsteps > 0:
self.psi_mod_x *= self.x_evolve_half
for i in range(Nsteps - 1):
self.compute_k_from_x()
self.psi_mod_k *= self.k_evolve
self.compute_x_from_k()
self.psi_mod_x *= self.x_evolve
self.compute_k_from_x()
self.psi_mod_k *= self.k_evolve
self.compute_x_from_k()
self.psi_mod_x *= self.x_evolve_half
self.compute_k_from_x()
self.t += dt * Nsteps
######################################################################
# Helper functions for gaussian wave-packets
def gauss_x(x, a, x0, k0):
"""
a gaussian wave packet of width a, centered at x0, with momentum k0
"""
return ((a * np.sqrt(np.pi)) ** (-0.5)
* np.exp(-0.5 * ((x - x0) * 1. / a) ** 2 + 1j * x * k0))
def gauss_k(k,a,x0,k0):
"""
analytical fourier transform of gauss_x(x), above
"""
return ((a / np.sqrt(np.pi))**0.5
* np.exp(-0.5 * (a * (k - k0)) ** 2 - 1j * (k - k0) * x0))
######################################################################
# Utility functions for running the animation
def theta(x):
"""
theta function :
returns 0 if x<=0, and 1 if x>0
"""
x = np.asarray(x)
y = np.zeros(x.shape)
y[x > 0] = 1.0
return y
def square_barrier(x, width, height):
return height * (theta(x) - theta(x - width))
######################################################################
# Create the animation
# specify time steps and duration
dt = 0.01
N_steps = 50
t_max = 120
frames = int(t_max / float(N_steps * dt))
# specify constants
hbar = 1.0 # planck's constant
m = 1.9 # particle mass
# specify range in x coordinate
N = 2 ** 11
dx = 0.1
x = dx * (np.arange(N) - 0.5 * N)
# specify potential
V0 = 1.5
L = hbar / np.sqrt(2 * m * V0)
a = 3 * L
x0 = -60 * L
V_x = square_barrier(x, a, V0)
V_x[x < -98] = 1E6
V_x[x > 98] = 1E6
# specify initial momentum and quantities derived from it
p0 = np.sqrt(2 * m * 0.2 * V0)
dp2 = p0 * p0 * 1./80
d = hbar / np.sqrt(2 * dp2)
k0 = p0 / hbar
v0 = p0 / m
psi_x0 = gauss_x(x, d, x0, k0)
# define the Schrodinger object which performs the calculations
S = Schrodinger(x=x,
psi_x0=psi_x0,
V_x=V_x,
hbar=hbar,
m=m,
k0=-28)
######################################################################
# Set up plot
fig = pl.figure()
# plotting limits
xlim = (-100, 100)
klim = (-5, 5)
# top axes show the x-space data
ymin = 0
ymax = V0
ax1 = fig.add_subplot(211, xlim=xlim,
ylim=(ymin - 0.2 * (ymax - ymin),
ymax + 0.2 * (ymax - ymin)))
psi_x_line, = ax1.plot([], [], c='r', label=r'$|\psi(x)|$')
V_x_line, = ax1.plot([], [], c='k', label=r'$V(x)$')
center_line = ax1.axvline(0, c='k', ls=':',
label = r"$x_0 + v_0t$")
title = ax1.set_title("")
ax1.legend(prop=dict(size=12))
ax1.set_xlabel('$x$')
ax1.set_ylabel(r'$|\psi(x)|$')
# bottom axes show the k-space data
ymin = abs(S.psi_k).min()
ymax = abs(S.psi_k).max()
ax2 = fig.add_subplot(212, xlim=klim,
ylim=(ymin - 0.2 * (ymax - ymin),
ymax + 0.2 * (ymax - ymin)))
psi_k_line, = ax2.plot([], [], c='r', label=r'$|\psi(k)|$')
p0_line1 = ax2.axvline(-p0 / hbar, c='k', ls=':', label=r'$\pm p_0$')
p0_line2 = ax2.axvline(p0 / hbar, c='k', ls=':')
mV_line = ax2.axvline(np.sqrt(2 * m * V0) / hbar, c='k', ls='--',
label=r'$\sqrt{2mV_0}$')
ax2.legend(prop=dict(size=12))
ax2.set_xlabel('$k$')
ax2.set_ylabel(r'$|\psi(k)|$')
V_x_line.set_data(S.x, S.V_x)
######################################################################
# Animate plot
def init():
psi_x_line.set_data([], [])
V_x_line.set_data([], [])
center_line.set_data([], [])
psi_k_line.set_data([], [])
title.set_text("")
return (psi_x_line, V_x_line, center_line, psi_k_line, title)
def animate(i):
S.time_step(dt, N_steps)
psi_x_line.set_data(S.x, 4 * abs(S.psi_x))
V_x_line.set_data(S.x, S.V_x)
center_line.set_data(2 * [x0 + S.t * p0 / m], [0, 1])
psi_k_line.set_data(S.k, abs(S.psi_k))
title.set_text("t = %.2f" % S.t)
return (psi_x_line, V_x_line, center_line, psi_k_line, title)
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=frames, interval=30, blit=True)
anim.save('schrodinger_barrier.mp4', fps=15, extra_args=['-vcodec', 'libx264'])
pl.show()
HTML(anim.to_jshtml())
```
```python
# Infinite potential well wavefunction
import matplotlib.pyplot as plt
import numpy as np
plt.style.use('fivethirtyeight')
#Constants
h = 6.626e-34
m = 9.11e-31
#Values for L and x
x_list = np.linspace(0,1,100)
L = 1
def psi(n,L,x):
return np.sqrt(2/L)*np.sin(n*np.pi*x/L)
def psi_2(n,L,x):
return np.square(psi(n,L,x))
plt.figure(figsize=(15,10))
plt.suptitle("Wave Functions", fontsize=18)
for n in range(1,4):
#Empty lists for energy and psi wave
psi_2_list = []
psi_list = []
for x in x_list:
psi_2_list.append(psi_2(n,L,x))
psi_list.append(psi(n,L,x))
plt.subplot(3,2,2*n-1)
plt.plot(x_list, psi_list)
plt.xlabel("L", fontsize=13)
plt.ylabel("Ψ", fontsize=13)
plt.xticks(np.arange(0, 1, step=0.5))
plt.title("n="+str(n), fontsize=16)
plt.grid()
plt.subplot(3,2,2*n)
plt.plot(x_list, psi_2_list)
plt.xlabel("L", fontsize=13)
plt.ylabel("Ψ*Ψ", fontsize=13)
plt.xticks(np.arange(0, 1, step=0.5))
plt.title("n="+str(n), fontsize=16)
plt.tight_layout(rect=[0, 0.03, 1, 0.95])
```
```python
# Schrodinger Equation Solver
import numpy as np
from matplotlib import pyplot as pl
from matplotlib import animation
from scipy.fftpack import fft,ifft
pl.style.use('fivethirtyeight')
class Schrodinger(object):
"""
Class which implements a numerical solution of the time-dependent
Schrodinger equation for an arbitrary potential
"""
def __init__(self, x, psi_x0, V_x,
k0 = None, hbar=1, m=1, t0=0.0):
"""
Parameters
----------
x : array_like, float
length-N array of evenly spaced spatial coordinates
psi_x0 : array_like, complex
length-N array of the initial wave function at time t0
V_x : array_like, float
length-N array giving the potential at each x
k0 : float
the minimum value of k. Note that, because of the workings of the
fast fourier transform, the momentum wave-number will be defined
in the range
k0 < k < 2*pi / dx
where dx = x[1]-x[0]. If you expect nonzero momentum outside this
range, you must modify the inputs accordingly. If not specified,
k0 will be calculated such that the range is [-k0,k0]
hbar : float
value of planck's constant (default = 1)
m : float
particle mass (default = 1)
t0 : float
            initial time (default = 0)
"""
# Validation of array inputs
self.x, psi_x0, self.V_x = map(np.asarray, (x, psi_x0, V_x))
N = self.x.size
assert self.x.shape == (N,)
assert psi_x0.shape == (N,)
assert self.V_x.shape == (N,)
# Set internal parameters
self.hbar = hbar
self.m = m
self.t = t0
self.dt_ = None
self.N = len(x)
self.dx = self.x[1] - self.x[0]
self.dk = 2 * np.pi / (self.N * self.dx)
# set momentum scale
if k0 == None:
self.k0 = -0.5 * self.N * self.dk
else:
self.k0 = k0
self.k = self.k0 + self.dk * np.arange(self.N)
self.psi_x = psi_x0
self.compute_k_from_x()
# variables which hold steps in evolution of the
self.x_evolve_half = None
self.x_evolve = None
self.k_evolve = None
# attributes used for dynamic plotting
self.psi_x_line = None
self.psi_k_line = None
self.V_x_line = None
def _set_psi_x(self, psi_x):
self.psi_mod_x = (psi_x * np.exp(-1j * self.k[0] * self.x)
* self.dx / np.sqrt(2 * np.pi))
def _get_psi_x(self):
return (self.psi_mod_x * np.exp(1j * self.k[0] * self.x)
* np.sqrt(2 * np.pi) / self.dx)
def _set_psi_k(self, psi_k):
self.psi_mod_k = psi_k * np.exp(1j * self.x[0]
* self.dk * np.arange(self.N))
def _get_psi_k(self):
return self.psi_mod_k * np.exp(-1j * self.x[0] *
self.dk * np.arange(self.N))
def _get_dt(self):
return self.dt_
def _set_dt(self, dt):
if dt != self.dt_:
self.dt_ = dt
self.x_evolve_half = np.exp(-0.5 * 1j * self.V_x
/ self.hbar * dt )
self.x_evolve = self.x_evolve_half * self.x_evolve_half
self.k_evolve = np.exp(-0.5 * 1j * self.hbar /
self.m * (self.k * self.k) * dt)
psi_x = property(_get_psi_x, _set_psi_x)
psi_k = property(_get_psi_k, _set_psi_k)
dt = property(_get_dt, _set_dt)
def compute_k_from_x(self):
self.psi_mod_k = fft(self.psi_mod_x)
def compute_x_from_k(self):
self.psi_mod_x = ifft(self.psi_mod_k)
def time_step(self, dt, Nsteps = 1):
"""
Perform a series of time-steps via the time-dependent
Schrodinger Equation.
Parameters
----------
dt : float
the small time interval over which to integrate
Nsteps : float, optional
the number of intervals to compute. The total change
in time at the end of this method will be dt * Nsteps.
default is N = 1
"""
self.dt = dt
if Nsteps > 0:
self.psi_mod_x *= self.x_evolve_half
for i in range(Nsteps - 1):
self.compute_k_from_x()
self.psi_mod_k *= self.k_evolve
self.compute_x_from_k()
self.psi_mod_x *= self.x_evolve
self.compute_k_from_x()
self.psi_mod_k *= self.k_evolve
self.compute_x_from_k()
self.psi_mod_x *= self.x_evolve_half
self.compute_k_from_x()
self.t += dt * Nsteps
######################################################################
# Helper functions for gaussian wave-packets
def gauss_x(x, a, x0, k0):
"""
a gaussian wave packet of width a, centered at x0, with momentum k0
"""
return ((a * np.sqrt(np.pi)) ** (-0.5)
* np.exp(-0.5 * ((x - x0) * 1. / a) ** 2 + 1j * x * k0))
def gauss_k(k,a,x0,k0):
"""
analytical fourier transform of gauss_x(x), above
"""
return ((a / np.sqrt(np.pi))**0.5
* np.exp(-0.5 * (a * (k - k0)) ** 2 - 1j * (k - k0) * x0))
######################################################################
# Utility functions for running the animation
def theta(x):
"""
theta function :
returns 0 if x<=0, and 1 if x>0
"""
x = np.asarray(x)
y = np.zeros(x.shape)
y[x > 0] = 1.0
return y
def square_barrier(x, width, height):
return height * (theta(x) - theta(x - width))
######################################################################
# Create the animation
# specify time steps and duration
dt = 0.01
N_steps = 50
t_max = 120
frames = int(t_max / float(N_steps * dt))
# specify constants
hbar = 1.0 # planck's constant
m = 1.9 # particle mass
# specify range in x coordinate
N = 2 ** 11
dx = 0.1
x = dx * (np.arange(N) - 0.5 * N)
# specify potential
V0 = 1.5
L = hbar / np.sqrt(2 * m * V0)
a = 3 * L
x0 = -60 * L
V_x = square_barrier(x, a, V0)
V_x[x < -98] = 1E6
V_x[x > 98] = 1E6
# specify initial momentum and quantities derived from it
p0 = np.sqrt(2 * m * 0.2 * V0)
dp2 = p0 * p0 * 1./80
d = hbar / np.sqrt(2 * dp2)
k0 = p0 / hbar
v0 = p0 / m
psi_x0 = gauss_x(x, d, x0, k0)
# define the Schrodinger object which performs the calculations
S = Schrodinger(x=x,
psi_x0=psi_x0,
V_x=V_x,
hbar=hbar,
m=m,
k0=-28)
######################################################################
# Set up plot
fig = pl.figure()
# plotting limits
xlim = (-100, 100)
klim = (-5, 5)
# top axes show the x-space data
ymin = 0
ymax = V0
ax1 = fig.add_subplot(211, xlim=xlim,
ylim=(ymin - 0.2 * (ymax - ymin),
ymax + 0.2 * (ymax - ymin)))
psi_x_line, = ax1.plot([], [], c='r', label=r'$|\psi(x)|$')
V_x_line, = ax1.plot([], [], c='k', label=r'$V(x)$')
center_line = ax1.axvline(0, c='k', ls=':',
label = r"$x_0 + v_0t$")
title = ax1.set_title("")
ax1.legend(prop=dict(size=12))
ax1.set_xlabel('$x$')
ax1.set_ylabel(r'$|\psi(x)|$')
# bottom axes show the k-space data
ymin = abs(S.psi_k).min()
ymax = abs(S.psi_k).max()
ax2 = fig.add_subplot(212, xlim=klim,
ylim=(ymin - 0.2 * (ymax - ymin),
ymax + 0.2 * (ymax - ymin)))
psi_k_line, = ax2.plot([], [], c='r', label=r'$|\psi(k)|$')
p0_line1 = ax2.axvline(-p0 / hbar, c='k', ls=':', label=r'$\pm p_0$')
p0_line2 = ax2.axvline(p0 / hbar, c='k', ls=':')
mV_line = ax2.axvline(np.sqrt(2 * m * V0) / hbar, c='k', ls='--',
label=r'$\sqrt{2mV_0}$')
ax2.legend(prop=dict(size=12))
ax2.set_xlabel('$k$')
ax2.set_ylabel(r'$|\psi(k)|$')
V_x_line.set_data(S.x, S.V_x)
######################################################################
# Animate plot
def init():
psi_x_line.set_data([], [])
V_x_line.set_data([], [])
center_line.set_data([], [])
psi_k_line.set_data([], [])
title.set_text("")
return (psi_x_line, V_x_line, center_line, psi_k_line, title)
def animate(i):
S.time_step(dt, N_steps)
psi_x_line.set_data(S.x, 4 * abs(S.psi_x))
V_x_line.set_data(S.x, S.V_x)
center_line.set_data(2 * [x0 + S.t * p0 / m], [0, 1])
psi_k_line.set_data(S.k, abs(S.psi_k))
title.set_text("t = %.2f" % S.t)
return (psi_x_line, V_x_line, center_line, psi_k_line, title)
# call the animator. blit=True means only re-draw the parts that have changed.
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=frames, interval=30, blit=True)
# uncomment the following line to save the video in mp4 format. This
# requires either mencoder or ffmpeg to be installed on your system
anim.save('schrodinger_barrier.mp4', fps=15, extra_args=['-vcodec', 'libx264'])
pl.show()
```
```python
from pylab import *
from scipy.integrate import odeint
from scipy.optimize import brentq
def V(x):
"""
Potential function in the finite square well. Width is L and value is global variable Vo
"""
L = 1
if abs(x) > L:
return 0
else:
return Vo
def SE(psi, x):
"""
Returns derivatives for the 1D schrodinger eq.
Requires global value E to be set somewhere. State0 is first derivative of the
wave function psi, and state1 is its second derivative.
"""
state0 = psi[1]
state1 = 2.0*(V(x) - E)*psi[0]
return array([state0, state1])
def Wave_function(energy):
"""
Calculates wave function psi for the given value
of energy E and returns value at point b
"""
global psi
global E
E = energy
psi = odeint(SE, psi0, x)
return psi[-1,0]
def find_all_zeroes(x,y):
"""
Gives all zeroes in y = Psi(x)
"""
all_zeroes = []
s = sign(y)
for i in range(len(y)-1):
if s[i]+s[i+1] == 0:
zero = brentq(Wave_function, x[i], x[i+1])
all_zeroes.append(zero)
return all_zeroes
def find_analytic_energies(en):
"""
Calculates Energy values for the finite square well using analytical
model (Griffiths, Introduction to Quantum Mechanics, 1st edition, page 62.)
"""
z = sqrt(2*en)
z0 = sqrt(2*Vo)
z_zeroes = []
f_sym = lambda z: tan(z)-sqrt((z0/z)**2-1) # Formula 2.138, symmetrical case
f_asym = lambda z: -1/tan(z)-sqrt((z0/z)**2-1) # Formula 2.138, antisymmetrical case
# first find the zeroes for the symmetrical case
s = sign(f_sym(z))
for i in range(len(s)-1): # find zeroes of this crazy function
if s[i]+s[i+1] == 0:
zero = brentq(f_sym, z[i], z[i+1])
z_zeroes.append(zero)
print("Energies from the analyitical model are: ")
print("Symmetrical case)")
for i in range(0, len(z_zeroes),2): # discard z=(2n-1)pi/2 solutions cause that's where tan(z) is discontinous
print("%.4f" %(z_zeroes[i]**2/2))
# Now for the asymmetrical
z_zeroes = []
s = sign(f_asym(z))
for i in range(len(s)-1): # find zeroes of this crazy function
if s[i]+s[i+1] == 0:
zero = brentq(f_asym, z[i], z[i+1])
z_zeroes.append(zero)
    print('Antisymmetrical case')
    for i in range(0, len(z_zeroes),2): # discard z=npi solutions because that's where cot(z) is discontinuous
print("%.4f" %(z_zeroes[i]**2/2))
N = 1000 # number of points to take
psi = np.zeros([N,2]) # Wave function values and its derivative (psi and psi')
psi0 = array([0,1]) # Wave function initial states
Vo = 20
E = 0.0 # global variable Energy needed for Sch.Eq, changed in function "Wave function"
b = 2 # point outside of well where we need to check if the function diverges
x = linspace(-b, b, N) # x-axis
def main():
# main program
en = linspace(0, Vo, 100) # vector of energies where we look for the stable states
psi_b = [] # vector of wave function at x = b for all of the energies in en
for e1 in en:
        psi_b.append(Wave_function(e1)) # for each energy e1 find the psi(x) at x = b
E_zeroes = find_all_zeroes(en, psi_b) # now find the energies where psi(b) = 0
# Print energies for the bound states
print("Energies for the bound states are: ")
for E in E_zeroes:
print("%.2f" %E)
# Print energies of each bound state from the analytical model
find_analytic_energies(en)
# Plot wave function values at b vs energy vector
figure()
plot(en/Vo,psi_b)
title('Values of the $\Psi(b)$ vs. Energy')
xlabel('Energy, $E/V_0$')
ylabel('$\Psi(x = b)$', rotation='horizontal')
for E in E_zeroes:
plot(E/Vo, [0], 'go')
annotate("E = %.2f"%E, xy = (E/Vo, 0), xytext=(E/Vo, 30))
grid()
# Plot the wavefunctions for first 4 eigenstates
figure(2)
for E in E_zeroes[0:4]:
Wave_function(E)
plot(x, psi[:,0], label="E = %.2f"%E)
legend(loc="upper right")
title('Wave function')
xlabel('x, $x/L$')
ylabel('$\Psi(x)$', rotation='horizontal', fontsize = 15)
grid()
if __name__ == "__main__":
main()
```
```python
import scipy as sp
import scipy.constants as spc
import scipy.integrate
import scipy.linalg
import math
import numpy
from matplotlib import pyplot as plt
import datetime
#---------------------------
# VARIABLES -
#---------------------------
#equilibrium bond length in Angstrom
re = 0.96966
#effective mass in kg
amu = spc.physical_constants['atomic mass constant'][0]
mass = (1.*16./(1.+16.))*amu *2
#minimum x value for partice in a box calculation
xmin = -0.5+re
#maximum x value for particle in a box calculation
#there is no limit, but if xmax<xmin, the values will be swapped
#if xmax = xmin, then xmax will be set to xmin + 1
xmax = 1.5+re
#number of grid points at which to calculate integral
#must be an odd number. If an even number is given, 1 will be added.
#minimum number of grid points is 3
ngrid = 501
#number of PIB wavefunctions to include in basis set
#minimum is 1
nbasis = 100
#if you want to control the plot ranges, you can do so here
make_plots = True
plotymin = 0
plotymax = 0
plotxmin = 0
plotxmax = 0
#dissociation energy in cm-1
de = 37778.617
#force constant in N / m
fk = 774.7188418117737*0.75
#angular frequency in rad/s
omega = numpy.sqrt(fk/mass)
#output file for eigenvalues
outfile = "eigenvalues.txt"
#output PDF for energy spectrum
spectrumfile = "energy.pdf"
#output PDF for potential and eigenfunctions
potentialfile = "potential.pdf"
#--------------------------
# POTENTIAL DEFINITIONS -
#--------------------------
#definitions of potential functions
#particle in a box: no potential. Note: you need to manually set plotymin and plotymax above for proper graphing
def box(x):
return 0
#purely harmonic potential
#energy in cm^-1
prefactor = 0.5 * fk * 1e-20/(spc.h*spc.c*100.0)
def harmonic(x):
return prefactor*(x-re)*(x-re)
#anharmonic potential
#def anharmonic(x):
# return 0.5*(x**2.) + 0.05*(x**4.)
#Morse potential
#energy in cm^-1
#alpha in A^-1
alpha = math.sqrt(fk/2.0/(de*spc.h*spc.c*100.0))*1e-10
def morse(x):
return de*(1.-numpy.exp(-alpha*(x-re)))**2.
#double well potential (minima at x = +/-3)
#def doublewell(x):
# return x**4.-18.*x**2. + 81.
#potential function used for this calculation
def V(x):
return morse(x)
#------------------------
# BEGIN CALCULATION -
#------------------------
#verify that inputs are sane
if xmax==xmin:
xmax = xmin+1.
elif xmax<xmin:
xmin,xmax = xmax,xmin
L = xmax - xmin
#function to compute normalized PIB wavefunction
tl = numpy.sqrt(2./L)
pixl = numpy.pi/L
def pib(x,n,L):
return tl*math.sin(n*x*pixl)
ngrid = max(ngrid,3)
if not ngrid%2:
ngrid+=1
nbasis = max(1,nbasis)
#get current time
starttime = datetime.datetime.now()
#create grid
x = numpy.linspace(xmin,xmax,ngrid)
#create Hamiltonian matrix; fill with zeros
H = numpy.zeros((nbasis,nbasis))
#split Hamiltonian into kinetic energy and potential energy terms
#H = T + V
#V is defined above, and will be evaluated numerically
#Compute kinetic energy
#The Kinetic enery operator is T = -hbar^2/2m d^2/dx^2
#in the PIB basis, T|phi(i)> = hbar^2/2m n^2pi^2/L^2 |phi(i)>
#Therefore <phi(k)|T|phi(i)> = hbar^2/2m n^2pi^2/L^2 delta_{ki}
#Kinetic energy is diagonal
kepf = spc.hbar*spc.hbar*math.pi**2./(2.*mass*(L*1e-10)*(L*1e-10)*spc.h*spc.c*100)
for i in range(0,nbasis):
H[i,i] += kepf*(i+1.)*(i+1.)
#now, add in potential energy matrix elements <phi(j)|V|phi(i)>
#that is, multiply the two functions by the potential at each grid point and integrate
for i in range(0,nbasis):
for j in range(0,nbasis):
if j >= i:
y = numpy.zeros(ngrid)
for k in range(0,ngrid):
p = x[k]
y[k] += pib(p-xmin,i+1.,L)*V(p)*pib(p-xmin,j+1.,L)
            H[i,j] += sp.integrate.simpson(y, x=x)
else:
H[i,j] += H[j,i]
#Solve for eigenvalues and eigenvectors
evalues, evectors = sp.linalg.eigh(H)
evalues = evalues
#get ending time
endtime = datetime.datetime.now()
#------------------------
# GENERATE OUTPUT -
#------------------------
print("Results:")
print("-------------------------------------")
print(" v Energy (cm-1) Delta E (cm-1) ")
print("-------------------------------------")
for i in range(min(40,len(evalues))):
if i>0:
deltae = evalues[i] - evalues[i-1]
print(" {:>3d} {:>13.3f} {:>14.3f} ".format(i,evalues[i],deltae))
else:
print(" {:>3d} {:>13.3f} ------ ".format(i,evalues[i]))
with open(outfile,'w') as f:
f.write(str('#pibsolver.py output\n'))
f.write('#start time ' + starttime.isoformat(' ') + '\n')
f.write('#end time ' + endtime.isoformat(' ') + '\n')
f.write(str('#elapsed time ' + str(endtime-starttime) + '\n'))
f.write(str('#xmin {:1.4f}\n').format(xmin))
f.write(str('#xmax {:1.4f}\n').format(xmax))
f.write(str('#grid size {0}\n').format(ngrid))
f.write(str('#basis size {0}\n\n').format(nbasis))
f.write(str('#eigenvalues\n'))
f.write(str('{0:.5e}').format(evalues[0]))
for i in range(1,nbasis):
f.write(str('\t{0:.5e}').format(evalues[i]))
f.write('\n\n#eigenvectors\n')
for j in range(0,nbasis):
f.write(str('{0:.5e}').format(evectors[j,0]))
for i in range(1,nbasis):
f.write(str('\t{0:.5e}').format(evectors[j,i]))
f.write('\n')
if make_plots == True:
#if this is run in interactive mode, make sure plot from previous run is closed
plt.figure(1)
plt.close()
plt.figure(2)
plt.close()
plt.figure(1)
#Make graph of eigenvalue spectrum
title = "EV Spectrum, Min={:1.4f}, Max={:1.4f}, Grid={:3d}, Basis={:3d}".format(xmin,xmax,ngrid,nbasis)
plt.plot(evalues,'ro')
plt.xlabel('v')
plt.ylabel(r'E (cm$^{-1}$)')
plt.title(title)
plt.savefig(spectrumfile)
plt.show()
#Make graph with potential and eigenfunctions
plt.figure(2)
title = "Wfns, Min={:1.4f}, Max={:1.4f}, Grid={:3d}, Basis={:3d}".format(xmin,xmax,ngrid,nbasis)
vplot = numpy.zeros(x.size)
for i in range(0,x.size):
vplot[i] = V(x[i])
plt.plot(x,vplot)
if plotxmin == 0:
plotxmin = xmin
if plotxmax == 0:
plotxmax = xmax
if(plotxmin == plotxmax):
        plotxmin, plotxmax = xmin, xmax
if(plotxmin > plotxmax):
plotxmin, plotxmax = plotxmax,plotxmin
if plotymax == 0:
plotymax = 1.25*de
# plotymax = evalues[10]
if(plotymin > plotymax):
plotymin, plotymax = plotymax,plotymin
sf = 1.0
if len(evalues)>2:
sf = (evalues[2]-evalues[0])/5.0
for i in range(0,nbasis):
plt.plot([xmin,xmax],[evalues[i],evalues[i]],'k-')
for i in range(0,nbasis):
ef = numpy.zeros(ngrid)
ef += evalues[i]
for j in range(0,nbasis):
for k in range(0,ngrid):
ef[k] += evectors[j,i]*pib(x[k]-xmin,j+1,L)*sf
plt.plot(x,ef)
plt.plot([xmin,xmin],[plotymin,plotymax],'k-')
plt.plot([xmax,xmax],[plotymin,plotymax],'k-')
plt.axis([plotxmin,plotxmax,plotymin,plotymax])
plt.title(title)
plt.xlabel('$R$ (Angstrom)')
plt.ylabel(r'V (cm$^{-1}$)')
plt.savefig(potentialfile)
plt.show()
```
```python
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # needed for the 3D projection on older matplotlib versions
import numpy as np
#Probability of 1s
def prob_1s(x,y,z):
r=np.sqrt(np.square(x)+np.square(y)+np.square(z))
#Remember.. probability is psi squared!
return np.square(np.exp(-r)/np.sqrt(np.pi))
#Grid of candidate coordinates (the electron positions are sampled from these below)
x=np.linspace(0,1,30)
y=np.linspace(0,1,30)
z=np.linspace(0,1,30)
elements = []
probability = []
for ix in x:
for iy in y:
for iz in z:
#Serialize into 1D object
elements.append(str((ix,iy,iz)))
probability.append(prob_1s(ix,iy,iz))
#Ensure sum of probability is 1
probability = probability/sum(probability)
#Getting electron coordinates based on probability
coord = np.random.choice(elements, size=100000, replace=True, p=probability)
elem_mat = [i.split(',') for i in coord]
elem_mat = np.matrix(elem_mat)
x_coords = [float(i.item()[1:]) for i in elem_mat[:,0]]
y_coords = [float(i.item()) for i in elem_mat[:,1]]
z_coords = [float(i.item()[0:-1]) for i in elem_mat[:,2]]
#Plotting
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x_coords, y_coords, z_coords, alpha=0.05, s=2)
ax.set_title("Hydrogen 1s density")
plt.show()
```
```python
import matplotlib
#matplotlib.use("TkAgg")
import matplotlib.pyplot as plt
from matplotlib import animation
from matplotlib import style
import seaborn
import numpy as np
import pandas as pd
import warnings
######################################################################
#Ignore complex casting warning
warnings.filterwarnings('ignore')
class time_evolution:
'''
Class that takes an approach to solving the time-dependent Schrodinger
equation for a particle in an infinite square well, generating a Gaussian
wave packet as the initial state psi(x, 0).
'''
def __init__(self, hbar, m, quantum_number, total_time, dt,
L, x, n, a, l):
self.hbar = hbar
self.mass = m
self.quantum_number = quantum_number
self.total_time = total_time
self.time_step = dt
self.length = L
self.x = x
self.n = n
self.a = a
'''
Parameters
----------
x : array, float
length-N array of evenly spaced spatial coordinates
psi_x0 : array, complex
Initial wave function at time t0 = 0
psi_xt : array, complex
Time-dependent Schrodinger equation
hbar : scalar, float
value of planck's constant
m : scalar, float
particle mass
quantum_number : scalar, integer
values of conserved quantities in a quantum system
total_time : float
Time_step : scalar, integer
'''
def gaussan_wave_packet(self,x,x0,l,a):
A = (1/(4*a**2))**(1/4.0)
self.psi_x0 = A*(np.exp((-(x - x0)**2)
/(4*a**2))*np.exp(1j*l*x)).reshape(len(x),1)
print("psi_x0: " + str(self.psi_x0.shape))
def normalize(self):
self.A = ( 1/(np.sqrt(np.trapz((np.conj(self.psi_x0[:,0])
*self.psi_x0[:,0]), x[:,0]))))
self.psi_x0_normalized = self.A*self.psi_x0
print("Scalar A: " + str(A))
print("Psi0 Normalized: " + str(self.psi_x0_normalized.shape))
def phi_n(self):
self.phi = ( np.sqrt( 2/L ) * np.sin( (n * np.pi * x) /L ) )
print("Phi: " + str(self.phi.shape))
def energy_eigenvalues(self):
self.En = ((np.power(n,2)) * (np.pi**2)*(hbar**2))/(2*m*L**2)
print("En: " + str(self.En.shape))
def C_n(self):
self.Cn = np.zeros((quantum_number,1),dtype=complex)
for i in range(0,quantum_number):
self.Cn[i,0] = np.trapz((np.conj(self.phi)
* self.psi_x0_normalized)[:,i], x[:,0])
self.Cn = self.Cn.reshape(1, quantum_number)
print("Cn: " + str(self.Cn.shape))
def schrodinger_equation(self, total_time):
count = 0
for j in range(0, total_time, dt):
time = j
self.psi_xt = np.zeros((len(x),1),dtype=complex).reshape(len(x),1)
for k in range(0, quantum_number):
self.psi_xt[:,0] = self.psi_xt[:,0] + (self.Cn[0,k]
* self.phi[:,k] * (np.exp((-1j * self.En[0,k] * time)/hbar)))
count += 1
######################################################################
# plot
style.use('seaborn-dark')
plt.plot(x, np.real(self.psi_xt),'r',
label='real' r'$\psi(x,t)$', linewidth = 0.75)
plt.plot(x, np.imag(self.psi_xt),'b',
label=r'$imag \psi(x,t)$', linewidth = 0.75)
plt.plot(x, np.abs(self.psi_xt),'y',
label=r'$|\psi(x,t)|$', linewidth = 0.75)
x_min = min(self.x[:,0]-5)
x_max = max(self.x[:,0]+5)
psi_min = -A
psi_max = A
plt.xlim((x_min, x_max))
plt.ylim((psi_min, psi_max))
plt.legend(prop=dict(size=6))
psi_x_line, = plt.plot([], [], c='r')
V_x_line, = plt.plot([], [], c='k')
left_wall_line = plt.axvline(0, c='k', linewidth=2)
right_well_line = plt.axvline(x[-1], c='k', linewidth=2)
plt.pause(0.01)
plt.draw()
plt.clf()
plt.cla()
print('The number of iterations: ' + str(count))
######################################################################
#Predefined parameters
quantum_number = 500
x = np.linspace(0,100,1000).astype(complex).reshape(1000,1)
n = np.arange(1,quantum_number+1).reshape(1,quantum_number)
x0 = 30
a = 5
l = 2
A = (1/(4*a**2))**(1/4.0)
m = 2#int(938000000)
hbar = 1#6.58211951*10**(-16)
total_time = 1*10**2
L = x[-1]
dt = 1
######################################################################
#Welcome statement
print('-'*100 + '\n' + 'Analytical solution to the Time-Dependent Schrodinger equation \n' +
'for an unbounded particle in an infinite square well \n \n' +
'author: Dr. Anathnath Ghosh \n' +
'email: anathnath.rivu@gmail.com \n')
print('Note: change frames in animate for length of recording')
########################################################################
#Inputs for customization
choose_custom = int(input('Enter 1 to run or enter any key to customize: '))
if choose_custom == int(1):
pass
else:
quantum_number = int(input('Quantum number: '))
length_of_well = int(input('Length of the well (nm): '))
x0 = int(input('Center of wave packet (i.e. center of well): '))
a = int(input('Enter the width of wave packet (sigma): '))
l = int(input('Enter number of waves: '))
total_time = int(input('Total run time: '))
dt = int(input('Enter time step: '))
dx = int(input('Enter length intervals: '))
x = np.linspace(0,int(length_of_well),
int(dx)).astype(complex).reshape(int(dx),1)
#########################################################################
#Run class
choose_run= int(input('Enter 1 to run for loop or any key to \n'
'run animation for recording: '))
Schrodinger = time_evolution(hbar,m,quantum_number,total_time,dt,
L,x,n,a,l)
if choose_run == int(1):
Schrodinger.gaussan_wave_packet(x,x0,l,a)
Schrodinger.normalize()
Schrodinger.phi_n()
Schrodinger.energy_eigenvalues()
Schrodinger.C_n()
Schrodinger.schrodinger_equation(total_time)
elif choose_run == int(2):
######################################################################
# animate
Schrodinger.gaussan_wave_packet(x,x0,l,a)
Schrodinger.normalize()
Schrodinger.phi_n()
Schrodinger.energy_eigenvalues()
Schrodinger.C_n()
style.use('seaborn-dark')
fig = plt.figure(figsize=(20,15))
x_min = min(Schrodinger.x[:,0]-5)
x_max = max(Schrodinger.x[:,0]+5)
psi_min = A * (-1)
psi_max = A
xlim = ((x_min, x_max))
ylim = ((psi_min, psi_max))
ax = fig.add_subplot(111, xlim = xlim, ylim = ylim)
psi_xt_real, = ax.plot([], [], c='r',
label='real' r'$\psi(x,t)$', linewidth = 0.75)
psi_xt_imag, = ax.plot([], [], c='b',
label=r'$imag \psi(x,t)$', linewidth = 0.75)
psi_xt_abs, = ax.plot([], [], c='y',
label=r'$|\psi(x,t)|$', linewidth = 0.75)
left_wall_line = ax.axvline(0, c='k', linewidth=1)
right_well_line = ax.axvline(x[-1], c='k', linewidth=1)
title = ax.set_title('', fontsize=20)
ax.legend(prop=dict(size=15), loc='upper center', shadow=True, ncol=3)
ax.set_xlabel('$x$', fontsize=20)
ax.set_ylabel(r'$|\psi(x)|$', fontsize=20)
ax.xaxis.set_tick_params(labelsize=20)
ax.yaxis.set_tick_params(labelsize=20)
i = np.zeros([])
def init():
psi_xt_real.set_data([], [])
psi_xt_imag.set_data([], [])
psi_xt_abs.set_data([], [])
title.set_text('')
return psi_xt_real,
def animate(i):
i = i/50
time = np.linspace(0,1000,1000).astype(complex)
psi_xt = np.zeros((len(x),1),dtype=complex).reshape(len(x),1)
for k in range(0, quantum_number):
psi_xt[:,0] = psi_xt[:,0] + (Schrodinger.Cn[0,k] *
Schrodinger.phi[:,k] * (np.exp((-1j *
Schrodinger.En[0,k] * i)/hbar)))
psi_xt_real.set_data(x, np.real(psi_xt))
psi_xt_imag.set_data(x, np.imag(psi_xt))
psi_xt_abs.set_data(x, np.abs(psi_xt))
title.set_text('Time evolution: t = %.2f' %i)
record = str(input('Do you want to record? Enter y or n: '))
if record == 'y':
animate = matplotlib.animation.FuncAnimation(fig, animate,
init_func=init, frames=1000, interval=1, repeat=False)
animate.save('animation.gif',
writer='imagemagick', fps=60, dpi=80)
animate.save('time_evolution.mp4', fps=120,
extra_args=['-vcodec', 'libx264'])
else:
plt.show()
plt.clf()
if __name__ == '__main__':
init()
animate(i)
```
```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import genlaguerre
from scipy.integrate import simps
a = 5.29*10**-11
numValues = 20000
def R(n,l,r):
'''Returns the radial wavefunction for each value of r where n is the principle quantum number and l is the angular momentum quantum number.'''
y=np.zeros(numValues)
#Generates an associated laguerre polynomial as a scipy.special.orthogonal.orthopoly1d
Lag = genlaguerre(n-l-1, 2*l+1)
#Loops through each power of r in the laguerre polynomial and adds to the total
for i in range(len(Lag)+1):
i = float(i)
#The main equation (What its all about)
y = y + (((r/(n*a))**(l))*(np.exp(-r/(a*n)))*Lag[int(i)]*(((2*r)/(a*n))**i))
return y
#Approximates integral value and divides by the absolute value.
def normalise(x,y): #Not a true quantum-mechanical normalisation: scales y by the integral of |y| so the area under the curve is 1. Needs checking.
integral = simps(np.absolute(y),x)
print(integral, "simps")
print(type(integral))
y = y/np.absolute(integral)
return y
def plotPsi(r,psi):
plt.subplot(311)
plt.plot(r, psi)
plt.grid(True)
plt.xlabel("r/a (m)")
plt.ylabel("Psi")
def plotPsiSquared(r,psi):
psiSquared = psi**2
psiSquared = normalise(r,psiSquared)
plt.plot(r, psiSquared)
plt.grid(True)
plt.xlabel("r/a (m)")
plt.ylabel("Psi^2")
def plotRadialDistribution(r,psi):
radialDistribution = 4*np.pi*(r**2)*psi**2
radialDistribution = normalise(r, radialDistribution)
plt.plot(r, radialDistribution)
plt.grid(True)
plt.xlabel("r/a (m)")
plt.ylabel("4*pi*(r^2)*Psi")
def plotAll(r,psi):
plt.figure(1)
plt.subplot(311)
plotPsi(r,psi)
plt.subplot(312)
plotPsiSquared(r,psi)
plt.subplot(313)
plotRadialDistribution(r,psi)
def graphs(r, psi,choice):
if choice == 1:
plotPsi(r,psi)
elif choice == 2:
plotPsiSquared(r,psi)
elif choice == 3:
plotRadialDistribution(r,psi)
else:
plotAll(r,psi)
def main():
numPsi = int(input("How many wavefunctions do you want to draw?"))
#Note that the whole wavefunction should be drawn or the normalise function will not work correctly
width = float(input("To what value of r (in units of a) do you want to plot?")) * a
print("1: Plot the wavefunction")
print("2: Plot the wavefunction squared (the probability density)")
print("3: Plot the wavefunction squared times pi r^2 (the radial distribution function)")
print("4: Plot all 3")
choice = int(input("Choose an option:"))
for i in range(numPsi):
print("Wavefunction " + str(i+1))
n = int(input("Enter n:"))
l = int(input("Enter l:"))
r = np.linspace(0,width, numValues)
psi = R(n,l,r)
#Converts r into units of a
r = r/a
psi = normalise(r, psi)
graphs(r, psi,choice)
plt.show()
if __name__ == '__main__':
main()
```
```python
!pip install Schrodinger
```
```python
import numpy as np
from scipy import fftpack
#Class to obtain a solution of the Schrodinger equation for a given potential
class Schrodinger(object):
#All parameters of the class are validated and initialised here
def __init__(self, x, psi_x0, V_x, k0 = None, hbar = 1, m = 1, t0 = 0.0):
#Validation of array inputs
self.x, psi_x0, self.V_x = map(np.asarray, (x, psi_x0, V_x))
N = self.x.size
#Set internal parameters
assert hbar > 0
assert m > 0
self.hbar = hbar
self.m = m
self.t = t0
self.dt_ = None
self.N = len(x)
self.dx = self.x[1] - self.x[0]
self.dk = 2 * np.pi / (self.N * self.dx)
#Set momentum scale
if k0 is None:
self.k0 = -0.5 * self.N * self.dk
else:
assert k0 < 0
self.k0 = k0
self.k = self.k0 + self.dk * np.arange(self.N)
self.psi_x = psi_x0
self.compute_k_from_x()
#Variables which hold steps in evolution
self.x_evolve_half = None
self.x_evolve = None
self.k_evolve = None
def _set_psi_x(self, psi_x):
assert psi_x.shape == self.x.shape
self.psi_mod_x = (psi_x * np.exp(-1j * self.k[0] * self.x) * self.dx / np.sqrt(2 * np.pi))
self.psi_mod_x /= self.norm
self.compute_k_from_x()
def _get_psi_x(self):
return (self.psi_mod_x * np.exp(1j * self.k[0] * self.x) * np.sqrt(2 * np.pi) / self.dx)
def _set_psi_k(self, psi_k):
assert psi_k.shape == self.x.shape
self.psi_mod_k = psi_k * np.exp(1j * self.x[0] * self.dk * np.arange(self.N))
self.compute_x_from_k()
self.compute_k_from_x()
def _get_psi_k(self):
return self.psi_mod_k * np.exp(-1j * self.x[0] * self.dk * np.arange(self.N))
def _get_dt(self):
return self.dt_
def _set_dt(self, dt):
assert dt != 0
if dt != self.dt_:
self.dt_ = dt
self.x_evolve_half = np.exp(-0.5 * 1j * self.V_x / self.hbar * self.dt)
self.x_evolve = self.x_evolve_half * self.x_evolve_half
self.k_evolve = np.exp(-0.5 * 1j * self.hbar / self.m * (self.k * self.k) * self.dt)
def _get_norm(self):
return self.wf_norm(self.psi_mod_x)
psi_x = property(_get_psi_x, _set_psi_x)
psi_k = property(_get_psi_k, _set_psi_k)
norm = property(_get_norm)
dt = property(_get_dt, _set_dt)
#The Fourier transform
def compute_k_from_x(self):
self.psi_mod_k = fftpack.fft(self.psi_mod_x)
#The inverse Fourier transform
def compute_x_from_k(self):
self.psi_mod_x = fftpack.ifft(self.psi_mod_k)
#To calculate the norm of a wave function
def wf_norm(self, wave_fn):
assert wave_fn.shape == self.x.shape
return np.sqrt((abs(wave_fn) ** 2).sum() * 2 * np.pi / self.dx)
def solve(self, dt, Nsteps = 1, eps = 1e-3, max_iter = 1000):
eps = abs(eps)
assert eps > 0
t0 = self.t
old_psi = self.psi_x
d_psi = 2 * eps
num_iter = 0
while (d_psi > eps) and (num_iter <= max_iter):
num_iter += 1
self.time_step(-1j * dt, Nsteps)
d_psi = self.wf_norm(self.psi_x - old_psi)
old_psi = 1. * self.psi_x
self.t = t0
#Discretization and solving for each time interval...
def time_step(self, dt, Nsteps = 1):
assert Nsteps >= 0
self.dt = dt
if Nsteps > 0:
self.psi_mod_x *= self.x_evolve_half
for num_iter in range(Nsteps - 1):
self.compute_k_from_x()
self.psi_mod_k *= self.k_evolve
self.compute_x_from_k()
self.psi_mod_x *= self.x_evolve
self.compute_k_from_x()
self.psi_mod_k *= self.k_evolve
self.compute_x_from_k()
self.psi_mod_x *= self.x_evolve_half
self.compute_k_from_x()
self.psi_mod_x /= self.norm
self.compute_k_from_x()
self.t += dt * Nsteps
import matplotlib.pyplot as plt
from matplotlib import animation
import numpy as np
#from schrod_class import Schrodinger
#Helper functions for Gaussian wave-packets
def gauss_x(x, a, x0, k0):
#A Gaussian wave packet of width a, centered at x0, with momentum k0
return ((a * np.sqrt(np.pi)) ** (-0.5) * np.exp(-0.5 * ((x - x0) * 1. / a) ** 2 + 1j * x * k0))
def gauss_k(k, a, x0, k0):
#Fourier transform of gauss_x(x)
return ((a / np.sqrt(np.pi)) ** 0.5 * np.exp(-0.5 * (a * (k - k0)) ** 2 - 1j * (k - k0) * x0))
#Helper function to define the potential barrier
def theta(x):
#Return 0 if x <= 0, and 1 if x > 0
x = np.asarray(x)
y = np.zeros(x.shape)
y[x > 0] = 1.0
return y
#The potential barrier
def square_barrier(x, width, height):
return height * (theta(x) - theta(x - width))
#Create the animation
#Time steps and duration (to be used as parameters for the FFT and the animation)
dt = 0.01
N_steps = 50
t_max = 120
frames = int(t_max / float(N_steps * dt))
#Constants
hbar = 1.0 #Planck's constant
m = 1.9 #Particle mass
#Range of values of x
N = 2 ** 11
dx = 0.1
x = dx * (np.arange(N) - 0.5 * N)
#Potential specification
V0 = 1.5 #Barrier height
L = hbar / np.sqrt(2 * m * V0)
a = 3 * L #Barrier width
x0 = -60 * L
V_x = square_barrier(x, a, V0)
V_x[x < -98] = 1E6
V_x[x > 98] = 1E6
#Specify initial momentum and quantities derived from it
p0 = np.sqrt(2 * m * 0.2 * V0)
dp2 = p0 * p0 * 1. / 80
d = hbar / np.sqrt(2 * dp2)
k0 = p0 / hbar
v0 = p0 / m
psi_x0 = gauss_x(x, d, x0, k0)
#Define the Schrodinger object which performs the calculations
S = Schrodinger(x = x, psi_x0 = psi_x0, V_x = V_x, hbar = hbar, m = m, k0 = -28)
#Setting up the plot...
fig = plt.figure()
#Axis limits
xlim = (-100, 100)
klim = (-5, 5)
#First plot (x-space) :
ymin = 0
ymax = V0
ax1 = fig.add_subplot(211, xlim = xlim, ylim = (ymin - 0.2 * (ymax - ymin), ymax + 0.2 * (ymax - ymin)))
psi_x_line, = ax1.plot([], [], c = 'r', label = r'$|\psi(x)|$')
V_x_line, = ax1.plot([], [], c = 'k', label = r'$V(x)$')
center_line = ax1.axvline(0, c = 'k', ls = ':', label = r"$x_0 + v_0t$")
title = ax1.set_title("")
ax1.legend(prop = dict(size = 12))
ax1.set_xlabel('$x$')
ax1.set_ylabel(r'$|\psi(x)|$')
#Second plot (k-space) :
ymin = abs(S.psi_k).min()
ymax = abs(S.psi_k).max()
ax2 = fig.add_subplot(212, xlim = klim, ylim = (ymin - 0.2 * (ymax - ymin), ymax + 0.2 * (ymax - ymin)))
psi_k_line, = ax2.plot([], [], c = 'r', label = r'$|\psi(k)|$')
p0_line1 = ax2.axvline(-p0 / hbar, c = 'k', ls = ':', label = r'$\pm p_0$')
p0_line2 = ax2.axvline(p0 / hbar, c = 'k', ls = ':')
mV_line = ax2.axvline(np.sqrt(2 * V0) / hbar, c = 'k', ls = '--', label = r'$\sqrt{2mV_0}$')
ax2.legend(prop = dict(size = 12))
ax2.set_xlabel('$k$')
ax2.set_ylabel(r'$|\psi(k)|$')
V_x_line.set_data(S.x, S.V_x) #Feeding the plot with the data
#Functions to help in animation
def init():
psi_x_line.set_data([], [])
V_x_line.set_data([], [])
center_line.set_data([], [])
psi_k_line.set_data([], [])
title.set_text("")
return (psi_x_line, V_x_line, center_line, psi_k_line, title)
def animate(i):
S.time_step(dt, N_steps)
psi_x_line.set_data(S.x, 4 * abs(S.psi_x))
V_x_line.set_data(S.x, S.V_x)
center_line.set_data(2 * [x0 + S.t * p0 / m], [0, 1])
psi_k_line.set_data(S.k, abs(S.psi_k))
title.set_text("t = %.2f" % S.t)
return (psi_x_line, V_x_line, center_line, psi_k_line, title)
anim = animation.FuncAnimation(fig, animate, init_func = init, frames = frames, interval = 30, blit = True)
plt.show()
```
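The `solve` method defined above (imaginary-time propagation) is never exercised by the animation. A minimal sketch of how it could be used to relax towards the ground state of the same potential, reusing the `x`, `psi_x0` and `V_x` arrays already defined, might look like this (the step size, tolerance and iteration cap are illustrative assumptions):
```python
# Sketch only: imaginary-time relaxation with Schrodinger.solve() to approximate the ground state
S_ground = Schrodinger(x=x, psi_x0=psi_x0, V_x=V_x, hbar=hbar, m=m, k0=-28)
S_ground.solve(dt=0.02, Nsteps=10, eps=1e-6, max_iter=2000)
plt.plot(S_ground.x, abs(S_ground.psi_x), c='r')
plt.xlabel('$x$')
plt.ylabel(r'$|\psi_0(x)|$')
plt.show()
```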
```python
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
def timevol(x, q, N, problem, fPOT):
# uncomment these two lines, as well as anim.save,
# to encode and save the simulation video
#Writer = animation.writers['ffmpeg']
#writer = Writer(fps=30, bitrate=1800)
def _update_plot(i, fig, phi):
ax.clear()
ax.set_xlim([-50, 50])
ax.set_ylim([-1, 1])
scat = plt.plot(x, q[:, i])
return scat
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_xlim([-50, 50])
ax.set_ylim([-1, 1])
scat = plt.plot(x, q[:, 0])
anim = animation.FuncAnimation(fig, _update_plot, fargs=(fig, scat),
frames=N, interval=N/10)
anim.save(r'free.mp4')
plt.show()
```
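`timevol` assumes the solution snapshots are already stacked column-wise in `q` (one column per frame); its `problem` and `fPOT` arguments are never used inside the function. A hypothetical usage sketch with toy data (note that `anim.save` requires ffmpeg to be installed) could look like this:
```python
import numpy as np

# Toy snapshots only: a drifting, oscillating Gaussian stands in for a computed wavefunction
x = np.linspace(-50, 50, 401)
N = 200
q = np.zeros((x.size, N))
for i in range(N):
    q[:, i] = np.exp(-0.5 * (x + 20.0 - 0.2 * i) ** 2) * np.cos(2 * x)

timevol(x, q, N, problem=None, fPOT=None)
```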
```python
import numpy as np
import matplotlib.pyplot as plt
L = 200 #number of discretisation nodes
T = 2000 #number of steps
dt = 0.001 #step size
x = range(L) #solution space (1D)
def D2(F): #Discrete Laplacian with periodic boundary conditions and next-neighbour kernel
ker = np.array([1,-2,1])
Fnew = np.zeros_like(F)
for x in range(L):
xleft = (x+L-1)%L
xright = (x+L+1)%L
Fnew[x]= ker[0]*F[xleft] + ker[1]*F[x] + ker[2]*F[xright]
return Fnew
def schrodinger(Y): #Schrodinger equation for a free particle (V=0), iY_t = -Y_xx; writing Y=u+iv and stepping the Re and Im parts separately
u,v=Y[0,:],Y[1,:]
ut = -D2(v)
vt = D2(u)
return np.array([u+ut*dt,v+vt*dt])
uo = np.zeros(L)
vo = np.zeros(L)
xo = np.random.randint(0,L)
uo[(xo+1)%L]=1/np.sqrt(3)
uo[xo]=1/np.sqrt(3)
uo[(xo-1+L)%L]=1/np.sqrt(3)
Yo = np.array([uo,vo]) #setting initial conditions
fig=plt.figure()
plt.plot(x,np.zeros(L))
plt.plot(x,np.power(Yo[0,:],2)+np.power(Yo[1,:],2))
plt.ylim(0,1)
plt.savefig("out.png") #plotting initial stage
plt.close(fig)
Y=schrodinger(Yo)
for t in range(T): #plotting time evolution
fig=plt.figure()
plt.plot(x,np.zeros(L))
plt.plot(x,np.power(Y[0,:],2)+np.power(Y[1,:],2))
plt.ylim(0,1)
#plt.savefig("out {0:03d}.png".format(t+1))
plt.close(fig)
Y=schrodinger(Y)
```
```python
Y
```
# ACKNOWLEDGEMENTS:
# 1. Prof. Abhijit KarGupta, Pashkura College
# 2. Prof. Arunava Adhikary, WBSU
# 3. Book: Physics in Laboratory
# by Dr. Pradipta Kumar Mandal
# 4. Book: Python Programming
# by Dr. Abhijit KarGupta
# 5. GOOGLE MASTER !
*[End of notebook `CBCS_V_SchrodingerEquationPartI.ipynb`, repo `anathnath/EDA`, MIT license.]*
# Loading and Plotting Data
For the first part, we'll be doing linear regression with one variable, and so we'll use only two fields from the daily data set: the normalized high temperature in C, and the total number of bike rentals. The values for rentals are scaled by a factor of a thousand, given the difference in magnitude between them and the normalized temperatures.
```python
import pandas as pd
data = pd.read_csv("./data.csv")
temps = data['atemp'].values
rentals = data['cnt'].values / 1000
print(temps)
```
[0.363625 0.353739 0.189405 0.212122 0.22927 0.233209 0.208839
0.162254 0.116175 0.150888 0.191464 0.160473 0.150883 0.188413
0.248112 0.234217 0.176771 0.232333 0.298422 0.25505 0.157833
0.0790696 0.0988391 0.11793 0.234526 0.2036 0.2197 0.223317
0.212126 0.250322 0.18625 0.23453 0.254417 0.177878 0.228587
0.243058 0.291671 0.303658 0.198246 0.144283 0.149548 0.213509
0.232954 0.324113 0.39835 0.254274 0.3162 0.428658 0.511983
0.391404 0.27733 0.284075 0.186033 0.245717 0.289191 0.350461
0.282192 0.351109 0.400118 0.263879 0.320071 0.200133 0.255679
0.378779 0.366252 0.238461 0.3024 0.286608 0.385668 0.305
0.32575 0.380091 0.332 0.318178 0.36693 0.410333 0.527009
0.466525 0.32575 0.409735 0.440642 0.337939 0.270833 0.256312
0.257571 0.250339 0.257574 0.292908 0.29735 0.257575 0.283454
0.315637 0.378767 0.542929 0.39835 0.387608 0.433696 0.324479
0.341529 0.426737 0.565217 0.493054 0.417283 0.462742 0.441913
0.425492 0.445696 0.503146 0.489258 0.564392 0.453892 0.321954
0.450121 0.551763 0.5745 0.594083 0.575142 0.578929 0.497463
0.464021 0.448204 0.532833 0.582079 0.40465 0.441917 0.474117
0.512621 0.518933 0.525246 0.522721 0.5284 0.523363 0.4943
0.500629 0.536 0.550512 0.538529 0.527158 0.510742 0.529042
0.571975 0.5745 0.590296 0.604813 0.615542 0.654688 0.637008
0.612379 0.61555 0.671092 0.725383 0.720967 0.643942 0.587133
0.594696 0.616804 0.621858 0.65595 0.727279 0.757579 0.703292
0.678038 0.643325 0.601654 0.591546 0.587754 0.595346 0.600383
0.643954 0.645846 0.595346 0.637646 0.693829 0.693833 0.656583
0.643313 0.637629 0.637004 0.692558 0.654688 0.637008 0.652162
0.667308 0.668575 0.665417 0.696338 0.685633 0.686871 0.670483
0.664158 0.690025 0.729804 0.739275 0.689404 0.635104 0.624371
0.638263 0.669833 0.703925 0.747479 0.74685 0.826371 0.840896
0.804287 0.794829 0.720958 0.696979 0.690667 0.7399 0.785967
0.728537 0.729796 0.703292 0.707071 0.679937 0.664788 0.656567
0.676154 0.715292 0.703283 0.724121 0.684983 0.651521 0.654042
0.645858 0.624388 0.616167 0.645837 0.666671 0.662258 0.633221
0.648996 0.675525 0.638254 0.606067 0.630692 0.645854 0.659733
0.635556 0.647959 0.607958 0.594704 0.611121 0.614921 0.604808
0.633213 0.665429 0.625646 0.5152 0.544229 0.555361 0.578946
0.607962 0.609229 0.60213 0.603554 0.6269 0.553671 0.461475
0.478512 0.490537 0.529675 0.532217 0.550533 0.554963 0.522125
0.564412 0.572637 0.589042 0.574525 0.575158 0.574512 0.544829
0.412863 0.345317 0.392046 0.472858 0.527138 0.480425 0.504404
0.513242 0.523983 0.542925 0.546096 0.517717 0.551804 0.529675
0.498725 0.503154 0.510725 0.522721 0.513848 0.466525 0.423596
0.425492 0.422333 0.457067 0.463375 0.472846 0.457046 0.318812
0.227913 0.321329 0.356063 0.397088 0.390133 0.405921 0.403392
0.323854 0.362358 0.400871 0.412246 0.409079 0.373721 0.306817
0.357942 0.43055 0.524612 0.507579 0.451988 0.323221 0.272721
0.324483 0.457058 0.445062 0.421696 0.430537 0.372471 0.380671
0.385087 0.4558 0.490122 0.451375 0.311221 0.305554 0.331433
0.310604 0.3491 0.393925 0.4564 0.400246 0.256938 0.317542
0.266412 0.253154 0.270196 0.301138 0.338362 0.412237 0.359825
0.249371 0.245579 0.280933 0.396454 0.428017 0.426121 0.377513
0.299242 0.279961 0.315535 0.327633 0.279974 0.263892 0.318812
0.414121 0.375621 0.252304 0.126275 0.119337 0.278412 0.340267
0.390779 0.340258 0.247479 0.318826 0.282821 0.381938 0.249362
0.183087 0.161625 0.190663 0.364278 0.275254 0.190038 0.220958
0.174875 0.16225 0.243058 0.349108 0.294821 0.35605 0.415383
0.326379 0.272721 0.262625 0.381317 0.466538 0.398971 0.309346
0.272725 0.264521 0.296426 0.361104 0.266421 0.261988 0.293558
0.210867 0.101658 0.227913 0.333946 0.351629 0.330162 0.351629
0.355425 0.265788 0.273391 0.295113 0.392667 0.444446 0.410971
0.255675 0.268308 0.357954 0.353525 0.34847 0.475371 0.359842
0.413492 0.303021 0.241171 0.255042 0.3851 0.524604 0.397083
0.277767 0.35967 0.459592 0.542929 0.548617 0.532825 0.436229
0.505046 0.464 0.532821 0.538533 0.513258 0.531567 0.570067
0.486733 0.437488 0.43875 0.315654 0.47095 0.482304 0.375621
0.421708 0.417287 0.427513 0.461483 0.53345 0.431163 0.390767
0.426129 0.492425 0.476638 0.436233 0.337274 0.387604 0.431808
0.487996 0.573875 0.614925 0.598487 0.457038 0.493046 0.515775
0.542921 0.389504 0.301125 0.405283 0.470317 0.483583 0.452637
0.377504 0.450121 0.457696 0.577021 0.537896 0.537242 0.590917
0.584608 0.546737 0.527142 0.557471 0.553025 0.491783 0.520833
0.544817 0.585238 0.5499 0.576404 0.595975 0.572613 0.551121
0.566908 0.583967 0.565667 0.580825 0.584612 0.6067 0.627529
0.642696 0.641425 0.6793 0.672992 0.611129 0.631329 0.607962
0.566288 0.575133 0.578283 0.525892 0.542292 0.569442 0.597862
0.648367 0.663517 0.659721 0.597875 0.611117 0.624383 0.599754
0.594708 0.571975 0.544842 0.654692 0.720975 0.752542 0.724121
0.652792 0.674254 0.654042 0.594704 0.640792 0.675512 0.786613
0.687508 0.750629 0.702038 0.70265 0.732337 0.761367 0.752533
0.804913 0.790396 0.654054 0.664796 0.650271 0.654683 0.667933
0.666042 0.705196 0.724125 0.755683 0.745583 0.714642 0.613025
0.549912 0.623125 0.690017 0.70645 0.654054 0.739263 0.734217
0.697604 0.667933 0.684987 0.662896 0.667308 0.707088 0.722867
0.751267 0.731079 0.710246 0.697621 0.707717 0.699508 0.667942
0.638267 0.644579 0.662254 0.676779 0.654037 0.654688 0.2424
0.618071 0.603554 0.595967 0.601025 0.621854 0.637008 0.6471
0.618696 0.595996 0.654688 0.66605 0.635733 0.652779 0.6894
0.702654 0.649 0.661629 0.686888 0.708983 0.655329 0.657204
0.611121 0.578925 0.565654 0.554292 0.570075 0.579558 0.594083
0.585867 0.563125 0.55305 0.565067 0.540404 0.532192 0.571971
0.610488 0.518933 0.502513 0.544179 0.596613 0.607975 0.585863
0.530296 0.517663 0.512 0.542333 0.599133 0.607975 0.580187
0.538521 0.419813 0.387608 0.438112 0.503142 0.431167 0.433071
0.391396 0.508204 0.53915 0.460846 0.450108 0.512625 0.537896
0.472842 0.456429 0.482942 0.530304 0.558721 0.529688 0.52275
0.515133 0.467771 0.4394 0.309909 0.3611 0.369942 0.356042
0.323846 0.329538 0.308075 0.281567 0.274621 0.341891 0.355413
0.393937 0.421713 0.475383 0.323225 0.281563 0.324492 0.347204
0.326383 0.337746 0.375621 0.380667 0.364892 0.350371 0.378779
0.248742 0.257583 0.339004 0.281558 0.289762 0.298422 0.323867
0.316904 0.359208 0.455796 0.469054 0.428012 0.258204 0.321958
0.389508 0.390146 0.435575 0.338363 0.297338 0.294188 0.294192
0.338383 0.369938 0.4015 0.409708 0.342162 0.335217 0.301767
0.236113 0.259471 0.2589 0.294465 0.220333 0.226642 0.255046
0.2424 0.2317 0.223487 ]
The plot reveals some degree of correlation between temperature and bike rentals, as one might guess.
```python
import matplotlib.pyplot as plt
%matplotlib inline
plt.scatter(temps, rentals, marker='x', color='red')
plt.xlabel('Normalized Temperature in C')
plt.ylabel('Bike Rentals in 1000s')
```
# Simple Linear Regression
We'll start by implementing the [cost function](https://en.wikipedia.org/wiki/Loss_function) for linear regression, specifically [mean squared error](https://en.wikipedia.org/wiki/Mean_squared_error) (MSE). Intuitively, MSE represents an aggregation of the distances between point's actual y value and what a hypothesis function $h_\theta(x)$ predicted it would be. That hypothesis function and the cost function $J(\theta)$ are defined as
\begin{align}
h_\theta(x) & = \theta_0 + \theta_1x_1 \\
J(\theta) & = \frac{1}{2m}\sum\limits_{i = 1}^{m}(h_\theta(x^{(i)}) - y^{(i)})^2
\end{align}
where $\theta$ is a vector of feature weights, $x^{(i)}$ is the ith training example, $y^{(i)}$ is that example's y value, and $x_j$ is the value for its jth feature.
```python
import numpy as np
def compute_cost(X, y, theta):
return np.sum(np.square(np.matmul(X, theta) - y)) / (2 * len(y))
```
Before computing the cost with an initial guess for $\theta$, a column of 1s is prepended onto the input data. This allows us to vectorize the cost function, as well as make it usable for multiple linear regression later. This first value $\theta_0$ now behaves as a constant in the cost function.
```python
# Prepend a column of ones to X so that theta[0] acts as the intercept (bias) term
theta = np.zeros(2)
X = np.column_stack((np.ones(len(temps)), temps))
y = rentals
cost = compute_cost(X, y, theta)
print('theta:', theta)
print('cost:', cost)
print(X)
```
theta: [0. 0.]
cost: 12.018406441176468
[[1. 0.363625]
[1. 0.353739]
[1. 0.189405]
...
[1. 0.2424 ]
[1. 0.2317 ]
[1. 0.223487]]
We'll now minimize the cost using the [gradient descent](https://en.wikipedia.org/wiki/Gradient_descent) algorithm. Intuitively, gradient descent takes small, linear hops down the slope of a function in each feature dimension, with the size of each hop determined by the partial derivative of the cost function with respect to that feature and a learning rate multiplier $\alpha$. If tuned properly, the algorithm converges on a global minimum by iteratively adjusting feature weights $\theta$ of the cost function, as shown here for two feature dimensions:
\begin{align}
\theta_0 & := \theta_0 - \alpha\frac{\partial}{\partial\theta_0} J(\theta_0,\theta_1) \\
\theta_1 & := \theta_1 - \alpha\frac{\partial}{\partial\theta_1} J(\theta_0,\theta_1)
\end{align}
The update rule each iteration then becomes:
\begin{align}
\theta_0 & := \theta_0 - \alpha\frac{1}{m} \sum_{i=1}^m (h_\theta(x^{(i)})-y^{(i)}) \\
\theta_1 & := \theta_1 - \alpha\frac{1}{m} \sum_{i=1}^m (h_\theta(x^{(i)})-y^{(i)})x_1^{(i)} \\
\end{align}
See [here](http://mccormickml.com/2014/03/04/gradient-descent-derivation/) for a more detailed explanation of how the update equations are derived.
```python
def gradient_descent(X, y, alpha, iterations):
theta = np.zeros(2)
m = len(y)
for i in range(iterations):
t0 = theta[0] - (alpha / m) * np.sum(np.dot(X, theta) - y)
t1 = theta[1] - (alpha / m) * np.sum((np.dot(X, theta) - y) * X[:,1])
theta = np.array([t0, t1])
return theta
iterations = 5000
alpha = 0.1
theta = gradient_descent(X, y, alpha, iterations)
cost = compute_cost(X, y, theta)
print("theta:", theta)
print('cost:', compute_cost(X, y, theta))
```
theta: [0.94588081 7.50171673]
cost: 1.1275869258439812
We can examine the values of $\theta$ chosen by the algorithm using a few different visualizations, first by plotting $h_\theta(x)$ against the input data. The results show the expected correlation between temperature and rentals.
```python
plt.scatter(temps, rentals, marker='x', color='red')
plt.xlabel('Normalized Temperature in C')
plt.ylabel('Bike Rentals in 1000s')
samples = np.linspace(min(temps), max(temps))
plt.plot(samples, theta[0] + theta[1] * samples)
```
A surface plot is a better illustration of how gradient descent approaches a global minimum, plotting the values for $\theta$ against their associated cost. This requires a bit more code than an implementation in Octave / MATLAB, largely due to how the input data is generated and fed to the surface plot function.
```python
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
Xs, Ys = np.meshgrid(np.linspace(-5, 5, 50), np.linspace(-40, 40, 50))
Zs = np.array([compute_cost(X, y, [t0, t1]) for t0, t1 in zip(np.ravel(Xs), np.ravel(Ys))])
Zs = np.reshape(Zs, Xs.shape)
fig = plt.figure(figsize=(7,7))
ax = fig.gca(projection="3d")
ax.set_xlabel(r't0')
ax.set_ylabel(r't1')
ax.set_zlabel(r'cost')
ax.view_init(elev=25, azim=40) # rotate 3D image
ax.plot_surface(Xs, Ys, Zs, cmap=cm.rainbow)
```
Finally, a contour plot reveals slices of that surface plot in 2D space, and can show the resulting $\theta$ values sitting exactly at the global minimum.
```python
ax = plt.figure().gca()
ax.plot(theta[0], theta[1], 'r*')
plt.contour(Xs, Ys, Zs, np.logspace(-3, 3, 15))
#plt.contour(Xs, Ys, Zs)
```
# Multiple Linear Regression
First, we reload the data and add two more features, humidity and windspeed.
Before implementing gradient descent for multiple variables, we'll also apply [feature scaling](https://en.wikipedia.org/wiki/Feature_scaling) to normalize feature values, preventing any one of them from disproportionately influencing the results, as well as helping gradient descent converge more quickly. In this case, each feature value is adjusted by subtracting the mean and dividing the result by the standard deviation of all values for that feature:
$$
z = \frac{x - \mu}{\sigma}
$$
More details on feature scaling and normalization can be found [here](http://sebastianraschka.com/Articles/2014_about_feature_scaling.html).
```python
data
```

| | instant | dteday | season | yr | mnth | holiday | weekday | workingday | weathersit | temp | atemp | hum | windspeed | casual | registered | cnt |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 1 | 2011-01-01 | 1 | 0 | 1 | 0 | 6 | 0 | 2 | 0.344167 | 0.363625 | 0.805833 | 0.160446 | 331 | 654 | 985 |
| 1 | 2 | 2011-01-02 | 1 | 0 | 1 | 0 | 0 | 0 | 2 | 0.363478 | 0.353739 | 0.696087 | 0.248539 | 131 | 670 | 801 |
| 2 | 3 | 2011-01-03 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | 0.196364 | 0.189405 | 0.437273 | 0.248309 | 120 | 1229 | 1349 |
| 3 | 4 | 2011-01-04 | 1 | 0 | 1 | 0 | 2 | 1 | 1 | 0.200000 | 0.212122 | 0.590435 | 0.160296 | 108 | 1454 | 1562 |
| 4 | 5 | 2011-01-05 | 1 | 0 | 1 | 0 | 3 | 1 | 1 | 0.226957 | 0.229270 | 0.436957 | 0.186900 | 82 | 1518 | 1600 |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 726 | 727 | 2012-12-27 | 1 | 1 | 12 | 0 | 4 | 1 | 2 | 0.254167 | 0.226642 | 0.652917 | 0.350133 | 247 | 1867 | 2114 |
| 727 | 728 | 2012-12-28 | 1 | 1 | 12 | 0 | 5 | 1 | 2 | 0.253333 | 0.255046 | 0.590000 | 0.155471 | 644 | 2451 | 3095 |
| 728 | 729 | 2012-12-29 | 1 | 1 | 12 | 0 | 6 | 0 | 2 | 0.253333 | 0.242400 | 0.752917 | 0.124383 | 159 | 1182 | 1341 |
| 729 | 730 | 2012-12-30 | 1 | 1 | 12 | 0 | 0 | 0 | 1 | 0.255833 | 0.231700 | 0.483333 | 0.350754 | 364 | 1432 | 1796 |
| 730 | 731 | 2012-12-31 | 1 | 1 | 12 | 0 | 1 | 1 | 2 | 0.215833 | 0.223487 | 0.577500 | 0.154846 | 439 | 2290 | 2729 |

731 rows × 16 columns
```python
def feature_normalize(X):
'''
scale each feature to zero mean and unit variance
--> gradient descent will converge faster
'''
n_features = X.shape[1]
means = np.array([np.mean(X[:,i]) for i in range(n_features)])
stddevs = np.array([np.std(X[:,i]) for i in range(n_features)])
normalized = (X - means) / stddevs
return normalized
'''
X = data.values
X = np.matrix(X)
X = feature_normalize(X)
X = np.column_stack((np.ones(len(X)), X))
data_y = pd.read_csv("E:\DATAs\data.csv",usecols=['cnt'])
y = data_y['cnt'].values/1000
'''
data2 = pd.read_csv("./data.csv",usecols=['atemp', 'hum', 'windspeed'])
X = data2.values
#X = np.matrix(X)
#X = data.as_matrix(columns=['atemp', 'hum', 'windspeed'])
print(np.shape(X))
X = feature_normalize(X)
X = np.column_stack((np.ones(len(X)), X))
y = data['cnt'].values / 1000
```
(731, 3)
```python
print(X)
print(X.shape)
```
[[ 1. -0.67994602 1.25017133 -0.38789169]
[ 1. -0.74065231 0.47911298 0.74960172]
[ 1. -1.749767 -1.33927398 0.74663186]
...
[ 1. -1.42434419 0.87839173 -0.85355213]
[ 1. -1.49004895 -1.01566357 2.06944426]
[ 1. -1.54048197 -0.35406086 -0.46020122]]
(731, 4)
The next step is to implement gradient descent for any number of features. Fortunately, the update step generalizes easily, and can be vectorized to avoid iterating through $\theta_j$ values as might be suggested by the single variable implementation above:
$$
\theta_j := \theta_j - \alpha\frac{1}{m} \sum_{i=1}^m (h_\theta(x^{(i)})-y^{(i)})x_j^{(i)}
$$
```python
def gradient_descent_multi(X, y, theta, alpha, iterations):
theta = np.zeros(X.shape[1])
m = len(X)
for i in range(iterations):
gradient = (1/m) * np.matmul(X.T, np.matmul(X, theta) - y)
theta = theta - alpha * gradient
return theta
theta = gradient_descent_multi(X, y, theta, alpha, iterations)
cost = compute_cost(X, y, theta)
print('theta:', theta)
print('cost', cost)
```
theta: [ 4.50434884 1.22203893 -0.45083331 -0.34166068]
cost 1.0058709247119848
Unfortunately, it's now more difficult to evaluate the results visually, but we can check them against a totally different method of calculating the answer, the [normal equation](http://eli.thegreenplace.net/2014/derivation-of-the-normal-equation-for-linear-regression/). This solves directly for the solution without iterating or specifying an $\alpha$ value, although it begins to perform worse than gradient descent for large (10,000+) numbers of features.
$$
\theta = (X^TX)^{-1}X^Ty
$$
```python
from numpy.linalg import inv
def normal_eq(X, y):
return inv(X.T.dot(X)).dot(X.T).dot(y)
theta = normal_eq(X, y)
cost = compute_cost(X, y, theta)
print('theta:', theta)
print('cost:', cost)
```
theta: [ 4.50434884 1.22203893 -0.45083331 -0.34166068]
cost: 1.0058709247119846
The $\theta$ values and costs for each implementation are identical, so we can have a high degree of confidence they are correct.
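As one last sanity check, we can use the fitted parameters to predict rentals for a single query point. The feature values below are made up purely for illustration, and they must be normalized with the same means and standard deviations as the training data:
```python
# Hypothetical query: normalized temperature 0.6, humidity 0.5, windspeed 0.2
means = data2.values.mean(axis=0)
stds = data2.values.std(axis=0)
sample = (np.array([0.6, 0.5, 0.2]) - means) / stds
predicted = theta[0] + sample.dot(theta[1:])
print('Predicted rentals: {:.0f}'.format(predicted * 1000))
```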
## Linear Regression in Tensorflow
Tensorflow offers significantly higher-level abstractions to work with, representing the algorithm as a computational graph. It has a built-in gradient descent optimizer that can minimize the cost function without us having to define the gradient manually.
We'll begin by reloading the data and adapting it to more Tensorflow-friendly data structures and terminology. Features are still normalized as before, but the added column of 1s is absent: the constant is treated separately as a *bias* variable, the previous $\theta$ values are now *weights*.
```python
import tensorflow as tf
X = data[['atemp', 'hum', 'windspeed']].values  # .as_matrix() was removed in newer pandas
X = feature_normalize(X)
y = data['cnt'].values / 1000
y = y.reshape((-1, 1))
m = X.shape[0]
n = X.shape[1]
examples = tf.placeholder(tf.float32, [m,n])
labels = tf.placeholder(tf.float32, [m,1])
weights = tf.Variable(tf.zeros([n,1], dtype=np.float32), name='weight')
bias = tf.Variable(tf.zeros([1], dtype=np.float32), name='bias')
```
The entire gradient descent occurs below in only three lines of code. All that's needed is to define the hypothesis and cost functions, and then a gradient descent optimizer to find the minimum.
```python
hypothesis = tf.add(tf.matmul(examples, weights), bias)
cost = tf.reduce_sum(tf.square(hypothesis - labels)) / (2 * m)
optimizer = tf.train.GradientDescentOptimizer(alpha).minimize(cost)
```
The graph is now ready to use, and all the remains is to start up a session, run the optimizer iteratively, and check the results.
```python
with tf.Session() as sess:
init = tf.global_variables_initializer()
sess.run(init)
for i in range(1, iterations):
sess.run(optimizer, feed_dict={
examples: X,
labels: y
})
print('bias:', sess.run(bias))
print('weights:', sess.run(weights))
```
The bias and weight values are identical to the $\theta$ values calculated in both implementations previously, so the Tensorflow implementation of the algorithm looks correct.
*[End of notebook `ml-linear-regression.ipynb`, repo `Jerry671/natural-language-project`, Apache-2.0 license.]*
```python
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
```
# The frequency of a Ricker wavelet
We often use Ricker wavelets to model seismic, for example when making a synthetic seismogram with which to help tie a well. One simple way to guesstimate the peak or central frequency of the wavelet that will model a particular seismic section is to count the peaks per unit time in the seismic. But this tends to overestimate the actual frequency because the maximum [frequency](http://www.subsurfwiki.org/wiki/Frequency) of [a Ricker wavelet](http://subsurfwiki.org/wiki/Ricker_wavelet) is more than the peak frequency. The question is, how much more?
To investigate, let's make a Ricker wavelet and see what it looks like in the time and frequency domains.
```python
T, dt, f = 0.256, 0.001, 25
import bruges
w, t = bruges.filters.ricker(T, dt, f, return_t=True)
import scipy.signal
f_W, W = scipy.signal.welch(w, fs=1/dt, nperseg=256)
```
```python
fig, axs = plt.subplots(figsize=(15,5), ncols=2)
axs[0].plot(t, w)
axs[0].set_xlabel("Time [s]")
axs[1].plot(f_W[:25], W[:25], c="C1")
axs[1].set_xlabel("Frequency [Hz]")
plt.show()
```
When we count the peaks in a section, the assumption is that this apparent frequency — that is, the reciprocal of apparent period or distance between the extrema — tells us the dominant or peak frequency.
To help see why this assumption is wrong, let's compare the Ricker with a signal whose apparent frequency does match its peak frequency: a pure cosine:
```python
c = np.cos(2*25*np.pi*t)
f_C, C = scipy.signal.welch(c, fs=1/dt, nperseg=256)
```
```python
fig, axs = plt.subplots(figsize=(15,5), ncols=2)
axs[0].plot(t, c, c="C2")
axs[0].set_xlabel("Time [s]")
axs[1].plot(f_C[:25], C[:25], c="C1")
axs[1].set_xlabel("Frequency [Hz]")
plt.show()
```
Notice that the signal is much narrower in bandwidth. If we allowed more oscillations, it would be even narrower. If it lasted forever, it would be a spike in the frequency domain.
Let's overlay the signals to get a picture of the difference in the relative periods:
```python
plt.figure(figsize=(15, 5))
plt.plot(t, c, c='C2')
plt.plot(t, w)
plt.xlabel("Time [s]")
plt.show()
```
The practical consequence of this is that if we estimate the peak frequency to be $f\ \mathrm{Hz}$, then we need to reduce $f$ by some factor if we want to design a wavelet to match the data. To get this factor, we need to know the apparent period of the Ricker function, as given by the time difference between the two minima.
Let's look at a couple of different ways to find those minima: numerically and analytically.
## Find minima numerically
We'll use [`scipy.optimize.minimize`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html#scipy.optimize.minimize) to find a numerical solution. In order to use it, we'll need a slightly different expression for the Ricker function — casting it in terms of a time basis `t`. We'll also keep `f` as a variable, rather than hard-coding it in the expression, to give us the flexibility of computing the minima for different values of `f`.
Here's the equation we're implementing:
$$w(t, f) = (1 - 2\pi^2 f^2 t^2)\ e^{-\pi^2 f^2 t^2}$$
```python
def ricker(t, f):
return (1 - 2*(np.pi*f*t)**2) * np.exp(-(np.pi*f*t)**2)
```
Check that the wavelet looks like it did before, by comparing the output of this function when `f` is 25 with the wavelet `w` we were using before:
```python
f = 25
np.allclose(w, ricker(t, f=25))
```
True
```python
plt.figure(figsize=(15, 5))
plt.plot(w, lw=3)
plt.plot(ricker(t, f), '--', c='C4', lw=3)
plt.show()
```
Now we call SciPy's `minimize` function on our `ricker` function. It itertively searches for a minimum solution, then gives us the `x` (which is really `t` in our case) at that minimum:
```python
import scipy.optimize
f = 25
scipy.optimize.minimize(ricker, x0=0, args=(f))
```
fun: -0.4462603202963996
hess_inv: array([[1]])
jac: array([-2.19792128e-07])
message: 'Optimization terminated successfully.'
nfev: 30
nit: 1
njev: 10
status: 0
success: True
x: array([0.01559393])
So the minimum amplitude, given by `fun`, is $-0.44626$ and it occurs at an `x` (time) of $\pm 0.01559\ \mathrm{s}$.
In comparison, the minima of the cosine function occur at a time of $\pm 0.02\ \mathrm{s}$. In other words, the period appears to be $0.02 - 0.01559 = 0.00441\ \mathrm{s}$ shorter than the pure waveform, which is...
```python
(0.02 - 0.01559) / 0.02
```
0.22050000000000003
...about 22% shorter. This means that if we naively estimate frequency by counting peaks or zero crossings, we'll tend to overestimate the peak frequency of the wavelet by about 22% — assuming it is approximately Ricker-like; if it isn't we can use the same method to estimate the error for other functions.
This is good to know, but it would be interesting to know if this parameter depends on frequency, and also to have a more precise way to describe it than a decimal. To get at these questions, we need an analytic solution.
## Find minima analytically
Python's [SymPy package](http://sympy.org/) is a bit like Maple — it understands math symbolically. We'll use [`sympy.solve`](http://docs.sympy.org/latest/modules/solvers/solvers.html) to find an analytic solution. It turns out that it needs the Ricker function writing in yet another way, using SymPy symbols and expressions for $\mathrm{e}$ and $\pi$.
```python
import sympy as sp
t = sp.Symbol('t')
f = sp.Symbol('f')
r = (1 - 2*(sp.pi*f*t)**2) * sp.exp(-(sp.pi*f*t)**2)
```
Now we can easily find the solutions to the Ricker equation, that is, the times at which the function is equal to zero:
```python
sp.solvers.solve(r, t)
```
[-sqrt(2)/(2*pi*f), sqrt(2)/(2*pi*f)]
But this is not quite what we want. We need the minima, not the zero-crossings.
Maybe there's a better way to do this, but here's one way. Note that the gradient (slope or derivative) of the Ricker function is zero at the minima, so let's just solve the first time derivative of the Ricker function. That will give us the three times at which the function has a gradient of zero.
```python
dwdt = sp.diff(r, t)
sp.solvers.solve(dwdt, t)
```
[0, -sqrt(6)/(2*pi*f), sqrt(6)/(2*pi*f)]
In other words, the non-zero minima of the Ricker function are at:
$$\pm \frac{\sqrt{6}}{2\pi f}$$
Let's just check that this evaluates to the same answer we got from `scipy.optimize`, which was 0.01559.
```python
np.sqrt(6) / (2 * np.pi * 25)
```
0.015593936024673521
The solutions agree.
While we're looking at this, we can also compute the analytic solution to the amplitude of the minima, which SciPy calculated as -0.446. We just substitute one of the expressions for the minimum time into the expression for `r`:
```python
r.subs({t: sp.sqrt(6)/(2*sp.pi*f)})
```
-2*exp(-3/2)
## Apparent frequency
So what's the result of all this? What's the correction we need to make?
The minima of the Ricker wavelet are $\sqrt{6}\ /\ \pi f_\mathrm{actual}\ \mathrm{s}$ apart — this is the apparent period. If we're assuming a pure tone, this period corresponds to an apparent frequency of $\pi f_\mathrm{actual}\ /\ \sqrt{6}\ \mathrm{Hz}$. For $f = 25\ \mathrm{Hz}$, this apparent frequency is:
```python
(np.pi * 25) / np.sqrt(6)
```
32.06374575404661
If we were to try to model the data with a Ricker of 32 Hz, the frequency will be too high. We need to multiply the frequency by a factor of $\sqrt{6} / \pi$, like so:
```python
32.064 * np.sqrt(6) / (np.pi)
```
25.00019823475659
This gives the correct frequency of 25 Hz.
To sum up, rearranging the expression above:
$$f_\mathrm{actual} = f_\mathrm{apparent} \frac{\sqrt{6}}{\pi}$$
Expressed as a decimal, the factor we were seeking is therefore $\sqrt{6}\ /\ \pi$:
```python
np.sqrt(6) / np.pi
```
0.779696801233676
That is, the reduction factor is 22%.
----
Curious coincidence: in [the recent Pi Day post](https://agilescientific.com/blog/2018/3/14/happy-pi-day-einstein), I mentioned the Riemann zeta function of 2 as a way to compute pi. It evaluates to $(\pi / \sqrt{6})^2$. Is there a connection between the Ricker wavelet and the Riemann hypothesis?
I doubt it.
*[End of notebook `The_frequency_of_a_Ricker.ipynb`, repo `agilescientific/notebooks`, Apache-2.0 license.]*
## Perturbation theory
Perturbation theory consists of solving a perturbed system (the solution of the unperturbed one is known), where the interest is in finding the contribution of the perturbed part $H'$ to the new total system.
$$ H = H^{0} + H'$$
For non-degenerate systems, the first-order correction to the energy is computed as
$$ E_{n}^{(1)} = \int\psi_{n}^{(0)*} H' \psi_{n}^{(0)}d\tau$$
**Task 1: Program this equation, given $H^{0}$ and its solutions.**
```python
from sympy.physics.qho_1d import psi_n
from sympy.physics.qho_1d import E_n
from sympy import *
from sympy import init_printing; init_printing(use_latex = 'mathjax')
n, m, m_e, omega, hbar = symbols('n m m_e omega hbar', real = True, constant = True)
var('x')
m_e = 9.10938356e-31
n = Abs(sympify(input('Valor de la energia: ')))
omega = sympify(input('Frecuencia Angular: '))
# Wavefunction of the unperturbed harmonic-oscillator Hamiltonian
FunOnda = psi_n(n, x, m_e, omega)
# Energy of the unperturbed Hamiltonian
E0 = E_n(n, omega)
# Define the new Hamiltonian (kinetic + harmonic potential) acting on the wavefunction, adding the perturbation
H = ((-(hbar**2)/(2*m_e))*diff(FunOnda, x, 2) + FunOnda*(m_e*(omega*x)**2)/(2))+FunOnda*sympify(input('Perturbation: '))
# Inner product
sandwich = conjugate(FunOnda)*H
E = integrate(sandwich, (x, -oo,oo))
Error = (((E-E0)*100)/E0)
E
```
Valor de la energia: 1
Frecuencia Angular: 1
Perturbation: 0
$$\frac{0.75 \hbar^{2}}{\hbar} + 0.75 \hbar$$
And the correction to the wavefunction, also at first order, is obtained as:
$$ \psi_{n}^{(1)} = \sum_{m\neq n} \frac{\langle\psi_{m}^{(0)} | H' | \psi_{n}^{(0)} \rangle}{E_{n}^{(0)} - E_{m}^{(0)}} \psi_{m}^{(0)}$$
**Task 2: Program this equation, given $H^{0}$ and its solutions.**
```python
### Solution
# Import the quantum harmonic oscillator eigenfunctions and energies from sympy
from sympy.physics.qho_1d import psi_n
from sympy.physics.qho_1d import E_n
from sympy import *
from sympy import init_printing; init_printing(use_latex = 'mathjax')
n, m, m_e, omega, hbar = symbols('n m m_e omega hbar', real = True, constant = True)
var('x')
m_e = 9.10938356e-31
# Energy level for which the wavefunction correction is computed
n = Abs(sympify(input('Nivel de energia para la correcion de la funcion de onda: ')))
i= Abs(sympify(input('Nivel mas alto de energia:')))
omega = sympify(input('Frecuencia Angular: '))
# Unperturbed wavefunction
FunOnda = psi_n(n, x, m_e, omega)
# Energy before the perturbation
E0 = E_n(n, omega)
# Perturbation H' applied to the unperturbed wavefunction
H = FunOnda*sympify(input('Perturbacion: '))
# Unperturbed energy of the Hamiltonian
E0 = E_n(n, omega)
psicorrec = 0
for m in range(i):
if m !=n:
psim= psi_n(m, x, m_e, omega)
producto = conjugate(psim)*H
sandwich = integrate(producto, (x,-oo,oo))
Em = E_n(m, omega)
correc = ((sandwich)/(E0-Em))*psim
psicorrec = psicorrec + correc
else:
psicorrec = psicorrec
# Inner-product integral (first-order energy correction)
sandwich = conjugate(FunOnda)*H
E = E0 + integrate(sandwich, (x, -oo,oo))
Error = (((E-E0)*100)/E0)
psipert = FunOnda + psicorrec
psipert.evalf()
psiplot = conjugate(psipert)*psipert
plot(psiplot,(x,-0.1,0.1))
```
**Task 3: Look up the second-order solutions and program them as well.**
```python
### Solution
# The second-order energy correction has a similar form; here we work with energies instead of wavefunctions
from sympy.physics.qho_1d import psi_n
from sympy.physics.qho_1d import E_n
from sympy import *
from sympy import init_printing; init_printing(use_latex = 'mathjax')
n, m, m_e, omega, hbar = symbols('n m m_e omega hbar', real = True, constant = True)
var('x')
m_e = 9.10938356e-31
# Energy level we are working with
n = Abs(sympify(input('Nivel de energia para la correcion de la funcion de onda: ')))
i= Abs(sympify(input('Nivel mas alto de energia:')))
omega = sympify(input('Frecuencia angular: '))
# Unperturbed wavefunction
FunOnda = psi_n(n, x, m_e, omega)
# Energy before the perturbation
E0 = E_n(n, omega)
# Perturbation H' applied to the unperturbed wavefunction
H = FunOnda*sympify(input('Perturbacion: '))
intepriorden = conjugate(FunOnda)*H
priorden = integrate (intepriorden, (x,-oo,oo))
# Unperturbed energy of the Hamiltonian
E0 = E_n(n, omega)
Ecorrec = 0
# The first part of the correction is the first-order term computed above; the loop below accumulates the second-order sum
for m in range(i):
if m !=n:
psim= psi_n(m, x, m_e, omega)
product = conjugate(psim)*H
sandwich = integrate(product, (x,-oo,oo))
Em = E_n(m, omega)
corr = ((sandwich)**2/(E0-Em))
Ecorrec = Ecorrec + corr
else:
Ecorrec = Ecorrec
E = E0 + priorden + Ecorrec
E
```
Nivel de energia para la correcion de la funcion de onda: 1
Nivel mas alto de energia:10
Frecuencia angular: 1
Perturbacion: x**3
$$- 1.17409001625997 \cdot 10^{91} \hbar^{2} - \frac{\hbar^{2}}{\pi} 1.883130520616 \cdot 10^{59} + \frac{3 \hbar}{2}$$
**Task 4. Solve the helium atom using the programs above.**
```python
from sympy.physics.hydrogen import E_nl, R_nl
var('r1, r2, q', positive=True, real=True)
def Helium(N1,N2,L1,L2):
Eb=E_nl(N1,1)+E_nl(N2,1)
Psi1=R_nl(N1, L1, r1, Z=1)
Psi2=R_nl(N2, L2, r2, Z=1)
Psi=Psi1*Psi2
E_correction1 = integrate(integrate(r1**2*r2**2*conjugate(Psi1)*conjugate(Psi2)*q**2*Psi1*Psi2/abs(r1-r2), (r1,0,oo)), (r2,0,oo))
E_correctionR1 = q**2*integrate(r2**2*conjugate(Psi2)*Psi2*(integrate(r1**2*conjugate(Psi1)*Psi1/r2, (r1,0,r2))+integrate(r1**2*conjugate(Psi1)*Psi1/r1, (r1,r2,oo))), (r2,0,oo))
return Psi, Eb, E_correction1, E_correctionR1
E_correction1, E_correctionR1
Helium (1,1,0,0)
```
$$\left ( \frac{4}{e^{r_{1}} e^{r_{2}}}, \quad -1, \quad 16 q^{2} \int_{0}^{\infty} \frac{r_{2}^{2}}{e^{2 r_{2}}} \int_{0}^{\infty} \frac{r_{1}^{2}}{e^{2 r_{1}} \left|{r_{1} - r_{2}}\right|}\, dr_{1}\, dr_{2}, \quad \frac{5 q^{2}}{8}\right )$$
**Task 5: Variational-perturbative method.**
This method lets us estimate $E^{(2)}$ and higher-order perturbative corrections to the ground-state energy of the system accurately, without evaluating infinite sums. See equation 9.38 of the book.
**Solve the helium atom using this method (section 9.4), however you see fit.**
**Task 6. Review section 9.7.**
First by hand, and then please try to program that part of the problem, i.e. the Coulomb integral and the exchange integral.
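As a starting point for this task, here is a minimal sketch of how the two integrals could be set up in sympy, reusing the radial $r_<$, $r_>$ splitting from Task 4. The hydrogen-like 1s and 2s orbitals with $Z = 2$ (no shielding) and atomic units are assumptions made only for illustration; this is not the assigned solution.
```python
# Sketch only: Coulomb (J) and exchange (K) radial integrals for helium 1s2s,
# assuming hydrogen-like orbitals with Z = 2 and atomic units.
from sympy import var, integrate, oo
from sympy.physics.hydrogen import R_nl

var('r1 r2', positive=True, real=True)

R1s_1, R2s_1 = R_nl(1, 0, r1, Z=2), R_nl(2, 0, r1, Z=2)   # radial parts at r1
R1s_2, R2s_2 = R_nl(1, 0, r2, Z=2), R_nl(2, 0, r2, Z=2)   # radial parts at r2

def radial_repulsion(d1, d2):
    # Integrate d1(r1)*d2(r2)/r_> over both radii; for s orbitals only the
    # l = 0 term survives, so 1/r_> is 1/r2 for r1 < r2 and 1/r1 for r1 > r2.
    inner = (integrate(r1**2*d1/r2, (r1, 0, r2))
             + integrate(r1**2*d1/r1, (r1, r2, oo)))
    return integrate(r2**2*d2*inner, (r2, 0, oo))

J = radial_repulsion(R1s_1**2, R2s_2**2)        # Coulomb integral J_1s2s
K = radial_repulsion(R1s_1*R2s_1, R1s_2*R2s_2)  # exchange integral K_1s2s
J, K   # textbook values for Z = 2: 17*Z/81 = 34/81 and 16*Z/729 = 32/729 hartree
```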
## Next: Part two, October
Molecular symmetries and Hartree-Fock
```python
```
| 2f78a038361aa710d2708c8a8776481c4f713f38 | 33,664 | ipynb | Jupyter Notebook | Perturbaciones/Perturbaciones lalo.ipynb | lazarusA/Density-functional-theory | c74fd44a66f857de570dc50471b24391e3fa901f | [
"MIT"
] | null | null | null | Perturbaciones/Perturbaciones lalo.ipynb | lazarusA/Density-functional-theory | c74fd44a66f857de570dc50471b24391e3fa901f | [
"MIT"
] | null | null | null | Perturbaciones/Perturbaciones lalo.ipynb | lazarusA/Density-functional-theory | c74fd44a66f857de570dc50471b24391e3fa901f | [
"MIT"
] | null | null | null | 76.335601 | 20,376 | 0.767051 | true | 2,404 | Qwen/Qwen-72B | 1. YES
2. YES | 0.757794 | 0.746139 | 0.56542 | __label__spa_Latn | 0.696464 | 0.15199 |
## Heat Transfer problem with linear initial temperature and steady surface temperature
```python
import numpy as np
import math
import matplotlib.pyplot as plt
%matplotlib inline
from scipy.optimize import newton
from mpl_toolkits.mplot3d import axes3d
from matplotlib import cm
import numpy.ma as ma
from scipy.integrate import odeint
import matplotlib.image as mpimg
```
```python
pic = mpimg.imread('screenshot.png')
plt.imshow(pic)
plt.axis('off')
plt.show()
```
\begin{equation}\begin{aligned}& \frac{{\partial u}}{{\partial t}} = k\frac{{{\partial ^2}u}}{{\partial {x^2}}}\\ & u\left( {x,0} \right) = T_0+bx\hspace{0.25in}u\left( {0,t} \right) = {T_0}\hspace{0.25in}\,\,\,\,\,u\left( {L,t} \right) = {T_0}\end{aligned}\label{eq:eq1}\end{equation}
This problem can be solved with a Fourier sine series, but we cannot use separation of variables directly since the boundary conditions are nonhomogeneous. Here we use a little trick:
Let:$$v(x,t)=u(x,t)-T_0$$
Then the heat equation can be rewritten as:
$$ \frac{\partial v}{\partial t}=k\frac{\partial^2 v}{\partial x^2}$$
with the homogeneous boundary conditions:
$$v(x,0)=bx, v(0,t)=0, v(L,t)=0$$
The solution can be given by Fourier sine series as:
$$v\left( {x,t} \right) = \sum\limits_{n = 1}^\infty {{B_n}\sin \left( {\frac{{n\pi x}}{L}} \right){{\bf{e}}^{ - k{{\left( {\frac{{n\pi }}{L}} \right)}^2}\,t}}}$$
where the coefficient $B_n$ is:
\begin{align*}{B_{\,n}} & = \frac{2}{L}\int_{{\,0}}^{{\,L}}{{bx\sin \left( {\frac{{n\,\pi x}}{L}} \right)\,dx}} = \frac{2b}{L}\left. {\left( {\frac{L}{{{n^2}{\pi ^2}}}} \right)\left( {L\sin \left( {\frac{{n\,\pi x}}{L}} \right) - n\pi x\cos \left( {\frac{{n\,\pi x}}{L}} \right)} \right)} \right|_0^L\\ & = \frac{2b}{{{n^2}{\pi ^2}}}\left( {L\sin \left( {n\,\pi } \right) - n\pi L\cos \left( {n\,\pi } \right)} \right)\end{align*}
It follows that:
$$B_n = \frac{(-1)^{n+1}2bL}{n\pi}$$
The final solution is given as:
$$\frac{u(x,t)-T_0}{bL}=\frac{2}{\pi}\sum\limits_{n = 1}^\infty \frac{(-1)^{n+1}}{n}\sin\left(\frac{n\pi x}{L}\right)e^{-k(\frac{n\pi}{L})^2 t}$$
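As a quick sanity check, this coefficient can be reproduced symbolically; the small sketch below uses fresh sympy symbols so it does not clash with the numpy variables used later.
```python
# Symbolic check of B_n = (2/L) * integral of b*x*sin(n*pi*x/L) over [0, L]
import sympy as sp

xs, Ls, bs = sp.symbols('x L b', positive=True)
ns = sp.symbols('n', integer=True, positive=True)

B_n = sp.simplify(2/Ls*sp.integrate(bs*xs*sp.sin(ns*sp.pi*xs/Ls), (xs, 0, Ls)))
B_n  # expected: 2*L*b*(-1)**(n + 1)/(pi*n)
```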
##### Discussion:
In transient heat transfer problems, we introduce a dimensionless number, the Fourier number, denoted $Fo$:
$$Fo=\frac{kt}{L^2}$$
where k is the thermal diffusivity in $m^2/s$.
The physical meaning of $Fo$ is:
$$Fo = \frac{\text{diffusive transport rate}}{\text{storage rate}}$$
$Fo$ can act as a "dimensionless time" in related analysis, since the original heat equation:
$$\frac{\partial u}{\partial t} = k \frac{\partial^2 u}{\partial x^2}$$ can be rewritten in the dimensionless form:
$$\frac{\partial u}{\partial \Theta} = \frac{\partial^2 u}{\partial X^2}$$
where $\Theta = Fo$ and $X=\frac{x}{L}$
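For example, with some illustrative (assumed) values:
```python
# Example Fourier number; the diffusivity, length and time below are assumptions
alpha = 4e-6     # thermal diffusivity k [m^2/s]
L_char = 0.01    # characteristic length L [m]
t_elapsed = 10   # elapsed time [s]
Fo = alpha*t_elapsed/L_char**2
Fo               # -> 0.4 for these assumed values
```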
##### Visualize the Fourier series
```python
x = np.linspace(0,1,100)
y1 = x
a = 0
fig, ax = plt.subplots()
for n in range (1,50):
B = ((-1)**(n+1))/n
a = a+ B*np.sin(n*math.pi*x)
y2 = 2*a/math.pi
ax.plot(x,y1)
ax.plot(x,y2)
```
##### Draw the temperature profile:
```python
x = np.linspace(0,1,100)
fo = np.linspace(0,0.5,100)
xv, fov = np.meshgrid(x,fo)
a = 0
for n in range (1,200):
B = ((-1)**(n+1))/n
a = a+ (B*np.sin(n*math.pi*xv)*np.exp((-(n**2)*(math.pi)**2)*fov))
u = (2*a)/(math.pi)+273
fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(xv, fov, u, cmap=cm.coolwarm)
ax.set_xlabel('x')
ax.set_ylabel('Fo')
ax.view_init(elev=50, azim=70)
# Set To=273K and coefficients b=1, L=1
```
A perfect visualization of Fourier series in heat transfer problems is given by
[3Blue1Brown Solve the heat equations](https://www.youtube.com/watch?v=ToIXSwZ1pJU&t=127s)
```python
```
| 8cd1f6a76bb8d0952c6af65787270520e59595d6 | 194,505 | ipynb | Jupyter Notebook | presentations/11_04_19_Renyu.ipynb | uw-cheme512/uw-cheme512.github.io | 6dad7a9554eafb6eba347462d30c62bf9c0ec4da | [
"BSD-3-Clause"
] | null | null | null | presentations/11_04_19_Renyu.ipynb | uw-cheme512/uw-cheme512.github.io | 6dad7a9554eafb6eba347462d30c62bf9c0ec4da | [
"BSD-3-Clause"
] | null | null | null | presentations/11_04_19_Renyu.ipynb | uw-cheme512/uw-cheme512.github.io | 6dad7a9554eafb6eba347462d30c62bf9c0ec4da | [
"BSD-3-Clause"
] | null | null | null | 750.984556 | 167,752 | 0.948963 | true | 1,327 | Qwen/Qwen-72B | 1. YES
2. YES | 0.931463 | 0.863392 | 0.804217 | __label__eng_Latn | 0.691246 | 0.706798 |
```python
# Libraries Sympy and Numpy
from sympy import*
import numpy as np
# To define automatic printing mode (Not needed anymore)
# init_printing()
# For priting with text
from IPython.display import display, Latex
# Plotting Libraries
from mpl_toolkits import mplot3d
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
import matplotlib.ticker as ticker
#interactive plotting in separate window
#%matplotlib qt
#normal charts inside notebooks
#%matplotlib inline
# Use Latex in plots
plt.rcParams['text.usetex'] = False
```
# 1. IPT System Frequency Splitting Model
## Equations Derivation
```python
# Definition of the symbolic variables
# Resistances --> Coils and Load
R_1, R_2, R_L = symbols('R_1, R_2, R_L',
real = True)
# Inductance and Capacitance
C_1, C_2, L_1, L_2, M_i = symbols('C_1, C_2, L_1,L_2, M_i',
real = True)
# Reactance
X_1, X_2 = symbols('X_1, X_2',
real = True)
# Angular frequency
w_e = symbols('w_e', real = True)
# Input Voltage and Current
V_1, I_1 = symbols('V_1, I_1', real = True)
```
```python
# Reactance in each mesh
#X_1 = (w_e*L_1) - (1/(w_e*C_1))
#X_2 = (w_e*L_2) - (1/(w_e*C_2))
```
- Equivalent Circuit's Matrix Representation: $[Z][I] = [V]$
```python
# Impedance Matrix [Z]
Z_m = Matrix([[R_1 + (I*X_1), I*w_e*M_i],
[I*w_e*M_i, R_2 + R_L + (I*X_2)]])
# Voltage Vector
V_m = Matrix([[V_1], [0]])
```
```python
# Solve the System of Equations
I_m = Z_m.inv()*V_m
```
```python
# Input Current
result_I = "Input Current: \n $${} = {}$$".format(latex(I_1),
latex(I_m[0]))
display(Latex(result_I))
# Separate Numerator and denominator of I1
I_1_n, I_1_d = fraction(I_m[0])
```
Input Current:
$$I_{1} = \frac{V_{1} \left(- i R_{2} - i R_{L} + X_{2}\right)}{- i M_{i}^{2} w_{e}^{2} - i R_{1} R_{2} - i R_{1} R_{L} + R_{1} X_{2} + R_{2} X_{1} + R_{L} X_{1} + i X_{1} X_{2}}$$
```python
# Factor each polynomial separately
display(factor(I_1_n))
display(factor(I_1_d + (I*(M_i**2)*(w_e**2))))
```
$\displaystyle - i V_{1} \left(R_{2} + R_{L} + i X_{2}\right)$
$\displaystyle - i \left(R_{1} + i X_{1}\right) \left(R_{2} + R_{L} + i X_{2}\right)$
```python
# Reconstruct the factored result
I_1 = (factor(I_1_n))/(factor(I_1_d + (I*(M_i**2)*(w_e**2)))
- (I*(M_i**2)*(w_e**2)))
display(simplify(I_1))
```
$\displaystyle \frac{V_{1} \left(R_{2} + R_{L} + i X_{2}\right)}{M_{i}^{2} w_{e}^{2} + \left(R_{1} + i X_{1}\right) \left(R_{2} + R_{L} + i X_{2}\right)}$
- Input Impedance Expression $Z_{in}$:
```python
Z_in = V_1*(1/simplify(I_1))
display(Z_in)
```
$\displaystyle \frac{M_{i}^{2} w_{e}^{2} + \left(R_{1} + i X_{1}\right) \left(R_{2} + R_{L} + i X_{2}\right)}{R_{2} + R_{L} + i X_{2}}$
- Freq. Splitting occurs when $\Im(Z_{in}) = 0$
```python
# Assume both reactance are equal for simplicity
# Common reactance X_e
X_e = symbols('X_e', real = True)
# Substitute the variable
Z_in = Z_in.subs([(X_1, X_e), (X_2, X_e)])
# Imaginary part of Zin
simplify(im(Z_in))
```
$\displaystyle \frac{X_{e} \left(- M_{i}^{2} w_{e}^{2} - R_{1} R_{2} - R_{1} R_{L} + X_{e}^{2} + \left(R_{2} + R_{L}\right) \left(R_{1} + R_{2} + R_{L}\right)\right)}{X_{e}^{2} + \left(R_{2} + R_{L}\right)^{2}}$
```python
# Operate separately
factor(-(R_1*R_2) - (R_1*R_L) + ((R_2 + R_L)*(R_1 + R_2 + R_L)))
```
$\displaystyle \left(R_{2} + R_{L}\right)^{2}$
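Putting the two pieces together (and using $R^{'}_{L} \gg R_{2}$, as in the formula list at the end of this notebook), $\Im(Z_{in})=0$ with $\chi \neq 0$ requires $\chi^{2} + R^{'2}_{L} = (\omega M)^{2}$. Substituting $\chi = \omega L - \frac{1}{\omega C}$ and $M = k_i L$ and multiplying through by $\omega^{2}$ gives the quadratic in $\omega^{2}$:
$$\underbrace{L^{2}(1-k_{i}^{2})}_{p}\,(\omega^{2})^{2} + \underbrace{\left[R^{'2}_{L} - \tfrac{2L}{C}\right]}_{q}\,\omega^{2} + \underbrace{\tfrac{1}{C^{2}}}_{r} = 0, \qquad \Delta = q^{2} - 4pr.$$
With the design resonance $\omega_c^{2} = 1/(LC)$, i.e. $\beta = (\omega_c L)^{2} = L/C$, this gives $q = R^{'2}_{L} - 2\beta$ and $4pr = 4\beta^{2}(1-k_i^{2})$, which is exactly the `Delta_c` expression evaluated in the code below.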
## a) 1st Condition to only have one non-trivial root: $\Delta < 0$
```python
# Angular Resonance frequency (central)
w_c = 2*np.pi*(800e3) # 800 kHz
# Assume coils' self-inductance equal [H]
L_c = 53.49e-6
# Frequency-dependent parameter
beta_c = (w_c*L_c)**2
```
```python
# Formula for condition Delta
Delta_c = lambda R, beta, k: (((R**2) - (2*beta))**2) - (4*(beta**2)*(1 - (k**2)))
# Inductive Coupling Coefficient Range k_i
k_i = np.arange(0, 1 + 0.01, 0.01)
print(k_i.shape)
print(k_i[-1])
# Load Resistance [ohms]
R_L = np.arange(1., 1000. + 1., 1.)
print(R_L.shape)
print(R_L[-1])
# Create a mesh
k_i_m, R_L_m = np.meshgrid(k_i, R_L)
# Calculate the function Delta
f_delta = Delta_c(R_L_m, beta_c, k_i_m)
print(f_delta.shape)
```
(101,)
1.0
(1000,)
1000.0
(1000, 101)
```python
#interactive plotting in separate window
#%matplotlib qt
#normal charts inside notebooks
%matplotlib inline
# Set the plot --> size(width, height)
fig1 = plt.figure(figsize=(15,10))
################
# 1st Subplot
################
# Set the subplot for the two graphs
axs = fig1.add_subplot(1, 2, 1, projection = '3d')
# Complete figure information
fig1.suptitle('Frequency Splitting: $\Delta$ < 0', fontsize = 20)
# For a simple 3D plot
#ax = fig.gca(projection = '3d')
# Axis labels
axs.set_xlabel('Inductive Coupling Coefficient ($k_{i}$)')
axs.set_ylabel('Load Resistance ($R^{\'}_{L}$) [$\Omega$]')
axs.set_zlabel('Function $\Delta$')
# Title
axs.set_title('3D Plot of $\Delta$ Function', fontsize = 15)
# Plot the surface
surf = axs.plot_surface(k_i_m, R_L_m, f_delta, cmap = cm.coolwarm,
linewidth = 0, antialiased = False)
# Figure size (single plot)
#plt.rcParams["figure.figsize"] = (30,30)
# Add a color bar which maps values to colors.
plt.colorbar(surf, orientation = 'horizontal', shrink=0.6,
aspect=20, pad = 0.05)
################
# 2nd Subplot
################
# Set the subplot for the two graphs
axs = fig1.add_subplot(1, 2, 2)
# Plot the contour plot
cp = plt.contour(k_i_m, R_L_m, f_delta, cmap = cm.Spectral)
# Define the levels to label
#c_labels = [0.e0]
# To specify scientific notation
fmt = ticker.LogFormatterSciNotation()
axs.clabel(cp, inline = 1, colors = 'k', fmt = fmt, fontsize = 10)
# Color bar
plt.colorbar(cp, orientation = 'horizontal', shrink=0.6,
aspect=20, pad = 0.1)
# Information
axs.set_title('Contour Plot of $\Delta$ Function', fontsize = 15)
# Axis labels
axs.set_xlabel('Inductive Coupling Coefficient ($k_{i}$)')
axs.set_ylabel('Load Resistance ($R^{\'}_{L}$) [$\Omega$]')
axs.xaxis.grid(True)
axs.yaxis.grid(True)
# Distance between subplots
fig1.subplots_adjust(wspace=0.3)
plt.show()
```
```python
#fig1.savefig ("delta_ipt.eps", format = 'eps')
fig1.savefig ("delta_ipt.svg", format = "svg", dpi = 1200)
```
## b) 2nd Condition to only have one non-trivial root:
$(q>0) \wedge (\Delta>0) \wedge (-q + \sqrt{\Delta} \leq 0)$
- The condition $q>0$ implies $R_{L}> \sqrt{2}\omega L$
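A quick numeric check of this threshold, reusing `w_c`, `L_c` and `beta_c` defined above:
```python
# q changes sign at R_L = sqrt(2)*w_c*L_c (about 380 ohm for these parameters)
R_threshold = np.sqrt(2)*w_c*L_c
q_simple = lambda R: (R**2) - (2*beta_c)
R_threshold, q_simple(R_threshold - 1) < 0, q_simple(R_threshold + 1) > 0
```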
```python
# We use the previously defined w_c, L_c, and beta_c
# RL is re-defined to comply with the condition:
# Load Resistance [ohms]
R_L = np.arange(np.ceil(np.sqrt(2)*w_c*L_c),
1000. + 1., 1.) # Ceil function --> Up the inferior limit
print(R_L.shape)
print(f'Load Resistance interval: ({R_L[0]}, {R_L[-1]}]')
```
(620,)
Load Resistance interval: (381.0, 1000.0]
- With the previous $R_L$ values, determine:
<ol>
<li> Plot of $\Gamma(R_L, k_i) = -q(R_L) + \sqrt{\Delta(R_L, k_i)} \leq 0$.
<li> Only points with $\Delta > 0$ are important, so the rest will be nan.
</ol>
```python
# Formula for condition Gamma
Gamma_c = lambda R, beta, k: -((R**2) - (2*beta)) + (np.sqrt(Delta_c(R, beta, k)))
# Create a new mesh
k_i_m, R_L_m = np.meshgrid(k_i, R_L)
print(k_i_m.shape)
print(k_i_m)
print(R_L_m.shape)
print(R_L_m)
print()
# Calculate the Gamma function
f_gamma= Gamma_c(R_L_m, beta_c, k_i_m)
print(f'With sqrt(-1): \n {f_gamma} \n')
# Since 1 is not part of the result
# Invalid values (nan) are set to 1
f_gamma = np.nan_to_num(f_gamma, nan = 1)
print(f'After setting to 1: \n {f_gamma} \n')
```
(620, 101)
[[0. 0.01 0.02 ... 0.98 0.99 1. ]
[0. 0.01 0.02 ... 0.98 0.99 1. ]
[0. 0.01 0.02 ... 0.98 0.99 1. ]
...
[0. 0.01 0.02 ... 0.98 0.99 1. ]
[0. 0.01 0.02 ... 0.98 0.99 1. ]
[0. 0.01 0.02 ... 0.98 0.99 1. ]]
(620, 101)
[[ 381. 381. 381. ... 381. 381. 381.]
[ 382. 382. 382. ... 382. 382. 382.]
[ 383. 383. 383. ... 383. 383. 383.]
...
[ 998. 998. 998. ... 998. 998. 998.]
[ 999. 999. 999. ... 999. 999. 999.]
[1000. 1000. 1000. ... 1000. 1000. 1000.]]
With sqrt(-1):
[[ nan nan nan ... nan
nan 0. ]
[ nan nan nan ... nan
nan 0. ]
[ nan nan nan ... nan
nan 0. ]
...
[-12365.74627493 -12364.50058911 -12360.76354272 ... -486.26641676
-244.32643007 0. ]
[-12336.38550525 -12335.14282056 -12331.41477751 ... -485.12790536
-243.75454267 0. ]
[-12307.13565211 -12305.89595691 -12302.17688225 ... -483.99358204
-243.18475785 0. ]]
After setting to 1:
[[ 1.00000000e+00 1.00000000e+00 1.00000000e+00 ... 1.00000000e+00
1.00000000e+00 0.00000000e+00]
[ 1.00000000e+00 1.00000000e+00 1.00000000e+00 ... 1.00000000e+00
1.00000000e+00 0.00000000e+00]
[ 1.00000000e+00 1.00000000e+00 1.00000000e+00 ... 1.00000000e+00
1.00000000e+00 0.00000000e+00]
...
[-1.23657463e+04 -1.23645006e+04 -1.23607635e+04 ... -4.86266417e+02
-2.44326430e+02 0.00000000e+00]
[-1.23363855e+04 -1.23351428e+04 -1.23314148e+04 ... -4.85127905e+02
-2.43754543e+02 0.00000000e+00]
[-1.23071357e+04 -1.23058960e+04 -1.23021769e+04 ... -4.83993582e+02
-2.43184758e+02 0.00000000e+00]]
<ipython-input-13-0713b495cec5>:2: RuntimeWarning: invalid value encountered in sqrt
Gamma_c = lambda R, beta, k: -((R**2) - (2*beta)) + (np.sqrt(Delta_c(R, beta, k)))
```python
# Index of a specific value
result = np.where(f_gamma == 1)
print('Tuple of arrays returned : ', result)
# zip the 2 arrays to get the exact coordinates
listOfCoordinates= list(zip(result[0], result[1]))
# iterate over the list of coordinates
#for cord in listOfCoordinates:
# print(cord)
# Just the first and last coordinate
print(f'1st: {listOfCoordinates[0]},\nlast: {listOfCoordinates[-1]} \n')
# Minimum value in sq_de
#ind = np.unravel_index(np.argmin(sq_delta, axis=None), sq_delta.shape)
#print(ind)
#print(sq_delta[ind])
```
Tuple of arrays returned : (array([ 0, 0, 0, ..., 156, 156, 156]), array([ 0, 1, 2, ..., 8, 9, 10]))
1st: (0, 0),
last: (156, 10)
```python
#interactive plotting in separate window
#%matplotlib qt
#normal charts inside notebooks
%matplotlib inline
# Set the plot --> size(width, height)
fig2 = plt.figure(figsize=(15,10))
################
# 1st Subplot
################
# Set the subplot for the two graphs
axs = fig2.add_subplot(1, 2, 1, projection = '3d')
# Complete figure information
fig2.suptitle('Frequency Splitting: $(q>0) \wedge (\Delta>0) \wedge (-q + \sqrt{\Delta} \leq 0)$',
fontsize = 20)
# For a simple 3D plot
#ax = fig.gca(projection = '3d')
# Axis labels
axs.set_xlabel('Inductive Coupling Coefficient ($k_{i}$)')
axs.set_ylabel('Load Resistance ($R_{L}$) [$\Omega$]')
axs.set_zlabel('Function $\Gamma$')
# Title
axs.set_title('3D Plot of $\Gamma$ Function', fontsize = 15)
# Plot the surface
surf = axs.plot_surface(k_i_m, R_L_m, f_gamma, cmap = cm.coolwarm,
linewidth = 0, antialiased = False)
# Figure size (single plot)
#plt.rcParams["figure.figsize"] = (30,30)
# Add a color bar which maps values to colors.
plt.colorbar(surf, orientation = 'horizontal', shrink=0.6,
aspect=20, pad = 0.05)
################
# 2nd Subplot
################
# Set the subplot for the two graphs
axs = fig2.add_subplot(1, 2, 2)
# Plot the contour plot
cp = plt.contour(k_i_m, R_L_m,f_gamma,
levels = [-4e4, -2e4, -1e4, 0],
cmap = cm.Spectral)
# Define the levels to label
#c_labels = [0.e0]
# To specify scientific notation
fmt = ticker.LogFormatterSciNotation()
axs.clabel(cp, inline = 1, colors = 'k', fmt = fmt, fontsize = 10)
# Color bar
plt.colorbar(cp, orientation = 'horizontal', shrink=0.6,
aspect=20, pad = 0.1)
# Information
axs.set_title('Contour Plot of $\Gamma$ Function', fontsize = 15)
# Axis labels
axs.set_xlabel('Inductive Coupling Coefficient ($k_{i}$)')
axs.set_ylabel('Load Resistance ($R_{L}$) [$\Omega$]')
axs.xaxis.grid(True)
axs.yaxis.grid(True)
# Distance between subplots
fig2.subplots_adjust(wspace=0.3)
plt.show()
```
### Another way to visualize only the regions:
```python
"""
Terms definitions for the IPT problem
"""
# Definition of term p
# Return the value of p term
def p_termi(L, k_i):
return (L**2)*(1 - (k_i**2))
# Definition of term q
# Return the value of q term
def q_termi(R, L, C):
return (R**2) - ((2*L)/(C))
# Definition of term r
# Return the value of r term
def r_termi(C):
return (1/(C**(2)))
# Definition of condition Delta
def Delta_IPT(R, L, C, k_i):
p_aux = p_termi(L, k_i)
#print(f'p {p_aux}')
q_aux = q_termi(R, L, C)
#print(f'q {q_aux}')
r_aux = r_termi(C)
#print(f'r {r_aux}')
# Delta condition
delta = (q_aux**2) - (4*p_aux*r_aux)
#print(f'delta {delta}')
return delta
```
```python
# Assume coils' self-inductance equal [H]
L_c = 53.49e-6
C_r = 739.92e-12
# Inductive Coupling Coefficient Range k_i
k_i = np.arange(0, 1 + 0.01, 0.01)
print(k_i.shape)
print(k_i[-1])
# Load Resistance [ohms]
R_L = np.arange(1., 1000. + 1., 1.)
print(R_L.shape)
print(R_L[-1])
# Create a mesh
k_i_m, R_L_m = np.meshgrid(k_i, R_L)
```
(101,)
1.0
(1000,)
1000.0
```python
# Calculate the function gamma (Condition 2)
# Only check for the region, the value of
# the function is not important
f_gamma = ((q_termi(R_L_m, L_c, C_r) > 0)
& (Delta_IPT(R_L_m, L_c, C_r, k_i_m) > 0)
& (-q_termi(R_L_m, L_c, C_r)
+ np.sqrt(Delta_IPT(R_L_m, L_c, C_r, k_i_m)) <= 0)).astype(int)
print(f_gamma.shape)
print(f_gamma)
```
(1000, 101)
[[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]
...
[1 1 1 ... 1 1 1]
[1 1 1 ... 1 1 1]
[1 1 1 ... 1 1 1]]
<ipython-input-25-ae9987a86952>:8: RuntimeWarning: invalid value encountered in sqrt
+ np.sqrt(Delta_IPT(R_L_m, L_c, C_r, k_i_m)) <= 0)).astype(int)
```python
#interactive plotting in separate window
#%matplotlib qt
#normal charts inside notebooks
%matplotlib inline
# Set the plot --> size(width, height)
fig2 = plt.figure(figsize=(15,10))
################
# 1st Subplot
################
# Set the subplot for the two graphs
axs = fig2.add_subplot(1, 2, 1, projection = '3d')
# Complete figure information
fig2.suptitle('Frequency Splitting: $(q>0) \wedge (\Delta>0) \wedge (-q + \sqrt{\Delta} \leq 0)$',
fontsize = 20)
# For a simple 3D plot
#ax = fig.gca(projection = '3d')
# Axis labels
axs.set_xlabel('Inductive Coupling Coefficient ($k_{i}$)')
axs.set_ylabel('Load Resistance ($R^{\'}_{L}$) [$\Omega$]')
axs.set_zlabel('Function $\Gamma$')
# Title
axs.set_title('3D Plot of $\Gamma$ Function', fontsize = 15)
# Plot the surface
surf = axs.plot_surface(k_i_m, R_L_m, f_gamma, cmap = cm.coolwarm,
linewidth = 0, antialiased = False)
# Figure size (single plot)
#plt.rcParams["figure.figsize"] = (30,30)
# Add a color bar which maps values to colors.
plt.colorbar(surf, orientation = 'horizontal', shrink=0.6,
aspect=20, pad = 0.05)
################
# 2nd Subplot
################
# Set the subplot for the two graphs
axs = fig2.add_subplot(1, 2, 2)
# Plot the contour plot
cp = plt.contourf(k_i_m, R_L_m,f_gamma,
levels = [-1, 0, 1],
cmap = cm.Spectral)
# Define the levels to label
#c_labels = [0.e0]
# To specify scientific notation
fmt = ticker.LogFormatterSciNotation()
axs.clabel(cp, inline = 1, colors = 'k', fmt = fmt, fontsize = 10)
# Color bar
plt.colorbar(cp, orientation = 'horizontal', shrink=0.6, aspect=20, pad = 0.1)
# Information
axs.set_title('Contour Plot of $\Gamma$ Function', fontsize = 15)
# Axis labels
axs.set_xlabel('Inductive Coupling Coefficient ($k_{i}$)')
axs.set_ylabel('Load Resistance ($R^{\'}_{L}$) [$\Omega$]')
axs.xaxis.grid(True)
axs.yaxis.grid(True)
# Distance between subplots
fig2.subplots_adjust(wspace=0.5)
plt.show()
```
```python
fig2.savefig ("gamma_ipt.svg", format = "svg", dpi = 1200)
```
## c) 3rd Condition to only have one non-trivial root:
$(q<0) \wedge (\Delta>0) \wedge (-q - \sqrt{\Delta} \leq 0)$ <br>
This is not a condition to consider since it will provide at least <br>
one non-trivial root.
- The condition $q<0$ implies $R_L < \sqrt{2}\omega L$
```python
# We use the previously defined w_c, L_c, and beta_c
# RL is re-defined to comply with the condition:
# Load Resistance [ohms]
R_L = np.arange(1., (np.sqrt(2)*w_c*L_c), 1.)
print(R_L.shape)
print(f'Load Resistance interval: [{R_L[0]}, {R_L[-1]})')
```
(380,)
Load Resistance interval: [1.0, 380.0)
- With the previous $R_L$ values, determine:
<ol>
<li> Plot of $\Psi(R_L, k_i) = -q(R_L) - \sqrt{\Delta(R_L, k_i)} \leq 0$.
<li> Only points with $\Delta > 0$ are important, so the rest will be nan.
</ol>
```python
# Formula for condition Psi
Psi_c = lambda R, beta, k: -((R**2) - (2*beta)) - (np.sqrt(Delta_c(R, beta, k)))
# Create a new mesh
k_i_m, R_L_m = np.meshgrid(k_i, R_L)
print(k_i_m.shape)
print(k_i_m)
print(R_L_m.shape)
print(R_L_m)
print()
# Calculate the Psi function
f_psi= Psi_c(R_L_m, beta_c, k_i_m)
print(f'With sqrt(-1): \n {f_psi} \n')
# Since 1 is not part of the result
# Invalid values (nan) are set to 1
f_psi = np.nan_to_num(f_psi, nan = 1.)
print(f'After setting to 1: \n {f_psi} \n')
```
(380, 101)
[[0. 0.01 0.02 ... 0.98 0.99 1. ]
[0. 0.01 0.02 ... 0.98 0.99 1. ]
[0. 0.01 0.02 ... 0.98 0.99 1. ]
...
[0. 0.01 0.02 ... 0.98 0.99 1. ]
[0. 0.01 0.02 ... 0.98 0.99 1. ]
[0. 0.01 0.02 ... 0.98 0.99 1. ]]
(380, 101)
[[ 1. 1. 1. ... 1. 1. 1.]
[ 2. 2. 2. ... 2. 2. 2.]
[ 3. 3. 3. ... 3. 3. 3.]
...
[378. 378. 378. ... 378. 378. 378.]
[379. 379. 379. ... 379. 379. 379.]
[380. 380. 380. ... 380. 380. 380.]]
With sqrt(-1):
[[ nan 143239.12214754 141740.019677 ... 2891.66489673
1445.83234529 0. ]
[ nan 143611.91175624 141894.01763856 ... 2891.72612341
1445.86264939 0. ]
[ nan nan 142173.38925858 ... 2891.82817368
1445.91315905 0. ]
...
[ nan nan nan ... nan
nan 0. ]
[ nan nan nan ... nan
nan 0. ]
[ nan nan nan ... nan
nan 0. ]]
After setting to 1:
[[1.00000000e+00 1.43239122e+05 1.41740020e+05 ... 2.89166490e+03
1.44583235e+03 0.00000000e+00]
[1.00000000e+00 1.43611912e+05 1.41894018e+05 ... 2.89172612e+03
1.44586265e+03 0.00000000e+00]
[1.00000000e+00 1.00000000e+00 1.42173389e+05 ... 2.89182817e+03
1.44591316e+03 0.00000000e+00]
...
[1.00000000e+00 1.00000000e+00 1.00000000e+00 ... 1.00000000e+00
1.00000000e+00 0.00000000e+00]
[1.00000000e+00 1.00000000e+00 1.00000000e+00 ... 1.00000000e+00
1.00000000e+00 0.00000000e+00]
[1.00000000e+00 1.00000000e+00 1.00000000e+00 ... 1.00000000e+00
1.00000000e+00 0.00000000e+00]]
<ipython-input-12-f4cf8d1a4a28>:2: RuntimeWarning: invalid value encountered in sqrt
Psi_c = lambda R, beta, k: -((R**2) - (2*beta)) - (np.sqrt(Delta_c(R, beta, k)))
```python
# Index of a specific value
result = np.where(f_psi == 1)
print('Tuple of arrays returned : ', result)
# zip the 2 arrays to get the exact coordinates
listOfCoordinates= list(zip(result[0], result[1]))
# iterate over the list of coordinates
#for cord in listOfCoordinates:
# print(cord)
# Just the first and last coordinate
print(f'1st: {listOfCoordinates[0]},\nlast: {listOfCoordinates[-1]} \n')
# Minimum value in sq_de
#ind = np.unravel_index(np.argmin(sq_delta, axis=None), sq_delta.shape)
#print(ind)
#print(sq_delta[ind])
```
Tuple of arrays returned : (array([ 0, 1, 2, ..., 379, 379, 379]), array([ 0, 0, 0, ..., 97, 98, 99]))
1st: (0, 0),
last: (379, 99)
```python
#interactive plotting in separate window
#%matplotlib qt
#normal charts inside notebooks
%matplotlib inline
# To specify scientific notation
fmt = ticker.LogFormatterSciNotation()
# Set the plot --> size(width, height)
fig = plt.figure(figsize=(15,10))
################
# 1st Subplot
################
# Set the subplot for the two graphs
axs = fig.add_subplot(1, 2, 1, projection = '3d')
# Complete figure information
fig.suptitle('Frequency Splitting: $(q<0) \wedge (\Delta>0) \wedge (-q - \sqrt{\Delta} \leq 0)$',
fontsize = 20)
# For a simple 3D plot
#ax = fig.gca(projection = '3d')
# Axis labels
axs.set_xlabel('Inductive Coupling Coefficient ($k_{i}$)')
axs.set_ylabel('Load Resistance ($R_{L}$) [$\Omega$]')
axs.set_zlabel('Function $\Psi$')
# Title
axs.set_title('3D Plot of $\Psi$ Function', fontsize = 15)
# Plot the surface
surf = axs.plot_surface(k_i_m, R_L_m, f_psi, cmap = cm.coolwarm,
linewidth = 0, antialiased = False)
# Figure size (single plot)
#plt.rcParams["figure.figsize"] = (30,30)
# Add a color bar which maps values to colors.
plt.colorbar(surf, orientation = 'horizontal', shrink=0.6,
aspect=20, pad = 0.05)
################
# 2nd Subplot
################
# Set the subplot for the two graphs
axs = fig.add_subplot(1, 2, 2)
# Plot the contour plot
cp = plt.contour(k_i_m, R_L_m,f_psi,
levels = [0, 2e4, 6e4, 1e5],
cmap = cm.Spectral)
# Define the levels to label
#c_labels = [-2e10, 0, 1e10]
axs.clabel(cp, inline = 1, colors = 'k', fmt = fmt, fontsize = 10)
# Color bar
plt.colorbar(cp, format = fmt, orientation = 'horizontal', shrink=0.6,
aspect=20, pad = 0.1)
# Information
axs.set_title('Contour Plot of $\Psi$ Function', fontsize = 15)
# Axis labels
axs.set_xlabel('Inductive Coupling Coefficient ($k_{i}$)')
axs.set_ylabel('Load Resistance ($R_{L}$) [$\Omega$]')
axs.xaxis.grid(True)
axs.yaxis.grid(True)
# Distance between subplots
fig.subplots_adjust(wspace=0.3)
plt.show()
```
# 2. Freq. Splitting Model for Active Compensation Network
## Equations Derivation
```python
# Definition of the symbolic variables
# Resistances --> Coils and Load
R_1, R_2, R_L = symbols('R_1, R_2, R_L',
real = True)
# Inductance and Capacitance
C_1, C_2, L_1, L_2, M_i = symbols('C_1, C_2, L_1,L_2, M_i',
real = True)
# Reactance
X_1, X_2, X_m = symbols('X_1, X_2, X_m',
real = True)
# Angular frequency
w_e = symbols('w_e', real = True)
# Input Voltage and Current
V_1, I_1 = symbols('V_1, I_1', real = True)
```
- Equivalent Circuit's Matrix Representation: $[Z][I] = [V]$
```python
# Impedance Matrix [Z]
Z_m = Matrix([[R_1 + (I*X_1), I*X_m],
[I*X_m, R_2 + R_L + (I*X_2)]])
# Voltage Vector
V_m = Matrix([[V_1], [0]])
```
```python
# Solve the System of Equations
I_m = Z_m.inv()*V_m
```
```python
# Input Current
result_I = "Input Current: \n $${} = {}$$".format(latex(I_1),
latex(I_m[0]))
display(Latex(result_I))
# Separate Numerator and denominator of I1
I_1_n, I_1_d = fraction(I_m[0])
```
Input Current:
$$I_{1} = \frac{V_{1} \left(- i R_{2} - i R_{L} + X_{2}\right)}{- i R_{1} R_{2} - i R_{1} R_{L} + R_{1} X_{2} + R_{2} X_{1} + R_{L} X_{1} + i X_{1} X_{2} - i X_{m}^{2}}$$
```python
# Factor each polynomial separately
display(factor(I_1_n))
display(factor(I_1_d + (I*(X_m**2))))
```
$\displaystyle - i V_{1} \left(R_{2} + R_{L} + i X_{2}\right)$
$\displaystyle - i \left(R_{1} + i X_{1}\right) \left(R_{2} + R_{L} + i X_{2}\right)$
```python
# Reconstruct the factored result
I_1 = (factor(I_1_n))/(factor(I_1_d + (I*(X_m**2)))
- (I*(X_m**2)))
display(simplify(I_1))
```
$\displaystyle \frac{V_{1} \left(R_{2} + R_{L} + i X_{2}\right)}{X_{m}^{2} + \left(R_{1} + i X_{1}\right) \left(R_{2} + R_{L} + i X_{2}\right)}$
- Input Impedance Expression $Z_{in}$:
```python
Z_in = V_1*(1/simplify(I_1))
display(Z_in)
```
$\displaystyle \frac{X_{m}^{2} + \left(R_{1} + i X_{1}\right) \left(R_{2} + R_{L} + i X_{2}\right)}{R_{2} + R_{L} + i X_{2}}$
- Freq. Splitting occurs when $\Im(Z_{in}) = 0$
```python
# Assume both reactance are equal for simplicity
# Common reactance X_e
X_e = symbols('X_e', real = True)
# Substitute the variable
Z_in = Z_in.subs([(X_1, X_e), (X_2, X_e)])
# Imaginary part of Zin
simplify(im(Z_in))
```
$\displaystyle \frac{X_{e} \left(- R_{1} R_{2} - R_{1} R_{L} + X_{e}^{2} - X_{m}^{2} + \left(R_{2} + R_{L}\right) \left(R_{1} + R_{2} + R_{L}\right)\right)}{X_{e}^{2} + \left(R_{2} + R_{L}\right)^{2}}$
```python
# Operate separately
factor(-(R_1*R_2) - (R_1*R_L) + ((R_2 + R_L)*(R_1 + R_2 + R_L)))
```
$\displaystyle \left(R_{2} + R_{L}\right)^{2}$
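Following the same steps as in the IPT case, with $\chi = \omega L - \frac{1}{\omega C^{'}_{p}}$, $\chi_m = \omega M - \frac{1}{\omega C^{'}_{M}}$, $M = k_i L$ and $R^{'}_{L} \gg R_{2}$, the condition $\Im(Z_{in})=0$ with $\chi \neq 0$ becomes the quadratic in $\omega^{2}$ listed in the formula section at the end:
$$\left[ L^{2}(1 - k_{i}^{2}) \right] (\omega^{2})^{2} + \left[ R^{'2}_{L} + 2L \left(\frac{k_{i}}{C^{'}_{M}} - \frac{1}{C^{'}_{p}} \right) \right] \omega^{2} + \left( \frac{1}{C^{'2}_{p}} - \frac{1}{C^{'2}_{M}} \right) = 0,$$
whose coefficients are implemented below as `p_term`, `q_term` and `r_term`.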
- Analyze the simple non-trivial root $\chi = 0$
```python
# Definition of the symbolic variables
# Capacitances T-Model
C_ep, C_Mp = symbols('C_ep, C_Mp',
real = True)
# Capacitances Pi-Model
C_e, C_M = symbols('C_e, C_M',
real = True)
# Capacitive Coupling Coefficient
k_c = symbols('k_c', real = True)
```
```python
# Replace the k_c
C_M = k_c*C_e
# Equivalent Pi --> T Model
C_ep = C_e + (2*C_M)
C_Mp = (C_e/C_M)*(C_ep)
```
```python
# Operation for the main resonance frequency
# Defining the equivalent parallel capacitance
C_pp = ((1/C_ep) + (1/C_Mp))**(-1)
simplify(C_pp)
```
$\displaystyle \frac{C_{e} \left(2 k_{c} + 1\right)}{k_{c} + 1}$
```python
# C_Mp also appears in the formula
simplify(C_Mp)
```
$\displaystyle 2 C_{e} + \frac{C_{e}}{k_{c}}$
## a) 1st Condition to only have one non-trivial root: $\Delta < 0$
<ol>
<li> $L$ and $k_i$ are fixed from the IPT Model.
<li> External capacitance $C_e$ is defined according to standard values.
<li> $0 < k_c < 1$
<li> Frequency dependence is visualized separetely.
</ol>
```python
# Defining the Inductive Coupling Coefficient k_i
k_i = np.array([0.214])
# Assume coils' self-inductance equal [H]
L_c = 53.49e-6
# External Capacitance [F]
C_e = np.array([0.668e-9])
```
- Define methods to calculate the condition $\Delta$
```python
##################################
## Define Methods for Calculations
##################################
# Definition of Equivalent Parallel Capacitance C_pp
def C_ppri(C, k_c):
return (C*(1 + (2*k_c)))/(1 + k_c)
# Definition of the Mutual Capacitance in T-Model (C_Mp)
def C_Mpri(C, k_c):
return C*((1/k_c) + 2)
# Definition of term p
# Return the value of p term
def p_term(L, k_i):
return (L**2)*(1 - (k_i**2))
# Definition of term q
# Return the value of q term
def q_term(R, L, C, k_c, k_i):
return (R**2) + (2*L*((k_i/C_Mpri(C, k_c))
- (1/C_ppri(C, k_c))))
# Definition of term r
# Return the value of r term
def r_term(C, k_c):
return (1/((C_ppri(C, k_c))**2)) - (1/((C_Mpri(C, k_c))**2))
# Definition of condition Delta
def Delta_c1(R, L, C, k_c, k_i):
# Auxiliar variables
C_Mp = C_Mpri(C, k_c)
#print(f'C_Mp {C_Mp}')
C_pp = C_ppri(C, k_c)
#print(f'C_pp {C_pp}')
p_aux = p_term(L, k_i)
#print(f'p {p_aux}')
q_aux = q_term(R, L, C, k_c, k_i)
#print(f'q {q_aux}')
r_aux = r_term(C, k_c)
#print(f'r {r_aux}')
# Delta condition
delta = (q_aux**2) - (4*p_aux*r_aux)
#print(f'delta {delta}')
return delta
```
```python
# Capacitive Coupling Coefficient Range k_c
k_c = np.arange(0.01, 1 + 0.01, 0.01)
print(k_c.shape)
print(k_c)
# Load Resistance [ohms]
R_L = np.arange(1., 1000. + 1., 1.)
print(R_L.shape)
print(R_L[-1])
# Create a mesh
k_c_m, R_L_m = np.meshgrid(k_c, R_L)
# Calculate the function Delta
f_delta = Delta_c1(R_L_m, L_c, C_e, k_c_m, k_i)
print(f_delta.shape)
```
(100,)
[0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09 0.1 0.11 0.12 0.13 0.14
0.15 0.16 0.17 0.18 0.19 0.2 0.21 0.22 0.23 0.24 0.25 0.26 0.27 0.28
0.29 0.3 0.31 0.32 0.33 0.34 0.35 0.36 0.37 0.38 0.39 0.4 0.41 0.42
0.43 0.44 0.45 0.46 0.47 0.48 0.49 0.5 0.51 0.52 0.53 0.54 0.55 0.56
0.57 0.58 0.59 0.6 0.61 0.62 0.63 0.64 0.65 0.66 0.67 0.68 0.69 0.7
0.71 0.72 0.73 0.74 0.75 0.76 0.77 0.78 0.79 0.8 0.81 0.82 0.83 0.84
0.85 0.86 0.87 0.88 0.89 0.9 0.91 0.92 0.93 0.94 0.95 0.96 0.97 0.98
0.99 1. ]
(1000,)
1000.0
(1000, 100)
```python
#interactive plotting in separate window
#%matplotlib qt
#normal charts inside notebooks
%matplotlib inline
# Set the plot --> size(width, height)
fig1 = plt.figure(figsize=(15,10))
################
# 1st Subplot
################
# Set the subplot for the two graphs
axs = fig1.add_subplot(1, 2, 1, projection = '3d')
# Complete figure information
fig1.suptitle('Frequency Splitting Active CN: $\Delta$ Function', fontsize = 20)
# For a simple 3D plot
#ax = fig.gca(projection = '3d')
# Axis labels
axs.set_xlabel('Capacitive Coupling Coefficient ($k_c$)')
axs.set_ylabel('Load Resistance ($R_L$) [$\Omega$]')
axs.set_zlabel('Function $\Delta$')
# Title
axs.set_title('3D Plot of $\Delta$ Function', fontsize = 15)
# Plot the surface
surf = axs.plot_surface(k_c_m, R_L_m, f_delta, cmap = cm.coolwarm,
linewidth = 0, antialiased = False)
# Figure size (single plot)
#plt.rcParams["figure.figsize"] = (30,30)
# Add a color bar which maps values to colors.
plt.colorbar(surf, orientation = 'horizontal', shrink=0.6,
aspect=20, pad = 0.05)
################
# 2nd Subplot
################
# Set the subplot for the two graphs
axs = fig1.add_subplot(1, 2, 2)
# Plot the contour plot
cp = plt.contour(k_c_m, R_L_m, f_delta,
levels = [-0.5e11, 0, 1.5e11, 3e11, 4.5e11],
cmap = cm.Spectral)
# Define the levels to label
#c_labels = [0.e0]
# To specify scientific notation
fmt = ticker.LogFormatterSciNotation()
axs.clabel(cp, inline = 1, colors = 'k', fmt = fmt, fontsize = 10)
# Color bar
plt.colorbar(cp, orientation = 'horizontal', shrink=0.6,
aspect=20, pad = 0.1)
# Information
axs.set_title('Contour Plot of $\Delta$ Function', fontsize = 15)
# Axis labels
axs.set_xlabel('Capacitive Coupling Coefficient ($k_c$)')
axs.set_ylabel('Load Resistance ($R_L$) [$\Omega$]')
axs.xaxis.grid(True)
axs.yaxis.grid(True)
# Distance between subplots
fig1.subplots_adjust(wspace=0.3)
plt.show()
```
```python
#fig1.savefig ("delta_ipt.eps", format = 'eps')
fig1.savefig ("delta_acn.svg", format = "svg", dpi = 1200)
```
- Plot the behavior of the central resonance frequency w.r.t. $k_c$ <br>
$\omega_r = \sqrt{\frac{1}{L C^{\prime}_{p}}}$
```python
# Define central resonance frequency
def f_operation(L, C, k_c):
freq = (1/(2*np.pi))*np.sqrt((1)/(L*C_ppri(C, k_c)))
return freq
```
```python
# Calculate the resonance frequency
f_res = (1e-3)*f_operation(L_c, C_e, k_c)
print(f_res.shape)
```
(100,)
```python
#interactive plotting in separate window
%matplotlib qt
#normal charts inside notebooks
#%matplotlib inline
# Set the plot --> size(width, height)
fig2 = plt.figure(figsize=(10,10))
ax = fig2.gca()
# Axis labels
ax.set_xlabel('Capacitive Coupling Coefficient $k_{c}$')
ax.set_ylabel('Resonance Frequency ($f_{r}$) [kHz]')
# Title
ax.set_title('System\'s Resonance Frequency', fontsize = 25)
# Plot the surface
lines = ax.plot(k_c, f_res, 'g', linewidth = 2.5)
ax.xaxis.grid(True)
ax.yaxis.grid(True)
# Axis limits
ax.set_xlim([0, 1])
# Legend
#plt.legend(('Maxwell'), loc = 'upper right')
plt.show()
```
```python
fig2.savefig ("fresonant_acn.svg", format = "svg", dpi = 1200)
```
## b) 2nd Condition to only have one non-trivial root:
$(q>0) \wedge (\Delta>0) \wedge (-q + \sqrt{\Delta} \leq 0)$
```python
# Defining the Inductive Coupling Coefficient k_i
k_i = np.array([0.214])
# Assume coils' self-inductance equal [H]
L_c = np.array([53.49e-6])
# External Capacitance [F]
C_e = np.array([0.66e-9])
```
```python
# Capacitive Coupling Coefficient Range k_c
k_c = np.arange(0.01, 1 + 0.01, 0.01)
print(k_c.shape)
print(k_c)
# Load Resistance [ohms]
R_L = np.arange(1., 1000. + 1., 1.)
print(R_L.shape)
print(R_L[-1])
# Create a mesh
k_c_m, R_L_m = np.meshgrid(k_c, R_L)
```
(100,)
[0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09 0.1 0.11 0.12 0.13 0.14
0.15 0.16 0.17 0.18 0.19 0.2 0.21 0.22 0.23 0.24 0.25 0.26 0.27 0.28
0.29 0.3 0.31 0.32 0.33 0.34 0.35 0.36 0.37 0.38 0.39 0.4 0.41 0.42
0.43 0.44 0.45 0.46 0.47 0.48 0.49 0.5 0.51 0.52 0.53 0.54 0.55 0.56
0.57 0.58 0.59 0.6 0.61 0.62 0.63 0.64 0.65 0.66 0.67 0.68 0.69 0.7
0.71 0.72 0.73 0.74 0.75 0.76 0.77 0.78 0.79 0.8 0.81 0.82 0.83 0.84
0.85 0.86 0.87 0.88 0.89 0.9 0.91 0.92 0.93 0.94 0.95 0.96 0.97 0.98
0.99 1. ]
(1000,)
1000.0
```python
# Calculate the function gamma (Condition 2)
# Only check for the region, the value of
# the function is not important
f_gamma = ((q_term(R_L_m, L_c, C_e, k_c_m, k_i) > 0)
& (Delta_c1(R_L_m, L_c, C_e, k_c_m, k_i) > 0)
& (-q_term(R_L_m, L_c, C_e, k_c_m, k_i)
+ np.sqrt(Delta_c1(R_L_m, L_c, C_e, k_c_m, k_i)) <= 0)).astype(int)
print(f_gamma.shape)
print(f_gamma)
```
(1000, 100)
[[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]
...
[1 1 1 ... 1 1 1]
[1 1 1 ... 1 1 1]
[1 1 1 ... 1 1 1]]
<ipython-input-17-fd1090a33310>:8: RuntimeWarning: invalid value encountered in sqrt
+ np.sqrt(Delta_c1(R_L_m, L_c, C_e, k_c_m, k_i)) <= 0)).astype(int)
```python
# Index of an specific value
result = np.where(np.isnan(f_gamma))
print('Tuple of arrays returned : ', result)
# zip the 2 arrays to get the exact coordinates
listOfCoordinates= list(zip(result[0], result[1]))
# iterate over the list of coordinates
#for cord in listOfCoordinates:
# print(cord)
# Just the first and last coordinate
print(f'1st: {listOfCoordinates[0]},\nlast: {listOfCoordinates[-1]} \n')
# Minimum value in sq_de
#ind = np.unravel_index(np.argmin(sq_delta, axis=None), sq_delta.shape)
#print(ind)
#print(sq_delta[ind])
```
```python
#interactive plotting in separate window
#%matplotlib qt
#normal charts inside notebooks
%matplotlib inline
# Set the plot --> size(width, height)
fig1 = plt.figure(figsize=(15,10))
################
# 1st Subplot
################
# Set the subplot for the two graphs
axs = fig1.add_subplot(1, 2, 1, projection = '3d')
# Complete figure information
fig1.suptitle('Frequency Splitting: $(q>0) \wedge (\Delta>0) \wedge (-q + \sqrt{\Delta} \leq 0)$',
fontsize = 20)
# For a simple 3D plot
#ax = fig.gca(projection = '3d')
# Axis labels
axs.set_xlabel('Capacitive Coupling Coefficient ($k_c$)')
axs.set_ylabel('Load Resistance ($R_{L}$) [$\Omega$]')
axs.set_zlabel('Function $\Gamma$')
# Title
axs.set_title('3D Plot of $\Gamma$ Function', fontsize = 15)
# Plot the surface
surf = axs.plot_surface(k_c_m, R_L_m, f_gamma, cmap = cm.coolwarm,
linewidth = 0, antialiased = False)
# Figure size (single plot)
#plt.rcParams["figure.figsize"] = (30,30)
# Add a color bar which maps values to colors.
plt.colorbar(surf, orientation = 'horizontal', shrink=0.6,
aspect=20, pad = 0.05)
################
# 2nd Subplot
################
# Set the subplot for the two graphs
axs = fig1.add_subplot(1, 2, 2)
# Plot the contour plot
cp = plt.contourf(k_c_m, R_L_m, f_gamma,
levels = [-1, 0, 1],
cmap = cm.Spectral)
# Define the levels to label
#c_labels = [0.e0]
# To specify scientific notation
fmt = ticker.LogFormatterSciNotation()
axs.clabel(cp, inline = 1, colors = 'k', fmt = fmt, fontsize = 10)
# Color bar
plt.colorbar(cp, orientation = 'horizontal', shrink=0.6,
aspect=20, pad = 0.09)
# Information
axs.set_title('Contour Plot of $\Gamma$ Function', fontsize = 15)
# Axis labels
axs.set_xlabel('Capacitive Coupling Coefficient ($k_c$)')
axs.set_ylabel('Load Resistance ($R_L$) [$\Omega$]')
axs.xaxis.grid(True)
axs.yaxis.grid(True)
# Distance between subplots
fig1.subplots_adjust(wspace=0.3)
plt.show()
```
```python
fig1.savefig ("fresonant_acn.svg", format = "svg", dpi = 1200)
```
# 3. $\pi$-Model and $T$-Model equivalent equations
- Equivalent capacitive model for 4 plates
## $\pi$-Model:
```python
# Define the symbolic variables
# Capacitance
c_1, c_2, c_M = symbols('c_1 c_2 c_M', real = True)
# Angular frequency
w = symbols('w', real = True)
```
```python
# Common denominator
com_D = (1/(w*I))*(1/c_1 + 1/c_2 + 1/c_M)
```
```python
# T-model equivalent c'_1
Zp_1 = ((1/(w*I))**2)*((1/c_1)*(1/c_M))/com_D
simplify(Zp_1)
```
```python
# T-model equivalent c'_2
Zp_2 = ((1/(w*I))**2)*((1/c_2)*(1/c_M))/com_D
simplify(Zp_2)
```
```python
# T-model equivalent c'_M
Zp_M = ((1/(w*I))**2)*((1/c_1)*(1/c_2))/com_D
simplify(Zp_M)
```
## $T$-Model:
```python
# Define the symbolic variables
# Capacitance
cp_1, cp_2, cp_M = symbols('cp_1 cp_2 cp_M', real = True)
```
```python
# Common numerator
com_N = ((1/(w*I))**2)*((1/(cp_1*cp_M)) + (1/(cp_1*cp_2)) + (1/(cp_M*cp_2)))
```
```python
# Pi-model equivalent c_1
Z_1 = (com_N)/(1/(w*cp_2*I))
simplify(Z_1)
```
```python
# Pi-model equivalent c_2
Z_2 = (com_N)/(1/(w*cp_1*I))
simplify(Z_2)
```
```python
# Pi-model equivalent c_M
Z_M = (com_N)/(1/(w*cp_M*I))
simplify(Z_M)
```
# 4. Latex Formulas
$\Im(Z_{in}) = \frac{\chi [(R^{'}_{L} + R_{2})^{2} + \chi^{2} - (\omega M)^{2}]}{(R^{'}_{L} + R_{2})^{2} + \chi^{2}} = 0$
$\chi = \omega L - \frac{1}{\omega C}$
$k_{i} = \frac{M}{L} $
$R^{'}_{L} \gg R_{2} \Rightarrow (R^{'}_{L} + R_{2}) \approx R^{'}_{L}$
$f(\omega^{2}) = \left[ L^{2}(1 - k_{i}^{2}) \right] (\omega^{2})^{2} + \left[ R^{'2}_{L} - \frac{2L}{C} \right] \omega^{2} + \frac{1}{C^{2}}$
$\omega = \sqrt{\frac{-q \pm \sqrt{\Delta}}{2p}}$
$\Delta = q^{2} - 4pr$
$-q \pm \sqrt{\Delta} \leq 0$
$\Im(Z_{in}) = \frac{\chi [(R^{'}_{L} + R_{2})^{2} + \chi^{2} - \chi_{m}^{2}]}{(R^{'}_{L} + R_{2})^{2} + \chi^{2}} = 0$
$\chi = \omega L - \frac{1}{\omega C^{'}_{p}}$
$\chi_{m} = \omega M - \frac{1}{\omega C^{'}_{M}}$
$\frac{1}{C^{'}_{p}} = \frac{1}{C^{'}} + \frac{1}{C^{'}_{M}} = \frac{C + C_{M}}{C(C + 2C_{M})}$
$f(\omega^{2}) = \left[ L^{2}(1 - k_{i}^{2}) \right] (\omega^{2})^{2} + \left[ R^{'2}_{L} + 2L \left(\frac{k_{i}}{C^{'}_{M}} - \frac{1}{C^{'}_{p}} \right) \right] \omega^{2} + \left( \frac{1}{C^{'2}_{p}} - \frac{1}{C^{'2}_{M}} \right)$
$k_{c} = \frac{C_{M}}{C} $
```python
```
| 3746cff2a0ac284d2be70101aaf1f9ee848ec599 | 797,036 | ipynb | Jupyter Notebook | Active_CN_Model.ipynb | SanTT19/Symbolic_Python | 3bfaec4fc62de52e6a592cf3db333e605a35fcbc | [
"MIT"
] | null | null | null | Active_CN_Model.ipynb | SanTT19/Symbolic_Python | 3bfaec4fc62de52e6a592cf3db333e605a35fcbc | [
"MIT"
] | null | null | null | Active_CN_Model.ipynb | SanTT19/Symbolic_Python | 3bfaec4fc62de52e6a592cf3db333e605a35fcbc | [
"MIT"
] | null | null | null | 308.928682 | 144,144 | 0.921547 | true | 14,425 | Qwen/Qwen-72B | 1. YES
2. YES | 0.872347 | 0.899121 | 0.784346 | __label__eng_Latn | 0.343928 | 0.660631 |
<a href="https://colab.research.google.com/github/colbrydi/Scientific_Image_Understanding/blob/master/05-Registration-pre-class-assignment.ipynb" target="_parent"></a>
# Pre-Class Assignment: Image Registration
# Goals for today's pre-class assignment
1. [Image Registration](#Image-Registration)
2. [Basic Rigid Transforms](#Basic-Rigid-Transforms)
3. [ Affine Transforms](#-Affine-Transforms)
4. [Projective Transformations](#Projective-Transformations)
5. [Homography](#Homography)
---
<a name=Image-Registration></a>
# 1. Image Registration
Image registration is the process of transforming different sets of image data into a common coordinate system. This is like moving one image on top of another image. A reasonable overview of image registration can be found here:
https://en.wikipedia.org/wiki/Image_registration
Consider the following example of two pictures of Beaumont Tower at Michigan State university:
```python
# Read data for this assignment
%matplotlib inline
import matplotlib.pyplot as plt
from urllib.request import urlopen, urlretrieve
from imageio import imread, imsave
url1 = 'http://res.cloudinary.com/miles-extranet-dev/image/upload/ar_16:9,c_fill,w_1000,g_face,q_50/Michigan/migration_photos/G21696/G21696-msubeaumonttower01.jpg'
file1='Tower1.jpeg'
urlretrieve(url1, file1)
im1 = imread(file1)
url2 = 'https://research.msu.edu/wp-content/uploads/2019/11/beaumont-winter.jpg'
file2 = 'Tower2.jpeg'
urlretrieve(url2, file2)
im2= imread(file2)
f, (ax1, ax2) = plt.subplots(1, 2,figsize=(20,10))
ax1.imshow(im1)
ax2.imshow(im2)
```
---
<a name=Basic-Rigid-Transforms></a>
# 2. Basic Rigid Transforms
**✅ DO THIS:** Use the following code and sliders to register the second image onto the first image. Try to find the best fit so that the towers line up exactly.
The sliders will let you change the scale (s), x translation (tx), y-translation (ty), and rotation angle (angle). You can also adjust the alpha measure to help "see though" one image into the other.
```python
from __future__ import division
import matplotlib.pyplot as plt
import numpy as np
from skimage import transform
from ipywidgets import interact, fixed
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
im = im1
def affine_image(im1, im2, s=1,tx=0,ty=0, angle=0, alpha=0.5):
theta = -angle/180 * np.pi
dx = tx*im.shape[1]
dy = ty*im.shape[0]
S = np.matrix([[1/s,0,0], [0,1/s,0], [0,0,1]])
T2 = np.matrix([[1,0,im.shape[1]/2], [0,1,im.shape[0]/2], [0,0,1]])
T1 = np.matrix([[1,0,-im.shape[1]/2-dx], [0,1,-im.shape[0]/2-dy], [0,0,1]])
R = np.matrix([[np.cos(theta),-np.sin(theta),0],[np.sin(theta), np.cos(theta),0],[0,0,1]])
T = T2*S*R*T1;
img = transform.warp(im, T);
plt.imshow(im2);
plt.imshow(img, alpha=alpha);
plt.show();
interact(affine_image,
im1=fixed(im1),
im2=fixed(im2),
s=(0.001,5),
tx=(-1.0,1.0),
ty=(-1,1,0.1),
angle=(-180,180),
alpha=(0.0,1.0));
```
---
<a name=-Affine-Transforms></a>
# 3. Affine Transforms
The above example is a rigid-body transform. However, more complex transformations are possible. Watch the following video to learn more about Affine Transforms (a small code sketch follows the video).
```python
from IPython.display import YouTubeVideo
YouTubeVideo("il6Z5LCykZk",width=640,height=360)
```
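For reference, here is a minimal affine-transform sketch with `skimage`; the scale, rotation, shear and translation values are arbitrary, and `im1` is the tower image loaded earlier.
```python
# Minimal affine-transform example (parameter values are arbitrary)
from skimage import transform
import numpy as np
import matplotlib.pyplot as plt

aff = transform.AffineTransform(scale=(1.2, 0.9),
                                rotation=np.deg2rad(15),
                                shear=0.1,
                                translation=(30, -10))
im_affine = transform.warp(im1, aff.inverse)  # warp expects the inverse map
plt.imshow(im_affine);
```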
---
<a name=Projective-Transformations></a>
# 4. Projective Transformations
Projective transforms add additional degrees of freedom and can "skew" an image. Watch the following video on Projective Transforms.
```python
from IPython.display import YouTubeVideo
YouTubeVideo("uyYKPUZg3og",width=640,height=360)
```
---
<a name=Homography></a>
# 5. Homography
The homography operation calculates a projective transformation using a set of points on one plane mapped to a similar set of points on a different plane. More information about the math can be found here:
https://en.wikipedia.org/wiki/Homography_(computer_vision)
and
http://people.scs.carleton.ca/~c_shu/Courses/comp4900d/notes/homography.pdf
Consider the following example code from a psychological study in game theory. The researcher would like to have a computer watch a game between two checkers players and analyze their moves. To do this they need to register the checkerboards to the same grid. The code below uses ```skimage.transform.ProjectiveTransform```, which calculates the transform using homography.
```python
#The following code snip-it downloads a file from internet and saves it to your local directory.
from urllib.request import urlopen, urlretrieve
import scipy.misc as misc
url = 'https://goo.gl/j2SFnL'
file1 = 'Checkers.png'
urlretrieve(url, file1);
im = imread(file1)
#Points in source image coordinate system
src = np.array([[156, 197],[284, 181],[407, 177],[172, 296],[318, 275],[452, 264],[190, 418],[359, 387],[507, 371]])
```
```python
#Calculate Transform
from skimage import transform
width=1000
#Points in desitnation coordinate system
dst = np.array([[0, width/2, width, 0, width/2, width, 0, width/2, width,],
[0, 0, 0, width/2, width/2, width/2, width, width, width]]).T
#Calculate projective transform from the source to the destination
tform = transform.ProjectiveTransform()
tform.estimate(dst, src)
im2 = transform.warp(im, tform, output_shape=(width,width))
```
```python
import sympy as sym
sym
```
```python
#show original image next to transformed image
f, (ax1, ax2) = plt.subplots(1, 2,figsize=(20,10))
ax1.imshow(im)
ax1.scatter(src[:,0],src[:,1])
#Add numbers
for i in range(dst.shape[0]):
ax1.annotate(str(i+1), (src[i,0]+20,src[i,1]), color='white');
ax1.set_title('Source Plain')
ax2.imshow(im2)
ax2.scatter(dst[:,0],dst[:,1])
#Add numbers
for i in range(dst.shape[0]):
ax2.annotate(str(i+1), (dst[i,0]+20,dst[i,1]));
ax2.set_title('Destination Plain')
ax2.axis('equal');
```
---
Written by Dr. Dirk Colbry, Michigan State University
<a rel="license" href="http://creativecommons.org/licenses/by-nc/4.0/"></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc/4.0/">Creative Commons Attribution-NonCommercial 4.0 International License</a>.
---
| b4a973a529ea760a226a73e32a230e2ac0bf05dd | 10,367 | ipynb | Jupyter Notebook | 05-Registration-pre-class-assignment.ipynb | colbrydi/Scientific_Image_Understandin | c4c931fd3bf20f899b8ebaaeef24c15eaca43867 | [
"MIT"
] | 3 | 2021-02-24T15:23:42.000Z | 2022-01-10T20:36:11.000Z | 05-Registration-pre-class-assignment.ipynb | colbrydi/Scientific_Image_Understandin | c4c931fd3bf20f899b8ebaaeef24c15eaca43867 | [
"MIT"
] | null | null | null | 05-Registration-pre-class-assignment.ipynb | colbrydi/Scientific_Image_Understandin | c4c931fd3bf20f899b8ebaaeef24c15eaca43867 | [
"MIT"
] | 4 | 2021-03-01T16:54:31.000Z | 2022-01-24T20:42:36.000Z | 30.671598 | 394 | 0.57056 | true | 1,806 | Qwen/Qwen-72B | 1. YES
2. YES | 0.743168 | 0.839734 | 0.624063 | __label__eng_Latn | 0.791945 | 0.288239 |
# Balancer Simulations Math Challenge - Advanced
This notebook provides a collection of challenges for an advanced understanding of Balancer Math.
```python
import numpy as np
import pandas as pd
import plotly.express as px
import plotly.graph_objects as go
from plotly.subplots import make_subplots
import math
```
# Challenge No. 1
For the following questions, there are two tokens $X$ and $Y$, with $x$ representing the number of Token $X$ in the pool, and $y$ representing the number of Token $Y$ in the pool.
The plots below represent the invariant curves for two Balancer Pools. Both curves are of the form $2 = x^ay^{1-a}$ for some value of $a$. Try as many of the exercises below as you want -- discuss them with others and have fun!
1. Explain how the equation $V = B_1^{W_1}B_2^{W_2}$ becomes $2 = x^ay^{1-a}$ in this context.
#### Solution
$V$ is the invariant; its value in this pool is 2.
The balances $B_1$ and $B_2$ are written $x$ and $y$, and the weights are $W_1 = a$ and $W_2 = 1-a$ for Token $X$ and Token $Y$ respectively (the weights sum to 1).
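Spelling out the substitution from the general invariant to the two-token curve used here:
$$V = B_1^{W_1}B_2^{W_2} \quad\text{with}\quad B_1 = x,\; B_2 = y,\; W_1 = a,\; W_2 = 1-a,\; V = 2 \quad\Longrightarrow\quad 2 = x^{a}y^{1-a}.$$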
2. Give one possible value of $a$ and a pair of possible legal values for $x$ and $y$ in this context.
#### Solution
The only restriction on $a$ is that it be a value in the open interval (0,1).
Let's assume $x = 10$ and see how $y$ changes with $a$:
```python
a_vals = pd.Series(range(1,10,1))
List = pd.DataFrame(a_vals/10, columns=['a'])
List['1-a'] = 1-List.a
List['token_A'] = 10
List['token_B'] = (2/(List.token_A**List.a))**(1/(1-List.a))
List
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>a</th>
<th>1-a</th>
<th>token_A</th>
<th>token_B</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.1</td>
<td>0.9</td>
<td>10</td>
<td>1.672502</td>
</tr>
<tr>
<th>1</th>
<td>0.2</td>
<td>0.8</td>
<td>10</td>
<td>1.337481</td>
</tr>
<tr>
<th>2</th>
<td>0.3</td>
<td>0.7</td>
<td>10</td>
<td>1.003394</td>
</tr>
<tr>
<th>3</th>
<td>0.4</td>
<td>0.6</td>
<td>10</td>
<td>0.683990</td>
</tr>
<tr>
<th>4</th>
<td>0.5</td>
<td>0.5</td>
<td>10</td>
<td>0.400000</td>
</tr>
<tr>
<th>5</th>
<td>0.6</td>
<td>0.4</td>
<td>10</td>
<td>0.178885</td>
</tr>
<tr>
<th>6</th>
<td>0.7</td>
<td>0.3</td>
<td>10</td>
<td>0.046784</td>
</tr>
<tr>
<th>7</th>
<td>0.8</td>
<td>0.2</td>
<td>10</td>
<td>0.003200</td>
</tr>
<tr>
<th>8</th>
<td>0.9</td>
<td>0.1</td>
<td>10</td>
<td>0.000001</td>
</tr>
</tbody>
</table>
</div>
3. Rewrite the curve $2 = x^ay^{1-a}$ in the form "$y =$ (some expression involving x)" and plot it using the Python tool of your choice.
#### Solution
$$ y = (\frac{2}{x^a})^{1/(1-a)} $$
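A minimal plotting sketch (the weight $a = 0.6$ is an arbitrary choice; any value in $(0,1)$ works):
```python
# Plot y = (2/x**a)**(1/(1-a)) for one illustrative weight a
a = 0.6
x_vals = np.linspace(0.2, 10, 200)
y_vals = (2/x_vals**a)**(1/(1-a))
px.line(x=x_vals, y=y_vals, labels={'x': 'x (Token X balance)', 'y': 'y (Token Y balance)'})
```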
4. Generate 100 $(x,y)$ points on the curve $2 = x^{0.6}y^{0.4}$. Now take the **logarithm** of both $x$ and $y$ in your 100 points. Plot $\log(x)$ against $\log(y)$. What do you notice?
```python
x = pd.Series(range(10,0,-1))
a = .6
b = .4
List = pd.DataFrame(x, columns=['token_A'])
List['token_B'] = (2/(List.token_A**a))**(1/(b))
List['log_A'] = np.log(List.token_A)
List['log_B'] = np.log(List.token_B)
List
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>token_A</th>
<th>token_B</th>
<th>log_A</th>
<th>log_B</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>10</td>
<td>0.178885</td>
<td>2.302585</td>
<td>-1.721010</td>
</tr>
<tr>
<th>1</th>
<td>9</td>
<td>0.209513</td>
<td>2.197225</td>
<td>-1.562969</td>
</tr>
<tr>
<th>2</th>
<td>8</td>
<td>0.250000</td>
<td>2.079442</td>
<td>-1.386294</td>
</tr>
<tr>
<th>3</th>
<td>7</td>
<td>0.305441</td>
<td>1.945910</td>
<td>-1.185997</td>
</tr>
<tr>
<th>4</th>
<td>6</td>
<td>0.384900</td>
<td>1.791759</td>
<td>-0.954771</td>
</tr>
<tr>
<th>5</th>
<td>5</td>
<td>0.505964</td>
<td>1.609438</td>
<td>-0.681289</td>
</tr>
<tr>
<th>6</th>
<td>4</td>
<td>0.707107</td>
<td>1.386294</td>
<td>-0.346574</td>
</tr>
<tr>
<th>7</th>
<td>3</td>
<td>1.088662</td>
<td>1.098612</td>
<td>0.084950</td>
</tr>
<tr>
<th>8</th>
<td>2</td>
<td>2.000000</td>
<td>0.693147</td>
<td>0.693147</td>
</tr>
<tr>
<th>9</th>
<td>1</td>
<td>5.656854</td>
<td>0.000000</td>
<td>1.732868</td>
</tr>
</tbody>
</table>
</div>
```python
#plot curves with all
fig = px.line(List, x='log_B', y='log_A')
fig.update_xaxes(range=[-6, 6])
fig.update_yaxes(range=[-1, 6])
fig.update_layout(height=800, width=800, title_text='<b>AMM Curve</b>')
fig.show()
```
#### Solution
The relationship between $\log x$ and $\log y$ is linear, and the slope of the line is minus the ratio of the token weights.
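Taking logs of the invariant makes this explicit:
$$ a\log x + (1-a)\log y = \log 2 \;\;\Longrightarrow\;\; \log y = \frac{\log 2}{1-a} - \frac{a}{1-a}\,\log x ,$$
so plotting $\log y$ against $\log x$ gives a straight line with slope $-a/(1-a)$ (here $-0.6/0.4 = -1.5$).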
5. Both pictures above represent AMM curves of the form $2 = x^{a}y^{1-a}$. One of the curves has $a =0.6$ and the other has $a = 0.8$. Which is which? How can you tell?
#### Solution (skipped here: the pictures referenced in this question are not included in the notebook)
6. Both of the curves pictured above contain the point $(2,2)$. Explain why any curve of the form $2= x^{a}y^{1-a}$ will pass through $(2,2)$. Is this a special property of the number 2, or will any curve of the form $k = x^{a}y^{1-a}$ pass through $(k,k)$? (You can determine this with an algebraic proof, or by playing with Python graphs.)
#### Solution
Proof by contradiction. (Attempt 1)
Suppose there is no point $(k, k)$ in the curve $k = x^{a}y^{1-a}$.
$\begin{align}
& \text{Let $ y := x $} \quad (\textit{Is this sound?})\tag{1}. \\
& k = x^{a}y^{1-a} \quad \text{(given)} \tag{2} \\
& k = x^{a}x^{1-a} \quad \text{(substituting from (1))} \tag{3} \\
& k = x^{a+1-a} \quad \text{(product of exponents)} \tag{4} \\
& k = x^{1} \tag{5} \\
& k = x \tag{6}
\end{align} $
However, since $x = k$, per (1) this means $y = k$ also, meaning that point $(k, k)$ exists, contradicting the original assumption. $\square$
Proof by contradiction. (Attempt 2)
Suppose there is no point $(k, k)$ in the curve $k = x^{a}y^{1-a}$.
$
\begin{align*}
\text{Let}\ x &:= k \tag{1}. \\
k &= x^{a}y^{1-a} \quad \text{(Given.)} \tag{2} \\
k &= k^{a}y^{1-a} \quad \text{(Substituting from (1).)} \tag{3} \\
\frac{k}{k^{a}} &= y^{1-a} \tag{4} \\
\log \frac{k}{k^{a}} &= \log y^{1-a} \tag{5} \\
\log {k} - \log {k^{a}} &= \log y^{1-a} \tag{6} \\
\log {k} - a \log {k} &= (1-a) \log y \tag{7} \\
\log {k} - a \log {k} &= \log y - a \log y\tag{8} \\
k &= y \tag{9} \quad \text{(Dividing both sides by $1-a$ and using that $\log$ is one-to-one.)}
\end{align*}
$
However, since $x = k$ per (1) and also $y=k$ per (9) this means point $(k, k)$ exists, contradicting the original assumption. $\square$
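A quick numeric spot-check of the same claim:
```python
# Verify that (k, k) lies on k = x**a * y**(1-a) for a few arbitrary a and k
for a in (0.2, 0.5, 0.8):
    for k in (0.5, 2, 7):
        assert math.isclose(k**a * k**(1 - a), k)
print('all (k, k) checks passed')
```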
7. Suppose a curve $2 = x^ay^{1-a}$ passes through the point $(2,2)$ and $(p,q)$. Explain how you can use the values of $p$ and $q$ to determine the weights of Token $X$ and Token $Y$.
#### Solution
$
\begin{align*}
2 &= x^a y^{(1-a)} \\
2 &= p^a q^{(1-a)} \\
2 &= x^a y^{(1-a)} \\
p^a q^{(1-a)} &= x^a y^{(1-a)}\\
\log \left(p^a q^{(1-a)}\right) &= \log \left( x^a y^{(1-a)} \right)\\
\log (p^a) + \log (q^{(1-a)}) &= \log (x^a) + \log (y^{(1-a)}) \\
a \log p + (1-a) \log q &= a \log x + (1-a) \log y \\
a \log p + \log q - a \log q &= a \log x + \log y - a \log y \\
a (\log p - \log q) + \log q &= a (\log x - \log y) + \log y \\
\log q - \log y &= a (\log x - \log y) - a (\log p - \log q)\\
\log q - \log y &= a ((\log x - \log y) - (\log p - \log q))\\
\log q - \log y &= a (\log x + \log q - \log y - \log p )\\
\frac{\log q - \log y}{\log x + \log q - \log y - \log p }&= a \\
\end{align*}
$
$
\begin{align}
a = \frac{\log q - \log y}{\log x + \log q - \log y - \log p } \tag{1}
\end{align}
$
Check:
```python
import math

a = 0.8  # Let a = 0.8; we'll use this to check Equation 1.
x = 2    # Per premise
y = 2    # Per premise
p = 5    # Arbitrary point
q = (2/(p**a))**(1/(1-a))  # Calculated from p via the invariant function

# Check equality in Equation 1 (use isclose rather than == to avoid floating-point surprises).
math.isclose((math.log(q) - math.log(y)) / (math.log(x) + math.log(q) - math.log(y) - math.log(p)), a)
```
8. In **Alice's Pool**, which point has the highest spot price $SP_X^{Y}$: **A**, **B** or **C**? (You may be able to answer this without calculation.)
9. Suppose Alice is actively managing her pool against Bob's pool and wishes to trade it against Bob's pool. Identify a pair of points -- one in her pool and one in Bob's pool -- that would represent an arbitrage opportunity for Alice.
```python
x = pd.Series(range(100,0,-1))
a = .6
b = .4
List = pd.DataFrame(x/10, columns=['token_A'])
List['token_B'] = (2/(List.token_A**a))**(1/(b))
a = .8
b = .2
List2 = pd.DataFrame(x/10, columns=['token_A2'])
List2['token_B2'] = (2/(List2.token_A2**a))**(1/(b))
L = pd.concat([List, List2], axis=1)
L.head(20)
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>token_A</th>
<th>token_B</th>
<th>token_A2</th>
<th>token_B2</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>10.0</td>
<td>0.178885</td>
<td>10.0</td>
<td>0.003200</td>
</tr>
<tr>
<th>1</th>
<td>9.9</td>
<td>0.181603</td>
<td>9.9</td>
<td>0.003331</td>
</tr>
<tr>
<th>2</th>
<td>9.8</td>
<td>0.184389</td>
<td>9.8</td>
<td>0.003469</td>
</tr>
<tr>
<th>3</th>
<td>9.7</td>
<td>0.187248</td>
<td>9.7</td>
<td>0.003615</td>
</tr>
<tr>
<th>4</th>
<td>9.6</td>
<td>0.190181</td>
<td>9.6</td>
<td>0.003768</td>
</tr>
<tr>
<th>5</th>
<td>9.5</td>
<td>0.193192</td>
<td>9.5</td>
<td>0.003929</td>
</tr>
<tr>
<th>6</th>
<td>9.4</td>
<td>0.196283</td>
<td>9.4</td>
<td>0.004099</td>
</tr>
<tr>
<th>7</th>
<td>9.3</td>
<td>0.199458</td>
<td>9.3</td>
<td>0.004278</td>
</tr>
<tr>
<th>8</th>
<td>9.2</td>
<td>0.202718</td>
<td>9.2</td>
<td>0.004467</td>
</tr>
<tr>
<th>9</th>
<td>9.1</td>
<td>0.206069</td>
<td>9.1</td>
<td>0.004666</td>
</tr>
<tr>
<th>10</th>
<td>9.0</td>
<td>0.209513</td>
<td>9.0</td>
<td>0.004877</td>
</tr>
<tr>
<th>11</th>
<td>8.9</td>
<td>0.213054</td>
<td>8.9</td>
<td>0.005100</td>
</tr>
<tr>
<th>12</th>
<td>8.8</td>
<td>0.216696</td>
<td>8.8</td>
<td>0.005336</td>
</tr>
<tr>
<th>13</th>
<td>8.7</td>
<td>0.220443</td>
<td>8.7</td>
<td>0.005586</td>
</tr>
<tr>
<th>14</th>
<td>8.6</td>
<td>0.224299</td>
<td>8.6</td>
<td>0.005850</td>
</tr>
<tr>
<th>15</th>
<td>8.5</td>
<td>0.228269</td>
<td>8.5</td>
<td>0.006130</td>
</tr>
<tr>
<th>16</th>
<td>8.4</td>
<td>0.232357</td>
<td>8.4</td>
<td>0.006427</td>
</tr>
<tr>
<th>17</th>
<td>8.3</td>
<td>0.236569</td>
<td>8.3</td>
<td>0.006743</td>
</tr>
<tr>
<th>18</th>
<td>8.2</td>
<td>0.240910</td>
<td>8.2</td>
<td>0.007078</td>
</tr>
<tr>
<th>19</th>
<td>8.1</td>
<td>0.245385</td>
<td>8.1</td>
<td>0.007434</td>
</tr>
</tbody>
</table>
</div>
```python
# In pool 1, to buy 0.1 tokens A $0.181603 - 0.178885 = 0.0027179$ tokens_B are required
# In pool 2, to buy 0.1 tokens A $0.003331 - 0.003200 = 0.0001310$ tokens_B are required
# Considering the point token_A = 9.9, if we sell 0.1 token_A in pool 1 we get 0.0027179 token_B but the price of buying 0.1 token_A in pool 2 is only $0.003469-0.003331=0.0001379$ so we end up with the same amount of token_A + $0.0027179 - 0.0001379 = 0.002580$ token_B.
# If we want to buy more token_A in pool 2, our next 0.1 token_A would cost $0.003615-0.003469=0.0001460$ so we could have now .2 token_A plus $0.002580-0.0001460=0.002434$ token_B
```
0.002434
```python
cash = L.token_B.iloc[1]-L.token_B.iloc[0]
expense = 0
i = 1
count = 0.1
while expense < cash:
expense_new = L.token_B2.iloc[i+1]-L.token_B2.iloc[i]
i+=1
expense += expense_new
count += 0.1
count
```
1.5000000000000002
```python
#plot curves with all
List = List[['token_A', 'token_B']]
List2 = List2[['token_B2']]
df = pd.concat([List, List2], axis=1, ignore_index=False)
fig = px.line(df, x="token_A", y=["token_B", 'token_B2'])
fig.update_xaxes(range=[0, 10])
fig.update_yaxes(range=[0, 10])
fig.update_layout(height=1000, width=1000, title_text='<b>AMM Curve</b>')
fig.show()
```
10. Create the invariant curves for Alice's pool and Bob's pool in Python using the graphing tool of your choice. Choose one point on Bob's curve and highlight it in Blue. Call this point **Z**. On your graph of Alice's curve, color in red **all of the points that represent arbitrage opportunities for Alice against Z**, i.e. color red all of the points on Alice's curve that represent where she could make a profit if she traded with Bob while he held the position represented by **Z**. (**Note**: This is the hardest one in terms of symbolic math, since it involves solving an inequality involving power functions.)
# Challenge No. 2
Alice wants to set up another pool.
She knows that the current price of 1 token C is 37 token D.
She owns 57000 token C and 510000 token D. She plans to set a fee of 1% and maximize her capital returns via this pool.
#### 1.
What is the optimal set up for Alice’s new pool? Think about how you’d approach it and explain why.
#### 2.
Due to a huge price dump, the price of token C on external markets drops to 28 token D. Now arbitrage traders get to work. Imagine Alice wanted to re-balance the pool herself: how much token C would she need in order to re-balance the pool in one trade? Keep in mind swap fees and slippage!
#### 3.
Imagine Alice’s pool would be twice as big. How much liquidity would she need for the same price change?
#### 4.
Explore the general relation between the size of the pool (liquidity m in token C and n in token D) and price changes with a series of experiments.
• derive an expression
• plot a graph
#### SOLUTION 1.
We understand that profit comes from trading, and to be effective and maximize the return, the equilibrium of the pool should be around the trading price. In that sense, we need to calculate the weights of each token according to a Spot Price (SP) equal to the current price
$$ SP = \frac{\frac{B_c}{W_c}}{\frac{B_d}{W_d}} = \frac{1}{37} $$
with $$ W_c + W_d = 1 $$
$$ W_d = 1 - W_c $$
$$ SP = \frac{B_c*W_d}{B_d*W_c} = \frac{B_c}{B_d} * \frac{W_d}{1-W_d} $$
$$ SP * \frac{B_d}{B_c} * (1-W_d) = W_d $$
$$ SP * \frac{B_d}{B_c} = x $$
$$ x * (1-W_d) = x - x*W_d = W_d $$
$$ x = W_d (1+x)$$
$$ W_d = \frac{x}{1+x} $$
```python
SP = 1/37
Bc = 57000
Bd = 510000
x = SP*Bd/Bc
Wd = x/(1+x)
Wd
```
0.19473081328751435
```python
Wc = 1-Wd
Wc
```
0.8052691867124857
```python
# Checking:
SP_check = (Bc/Wc)/(Bd/Wd)
SP_check
```
0.027027027027027032
```python
# What should be the same as:
1/37
```
0.02702702702702703
#### SOLUTION 2.
Now the $ Price = 1/28 $ but the pool $ SP = 1/37$ so arbitrage is possible. To avoid it through one swap (including fees) we need to look for the trade that moves the price in the pool to $1/28$.
As a simplification we are going to consider:
$W_c = 0.805$
$W_d = 0.195$
```python
# FIRST WITHOUT FEES !
Bc = 57000 # initial balance
Bd = 510000 # initial balance
Wc = 0.805 #define weight
Wd = 0.195 #define weight
s_f = 0.01 #swap fee
inv = (Bc**Wc)*(Bd**Wd) #calculate invariant
a_vals = pd.Series(range(60160,60170,1))
#create dataframe with based on a_vals
List = pd.DataFrame(a_vals, columns=['token_C'])
#create values for plot, add Y_balances according to current invariant
List['invariant'] = inv # value required to calculate the token_D balance
List['token_D'] = (List.invariant/(List.token_C**Wc))**(1/Wd) # corresponding token_D balance according to the invariant
List['Price_C'] = 1/((List.token_C/Wc) / (List.token_D/Wd))
List['In-Given-Out'] = List.token_D * (( List.token_C / (List.token_C-1) )**(Wc/Wd) -1) # token_D required per 1 token_C taken out
List
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>token_C</th>
<th>invariant</th>
<th>token_D</th>
<th>Price_C</th>
<th>In-Given-Out</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>60160</td>
<td>87388.733071</td>
<td>408163.402192</td>
<td>28.008349</td>
<td>28.009542</td>
</tr>
<tr>
<th>1</th>
<td>60161</td>
<td>87388.733071</td>
<td>408135.395038</td>
<td>28.005961</td>
<td>28.007155</td>
</tr>
<tr>
<th>2</th>
<td>60162</td>
<td>87388.733071</td>
<td>408107.390270</td>
<td>28.003574</td>
<td>28.004768</td>
</tr>
<tr>
<th>3</th>
<td>60163</td>
<td>87388.733071</td>
<td>408079.387889</td>
<td>28.001187</td>
<td>28.002381</td>
</tr>
<tr>
<th>4</th>
<td>60164</td>
<td>87388.733071</td>
<td>408051.387896</td>
<td>27.998800</td>
<td>27.999994</td>
</tr>
<tr>
<th>5</th>
<td>60165</td>
<td>87388.733071</td>
<td>408023.390288</td>
<td>27.996414</td>
<td>27.997607</td>
</tr>
<tr>
<th>6</th>
<td>60166</td>
<td>87388.733071</td>
<td>407995.395067</td>
<td>27.994028</td>
<td>27.995221</td>
</tr>
<tr>
<th>7</th>
<td>60167</td>
<td>87388.733071</td>
<td>407967.402233</td>
<td>27.991642</td>
<td>27.992835</td>
</tr>
<tr>
<th>8</th>
<td>60168</td>
<td>87388.733071</td>
<td>407939.411783</td>
<td>27.989256</td>
<td>27.990449</td>
</tr>
<tr>
<th>9</th>
<td>60169</td>
<td>87388.733071</td>
<td>407911.423720</td>
<td>27.986871</td>
<td>27.988064</td>
</tr>
</tbody>
</table>
</div>
An approximation: Bc should be between 60,100 and 60,200, so the owner would need to bring ~3.1k token_C into the pool and would get 510k - 408k = 102k token_D
The exact calculation can be done with the In-Given-Price formula:
$$ A_i = B_i \left(\left(\frac{SP_{new}}{SP}\right)^{\frac{w_o}{w_o+w_i}}-1\right)$$
```python
SP = 1/37
SP_new = 1/28
A = Bc * (((SP_new/SP)**(Wd/(Wd+Wc)))-1)
A
# Amount of token_C to bring to the pool (in exchange for token_B)
```
3183.6295721159863
#### SOLUTION 2. with fees
In-Given-Out formula with fees:
a) paying the fee in the token I am getting
$$ A_i = B_i * ((\frac{B_o}{B_o + A_o(1-fee)})^{\frac{w_o}{w_i}} -1) $$
or b) if I pay the fee in the token I am giving
$$ A_i*(1-fee) = B_i * ((\frac{B_o}{B_o + A_o})^{\frac{w_o}{w_i}} -1) $$
```python
# WITH FEES
# First considerations: the amount of tokens to be paid should be higher as some of them goes directly to pay fees
Bc = 57000 # initial balance
Bd = 510000 # initial balance
Wc = 0.805 #define weight
Wd = 0.195 #define weight
s_f = 0 #swap fee
inv = (Bc**Wc)*(Bd**Wd) #calculate invariant
a_vals = pd.Series(range(0,5,1))
delta = 1-s_f
#create dataframe with based on a_vals
List = pd.DataFrame(a_vals, columns=['index'])
List['token_C'] = Bc + List.index*(delta)
#create values for plot, add Y_balances according to current invariant
List['invariant'] = inv #value required to calculate token B value
List['token_D'] = (List.invariant/(List.token_C**Wc))**(1/Wd) # calculate corresponding token_B value according to invariant
List['Price_C'] = 1/((List.token_C/Wc) / (List.token_D/Wd))
# List['In-Given-Out'] = (List.token_C * (( List.token_D / (List.token_D-1) )**(Wd/Wc) -1)) # / (1-s_f)
# List['Out-Given-In'] = (List.token_D * (( List.token_C / (List.token_C + 1) )**(Wc/Wd) -1))
List['New_Own'] = List.token_D * (List.token_C/(List.token_C + delta))**(Wc/Wd) - List.token_D
List
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>index</th>
<th>token_C</th>
<th>invariant</th>
<th>token_D</th>
<th>Price_C</th>
<th>New_Own</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0</td>
<td>57000</td>
<td>87388.733071</td>
<td>510000.000000</td>
<td>36.936572</td>
<td>-36.934911</td>
</tr>
<tr>
<th>1</th>
<td>1</td>
<td>57001</td>
<td>87388.733071</td>
<td>509963.065089</td>
<td>36.933249</td>
<td>-36.931588</td>
</tr>
<tr>
<th>2</th>
<td>2</td>
<td>57002</td>
<td>87388.733071</td>
<td>509926.133501</td>
<td>36.929927</td>
<td>-36.928266</td>
</tr>
<tr>
<th>3</th>
<td>3</td>
<td>57003</td>
<td>87388.733071</td>
<td>509889.205236</td>
<td>36.926604</td>
<td>-36.924943</td>
</tr>
<tr>
<th>4</th>
<td>4</td>
<td>57004</td>
<td>87388.733071</td>
<td>509852.280292</td>
<td>36.923283</td>
<td>-36.921622</td>
</tr>
</tbody>
</table>
</div>
```python
List.token_D.iloc[2] - List.token_D.iloc[1]
```
-36.93158792384202
```python
a_vals = pd.Series(range(0,5,1))
s_f = 0.01
delta = 1-s_f
#create dataframe with based on a_vals
List2 = pd.DataFrame(a_vals, columns=['index'])
List2['token_C'] = Bc + List2.index*(delta)
#create values for plot, add Y_balances according to current invariant
List2['invariant'] = inv #value required to calculate token B value
List2['token_D'] = (List2.invariant/(List2.token_C**Wc))**(1/Wd) # calculate corresponding token_B value according to invariant
List2['Price_C'] = 1/((List2.token_C/Wc) / (List2.token_D/Wd))
# List['In-Given-Out'] = (List.token_C * (( List.token_D / (List.token_D-1) )**(Wd/Wc) -1)) # / (1-s_f)
# List['Out-Given-In'] = (List.token_D * (( List.token_C / (List.token_C + 1) )**(Wc/Wd) -1))
List2['New_Own'] = List2.token_D * (List2.token_C/(List2.token_C +(delta)))**(Wc/Wd) - List2.token_D
List2
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>index</th>
<th>token_C</th>
<th>invariant</th>
<th>token_D</th>
<th>Price_C</th>
<th>New_Own</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0</td>
<td>57000.00</td>
<td>87388.733071</td>
<td>510000.000000</td>
<td>36.936572</td>
<td>-36.565578</td>
</tr>
<tr>
<th>1</th>
<td>1</td>
<td>57000.99</td>
<td>87388.733071</td>
<td>509963.434422</td>
<td>36.933282</td>
<td>-36.562321</td>
</tr>
<tr>
<th>2</th>
<td>2</td>
<td>57001.98</td>
<td>87388.733071</td>
<td>509926.872101</td>
<td>36.929993</td>
<td>-36.559065</td>
</tr>
<tr>
<th>3</th>
<td>3</td>
<td>57002.97</td>
<td>87388.733071</td>
<td>509890.313035</td>
<td>36.926704</td>
<td>-36.555809</td>
</tr>
<tr>
<th>4</th>
<td>4</td>
<td>57003.96</td>
<td>87388.733071</td>
<td>509853.757226</td>
<td>36.923415</td>
<td>-36.552554</td>
</tr>
</tbody>
</table>
</div>
```python
List2.token_D.iloc[2] - List2.token_D.iloc[1]
```
-36.56232138548512
```python
1/(List2.token_D.iloc[3]/List.token_D.iloc[3])
```
0.9999978273765967
```python
# Easy solution:
# The effective tokens in the trade should be the same as before: 3183.6295721159863 but fees need to be paid
# As the effective tokens going into the pool are X-fee = 3183.63 and fee = 1%, .99*X = 3183.63 ->
# fee:
s_f = 0.01
A_new = A / (1-s_f)
A_new
```
3215.7874465818045
```python
# Paid fees:
3215.7874465818045 - 3183.6295721159863
```
32.15787446581817
#### SOLUTION 3.
```python
SP = 1/37
SP_new = 1/28
Bc = 2*Bc
A = Bc * (((SP_new/SP)**(Wd/(Wd+Wc)))-1)
A
# Amount of token_C to bring to the pool (in exchange for token_B)
```
6367.259144231973
```python
# Ratio
6367.259144231973 / 3183.6295721159863
```
2.0
```python
# Variables
# Size: S
# Change in price: delta
# Number of tokens: T
# T produces a delta in a pool with size S
# n*t produces a delta in a pool with size n*S
```
$$ A_i = B_i \left(\left(\frac{SP_{new}}{SP}\right)^{\frac{w_o}{w_o+w_i}}-1\right)$$
$$ A_i = B_i*(cte)$$
New pool with $n*B_i$
$$ m*A_i = n*B_i*(cte)$$
$$ \frac{m*A_i}{A_i} = \frac{n*B_i (cte)}{B_i*(cte)}$$
$$ m = n $$
If the size is doubled ($B_i \to 2B_i$), then $A_i \to 2A_i$.
So the number of tokens needed scales by the same factor as the increase in pool size.
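A sketch of the requested plot (it assumes the same weights and price change as in the earlier cells; `px` is plotly express, as imported earlier in the notebook):
```python
import numpy as np
import plotly.express as px

SP, SP_new = 1/37, 1/28
Wc, Wd = 0.805, 0.195
Bc0 = 57000                                         # base pool size in token C
n = np.linspace(1, 10, 50)                          # pool size multiplier
A_n = n * Bc0 * ((SP_new/SP)**(Wd/(Wd + Wc)) - 1)   # token C needed for the same price move
fig = px.line(x=n, y=A_n,
              labels={'x': 'pool size multiplier n', 'y': 'token C required'},
              title='Tokens required for the same price change vs pool size')
fig.show()
```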
# Challenge No. 3
Bob runs a 50-50 Token X-Token Y pool with a 5% swap fee.
The following traders want to interact with this pool:
- Carlos wants to increase the amount of Token X by 10% by trading X for Y
- Diana wants to increase the amount of Token Y by 10% by trading Y for X
- Ellen wants to increase the spot price X:Y by 10%
- Fabricio wants to increase the spot price Y:X by 10%
#### 1.
For each action, specify the amount that the person will need to trade in to achieve the result.
#### 2.
These actions cannot occur simultaneously and will be processed in some order. There are 24 different total orders -- choose a few different ones and simulate the effect on spot price and liquidity. Can you draw a conclusion about how increasing liquidity or changing balances affects spot price? If not, try running additional simulations with numbers other than 10%.
#### 3.
Now add in two agents who want to increase the pool liquidity by making a deposit. With 6 actors, there are 720 different orderings of the actions. Analyze some. Which actions work "together" (reducing the amount needed of the second action to achieve a particular effect) and which work "against each other"?
#### SOLUTION 1.
```python
# Pool definition
Bc = 50000 # initial balance
Bd = 50000 # initial balance
Wc = 0.5 #define weight
Wd = 0.5 #define weight
s_f = 0.05 #swap fee
inv = (Bc**Wc)*(Bd**Wd) #calculate invariant
a_vals = pd.Series(range(50000,55000,1))
#create dataframe with based on a_vals
List = pd.DataFrame(a_vals, columns=['token_C'])
#create values for plot, add Y_balances according to current invariant
List['invariant'] = inv #value required to calculate token B value
List['token_D'] = (List.invariant/((List.token_C)**Wc))**(1/Wd)# calculate corresponding token_B value according to invariant
# List['In-Given-Out'] = List.token_D * (( List.token_C / (List.token_C-1) )**(Wc/Wd) -1)
# List['Out-Given-Out'] = List.token_D * (( List.token_C / (List.token_C-(1-s_f)) )**(Wc/Wd) -1)
List
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>token_C</th>
<th>invariant</th>
<th>token_D</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>50000</td>
<td>50000.0</td>
<td>50000.000000</td>
</tr>
<tr>
<th>1</th>
<td>50001</td>
<td>50000.0</td>
<td>49999.000020</td>
</tr>
<tr>
<th>2</th>
<td>50002</td>
<td>50000.0</td>
<td>49998.000080</td>
</tr>
<tr>
<th>3</th>
<td>50003</td>
<td>50000.0</td>
<td>49997.000180</td>
</tr>
<tr>
<th>4</th>
<td>50004</td>
<td>50000.0</td>
<td>49996.000320</td>
</tr>
<tr>
<th>...</th>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<th>4995</th>
<td>54995</td>
<td>50000.0</td>
<td>45458.678062</td>
</tr>
<tr>
<th>4996</th>
<td>54996</td>
<td>50000.0</td>
<td>45457.851480</td>
</tr>
<tr>
<th>4997</th>
<td>54997</td>
<td>50000.0</td>
<td>45457.024929</td>
</tr>
<tr>
<th>4998</th>
<td>54998</td>
<td>50000.0</td>
<td>45456.198407</td>
</tr>
<tr>
<th>4999</th>
<td>54999</td>
<td>50000.0</td>
<td>45455.371916</td>
</tr>
</tbody>
</table>
<p>5000 rows × 3 columns</p>
</div>
```python
# Carlos wants to increase the amount of Token X by 10% by trading X for Y.
# He needs to bring into the pool x*(1-s_f) = 0.1*B_x, i.e. x = (0.1*B_x) / (1-s_f)
# (see the numeric sketch below).
# Diana: same result, trading Y for X.
# Ellen & Fabricio: handled with the In-Given-Price formula below.
```
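Before applying the In-Given-Price formula to Ellen and Fabricio, here is a minimal numeric sketch for Carlos and Diana (values taken from the pool defined above; the fee is redefined locally):
```python
Bx = 50000                         # balance of the token being sent in (X for Carlos, Y for Diana)
s_f = 0.05                         # 5% swap fee
trade_in = 0.1 * Bx / (1 - s_f)    # amount to send so 10% of the balance reaches the pool after fees
trade_in
```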
$$ A_i = B_i \left(\left(\frac{SP_{new}}{SP}\right)^{\frac{w_o}{w_o+w_i}}-1\right)$$
$$ A_i = B_i \left(1.1^{\frac{w_o}{w_o+w_i}}-1\right)$$
```python
A_i = Bc*((1.1)**(0.5)-1)
A_i
```
2440.4424085075816
```python
2440.4424085075816/Bc
```
0.04880884817015163
#### BE CAREFUL: WEIGHTS HAVE CHANGED NOW
#### SOLUTION 2.
Actions acting in the same direction:
- trading X for Y, and increasing SP by bringing X into the pool
- trading Y for X, and increasing SP by bringing Y into the pool
# Challenge No. 4
A user has $a$ **TKNA** and $b$ **TKNB**:
The TKNA-TKNB pool has 50:50 weights, and reserves of TKNA and TKNB of m and n, respectively.
The user wishes to perform a swap such that the ratio of TKNA:TKNB in their wallet is p:q
$$\frac{a + x}{b + y} = \frac{p}{q}$$
#### 1.
Assuming the user performs a swap between TKNA and TKNB on the TKNA-TKNB pool, derive an expression for x and y.
#### 2.
Using data or algebra, which is the best option to reduce slippage when trading X for Y:
a) finding a pool with the same invariant but a different X balance value,
b) finding a pool with a different weight on X,
c) or finding a pool with a different invariant value?
Justify your answer in any way you like,
and explain whether "different" should be "smaller" or "larger" here.
#### SOLUTION 1.
$$ x = \frac{p}{q} * (b+y) - a $$
Out-Given-In
$$ A_o = B_o \left(1-\left(\frac{B_i}{B_i + A_i}\right)^{\frac{w_i}{w_o}}\right) = B_o \left(1- \frac{B_i}{B_i + A_i}\right) = n \left(1- \frac{m}{m + \frac{p}{q}(b+y) - a}\right) = y$$
where $A_i = x$ and $A_o = y$
#### SOLUTION 2.
a) if the pools have the same invariant, the depth (and resistance to slippage) is the same. The exact value of slippage can be calculated from the balance point, and we can add that if the number of tokens X is bigger in the second pool, the slippage will be smaller, and if the number of tokens X is smaller, the slippage will be bigger.
b) if the weight of X is bigger, the pool is more resistant to slippage; if the weight is smaller, the impact of a swap is bigger.
c) a bigger invariant value (coming from a bigger depth of the pool, i.e. a bigger amount of tokens X and tokens Y) makes the pool more resistant to slippage.
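A rough numeric illustration of point (c) (a sketch; 50:50 constant-product pools with the same spot price but different depths, no fees):
```python
def out_given_in(B_in, B_out, A_in):
    # 50:50 pool: amount of the output token received for sending A_in of the input token
    return B_out * (1 - B_in / (B_in + A_in))

trade = 100
shallow = out_given_in(1_000, 1_000, trade)     # small invariant
deep = out_given_in(10_000, 10_000, trade)      # 10x larger invariant, same spot price
print(shallow, deep)   # the deeper pool returns an amount much closer to the spot-price value of 100
```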
| c7a5a424e6233ef9e90b16030822d07a0715158b | 114,137 | ipynb | Jupyter Notebook | Math Challenges-Advanced.ipynb | bloxmove-com/Token_Engineering_Math_Challenge_All | 73fbff799cc4aa31dbea95cc80e2345219864aa5 | [
"MIT"
] | null | null | null | Math Challenges-Advanced.ipynb | bloxmove-com/Token_Engineering_Math_Challenge_All | 73fbff799cc4aa31dbea95cc80e2345219864aa5 | [
"MIT"
] | null | null | null | Math Challenges-Advanced.ipynb | bloxmove-com/Token_Engineering_Math_Challenge_All | 73fbff799cc4aa31dbea95cc80e2345219864aa5 | [
"MIT"
] | null | null | null | 26.076536 | 624 | 0.379439 | true | 12,070 | Qwen/Qwen-72B | 1. YES
2. YES | 0.903294 | 0.815232 | 0.736395 | __label__eng_Latn | 0.730221 | 0.549224 |
# [HW10] Simple Linear Regression
## 1. Linear regression
Linear regression is a method for modeling the linear relationship between a dependent variable $y$ and one or more independent variables $X$. The independent variables represent inputs or causes, while the dependent variable is affected by the independent variables and usually represents the outcome.
Modeling a linear relationship means finding a first-order straight line. By finding the optimal line that best explains our data, we derive the relationship between the independent and dependent variables.
In this exercise we will perform simple linear regression, which has a single independent variable. Let's define a line with one variable.
$$f(x_i) = wx_i + b$$
The line that best explains our data is the one whose predictions are closest to the actual data values. As seen above, the value predicted by our model is $f(x_i)$, and the actual data is $y$.
Our goal is to reduce the difference between the actual data (the red dots in the figure above) and the line. Based on this, let's define the cost function as follows.
$$\text{cost function} = \frac{1}{N}\sum_{i=1}^n (y_i - f(x_i))^2$$
We need to find the $w$ and $b$ that minimize the cost function.
Our cost function is a quadratic function. We learned how to find the minimum of a quadratic function in high school math! Let's revisit that method, and then look at the new gradient descent method.
### 1.1 Analytically
How can we find the minimum of the following function?
$$f(w) = w^2 + 3w -5$$
The method we learned in high school is to find the point where the derivative equals zero.
Solving it by hand is probably familiar, so let's work through it in code using the sympy and numpy packages.
```python
import sympy
import numpy
from matplotlib import pyplot
%matplotlib inline
sympy.init_printing()
```
```python
w = sympy.Symbol('w', real=True)
f = w**2 + 3*w - 5
f
```
```python
sympy.plotting.plot(f);
```
The first derivative can be obtained as follows.
```python
fprime = f.diff(w)
fprime
```
And the root of that expression can be found as follows.
```python
sympy.solve(fprime, w)
```
### 1.2 Gradient Descent
The second method is the gradient descent approach we learned today: instead of reaching the answer in one step, it approaches the answer iteratively.
Let's understand this through code as well.
First, let's create a function that computes the gradient.
```python
fpnum = sympy.lambdify(w, fprime)
type(fpnum)
```
function
Next, set an initial value of $w$ and iteratively move toward the minimum.
```python
w = 10.0 # starting guess for the min
for i in range(1000):
w = w - fpnum(w)*0.01 # with 0.01 the step size
print(w)
```
-1.4999999806458753
As we can see, the first and second methods produced the same value.
Let's now create some data ourselves and apply the gradient descent method.
### 1.3 Linear regression
To work with a dataset that actually has a linear relationship, let's generate the data ourselves.
We will add a bit of noise using the normal distribution function in the NumPy package.
```python
x_data = numpy.linspace(-5, 5, 100)
w_true = 2
b_true = 20
y_data = w_true*x_data + b_true + numpy.random.normal(size=len(x_data))
pyplot.scatter(x_data,y_data);
```
```python
x_data.shape
```
```python
y_data.shape
```
We have generated 100 data points in total. Now let's work through it in code.
First, let's express the cost function.
```python
w, b, x, y = sympy.symbols('w b x y')
cost_function = (w*x + b - y)**2
cost_function
```
As in the gradient descent example above, we define the gradient functions.
```python
grad_b = sympy.lambdify([w,b,x,y], cost_function.diff(b), 'numpy')
grad_w = sympy.lambdify([w,b,x,y], cost_function.diff(w), 'numpy')
```
Now let's define initial values for $w$ and $b$ and apply gradient descent to find the $w$ and $b$ that minimize the cost function.
```python
w = 0
b = 0
for i in range(1000):
descent_b = numpy.sum(grad_b(w,b,x_data,y_data))/len(x_data)
descent_w = numpy.sum(grad_w(w,b,x_data,y_data))/len(x_data)
w = w - descent_w*0.01 # with 0.01 the step size
b = b - descent_b*0.01
print(w)
print(b)
```
2.0303170198038307
19.90701393429327
We obtained values very close to the $w$ and $b$ values we used when generating the data.
```python
pyplot.scatter(x_data,y_data)
pyplot.plot(x_data, w*x_data + b, '-r');
```
We can see that the line we found fits the data well. Next, let's perform linear regression on real data.
## 2. Earth temperature over time
Using the linear regression method we learned today, let's analyze how Earth's temperature has changed over time.
We will analyze it through an indicator called the global temperature anomaly.
Here, a temperature anomaly is the difference from a fixed reference temperature. For example, a large positive anomaly means the temperature was warmer than usual, and a negative anomaly means it was colder than usual.
Because temperatures differ across the various regions of the world, we will use the global temperature anomaly for our analysis. More details can be found at the link below.
https://www.ncdc.noaa.gov/monitoring-references/faq/anomalies.php
```python
from IPython.display import YouTubeVideo
YouTubeVideo('gGOzHVUQCw0')
```
From the video above, we can see that temperatures are steadily rising.
From here on, let's fetch the real data and analyze it.
### Step 1 : Read a data file
We will get the data from the NOAA (National Oceanic and Atmospheric Administration) website.
Download the data with the command below.
```python
from urllib.request import urlretrieve
URL = 'http://go.gwu.edu/engcomp1data5?accessType=DOWNLOAD'
urlretrieve(URL, 'land_global_temperature_anomaly-1880-2016.csv')
```
('land_global_temperature_anomaly-1880-2016.csv',
<http.client.HTTPMessage at 0x7f158b3dca50>)
Load the downloaded data using the numpy package.
```python
import numpy
```
```python
fname = '/content/land_global_temperature_anomaly-1880-2016.csv'
year, temp_anomaly = numpy.loadtxt(fname, delimiter=',', skiprows=5, unpack=True)
```
### Step 2 : Plot the data
Let's draw a 2D plot using pyplot from the Matplotlib package.
```python
from matplotlib import pyplot
%matplotlib inline
```
```python
pyplot.plot(year, temp_anomaly);
```
Let's add some more information to the plot to make it easier to read.
```python
pyplot.rc('font', family='serif', size='18')
#You can set the size of the figure by doing:
pyplot.figure(figsize=(10,5))
#Plotting
pyplot.plot(year, temp_anomaly, color='#2929a3', linestyle='-', linewidth=1)
pyplot.title('Land global temperature anomalies. \n')
pyplot.xlabel('Year')
pyplot.ylabel('Land temperature anomaly [°C]')
pyplot.grid();
```
### Step 3 : Analytically
To perform linear regression, we first define the line.
$$f(x_i) = wx_i + b$$
Next, we define the cost function we learned in class. The cost function we need to minimize is:
$$\frac{1}{n} \sum_{i=1}^n (y_i - f(x_i))^2 = \frac{1}{n} \sum_{i=1}^n (y_i - (wx_i + b))^2$$
Now we differentiate the cost function with respect to the variable of interest and find the values that make the derivative zero.
First, differentiate with respect to $b$.
$$\frac{\partial{J(w,b)}}{\partial{b}} = \frac{1}{n}\sum_{i=1}^n -2(y_i - (wx_i+b)) = \frac{2}{n}\left(nb + w\sum_{i=1}^n x_i -\sum_{i=1}^n y_i\right) = 0$$
Solving the above equation for $b$ gives
$$b = \bar{y} - w\bar{x}$$
where $\bar{x} = \frac{\sum_{i=1}^n x_i}{n}$ and $\bar{y} = \frac{\sum_{i=1}^n y_i}{n}$.
Now differentiate with respect to $w$.
$$\frac{\partial{J(w,b)}}{\partial{w}} = \frac{1}{n}\sum_{i=1}^n -2(y_i - (wx_i+b))x_i = \frac{2}{n}\left(b\sum_{i=1}^nx_i + w\sum_{i=1}^n x_i^2 - \sum_{i=1}^n x_iy_i\right)$$
Substituting the $b$ we found above and solving for the $w$ that makes this zero, we get
$$w = \frac{\sum_{i=1}^n y_i(x_i-\bar{x})}{\sum_{i=1}^n x_i(x_i-\bar{x})}$$
We have now computed $w$ and $b$ analytically.
Let's apply this in code.
```python
w = numpy.sum(temp_anomaly*(year - year.mean())) / numpy.sum(year*(year - year.mean()))
b = temp_anomaly.mean() - w*year.mean()
print(w)
print(b)
```
0.01037028394347266
-20.148685384658464
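As a quick cross-check (a sketch), NumPy's least-squares polynomial fit should return essentially the same coefficients:
```python
w_np, b_np = numpy.polyfit(year, temp_anomaly, 1)  # degree-1 fit: slope first, then intercept
print(w_np, b_np)
```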
Now let's draw the result on a graph to check it.
```python
reg = b + w * year
```
```python
pyplot.figure(figsize=(10, 5))
pyplot.plot(year, temp_anomaly, color='#2929a3', linestyle='-', linewidth=1, alpha=0.5)
pyplot.plot(year, reg, 'k--', linewidth=2, label='Linear regression')
pyplot.xlabel('Year')
pyplot.ylabel('Land temperature anomaly [°C]')
pyplot.legend(loc='best', fontsize=15)
pyplot.grid();
```
Today we performed linear regression on data we generated ourselves as well as on real data.
Using gradient descent for the change of temperature with year will be covered in tomorrow's session.
Feel free to ask if you have any questions!
| 50e4482ffb529e6b342b8c353d02d9fce71f8ea0 | 236,026 | ipynb | Jupyter Notebook | 03_Machine_Learning/sol/[HW10]_Simple_Linear_Regression.ipynb | wjh1065/goormNLP | ed6aeef6f76507f3e1a2abb15abdad33074bdaaa | [
"MIT"
] | null | null | null | 03_Machine_Learning/sol/[HW10]_Simple_Linear_Regression.ipynb | wjh1065/goormNLP | ed6aeef6f76507f3e1a2abb15abdad33074bdaaa | [
"MIT"
] | null | null | null | 03_Machine_Learning/sol/[HW10]_Simple_Linear_Regression.ipynb | wjh1065/goormNLP | ed6aeef6f76507f3e1a2abb15abdad33074bdaaa | [
"MIT"
] | null | null | null | 226.078544 | 52,730 | 0.905909 | true | 3,517 | Qwen/Qwen-72B | 1. YES
2. YES | 0.897695 | 0.819893 | 0.736014 | __label__kor_Hang | 0.999993 | 0.54834 |
# Best Approximation in Hilbert Spaces
```python
%matplotlib inline
import sympy as sym
import pylab as pl
import numpy as np
import numpy.polynomial.polynomial as n_poly
import numpy.polynomial.legendre as leg
```
## Mindflow
We want the best approximation (in Hilbert Spaces) of the function $f$, on the space $V = \mathrm{span}\{v_i\}$. Remember that $p\in V$ is best approximation of $f$ if and only if:
$$
(p-f,q)=0, \quad \forall q\in V.
$$
Focus one second on the fact that both $p$ and $q$ belong to $V$. We know that any $q$ can be expressed as a linear combination of the basis functions $v_i$:
$$
(p-f,v_i)=0, \quad \forall v_i\in V.
$$
Moreover $p$ is uniquely defined by the coefficients $p^j$ such that $p = p^j\,v_j$. Collecting this information together we get:
$$
(v_j,v_i) p^j = (f,v_i),\quad \forall v_i\in V.
$$
Now that we know our goal (finding these $p^j$ coefficients) we do what the rangers do: we explore!
We understand that we will need to invert the matrix:
$$
M_{ij} = (v_j,v_i) = \int v_i\cdot v_j
$$
What happens if we choose basis functions such that $(v_j,v_i) = \delta_{ij}$?
How to construct numerical techniques to evaluate integrals in an efficient way?
Evaluate the $L^2$ projection.
## Orthogonal Polynomials
Gram-Schmidt
$$
p_0(x) = 1, \qquad p_k(x) = x^k - \sum_{j=0}^{k-1} \frac{(x^k,p_j(x))}{(p_j(x),p_j(x))} p_j(x)
$$
or, alternatively
$$
p_0(x) = 1, \qquad p_k(x) = x\,p_{k-1}(x) - \sum_{j=0}^{k-1} \frac{(x p_{k-1}(x),p_j(x))}{(p_j(x),p_j(x))} p_j(x)
$$
```python
def scalar_prod(p0,p1,a=0,b=1):
assert len(p0.free_symbols) <= 1, "I can only do this for single variable functions..."
t = p0.free_symbols.pop() if len(p0.free_symbols) == 1 else sym.symbols('t')
return sym.integrate(p0*p1,(t,a,b))
```
```python
t = sym.symbols('t')
#k = 3
Pk = [1+0*t] # Force it to be a sympy expression
for k in range(1,5):
s = 0
for j in range(0,k):
s+= scalar_prod(t**k,Pk[j])/scalar_prod(Pk[j],Pk[j])*Pk[j]
pk = t**k-s
# pk = pk/sym.sqrt(scalar_prod(pk,pk))
pk = pk/pk.subs(t,1.)
Pk.append(pk)
M = []
for i in range(len(Pk)):
row = []
for j in range(len(Pk)):
row.append(scalar_prod(Pk[i],Pk[j]))
M.append(row)
M = sym.Matrix(M)
print(M)
print(Pk)
x = np.linspace(0,1,2**5 + 1)
print(x)
for p in Pk:
    if p != 1:
        fs = sym.lambdify(t, p, 'numpy')
        #print(x.shape)
        #print(fs(x))
        _ = pl.plot(x, fs(x))
```
## Theorem
Let $q$ be a nonzero polynomial of degree $n+1$ and $\omega(x)$ a positive weight function, such that:
$$
\int_a^b x^k q(x)\, \omega(x) = 0, \quad k = 0,\ldots, n
$$
If $x_i$ are zeros of $q(x)$, then:
$$
\int_a^b f(x)\, \omega(x)\approx \sum_{i=0}^nw_i\, f(x_i)
$$
with:
$$
w_i = \int_a^b l_i(x)\, \omega(x)
$$
is exact for all polynomials of degree at most $2n+1$. Here $l_i(x)$ are the usual Lagrange interpolation polynomials.
**Proof:** assume $f(x)$ is a polynomial of degree at most $2n+1$ and show:
$$
\int_a^b f(x)\, \omega(x) = \sum_{i=0}^nw_i\, f(x_i).
$$
Using polynomial division we have:
$$
\underbrace{f(x)}_{2n+1} = \underbrace{q(x)}_{n+1}\, \underbrace{p(x)}_{n} + \underbrace{r(x)}_{n}.
$$
By taking $x_i$ as zeros of $q(x)$ we have:
$$
f(x_i) = r(x_i)
$$
Now:
$$
\int_a^b f(x)\, \omega(x) = \int_a^b [q(x)\, p(x) + r(x)]\, \omega(x)
$$
$$
= \underbrace{\int_a^b q(x)\, p(x) \, \omega(x)}_{=0} + \int_a^b r(x)\, \omega(x)
$$
Since $r(x)$ is a polynomial of order $n$ this is exact:
$$
\int_a^b f(x)\, \omega(x) = \int_a^b r(x)\, \omega(x) = \sum_{i=0}^nw_i\, r(x_i)
$$
But since we chose $x_i$ such that $f(x_i) = r(x_i)$, we have:
$$
\int_a^b f(x)\, \omega(x) = \int_a^b r(x)\, \omega(x) = \sum_{i=0}^nw_i\, f(x_i)
$$
This completes the proof.
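A quick numeric check of this result (a sketch using NumPy's Gauss-Legendre rule with $\omega(x) = 1$ on $[-1,1]$): with 3 nodes ($n = 2$), the rule should integrate any polynomial of degree at most $2n+1 = 5$ exactly.
```python
import numpy as np
import numpy.polynomial.legendre as leg

xq, wq = leg.leggauss(3)                    # 3 Gauss-Legendre nodes and weights on [-1, 1]
f = lambda x: 4*x**5 - 3*x**3 + x**2 - 7    # arbitrary polynomial of degree 5
quad = np.sum(wq * f(xq))
exact = 2/3 - 14                            # odd powers integrate to 0; x^2 gives 2/3, the constant gives -14
print(quad, exact, np.isclose(quad, exact))
```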
## Legendre Polynomial
Two term recursion, to obtain the same orthogonal polynomials above (defined between [-1,1]), normalized to be one in $x=1$:
$$
(n+1) p^{n+1}(x) = (2n+1)\, x\, p^n(x) - n\, p^{n-1}(x)
$$
```python
Pn = [1.,t]
#Pn = [1.,x, ((2*n+1)*x*Pn[n] - n*Pn[n-1])/(n+1.) for n in range(1,2)]
for n in range(1,5):
pn1 = ((2*n+1)*t*Pn[n] - n*Pn[n-1])/(n+1.)
Pn.append(sym.simplify(pn1))
print(Pn)
#print(sym.poly(p))
#print(sym.real_roots(sym.poly(p)))
print(sym.integrate(Pn[4]*Pn[3],(t,-1,1)))
x = np.linspace(-1,1,100)
for p in Pn:
if p != 1. :
fs = sym.lambdify(t, p, 'numpy')
#print x.shape
#print fs(x)
_ = pl.plot(x,fs(x))
```
In our proof we chose to evaluate $r$ and $f$ at the points $x_i$, the zeros of the Legendre polynomials; this is why we need to compute the zeros of these polynomials.
```python
print(sym.real_roots(sym.poly(Pn[3])))
#q = [-1.]+sym.real_roots(sym.poly(Pn[2]))+[1.]
q = sym.real_roots(sym.poly(Pn[3]))
print(q)
for p in Pn:
if p != 1. :
#print(sym.poly(p))
#print(sym.real_roots(sym.poly(p)))
print(sym.nroots(sym.poly(p)))
```
[-sqrt(15)/5, 0, sqrt(15)/5]
[-sqrt(15)/5, 0, sqrt(15)/5]
[0]
[-0.577350269189626, 0.577350269189626]
[-0.774596669241483, 0, 0.774596669241483]
[-0.861136311594053, -0.339981043584856, 0.339981043584856, 0.861136311594053]
[-0.906179845938664, -0.538469310105683, 0, 0.538469310105683, 0.906179845938664]
$$
w_i = \int_{-1}^{1} l_i(x)
$$
```python
Lg = [1. for i in range(len(q))]
print(Lg)
#for i in range(n+1):
for i in range(len(q)):
for j in range(len(q)):
if j != i:
Lg[i] *= (t-q[j])/(q[i]-q[j])
print(Lg)
x = np.linspace(-1,1,100)
for poly in Lg:
fs = sym.lambdify(t, poly, 'numpy')
_ = pl.plot(x,fs(x))
```
```python
for poly in Lg:
print(sym.integrate(poly,(t,-1,1)))
```
0.555555555555555
0.888888888888889
0.555555555555555
### Hint
Projection using Legendre polynomials $(f, v_i)$.
# Now let's get Numerical
From now on I work on the $[0,1]$ interval, because I like it this way :)
In the previous section we explored what was happening symbolically; now we implement things on the computer. We saw how important the Legendre polynomials are. Here is a little documentation on that. I point it out not because you need to read it all, but because I would like you to get some acquaintance with these cryptic documentation pages [doc](https://docs.scipy.org/doc/numpy/reference/generated/numpy.polynomial.legendre.legroots.html#numpy.polynomial.legendre.legroots).
The problem we aim at solving is finding the coefficients $p_j$ such that:
$$
(v_j,v_i) p^j = (f,v_i),\quad \forall v_i\in V.
$$
Recall that in this section the Einstein summation convention holds.
We can expand the compact scalar product notation:
$$
p^j \int_0^1 v_i\, v_j = \int_0^1 f\, v_i,\quad \forall v_i\in V.
$$
We consider $V = \mathrm{span}\{l_i\}$. Our problem becomes:
$$
p^j \int_0^1 l_i\, l_j = \int_0^1 f\, l_i,\quad \mathrm{for}\ i = 0,\ldots,\mathtt{deg}
$$
Let's focus on the mass matrix:
$$
\int_0^1 l_i(x)\, l_j(x) = \sum_k l_i(x_k)\, w_k\, l_j(x_k) =
$$
$$
=
\left(
\begin{array}{c c c c}
l_0(x_0) & l_0(x_1) & \ldots & l_0(x_q) \\
l_1(x_0) & l_1(x_1) & \ldots & l_1(x_q) \\
& \ldots & \ldots & \\
l_n(x_0) & l_n(x_1) & \ldots & l_n(x_q) \\
\end{array}
\right)
\left(
\begin{array}{c c c c}
w_0 & 0 & \ldots & 0 \\
0 & w_1 & \ldots & 0 \\
& \ldots & \ldots & \\
0 & 0 & \ldots & w_q \\
\end{array}
\right)
\left(
\begin{array}{c c c c}
l_0(x_0) & l_1(x_0) & \ldots & l_n(x_0) \\
l_0(x_1) & l_1(x_1) & \ldots & l_n(x_1) \\
& \ldots & \ldots & \\
l_0(x_q) & l_1(x_q) & \ldots & l_n(x_q) \\
\end{array}
\right)
= B\, W\, B^T
$$
As a piece of curiosity, here are two different ways to find the roots:
```python
print(sym.nroots(sym.poly(Pn[-1])))
coeffs = np.zeros(6)
coeffs[-1] = 1.
print(leg.legroots(coeffs))
```
[-0.906179845938664, -0.538469310105683, 0, 0.538469310105683, 0.906179845938664]
[ -9.06179846e-01 -5.38469310e-01 -5.96500148e-17 5.38469310e-01
9.06179846e-01]
```python
p3, _ = leg.leggauss(3)        # Gauss-Legendre nodes on [-1, 1] (the original cell used an undefined gauss_points helper)
print(.5*(p3 + 1))             # mapped to [0, 1]
print(np.sqrt(3./5.)*.5 + .5)
```
[ 0.11270167 0.5 0.88729833]
0.887298334621
```python
def define_lagrange_basis_set(q):
    n = q.shape[0]
    L = [n_poly.Polynomial.fromroots([xj for xj in q if xj != q[i]]) for i in range(n)]
    L = [L[i]/L[i](q[i]) for i in range(n)]
    return L
```
The difference between the "symbolic" roots and the numeric ones:
```python
deg = 4
Nq = deg+1
p,w = leg.leggauss(Nq)
w = .5 * w
p = .5*(p+1)
#print p
#print w
W = np.diag(w)
#print W
```
```python
int_p = np.linspace(0,1,deg+1)
L = define_lagrange_basis_set(int_p)
print(len(L))
x = np.linspace(0,1,1025)
for f in L:
_ = pl.plot(x, f(x))
_ = pl.plot(int_p, 0*int_p, 'ro')
```
```python
B = np.zeros((0,Nq))
for l in L:
B = np.vstack([B,l(p)])
```
Recall:
$$
B\, W\, B^T p = B W f
$$
$$
B\, W\, B^T =
\left(
\begin{array}{c c c c}
l_0(x_0) & l_0(x_1) & \ldots & l_0(x_q) \\
l_1(x_0) & l_1(x_1) & \ldots & l_1(x_q) \\
& & \ddots & \\
l_n(x_0) & l_n(x_1) & \ldots & l_n(x_q) \\
\end{array}
\right)
\left(
\begin{array}{c c c c}
w_0 & 0 & \ldots & 0 \\
0 & w_1 & \ldots & 0 \\
& & \ddots & \\
0 & 0 & \ldots & w_q \\
\end{array}
\right)
\left(
\begin{array}{c c c c}
l_0(x_0) & l_1(x_0) & \ldots & l_n(x_0) \\
l_0(x_1) & l_1(x_1) & \ldots & l_n(x_1) \\
& & \ddots & \\
l_0(x_q) & l_1(x_q) & \ldots & l_n(x_q) \\
\end{array}
\right)
$$
```python
print(B.shape)
_ = pl.plot(B.T)
M = B.dot(W.dot(B.T))
print(np.linalg.matrix_rank(M))
print(np.linalg.cond(M))
```
```python
def step_function():
    def sf(x):
        index = np.where((x > .3) & (x < .7))
        step = np.zeros(x.shape)
        step[index] = 1
        return step
    return lambda x: sf(x)
```
$$
B\, W\, f =
\left(
\begin{array}{c c c c}
l_0(x_0) & l_0(x_1) & \ldots & l_0(x_q) \\
l_1(x_0) & l_1(x_1) & \ldots & l_1(x_q) \\
& \ldots & \ldots & \\
l_n(x_0) & l_n(x_1) & \ldots & l_n(x_q) \\
\end{array}
\right)
\left(
\begin{array}{c c c c}
w_0 & 0 & \ldots & 0 \\
0 & w_1 & \ldots & 0 \\
& \ldots & \ldots & \\
0 & 0 & \ldots & w_q \\
\end{array}
\right)
\left(
\begin{array}{c}
f(x_0) \\
f(x_1) \\
\vdots\\
f(x_q) \\
\end{array}
\right)
$$
```python
g = lambda x: np.sin(2*np.pi*x)
#g = step_function()
p = p.reshape((p.shape[0],1))
G = g(p)
print(G.shape)
print(B.shape)
print(W.shape)
G = B.dot(W.dot(G))
```
(5, 1)
(5, 5)
(5, 5)
```python
u = np.linalg.solve(M, G)
print(u)
```
[[ -1.92161045e-01]
[ 9.79052672e-01]
[ -1.35712301e-15]
[ -9.79052672e-01]
[ 1.92161045e-01]]
```python
def get_interpolating_function(LL, ui):
    def func(LL, ui, x):
        acc = 0
        for L, u in zip(LL, ui):
            #print(L, u)
            acc += u*L(x)
        return acc
    return lambda x: func(LL, ui, x)
```
```python
I = get_interpolating_function(L,u)
sampling = np.linspace(0,1,101)
_= pl.plot(sampling, I(sampling))
#plot(xp, G,'ro')
```
## Difference between projection and interpolation: the Runge example
Projection using Legendre polynomials $(f, v_i)$ with an 18-point quadrature.
Interpolation using Lagrange polynomials (at the quadrature points, which are the Gauss points of the function above).
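Below is a minimal sketch of this comparison (it assumes the helpers `define_lagrange_basis_set` and `get_interpolating_function` defined earlier, and uses the 18-point quadrature mentioned above for the projection):
```python
runge = lambda x: 1./(1. + 25.*(2.*x - 1.)**2)   # Runge function mapped to [0, 1]

deg = 12
# Interpolation: Lagrange basis built on the Gauss points of degree deg
xg, _ = leg.leggauss(deg + 1)
xg = .5*(xg + 1)
Li = define_lagrange_basis_set(xg)
interp = get_interpolating_function(Li, runge(xg))

# L2 projection on the same basis: solve B W B^T p = B W f with an 18-point quadrature
xq, wq = leg.leggauss(18)
xq, wq = .5*(xq + 1), .5*wq
Bq = np.array([l(xq) for l in Li])
Wq = np.diag(wq)
pc = np.linalg.solve(Bq.dot(Wq).dot(Bq.T), Bq.dot(Wq).dot(runge(xq)))
proj = get_interpolating_function(Li, pc)

s = np.linspace(0, 1, 401)
pl.plot(s, runge(s), label='Runge')
pl.plot(s, interp(s), '--', label='interpolation')
pl.plot(s, proj(s), ':', label='$L^2$ projection')
pl.legend();
```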
```python
```
| 556e173c830749844ac5bf5b9208c0adcc551aa1 | 176,934 | ipynb | Jupyter Notebook | python-lectures/04_best_approximation.ipynb | denocris/Introduction-to-Numerical-Analysis | 45b40a7743e11457b644fc6a7de17a0854ece4f0 | [
"CC-BY-4.0"
] | 8 | 2018-01-16T15:59:48.000Z | 2022-03-31T09:29:31.000Z | python-lectures/04_best_approximation.ipynb | denocris/Introduction-to-Numerical-Analysis | 45b40a7743e11457b644fc6a7de17a0854ece4f0 | [
"CC-BY-4.0"
] | null | null | null | python-lectures/04_best_approximation.ipynb | denocris/Introduction-to-Numerical-Analysis | 45b40a7743e11457b644fc6a7de17a0854ece4f0 | [
"CC-BY-4.0"
] | 8 | 2018-01-21T16:45:34.000Z | 2021-06-25T15:56:27.000Z | 196.812013 | 35,886 | 0.888326 | true | 4,346 | Qwen/Qwen-72B | 1. YES
2. YES | 0.83762 | 0.868827 | 0.727747 | __label__eng_Latn | 0.520464 | 0.529131 |
# Examples of image reconstruction using PCA
Data classification in high dimensional spaces can be challenging and the results often lack robustness.
This well-known problem has its own name; <i>the curse of dimensionality</i>. Principal Component
Analysis is a popular method for dimensionality reduction. It can also be used as a
visualisation tool to project data into lower dimensional spaces, which improves data exploration and comprehension.
In this script, we will use PCA to study a few characteristics of the MNIST test dataset. It contains 10 K images of
hand-written digits from 0 to 9. It is a good introduction to PCA in image analysis.
## The PCA transform
This linear transform allows us to move from the natural/original space $\cal{X}$ into the component space $\cal{Z}$ and is defined as
<blockquote> $\bf{z} = \bf{W}^{T}(\bf{x}-\bf{\mu})$</blockquote>
with
<blockquote>
$\begin{align}
\bf{x} &= [x_{1} x_{2} \cdots x_{N}]^\top \\
\bf{\mu} &= [\mu_{1} \mu_{2} \cdots \mu_{N}]^\top \\
\bf{z} &= [z_{1} z_{2} \cdots z_{N}]^\top \\
\end{align}$
</blockquote>
where N is the dimension of space and $\bf{\mu}$ is the mean of the N-dimensional data X.
The $W$ matrix is made of the N eigenvectors $\bf{w}_{i}$ of the covariance matrix $\Sigma$. They are stacked together along columns
<blockquote>
$\begin{align}
\bf{W} &= \begin{pmatrix} \bf{w}_{1} & \bf{w}_{2} & \dotsb & \bf{w}_{N} \end{pmatrix} \\
&= \begin{pmatrix} w_{1,1} & w_{2,1} & \dotsb & w_{N,1} \\
w_{1,2} & w_{2,2} & \dotsb & w_{N,2} \\
\vdots & \vdots & \dotsb & \vdots \\
w_{1,N} & w_{2,N} & \dotsb & w_{N,N} \end{pmatrix}
\end{align}$
</blockquote>
As for the covariance matrix $\Sigma$, it is computed from the N-dimensional data distribution X
<blockquote>
$
\begin{align}
\bf{\Sigma} &= E\{(\bf{x}-\bf{\mu})(\bf{x}-\bf{\mu})^{T} \} \\
&= \begin{pmatrix} \sigma_{1}^2 & \sigma_{1,2} & \cdots & \sigma_{1,N} \\
\sigma_{1,2} & \sigma_{2}^2 & \cdots & \sigma_{2,N} \\
\vdots & \vdots & \ddots & \vdots \\
\sigma_{1,N} & \sigma_{2,N} & \cdots & \sigma_{N}^2 \end{pmatrix}
\end{align}
$
</blockquote>
The $\it{inverse}$ PCA transform allows us to move from the component space $\cal{Z}$ back to the natural/original space $\cal{X}$
and is defined as
<blockquote> $\bf{x} = \bf{W}\bf{z} + \bf{\mu}$</blockquote>
The principal components z are sorted in decreasing order of variance $var(z_{i})=\lambda_{i}$ where the $\lambda_{i}$ are
the eigenvalues of the covariance matrix $\bf{\Sigma}$.
N.B. The principal components usually come in one of the two popular notations: $z_{i}$ or $PCA_{i}$.
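As a quick sanity check (a sketch with small synthetic 2-D data, not the MNIST images), scikit-learn's `transform` agrees with the matrix formula above:
```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X_demo = rng.normal(size=(200, 2)) @ np.array([[2., 0.5], [0.5, 1.]])
pca_demo = PCA().fit(X_demo)
W = pca_demo.components_.T          # eigenvectors of the covariance matrix, stored as columns
mu = pca_demo.mean_
z_manual = (X_demo - mu) @ W        # z = W^T (x - mu), applied row-wise
print(np.allclose(z_manual, pca_demo.transform(X_demo)))
```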
## PCA as a tool for dimensionality reduction
The last equation shows that we can reconstruct the vector $\bf{x}$ exactly. We usually drop the
least significant principal components (the last elements in z and the last columns in $\bf{W}$)
because they only contain noise. This produces an approximate reconstruction of x
<blockquote> $\bf{x} \approx \bf{\tilde{W}}\bf{\tilde{z}} + \bf{\mu}$</blockquote>
with the modified arrays
<blockquote> $\bf{\tilde{z}} = \begin{pmatrix} z_{1} \ z_{2} \ \dotsb \ z_{M} \end{pmatrix}^{T} $ </blockquote>
and
<blockquote>
$
\begin{align}
\bf{\tilde{W}} &= \begin{pmatrix} \bf{w}_{1} & \bf{w}_{2} & \dotsb & \bf{w}_{M} \end{pmatrix} \\
&= \begin{pmatrix} w_{1,1} & w_{2,1} & \dotsb & w_{M,1} \\
w_{1,2} & w_{2,2} & \dotsb & w_{M,2} \\
\vdots & \vdots & \ddots & \vdots \\
w_{1,N} & w_{2,N} & \dotsb & w_{M,N} \end{pmatrix}
\end{align}
$
</blockquote>
The scree plot, which displays $\lambda$ $\it{versus}$ the component number, is a practical tool for finding
the number of relevant components, i.e. those that contain information rather than noise.
## PCA as a tool for visualisation
The MNIST dataset contains 10 K images. A PCA analysis will generate as many eigenvectors $\bf{w}_{i}$
that can be reshaped into images. Each original images is a linear combinations of the eigenvectors. As
we will see below, the first principal components $z_{i}$ are the most important ones. If we keep only
the first two, each image $\bf{x}$ can be approximated as
<blockquote>
$
\begin{align}
\bf{x} &= \sum_{i=1}^N z_{i} \bf{w}_{i} + \bf{\mu} \\
& \approx z_{1} \bf{w}_{1} + z_{2} \bf{w}_{2} + \bf{\mu}
\end{align}
$
</blockquote>
This means that each image can be 'summarized' by only two numbers $z_{1}$ and $z_{2}$. Thus, the
10 K images can be represented by as many points in the 2-D PCA space of
coordinates ($z_{1}$, $z_{2}$) or equivalently ($PCA_{1}$, $PCA_{2}$). This representation
makes image comparison easy as we will see below.
```python
print(__doc__)
# Author: Pierre Gravel <pierre.gravel@iid.ulaval.ca>
# License: BSD
%matplotlib inline
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.offsetbox import OffsetImage, AnnotationBbox
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
import seaborn as sns
sns.set(color_codes=True)
# Used for reproductibility of the results
np.random.seed(43)
```
Automatically created module for IPython interactive environment
# Data preprocessing
### Load the MNIST test dataset.
The dataset contains 10 K images of size 28 x 28, each stored into a line of 784 elements. The X data is an array of
shape 10K x 784; each line corresponds to an image (an observation) and each column corresponds to a feature.
The y data is an array of 10 K elements that contains the image labels $[0,\cdots ,9]$
The test dataset can be downloaded from the Kaggle website: https://www.kaggle.com/oddrationale/mnist-in-csv
In what follows, we assume that you downloaded the dataset into the current directory.
```python
filename = 'mnist_test.csv'
df = pd.read_csv(filename)
# Extract from the Panda dataframe the features X and the labels y
X = df.drop(['label'], axis=1).values
y = df[['label']].values
```
Display one (inverted) image for each class. N.B. The original images are white on a black background.
```python
fig, ax = plt.subplots(2,5, figsize=(10,6))
for i in range(10):
idx = np.where(y==i)
idx = idx[0][0]
plt.subplot(2,5, i + 1)
plt.imshow(255 - X[idx,:].reshape(28, 28), cmap='gray', interpolation='nearest')
plt.xticks(())
plt.yticks(())
fig.tight_layout()
plt.savefig('13.1.1_Examples_from_MNIST_dataset.png')
plt.savefig('13.1.1_Examples_from_MNIST_dataset.pdf')
```
### Normalise the data
```python
sc = StandardScaler().fit(X)
X_s = sc.transform(X)
```
# PCA analysis
### Compute the PCA transform using all the images
```python
pca = PCA()
pca.fit(X_s);
```
### Display the PCA scree plot
The scree plot shows the eigenvalues $\lambda$ in a decreasing order. The most relevant eigenvectors are on the left and
the least relevant ones on the right.
The second panel shows the fraction of the image variance already explained by the first n principal components.
For instance, the first 50 principal components explain
about 60% of the information in the dataset, whereas the first 200 account for 90% of it.
```python
(n,m) = X_s.shape
n_components = np.arange(1,m+1)
fig, ax = plt.subplots(2,1, figsize=(10,6))
ax[0].plot(n_components, pca.singular_values_)
ax[0].set_xlabel('Eigenvectors', fontsize=16)
ax[0].set_ylabel('Eigenvalues', fontsize=16)
ax[0].set_title('MNIST Scree plot', fontsize=16)
ax[1].plot(n_components, 100*np.cumsum(pca.explained_variance_ratio_))
ax[1].set_xlabel('Eigenvectors', fontsize=16)
ax[1].set_ylabel('Ratio (%)', fontsize=16)
ax[1].set_title('Proportion of variance explained', fontsize=16)
fig.tight_layout()
plt.savefig('13.1.2_MNIST_scree_plot.png')
plt.savefig('13.1.2_MNIST_scree_plot.pdf')
```
### Show examples of eigenvectors (reshaped as images)
The first eigenvectors (first row) are the most important ones as they contain coherent structures.
The last eigenvectors (second row) usually contain only noise and are generally discarded.
```python
indx = [1, 2, 3, 4, 300, 400, 500, 600]
fig, ax = plt.subplots(2,4, figsize=(10,6))
for i in range(8):
im = pca.components_[indx[i]-1,:].reshape(28, 28)
plt.subplot(2,4, i + 1)
plt.imshow(im, cmap='gray', interpolation='nearest')
plt.title('$Eigenvector_{%d}$' % (indx[i]), fontsize=16)
plt.xticks(())
plt.yticks(())
fig.tight_layout()
plt.savefig('13.1.3_Examples_of_eigenvectors.png')
plt.savefig('13.1.3_Examples_of_eigenvectors.pdf')
```
# Image similarities
### Project the image data into a 2-D space defined by the first two principal components.
Compute the principal components of the image data and make a plot where each image is represented as a
point in 2-D PCA space of coordinates ($PCA_{1}$, $PCA_{2}$) or equivalently ($z_{1}$, $z_{2}$).
Superpose on it the labels for five images in each class.
Notice how the classes are clustered; the '1' images are on the left, the '0' images are on the right, the '7' on the top, etc.
This is one of the reasons why PCA is so much used in data analysis.
The classes are not perfectly separated however since there is some visible overlap between them.
```python
Z = pca.transform(X_s)
```
```python
sns.set_style('white')
fig, ax = plt.subplots(figsize = (10, 10))
ax.scatter(Z[:,0],Z[:,1], c = 'c', s=1)
ax.set_xlabel('$PCA_{1}$', fontsize=18)
ax.set_ylabel('$PCA_{2}$', fontsize=18)
ax.set_title('2-D PCA space for MNIST', fontsize=16)
for i in range(10):
idx = np.where(y==i)
idx = idx[0][0:5]
ax.scatter(Z[idx,0], Z[idx,1],c='k', marker=r"$ {} $".format(i), edgecolors='none', s=150 )
ax.grid(color='k', linestyle='--', linewidth=.2)
sns.despine()
plt.savefig('13.1.4_2D_PCA_space_for_MNIST.png')
plt.savefig('13.1.4_2D_PCA_space_for_MNIST.pdf')
```
In the next figure, we replace the label markers with their corresponding images. Different labels overlap because
their handwritten images share similarities. For instance, a curved '7' may look like a '9', a squashed '3' may
look like an '8', etc.
```python
sns.set_style('white')
image_shape = (28, 28)
fig, ax = plt.subplots(figsize = (10, 10))
ax.scatter(Z[:,0],Z[:,1], c = 'b', s=1)
ax.set_xlabel('$PCA_{1}$', fontsize=18)
ax.set_ylabel('$PCA_{2}$', fontsize=18)
ax.set_title('2-D PCA space for MNIST', fontsize=16)
z = np.zeros((28, 28))
for i in range(10):
idx = np.where(y==i)
idx = idx[0]
for j in range(5):
J = 1.-X[idx[j],:].reshape(image_shape)
I = (np.dstack((J,J,z)) * 255.999) .astype(np.uint8)
imagebox = OffsetImage(I, zoom=.5);
ab = AnnotationBbox(imagebox, (Z[idx[j],0], Z[idx[j],1]), frameon=True, pad=0);
ax.add_artist(ab);
ax.grid(color='k', linestyle='--', linewidth=.2)
sns.despine()
plt.savefig('13.1.5_2D_PCA_space_for_MNIST_with_images.png')
plt.savefig('13.1.5_2D_PCA_space_for_MNIST_with_images.pdf')
```
# Find the most similar image classes
If images from two different classes look alike, chances are, we will make classification errors when looking at them. The
example of '1' and '7' images is well known. This is why we often put an horizontal bar in the middle of the '7' to
make differences between them more visible. Do classifiers make the same errors?
In what follows, we will split the dataset into a training and a test datasets. A K-Nearest-Neighbor
classifier (KNN) will first be trained on the training dataset. Then, it will be used to make class predictions on the test dataset.
The confusion matrix between the predicted and the true test labels will help to identify the most
common errors. This will tell us what are the most lookalike image classes from the classifier standpoint.
### Split the dataset into a training and a test datasets
```python
X_train, X_test, y_train, y_test = train_test_split(X_s, y, random_state=0,train_size=0.8)
y_train = y_train.ravel()
y_test = y_test.ravel()
```
Train a KNN classifier on the training dataset (using 5 neighbors) and use it to classify the test
dataset.
Warning: the next cell may take a few seconds to a minute to compute. In a few years from now, this warning will be pointless
given the increasing speeds of hardware and software!
```python
clf = KNeighborsClassifier(n_neighbors=5)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
```
Compute and display the confusion matrix between the true and the predicted labels for the test dataset. The confusion matrix
tells us what are the most common classification mistakes. For instance, images of '8' were confused 13 times
with images of '5'. The most common mistakes were found between (8,5) and (7,9) pairs. Surprisingly, the (1,7)
pair was not the most prevalent source of confusion.
```python
fig, ax = plt.subplots(figsize = (7, 7))
cm = confusion_matrix(y_test, y_pred)
ConfusionMatrixDisplay(cm).plot(ax=ax)
fig.savefig('13.1.6_confusion_matrix.png')
fig.savefig('13.1.6_confusion_matrix.pdf')
```
# Examples of image reconstructions and comparisons
The following section will explain several observations we made about the image class distributions in the 2-D PCA space.
It is important to mention that the results could have been different if we had trained the KNN classifier with more
neighbors (5) and more principal components per image (2).
### A few useful functions
We define functions that will be used to
<ul>
<li>Show the reconstruction performances with increasing number of principal components </li>
<li>Compare the images reconstructed from the first two components $PCA_{1}$ and $PCA_{2}$ </li>
</ul>
The first function computes PCA approximations of images of labels i and j using the first n principal components.
```python
def PCA_approximations(n_comp, X_s, y, sc, image_shape, i, j):
# Find one image with label i and one image with label j
idx = np.where(y==i)
idx = idx[0][0]
image_i = X_s[idx,:].reshape(1, -1)
idx = np.where(y==j)
idx = idx[0][0]
image_j = X_s[idx,:].reshape(1, -1)
# Compute the PCA transform using only the first n principal components
pca = PCA(n_components=n_comp)
pca.fit(X_s)
# Transform and reconstruct the image i using the first n principal components
Z = pca.transform(image_i)
x_i = pca.inverse_transform(Z)
# Remove the normalisation transform
x_i = sc.inverse_transform(x_i)
x_i = x_i.reshape(image_shape)
# Transform and reconstruct the image using the first n principal components
Z = pca.transform(image_j)
x_j = pca.inverse_transform(Z)
# Remove the normalisation transform
x_j = sc.inverse_transform(x_j)
x_j = x_j.reshape(image_shape)
# Remove the normalisation transform from the corresponding original images
X_i = sc.inverse_transform(image_i)
X_i = X_i.reshape(image_shape)
X_j = sc.inverse_transform(image_j)
X_j = X_j.reshape(image_shape)
return (x_i, x_j, X_i, X_j)
```
The second function displays a mosaic of the original images with their PCA approximations with 1, 2, 20
and 200 principal components.
```python
def display_PCA_reconstructions(X_s, y, sc, image_shape, i, j):
# Number of principal components used for the reconstruction of each image
ncomp = [1, 2, 20, 200]
fig, ax = plt.subplots(2,len(ncomp)+1, figsize=(10,6))
for k in range(len(ncomp)):
# Reconstruct both images with the same number of components
(x_i, x_j, X_i, X_j) = PCA_approximations(ncomp[k], X_s, y, sc, image_shape, i, j)
plt.subplot(2,5, k+2)
plt.imshow(255-x_i, cmap='gray', interpolation='nearest')
plt.title('M = %d' % ncomp[k], fontsize=16)
plt.xticks(())
plt.yticks(())
plt.subplot(2,5, k + 7)
plt.imshow(255-x_j, cmap='gray', interpolation='nearest')
plt.xticks(())
plt.yticks(())
# Original images for reference
plt.subplot(2,5, 1)
plt.imshow(255-X_i, cmap='gray', interpolation='nearest')
plt.title('Original', fontsize=16)
plt.xticks(())
plt.yticks(())
plt.subplot(2,5, 6)
plt.imshow(255-X_j, cmap='gray', interpolation='nearest')
plt.xticks(())
plt.yticks(())
fig.tight_layout()
```
## Example I: 7 versus 9
The figure below shows image reconstructions for an increasing number of principal components.
Notice how the reconstructions of '7' and '9' images become easily recognizable when 200 principal components are used.
This is not too surprising since, as mentioned before, the first 200 components account for 90% of the total image variance.
Notice also how the first two reconstructions (M = 1,2) are very similar for both '7' and '9' images. Hence, they share
similar values of $z_{1}$ and $z_{2}$. As a result, the '7' and '9' images should be neighbors in the 2-D PCA space
($z_{1}$, $z_{2}$) or equivalently ($PCA_{1}$, $PCA_{2}$). This is the case; their distributions overlap in the
top of the 2-D PCA space (see corresponding figures above).
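As a quick sanity check of the variance claim above (a sketch, assuming `X_s` and `PCA` from the earlier cells), one can print the cumulative explained variance of the first 200 components:
```python
# Cumulative explained variance captured by the first 200 principal components
pca_check = PCA(n_components=200)
pca_check.fit(X_s)
print(pca_check.explained_variance_ratio_.sum())
```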
```python
i = 7
j = 9
display_PCA_reconstructions(X_s, y, sc, image_shape, i, j)
plt.savefig('13.1.7_Examples_image_reconstructions_7_and_9.png')
plt.savefig('13.1.7_Examples_image_reconstructions_7_and_9.pdf')
```
## Example II: 1 versus 7
This new example is counterintuitive. The reconstructions with M = 2 are now quite different; the '1' and '7' images
do not share similar values of $z_{1}$ and $z_{2}$. As a result, the '1' and '7' images are not close neighbors in
the 2-D PCA space. The '1' are found on the left of the 2-D PCA space whereas the '7' are found at the top. Their
distributions barely overlap.
```python
i = 1
j = 7
display_PCA_reconstructions(X_s, y, sc, image_shape, i, j)
plt.savefig('13.1.8_Examples_image_reconstructions_1_and_7.png')
plt.savefig('13.1.8_Examples_image_reconstructions_1_and_7.pdf')
```
```python
```
| 7451263beba85f8bafd8c4413d4143be5e64d0f1 | 500,780 | ipynb | Jupyter Notebook | 13.1_Generate_image_reconstructions_using_PCA.ipynb | AstroPierre/Scripts-for-figures-courses-GIF-4101-GIF-7005 | a38ad6f960cc6b8155fad00e4c4562f5e459f248 | [
"BSD-2-Clause"
] | null | null | null | 13.1_Generate_image_reconstructions_using_PCA.ipynb | AstroPierre/Scripts-for-figures-courses-GIF-4101-GIF-7005 | a38ad6f960cc6b8155fad00e4c4562f5e459f248 | [
"BSD-2-Clause"
] | null | null | null | 13.1_Generate_image_reconstructions_using_PCA.ipynb | AstroPierre/Scripts-for-figures-courses-GIF-4101-GIF-7005 | a38ad6f960cc6b8155fad00e4c4562f5e459f248 | [
"BSD-2-Clause"
] | null | null | null | 625.975 | 162,136 | 0.944511 | true | 5,140 | Qwen/Qwen-72B | 1. YES
2. YES | 0.937211 | 0.903294 | 0.846577 | __label__eng_Latn | 0.971282 | 0.805215 |
<a href="https://colab.research.google.com/github/neurologic/MotorSystems_BIOL358_SP22/blob/main/Tutorial_GeometricViewOfData.ipynb" target="_parent"></a>
# Tutorial: Geometric view of data
---
# Objectives
In this notebook we'll explore how multivariate data can be represented in different orthonormal bases (dimensions). In other words, how to change the dimensions defining a set of data. As a start, the cartesian coordinates are an example of a two-dimensional orthogonal basis. This notebook will help you build intuition that will be helpful in understanding the use of PCA in research that you will be reading about.
Overview:
- Explore correlated multivariate data.
- Define and visualize an arbitrary orthonormal basis.
- Project data from one basis (cartesian) onto another basis (arbitrary).
---
# Setup
```python
# @title Imports, Settings, Functions
# @markdown Execute this code cell to set up the notebook
import numpy as np
import matplotlib.pyplot as plt
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
def plot_data(X):
"""
Plots bivariate data. Includes a plot of each random variable, and a scatter
plot of their joint activity. The title indicates the sample correlation
calculated from the data.
Args:
X (numpy array of floats) : Data matrix each column corresponds to a
different random variable
Returns:
Nothing.
"""
fig = plt.figure(figsize=[10, 6])
gs = fig.add_gridspec(2, 2)
ax1 = fig.add_subplot(gs[0, 0])
ax1.plot(X[:, 0], color='k')
plt.ylabel('Neuron 1')
plt.title('Sample var 1: {:.1f}'.format(np.var(X[:, 0])))
ax1.set_xticklabels([])
ax2 = fig.add_subplot(gs[1, 0])
ax2.plot(X[:, 1], color='k')
plt.xlabel('Sample Number')
plt.ylabel('Neuron 2')
plt.title('Sample var 2: {:.1f}'.format(np.var(X[:, 1])))
ax3 = fig.add_subplot(gs[:, 1])
ax3.plot(X[:, 0], X[:, 1], '.', markerfacecolor=[.5, .5, .5],
markeredgewidth=0)
ax3.axis('equal')
plt.xlabel('Neuron 1 activity')
plt.ylabel('Neuron 2 activity')
plt.title('Sample corr: {:.1f}'.format(np.corrcoef(X[:, 0], X[:, 1])[0, 1]))
plt.show()
def plot_basis_vectors(X, W):
"""
Plots bivariate data as well as new basis vectors.
Args:
X (numpy array of floats) : Data matrix each column corresponds to a
different random variable
W (numpy array of floats) : Square matrix representing new orthonormal
basis each column represents a basis vector
Returns:
Nothing.
"""
plt.figure(figsize=[4, 4])
plt.plot(X[:, 0], X[:, 1], '.', color=[.5, .5, .5], label='Data')
plt.axis('equal')
plt.xlabel('Neuron 1 activity')
plt.ylabel('Neuron 2 activity')
plt.plot([0, W[0, 0]], [0, W[1, 0]], color='r', linewidth=3,
label='Basis vector 1')
plt.plot([0, W[0, 1]], [0, W[1, 1]], color='b', linewidth=3,
label='Basis vector 2')
plt.legend()
plt.show()
def plot_data_new_basis(Y):
"""
Plots bivariate data after transformation to new bases.
Similar to plot_data but with colors corresponding to projections onto
basis 1 (red) and basis 2 (blue). The title indicates the sample correlation
calculated from the data.
Note that samples are re-sorted in ascending order for the first
random variable.
Args:
Y (numpy array of floats): Data matrix in new basis each column
corresponds to a different random variable
Returns:
Nothing.
"""
fig = plt.figure(figsize=[8, 4])
gs = fig.add_gridspec(2, 2)
ax1 = fig.add_subplot(gs[0, 0])
ax1.plot(Y[:, 0], 'r')
plt.xlabel
plt.ylabel('Projection \n basis vector 1')
plt.title('Sample var 1: {:.1f}'.format(np.var(Y[:, 0])))
ax1.set_xticklabels([])
ax2 = fig.add_subplot(gs[1, 0])
ax2.plot(Y[:, 1], 'b')
plt.xlabel('Sample number')
plt.ylabel('Projection \n basis vector 2')
plt.title('Sample var 2: {:.1f}'.format(np.var(Y[:, 1])))
ax3 = fig.add_subplot(gs[:, 1])
ax3.plot(Y[:, 0], Y[:, 1], '.', color=[.5, .5, .5])
ax3.axis('equal')
plt.xlabel('Projection basis vector 1')
plt.ylabel('Projection basis vector 2')
plt.title('Sample corr: {:.1f}'.format(np.corrcoef(Y[:, 0], Y[:, 1])[0, 1]))
plt.show()
def get_data(cov_matrix):
"""
Returns a matrix of 1000 samples from a bivariate, zero-mean Gaussian.
Note that samples are sorted in ascending order for the first random variable
Args:
cov_matrix (numpy array of floats): desired covariance matrix
Returns:
(numpy array of floats) : samples from the bivariate Gaussian, with each
column corresponding to a different random
variable
"""
mean = np.array([0, 0])
X = np.random.multivariate_normal(mean, cov_matrix, size=1000)
indices_for_sorting = np.argsort(X[:, 0])
# X = X[indices_for_sorting, :]
return X
def calculate_cov_matrix(var_1, var_2, corr_coef):
"""
Calculates the covariance matrix based on the variances and correlation
coefficient.
Args:
var_1 (scalar) : variance of the first random variable
var_2 (scalar) : variance of the second random variable
corr_coef (scalar) : correlation coefficient
Returns:
(numpy array of floats) : covariance matrix
"""
# Calculate the covariance from the variances and correlation
cov = corr_coef * np.sqrt(var_1 * var_2)
cov_matrix = np.array([[var_1, cov], [cov, var_2]])
return cov_matrix
def define_orthonormal_basis(u):
"""
Calculates an orthonormal basis given an arbitrary vector u.
Args:
u (numpy array of floats) : arbitrary 2-dimensional vector used for new
basis
Returns:
(numpy array of floats) : new orthonormal basis
columns correspond to basis vectors
"""
# Normalize vector u
u = u / np.sqrt(u[0] ** 2 + u[1] ** 2)
    # Calculate vector w that is orthogonal to u
w = np.array([-u[1], u[0]])
# Put in matrix form
W = np.column_stack([u, w])
return W
def change_of_basis(X, W):
"""
Projects data onto new basis W.
Args:
X (numpy array of floats) : Data matrix each column corresponding to a
different random variable
W (numpy array of floats) : new orthonormal basis columns correspond to
basis vectors
Returns:
(numpy array of floats) : Data matrix expressed in new basis
"""
# Project data onto new basis described by W
Y = X @ W
return Y
```
---
# Section 1: (Correlated) multivariate data
This notebook provides code to draw random samples from a zero-mean bivariate normal distribution with a specified covariance matrix. Throughout this tutorial, we'll imagine these samples represent the activity (firing rates) of two recorded neurons on different trials.
<details>
<summary> <font color='blue'>Click here if you are interested in details about the math behind multivariate normal distributions </font></summary>
To gain intuition, we will first use a simple model to generate multivariate data. Specifically, we will draw random samples from a *bivariate normal distribution*. This is an extension of the one-dimensional normal distribution to two dimensions, in which each $x_i$ is marginally normal with mean $\mu_i$ and variance $\sigma_i^2$:
\begin{align}
x_i \sim \mathcal{N}(\mu_i,\sigma_i^2).
\end{align}
Additionally, the joint distribution for $x_1$ and $x_2$ has a specified correlation coefficient $\rho$. Recall that the correlation coefficient is a normalized version of the covariance, and ranges between -1 and +1:
\begin{align}
\rho = \frac{\text{cov}(x_1,x_2)}{\sqrt{\sigma_1^2 \sigma_2^2}}.
\end{align}
For simplicity, we will assume that the mean of each variable has already been subtracted, so that $\mu_i=0$ for both $i=1$ and $i=2$. The remaining parameters can be summarized in the covariance matrix, which for two dimensions has the following form:
\begin{align}
{\bf \Sigma} =
\begin{pmatrix}
\text{var}(x_1) & \text{cov}(x_1,x_2) \\
\text{cov}(x_1,x_2) &\text{var}(x_2)
\end{pmatrix}.
\end{align}
In general, $\bf \Sigma$ is a symmetric matrix with the variances $\text{var}(x_i) = \sigma_i^2$ on the diagonal, and the covariances on the off-diagonal. Later, we will see that the covariance matrix plays a key role in PCA.
The covariance can be found by rearranging the equation above:
\begin{align}
\text{cov}(x_1,x_2) = \rho \sqrt{\sigma_1^2 \sigma_2^2}.
\end{align}
</details>
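As a quick worked example (using the `calculate_cov_matrix` helper from the setup cell), unit variances and a correlation coefficient of 0.8 give:
```python
# Covariance matrix for var_1 = var_2 = 1 and rho = 0.8
print(calculate_cov_matrix(1, 1, 0.8))
# expected output:
# [[1.  0.8]
#  [0.8 1. ]]
```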
## Interactive Demo 1: Effect of correlation on data in two dimensions
Executing the code cell below will enable you to change the correlation coefficient between data in two dimensions via an interactive slider. You should get a feel for how changing the correlation coefficient affects the geometry of the simulated data.
1. What effect do negative correlation coefficient values have?
2. What correlation coefficient results in a circular data cloud?
```python
# @markdown Execute this cell to enable widget
def _calculate_cov_matrix(var_1, var_2, corr_coef):
# Calculate the covariance from the variances and correlation
cov = corr_coef * np.sqrt(var_1 * var_2)
cov_matrix = np.array([[var_1, cov], [cov, var_2]])
return cov_matrix
@widgets.interact(corr_coef = widgets.FloatSlider(value=.2, min=-1, max=1, step=0.1))
def visualize_correlated_data(corr_coef=0):
variance_1 = 1
variance_2 = 1
# Compute covariance matrix
cov_matrix = _calculate_cov_matrix(variance_1, variance_2, corr_coef)
# Generate data with this covariance matrix
X = get_data(cov_matrix)
# Visualize
plot_data(X)
```
---
# Section 2: Define an orthonormal basis (a different set of dimensions)
Data can be represented in many ways using different bases. We will be using "orthonormal" bases: sets of mutually orthogonal vectors of unit length.
<details>
<summary> <font color='blue'>Click here if you are interested in some detail about the math </font></summary>
We will define a new orthonormal basis of vectors ${\bf u} = [u_1,u_2]$ and ${\bf w} = [w_1,w_2]$. Two vectors are orthonormal if:
1. They are orthogonal (i.e., their dot product is zero):
\begin{align}
{\bf u\cdot w} = u_1 w_1 + u_2 w_2 = 0
\end{align}
2. They have unit length:
\begin{align}
||{\bf u} || = ||{\bf w} || = 1
\end{align}
In two dimensions, it is easy to make an arbitrary orthonormal basis. All we need is a random vector ${\bf u}$, which we have normalized. If we now define the second basis vector to be ${\bf w} = [-u_2,u_1]$, we can check that both conditions are satisfied:
\begin{align}
{\bf u\cdot w} = - u_1 u_2 + u_2 u_1 = 0
\end{align}
and
\begin{align}
{|| {\bf w} ||} = \sqrt{(-u_2)^2 + u_1^2} = \sqrt{u_1^2 + u_2^2} = 1,
\end{align}
where we used the fact that ${\bf u}$ is normalized. So, with an arbitrary input vector, we can define an orthonormal basis, which we will write in matrix by stacking the basis vectors horizontally:
\begin{align}
{{\bf W} } =
\begin{pmatrix}
u_1 & w_1 \\
u_2 & w_2
\end{pmatrix}.
\end{align}
</details>
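A quick numerical check of this construction (using `define_orthonormal_basis` from the setup cell): the matrix ${\bf W}^\top {\bf W}$ should be the identity if the two basis vectors are orthonormal.
```python
# Verify that the constructed basis is orthonormal: W.T @ W should be ~identity
W_check = define_orthonormal_basis(np.array([3.0, 1.0]))
print(W_check.T @ W_check)
```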
```python
# @markdown Execute this cell to plot two orthonormal bases,
# @markdown with the first dimension based on a vector defined by `[x,y]`
# @widgets.interact(corr_coef = widgets.FloatSlider(value=.2, min=-1, max=1, step=0.1))
def visualize_orthonormal_bases(x=0,y=3):
u = [x,y]
# Get orthonomal basis
W = define_orthonormal_basis(u)
# Visualize
with plt.xkcd():
fig,ax = plt.subplots(figsize=[6, 6])
# ax = plt.subplot(1)
# plt.figure(figsize=[6, 6])
ax.axis('equal')
ax.plot([0, u[0]], [0, u[1]], color='k', linewidth=6,
label='Original vector')
ax.plot([0, W[0, 0]], [0, W[1, 0]], color='r', linewidth=3,
label='Basis vector 1')
ax.plot([0, W[0, 1]], [0, W[1, 1]], color='b', linewidth=3,
label='Basis vector 2')
ax.legend()
plt.show()
_ = widgets.interact(visualize_orthonormal_bases, x=(-3,3,0.2), y=(-3,3,0.2))
```
---
# Section 3: Project data onto new basis
Finally, we will express bivariate (2-dimensional) data ($\bf X$) in a new 2-dimensional basis defined as in Section 2. We can project the data into our new basis using ***matrix multiplication*** :
\begin{align}
{\bf Y = X W}.
\end{align}
We will explore the geometry of the transformed data $\bf Y$ as we vary the choice of basis.
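A minimal sketch of such a projection, using the helper functions from the setup cell (the specific variance and correlation values are chosen only for illustration):
```python
# Project correlated data onto a 45-degree orthonormal basis and check the result
X_example = get_data(calculate_cov_matrix(1, 1, 0.8))
W_example = define_orthonormal_basis(np.array([1.0, 1.0]))
Y_example = change_of_basis(X_example, W_example)
print('corr in new basis:', np.corrcoef(Y_example[:, 0], Y_example[:, 1])[0, 1])
```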
## Interactive Demo 3: Play with the basis vectors
To see what happens to the correlation between dimensions as we change the basis vectors, run the cell below.
The parameter corr_coef controls the correlation between the two original dimensions.
The parameter $\theta$ controls the angle of the first basis vector (red, $\bf u$) in degrees. The second basis vector will be orthogonal to the first. Use the slider to rotate the basis vectors (the new dimensions).
1. What happens to the projected data as you rotate the basis?
2. How does the correlation coefficient change? How does the variance of the projection onto each basis vector change?
3. Are you able to find a basis in which the projected data is **uncorrelated**?
```python
# @markdown Execute this cell to enable the widget
def refresh(corr_coef=0.5,theta=0):
# corr_coef=0.5
variance_1 = 1
variance_2 = 1
# Compute covariance matrix
cov_matrix = _calculate_cov_matrix(variance_1, variance_2, corr_coef)
# Generate data with this covariance matrix
X = get_data(cov_matrix)
u = [1, np.tan(theta * np.pi / 180)]
W = define_orthonormal_basis(u)
Y = change_of_basis(X, W)
# plot_basis_vectors(X, W)
# plot_data_new_basis(Y)
fig = plt.figure(figsize=[20, 10])
gs = fig.add_gridspec(4, 4)
ax1 = fig.add_subplot(gs[0, 0])
ax1.plot(X[:, 0], color='k')
plt.ylabel('Neuron 1')
plt.title('Sample var 1: {:.1f}'.format(np.var(X[:, 0])))
ax1.set_xticklabels([])
ax2 = fig.add_subplot(gs[1, 0])
ax2.plot(X[:, 1], color='k')
plt.xlabel('Sample Number')
plt.ylabel('Neuron 2')
plt.title('Sample var 2: {:.1f}'.format(np.var(X[:, 1])))
ax3 = fig.add_subplot(gs[0:2, 1])
ax3.plot(X[:, 0], X[:, 1], '.', markerfacecolor=[.5, .5, .5],
markeredgewidth=0)
ax3.plot([0, W[0, 0]], [0, W[1, 0]], color='r', linewidth=3,
label='Basis vector 1')
ax3.plot([0, W[0, 1]], [0, W[1, 1]], color='b', linewidth=3,
label='Basis vector 2')
ax3.axis('equal')
plt.xlabel('Neuron 1 activity')
plt.ylabel('Neuron 2 activity')
plt.title('Sample corr: {:.1f}'.format(np.corrcoef(X[:, 0], X[:, 1])[0, 1]))
ax4 = fig.add_subplot(gs[2, 0])
ax4.plot(Y[:, 0], 'r')
plt.xlabel
plt.ylabel('Projection \n basis vector 1')
plt.title('Sample var 1: {:.1f}'.format(np.var(Y[:, 0])))
ax4.set_xticklabels([])
ax5 = fig.add_subplot(gs[3, 0])
ax5.plot(Y[:, 1], 'b')
plt.xlabel('Sample number')
plt.ylabel('Projection \n basis vector 2')
plt.title('Sample var 2: {:.1f}'.format(np.var(Y[:, 1])))
ax6 = fig.add_subplot(gs[2:4, 1])
ax6.plot(Y[:, 0], Y[:, 1], '.', color=[.5, .5, .5])
ax6.axis('equal')
plt.xlabel('Projection basis vector 1')
plt.ylabel('Projection basis vector 2')
plt.title('Sample corr: {:.1f}'.format(np.corrcoef(Y[:, 0], Y[:, 1])[0, 1]))
plt.show()
_ = widgets.interact(refresh, corr_coef=(-1,1,0.1), theta=(-90, 90, 5))
```
---
# Summary
- In this tutorial, we learned that multivariate data can be visualized as a cloud of points in a high-dimensional vector space. The geometry of this cloud is shaped by the *covariance matrix*.
- Multivariate data can be represented in a new orthonormal basis using the dot product (matrix multiplication). These new basis vectors correspond to specific mixtures of the original variables - *for example, in neuroscience, they could represent different ratios of activation across a population of neurons*.
- The projected data (after transforming into the new basis) will generally have a different geometry from the original data. In particular, taking basis vectors that are aligned with the spread of cloud of points decorrelates the data.
* These concepts - covariance, projections, and orthonormal bases - are key for understanding PCA, which is a foundational computational tool in a wide variety of neuroscience research.
---
# Notation
\begin{align}
x_i &\quad \text{data point for dimension } i\\
\mu_i &\quad \text{mean along dimension } i\\
\sigma_i^2 &\quad \text{variance along dimension } i \\
\bf u, \bf w &\quad \text{orthonormal basis vectors}\\
\rho &\quad \text{correlation coefficient}\\
\bf \Sigma &\quad \text{covariance matrix}\\
\bf X &\quad \text{original data matrix}\\
\bf W &\quad \text{projection matrix}\\
\bf Y &\quad \text{transformed data}\\
\end{align}
---
This tutorial was written by Krista Perks for BIOL358 Motor Systems taught at Wesleyan University. Based on content from **Neuromatch Academy 2020: Week 1, Day 5: Dimensionality Reduction** by Alex Cayco Gajic, John Murray
| 8f62b2d84c4462256124646b707888ea7b132be9 | 27,054 | ipynb | Jupyter Notebook | Tutorial_GeometricViewOfData.ipynb | neurologic/MotorSystems_BIOL358_SP22 | ddec85c10e2bbc08a24cba0b6ff7b58466172b21 | [
"CC0-1.0"
] | null | null | null | Tutorial_GeometricViewOfData.ipynb | neurologic/MotorSystems_BIOL358_SP22 | ddec85c10e2bbc08a24cba0b6ff7b58466172b21 | [
"CC0-1.0"
] | null | null | null | Tutorial_GeometricViewOfData.ipynb | neurologic/MotorSystems_BIOL358_SP22 | ddec85c10e2bbc08a24cba0b6ff7b58466172b21 | [
"CC0-1.0"
] | null | null | null | 40.621622 | 432 | 0.505249 | true | 4,807 | Qwen/Qwen-72B | 1. YES
2. YES | 0.851953 | 0.76908 | 0.65522 | __label__eng_Latn | 0.9521 | 0.360627 |
```python
import numpy as np
import scipy
import sympy as sym
import pandas as pd
from scipy import linalg
from scipy import optimize
from scipy import interpolate
import matplotlib.pyplot as plt
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
sym.init_printing(use_unicode=True)
```
# 1. Human capital accumulation
Consider a worker living in **two periods**, $t \in \{1,2\}$.
In each period she decides whether to **work ($l_t = 1$) or not ($l_t = 0$)**.
She can *not* borrow or save and thus **consumes all of her income** in each period.
If she **works** her **consumption** becomes:
$$c_t = w h_t l_t\,\,\text{if}\,\,l_t=1$$
where $w$ is **the wage rate** and $h_t$ is her **human capital**.
If she does **not work** her consumption becomes:
$$c_t = b\,\,\text{if}\,\,l_t=0$$
where $b$ is the **unemployment benefits**.
Her **utility of consumption** is:
$$ \frac{c_t^{1-\rho}}{1-\rho} $$
Her **disutility of working** is:
$$ \gamma l_t $$
From period 1 to period 2, she **accumulates human capital** according to:
$$ h_2 = h_1 + l_1 +
\begin{cases}
0 & \text{with prob. }0.5 \\
\Delta & \text{with prob. }0.5
\end{cases} \\
$$
where $\Delta$ is a **stochastic experience gain**.
In the **second period** the worker thus solves:
$$
\begin{eqnarray*}
v_{2}(h_{2}) & = &\max_{l_{2}} \frac{c_2^{1-\rho}}{1-\rho} - \gamma l_2
\\ & \text{s.t.} & \\
c_{2}& = & w h_2 l_2 \\
l_{2}& \in &\{0,1\}
\end{eqnarray*}
$$
In the **first period** the worker thus solves:
$$
\begin{eqnarray*}
v_{1}(h_{1}) &=& \max_{l_{1}} \frac{c_1^{1-\rho}}{1-\rho} - \gamma l_1 + \beta\mathbb{E}_{1}\left[v_2(h_2)\right]
\\ & \text{s.t.} & \\
c_1 &=& w h_1 l_1 \\
h_2 &=& h_1 + l_1 + \begin{cases}
0 & \text{with prob. }0.5\\
\Delta & \text{with prob. }0.5
\end{cases}\\
l_{1} &\in& \{0,1\}\\
\end{eqnarray*}
$$
where $\beta$ is the **discount factor** and $\mathbb{E}_{1}\left[v_2(h_2)\right]$ is the **expected value of living in period two**.
The **parameters** of the model are:
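```python
# model parameters (the same values are used in the answer cells below)
rho = 2
beta = 0.96
gamma = 0.1
w = 2
b = 1
Delta = 0.1
```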
The **relevant levels of human capital** are:
```python
h_vec = np.linspace(0.1,1.5,100)
```
**Question 1:** Solve the model in period 2 and illustrate the solution (including labor supply as a function of human capital).
**Answer to Q1**
We start by defining the consumption function, which depends on whether or not she is working. If she is not working ($l=0$), she consumes $c=b$; if she is working ($l=1$), she consumes $c=w h$.
Afterwards we define the utility function and the disutility of working (called `bequest` in the code below): the utility is the first term in $v_2(h_2)$ defined in the text, and the disutility $\gamma l_2$ is the second term.
```python
l = [0, 1]
def consumption(w,h,b,l):
if l == 0:
c = b
elif l == 1:
c = w*h
return c
def utility(rho,h,w,b,l):
return consumption(w,h,b,l)**(1-rho)/(1-rho)
def bequest(l,gamma):
return gamma*l
def v2(l2,h2,w,b,rho,gamma):
return utility(rho,h2,w,b,l2) - bequest(l2,gamma)
```
```python
rho = 2
beta = 0.96
gamma = 0.1
w = 2
b = 1
Delta = 0.1
```
We are now ready to solve the model. We do that by maximizing the utility given a level of human capital.
```python
def solve_period_2(w,b,rho,gamma):
# h2_vec is the same as h_vec
# we then define an array without any entries, we define 100 of those for both l2 and v2.
h2_vec = np.linspace(0.1,1.5,100)
l2_vec = np.empty(100)
v2_vec = np.empty(100)
# Solve for each h2 in grid
for i,h2 in enumerate(h2_vec):
# We calculate the utility when working, that is v2_Yes and when not working (v2_No).
v2_No = v2(l[0],h2,w,b,rho,gamma)
v2_Yes = v2(l[1],h2,w,b,rho,gamma)
# We then find the utility maximizing level of labor.
v2_vec[i] = max(v2_No,v2_Yes) #the vector is the maximum of the utility of either working or not working
l2_vec[i] = v2_Yes>v2_No
return h2_vec,v2_vec,l2_vec
```
```python
h2_vec,v2_vec,l2_vec = solve_period_2(w,b,rho,gamma);
```
```python
solution = np.where(l2_vec == 1)[0][0];
print(str(round(h2_vec[solution],3)))
```
0.567
We have the solution $h_2^*=0.567$: she chooses to work whenever $h_2$ is greater than or equal to $0.567$
```python
fig = plt.figure(figsize=(20,4))
ax = fig.add_subplot(1,3,3)
ax.plot(h2_vec,l2_vec)
ax.grid()
ax.set_xlabel('$Human\ capital, period 2$')
ax.set_ylabel('$Labor, period 2$')
ax.set_xlim([0.0,1.2])
ax.plot(0.555567,0.5,marker='.')
```
**Question 2:** Solve the model in period 1 and illustrate the solution (including labor supply as a function of human capital).
**Answer to Q2**
To solve this, we start out by constructing an interpolator, since the values in period 1 will depend on the values from period 2.
```python
v2_interp = interpolate.RegularGridInterpolator([h2_vec], v2_vec,bounds_error=False,fill_value=None)
```
We can now define the utility in period 1, which will depend on the value of v2 in period 2.
```python
def v1(l1,h1,w,b,rho,gamma,beta,Delta,v2_interp):
# The level of v2 when gain is 0 with probability 50%
h2_zero = h1 + l1
v2_zero = v2_interp([h2_zero])[0]
# The level of v2 when gain is Delta with probability 50%
h2_delta = h1 + l1 + Delta
v2_delta = v2_interp([h2_delta])[0]
# We know calculate the expected value of v2
v2_expected = 0.5 * v2_zero + 0.5 * v2_delta
# And at last we get the total utility
return utility(rho,h1,w,b,l1) - bequest(l1,gamma) + beta*v2_expected
```
From here we do it the same way as we did in question 1.
```python
def solve_period_1(w,b,rho,gamma,Delta,beta,v2_interp):
# This is done in the same way as in question 1:
h1_vec = np.linspace(0.1,1.5,100)
l1_vec = np.empty(100)
v1_vec = np.empty(100)
for i,h1 in enumerate(h1_vec):
v1_No = v1(l[0],h1,w,b,rho,gamma,beta,Delta,v2_interp)
v1_Yes = v1(l[1],h1,w,b,rho,gamma,beta,Delta,v2_interp)
v1_vec[i] = max(v1_No,v1_Yes)
l1_vec[i] = v1_Yes > v1_No
return h1_vec,v1_vec,l1_vec
```
```python
h1_vec,v1_vec,l1_vec = solve_period_1(w,b,rho,gamma,Delta,beta,v2_interp)
```
```python
solution = np.where(l1_vec == 1)[0][0];
print(str(round(h1_vec[solution],3)))
```
0.355
We have that $h_1^*=0.355$
So she will decide to work in period 1 if her human capital is greater than or equal to $0.355$. The figure below illustrates labor supply as a function of human capital.
```python
fig = plt.figure(figsize=(20,4))
ax = fig.add_subplot(1,3,3)
ax.plot(h1_vec,l1_vec)
ax.grid()
ax.set_xlabel('$Human\ capital, period 1$')
ax.set_ylabel('$Labor, period 1$')
ax.set_xlim([0.0,1.0])
ax.plot(0.3445,0.5,marker='.')
```
**Question 3:** Will the worker never work if her potential wage income is lower than the unemployment benefits she can get? Explain and illustrate why or why not.
**Answer to Q3**
The worker might work in period 1 even though her potential wage income, $w h_1$, is lower than the unemployment benefit $b$. The reason is human capital accumulation: working raises next period's human capital by one unit, since $h_2 = h_1 + l_1$ plus the stochastic experience gain, and thereby raises her period-2 wage income $w h_2$. If this dynamic gain is large enough, it outweighs the lower consumption and the disutility of working in period 1, so she may choose to work.
```python
def v1NO(l1,h1,w,b,rho,gamma,beta,v2_interp):
    # Human capital and value next period when there is no stochastic gain (h2 = h1 + l1 with probability 1)
h2_zero = h1 + l1
v2_zero = v2_interp([h2_zero])[0]
# d. total value
return utility(rho,h1,w,b,l1) - bequest(l1,gamma) + beta*v2_zero
```
```python
def solve_period_1NO(w,b,rho,gamma,Delta,beta,v2_interp):
# a. grids
h1NO_vec = np.linspace(0.1,1.5,100)
l1NO_vec = np.empty(100)
v1NO_vec = np.empty(100)
# b. solve for each h1 in grid
    for i,h1 in enumerate(h1NO_vec):
# We now calculate the utility when working (v1_1) and not working (v1_0)
v1NO_0 = v1NO(l[0],h1,w,b,rho,gamma,beta,v2_interp)
v1NO_1 = v1NO(l[1],h1,w,b,rho,gamma,beta,v2_interp)
v1NO_vec[i] = max(v1NO_0,v1NO_1)
l1NO_vec[i] = v1NO_1 > v1NO_0
return h1NO_vec,v1NO_vec,l1NO_vec
```
The idea is to compare the value function defined in question two with a value function where there is no stochastic experience gain (so only $h_1+l_1$ with probability 1). Comparing how the two differ helps explain why she might sometimes choose to work even though she would consume less in the first period; a sketch of the comparison is given below.
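A minimal sketch of that comparison, assuming the functions and arrays defined above (`solve_period_1`, `solve_period_1NO`, `v2_interp`, `h1_vec` and `v1_vec`):
```python
# Compare the period-1 value function with and without the stochastic gain
h1NO_vec, v1NO_vec, l1NO_vec = solve_period_1NO(w,b,rho,gamma,Delta,beta,v2_interp)

fig = plt.figure(figsize=(6,4))
ax = fig.add_subplot(1,1,1)
ax.plot(h1_vec, v1_vec, label='with stochastic gain')
ax.plot(h1NO_vec, v1NO_vec, '--', label='without stochastic gain')
ax.grid()
ax.legend()
ax.set_xlabel('Human capital, period 1')
ax.set_ylabel('$v_1$')
```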
# AS-AD model
Consider the following **AS-AD model**. The **goods market equilibrium** is given by
$$ y_{t} = -\alpha r_{t} + v_{t} $$
where $y_{t}$ is the **output gap**, $r_{t}$ is the **ex ante real interest** and $v_{t}$ is a **demand disturbance**.
The central bank's **Taylor rule** is
$$ i_{t} = \pi_{t+1}^{e} + h \pi_{t} + b y_{t}$$
where $i_{t}$ is the **nominal interest rate**, $\pi_{t}$ is the **inflation gap**, and $\pi_{t+1}^{e}$ is the **expected inflation gap**.
The **ex ante real interest rate** is given by
$$ r_{t} = i_{t} - \pi_{t+1}^{e} $$
Together, the above implies that the **AD-curve** is
$$ \pi_{t} = \frac{1}{h\alpha}\left[v_{t} - (1+b\alpha)y_{t}\right]$$
Further, assume that the **short-run supply curve (SRAS)** is given by
$$ \pi_{t} = \pi_{t}^{e} + \gamma y_{t} + s_{t}$$
where $s_t$ is a **supply disturbance**.
**Inflation expectations are adaptive** and given by
$$ \pi_{t}^{e} = \phi\pi_{t-1}^{e} + (1-\phi)\pi_{t-1}$$
Together, this implies that the **SRAS-curve** can also be written as
$$ \pi_{t} = \pi_{t-1} + \gamma y_{t} - \phi\gamma y_{t-1} + s_{t} - \phi s_{t-1} $$
The **parameters** of the model are:
```python
par = {}
par['alpha'] = 5.76
par['h'] = 0.5
par['b'] = 0.5
par['phi'] = 0
par['gamma'] = 0.075
#Defining symbols
pi_t = sym.symbols('pi_t')
pi_t2 = sym.symbols('pi_t-1')
y_t = sym.symbols('y_t')
y_t2 = sym.symbols('y_t-1')
v_t = sym.symbols('v_t')
s_t = sym.symbols('s_t')
s_t2 = sym.symbols('s_t-1')
alpha = sym.symbols('alpha')
gamma = sym.symbols('gamma')
phi = sym.symbols('phi')
h = sym.symbols('h')
b = sym.symbols('b')
```
**Question 1:** Use the ``sympy`` module to solve for the equilibrium values of output, $y_t$, and inflation, $\pi_t$, (where AD = SRAS) given the parameters ($\alpha$, $h$, $b$, $\phi$, $\gamma$) and $y_{t-1}$, $\pi_{t-1}$, $v_t$, $s_t$, and $s_{t-1}$.
**Answer for Q1**
First, one starts by defining the AD and SRAS curves, and then solves AD = SRAS with the sympy solver for both $y_t$ and $\pi_t$.
```python
#Defining AD-curve
AD = sym.Eq(pi_t,(1/(h*alpha))*(v_t-(1+b*alpha)*y_t))
#Defining SRAS-curve
SRAS = sym.Eq(pi_t,pi_t2+gamma*y_t-phi*gamma*y_t2+s_t-phi*s_t2)
#Setting AD equal to SRAS and solving for equilibrium values of output, y*.
eq_1= sym.solve([AD, SRAS], [y_t, pi_t])
eq_y=eq_1[y_t]
print('The equilibrium for output y is:')
eq_y
```
```python
eq_pi=eq_1[pi_t]
print('The equilibrium for output pi is:')
eq_pi
```
```python
#lambdifying for later use
pi_func = sym.lambdify((pi_t2, y_t2, v_t, s_t, s_t2, alpha, h, b, phi, gamma), eq_pi)
y_func = sym.lambdify((pi_t2, y_t2, v_t, s_t, s_t2, alpha, h, b, phi, gamma), eq_y)
```
Now, one can just set the parameter values and find the equilibrium through the same procedure as above.
```python
#Setting parameter values
alpha = par['alpha']
h = par['h']
b = par['b']
phi = par['phi']
gamma = par['gamma']
#Resolving for equilibrium
AD_SRAS_val=sym.Eq((1/(h*alpha))*(v_t-(1+b*alpha)*y_t),pi_t2+gamma*y_t-phi*gamma*y_t2+s_t-phi*s_t2)
eq_y_val = sym.solve(AD_SRAS_val,y_t)
print('The equilibrium for output y given parameter values is:')
eq_y_val
```
```python
eq_pi_val=1/(h*alpha)*(v_t-(1+b*alpha)*eq_y_val[0])
print('The equilibrium for output pi given parameter values is:')
eq_pi_val
```
**Question 2:** Find and illustrate the equilibrium when $y_{t-1} = \pi_{t-1} = v_t = s_t = s_{t-1} = 0$. Illustrate how the equilibrium changes when instead $v_t = 0.1$.
**Answer to Q2**
To find the equilibrium when $y_{t-1} = \pi_{t-1} = v_t = s_t = s_{t-1} = 0$, one sets the new values for the variables and re-solves the same problem as before.
```python
#Setting variable values - parameter is similar to previous questions
pi_t2 = 0
y_t2 = 0
v_t = 0
s_t = 0
s_t2 = 0
#Solving for equilibrium given variable values
AD_SRAS_param=sym.Eq((1/(h*alpha))*(v_t-(1+b*alpha)*y_t),pi_t2+gamma*y_t-phi*gamma*y_t2+s_t-phi*s_t2)
eq_y_param = sym.solve(AD_SRAS_param,y_t)
eq_pi_param=1/(h*alpha)*(v_t-(1+b*alpha)*eq_y_param[0])
print('The equilibrium for output y given parameter and variable values is: %8.3f' % eq_y_param[0])
print('The equilibrium for output pi given parameter and variable values is: %8.3f' % eq_pi_param)
```
The equilibrium for output y given parameter and variable values is: 0.000
The equilibrium for output pi given parameter and variable values is: 0.000
With the assumption $y_{t-1} = \pi_{t-1} = v_t = s_t = s_{t-1} = 0$, one can see that $\pi_t=y_t=0$ in equilibrium.
Now, one can investigate how the equilibrium changes when $v_t=0.1$
```python
#Solving for equilibrium given v_t=0.1
v_t2 = 0.1
AD_SRAS_param2=sym.Eq((1/(h*alpha))*(v_t2-(1+b*alpha)*y_t),pi_t2+gamma*y_t-phi*gamma*y_t2+s_t-phi*s_t2)
eq_y_param2 = sym.solve(AD_SRAS_param2,y_t)
eq_pi_param2=1/(h*alpha)*(v_t2-(1+b*alpha)*eq_y_param2[0])
print('The equilibrium for output y given parameter and variable values is: %8.4f' % eq_y_param2[0])
print('The equilibrium for output pi given parameter and variable values is: %8.4f' % eq_pi_param2)
```
The equilibrium for output y given parameter and variable values is: 0.0244
The equilibrium for output pi given parameter and variable values is: 0.0018
In the case of $v_t=0.1$ both $\pi_t$ and $y_t$ become positive, but $y_t^*>\pi_t^*$
Now, let's illustrate the equilibria. First, one creates vectors of the equilibrium values of $y_t$ and $\pi_t$, respectively. The next step is to construct an illustration of the equilibrium values for the given values of $v_t$.
```python
#Creating vector with equilibrium values for y_t and pi_t for given values of v_t
## Vector with equilibrium values for y
y_values=[]
y_values.append(eq_y_param[0])
y_values.append(eq_y_param2[0])
## Vector with equilibrium values for pi
pi_values=[]
pi_values.append(eq_pi_param)
pi_values.append(eq_pi_param2)
```
```python
#Creating illustration of the equilibrium values
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.scatter(pi_values[0], y_values[0], label='$v_t=0$')
ax.scatter(pi_values[1], y_values[1], label='$v_t=0.1$')
#Setting up illustration
ax.grid()
ax.legend()
ax.set_ylabel('$y_t^*$')
ax.set_xlabel('$\pi_t^*$')
ax.set_title('Equilibrium values for given values of $v_t$')
```
**Persistent disturbances:** Now, additionaly, assume that both the demand and the supply disturbances are AR(1) processes
$$ v_{t} = \delta v_{t-1} + x_{t} $$
$$ s_{t} = \omega s_{t-1} + c_{t} $$
where $x_{t}$ is a **demand shock**, and $c_t$ is a **supply shock**. The **autoregressive parameters** are:
```python
par['delta'] = 0.80
par['omega'] = 0.15
delta = par['delta']
omega = par['omega']
```
**Question 3:** Starting from $y_{-1} = \pi_{-1} = s_{-1} = 0$, how does the economy evolve for $x_0 = 0.1$, $x_t = 0, \forall t > 0$ and $c_t = 0, \forall t \geq 0$?
**Answer to Q3**
First, one defines the demand and supply disturbances as functions, sets the values of the shocks and the number of periods, and then simulates the model to investigate how the economy evolves given $x_0 = 0.1$, $x_t = 0, \forall t > 0$ and $c_t = 0, \forall t \geq 0$. This is also the order of the solution below.
```python
#Defining demand and supply disturbances and vectors
v_t2 = sym.symbols('v_t2')
s_t2 = sym.symbols('s_t2')
v = lambda v_t2,x: delta*v_t2 + x
s = lambda s_t2,c: omega*s_t2 + c
v_vector = [0]
s_vector = [0]
#Defining amount of periods for simulation, demand and supply shock vectors.
T = 200
x = np.zeros(T)
c = np.zeros(T)
x[1] = 0.1
```
Now, one can run a loop to simulate the model for T periods.
```python
y_output = [0]
pi_inflation = [0]
#Running simulation of the model
for t in range (1,T):
v_vector.append(v(v_vector[t-1], x[t]))
s_vector.append(s(s_vector[t-1], c[t]))
```
```python
for t in range(1,T):
y_output.append(y_func(pi_inflation[t-1], y_output[t-1], v_vector[t], s_vector[t], s_vector[t-1], alpha, h, b, phi, gamma))
pi_inflation.append(pi_func(pi_inflation[t-1], y_output[t-1], v_vector[t], s_vector[t], s_vector[t-1], alpha, h, b, phi, gamma))
```
```python
#Creating illustration of the equilibrium values
fig = plt.figure(figsize= (10,5))
bx = fig.add_subplot(1,1,1)
bx.plot(range(0,200,1), y_output, label = 'Output')
bx.plot(range(0,200,1), pi_inflation, label = 'Inflation')
#Setting up illustration
bx.legend()
bx.grid()
bx.set_xlabel('t') #
bx.set_ylabel('$y_t, \pi_t$')
bx.set_title('Output and inflation gap for 200 periods')
plt.xticks(range(0, 201, 50))
plt.show()
```
One can see that the output gap converges faster than the inflation gap after the shock to $v_t$. Additionally, the output gap reacts immediately to the shock, whereas the inflation gap responds more slowly to the demand shock.
**Stochastic shocks:** Now, additionally, assume that $x_t$ and $c_t$ are stochastic and normally distributed
$$ x_{t}\sim\mathcal{N}(0,\sigma_{x}^{2}) $$
$$ c_{t}\sim\mathcal{N}(0,\sigma_{c}^{2}) $$
The **standard deviations of the shocks** are:
```python
par['sigma_x'] = 3.492
par['sigma_c'] = 0.2
```
**Question 4:** Simulate the AS-AD model for 1,000 periods. Calculate the following five statistics:
1. Variance of $y_t$, $var(y_t)$
2. Variance of $\pi_t$, $var(\pi_t)$
3. Correlation between $y_t$ and $\pi_t$, $corr(y_t,\pi_t)$
4. Auto-correlation between $y_t$ and $y_{t-1}$, $corr(y_t,y_{t-1})$
5. Auto-correlation between $\pi_t$ and $\pi_{t-1}$, $corr(\pi_t,\pi_{t-1})$
**Answer Q4**
```python
par['sigma_x'] = 3.492
par['sigma_c'] = 0.2
sigma_x = par['sigma_x']
sigma_c = par['sigma_c']
```
The following is essentially the same as the simulation from **Q3**, except that the shocks are now stochastic and normally distributed. First, one defines the stochastic shocks and then runs the simulation.
```python
#Defining stochastic shocks
np.random.seed(127)
T=1000
x = np.random.normal(loc = 0, scale=sigma_x, size =T)
c = np.random.normal(loc = 0, scale=sigma_c, size =T)
```
Now, one can start simulating the model for which a function, simulate, is defined.
```python
#Defining simulating function
def simulate(T,phi_var):
#Defining empty lists / Assuming y_{-1} = \pi_{-1} = s_{-1} = 0
v_vector2 = [0]
s_vector2 = [0]
pi_inflation2 = [0]
y_output2 = [0]
for t in range(1,T):
v_vector2.append(v(v_vector2[t-1], x[t]))
s_vector2.append(s(s_vector2[t-1], c[t]))
#new output and inflation
y_output2.append(y_func(pi_inflation2[t-1], y_output2[t-1], v_vector2[t], s_vector2[t], s_vector2[t-1], alpha, h, b, phi_var, gamma))
pi_inflation2.append(pi_func(pi_inflation2[t-1], y_output2[t-1], v_vector2[t], s_vector2[t], s_vector2[t-1], alpha, h, b, phi_var, gamma))
#Converting to numpy arrays
y_output_sol=np.array(y_output2)
pi_inflation_sol=np.array(pi_inflation2)
return y_output_sol, pi_inflation_sol
#Simulation of the model
y_output_sol, pi_inflation_sol = simulate(T,phi)
#Printing the results
print('Variance of y is %8.3f' % y_output_sol.var())
print('Variance of pi is %8.3f' % pi_inflation_sol.var())
print('Correlation between y and $\pi$ is %8.3f' % np.corrcoef(y_output_sol, pi_inflation_sol)[1,0])
print('Auto-correlation between $y_t$ and $y_t-1$ is %8.3f' % np.corrcoef(y_output_sol[1:], y_output_sol[:-1])[1,0])
print('Auto-correlation between $pi_t$ and $\pi_t-1$ is %8.3f' % np.corrcoef(pi_inflation_sol[1:], pi_inflation_sol[:-1])[1,0])
```
Variance of y is 1.993
Variance of pi is 1.039
Correlation between y and $\pi$ is -0.167
Auto-correlation between $y_t$ and $y_t-1$ is 0.775
Auto-correlation between $pi_t$ and $\pi_t-1$ is 0.980
As expected, $y$ has a larger variance than $\pi$ due to the demand shocks. The negative correlation between $y$ and $\pi$ reflects that the inflation gap tends to decrease when the output gap increases.
**Question 5:** Plot how the correlation between $y_t$ and $\pi_t$ changes with $\phi$. Use a numerical optimizer or root finder to choose $\phi\in(0,1)$ such that the simulated correlation between $y_t$ and $\pi_t$ comes close to 0.31.
### Answer to Q5
To plot the correlation between inflation and output, one needs to simulate output and inflation for different values of $\phi$. The simulation from **Q4** is reused, and the resulting correlations are saved for plotting.
```python
#Defining new phi values and empty lists for simulation / Assuming y_{-1} = \pi_{-1} = s_{-1} = 0
phi_new = np.linspace(0,1,10)
y_sim = {}
pi_sim = {}
corr_ypi = []
```
```python
#Simulation with changes in phi
for i,p in enumerate(phi_new):
pi_sim['pi_inflation3_%s' % i], y_sim['y_output3_%s' % i] = simulate(T, phi_new[i])
corr_ypi.append(np.corrcoef(y_sim['y_output3_%s' %i], pi_sim['pi_inflation3_%s' % i])[1,0])
```
One can now illustrate the correlation for different values of $\phi$. This is performed below.
```python
#Creating illustration of the equilibrium values
fig = plt.figure(figsize= (10,5))
cx = fig.add_subplot(1, 1, 1)
cx.plot(phi_new, corr_ypi, label = 'Correlation')
#Setting up illustration
cx.grid()
cx.legend()
cx.set_ylabel('Correlation between $y$ and $\pi$')
cx.set_xlabel('$\phi$')
cx.set_title('Correlation for different values of $\phi$')
plt.show()
```
One can see from the illustration that for $corr(y_t,\pi_t)=0.31$ the value of $\phi$ must be close to 1. To find it precisely one can use the optimize module from the scipy package: define an objective, solve it with a root finder, and finally print the result.
```python
#Objective
obj = lambda phi: np.corrcoef(simulate(T, phi)[0], simulate(T, phi)[1])[1,0] - 0.31
#Optimizing
phi_opt = optimize.root_scalar(obj, x0 = 0.8, bracket = [0,1], method = 'bisect')
#Result
sol = phi_opt.flag
phi_opt = phi_opt.root
print('Optimal values of phi to find corr(y_t,pi_t)=0.31: %8.3f' % phi_opt)
print('Correlation for optimal phi: %8.3f' % np.corrcoef(simulate(T, phi_opt)[0], simulate(T, phi_opt)[1])[1,0])
```
Optimal values of phi to find corr(y_t,pi_t)=0.31: 0.983
Correlation for optimal phi: 0.310
**Question 6:** Use a numerical optimizer to choose $\sigma_x>0$, $\sigma_c>0$ and $\phi\in(0,1)$ to make the simulated statistics as close as possible to US business cycle data where:
1. $var(y_t) = 1.64$
2. $var(\pi_t) = 0.21$
3. $corr(y_t,\pi_t) = 0.31$
4. $corr(y_t,y_{t-1}) = 0.84$
5. $corr(\pi_t,\pi_{t-1}) = 0.48$
**Answer to Q6**:
We have tried to come up with a solution, but without luck. The idea of the attempt below is to simulate the model for a candidate $(\phi, \sigma_x, \sigma_c)$, compute the five statistics, and minimize the sum of squared deviations from the US business-cycle targets.
```python
#Defining simulation similar to the one above
def simulate3(params):
    #Unpacking the parameters chosen by the optimizer
    phi_var, sigma_x_var, sigma_c_var = params
    #Assuming y_{-1} = \pi_{-1} = s_{-1} = 0
    v_vector3 = [0]
    s_vector3 = [0]
    pi_inflation3 = [0]
    y_output3 = [0]
    #Disturbances
    np.random.seed(3)
    x = np.random.normal(loc=0, scale=sigma_x_var, size=T)
    c = np.random.normal(loc=0, scale=sigma_c_var, size=T)
    #Simulation
    for t in range(1,T):
        v_vector3.append(v(v_vector3[t-1], x[t]))
        s_vector3.append(s(s_vector3[t-1], c[t]))
        #new output and inflation
        y_output3.append(y_func(pi_inflation3[t-1], y_output3[t-1], v_vector3[t], s_vector3[t], s_vector3[t-1], alpha, h, b, phi_var, gamma))
        pi_inflation3.append(pi_func(pi_inflation3[t-1], y_output3[t-1], v_vector3[t], s_vector3[t], s_vector3[t-1], alpha, h, b, phi_var, gamma))
    #Converting to numpy arrays
    y_output_sol3 = np.array(y_output3)
    pi_inflation_sol3 = np.array(pi_inflation3)
    #Statistics
    y_var = np.var(y_output_sol3)
    pi_var = np.var(pi_inflation_sol3)
    corr_ypi = np.corrcoef(y_output_sol3, pi_inflation_sol3)[1,0]
    y_ac = np.corrcoef(y_output_sol3[1:], y_output_sol3[:-1])[1,0]
    pi_ac = np.corrcoef(pi_inflation_sol3[1:], pi_inflation_sol3[:-1])[1,0]
    #Squared differences from the US business cycle data (squared to avoid negative values)
    y_diff = (y_var - 1.64)**2
    pi_diff = (pi_var - 0.21)**2
    corr_ypi_diff = (corr_ypi - 0.31)**2
    y_ac_diff = (y_ac - 0.84)**2
    pi_ac_diff = (pi_ac - 0.48)**2
    #Sum of differences
    sum_diff = sum([y_diff, pi_diff, corr_ypi_diff, y_ac_diff, pi_ac_diff])
    return sum_diff

#Defining objective, guess and bounds for optimizer
obj = lambda params: simulate3(params)
x0 = (phi_opt, sigma_x, sigma_c)
bounds = [[0,1], [0,100], [0,100]]
#Solving the problem with the scipy minimizer
sol = optimize.minimize(obj, x0=x0, method='L-BFGS-B', bounds=bounds)
```
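If the optimizer converges, the implied parameter values and the remaining loss can be read off the result object (a sketch, assuming the cell above has been run):
```python
# Read off the fitted parameters from the scipy OptimizeResult
phi_fit, sigma_x_fit, sigma_c_fit = sol.x
print(f'phi = {phi_fit:.3f}, sigma_x = {sigma_x_fit:.3f}, sigma_c = {sigma_c_fit:.3f}')
print(f'objective (sum of squared deviations): {sol.fun:.4f}')
```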
# 3. Exchange economy
Consider an **exchange economy** with
1. 3 goods, $(x_1,x_2,x_3)$
2. $N$ consumers indexed by \\( j \in \{1,2,\dots,N\} \\)
3. Preferences are Cobb-Douglas with log-normally distributed coefficients
$$ \begin{eqnarray*}
u^{j}(x_{1},x_{2},x_{3}) &=&
\left(x_{1}^{\beta_{1}^{j}}x_{2}^{\beta_{2}^{j}}x_{3}^{\beta_{3}^{j}}\right)^{\gamma}\\
& & \,\,\,\beta_{i}^{j}=\frac{\alpha_{i}^{j}}{\alpha_{1}^{j}+\alpha_{2}^{j}+\alpha_{3}^{j}} \\
& & \,\,\,\boldsymbol{\alpha}^{j}=(\alpha_{1}^{j},\alpha_{2}^{j},\alpha_{3}^{j}) \\
& & \,\,\,\log(\boldsymbol{\alpha}^j) \sim \mathcal{N}(\mu,\Sigma) \\
\end{eqnarray*} $$
4. Endowments are exponentially distributed,
$$
\begin{eqnarray*}
\boldsymbol{e}^{j} &=& (e_{1}^{j},e_{2}^{j},e_{3}^{j}) \\
& & e_i^j \sim f, f(z;\zeta) = 1/\zeta \exp(-z/\zeta)
\end{eqnarray*}
$$
Let $p_3 = 1$ be the **numeraire**. The implied **demand functions** are:
$$
\begin{eqnarray*}
x_{i}^{\star j}(p_{1},p_{2},\boldsymbol{e}^{j})&=&\beta^{j}_i\frac{I^j}{p_{i}} \\
\end{eqnarray*}
$$
where consumer $j$'s income is
$$I^j = p_1 e_1^j + p_2 e_2^j +p_3 e_3^j$$
The **parameters** and **random preferences and endowments** are given by:
```python
# a. parameters
N = 50000
mu = np.array([3,2,1])
Sigma = np.array([[0.25, 0, 0], [0, 0.25, 0], [0, 0, 0.25]])
gamma = 0.8
zeta = 1
# b. random draws
seed = 1986
np.random.seed(seed)
# preferences
alphas = np.exp(np.random.multivariate_normal(mu, Sigma, size=N))
betas = alphas/np.reshape(np.sum(alphas,axis=1),(N,1))
# endowments
e1 = np.random.exponential(zeta,size=N)
e2 = np.random.exponential(zeta,size=N)
e3 = np.random.exponential(zeta,size=N)
```
**Question 1:** Plot the histograms of the budget shares for each good across agents.
Consider the **excess demand functions:**
$$ z_i(p_1,p_2) = \sum_{j=1}^N x_{i}^{\star j}(p_{1},p_{2},\boldsymbol{e}^{j}) - e_i^j$$
**Answer to Q1**
```python
# Plotting histogram of budget shares
# Plotting histogram of each beta with 200 bins.
fig = plt.figure(figsize=(12,5))
ax = fig.add_subplot(1,3,1)
ax.hist(betas[:,0],bins=200,histtype='bar')
ax.set_xlabel('$beta_1$')
ax.set_ylabel('Consumers')
ax.set_title('Good 1')
ax.set_ylim([0, 1250])
ax = fig.add_subplot(1,3,2)
ax.hist(betas[:,1],bins=200,histtype='bar')
ax.set_xlabel('$beta_2$')
ax.set_title('Good 2')
ax.set_ylim([0, 1250])
ax = fig.add_subplot(1,3,3)
ax.hist(betas[:,2],bins=200,histtype='bar')
ax.set_xlabel('$beta_3$')
ax.set_title('Good 3')
ax.set_ylim([0, 1250]);
```
**Question 2:** Plot the excess demand functions.
**Answer to Q2**
```python
#We first define the demand functions, as we set p3=1 we have just excluded that term.
def demand_good1(p1,p2,e1,e2,e3,betas):
I = e1*p1+e2*p2+e3
return betas[:,0]*I/p1
def demand_good2(p1,p2,e1,e2,e3,betas):
I = e1*p1+e2*p2+e3
return betas[:,1]*I/p2
def demand_good3(p1,p2,e1,e2,e3,betas):
I = e1*p1+e2*p2+e3
return betas[:,2]*I
#We then define the excess demand functions, and supply functions for good 1 and good 2.
def excess1(p1,p2,e1,e2,e3,betas):
demand = np.sum(demand_good1(p1,p2,e1,e2,e3,betas))
supply = np.sum(e1)
excess1 = demand - supply
return excess1
def excess2(p1,p2,e1,e2,e3,betas):
demand = np.sum(demand_good2(p1,p2,e1,e2,e3,betas))
supply = np.sum(e2)
excess2 = demand - supply
return excess2
# We then define an array of prices
p1_s = np.linspace(0.1,10,100)
p2_s = np.linspace(0.1,10,100)
# Creating grids for excess demand
excess_grid1 = np.empty((100,100))
excess_grid2 = np.empty((100,100))
# And then we transform our price-vectors into grids using meshgrid
p1_grid, p2_grid = np.meshgrid(p1_s,p2_s,indexing='ij')
# We can then calculate excess demand for each combination of prices:
for i,p1 in enumerate(p1_s):
for j,p2 in enumerate(p2_s):
excess_grid1[i,j] = excess1(p1,p2,e1,e2,e3,betas)
excess_grid2[i,j] = excess2(p1,p2,e1,e2,e3,betas)
```
```python
# We can now plot our results
fig = plt.figure(figsize=(10,4))
ax = fig.add_subplot(1,2,1, projection='3d')
ax.plot_surface(p1_grid, p2_grid, excess_grid1)
ax.invert_xaxis()
ax.set_title('Excess demand, good 1')
ax.set_xlabel('$p_1$')
ax.set_ylabel('$p_2$')
ax = fig.add_subplot(1,2,2, projection='3d')
ax.plot_surface(p1_grid, p2_grid, excess_grid2)
ax.invert_xaxis()
ax.set_title('Excess demand, good 2')
ax.set_xlabel('$p_1$')
ax.set_ylabel('$p_2$');
```
**Question 3:** Find the Walras-equilibrium prices, $(p_1,p_2)$, where both excess demands are (approximately) zero, e.g. by using the following tâtonnement process:
1. Guess on $p_1 > 0$, $p_2 > 0$ and choose tolerance $\epsilon > 0$ and adjustment aggressivity parameter, $\kappa > 0$.
2. Calculate $z_1(p_1,p_2)$ and $z_2(p_1,p_2)$.
3. If $|z_1| < \epsilon$ and $|z_2| < \epsilon$ then stop.
4. Else set $p_1 = p_1 + \kappa \frac{z_1}{N}$ and $p_2 = p_2 + \kappa \frac{z_2}{N}$ and return to step 2.
**Answer to Q3**
```python
def tatonnement(p1,p2,e1,e2,e3,betas,tol=1e-7,kappa=0.2,prints=True):
# Calculating initial excess demands
z_1 = excess1(p1,p2,e1,e2,e3,betas)
z_2 = excess2(p1,p2,e1,e2,e3,betas)
# Setting iteration counter to 1
t = 1
# We start the loop here, we will have a maximum of 20.000 loops
while t < 20000:
# We calculate the initial excess demands
z_1 = excess1(p1,p2,e1,e2,e3,betas)
z_2 = excess2(p1,p2,e1,e2,e3,betas)
# Set our tolerance level:
if abs(z_1)<tol and abs(z_2)<tol:
if prints:
print(f'\nIn the Walras equilibrium we have: p1 = {p1:.2f} and p2 = {p2:.2f}')
p1_eq = p1
p2_eq = p2
return p1_eq, p2_eq
# If no convergence, we will multiply our excess demand by kappa and take the average:
else:
p1 += kappa*z_1/N
p2 += kappa*z_2/N
# We then print the iteration proces
if prints:
if t <= 10 or t%1000==0:
print(f'Iter {t:6.0f}: Excess good 1 = {z_1:10.2f}, Excess good 2 = {z_2:10.2f} => p1 = {p1:10.2f}, p2 = {p2:10.2f}')
t += 1
# Print statement if maximum numbers of iterations is exceeded
if t == 20000:
text = 'No convergence \n'
print(text)
return None, None
```
```python
# Finding Walras-equilibrium prices
# Initial guess on prices
p1 = 4
p2 = 1
# Calling function to find equilibrium prices
p1_eq, p2_eq = tatonnement(p1,p2,e1,e2,e3,betas)
```
Iter 1: Excess good 1 = -1802.03, Excess good 2 = 27583.49 => p1 = 3.99, p2 = 1.11
Iter 2: Excess good 1 = -890.71, Excess good 2 = 21094.24 => p1 = 3.99, p2 = 1.19
Iter 3: Excess good 1 = -200.19, Excess good 2 = 16961.42 => p1 = 3.99, p2 = 1.26
Iter 4: Excess good 1 = 346.52, Excess good 2 = 14060.01 => p1 = 3.99, p2 = 1.32
Iter 5: Excess good 1 = 790.31, Excess good 2 = 11901.20 => p1 = 3.99, p2 = 1.37
Iter 6: Excess good 1 = 1156.25, Excess good 2 = 10231.46 => p1 = 4.00, p2 = 1.41
Iter 7: Excess good 1 = 1461.18, Excess good 2 = 8903.73 => p1 = 4.00, p2 = 1.44
Iter 8: Excess good 1 = 1717.07, Excess good 2 = 7825.81 => p1 = 4.01, p2 = 1.47
Iter 9: Excess good 1 = 1932.83, Excess good 2 = 6936.50 => p1 = 4.02, p2 = 1.50
Iter 10: Excess good 1 = 2115.29, Excess good 2 = 6193.32 => p1 = 4.03, p2 = 1.53
Iter 1000: Excess good 1 = 42.54, Excess good 2 = 15.87 => p1 = 6.44, p2 = 2.60
Iter 2000: Excess good 1 = 1.08, Excess good 2 = 0.40 => p1 = 6.49, p2 = 2.62
Iter 3000: Excess good 1 = 0.03, Excess good 2 = 0.01 => p1 = 6.49, p2 = 2.62
Iter 4000: Excess good 1 = 0.00, Excess good 2 = 0.00 => p1 = 6.49, p2 = 2.62
Iter 5000: Excess good 1 = 0.00, Excess good 2 = 0.00 => p1 = 6.49, p2 = 2.62
Iter 6000: Excess good 1 = 0.00, Excess good 2 = 0.00 => p1 = 6.49, p2 = 2.62
In the Walras equilibrium we have: p1 = 6.49 and p2 = 2.62
**Question 4:** Plot the distribution of utility in the Walras-equilibrium and calculate its mean and variance.
**Answer to Q4**
```python
def u(p1, p2, e1, e2, e3, betas, gamma):
# Income is given by:
I = p1*e1+p2*e2+e3
# Demand is given by:
demand1 = betas[:,0]*(I/p1)
demand2 = betas[:,1]*(I/p2)
demand3 = betas[:,2]*I
# Calculating utility
u = (demand1**betas[:,0]+demand2**betas[:,1]+demand3**betas[:,2])**gamma
return u
```
```python
# We make a function that creates a vector of utilities
u_vec = u(p1_eq, p2_eq, e1, e2, e3, betas, gamma)
# we can then find the mean and variance
mean = np.mean(u_vec)
var = np.var(u_vec)
print(f'Mean: {mean:.2f}')
print(f'Variance: {var:.2f}')
```
Mean: 2.38
Variance: 0.21
```python
# Plotting distribution of utilities, and printing mean and variance
plt.hist(u_vec, bins=500)
plt.title('Utility distribution')
plt.xlabel('Utility')
plt.ylabel('Consumers')
```
**Question 5:** Find the Walras-equilibrium prices if instead all endowments were distributed equally. Discuss the implied changes in the distribution of utility. Does the value of $\gamma$ play a role for your conclusions?
**Answer to Q5**
```python
e1_new = np.ones(N)*np.mean(e1)
e2_new = np.ones(N)*np.mean(e2)
e3_new = np.ones(N)*np.mean(e3)
```
```python
p1 = 4
p2 = 1
p1_new, p2_new = tatonnement(p1,p2,e1_new,e2_new,e3_new,betas)
```
Iter 1: Excess good 1 = -1811.29, Excess good 2 = 27618.68 => p1 = 3.99, p2 = 1.11
Iter 2: Excess good 1 = -898.66, Excess good 2 = 21117.58 => p1 = 3.99, p2 = 1.19
Iter 3: Excess good 1 = -207.23, Excess good 2 = 16978.68 => p1 = 3.99, p2 = 1.26
Iter 4: Excess good 1 = 340.19, Excess good 2 = 14073.55 => p1 = 3.99, p2 = 1.32
Iter 5: Excess good 1 = 784.53, Excess good 2 = 11912.24 => p1 = 3.99, p2 = 1.37
Iter 6: Excess good 1 = 1150.94, Excess good 2 = 10240.69 => p1 = 4.00, p2 = 1.41
Iter 7: Excess good 1 = 1456.26, Excess good 2 = 8911.60 => p1 = 4.00, p2 = 1.44
Iter 8: Excess good 1 = 1712.48, Excess good 2 = 7832.63 => p1 = 4.01, p2 = 1.47
Iter 9: Excess good 1 = 1928.53, Excess good 2 = 6942.48 => p1 = 4.02, p2 = 1.50
Iter 10: Excess good 1 = 2111.24, Excess good 2 = 6198.61 => p1 = 4.03, p2 = 1.53
Iter 1000: Excess good 1 = 42.32, Excess good 2 = 15.81 => p1 = 6.44, p2 = 2.60
Iter 2000: Excess good 1 = 1.07, Excess good 2 = 0.40 => p1 = 6.48, p2 = 2.62
Iter 3000: Excess good 1 = 0.03, Excess good 2 = 0.01 => p1 = 6.49, p2 = 2.62
Iter 4000: Excess good 1 = 0.00, Excess good 2 = 0.00 => p1 = 6.49, p2 = 2.62
Iter 5000: Excess good 1 = 0.00, Excess good 2 = 0.00 => p1 = 6.49, p2 = 2.62
Iter 6000: Excess good 1 = 0.00, Excess good 2 = 0.00 => p1 = 6.49, p2 = 2.62
In the Walras equilibrium we have: p1 = 6.49 and p2 = 2.62
We see that the change in the distribution of endowments does not have a significant effect on the prices. In fact, the prices are identical to the previous situation.
```python
def u_new(p1, p2, e1, e2, e3, betas, gamma):
# Income is given by:
I = p1*e1+p2*e2+e3
# Demand is given by:
demand1_new = betas[:,0]*(I/p1)
demand2_new = betas[:,1]*(I/p2)
demand3_new = betas[:,2]*I
# Calculating utility
u = (demand1_new**betas[:,0]+demand2_new**betas[:,1]+demand3_new**betas[:,2])**gamma
    return u
```
```python
# We make a function that creates a vector of utilities
u_vec_new = u(p1_eq, p2_eq, e1, e2, e3, betas, 1.5)
# we can then find the mean and variance
mean = np.mean(u_vec_new)
var = np.var(u_vec_new)
print(f'Mean: {mean:.2f}')
print(f'Variance: {var:.2f}')
```
Mean: 5.22
Variance: 3.71
Even though the prices are unchanged, gamma has an impact on the mean and variance. By increasing gamma from $0.8$ to $1.5$ the mean has increased from $2.38$ to $5.22$, while the variance has increased from $0.21$ to $3.71$
```python
fig = plt.figure(figsize=(12,5))
plt.hist(u_vec, bins=200, label='$\gamma$ = 0.8')
plt.hist(u_vec_new, bins=200, color='orange', label='$\gamma$ = 1.5')
plt.legend()
plt.title('Distribution of utility')
plt.xlabel('Utility')
plt.ylabel('Consumers');
```
| 5d1c53b3c6fdf8a60221195ce4cd360fd5ad82f4 | 311,951 | ipynb | Jupyter Notebook | examproject/examproject/examproject.ipynb | NumEconCopenhagen/projects-2019-wp | 01c7730beba383f59efc73a70cebf1fd2b8be301 | [
"MIT"
] | null | null | null | examproject/examproject/examproject.ipynb | NumEconCopenhagen/projects-2019-wp | 01c7730beba383f59efc73a70cebf1fd2b8be301 | [
"MIT"
] | 8 | 2019-04-15T16:23:44.000Z | 2019-05-21T07:35:17.000Z | examproject/examproject/examproject.ipynb | NumEconCopenhagen/projects-2019-wp | 01c7730beba383f59efc73a70cebf1fd2b8be301 | [
"MIT"
] | 2 | 2019-05-12T14:44:57.000Z | 2020-03-15T10:59:04.000Z | 144.288159 | 118,612 | 0.874458 | true | 13,284 | Qwen/Qwen-72B | 1. YES
2. YES | 0.885631 | 0.787931 | 0.697817 | __label__eng_Latn | 0.880847 | 0.459593 |
```python
from sympy.abc import s, t
from sympy.integrals.transforms import inverse_laplace_transform
from cardioLPN import A_R, A_L, A_C
from sympy import symbols
from sympy import *
import matplotlib.pyplot as plt
import numpy as np
```
```python
R_p, R_d, R, L, C = symbols('R_p, R_d R L C', positive=True)
U_2, I_2 = symbols('U_2 I_2', positive=True)
U, I = symbols('U I', positive=True)
t = symbols('t', positive=True, real = True)
#s = symbols('s', positive=True)
A_result = A_C(C) * A_L(L) * A_R(R)
# defining a function for U_2 and I_2
U_2 = 1/(s*(1+s*5))
I_2 = 1/(s*(1+s*15))
# defining
x_2 = Matrix([U_2, I_2])
x_1 = A_result * x_2
#
U_1 = x_1[0]
I_1 = x_1[1]
u_1 = inverse_laplace_transform(x_1[0], s, t)
```
```python
A_result = A_C(2) * A_L(3) * A_R(2)
# defining Laplace-domain expressions for U_2 and I_2
U_2 = 1/(s*(1+s*5))
I_2 = 1/(s*(1+s*15))
# assembling the state vector [U_2, I_2]
x_2 = Matrix([U_2, I_2])
x_1 = A_result * x_2
#
U_1 = x_1[0]
I_1 = x_1[1]
# inverse laplace transform
# output
u_1 = inverse_laplace_transform(x_1[0], s, t)
# inputs
u_2 = inverse_laplace_transform(U_2, s, t)
i_2 = inverse_laplace_transform(I_2, s, t)
```
```python
x_1[0]
#(3*s + 2)/(s*(15*s + 1)) + (2*s*(3*s + 2) + 1)/(s*(5*s + 1))
```
(3*s + 2)/(s*(15*s + 1)) + 1/(s*(5*s + 1))
```python
u_1_func = lambdify(t, u_1)
time = np.linspace(0.5, 100, 100)
u_1_array = [u_1_func(t_) for t_ in time]
#u_1_func(time)
#u_1_array
plt.plot(time, u_1_array)
# Input u_2
u_2_func = lambdify(t, u_2)
time = np.linspace(0.5, 100, 100)
u_2_array = [u_2_func(t_) for t_ in time]
plt.plot(time, u_2_array, 'r')
plt.show()
# Input i_2
i_2_func = lambdify(t, i_2)
time = np.linspace(0.5, 100, 100)
i_2_array = [i_2_func(t_) for t_ in time]
plt.plot(time, i_2_array, 'r')
plt.show()
```
```python
## New configuration
A_result_2 = A_L(3) * A_R(2) * A_C(2)
x_1 = A_result_2 * x_2
U_1 = x_1[0]
I_1 = x_1[1]
```
```python
# inverse laplace transform
# output
u_1_vary = inverse_laplace_transform(simplify(x_1[0]), s, t)
```
```python
x_1[0]
```
(3*s + 2)/(s*(15*s + 1)) + (2*s*(3*s + 2) + 1)/(s*(5*s + 1))
```python
u_1_vary_func = lambdify(t, u_1_vary)
u_1_vary_array = [u_1_vary_func(t_) for t_ in time]
#u_1_func(time)
#u_1_array
plt.plot(time, u_1_vary_array)
plt.show()
```
```python
import mpmath as mp
import numpy as np
# numerically invert the Laplace-domain expression U_1(s) and compare with the analytic result
x_1_func = lambdify(s, x_1[0])
def u_1(s):
    return x_1_func(s)
t = np.linspace(0.01, 100, 100)
G = []
for i in t:
    G.append(mp.invertlaplace(u_1, i, method='dehoog', dps=10, degree=18))
plt.plot(t, G)
plt.plot(time, u_1_array, 'r')
plt.show()
```
| 4e06f24bb1265ad1ee4be52a7afef99b502539c3 | 47,994 | ipynb | Jupyter Notebook | example_config.ipynb | xi2pi/cardioLPN | 34759fea55f73312ccb8fb645ce2d04a0e2dddea | [
"MIT"
] | null | null | null | example_config.ipynb | xi2pi/cardioLPN | 34759fea55f73312ccb8fb645ce2d04a0e2dddea | [
"MIT"
] | null | null | null | example_config.ipynb | xi2pi/cardioLPN | 34759fea55f73312ccb8fb645ce2d04a0e2dddea | [
"MIT"
] | null | null | null | 155.824675 | 12,564 | 0.85619 | true | 1,105 | Qwen/Qwen-72B | 1. YES
2. YES | 0.952574 | 0.83762 | 0.797895 | __label__eng_Latn | 0.295814 | 0.69211 |
# Diffusion Theory
```python
# Our numerical workhorses
import numpy as np
import pandas as pd
# Import matplotlib stuff for plotting
import matplotlib.pyplot as plt
import matplotlib.cm as cm
# Seaborn, useful for graphics
import seaborn as sns
# favorite Seaborn settings for notebooks
rc={'lines.linewidth': 2,
'axes.facecolor' : 'F4F3F6',
'axes.edgecolor' : '000000',
'axes.linewidth' : 1.2,
'grid.color' : 'a6a6a6',
'lines.linewidth': 2,
'axes.labelsize': 18,
'axes.titlesize': 20,
'xtick.major' : 13,
'xtick.labelsize': 'large',
'ytick.labelsize': 13,
'font.family': 'Lucida Sans Unicode',
'grid.linestyle': ':',
'grid.linewidth': 1.5,
'mathtext.fontset': 'stixsans',
'mathtext.sf': 'sans',
'legend.frameon': True,
'legend.fontsize': 13}
plt.rc('text.latex', preamble=r'\usepackage{sfmath}')
plt.rc('mathtext', fontset='stixsans', sf='sans')
sns.set_style('darkgrid', rc=rc)
sns.set_palette("colorblind", color_codes=True)
sns.set_context('notebook', rc=rc)
# Magic function to make matplotlib inline; other style specs must come AFTER
%matplotlib inline
# This enables SVG graphics inline (only use with static plots (non-Bokeh))
%config InlineBackend.figure_format = 'svg'
```
# Kolmogorov Forward Equation.
## Mutation and Selection.
The steady state distribution for mutation and selection is of the form
$$
P(f) \propto \left[ f(1-f) \right]^{2N\mu - 1},
\tag{1}
$$
where $f$ is the allele frequency, $\mu$ is the mutation rate, and $N$ is the population size (or the effective population size).
Let's plot this for different values of $\mu$
```python
def mut_drift(f, N_mu):
prob = (f * (1 - f))**(2 * N_mu - 1)
idx = np.logical_and(prob >=0, prob!=np.inf) # to avoid indeterminations
return f[idx], prob[idx]
```
```python
# Define parameters
N_mu = [0.1, 1 / 2, 10, 100]
freq = np.linspace(0, 1, 200)
fig, ax = plt.subplots(1, 1)
# Loop through mutation rates and plot the distribution
for nm in N_mu:
f, prob = mut_drift(freq, nm)
plt.plot(f, prob / np.sum(prob),
label=str(nm))
ax.legend(title='$N \mu$')
ax.set_xlabel('allele frequency $f$')
ax.set_ylabel('$\propto$ probability')
ax.set_title('mutation-drift')
ax.axes.get_yaxis().set_ticks([])
ax.margins(0.02)
plt.tight_layout()
plt.savefig('fig/mutation_drift.png')
```
/Users/razo/anaconda3/lib/python3.6/site-packages/ipykernel/__main__.py:2: RuntimeWarning: divide by zero encountered in power
from ipykernel import kernelapp as app
## Selection and Drift
#### Haploid organisms
The steady state distribution of allele frequency for an allele $A$ with fitness $\omega_A = 1$ that is in a population with another allele $a$ with fitness $\omega_a = 1 - s$ is of the form
$$
P(f) \propto \frac{e^{-2Ns(1-f)}}{f(1-f)}
\tag{2}
$$
Let's plot it for different values of $s$.
```python
def sel_drift_hap(f, N_s):
prob = np.exp(-2 * N_s * (1 - f)) / (f * (1 - f))
idx = np.logical_and(prob >=0, prob!=np.inf) # to avoid indeterminations
return f[idx], prob[idx]
```
```python
# Define parameters
N_s = [0, 0.1, 1, 10]
freq = np.linspace(0, 1, 200)
fig, ax = plt.subplots(1, 1)
# Loop through mutation rates and plot the distribution
for ns in N_s:
f, prob = sel_drift_hap(freq, ns)
plt.plot(f, prob, #/ np.sum(prob),
label=str(ns))
ax.legend(loc='upper center', title='$N s$')
ax.set_xlabel('allele frequency $f$')
ax.set_ylabel('$\propto$ probability')
ax.set_title('selection-drift (haploids)')
ax.axes.get_yaxis().set_ticks([])
ax.margins(0.02)
plt.tight_layout()
plt.savefig('fig/sel_drift_haploid.png')
```
/Users/razo/anaconda3/lib/python3.6/site-packages/ipykernel/__main__.py:2: RuntimeWarning: divide by zero encountered in true_divide
from ipykernel import kernelapp as app
### Diploid organisms
#### Heterozygote advantage.
For the case of a diploid organism where the genotypes have fitness values
\begin{align}
\omega_{AA} = 1,\\
\omega_{Aa} = 1 + s,\\
\omega_{aa} = 1,
\end{align}
we have that the steady state distribution of alleles is of the form
$$
P(f) \propto \frac{e^{4Ns f(1 - f)}}{f(1 - f)}
\tag{3}
$$
Let's look at some distributions.
```python
def sel_drift_dip_hetero_advantage(f, N_s):
prob = np.exp(4 * N_s * f * (1 - f)) / (f * (1 - f))
idx = np.logical_and(prob >=0, prob!=np.inf) # to avoid indeterminations
return f[idx], prob[idx]
```
```python
# Define parameters
N_s = [1, 10, 50]
freq = np.linspace(0, 1, 200)
fig, ax = plt.subplots(1, 1)
# Loop through mutation rates and plot the distribution
for ns in N_s:
f, prob = sel_drift_dip_hetero_advantage(freq, ns)
plt.plot(f, prob / np.sum(prob),
label=str(ns))
ax.legend(loc=0, title='$N s$')
ax.set_xlabel('allele frequency $f$')
ax.set_ylabel('$\propto$ probability')
ax.set_title('selection-drift (diploids heterozygote advantage)', fontsize=14)
ax.axes.get_yaxis().set_ticks([])
ax.margins(0.02)
plt.tight_layout()
plt.savefig('fig/sel_drift_diploid_hetero_advantage.png')
```
/Users/razo/anaconda3/lib/python3.6/site-packages/ipykernel/__main__.py:2: RuntimeWarning: divide by zero encountered in true_divide
from ipykernel import kernelapp as app
#### Heterozygote Intermediate.
For the case of a diploid organism where the genotypes have fitness values
\begin{align}
\omega_{AA} = 1 + 2s,\\
\omega_{Aa} = 1 + s,\\
\omega_{aa} = 1,
\end{align}
we have that the steady state distribution of alleles is of the form
$$
P(f) \propto \frac{e^{4Ns f}}{f(1 - f)}
\tag{4}
$$
Let's look at some distributions.
```python
def sel_drift_dip_hetero_intermediate(f, N_s):
prob = np.exp(4 * N_s * f ) / (f * (1 - f))
idx = np.logical_and(prob >=0, prob!=np.inf) # to avoid indeterminations
return f[idx], prob[idx]
```
```python
# Define parameters
N_s = [0.1, 1, 5]
freq = np.linspace(0, 0.95, 200)
fig, ax = plt.subplots(1, 1)
# Loop through mutation rates and plot the distribution
for ns in N_s:
f, prob = sel_drift_dip_hetero_intermediate(freq, ns)
plt.plot(f, prob / np.sum(prob),
label=str(ns))
ax.legend(loc=0, title='$N s$')
ax.set_xlabel('allele frequency $f$')
ax.set_ylabel('$\propto$ probability')
ax.set_title('selection-drift (diploids heterozygote intermediate)', fontsize=14)
ax.axes.get_yaxis().set_ticks([])
ax.set_xlim([0, 1])
ax.margins(0.02)
plt.tight_layout()
plt.savefig('fig/sel_drift_diploid_hetero_intermediate.png')
```
/Users/razo/anaconda3/lib/python3.6/site-packages/ipykernel/__main__.py:2: RuntimeWarning: divide by zero encountered in true_divide
from ipykernel import kernelapp as app
### Mutation-Selection-Drift.
For the case of a diploid organism where the genotypes have fitness values
\begin{align}
\omega_{AA} = 1 - s,\\
\omega_{Aa} = 1,\\
\omega_{aa} = 1,
\end{align}
And the mutation rates are given by
\begin{align}
\mu_{A \rightarrow a} = 0\\
\mu_{a \rightarrow A} = \mu\\
\end{align}
we have that the steady state distribution of alleles is of the form
$$
P(f) \propto \frac{e^{-2Ns f^2} (1 - f)^{2N \mu}}{f(1 - f)}
\tag{5}
$$
Let's look at some distributions.
```python
def mut_sel_drift(f, N, s, mu):
prob = np.exp(- 2 * N * s * f**2) * (1 - f)**(2 * N * mu) / (f * (1 - f))
idx = np.logical_and(prob >=0, prob!=np.inf) # to avoid indeterminations
return f[idx], prob[idx]
```
```python
# Define parameters
N = [10**2, 10**4]
s = 0.0001
mu = [10**-1, 10**-6]
```
```python
freq = np.linspace(0, 1, 200)
fig, ax = plt.subplots(1, 1)
# Loop through mutation rates and plot the distribution
for n in N:
for m in mu:
f, prob = mut_sel_drift(freq, n, s, m)
plt.plot(f, prob / np.sum(prob),
label='$N = {0:d}, \mu = {1:f}$'.format(n, m))
ax.legend(loc=0)
ax.set_xlabel('allele frequency $f$')
ax.set_ylabel('$\propto$ probability')
ax.set_title('mutation-selection-drift (diploids)', fontsize=14)
ax.axes.get_yaxis().set_ticks([])
ax.set_xlim([0, 1])
ax.margins(0.02)
plt.tight_layout()
# plt.savefig('fig/sel_drift_diploid_hetero_intermediate.png')
```
/Users/razo/anaconda3/lib/python3.6/site-packages/ipykernel/__main__.py:2: RuntimeWarning: divide by zero encountered in true_divide
from ipykernel import kernelapp as app
/Users/razo/anaconda3/lib/python3.6/site-packages/ipykernel/__main__.py:2: RuntimeWarning: invalid value encountered in true_divide
from ipykernel import kernelapp as app
/Users/razo/anaconda3/lib/python3.6/site-packages/ipykernel/__main__.py:3: RuntimeWarning: invalid value encountered in greater_equal
app.launch_new_instance()
```python
freq = np.linspace(1E-3, 0.02, 200)
fig, ax = plt.subplots(1, 1)
# Loop through mutation rates and plot the distribution
for n in N:
for m in mu:
f, prob = mut_sel_drift(freq, n, s, m)
plt.plot(f, prob / np.sum(prob),
label='$N = {0:d}, \mu = {1:f}$'.format(n, m))
ax.legend(loc=0)
ax.set_xlabel('allele frequency $f$')
ax.set_ylabel('$\propto$ probability')
ax.set_title('mutation-selection-drift (diploids)', fontsize=14)
ax.axes.get_yaxis().set_ticks([])
# ax.set_xlim([0, 1])
ax.margins(0.02)
plt.tight_layout()
# plt.savefig('fig/sel_drift_diploid_hetero_intermediate.png')
```
# Kolmogorov Backwards Equation.
## Probability of fixation (Haploid)
Using the Kolmogorov backwards equation we derived that the probability of an allele being fixed is of the form
$$
P(1, \infty \mid f_o) = \frac{1 - e^{-2N_e s f_o}}{1 - e^{-2N_e s}},
\tag{6}
$$
where $f_o$ is the initial frequency of the allele, $N_e$ is the effective population size, and $s$ is the selection coefficient.
Let's define a function to compute this quantity.
```python
def fix_prob_haploid(N, s, fo):
return (1 - np.exp(-2 * N * s * fo)) / (1 - np.exp(-2 * N * s))
```
```python
N_array = 10**np.array([2, 3, 4])
s_array = np.linspace(-0.005, 0.005, 100)
fig, ax = plt.subplots(1, 1)
# Loop through mutation rates and plot the distribution
for n in N_array:
prob = fix_prob_haploid(n, s_array, 1 / n)
plt.plot(s_array, prob,
label='$N = {0:d}$'.format(n))
ax.legend(loc=0)
ax.set_xlabel('selection coefficient $s$')
ax.set_ylabel('fixation probability')
ax.set_title('haploids', fontsize=14)
ax.margins(0.02)
plt.tight_layout()
plt.savefig('fig/fix_prob_haploid.png')
```
## Probability of fixation (diploids)
For the diploid case we have a very similar equation for the probability of fixation of an allele
$$
P(1, \infty \mid f_o) = \frac{1 - e^{-4N_e h s f_o}}{1 - e^{-4N_e h s}},
\tag{7}
$$
where $f_o$ is the initial frequency of the allele, $N_e$ is the effective population size, $h$ is the dominance, and $s$ is the selection coefficient.
Let's define a function to compute this quantity.
```python
def fix_prob_diploid(N, h, s, fo):
return (1 - np.exp(-4 * N * h * s * fo)) / (1 - np.exp(-4 * N * h * s))
```
```python
h_array = [0.05, 0.5, 1]
s_array = np.linspace(-0.005, 0.005, 100)
N = 1E3
fig, ax = plt.subplots(1, 1)
# Loop through mutation rates and plot the distribution
for h in h_array:
prob = fix_prob_diploid(N, h, s_array, 1 / N)
plt.plot(s_array, prob,
label='$h = {0:.1f}$'.format(h))
ax.legend(loc=0)
ax.set_xlabel('selection coefficient $s$')
ax.set_ylabel('fixation probability')
ax.set_title('diploids', fontsize=14)
ax.margins(0.02)
plt.tight_layout()
plt.savefig('fig/fix_prob_diploids.png')
```
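As an added sanity check (not part of the original notebook), the formula should reduce to the neutral fixation probability $P(1, \infty \mid f_o) = f_o$ as $s \to 0$. A quick numerical check, assuming the `fix_prob_diploid` function defined above:

```python
N = 1E3
fo = 1 / (2 * N)   # initial frequency of a single new mutant in a diploid population
# as s -> 0 the fixation probability should approach the neutral result fo
for s_small in [1e-3, 1e-5, 1e-7]:
    print(s_small, fix_prob_diploid(N, 0.5, s_small, fo), fo)
```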
```python
```
| 43fbbcb149e49d6d265a46cae40718345907f890 | 446,869 | ipynb | Jupyter Notebook | code/classic_diffusion/diffusion_theory.ipynb | mrazomej/stat_gen | abafd9ecc63ae8a804c8df5b9658e47cabf951fa | [
"MIT"
] | null | null | null | code/classic_diffusion/diffusion_theory.ipynb | mrazomej/stat_gen | abafd9ecc63ae8a804c8df5b9658e47cabf951fa | [
"MIT"
] | 1 | 2019-03-05T00:17:26.000Z | 2019-03-05T00:17:26.000Z | code/classic_diffusion/diffusion_theory.ipynb | mrazomej/pop_gen | abafd9ecc63ae8a804c8df5b9658e47cabf951fa | [
"MIT"
] | null | null | null | 45.138283 | 201 | 0.517208 | true | 3,634 | Qwen/Qwen-72B | 1. YES
2. YES | 0.907312 | 0.83762 | 0.759983 | __label__eng_Latn | 0.69741 | 0.604027 |
# Lecture 30: Chi-Square, Student's t, Multivariate Normal
## Stat 110, Prof. Joe Blitzstein, Harvard University
----
## $\chi^2$ Distribution
The Chi-square Distribution is denoted as $\chi^2(n)$ or sometimes $\chi_{n}^2$, where $n$ indicates the _degrees of freedom_. It is used everywhere (you may have used it before in feature analysis). It is closely related to the Normal distribution.
Let $V = Z_1^2 + Z_2^2 + \dots + Z_n^2$, where the $Z_j$ are i.i.d. $\mathcal{N}(0,1)$. Then by definition, $V \sim \chi^2(n)$.
You will find that in a lot of things involving statistics, the sum of squares of $\mathcal{N}(0,1)$ often pops up.
```python
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import (MultipleLocator, FormatStrFormatter,
AutoMinorLocator)
from scipy.stats import chi2
%matplotlib inline
plt.xkcd()
dof_values = [1,2,3,4,5,6,7,8]
x = np.linspace(0, 10, 1000)
# plot the distributions
_, ax = plt.subplots(figsize=(12,8))
for d in dof_values:
ax.plot(x, chi2.pdf(x, d), lw=3.2, alpha=0.6, label='df={}'.format(d))
# legend styling
legend = ax.legend()
for label in legend.get_texts():
label.set_fontsize('large')
for label in legend.get_lines():
label.set_linewidth(1.5)
# y-axis
ax.set_ylim([0.0, 0.5])
ax.set_ylabel(r'$f(x)$')
# x-axis
ax.set_xlim([0, 10.0])
ax.set_xlabel(r'$x$')
# x-axis tick formatting
majorLocator = MultipleLocator(2.0)
majorFormatter = FormatStrFormatter('%0.1f')
minorLocator = MultipleLocator(1.0)
ax.xaxis.set_major_locator(majorLocator)
ax.xaxis.set_major_formatter(majorFormatter)
ax.xaxis.set_minor_locator(minorLocator)
ax.grid(color='grey', linestyle='-', linewidth=0.3)
plt.suptitle(r'Examples of $\chi^2_n$ with varying degrees of freedom')
plt.show()
```
### Fact: $\chi^2(1)$ is $\operatorname{Gamma}(\frac{1}{2}, \frac{1}{2})$
#### Proof
Let $Y = Z^2$ where $Z \sim \mathcal{N}(0,1)$ and $y \gt 0$.
\begin{align}
P(Y \le y) &= P(Z^2 \le y) \\
&= P( -y^{\frac{1}{2}} \le Z \le y^{\frac{1}{2}}) \\
&= \Phi(y^{\frac{1}{2}}) - \Phi(-y^{\frac{1}{2}}) \\
&= \Phi(y^{\frac{1}{2}}) - \left( 1 - \Phi(y^{\frac{1}{2}}) \right) \\
&= 2 \, \Phi(y^{\frac{1}{2}}) - 1 \\
\\
\Rightarrow f_{Y}(y) &= y^{-\frac{1}{2}} \, \phi(y^{\frac{1}{2}}) \\
&= \frac{1}{\sqrt{2\pi}} \, y^{-\frac{1}{2}} \, e^{-\frac{y}{2}} \\
\\
\operatorname{Gamma}\left(\frac{1}{2}, \frac{1}{2}\right) &= \frac{1}{\Gamma(\frac{1}{2})} \, \left(\frac{y}{2}\right)^{\frac{1}{2}} \, e^{-\frac{y}{2}} \, \frac{1}{y} \\
&= \frac{1}{\sqrt{\pi}} \, \sqrt{\frac{y}{2}} \, e^{-\frac{y}{2}} \, \frac{1}{y} \\
&= \frac{1}{\sqrt{2\pi}} \, y^{-\frac{1}{2}} \, e^{-\frac{y}{2}} &\blacksquare \\
\end{align}
Here's a quick graph to illustrate.
```python
from scipy.stats import gamma
_, (ax1, ax2) = plt.subplots(1, 2, sharey=True, figsize=(14,6))
x = np.linspace(0, 20, 1000)
ax1.plot(x, chi2.pdf(x, 1), lw=3.2, alpha=0.6, color='#33AAFF', label='df=1')
ax1.set_title('$\chi^2_1$', y=1.02)
ax1.set_xlim((0,20.0))
ax1.set_ylim((0,0.5))
ax1.legend()
ax1.grid(color='grey', linestyle='-', linewidth=0.3)
# gamma.pdf API: scale = 1 / beta
l = 0.5
ax2.plot(x, gamma.pdf(x, 0.5, scale=1/l), lw=3.2, alpha=0.6, color='#FF9933', label=r'$\alpha$=1/2, $\lambda$=1/2')
ax2.set_title(r'$Gamma(\frac{1}{2}, \frac{1}{2})$', y=1.02)
ax2.set_xlim((0,20.0))
ax2.set_ylim((0,0.5))
ax2.legend()
ax2.grid(color='grey', linestyle='-', linewidth=0.3)
None
```
### Fact: $\chi^2(n)$ is $\operatorname{Gamma}\left( \frac{n}{2}, \frac{1}{2} \right)$
It follows then that $\chi^2(n) = \operatorname{Gamma}\left( \frac{n}{2}, \frac{1}{2} \right)$
```python
_, (ax1, ax2) = plt.subplots(1, 2, sharey=True, figsize=(14,6))
x = np.linspace(0, 20, 1000)
dof_values = [1, 2, 5, 10, 20]
col_alph_values = [0.8, 0.6, 0.5, 0.4, 0.3]
for df,c_alph in zip(dof_values, col_alph_values):
ax1.plot(x, chi2.pdf(x, df), color='#33AAFF', lw=3.2, alpha=c_alph, label='df={}'.format(df))
ax1.set_title('$\chi^2_n$ for varying degrees of freedom', y=1.02)
ax1.set_xlim((0,20.0))
ax1.set_ylim((0,0.5))
ax1.legend()
ax1.grid(color='grey', linestyle='-', linewidth=0.3)
# gamma.pdf API: scale = 1 / lambda
l = 0.5
for alph,c_alph in zip(dof_values, col_alph_values):
ax2.plot(x, gamma.pdf(x, alph/2, scale=1/l), lw=3.2, alpha=c_alph, color='#FF9933', label=r'$\alpha$={}/2, $\lambda$=1/2'.format(alph))
ax2.set_title(r'$Gamma(\frac{n}{2}, \frac{1}{2})$ for varying n', y=1.02)
ax2.set_xlim((0,20.0))
ax2.set_ylim((0,0.5))
ax2.legend()
ax2.grid(color='grey', linestyle='-', linewidth=0.3)
None
```
----
## Student's $t$-Distribution
The Student's $t$-distribution can be described in terms of the standard normal $Z \sim \mathcal{N}(0,1)$ and $\chi^2(n)$ distributions, which means it can be described entirely in terms of the standard normal distribution.
Let $T = \frac{Z}{\sqrt{V/n}}$, with $Z \sim \mathcal{N}(0,1)$ and $V \sim \chi^2(n)$, where $Z, V$ are independent.
Then we can write $T \sim t_n$, where $n$ is the degrees of freedom.
$t_1$ does not have a $1^{st}$ moment; $t_2$ does not have a $2^{nd}$ moment; $t_3$ does not have a $3^{rd}$ moment; and so on: in general, $t_n$ has finite moments only up to order $n-1$. Odd moments, if they exist, are 0.
```python
from scipy.stats import t
dof_values = [1,2,5,10,30,1E10]
col_alph_values = [0.2, 0.3, 0.4, 0.5, 0.6, 0.8]
x = np.linspace(-5, 5, 1000)
# plot the distributions
fig, ax = plt.subplots(figsize=(8, 6))
for df,c_alph in zip(dof_values, col_alph_values):
if df > 30:
dl = r'$+\infty$'
else:
dl = df
ax.plot(x, t.pdf(x, df), lw=3.2, color='#A93226', alpha=c_alph, label=r'df={}'.format(dl))
# legend styling
legend = ax.legend()
for label in legend.get_texts():
label.set_fontsize('large')
for label in legend.get_lines():
label.set_linewidth(1.5)
# y-axis
ax.set_ylim([0, .43])
ax.set_ylabel(r'$P(x)$')
# x-axis
ax.set_xlim([-3.0, 3.0])
ax.set_xlabel(r'$x$')
ax.grid(color='grey', linestyle='-', linewidth=0.3)
plt.title(r'Examples of $t_n$ with varying degrees of freedom', y=1.02)
plt.text(x=3.5, y=0.22, s=r'Fatter tails with fewer degrees of freedom')
plt.text(x=3.5, y=0.19, s=r'Approaches $\mathbb{N}(0,1)$ as $df \rightarrow +\infty$')
plt.show()
```
### Properties
1. symmetric, i.e., $-T \sim t_n$ <p/>
1. $n=1 \, \Rightarrow $ Cauchy, so $t_1$ does not have a mean<p/>
1. $n \ge 2 \Rightarrow \mathbb{E}(T) = \mathbb{E}(Z)\,\mathbb{E}\left(\frac{1}{\sqrt{V/n}}\right) = 0$ <p/>
1. heavier-tailed than Normal <p/>
1. for $n$ large, $t_n$ looks very much like Normal <p/>
### Brief interlude: even moments of $Z$ and the Gamma distribution
It was proved earlier that for $Z \sim \mathcal{N}(0,1)$, the *even* moments are such that
\begin{align}
\mathbb{E}(Z^2) &= 1 \\
\mathbb{E}(Z^4) &= 1 \times 3 = 3 \\
\mathbb{E}(Z^6) &= 1 \times 3 \times 5 = 15 &\quad \text{ skip factorial} \\
\end{align}
Now, this was proven using moment-generating functions, but we can also relate this to the Gamma distribution.
\begin{align}
\mathbb{E}(Z^{2n}) &= \mathbb{E}\left( (Z^2)^n \right) \\
&= \mathbb{E}\left( (\chi^2_1)^n \right) &\text{ but by definition } Z^2 \text{ is } \chi^2_1 \\
&= \mathbb{E}\left( \operatorname{Gamma}(\frac{1}{2}, \frac{1}{2})^n \right)
\end{align}
... and after this point, we can use our knowledge of the Gamma distribution and LOTUS.
#### Finding $\mathbb{E}(Z^8)$ with $\operatorname{Gamma}(\frac{1}{2}, \frac{1}{2})$
\begin{align}
\mathbb{E}(Z^8) &= \mathbb{E}\left( (Z^2)^4 \right) \\
&= \mathbb{E}\left( \operatorname{Gamma}(\frac{1}{2}, \frac{1}{2})^4 \right) \\
&= \frac{\Gamma(\alpha + 4)}{\Gamma(\alpha) \, \lambda^4} \\
&= \frac{(\alpha + 3) \, \Gamma(\alpha + 3)}{\Gamma(\alpha) \, \lambda^4} \\
&= \frac{(\alpha + 3) \, (\alpha + 2) \, \Gamma(\alpha + 2)}{\Gamma(\alpha) \, \lambda^4} \\
&= \frac{(\alpha + 3) \, (\alpha + 2) \, (\alpha + 1) \, \Gamma(\alpha + 1)}{\Gamma(\alpha) \, \lambda^4} \\
&= \frac{(\alpha + 3) \, (\alpha + 2) \, (\alpha + 1) \, \alpha \, \Gamma(\alpha)}{\Gamma(\alpha) \, \lambda^4} \\
&= \frac{\frac{7}{2} \, \frac{5}{2} \, \frac{3}{2} \, \frac{1}{2}}{\frac{1}{2^4}} \\
&= 1 \times 3 \times 5 \times 7 \\
&= 105
\end{align}
You can [double-check this answer on WolframAlpha](http://www.wolframalpha.com/input/?i=expected+value+of+Z%5E8).
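We can also double-check this symbolically; the short sympy computation below is an added verification, not part of the original lecture notes.

```python
import sympy as sp

zz = sp.symbols('z', real=True)
pdf = sp.exp(-zz**2 / 2) / sp.sqrt(2 * sp.pi)           # standard Normal density
print(sp.integrate(zz**8 * pdf, (zz, -sp.oo, sp.oo)))   # 105
```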
### Proof of property 5: Student's $t$ when $n$ get large
Let's prove property 5 above, but use the Law of Large Numbers (c.f. Lesson 29).
* Let $T_n = \frac{Z}{\sqrt{V/n}}$ with $Z_1, Z_2, \dots , Z_n$ i.i.d. $\mathcal{N}(0,1)$
* $V = Z_1^2 + Z_2^2 + \dots + Z_n^2$
* $Z$ is independent of the $Z_j^2$
Now we can choose any random variables for this construction as long as they are i.i.d. $\mathcal{N}(0,1)$, so there is nothing wrong with drawing the numerator $Z$ and the $Z_j$ in the denominator from the same standard Normal distribution, as long as $Z$ remains independent of the $Z_j$.
Then $\frac{V_n}{n} \rightarrow 1$ with probability 1 by the Law of Large Numbers, since the average $\frac{V_n}{n}$ will approach the true mean $\mathbb{E}(Z_1^2)$ as $n$ gets large. We know that $\mathbb{E}(Z_1^2) = 1$.
Since convergence with probability 1 is pointwise, applying the continuous function $\sqrt{\cdot}$ preserves it, so $\sqrt{\frac{V_n}{n}} \rightarrow 1$ with probability 1 as well.
So $T_n \rightarrow Z$ with probability 1, since the denominator goes to 1 when you have a large number of degrees of freedom; only the $Z$ in the numerator will be of importance.
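A quick simulation (added here for illustration) makes this concrete: for a moderately large $n$, draws of $T_n$ are practically indistinguishable from draws of $Z$.

```python
import numpy as np
import matplotlib.pyplot as plt

np.random.seed(1)
n, size = 50, 100_000
z = np.random.normal(size=size)
v = np.random.chisquare(df=n, size=size)   # distributed as a sum of n squared std normals
t_n = z / np.sqrt(v / n)

plt.hist(t_n, bins=100, density=True, alpha=0.5, label=r'$T_n$, n=50')
plt.hist(np.random.normal(size=size), bins=100, density=True, alpha=0.5, label=r'$Z$')
plt.legend()
plt.show()
```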
----
## Multivariate Normal
### Definition
Random vector $(X_1, X_2, \cdots , X_k) = \vec{X}$ is Multivariate Normal if every linear combination $t_1 X_1 + t_2 X_2 + \cdots + t_k X_k$ is Normal.
### An example that is multivariate Normal
Let $Z, W$ be i.i.d. $\mathcal{N}(0,1)$. Then $( Z + 2W, 3 \, Z + 5W)$ is multivariate Normal (MVN).
Given constants $s,t$
\begin{align}
s (Z + 2 \, W) + t (3 \, Z + 5 \, W) &= (s + 3t) Z + (2s + 5t) W \\
\end{align}
But $(s + 3t)$ and $(2s + 5t)$ are just constants scaling the independent Normal random variables $Z$ and $W$ respectively, and since the sum of independent Normal random variables is also Normal, $(s + 3t) Z + (2s + 5t) W$ is necessarily a Normal r.v.
### A non-example (NOT multivariate Normal)
Let $Z \sim \mathcal{N}(0,1)$, and let $S$ be a random sign that is independent of $Z$.
Then $Z,SZ$ are marginally $\mathcal{N}(0,1)$ (consider both individually on their own).
But $(Z, SZ)$ is _not_ multivariate normal! Just test this by considering $(Z + SZ)$.
$Z + SZ$ cannot be Normal (see the short simulation after this list), since:
* half the time, the sum of $Z$ and $SZ$ will be zero when $S$ is negative
* the other times, the sum will be $2Z$
* this is some mixture of discrete and continuous
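A quick simulation sketch (an addition, not from the lecture) makes the non-normality visible; the histogram has a huge spike at exactly 0, which no Normal distribution can have.

```python
import numpy as np
import matplotlib.pyplot as plt

np.random.seed(42)
n = 100_000
z = np.random.normal(size=n)
s = np.random.choice([-1, 1], size=n)   # random sign, independent of z
total = z + s * z                       # exactly 0 when s = -1, and 2z when s = +1

print('fraction exactly zero:', np.mean(total == 0))   # about 0.5
plt.hist(total, bins=100)
plt.title('$Z + SZ$ is not Normal')
plt.show()
```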
## MGF of $\vec{X}$
Now with $X \sim \mathcal{N}(\mu, \sigma^2)$, the moment generating function $M(X)$ is
\begin{align}
M(X) &= \mathbb{E}(e^{tX}) \\
&= e^{t\mu + \frac{1}{2} \, t^2 \sigma^2} \\\\
\end{align}
Extending this one-dimensional case to the multidimensional $\vec{X}$:
\begin{align}
M(\vec{X}) &= \mathbb{E}(e^{\vec{t} \cdot \vec{X}}) \\
&= \mathbb{E}(e^{t_1 \, X_1 + t_2 \, X_2 + \cdots + t_n \, X_n}) \\
&= e^{t_1 \, \mu_1 + t_2 \, \mu_2 + \cdots + t_n \, \mu_n + \frac{1}{2} \operatorname{Var}(t_1 \, X_1 + t_2 \, X_2 + \cdots + t_n \, X_n)}
\end{align}
## Theorem: Within an MVN, uncorrelated implies independent
Recall that in general, independence implies uncorrelation, but the vice versa is not always true. In the case of an MVN, however, it **is** true.
In other words, consider vector
\begin{align}
\vec{X} &= \begin{bmatrix}
\vec{X_1} \\
\vec{X_2} \\
\end{bmatrix}
\end{align}
If every component of $\vec{X_1}$ is uncorrelated with every component of $\vec{X_2}$, then $\vec{X_1}$ is independent of $\vec{X_2}$.
### Example
Let $X,Y$ be i.i.d. $\mathcal{N}(0,1)$. Then $(X+Y, X-Y)$ is MVN (_bivariate Normal_ to be precise).
It is easy enough to show that $X+Y$ and $X-Y$ are uncorrelated:
\begin{align}
\operatorname{Cov}(X+Y, X-Y) &= \operatorname{Var}(X) + \operatorname{Cov}(X,Y) - \operatorname{Cov}(X,Y) - \operatorname{Var}(Y) \\
&= \operatorname{Var}(X) - \operatorname{Var}(Y) \\
&= 1 - 1 \\
&= 0
\end{align}
But can we show that $X+Y$ and $X-Y$ are _independent_?
### Proof
Let's try for something a bit more abstract.
We suppose that $X,Y$ are _independent_, zero-mean Normal random variables, and we write $\sigma_U^2, \sigma_V^2$ for the variances of $U$ and $V$ defined below.
Let $U = aX + bY$, and $V = cX + dY$ so that $U,V$ are jointly normal; this is a more general representation of the above example, where $a = 1, b=1, c=1, d=-1$.
Say we have some scalars $t_1, t_2$, and let $Z = t_{1}U + t_{2}V$. Then
\begin{align}
M_{U,V}(t_1, t_2) &= \mathbb{E}(e^{t_1 \, U + t_2 \, V}) \\
&= \mathbb{E}(e^{Z}) \\
&= e^{t_1 \, \mu_U + t_2 \, \mu_V + \frac{1}{2} \operatorname{Var}(t_1 \, U + t_2 \, V)} \\
&= e^{\frac{1}{2} \operatorname{Var}(t_1 \, U + t_2 \, V)} \\
&= e^{\frac{t_1^2 \, \sigma_U^2 + t_2^2 \, \sigma_V^2}{2}}
\end{align}

The cross term $t_1 t_2 \operatorname{Cov}(U, V)$ vanishes here because $U$ and $V$ are assumed uncorrelated.
Now let $U', V'$ be independent zero-mean Normal random variables with the same variances $\sigma_U^2, \sigma_V^2$. Since $U', V'$ are independent, they are also _uncorrelated_, and so the moment generating function of their bivariate Normal distribution is given by $M_{U',V'}(t_1, t_2) = e^{\frac{t_1^2 \, \sigma_U^2 + t_2^2 \, \sigma_V^2}{2}}$.

Since both $U,V$ and $U',V'$ have the same moment generating function, they are both associated with the same bivariate Normal distribution (they share the same joint PDF).

Therefore, since $U',V'$ are _independent_, we conclude that $U,V$ are also independent. QED.
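As an added numerical illustration, we can simulate $X + Y$ and $X - Y$ and check that they behave as independent: the empirical correlation is essentially zero, and conditioning on one of them does not change the spread of the other.

```python
import numpy as np

np.random.seed(0)
x = np.random.normal(size=100_000)
y = np.random.normal(size=100_000)
u, v = x + y, x - y

print('corr(U, V) =', np.corrcoef(u, v)[0, 1])   # close to 0
# if U and V are independent, the spread of V should not depend on the value of U
print(np.std(v[u > 1]), np.std(v))
```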
----
View [Lecture 30: Chi-Square, Student-t, Multivariate Normal | Statistics 110](http://bit.ly/2Qfy1vJ) on YouTube.
| 3b3e7d45c8d7acb41253367cb003e48daf15c98c | 512,148 | ipynb | Jupyter Notebook | Lecture_30.ipynb | abhra-nilIITKgp/stats-110 | 258461cdfbdcf99de5b96bcf5b4af0dd98d48f85 | [
"BSD-3-Clause"
] | 113 | 2016-04-29T07:27:33.000Z | 2022-02-27T18:32:47.000Z | Lecture_30.ipynb | snoop2head/stats-110 | 88d0cc56ede406a584f6ba46368e548010f2b14a | [
"BSD-3-Clause"
] | null | null | null | Lecture_30.ipynb | snoop2head/stats-110 | 88d0cc56ede406a584f6ba46368e548010f2b14a | [
"BSD-3-Clause"
] | 65 | 2016-12-24T02:02:25.000Z | 2022-02-13T13:20:02.000Z | 951.947955 | 155,904 | 0.935657 | true | 5,092 | Qwen/Qwen-72B | 1. YES
2. YES | 0.903294 | 0.868827 | 0.784806 | __label__eng_Latn | 0.819345 | 0.6617 |
# CHEM 1000 - Spring 2022
Prof. Geoffrey Hutchison, University of Pittsburgh
## Graded Homework 3
For this homework, we'll focus on:
- vector arithmetic
- scalar dot product
- vector cross product
- simple operators
---
As a reminder, you do not need to use Python to solve the problems. If you want, you can use other methods, just put your answers in the appropriate places.
To turn in, either download as Notebook (.ipynb) or Print to PDF and upload to Gradescope.
Make sure you fill in any place that says YOUR CODE HERE or "YOUR ANSWER HERE", as well as your name and collaborators (i.e., anyone you discussed this with) below:
```python
NAME = ""
COLLABORATORS = ""
```
### Vector Arithmetic (4 points)
We're going to use some vectors on a more complicated molecule, biphenyl:
```python
import py3Dmol
# 7095 is the PubChem Compound ID (CID) for biphenyl:
# https://pubchem.ncbi.nlm.nih.gov/compound/Biphenyl
view = py3Dmol.view(width=400,height=400,query='cid:7095')
view.setStyle({'stick':{}})
view.addSurface(py3Dmol.VDW,{'opacity':0.7,'color':'white'})
view.zoomTo()
view.show()
```
```python
# here are the coordinates of the indicated atoms
import numpy as np
c1 = np.array([ 1.441, -1.132, 0.414])
c2 = np.array([ 0.742, 0.001, 0.004])
c3 = np.array([-0.742, 0.000, 0.002])
c4 = np.array([-1.443, 1.133, 0.412])
# find the vector for c1-c2 bond by subtraction
c1c2 =
c2c3 =
c3c4 =
# print the length of the c1c2 bond (i.e., the magnitude or 'norm' in numpy)
print( round(np.linalg.norm(c1c2), 3) )
# print the length of the c2c3 bond
print( round(np.linalg.norm(c2c3), 3) )
# print the length of the c3c4 bond
print( round(np.linalg.norm(c3c4), 3) )
```
<div class="alert alert-block alert-info">
**Concept**: Are the bonds all the same length or not? Why?
YOUR ANSWER HERE
</div>
## Center of Geometry
Find the 'centroid' or center point for the atoms c1, c2, c3, c4. (This should obviously be the center of the molecule, and between atoms c3 and c4).
```python
# YOUR CODE HERE
centroid = # FIXME
print(centroid)
```
<div class="alert alert-block alert-info">
**Concept**: Are the coordinates of this molecule exactly at the origin 0,0,0?
YOUR ANSWER HERE
</div>
## Scalar Dot Product
Use the scalar dot product to find the angle between c1c2 and c2c3 (i.e., the c1-c2-c3 angle)
As a reminder:
$$
\cos (\theta)=\frac{\mathbf{u} \cdot \mathbf{v}}{|\mathbf{u}||\mathbf{v}|}
$$
```python
# YOUR CODE HERE
mag_c1c2 = np.linalg.norm(c1c2)
mag_c2c3 = np.linalg.norm(c2c3)
theta = np.arccos( # FIXME )
print( round(np.degrees(theta), 3) )
```
<div class="alert alert-block alert-info">
**Concept:** Does your answer above match your expectation? What do you think the angle should be based on chemistry?
YOUR ANSWER HERE
</div>
### Vector Cross Product
Let's say we want to know the angle between the two benzene rings in biphenyl. That's the *dihedral* or torsion angle around c1-c2-c3-c4.
We know that c1-c2-c3 form a plane (i.e., three points determine a plane) and that the cross product will give us the vector perpendicular to that plane.
So we'll need to take two cross products (and be careful to take them in the right order)
```python
# YOUR CODE HERE
# find the cross product between c1c2 and c2c3 bonds
c1c2c3 =
print(c1c2c3)
# find the cross product between c2c3 and c3c4 bonds
c2c3c4 =
print(c2c3c4)
# the following code should get the torsion angle for you
mag_c1c2c3 = np.linalg.norm(c1c2c3)
mag_c2c3c4 = np.linalg.norm(c2c3c4)
torsion = np.arccos(np.dot(c1c2c3, c2c3c4) / (mag_c1c2c3 * mag_c2c3c4))
print( round(np.degrees(torsion), 3) )
```
<div class="alert alert-block alert-info">
**Concept** What happens if you reverse the order of the bonds in the cross product? Will it change your answer for the angle?
Explain:
YOUR ANSWER HERE
</div>
### Operators
Unfortunately, what we've learned about operators is a bit abstract still.
(We'll design some operators that tell us things about energies, forces, etc. on Monday.)
If you find these hard, they can be. You can either do this with Sympy, or create a PDF of this notebook and add pages with your work.
5.3 Consider the linear momentum operator
$$
\hat{p} \equiv-i \hbar \frac{d}{d x}
$$
where $\hbar$ is a constant. For the function $\psi(x)=\mathrm{e}^{i k x},$ show that $\hat{p} \psi(x)=\hbar k \psi(x)$
5.4 Consider the kinetic energy operator
$$
\hat{H} \equiv-\frac{h^{2}}{8 \pi^{2} m} \frac{d^{2}}{d x^{2}}
$$
For the function $\psi(x)=\sin \left(\frac{\pi x}{L}\right),$ show that $\hat{H} \psi(x)=\frac{h^{2}}{8 m L^{2}} \psi(x)$
```python
from sympy import init_session
init_session()
i, hbar = symbols('i hbar')
h, m, L = symbols('h m L')
```
```python
# YOUR CODE HERE
# if you have an error, make sure you run the cell above this
psi = exp(i*k*x)
p =
print(p)
```
```python
# YOUR CODE HERE
psi = sin(pi*x/L)
H =
print(H)
```
| 85e77515445faeadfc1dcd1232af1afe46b105b4 | 17,590 | ipynb | Jupyter Notebook | homework/ps3/ps3.ipynb | ghutchis/chem1000 | 07a7eac20cc04ee9a1bdb98339fbd5653a02a38d | [
"CC-BY-4.0"
] | 12 | 2020-06-23T18:44:37.000Z | 2022-03-14T10:13:05.000Z | homework/ps3/ps3.ipynb | ghutchis/chem1000 | 07a7eac20cc04ee9a1bdb98339fbd5653a02a38d | [
"CC-BY-4.0"
] | null | null | null | homework/ps3/ps3.ipynb | ghutchis/chem1000 | 07a7eac20cc04ee9a1bdb98339fbd5653a02a38d | [
"CC-BY-4.0"
] | 4 | 2021-07-29T10:45:23.000Z | 2021-10-16T09:51:00.000Z | 29.316667 | 1,608 | 0.559409 | true | 1,642 | Qwen/Qwen-72B | 1. YES
2. YES | 0.803174 | 0.73412 | 0.589626 | __label__eng_Latn | 0.948469 | 0.208228 |
```python
%matplotlib inline
```
```python
# Write your imports here
import numpy as np
import math
import matplotlib.pyplot as plt
```
# Basic Algebra Exercise
## Functions, Polynomials, Complex Numbers. Applications of Abstract Algebra
### Problem 1. Polynomial Interpolation
We know that if we have a set of $n$ data points with coordinates $(x_1; y_1), (x_2; y_2), \dots, (x_n; y_n)$, we can try to figure out what function may have generated these points.
Please note that **our assumptions about the data** will lead us to choosing one function over another. This means that our results are as good as our data and assumptions. Therefore, it's extremely important that we write down our assumptions (which sometimes can be difficult as we sometimes don't realize we're making them). It will be better for our readers if they know what those assumptions and models are.
In this case, we'll state two assumptions:
1. The points in our dataset are generated by a polynomial function
2. The points are very precise, there is absolutely no error in them. This means that the function should pass **through every point**
This method is called *polynomial interpolation* (*"polynomial"* captures assumption 1 and *"interpolation"* captures assumption 2).
It can be proved (look at [Wikipedia](https://en.wikipedia.org/wiki/Polynomial_interpolation) for example) that if we have $n$ data points, there is only one polynomial of degree $n-1$ which passes through them. In "math speak": "the vector spaces of $n$ points and polynomials of degree $n-1$ are isomorphic (there exists a bijection mapping one to the other)".
There are a lot of ways to do interpolation. We can also write the function ourselves if we want but this requires quite a lot more knowledge than we already covered in this course. So we'll use a function which does this for us. `numpy.polyfit()` is one such function. It accepts three main parameters (there are others as well, but they are optional): a list of $x$ coordinates, a list of $y$ coordinates, and a polynomial degree.
Let's say we have these points:
```python
points = np.array([(0, 0), (1, 0.8), (2, 0.9), (3, 0.1), (4, -0.8), (5, -1.0)])
```
First, we need to "extract" the coordinates:
```python
x = points[:, 0]
y = points[:, 1]
```
Then, we need to calculate the interpolating polynomial. For the degree, we'll set $n-1$:
```python
coefficients = np.polyfit(x, y, len(points) - 1)
poly = np.poly1d(coefficients)
```
After that, we need to plot the function. To do this, we'll create a range of $x$ values and evaluate the polynomial at each value:
```python
plot_x = np.linspace(np.min(x), np.max(x), 1000)
plot_y = poly(plot_x)
```
Finally, we need to plot the result. We'll plot both the fitting polynomial curve (using `plt.plot()`) and the points (using `plt.scatter`). It's also nice to have different colors to make the line stand out from the points.
```python
plt.plot(plot_x, plot_y, c = "green")
plt.scatter(x, y)
plt.xlabel("x")
plt.ylabel("y")
plt.show()
```
Don't forget to label the axes!
Your task now is to wrap the code in a function. It should accept a list of points, the polynomial degree, min and max value of $x$ used for plotting. We'll use this function to try some other cases.
```python
import numpy as np
import numpy.polynomial.polynomial as p
def interpolate_polynomial(points, degree, min_x, max_x):
"""
Interpolates a polynomial of the specified degree through the given points and plots it
points - a list of points (x, y) to plot
degree - the polynomial degree
min_x, max_x - range of x values used to plot the interpolating polynomial
"""
x = points[:, 0]
y = points[:, 1]
coefficients = np.polyfit(x, y, degree)
poly = np.poly1d(coefficients)
plot_x = np.linspace(np.min(min_x), np.max(max_x), 1000)
plot_y = poly(plot_x)
plt.plot(plot_x, plot_y, c = "green")
plt.scatter(x, y)
plt.xlabel("x")
plt.ylabel("y")
plt.show()
```
```python
points = np.array([(0, 0), (1, 0.8), (2, 0.9), (3, 0.1), (4, -0.8), (5, -1.0)])
interpolate_polynomial(points, len(points) - 1, np.min(points[:, 0]), np.max(points[:, 0]))
```
We see this is a very nice fit. This is expected, of course. Let's try to expand our view a little. Let's try to plot other values of $x$, further than the original ones. This is **extrapolation**.
```python
interpolate_polynomial(points, len(points) - 1, -5, 10)
```
Hmmm... it seems our polynomial goes a little wild outside the original range. This is to show how **extrapolation can be quite dangerous**.
Let's try a lower polynomial degree now. We used 4, how about 3, 2 and 1?
**Note:** We can add titles to every plot so that we know what exactly we're doing. Te title may be passed as an additional parameter to our function.
```python
interpolate_polynomial(points, 3, np.min(points[:, 0]), np.max(points[:, 0]))
interpolate_polynomial(points, 2, np.min(points[:, 0]), np.max(points[:, 0]))
interpolate_polynomial(points, 1, np.min(points[:, 0]), np.max(points[:, 0]))
```
We see the fitting curves (or line in the last case) struggle more and more and they don't pass through every point. This breaks our assumptions but it can be very useful.
Okay, one more thing. How about increasing the degree? Let's try 5, 7 and 10. Python might complain a little, just ignore it, everything is fine... sort of :).
```python
interpolate_polynomial(points, 5, np.min(points[:, 0]), np.max(points[:, 0]))
interpolate_polynomial(points, 7, np.min(points[:, 0]), np.max(points[:, 0]))
interpolate_polynomial(points, 10, np.min(points[:, 0]), np.max(points[:, 0]))
```
Those graphs look pretty much the same. But that's the point exactly. I'm being quite sneaky here. Let's try to expand our view once again and see what our results really look like.
```python
interpolate_polynomial(points, 5, -10, 10)
interpolate_polynomial(points, 7, -10, 10)
interpolate_polynomial(points, 10, -10, 10)
```
Now we see there are very wild differences. Even though the first two plots look quite similar, look at the $y$ values - they're quite different.
So, these are the dangers of interpolation. Use a too high degree, and you get "the polynomial wiggle". These are all meant to represent **the same** data points but they look insanely different. Here's one more comparison.
```python
interpolate_polynomial(points, len(points) - 1, -2, 7)
interpolate_polynomial(points, len(points) + 1, -2, 7)
```
Now we can see what big difference even a small change in degree can make. This is why we have to choose our interpolating functions very carefully. Generally, a lower degree means a simpler function, which is to be preferred. See [Occam's razor](https://en.wikipedia.org/wiki/Occam%27s_razor).
And also, **we need to be very careful about our assumptions**.
```python
points = np.array([(-5, 0.03846), (-4, 0.05882), (-3, 0.1), (-2, 0.2), (-1, 0.5), (0, 1), (1, 0.5), (2, 0.2), (3, 0.1), (4, 0.05882), (5, 0.03846)])
interpolate_polynomial(points, len(points) - 1, np.min(points[:, 0]), np.max(points[:, 0]))
```
This one definitely looks strange. Even stranger, if we remove the outermost points... ($x = \pm 5$), we get this
```python
points = np.array([(-4, 0.05882), (-3, 0.1), (-2, 0.2), (-1, 0.5), (0, 1), (1, 0.5), (2, 0.2), (3, 0.1), (4, 0.05882)])
interpolate_polynomial(points, len(points) - 1, np.min(points[:, 0]), np.max(points[:, 0]))
```
This is because the generating function is not a polynomial. It's actually:
$$ y = \frac{1}{1 + x^2} $$
Plot the polynomial interpolation and the real generating function **on the same plot**. You may need to modify the original plotting function or just copy its contents.
```python
def interpolate_polynomial2(points, degree, min_x, max_x):
x=np.linspace(np.min(min_x), np.max(max_x), 1000)
f_vectorized = np.vectorize(lambda x:1/(1+x**2))
y = f_vectorized(x)
plt.plot(x, y)
ax = plt.gca()
ax.spines["bottom"].set_position("zero")
ax.spines["left"].set_position("zero")
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
xx = points[:, 0]
yy = points[:, 1]
coefficients = np.polyfit(xx, yy, degree)
poly = np.poly1d(coefficients)
plot_x = np.linspace(np.min(min_x), np.max(max_x), 1000)
plot_y = poly(plot_x)
plt.plot(plot_x, plot_y, c = "green")
plt.scatter(xx, yy)
plt.xlabel("x")
plt.ylabel("y")
plt.show()
pass
```
```python
points = np.array([(-4, 0.05882), (-3, 0.1), (-2, 0.2), (-1, 0.5), (0, 1), (1, 0.5), (2, 0.2), (3, 0.1), (4, 0.05882)])
interpolate_polynomial2(points, len(points) - 1, np.min(points[:, 0]), np.max(points[:, 0]))
```
### Problem 2. Complex Numbers as Vectors
We saw that a complex number $z = a + bi$ is equivalent to (and therefore can be represented as) the ordered tuple $(a; b)$, which can be plotted in a 2D space. So, complex numbers and 2D points are equivalent. What is more, we can draw a vector from the origin of the coordinate plane to our point. This is called a point's **radius-vector**.
Let's try plotting complex numbers as radius vectors. Don't forget to label the real and imaginary axes. Also, move the axes to the origin. Hint: These are called "spines"; you'll need to move 2 of them to the origin and remove the other 2 completely. Hint 2: You already did this in the previous lab.
We can use `plt.quiver()` to plot the vector. It can behave a bit strangely, so we'll need to set the scale of the vectors to be the same as the scale on the graph axes:
```python
plt.quiver(0, 0, z.real, z.imag, angles = "xy", scale_units = "xy", scale = 1)
```
Other than that, the main parameters are: $x_{begin}$, $y_{begin}$, $x_{length}$, $y_{length}$ in that order.
Now, set the aspect ratio of the axes to be equal. Also, add grid lines. Set the axis numbers (called ticks) to be something like `range(-3, 4)` for now.
```python
plt.xticks(range(-3, 4))
plt.yticks(range(-3, 4))
```
If you wish to, you can be a bit more clever with the tick marks. Find the minimal and maximal $x$ and $y$ values and set the ticks according to them. It's a good practice not to jam the plot too much, so leave a little bit of space. That is, if the actual x-range is $[-2; 2]$, set the plotting to be $[-2.5; 2.5]$ for example. Otherwise, the vector heads (arrows) will be "jammed" into a corner or side of the plot.
```python
def plot_complex_number(z):
"""
Plots the complex number z as a radius vector in the 2D space
"""
plt.quiver(0, 0, z.real, z.imag, angles = "xy", scale_units = "xy", scale = 1)
plt.xticks(range(-3, 5))
plt.yticks(range(-3, 5))
pass
plot_complex_number(2 + 3j)
```
How about many numbers? We'll need to get a little bit more creative. First, we need to create a 2D array, each element of which will be a 4-element array: `[0, 0, z.real, z.imag]`. Next, `plt.quiver()` can accept a range of values. Look at [this StackOverflow post](https://stackoverflow.com/questions/12265234/how-to-plot-2d-math-vectors-with-matplotlib) for details and adapt your code.
```python
def plot_complex_numbers(numbers, colors):
"""
Plots the given complex numbers as radius vectors in the 2D space
"""
list_numbers = np.array([numbers])
for z in list_numbers:
# Plots the vector
plt.quiver(0, 0, z.real, z.imag, angles = "xy", scale_units = "xy", scale = 1.5, color=colors)
# Sets the aspect ratio of the axes to be equal
plt.gca().set_aspect("equal")
# Sets range of axis numbers
plt.xticks(range(-3, 4))
plt.yticks(range(-3, 4))
pass
```
```python
plot_complex_numbers([2 + 3j, -2 - 1j, -3, 2j], ["green", "red", "blue", "orange"])
```
Now let's see what the operations look like. Let's add two numbers and plot the result.
```python
z1 = 2 + 3j
z2 = 1 - 1j
plot_complex_numbers([z1, z2, z1 + z2], ["red", "blue", "green"])
```
We can see that adding the complex numbers is equivalent to adding vectors (remember the "parallelogram rule"). As special cases, let's try adding pure real and pure imaginary numbers:
```python
z1 = 2 + 3j
z2 = 2 + 0j
plot_complex_numbers([z1, z2, z1 + z2], ["red", "blue", "green"])
```
```python
z1 = 2 + 3j
z2 = 0 + 2j
plot_complex_numbers([z1, z2, z1 + z2], ["red", "blue", "green"])
```
How about multiplication? First we know that multiplying by 1 gives us the same vector and multiplying by -1 gives us the reversed version of the same vector. How about multiplication by $\pm i$?
```python
z = 2 + 3j
plot_complex_numbers([z, z * 1], ["red", "blue"])
plot_complex_numbers([z, z * -1], ["red", "blue"])
plot_complex_numbers([z, z * 1j], ["red", "blue"])
plot_complex_numbers([z, z * -1j], ["red", "blue"])
```
So, multiplication by $i$ is equivalent to 90-degree rotation. We can actually see the following equivalence relationships between multiplying numbers and rotation about the origin:
| Real | Imaginary | Result rotation |
|------|-----------|-----------------|
| 1 | 0 | $0^\circ$ |
| 0 | 1 | $90^\circ$ |
| -1 | 0 | $180^\circ$ |
| 0 | -1 | $270^\circ$ |
Once again, we see the power of abstraction and algebra in practice. We know that complex numbers and 2D vectors are equivalent. Now we see something more: addition and multiplication are equivalent to translation (movement) and rotation!
Let's test the multiplication some more. We can see the resulting vector is the sum of the original vectors, but *scaled and rotated*:
```python
z1 = 2 + 3j
z2 = 1 - 2j
plot_complex_numbers([z1, z2, z1 * z2], ["red", "blue", "green"])
```
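More generally, multiplying by a complex number with modulus 1, $e^{i\theta} = \cos\theta + i\sin\theta$, rotates a vector by the angle $\theta$ without changing its length. The short demonstration below is an addition that reuses the plotting function defined above; the particular angle is arbitrary.

```python
theta = np.radians(30)                          # rotate by 30 degrees
rotation = np.cos(theta) + 1j * np.sin(theta)   # e^(i * theta), modulus 1

z = 2 + 1j
plot_complex_numbers([z, z * rotation], ["red", "green"])
print(abs(z), abs(z * rotation))                # the length is unchanged
```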
### Problem 3. Recursion and Fractals
> "To understand recursion, you first need to understand recursion."
There are three main parts to a recursive function:
1. Bottom - when the recursion should finish
2. Operation - some meaningful thing to do
3. Recursive call - calling the same function
4. Clean-up - returning all data to its previous state (this reverses the effect of the operation)
Let's do one of the most famous recursion examples. And I'm not talking about Fibonacci here. Let's draw a tree using recursive functions.
The figure we're going to draw is called a **fractal**. It's self-similar, which means that if you zoom in on a part of it, it will look the same. You can see fractals everywhere in nature, with broccoli being one of the prime examples. Have a look:
First, we need to specify the recursive part. In order to draw a tree, we need to draw a line of a given length (which will be the current branch), and then draw two more lines to the left and right. By "left" and "right", we should mean "rotation by a specified angle".
So, this is how to draw a branch: draw a line and prepare to draw two more branches to the left and right. This is going to be our recursive call.
To make things prettier, more natural-looking (and have a natural end to our recursion), let's draw each "sub-branch" a little shorter. If the branch becomes too short, it won't have "child branches". This will be the bottom of our recursion.
There's one more important part of recursion, and this is **"the clean-up"**. After we did something in the recursive calls, it's very important to return the state of everything as it was **before** we did anything. In this case, after we draw a branch, we go back to our starting position.
Let's first import the most import-ant (no pun intended...) Python drawing library: `turtle`! In order to make things easier, we'll import all methods directly.
```python
from turtle import *
```
You can look up the docs about turtle if you're more interested. The basic things we're going to use are going forward and backward by a specified number of pixels and turning left and right by a specified angle (in degrees).
Let's now define our recursive function:
```python
def draw_branch(branch_length, angle):
if branch_length > 5:
forward(branch_length)
right(angle)
draw_branch(branch_length - 15, angle)
left(2 * angle)
draw_branch(branch_length - 15, angle)
right(angle)
backward(branch_length)
```
And let's call it:
```python
draw_branch(100, 20)
```
We need to start the tree not at the middle, but toward the bottom of the screen, so we need to make a few more adjustments. We can wrap the setup in another function and call it. Let's start one trunk length below the center (the trunk length is the length of the longest line).
```python
def draw_tree(trunk_length, angle):
speed("fastest")
left(90)
up()
backward(trunk_length)
down()
draw_branch(trunk_length, angle)
```
Note that the graphics will show in a separate window. Also note that sometimes you might get bugs. If you do, go to Kernel > Restart.
```python
from turtle import *
def draw_branch(branch_length, angle):
if branch_length > 5:
forward(branch_length)
right(angle)
draw_branch(branch_length - 15, angle)
left(2 * angle)
draw_branch(branch_length - 15, angle)
right(angle)
backward(branch_length)
```
```python
def draw_tree(trunk_length, angle):
speed("fastest")
left(90)
up()
backward(trunk_length)
down()
draw_branch(trunk_length, angle)
```
```python
draw_tree(100, 20)
```
Experiment with different lengths and angles. Especially interesting angles are $30^\circ$, $45^\circ$, $60^\circ$ and $90^\circ$.
```python
draw_tree(100, 30)
```
```python
draw_tree(100, 45)
```
```python
draw_tree(100, 90)
```
Now modify the original function a little. Draw the lines with different thickness. Provide the trunk thickness at the initial call. Similar to how branches go shorter, they should also go thinner.
```python
def draw_branch2(branch_length, angle):
    if branch_length > 5:
        # thicker pen for the longer (lower) branches, thinner for the short ones
        width(max(1, branch_length // 10))
        forward(branch_length)
        right(angle)
        draw_branch2(branch_length - 15, angle)
        left(2 * angle)
        draw_branch2(branch_length - 15, angle)
        right(angle)
        # restore the pen width before retracing this branch
        width(max(1, branch_length // 10))
        backward(branch_length)
```
```python
def draw_tree2(trunk_length, angle):
width(width=10)
speed("fastest")
left(90)
up()
backward(trunk_length)
down()
draw_branch2(trunk_length, angle)
```
```python
draw_tree2(100, 90)
```
#### * Optional problem
Try to draw another kind of fractal graphic using recursion and the `turtle` library. Two very popular examples are the "Koch snowflake" and the "Sierpinski triangle". You can also modify the original tree algorithm to create more natural-looking trees. You can, for example, play with angles, number of branches, lengths, and widths. The Internet has a lot of ideas about this :). Hint: Look up **"L-systems"**.
### Problem 4. Run-length Encoding
One application of algebra and basic math can be **compression**. This is a way to save data in less space than it originally takes. The most basic form of compression is called [run-length encoding](https://en.wikipedia.org/wiki/Run-length_encoding).
Write a function that encodes a given text. Write another one that decodes.
We can see that RLE is not very useful in the general case. But it can be extremely useful if we have very few symbols. An example of this can be DNA and protein sequences. DNA code, for example, has only 4 characters.
Test your encoding and decoding functions on a DNA sequence (you can look up some on the Internet). Measure how much your data is compressed relative to the original.
```python
from re import sub
def encode(text):
    """
    Returns the run-length encoded version of the text
    (numbers after symbols, length = 1 is skipped)
    """
    # append the run length only when it is greater than 1; a global replace of "1"
    # would corrupt counts such as 10 or 12
    return sub(r'(.)\1*',
               lambda m: m.group(1) + (str(len(m.group(0))) if len(m.group(0)) > 1 else ""),
               text)
def decode(text):
"""
Decodes the text using run-length encoding
"""
return sub(r'(\D)(\d+)', lambda m: m.group(1) * int(m.group(2)),text)
pass
```
```python
# Tests
# Test that the functions work on their own
assert encode("AABCCCDEEEE") == "A2BC3DE4"
assert decode("A2BC3DE4") == "AABCCCDEEEE"
# Test that the functions really invert each other
assert decode(encode("AABCCCDEEEE")) == "AABCCCDEEEE"
assert encode(decode("A2BC3DE4")) == "A2BC3DE4"
```
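Below is an added test on a DNA-like sequence, measuring the compression ratio described in the problem statement. The sequence is randomly generated here rather than taken from a real genome, so the exact percentage is only illustrative.

```python
import random

random.seed(0)
# random DNA-like string containing short runs of repeated bases
dna = "".join(random.choice("ACGT") * random.randint(1, 6) for _ in range(500))

encoded = encode(dna)
assert decode(encoded) == dna
print("original length:", len(dna))
print("encoded length: ", len(encoded))
print("compressed size: {:.1f}% of the original".format(len(encoded) / len(dna) * 100))
```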
### Problem 5. Function Invertibility and Cryptography
As we already saw, some functions are able to be inverted. That is, if we know the output, we can see what input generated it directly. This is true if the function is **one-to-one correspondence** (bijection).
However, not all functions are created the same. Some functions are easy to compute but their inverses are extremely difficult. A very important example is **number factorization**. It's relatively easy (computationally) to multiply numbers but factoring them is quite difficult. Let's run an experiment.
We'll need a function to generate random n-bit numbers. One such number can be found in the `Crypto` package
```python
from Crypto.Util import number
random_integer = number.getRandomNBitInteger(n_bits)
```
We could, of course, write our factorization by hand but we'll use `sympy`
```python
from sympy.ntheory import factorint
factorint(1032969399047817906432668079951) # {3: 2, 79: 1, 36779: 1, 7776252885493: 1, 5079811103: 1}
```
This function returns a `dict` where the keys are the factors, and the values - how many times they should be multiplied.
We'll also need a tool to accurately measure performance. Have a look at [this one](https://docs.python.org/3/library/time.html#time.time) for example.
Specity a sequence of bit lengths, in increasing order. For example, you might choose something like `[10, 20, 25, 30, 32, 33, 35, 38, 40]`. Depending on your computer's abilities you can go as high as you want. For each bit length, generate a number. See how much time it takes to factor it. Then see how much time it takes to multiply the factors. Be careful how you measure these. You shouldn't include the number generation (or any other external functions) in your timing.
In order to have better accuracy, don't do this once per bit length. Do it, for example, five times, and average the results.
Plot all multiplication and factorization times as a function of the number of bits. You should see that factorization is much, much slower. If you don't see this, just try larger numbers :D.
```python
# Write your code here
from Crypto.Util import number
random_integer = number.getRandomNBitInteger(40)
from sympy.ntheory import factorint
factorint(1032969399047817906432668079951)
```
{3: 2, 79: 1, 36779: 1, 5079811103: 1, 7776252885493: 1}
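The cell above only generates one random integer and factors a fixed example. A minimal sketch of the timing experiment described in the problem statement might look like the following; the bit lengths and the number of repetitions are illustrative choices.

```python
import time
import numpy as np
import matplotlib.pyplot as plt
from Crypto.Util import number
from sympy.ntheory import factorint

bit_lengths = [10, 20, 25, 30, 32, 35]
repeats = 5
factor_times, multiply_times = [], []

for bits in bit_lengths:
    f_times, m_times = [], []
    for _ in range(repeats):
        n = number.getRandomNBitInteger(bits)

        start = time.time()
        factors = factorint(n)
        f_times.append(time.time() - start)

        start = time.time()
        product = 1
        for prime, power in factors.items():
            product *= prime ** power
        m_times.append(time.time() - start)
        assert product == n       # multiplying the factors recovers the original number

    factor_times.append(np.mean(f_times))
    multiply_times.append(np.mean(m_times))

plt.plot(bit_lengths, factor_times, label="factorization")
plt.plot(bit_lengths, multiply_times, label="multiplication")
plt.xlabel("number of bits")
plt.ylabel("average time [s]")
plt.legend()
plt.show()
```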
### * Problem 6. Diffie - Hellman Simulation
As we already saw, there are functions which are very easy to compute in the "forward" direction but really difficult (computationally) to invert (that is, determine the input from the output). There is a special case: the function may have a hidden "trap door". If you know where that door is, you can invert the function easily. This statement is at the core of modern cryptography.
Look up **Diffie - Hellman key exchange** (here's a [video](https://www.youtube.com/watch?v=cM4mNVUBtHk) on that but feel free to use anything else you might find useful).
Simulate the algorithm you just saw. Generate large enough numbers so the difference is noticeable (say, factoring takes 10-15 seconds). Simulate both participants in the key exchange. Simulate an eavesdropper.
First, make sure after both participants run the algotihm, they have *the same key* (they generate the same number).
Second, see how long it takes for them to exchange keys.
Third, see how long it takes the eavesdropper to arrive at the correct shared secret.
You should be able to see **the power of cryptography**. In this case, it's not that the function is irreversible. It can be reversed, but it takes a really long time (and with more bits, we're talking billions of years). However, if you know something else (this is called a **trap door**), the function becomes relatively easy to invert.
```python
# Write your code here
```
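A minimal sketch of the simulation (the modulus `p` and base `g` below are small, illustrative values so that the brute-force step finishes quickly; the real experiment should use much larger numbers, as described above):
```python
import random
from time import time

# Public parameters (small, illustrative values).
p = 104729           # a small prime modulus
g = 5                # public base

# Each participant keeps a private exponent and publishes g**private mod p.
alice_private = random.randrange(2, p - 1)
bob_private = random.randrange(2, p - 1)
alice_public = pow(g, alice_private, p)
bob_public = pow(g, bob_private, p)

# Each side combines the other's public value with its own private exponent.
alice_shared = pow(bob_public, alice_private, p)
bob_shared = pow(alice_public, bob_private, p)
assert alice_shared == bob_shared   # both participants hold the same key

# The eavesdropper sees only p, g and the two public values, and must solve a
# discrete logarithm (brute force here) to recover one of the private exponents.
def brute_force_discrete_log(target, g, p):
    value = 1
    for exponent in range(1, p):
        value = (value * g) % p
        if value == target:
            return exponent
    return None

start = time()
recovered = brute_force_discrete_log(alice_public, g, p)
eve_shared = pow(bob_public, recovered, p)
print('eavesdropper time:', time() - start,
      '| recovered the shared key:', eve_shared == alice_shared)
```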
### ** Problem 7. The Galois Field in Cryptography
Research the uses of the Galois field. What are its properties? How can it be used in cryptography? Write a simple cryptosystem based on the field.
You can use the following questions to facilitate your research:
* What is a field?
* What is GF(2)? Why is it an algebraic field?
* What is perfect secrecy? How does it relate to the participants in the conversation, and to the outside eavesdropper?
* What is symmetrical encryption?
* How to encrypt one-bit messages?
* How to extend the one-bit encryption system to many bits?
* Why is the system decryptable? How do the participants decrypt the encrypted messages?
* Why isn't the eavesdropper able to decrypt?
* What is a one-time pad?
* How does the one-time pad achieve perfect secrecy?
* What happens if we try to use a one-time pad many times?
* Provide an example where you break the "many-time pad" security
* What are some current enterprise-grade applications of encryption over GF(2)?
* Implement a cryptosystem based on GF(2). Show correctness on various test cases
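As a starting point for the last item, here is a minimal sketch of a GF(2)-based (one-time-pad style) cryptosystem; the message below is just illustrative:
```python
import secrets

# In GF(2), addition is XOR and every element is its own additive inverse,
# so encryption and decryption are the same operation.
def encrypt(message: bytes, key: bytes) -> bytes:
    assert len(key) == len(message)        # one-time pad: key as long as the message
    return bytes(m ^ k for m, k in zip(message, key))

decrypt = encrypt                          # XOR is its own inverse

message = b"attack at dawn"
key = secrets.token_bytes(len(message))    # fresh random key; never reuse it
ciphertext = encrypt(message, key)
assert decrypt(ciphertext, key) == message
print(ciphertext.hex())
```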
### ** Problem 8. Huffman Compression Algorithm
Examine and implement the **Huffman algorithm** for compressing data. It's based on information theory and probability theory. Document your findings and provide your implementation.
This algorithm is used for **lossless compression**: compressing data without loss of quality. You can use the following checklist:
* What is the difference between lossless and lossy compression?
* When can we get away with lossy compression?
* What is entropy?
* How are Huffman trees constructed?
* Provide a few examples
* How can we get back the uncompressed data from the Huffman tree?
* How and where are Huffman trees stored?
* Implement the algorithm. Add any other formulas / assumptions / etc. you might need.
* Test the algorithm. A good measure would be percentage compression: $$\frac{\text{compressed}}{\text{uncompressed}} \times 100\%$$
* How well does Huffman's algorithm perform compared to other compression algorithms (e.g. LZ77)?
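A compact sketch of the construction (it builds the prefix codes with a heap and reports the percentage compression; storing the tree and the comparison with LZ77 are left to the checklist above):
```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Return {symbol: bitstring}; more frequent symbols get shorter codes.
    Assumes at least two distinct symbols in text."""
    heap = [[weight, [symbol, ""]] for symbol, weight in Counter(text).items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)               # the two least frequent subtrees
        hi = heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]            # left branch contributes a 0
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]            # right branch contributes a 1
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return dict(heap[0][1:])

text = "this is an example of a huffman tree"
codes = huffman_codes(text)
encoded = "".join(codes[ch] for ch in text)

# Percentage compression relative to 8 bits per character
# (this ignores the cost of storing the tree itself).
print(codes)
print(f"{100 * len(encoded) / (8 * len(text)):.1f}% of the original size")
```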
| 37361c9cb790d1b87657fcb38bb915a90930e6c6 | 323,417 | ipynb | Jupyter Notebook | Basic_Algebra/.ipynb_checkpoints/Basic-Algebra-Exercise-checkpoint.ipynb | ivaylokanov/Math_Concepts_for_Developers | 646d4d5de48535c22b9a8fcb624973b917661c5e | [
"MIT"
] | null | null | null | Basic_Algebra/.ipynb_checkpoints/Basic-Algebra-Exercise-checkpoint.ipynb | ivaylokanov/Math_Concepts_for_Developers | 646d4d5de48535c22b9a8fcb624973b917661c5e | [
"MIT"
] | null | null | null | Basic_Algebra/.ipynb_checkpoints/Basic-Algebra-Exercise-checkpoint.ipynb | ivaylokanov/Math_Concepts_for_Developers | 646d4d5de48535c22b9a8fcb624973b917661c5e | [
"MIT"
] | null | null | null | 265.531199 | 24,836 | 0.904275 | true | 6,916 | Qwen/Qwen-72B | 1. YES
2. YES | 0.931463 | 0.879147 | 0.818892 | __label__eng_Latn | 0.996495 | 0.740894 |
# Animating a simple wave
We'll plot at various times a wave $u(x,t)$ that starts as a triangular shape as in Taylor Example 16.1, and then animate it. We can imagine this as simulating a wave on a taut string. Here $u$ is the transverse displacement (i.e., $y$ in our two-dimensional plots). We are not solving the wave equation as a differential equation here, but starting with $u(x,0) \equiv u_0(x)$ and plotting the solution at time $t$:
$\begin{align}
u(x,t) = \frac12 u_0(x - ct) + \frac12 u_0(x + ct)
\;,
\end{align}$
which *is* the solution to the wave equation starting with $u_0(x)$ at time $t=0$.
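A quick chain-rule check confirms this: differentiating each traveling piece twice gives
$\begin{align}
\frac{\partial^2 u}{\partial t^2} = \frac{c^2}{2}\, u_0''(x - ct) + \frac{c^2}{2}\, u_0''(x + ct) = c^2 \frac{\partial^2 u}{\partial x^2}
\;,
\end{align}$
and at $t=0$ the two halves add back up to $u_0(x)$, so the initial shape is reproduced.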
We have various choices for animation in a Jupyter notebook. We will consider two possibilities, which both use `FuncAnimation` from `matplotlib.animation`.
1. Make a javascript movie that is then displayed inline with a movie-playing widget using `HTML`. We use
`%matplotlib inline` for this option and use `%%capture` to prevent the figure from displaying prematurely.
2. Update the figure in real time (so to speak) by using `%matplotlib notebook`, which creates active figures that we can modify after they are displayed.
We'll do the first option here. We should define at least one class for the animation.
v1: Created 25-Mar-2019. Last revised 27-Mar-2019 by Dick Furnstahl (furnstahl.1@osu.edu).
```python
%matplotlib inline
```
To use option 2: uncomment `%matplotlib notebook` here and `fig.show()` just after we define `anim`. Comment out `%%capture` and `HTML(anim.to_jshtml())` below.
```python
#%matplotlib notebook
```
```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import animation, rc
from IPython.display import HTML
```
First define functions for the $t=0$ wave function form (here a triangle) and for the subsequent shape at any time $t$ based on the wave speed `c_wave`.
```python
def u_0_triangle(x_pts, height=1., width=1.):
"""Returns a triangular wave of amplitude height and width 2*width.
"""
y_pts = np.zeros(len(x_pts)) # set the y array to all zeros
for i, x in enumerate(x_pts):
if x < width and x >= 0.:
y_pts[i] = -(height/width) * x + height
elif x < 0 and x >= -width:
y_pts[i] = (height/width) * x + height
else:
pass # do nothing (everything else is zero already)
return y_pts
```
```python
def u_triangle(x_pts, t, c_wave = 1., height=1., width=1.):
"""Returns the wave at time t resulting from a triangular wave of
amplitude height and width 2*width at time t=0. It is the
superposition of two traveling waves moving to the left and the right.
"""
    y_pts = u_0_triangle(x_pts - c_wave * t, height, width) / 2. + \
            u_0_triangle(x_pts + c_wave * t, height, width) / 2.
return y_pts
```
```python
# Set up the array of x points (whatever looks good)
x_min = -5.
x_max = +5.
delta_x = 0.01
x_pts = np.arange(x_min, x_max, delta_x)
```
First look at the initial ($t=0$) wave form.
```python
# Define the initial (t=0) wave form and the wave speed.
height = 1.
width = 1.
c_wave = 1.
# Make a figure showing the initial wave.
t_now = 0.
fig = plt.figure(figsize=(6,2), num='Triangular wave')
ax = fig.add_subplot(1,1,1)
ax.set_xlim(x_min, x_max)
gap = 0.1
ax.set_ylim(-height -gap, height + gap)
ax.set_xlabel(r'$x$')
ax.set_ylabel(r'$u_0(x)$')
ax.set_title(rf'$t = {t_now:.1f}$')
line, = ax.plot(x_pts,
u_triangle(x_pts, t_now, c_wave, height, width),
color='blue', lw=2)
fig.tight_layout()
```
Next make some plots at an array of time points.
```python
t_array = np.array([0., 1, 2., 3, 4., -2.])
fig_array = plt.figure(figsize=(12,6), num='Triangular wave')
for i, t_now in enumerate(t_array):
ax_array = fig_array.add_subplot(3, 2, i+1)
ax_array.set_xlim(x_min, x_max)
gap = 0.1
ax_array.set_ylim(-height -gap, height + gap)
ax_array.set_xlabel(r'$x$')
ax_array.set_ylabel(r'$u_0(x)$')
ax_array.set_title(rf'$t = {t_now:.1f}$')
ax_array.plot(x_pts,
u_triangle(x_pts, t_now, c_wave, height, width),
color='blue', lw=2)
fig_array.tight_layout()
fig_array.savefig('Taylor_Example_16p2_triangle_waves.png',
bbox_inches='tight')
```
Now it is time to animate!
```python
# Set up the t mesh for the animation. The maximum value of t shown in
# the movie will be t_min + delta_t * frame_number
t_min = 0. # You can make this negative to see what happens before t=0!
t_max = 15.
delta_t = 0.1
t_pts = np.arange(t_min, t_max, delta_t)
```
We use the cell "magic" `%%capture` to keep the figure from being shown here. If we didn't, the animated version below would be blank.
```python
%%capture
fig_anim = plt.figure(figsize=(6,2), num='Triangular wave')
ax_anim = fig_anim.add_subplot(1,1,1)
ax_anim.set_xlim(x_min, x_max)
gap = 0.1
ax_anim.set_ylim(-height -gap, height + gap)
# By assigning the first return from plot to line_anim, we can later change
# the values in the line.
line_anim, = ax_anim.plot(x_pts,
u_triangle(x_pts, t_min, c_wave, height, width),
color='blue', lw=2)
fig_anim.tight_layout()
```
```python
def animate_wave(i):
"""This is the function called by FuncAnimation to create each frame,
numbered by i. So each i corresponds to a point in the t_pts
array, with index i.
"""
t = t_pts[i]
y_pts = u_triangle(x_pts, t, c_wave, height, width)
line_anim.set_data(x_pts, y_pts) # overwrite line_anim with new points
return (line_anim,) # this is needed for blit=True to work
```
```python
frame_interval = 40. # time between frames in milliseconds
frame_number = 100 # number of frames to include (index of t_pts)
anim = animation.FuncAnimation(fig_anim,
animate_wave,
init_func=None,
frames=frame_number,
interval=frame_interval,
blit=True,
repeat=False)
#fig.show()
```
```python
HTML(anim.to_jshtml())
```
```python
```
| c7c7f5e1fd9a5aa8c4872ece225ae95648b6fdb7 | 744,797 | ipynb | Jupyter Notebook | 2020_week_11/Problem_16.11.ipynb | CLima86/Physics_5300_CDL | d9e8ee0861d408a85b4be3adfc97e98afb4a1149 | [
"MIT"
] | null | null | null | 2020_week_11/Problem_16.11.ipynb | CLima86/Physics_5300_CDL | d9e8ee0861d408a85b4be3adfc97e98afb4a1149 | [
"MIT"
] | null | null | null | 2020_week_11/Problem_16.11.ipynb | CLima86/Physics_5300_CDL | d9e8ee0861d408a85b4be3adfc97e98afb4a1149 | [
"MIT"
] | null | null | null | 90.049208 | 27,628 | 0.815783 | true | 2,300 | Qwen/Qwen-72B | 1. YES
2. YES | 0.896251 | 0.880797 | 0.789416 | __label__eng_Latn | 0.894351 | 0.672409 |
<a href="https://colab.research.google.com/github/julianovale/simulacao_python/blob/master/0008_ex_trem_kronecker_artigo.ipynb" target="_parent"></a>
```
from sympy import I, Matrix, symbols, Symbol, eye
from datetime import datetime
import numpy as np
import pandas as pd
```
```
'''
Routes
'''
R1 = Matrix([[0,"L1p3",0,0,0,0],[0,0,"L1v1",0,0,0],[0,0,0,"L1p4",0,0],[0,0,0,0,"L1v3",0],[0,0,0,0,0,"L1v4"],[0,0,0,0,0,0]])
R2 = Matrix([[0,"L2p3",0,0,0,0],[0,0,"L2v2",0,0,0],[0,0,0,"L2p5",0,0],[0,0,0,0,"L2v3",0],[0,0,0,0,0,"L2v5"],[0,0,0,0,0,0]])
R3 = Matrix([[0,"L3p3",0,0,0,0],[0,0,"L3v5",0,0,0],[0,0,0,"L3p1",0,0],[0,0,0,0,"L3v3",0],[0,0,0,0,0,"L3v1"],[0,0,0,0,0,0]])
```
```
'''
Seções de bloqueio
'''
T1 = Matrix([[0, "p1"],["v1", 0]])
T2 = Matrix([[0, "p2"],["v2", 0]])
T3 = Matrix([[0, "p3"],["v3", 0]])
T4 = Matrix([[0, "p4"],["v4", 0]])
T5 = Matrix([[0, "p5"],["v5", 0]])
```
```
def kronSum(A,B):
    # Kronecker sum: kronSum(A, B) = kron(A, I_n) + kron(I_m, B),
    # an (m*n x m*n) matrix built from an (m x m) A and an (n x n) B.
    m = np.size(A,1)
    n = np.size(B,1)
    A = np.kron(A,np.eye(n))
    B = np.kron(np.eye(m),B)
    return A + B
```
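`kronSum` implements the Kronecker sum, so the state space multiplies at every step: the three 6x6 route matrices give 6^3 = 216 combined route states, the five 2x2 section matrices give 2^5 = 32 section states, and the final Kronecker product of the two yields the 216 x 32 = 6912 dimensions seen below. A small sanity check on illustrative matrices:
```
# Sanity check (illustrative): the Kronecker sum of an (m x m) and an (n x n)
# matrix is (m*n x m*n), which is why the route/section state space multiplies.
A = np.eye(2)
B = np.eye(3)
print(kronSum(A, B).shape)  # (6, 6)
```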
```
momento_inicio = datetime.now()
'''
Route algebra
'''
rotas = kronSum(R1,R2)
rotas = kronSum(rotas,R3)
'''
Section algebra
'''
secoes = kronSum(T1,T2)
secoes = kronSum(secoes,T3)
secoes = kronSum(secoes,T4)
secoes = kronSum(secoes,T5)
'''
System algebra
'''
sistema = np.kron(rotas, secoes)
# compute processing time
tempo_processamento = datetime.now() - momento_inicio
```
```
sistema = pd.DataFrame(data=sistema,index=list(range(1,np.size(sistema,0)+1)), columns=list(range(1,np.size(sistema,1)+1)))
```
```
sistema.shape
```
(6912, 6912)
```
print(tempo_processamento)
```
0:01:10.109705
```
sistema
```
[6912 rows × 6912 columns]  (sparse symbolic matrix; nonzero entries such as 1.0*L3p3*p5, 1.0*L3p3*p4 and 1.0*L3p3*p3 appear in the first rows)
```
sistema.loc[6858,6906]
```
1.0*L3v1*p1
```
momento_inicio = datetime.now()
# Walk the system matrix and record every nonzero entry as a graph edge:
# denode = from-node, paranode = to-node, aresta = the symbolic edge label.
colunas = ['denode', 'paranode', 'aresta']
grafo = pd.DataFrame(columns=colunas)
r = 1
c = 1
for j in range(np.size(sistema,0)):
for i in range(np.size(sistema,0)):
if sistema.loc[r,c]==0 and c < np.size(sistema,0):
c += 1
elif c < np.size(sistema,0):
grafo.loc[len(grafo)+1] = (r, c, sistema.loc[r,c])
c += 1
else:
c = 1
r += 1
tempo_processamento = datetime.now() - momento_inicio
print(tempo_processamento)
```
0:19:39.230451
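As an aside, the same edge list can be collected without walking the DataFrame cell by cell; a sketch of an alternative (the name `grafo_fast` is illustrative, and this is not the run timed above):
```
# Alternative sketch: gather the nonzero entries of the symbolic matrix directly.
values = sistema.values
edges = [(r + 1, c + 1, values[r, c])        # +1 to match the 1-based labels above
         for r in range(values.shape[0])
         for c in range(values.shape[1])
         if values[r, c] != 0]
grafo_fast = pd.DataFrame(edges, columns=['denode', 'paranode', 'aresta'])
```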
```
grafo
```
       denode  paranode       aresta
1           1        34  1.0*L3p3*p5
2           1        35  1.0*L3p3*p4
3           1        37  1.0*L3p3*p3
4           1        41  1.0*L3p3*p2
5           1        49  1.0*L3p3*p1
...       ...       ...          ...
86381    6880      6896  1.0*L3v1*v1
86382    6880      6904  1.0*L3v1*v2
86383    6880      6908  1.0*L3v1*v3
86384    6880      6910  1.0*L3v1*v4
86385    6880      6911  1.0*L3v1*v5

[86385 rows × 3 columns]
```
grafo.to_csv('grafo.csv', sep=";")
```
```
from google.colab import files
files.download('grafo.csv')
```
<IPython.core.display.Javascript object>
<IPython.core.display.Javascript object>
| 6099cbb8a5863f1d85338b638115b451c5b9ea87 | 54,797 | ipynb | Jupyter Notebook | 0008_ex_trem_kronecker_artigo.ipynb | julianovale/simulacao_python | 9d29fe05d1580ca46311fc6fb6ab41b1b1c7ca5d | [
"MIT"
] | null | null | null | 0008_ex_trem_kronecker_artigo.ipynb | julianovale/simulacao_python | 9d29fe05d1580ca46311fc6fb6ab41b1b1c7ca5d | [
"MIT"
] | null | null | null | 0008_ex_trem_kronecker_artigo.ipynb | julianovale/simulacao_python | 9d29fe05d1580ca46311fc6fb6ab41b1b1c7ca5d | [
"MIT"
] | null | null | null | 34.57224 | 256 | 0.227458 | true | 9,830 | Qwen/Qwen-72B | 1. YES
2. YES | 0.90053 | 0.672332 | 0.605455 | __label__lmo_Latn | 0.228766 | 0.245004 |
```python
import os
from galgebra_ipython_helpers import check as check_latex, run
os.chdir('../Old Format')
```
```python
run('bad_example')
```
3*e_x + 4*e_y
5
25
3*e_x/5 + 4*e_y/5
3*e_x/25 + 4*e_y/25
1
3*e_x/25 + 4*e_y/25
bad_example.py:4: DeprecationWarning: The `galgebra.deprecated` module is deprecated
from galgebra.deprecated import MV
```python
run('eval_check')
```
Frame = (ex + ey, ex - ey)
Reciprocal Frame = (ex/2 + ey/2, ex/2 - ey/2)
eu.eu_r = 1
eu.ev_r = 0
ev.eu_r = 0
ev.ev_r = 1
Frame = (ex + ey + ez, ex - ey)
Reciprocal Frame = (ex/3 + ey/3 + ez/3, ex/2 - ey/2)
eu.eu_r = 1
eu.ev_r = 0
ev.eu_r = 0
ev.ev_r = 1
eu = ex + ey + ez
ev = ex - ey
eu^ev|ex
(eu^(ev|ex))
ex + ey + ez
eu^ev|ex*eu
((eu^(ev|ex))*eu)
3
eval_check.py:5: DeprecationWarning: The `galgebra.deprecated` module is deprecated
from galgebra.deprecated import MV,ReciprocalFrame
eval_check.py:19: DeprecationWarning: The `galgebra.deprecated.ReciprocalFrame` function is deprecated in favor of the `ReciprocalFrame` method of `Ga` objects.
(eu_r,ev_r) = ReciprocalFrame([eu,ev])
eval_check.py:31: DeprecationWarning: The `galgebra.deprecated.ReciprocalFrame` function is deprecated in favor of the `ReciprocalFrame` method of `Ga` objects.
(eu_r,ev_r) = ReciprocalFrame([eu,ev])
```python
run('exp_check')
```
u__x*e_x + u__y*e_y + u__z*e_z
v__x*e_x + v__y*e_y + v__z*e_z
(u__x*v__y - u__y*v__x)*e_x^e_y + (u__x*v__z - u__z*v__x)*e_x^e_z + (u__y*v__z - u__z*v__y)*e_y^e_z
True
exp_check.py:5: DeprecationWarning: The `galgebra.deprecated` module is deprecated
from galgebra.deprecated import MV
exp_check.py:12: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
u = MV('u','vector')
exp_check.py:13: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
v = MV('v','vector')
exp_check.py:14: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
w = MV('w','vector')
```python
check_latex('latex_check')
```
latex_check.py:7: DeprecationWarning: The `galgebra.deprecated` module is deprecated
from galgebra.deprecated import MV
latex_check.py:32: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
A = MV('A','mv')
latex_check.py:41: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
X = MV('X','vector')
latex_check.py:42: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
Y = MV('Y','vector')
latex_check.py:60: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
X = MV('X','vector')
latex_check.py:61: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
A = MV('A','spinor')
latex_check.py:77: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
X = MV('X','vector')
latex_check.py:78: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
A = MV('A','spinor')
latex_check.py:115: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
f = MV('f','scalar',fct=True)
latex_check.py:116: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
A = MV('A','vector',fct=True)
latex_check.py:117: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
B = MV('B','grade2',fct=True)
latex_check.py:118: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
C = MV('C','mv')
latex_check.py:142: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
f = MV('f','scalar',fct=True)
latex_check.py:143: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
A = MV('A','vector',fct=True)
latex_check.py:144: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
B = MV('B','grade2',fct=True)
latex_check.py:24: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
a = MV(sym_lst,'vector')
\documentclass[10pt,fleqn]{report}
\usepackage[vcentering]{geometry}
\geometry{papersize={14in,11in},total={13in,10in}}
\pagestyle{empty}
\usepackage[latin1]{inputenc}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsbsy}
\usepackage{tensor}
\usepackage{listings}
\usepackage{color}
\usepackage{xcolor}
\usepackage{bm}
\usepackage{breqn}
\definecolor{gray}{rgb}{0.95,0.95,0.95}
\setlength{\parindent}{0pt}
\DeclareMathOperator{\Tr}{Tr}
\DeclareMathOperator{\Adj}{Adj}
\newcommand{\bfrac}[2]{\displaystyle\frac{#1}{#2}}
\newcommand{\lp}{\left (}
\newcommand{\rp}{\right )}
\newcommand{\paren}[1]{\lp {#1} \rp}
\newcommand{\half}{\frac{1}{2}}
\newcommand{\llt}{\left <}
\newcommand{\rgt}{\right >}
\newcommand{\abs}[1]{\left |{#1}\right | }
\newcommand{\pdiff}[2]{\bfrac{\partial {#1}}{\partial {#2}}}
\newcommand{\lbrc}{\left \{}
\newcommand{\rbrc}{\right \}}
\newcommand{\W}{\wedge}
\newcommand{\prm}[1]{{#1}'}
\newcommand{\ddt}[1]{\bfrac{d{#1}}{dt}}
\newcommand{\R}{\dagger}
\newcommand{\deriv}[3]{\bfrac{d^{#3}#1}{d{#2}^{#3}}}
\newcommand{\grade}[1]{\left < {#1} \right >}
\newcommand{\f}[2]{{#1}\lp{#2}\rp}
\newcommand{\eval}[2]{\left . {#1} \right |_{#2}}
\newcommand{\Nabla}{\boldsymbol{\nabla}}
\newcommand{\eb}{\boldsymbol{e}}
\usepackage{float}
\floatstyle{plain} % optionally change the style of the new float
\newfloat{Code}{H}{myc}
\lstloadlanguages{Python}
\begin{document}
\begin{lstlisting}[language=Python,showspaces=false,showstringspaces=false,backgroundcolor=\color{gray},frame=single]
def basic_multivector_operations_3D():
Print_Function()
(ex,ey,ez) = MV.setup('e*x|y|z')
A = MV('A','mv')
A.Fmt(1,'A')
A.Fmt(2,'A')
A.Fmt(3,'A')
A.even().Fmt(1,'%A_{+}')
A.odd().Fmt(1,'%A_{-}')
X = MV('X','vector')
Y = MV('Y','vector')
print('g_{ij} = ',MV.metric)
X.Fmt(1,'X')
Y.Fmt(1,'Y')
(X*Y).Fmt(2,'X*Y')
(X^Y).Fmt(2,'X^Y')
(X|Y).Fmt(2,'X|Y')
return
\end{lstlisting}
Code Output:
\begin{equation*} A = A + A^{x} \boldsymbol{e}_{x} + A^{y} \boldsymbol{e}_{y} + A^{z} \boldsymbol{e}_{z} + A^{xy} \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{y} + A^{xz} \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{z} + A^{yz} \boldsymbol{e}_{y}\wedge \boldsymbol{e}_{z} + A^{xyz} \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{y}\wedge \boldsymbol{e}_{z} \end{equation*}
\begin{equation*} A = \begin{aligned}[t] & A \\ & + A^{x} \boldsymbol{e}_{x} + A^{y} \boldsymbol{e}_{y} + A^{z} \boldsymbol{e}_{z} \\ & + A^{xy} \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{y} + A^{xz} \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{z} + A^{yz} \boldsymbol{e}_{y}\wedge \boldsymbol{e}_{z} \\ & + A^{xyz} \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{y}\wedge \boldsymbol{e}_{z} \end{aligned} \end{equation*}
\begin{equation*} A = \begin{aligned}[t] & A \\ & + A^{x} \boldsymbol{e}_{x} \\ & + A^{y} \boldsymbol{e}_{y} \\ & + A^{z} \boldsymbol{e}_{z} \\ & + A^{xy} \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{y} \\ & + A^{xz} \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{z} \\ & + A^{yz} \boldsymbol{e}_{y}\wedge \boldsymbol{e}_{z} \\ & + A^{xyz} \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{y}\wedge \boldsymbol{e}_{z} \end{aligned} \end{equation*}
\begin{equation*} g_{ij} = \left[\begin{array}{ccc}\left (\boldsymbol{e}_{x}\cdot \boldsymbol{e}_{x}\right ) & \left (\boldsymbol{e}_{x}\cdot \boldsymbol{e}_{y}\right ) & \left (\boldsymbol{e}_{x}\cdot \boldsymbol{e}_{z}\right ) \\\left (\boldsymbol{e}_{x}\cdot \boldsymbol{e}_{y}\right ) & \left (\boldsymbol{e}_{y}\cdot \boldsymbol{e}_{y}\right ) & \left (\boldsymbol{e}_{y}\cdot \boldsymbol{e}_{z}\right ) \\\left (\boldsymbol{e}_{x}\cdot \boldsymbol{e}_{z}\right ) & \left (\boldsymbol{e}_{y}\cdot \boldsymbol{e}_{z}\right ) & \left (\boldsymbol{e}_{z}\cdot \boldsymbol{e}_{z}\right ) \end{array}\right] \end{equation*}
\begin{equation*} X = X^{x} \boldsymbol{e}_{x} + X^{y} \boldsymbol{e}_{y} + X^{z} \boldsymbol{e}_{z} \end{equation*}
\begin{equation*} Y = Y^{x} \boldsymbol{e}_{x} + Y^{y} \boldsymbol{e}_{y} + Y^{z} \boldsymbol{e}_{z} \end{equation*}
\begin{lstlisting}[language=Python,showspaces=false,showstringspaces=false,backgroundcolor=\color{gray},frame=single]
def basic_multivector_operations_2D():
Print_Function()
(ex,ey) = MV.setup('e*x|y')
print('g_{ij} =',MV.metric)
X = MV('X','vector')
A = MV('A','spinor')
X.Fmt(1,'X')
A.Fmt(1,'A')
(X|A).Fmt(2,'X|A')
(X<A).Fmt(2,'X<A')
(A>X).Fmt(2,'A>X')
return
\end{lstlisting}
Code Output:
\begin{equation*} g_{ij} = \left[\begin{array}{cc}\left (\boldsymbol{e}_{x}\cdot \boldsymbol{e}_{x}\right ) & \left (\boldsymbol{e}_{x}\cdot \boldsymbol{e}_{y}\right ) \\\left (\boldsymbol{e}_{x}\cdot \boldsymbol{e}_{y}\right ) & \left (\boldsymbol{e}_{y}\cdot \boldsymbol{e}_{y}\right ) \end{array}\right] \end{equation*}
\begin{equation*} X = X^{x} \boldsymbol{e}_{x} + X^{y} \boldsymbol{e}_{y} \end{equation*}
\begin{equation*} A = A + A^{xy} \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{y} \end{equation*}
\begin{lstlisting}[language=Python,showspaces=false,showstringspaces=false,backgroundcolor=\color{gray},frame=single]
def basic_multivector_operations_2D_orthogonal():
Print_Function()
(ex,ey) = MV.setup('e*x|y',metric='[1,1]')
print('g_{ii} =',MV.metric)
X = MV('X','vector')
A = MV('A','spinor')
X.Fmt(1,'X')
A.Fmt(1,'A')
(X*A).Fmt(2,'X*A')
(X|A).Fmt(2,'X|A')
(X<A).Fmt(2,'X<A')
(X>A).Fmt(2,'X>A')
(A*X).Fmt(2,'A*X')
(A|X).Fmt(2,'A|X')
(A<X).Fmt(2,'A<X')
(A>X).Fmt(2,'A>X')
return
\end{lstlisting}
Code Output:
\begin{equation*} g_{ii} = \left[\begin{array}{cc}1 & 0\\0 & 1\end{array}\right] \end{equation*}
\begin{equation*} X = X^{x} \boldsymbol{e}_{x} + X^{y} \boldsymbol{e}_{y} \end{equation*}
\begin{equation*} A = A + A^{xy} \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{y} \end{equation*}
\begin{lstlisting}[language=Python,showspaces=false,showstringspaces=false,backgroundcolor=\color{gray},frame=single]
def check_generalized_BAC_CAB_formulas():
Print_Function()
(a,b,c,d) = MV.setup('a b c d')
print('g_{ij} =',MV.metric)
print('\\bm{a|(b*c)} =',a|(b*c))
print('\\bm{a|(b^c)} =',a|(b^c))
print('\\bm{a|(b^c^d)} =',a|(b^c^d))
print('\\bm{a|(b^c)+c|(a^b)+b|(c^a)} =',(a|(b^c))+(c|(a^b))+(b|(c^a)))
print('\\bm{a*(b^c)-b*(a^c)+c*(a^b)} =',a*(b^c)-b*(a^c)+c*(a^b))
print('\\bm{a*(b^c^d)-b*(a^c^d)+c*(a^b^d)-d*(a^b^c)} =',a*(b^c^d)-b*(a^c^d)+c*(a^b^d)-d*(a^b^c))
print('\\bm{(a^b)|(c^d)} =',(a^b)|(c^d))
print('\\bm{((a^b)|c)|d} =',((a^b)|c)|d)
print('\\bm{(a^b)\\times (c^d)} =',Ga.com(a^b,c^d))
return
\end{lstlisting}
Code Output:
\begin{equation*} g_{ij} = \left[\begin{array}{cccc}\left (\boldsymbol{a}\cdot \boldsymbol{a}\right ) & \left (\boldsymbol{a}\cdot \boldsymbol{b}\right ) & \left (\boldsymbol{a}\cdot \boldsymbol{c}\right ) & \left (\boldsymbol{a}\cdot \boldsymbol{d}\right ) \\\left (\boldsymbol{a}\cdot \boldsymbol{b}\right ) & \left (\boldsymbol{b}\cdot \boldsymbol{b}\right ) & \left (\boldsymbol{b}\cdot \boldsymbol{c}\right ) & \left (\boldsymbol{b}\cdot \boldsymbol{d}\right ) \\\left (\boldsymbol{a}\cdot \boldsymbol{c}\right ) & \left (\boldsymbol{b}\cdot \boldsymbol{c}\right ) & \left (\boldsymbol{c}\cdot \boldsymbol{c}\right ) & \left (\boldsymbol{c}\cdot \boldsymbol{d}\right ) \\\left (\boldsymbol{a}\cdot \boldsymbol{d}\right ) & \left (\boldsymbol{b}\cdot \boldsymbol{d}\right ) & \left (\boldsymbol{c}\cdot \boldsymbol{d}\right ) & \left (\boldsymbol{d}\cdot \boldsymbol{d}\right ) \end{array}\right] \end{equation*}
\begin{equation*} \bm{a\cdot (b c)} = - \left (\boldsymbol{a}\cdot \boldsymbol{c}\right ) \boldsymbol{b} + \left (\boldsymbol{a}\cdot \boldsymbol{b}\right ) \boldsymbol{c} \end{equation*}
\begin{equation*} \bm{a\cdot (b\W c)} = - \left (\boldsymbol{a}\cdot \boldsymbol{c}\right ) \boldsymbol{b} + \left (\boldsymbol{a}\cdot \boldsymbol{b}\right ) \boldsymbol{c} \end{equation*}
\begin{equation*} \bm{a\cdot (b\W c\W d)} = \left (\boldsymbol{a}\cdot \boldsymbol{d}\right ) \boldsymbol{b}\wedge \boldsymbol{c} - \left (\boldsymbol{a}\cdot \boldsymbol{c}\right ) \boldsymbol{b}\wedge \boldsymbol{d} + \left (\boldsymbol{a}\cdot \boldsymbol{b}\right ) \boldsymbol{c}\wedge \boldsymbol{d} \end{equation*}
\begin{equation*} \bm{a\cdot (b\W c)+c\cdot (a\W b)+b\cdot (c\W a)} = 0 \end{equation*}
\begin{equation*} \bm{a (b\W c)-b (a\W c)+c (a\W b)} = 3 \boldsymbol{a}\wedge \boldsymbol{b}\wedge \boldsymbol{c} \end{equation*}
\begin{equation*} \bm{a (b\W c\W d)-b (a\W c\W d)+c (a\W b\W d)-d (a\W b\W c)} = 4 \boldsymbol{a}\wedge \boldsymbol{b}\wedge \boldsymbol{c}\wedge \boldsymbol{d} \end{equation*}
\begin{equation*} \bm{(a\W b)\cdot (c\W d)} = - \left (\boldsymbol{a}\cdot \boldsymbol{c}\right ) \left (\boldsymbol{b}\cdot \boldsymbol{d}\right ) + \left (\boldsymbol{a}\cdot \boldsymbol{d}\right ) \left (\boldsymbol{b}\cdot \boldsymbol{c}\right ) \end{equation*}
\begin{equation*} \bm{((a\W b)\cdot c)\cdot d} = - \left (\boldsymbol{a}\cdot \boldsymbol{c}\right ) \left (\boldsymbol{b}\cdot \boldsymbol{d}\right ) + \left (\boldsymbol{a}\cdot \boldsymbol{d}\right ) \left (\boldsymbol{b}\cdot \boldsymbol{c}\right ) \end{equation*}
\begin{equation*} \bm{(a\W b)\times (c\W d)} = - \left (\boldsymbol{b}\cdot \boldsymbol{d}\right ) \boldsymbol{a}\wedge \boldsymbol{c} + \left (\boldsymbol{b}\cdot \boldsymbol{c}\right ) \boldsymbol{a}\wedge \boldsymbol{d} + \left (\boldsymbol{a}\cdot \boldsymbol{d}\right ) \boldsymbol{b}\wedge \boldsymbol{c} - \left (\boldsymbol{a}\cdot \boldsymbol{c}\right ) \boldsymbol{b}\wedge \boldsymbol{d} \end{equation*}
\begin{lstlisting}[language=Python,showspaces=false,showstringspaces=false,backgroundcolor=\color{gray},frame=single]
def rounding_numerical_components():
Print_Function()
(ex,ey,ez) = MV.setup('e_x e_y e_z',metric='[1,1,1]')
X = 1.2*ex+2.34*ey+0.555*ez
Y = 0.333*ex+4*ey+5.3*ez
print('X =',X)
print('Nga(X,2) =',Nga(X,2))
print('X*Y =',X*Y)
print('Nga(X*Y,2) =',Nga(X*Y,2))
return
\end{lstlisting}
Code Output:
\begin{equation*} X = 1.2 \boldsymbol{e}_{x} + 2.34 \boldsymbol{e}_{y} + 0.555 \boldsymbol{e}_{z} \end{equation*}
\begin{equation*} Nga(X,2) = 1.2 \boldsymbol{e}_{x} + 2.3 \boldsymbol{e}_{y} + 0.55 \boldsymbol{e}_{z} \end{equation*}
\begin{equation*} X Y = 12.7011 + 4.02078 \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{y} + 6.175185 \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{z} + 10.182 \boldsymbol{e}_{y}\wedge \boldsymbol{e}_{z} \end{equation*}
\begin{equation*} Nga(X Y,2) = 13.0 + 4.0 \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{y} + 6.2 \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{z} + 10.0 \boldsymbol{e}_{y}\wedge \boldsymbol{e}_{z} \end{equation*}
\begin{lstlisting}[language=Python,showspaces=false,showstringspaces=false,backgroundcolor=\color{gray},frame=single]
def derivatives_in_rectangular_coordinates():
Print_Function()
X = (x,y,z) = symbols('x y z')
(ex,ey,ez,grad) = MV.setup('e_x e_y e_z',metric='[1,1,1]',coords=X)
f = MV('f','scalar',fct=True)
A = MV('A','vector',fct=True)
B = MV('B','grade2',fct=True)
C = MV('C','mv')
print('f =',f)
print('A =',A)
print('B =',B)
print('C =',C)
print('grad*f =',grad*f)
print('grad|A =',grad|A)
print('grad*A =',grad*A)
print(-MV.I)
print('-I*(grad^A) =',-MV.I*(grad^A))
print('grad*B =',grad*B)
print('grad^B =',grad^B)
print('grad|B =',grad|B)
return
\end{lstlisting}
Code Output:
\begin{equation*} f = f \end{equation*}
\begin{equation*} A = A^{x} \boldsymbol{e}_{x} + A^{y} \boldsymbol{e}_{y} + A^{z} \boldsymbol{e}_{z} \end{equation*}
\begin{equation*} B = B^{xy} \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{y} + B^{xz} \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{z} + B^{yz} \boldsymbol{e}_{y}\wedge \boldsymbol{e}_{z} \end{equation*}
\begin{equation*} C = C + C^{x} \boldsymbol{e}_{x} + C^{y} \boldsymbol{e}_{y} + C^{z} \boldsymbol{e}_{z} + C^{xy} \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{y} + C^{xz} \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{z} + C^{yz} \boldsymbol{e}_{y}\wedge \boldsymbol{e}_{z} + C^{xyz} \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{y}\wedge \boldsymbol{e}_{z} \end{equation*}
\begin{equation*} \boldsymbol{\nabla} f = \partial_{x} f \boldsymbol{e}_{x} + \partial_{y} f \boldsymbol{e}_{y} + \partial_{z} f \boldsymbol{e}_{z} \end{equation*}
\begin{equation*} \boldsymbol{\nabla} \cdot A = \partial_{x} A^{x} + \partial_{y} A^{y} + \partial_{z} A^{z} \end{equation*}
\begin{equation*} \boldsymbol{\nabla} A = \left ( \partial_{x} A^{x} + \partial_{y} A^{y} + \partial_{z} A^{z} \right ) + \left ( - \partial_{y} A^{x} + \partial_{x} A^{y} \right ) \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{y} + \left ( - \partial_{z} A^{x} + \partial_{x} A^{z} \right ) \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{z} + \left ( - \partial_{z} A^{y} + \partial_{y} A^{z} \right ) \boldsymbol{e}_{y}\wedge \boldsymbol{e}_{z} \end{equation*}
\begin{equation*} - \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{y}\wedge \boldsymbol{e}_{z} \end{equation*}
\begin{equation*} -I (\boldsymbol{\nabla} \W A) = \left ( - \partial_{z} A^{y} + \partial_{y} A^{z} \right ) \boldsymbol{e}_{x} + \left ( \partial_{z} A^{x} - \partial_{x} A^{z} \right ) \boldsymbol{e}_{y} + \left ( - \partial_{y} A^{x} + \partial_{x} A^{y} \right ) \boldsymbol{e}_{z} \end{equation*}
\begin{equation*} \boldsymbol{\nabla} B = \left ( - \partial_{y} B^{xy} - \partial_{z} B^{xz} \right ) \boldsymbol{e}_{x} + \left ( \partial_{x} B^{xy} - \partial_{z} B^{yz} \right ) \boldsymbol{e}_{y} + \left ( \partial_{x} B^{xz} + \partial_{y} B^{yz} \right ) \boldsymbol{e}_{z} + \left ( \partial_{z} B^{xy} - \partial_{y} B^{xz} + \partial_{x} B^{yz} \right ) \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{y}\wedge \boldsymbol{e}_{z} \end{equation*}
\begin{equation*} \boldsymbol{\nabla} \W B = \left ( \partial_{z} B^{xy} - \partial_{y} B^{xz} + \partial_{x} B^{yz} \right ) \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{y}\wedge \boldsymbol{e}_{z} \end{equation*}
\begin{equation*} \boldsymbol{\nabla} \cdot B = \left ( - \partial_{y} B^{xy} - \partial_{z} B^{xz} \right ) \boldsymbol{e}_{x} + \left ( \partial_{x} B^{xy} - \partial_{z} B^{yz} \right ) \boldsymbol{e}_{y} + \left ( \partial_{x} B^{xz} + \partial_{y} B^{yz} \right ) \boldsymbol{e}_{z} \end{equation*}
\begin{lstlisting}[language=Python,showspaces=false,showstringspaces=false,backgroundcolor=\color{gray},frame=single]
def derivatives_in_spherical_coordinates():
Print_Function()
X = (r,th,phi) = symbols('r theta phi')
curv = [[r*cos(phi)*sin(th),r*sin(phi)*sin(th),r*cos(th)],[1,r,r*sin(th)]]
(er,eth,ephi,grad) = MV.setup('e_r e_theta e_phi',metric='[1,1,1]',coords=X,curv=curv)
f = MV('f','scalar',fct=True)
A = MV('A','vector',fct=True)
B = MV('B','grade2',fct=True)
print('f =',f)
print('A =',A)
print('B =',B)
print('grad*f =',grad*f)
print('grad|A =',grad|A)
print('-I*(grad^A) =',(-MV.I*(grad^A)).simplify())
print('grad^B =',grad^B)
\end{lstlisting}
Code Output:
\begin{equation*} f = f \end{equation*}
\begin{equation*} A = A^{r} \boldsymbol{e}_{r} + A^{\theta } \boldsymbol{e}_{\theta } + A^{\phi } \boldsymbol{e}_{\phi } \end{equation*}
\begin{equation*} B = B^{r\theta } \boldsymbol{e}_{r}\wedge \boldsymbol{e}_{\theta } + B^{r\phi } \boldsymbol{e}_{r}\wedge \boldsymbol{e}_{\phi } + B^{\theta \phi } \boldsymbol{e}_{\theta }\wedge \boldsymbol{e}_{\phi } \end{equation*}
\begin{equation*} \boldsymbol{\nabla} f = \partial_{r} f \boldsymbol{e}_{r} + \frac{\partial_{\theta } f }{r^{2}} \boldsymbol{e}_{\theta } + \frac{\partial_{\phi } f }{r^{2} {\sin{\left (\theta \right )}}^{2}} \boldsymbol{e}_{\phi } \end{equation*}
\begin{equation*} \boldsymbol{\nabla} \cdot A = \frac{A^{\theta } }{\tan{\left (\theta \right )}} + \partial_{\phi } A^{\phi } + \partial_{r} A^{r} + \partial_{\theta } A^{\theta } + \frac{2 A^{r} }{r} \end{equation*}
\begin{equation*} -I (\boldsymbol{\nabla} \W A) = \frac{\sqrt{r^{4} {\sin{\left (\theta \right )}}^{2}} \left(\frac{2 A^{\phi } }{\tan{\left (\theta \right )}} + \partial_{\theta } A^{\phi } - \frac{\partial_{\phi } A^{\theta } }{{\sin{\left (\theta \right )}}^{2}}\right)}{r^{2}} \boldsymbol{e}_{r} + \frac{- r^{2} {\sin{\left (\theta \right )}}^{2} \partial_{r} A^{\phi } - 2 r A^{\phi } {\sin{\left (\theta \right )}}^{2} + \partial_{\phi } A^{r} }{\sqrt{r^{4} {\sin{\left (\theta \right )}}^{2}}} \boldsymbol{e}_{\theta } + \frac{r^{2} \partial_{r} A^{\theta } + 2 r A^{\theta } - \partial_{\theta } A^{r} }{\sqrt{r^{4} {\sin{\left (\theta \right )}}^{2}}} \boldsymbol{e}_{\phi } \end{equation*}
\begin{equation*} \boldsymbol{\nabla} \W B = \frac{r^{2} \partial_{r} B^{\theta \phi } + 4 r B^{\theta \phi } - \frac{2 B^{r\phi } }{\tan{\left (\theta \right )}} - \partial_{\theta } B^{r\phi } + \frac{\partial_{\phi } B^{r\theta } }{{\sin{\left (\theta \right )}}^{2}}}{r^{2}} \boldsymbol{e}_{r}\wedge \boldsymbol{e}_{\theta }\wedge \boldsymbol{e}_{\phi } \end{equation*}
\begin{lstlisting}[language=Python,showspaces=false,showstringspaces=false,backgroundcolor=\color{gray},frame=single]
def conformal_representations_of_circles_lines_spheres_and_planes():
Print_Function()
global n,nbar
metric = '1 0 0 0 0,0 1 0 0 0,0 0 1 0 0,0 0 0 0 2,0 0 0 2 0'
(e1,e2,e3,n,nbar) = MV.setup('e_1 e_2 e_3 n \\bar{n}',metric)
print('g_{ij} =',MV.metric)
e = n+nbar
#conformal representation of points
A = make_vector(e1) # point a = (1,0,0) A = F(a)
B = make_vector(e2) # point b = (0,1,0) B = F(b)
C = make_vector(-e1) # point c = (-1,0,0) C = F(c)
D = make_vector(e3) # point d = (0,0,1) D = F(d)
X = make_vector('x',3)
print('F(a) =',A)
print('F(b) =',B)
print('F(c) =',C)
print('F(d) =',D)
print('F(x) =',X)
print('#a = e1, b = e2, c = -e1, and d = e3')
print('#A = F(a) = 1/2*(a*a*n+2*a-nbar), etc.')
print('#Circle through a, b, and c')
print('Circle: A^B^C^X = 0 =',(A^B^C^X))
print('#Line through a and b')
print('Line : A^B^n^X = 0 =',(A^B^n^X))
print('#Sphere through a, b, c, and d')
print('Sphere: A^B^C^D^X = 0 =',(((A^B)^C)^D)^X)
print('#Plane through a, b, and d')
print('Plane : A^B^n^D^X = 0 =',(A^B^n^D^X))
L = (A^B^e)^X
L.Fmt(3,'Hyperbolic\\;\\; Circle: (A^B^e)^X = 0')
return
\end{lstlisting}
Code Output:
\begin{equation*} g_{ij} = \left[\begin{array}{ccccc}1 & 0 & 0 & 0 & 0\\0 & 1 & 0 & 0 & 0\\0 & 0 & 1 & 0 & 0\\0 & 0 & 0 & 0 & 2\\0 & 0 & 0 & 2 & 0\end{array}\right] \end{equation*}
\begin{equation*} F(a) = \boldsymbol{e}_{1} + \frac{1}{2} \boldsymbol{n} - \frac{1}{2} \boldsymbol{\bar{n}} \end{equation*}
\begin{equation*} F(b) = \boldsymbol{e}_{2} + \frac{1}{2} \boldsymbol{n} - \frac{1}{2} \boldsymbol{\bar{n}} \end{equation*}
\begin{equation*} F(c) = - \boldsymbol{e}_{1} + \frac{1}{2} \boldsymbol{n} - \frac{1}{2} \boldsymbol{\bar{n}} \end{equation*}
\begin{equation*} F(d) = \boldsymbol{e}_{3} + \frac{1}{2} \boldsymbol{n} - \frac{1}{2} \boldsymbol{\bar{n}} \end{equation*}
\begin{equation*} F(x) = x_{1} \boldsymbol{e}_{1} + x_{2} \boldsymbol{e}_{2} + x_{3} \boldsymbol{e}_{3} + \left ( \frac{{\left ( x_{1} \right )}^{2}}{2} + \frac{{\left ( x_{2} \right )}^{2}}{2} + \frac{{\left ( x_{3} \right )}^{2}}{2}\right ) \boldsymbol{n} - \frac{1}{2} \boldsymbol{\bar{n}} \end{equation*}
a = e1, b = e2, c = -e1, and d = e3
A = F(a) = 1/2*(a*a*n+2*a-nbar), etc.
Circle through a, b, and c
\begin{equation*} Circle: A\W B\W C\W X = 0 = - x_{3} \boldsymbol{e}_{1}\wedge \boldsymbol{e}_{2}\wedge \boldsymbol{e}_{3}\wedge \boldsymbol{n} + x_{3} \boldsymbol{e}_{1}\wedge \boldsymbol{e}_{2}\wedge \boldsymbol{e}_{3}\wedge \boldsymbol{\bar{n}} + \left ( \frac{{\left ( x_{1} \right )}^{2}}{2} + \frac{{\left ( x_{2} \right )}^{2}}{2} + \frac{{\left ( x_{3} \right )}^{2}}{2} - \frac{1}{2}\right ) \boldsymbol{e}_{1}\wedge \boldsymbol{e}_{2}\wedge \boldsymbol{n}\wedge \boldsymbol{\bar{n}} \end{equation*}
Line through a and b
\begin{equation*} Line : A\W B\W n\W X = 0 = - x_{3} \boldsymbol{e}_{1}\wedge \boldsymbol{e}_{2}\wedge \boldsymbol{e}_{3}\wedge \boldsymbol{n} + \left ( \frac{x_{1}}{2} + \frac{x_{2}}{2} - \frac{1}{2}\right ) \boldsymbol{e}_{1}\wedge \boldsymbol{e}_{2}\wedge \boldsymbol{n}\wedge \boldsymbol{\bar{n}} + \frac{x_{3}}{2} \boldsymbol{e}_{1}\wedge \boldsymbol{e}_{3}\wedge \boldsymbol{n}\wedge \boldsymbol{\bar{n}} - \frac{x_{3}}{2} \boldsymbol{e}_{2}\wedge \boldsymbol{e}_{3}\wedge \boldsymbol{n}\wedge \boldsymbol{\bar{n}} \end{equation*}
Sphere through a, b, c, and d
\begin{equation*} Sphere: A\W B\W C\W D\W X = 0 = \left ( - \frac{{\left ( x_{1} \right )}^{2}}{2} - \frac{{\left ( x_{2} \right )}^{2}}{2} - \frac{{\left ( x_{3} \right )}^{2}}{2} + \frac{1}{2}\right ) \boldsymbol{e}_{1}\wedge \boldsymbol{e}_{2}\wedge \boldsymbol{e}_{3}\wedge \boldsymbol{n}\wedge \boldsymbol{\bar{n}} \end{equation*}
Plane through a, b, and d
\begin{equation*} Plane : A\W B\W n\W D\W X = 0 = \left ( - \frac{x_{1}}{2} - \frac{x_{2}}{2} - \frac{x_{3}}{2} + \frac{1}{2}\right ) \boldsymbol{e}_{1}\wedge \boldsymbol{e}_{2}\wedge \boldsymbol{e}_{3}\wedge \boldsymbol{n}\wedge \boldsymbol{\bar{n}} \end{equation*}
\begin{lstlisting}[language=Python,showspaces=false,showstringspaces=false,backgroundcolor=\color{gray},frame=single]
def properties_of_geometric_objects():
global n,nbar
Print_Function()
metric = '# # # 0 0,'+ \
'# # # 0 0,'+ \
'# # # 0 0,'+ \
'0 0 0 0 2,'+ \
'0 0 0 2 0'
(p1,p2,p3,n,nbar) = MV.setup('p1 p2 p3 n \\bar{n}',metric)
print('g_{ij} =',MV.metric)
P1 = F(p1)
P2 = F(p2)
P3 = F(p3)
print('#%\\text{Extracting direction of line from }L = P1\\W P2\\W n')
L = P1^P2^n
delta = (L|n)|nbar
print('(L|n)|\\bar{n} =',delta)
print('#%\\text{Extracting plane of circle from }C = P1\\W P2\\W P3')
C = P1^P2^P3
delta = ((C^n)|n)|nbar
print('((C^n)|n)|\\bar{n}=',delta)
print('(p2-p1)^(p3-p1)=',(p2-p1)^(p3-p1))
return
\end{lstlisting}
Code Output:
\begin{equation*} g_{ij} = \left[\begin{array}{ccccc}\left (\boldsymbol{p}_{1}\cdot \boldsymbol{p}_{1}\right ) & \left (\boldsymbol{p}_{1}\cdot \boldsymbol{p}_{2}\right ) & \left (\boldsymbol{p}_{1}\cdot \boldsymbol{p}_{3}\right ) & 0 & 0\\\left (\boldsymbol{p}_{1}\cdot \boldsymbol{p}_{2}\right ) & \left (\boldsymbol{p}_{2}\cdot \boldsymbol{p}_{2}\right ) & \left (\boldsymbol{p}_{2}\cdot \boldsymbol{p}_{3}\right ) & 0 & 0\\\left (\boldsymbol{p}_{1}\cdot \boldsymbol{p}_{3}\right ) & \left (\boldsymbol{p}_{2}\cdot \boldsymbol{p}_{3}\right ) & \left (\boldsymbol{p}_{3}\cdot \boldsymbol{p}_{3}\right ) & 0 & 0\\0 & 0 & 0 & 0 & 2\\0 & 0 & 0 & 2 & 0\end{array}\right] \end{equation*}
\begin{equation*} \text{Extracting direction of line from }L = P1\W P2\W n \end{equation*}
\begin{equation*} (L\cdot n)\cdot \bar{n} = 2 \boldsymbol{p}_{1} -2 \boldsymbol{p}_{2} \end{equation*}
\begin{equation*} \text{Extracting plane of circle from }C = P1\W P2\W P3 \end{equation*}
\begin{equation*} ((C\W n)\cdot n)\cdot \bar{n}= 2 \boldsymbol{p}_{1}\wedge \boldsymbol{p}_{2} -2 \boldsymbol{p}_{1}\wedge \boldsymbol{p}_{3} + 2 \boldsymbol{p}_{2}\wedge \boldsymbol{p}_{3} \end{equation*}
\begin{equation*} (p2-p1)\W (p3-p1)= \boldsymbol{p}_{1}\wedge \boldsymbol{p}_{2} - \boldsymbol{p}_{1}\wedge \boldsymbol{p}_{3} + \boldsymbol{p}_{2}\wedge \boldsymbol{p}_{3} \end{equation*}
\begin{lstlisting}[language=Python,showspaces=false,showstringspaces=false,backgroundcolor=\color{gray},frame=single]
def extracting_vectors_from_conformal_2_blade():
Print_Function()
print(r'B = P1\W P2')
metric = '0 -1 #,'+ \
'-1 0 #,'+ \
'# # #'
(P1,P2,a) = MV.setup('P1 P2 a',metric)
print('g_{ij} =',MV.metric)
B = P1^P2
Bsq = B*B
print('%B^{2} =',Bsq)
ap = a-(a^B)*B
print("a' = a-(a^B)*B =",ap)
Ap = ap+ap*B
Am = ap-ap*B
print("A+ = a'+a'*B =",Ap)
print("A- = a'-a'*B =",Am)
print('%(A+)^{2} =',Ap*Ap)
print('%(A-)^{2} =',Am*Am)
aB = a|B
print('a|B =',aB)
return
\end{lstlisting}
Code Output:
\begin{equation*} B = P1\W P2 \end{equation*}
\begin{equation*} g_{ij} = \left[\begin{array}{ccc}0 & -1 & \left (\boldsymbol{P}_{1}\cdot \boldsymbol{a}\right ) \\-1 & 0 & \left (\boldsymbol{P}_{2}\cdot \boldsymbol{a}\right ) \\\left (\boldsymbol{P}_{1}\cdot \boldsymbol{a}\right ) & \left (\boldsymbol{P}_{2}\cdot \boldsymbol{a}\right ) & \left (\boldsymbol{a}\cdot \boldsymbol{a}\right ) \end{array}\right] \end{equation*}
\begin{equation*} B^{2} = 1 \end{equation*}
\begin{equation*} a' = a-(a\W B) B = - \left (\boldsymbol{P}_{2}\cdot \boldsymbol{a}\right ) \boldsymbol{P}_{1} - \left (\boldsymbol{P}_{1}\cdot \boldsymbol{a}\right ) \boldsymbol{P}_{2} \end{equation*}
\begin{equation*} A+ = a'+a' B = - 2 \left (\boldsymbol{P}_{2}\cdot \boldsymbol{a}\right ) \boldsymbol{P}_{1} \end{equation*}
\begin{equation*} A- = a'-a' B = - 2 \left (\boldsymbol{P}_{1}\cdot \boldsymbol{a}\right ) \boldsymbol{P}_{2} \end{equation*}
\begin{equation*} (A+)^{2} = 0 \end{equation*}
\begin{equation*} (A-)^{2} = 0 \end{equation*}
\begin{equation*} a\cdot B = - \left (\boldsymbol{P}_{2}\cdot \boldsymbol{a}\right ) \boldsymbol{P}_{1} + \left (\boldsymbol{P}_{1}\cdot \boldsymbol{a}\right ) \boldsymbol{P}_{2} \end{equation*}
\begin{lstlisting}[language=Python,showspaces=false,showstringspaces=false,backgroundcolor=\color{gray},frame=single]
def reciprocal_frame_test():
Print_Function()
metric = '1 # #,'+ \
'# 1 #,'+ \
'# # 1'
(e1,e2,e3) = MV.setup('e1 e2 e3',metric)
print('g_{ij} =',MV.metric)
E = e1^e2^e3
Esq = (E*E).scalar()
print('E =',E)
print('%E^{2} =',Esq)
Esq_inv = 1/Esq
E1 = (e2^e3)*E
E2 = (-1)*(e1^e3)*E
E3 = (e1^e2)*E
print('E1 = (e2^e3)*E =',E1)
print('E2 =-(e1^e3)*E =',E2)
print('E3 = (e1^e2)*E =',E3)
w = (E1|e2)
w = w.expand()
print('E1|e2 =',w)
w = (E1|e3)
w = w.expand()
print('E1|e3 =',w)
w = (E2|e1)
w = w.expand()
print('E2|e1 =',w)
w = (E2|e3)
w = w.expand()
print('E2|e3 =',w)
w = (E3|e1)
w = w.expand()
print('E3|e1 =',w)
w = (E3|e2)
w = w.expand()
print('E3|e2 =',w)
w = (E1|e1)
w = (w.expand()).scalar()
Esq = expand(Esq)
print('%(E1\\cdot e1)/E^{2} =',simplify(w/Esq))
w = (E2|e2)
w = (w.expand()).scalar()
print('%(E2\\cdot e2)/E^{2} =',simplify(w/Esq))
w = (E3|e3)
w = (w.expand()).scalar()
print('%(E3\\cdot e3)/E^{2} =',simplify(w/Esq))
return
\end{lstlisting}
Code Output:
\begin{equation*} g_{ij} = \left[\begin{array}{ccc}1 & \left (\boldsymbol{e}_{1}\cdot \boldsymbol{e}_{2}\right ) & \left (\boldsymbol{e}_{1}\cdot \boldsymbol{e}_{3}\right ) \\\left (\boldsymbol{e}_{1}\cdot \boldsymbol{e}_{2}\right ) & 1 & \left (\boldsymbol{e}_{2}\cdot \boldsymbol{e}_{3}\right ) \\\left (\boldsymbol{e}_{1}\cdot \boldsymbol{e}_{3}\right ) & \left (\boldsymbol{e}_{2}\cdot \boldsymbol{e}_{3}\right ) & 1\end{array}\right] \end{equation*}
\begin{equation*} E = \boldsymbol{e}_{1}\wedge \boldsymbol{e}_{2}\wedge \boldsymbol{e}_{3} \end{equation*}
\begin{equation*} E^{2} = \left (\boldsymbol{e}_{1}\cdot \boldsymbol{e}_{2}\right ) ^{2} - 2 \left (\boldsymbol{e}_{1}\cdot \boldsymbol{e}_{2}\right ) \left (\boldsymbol{e}_{1}\cdot \boldsymbol{e}_{3}\right ) \left (\boldsymbol{e}_{2}\cdot \boldsymbol{e}_{3}\right ) + \left (\boldsymbol{e}_{1}\cdot \boldsymbol{e}_{3}\right ) ^{2} + \left (\boldsymbol{e}_{2}\cdot \boldsymbol{e}_{3}\right ) ^{2} - 1 \end{equation*}
\begin{equation*} E1 = (e2\W e3) E = \left ( \left (\boldsymbol{e}_{2}\cdot \boldsymbol{e}_{3}\right ) ^{2} - 1\right ) \boldsymbol{e}_{1} + \left ( \left (\boldsymbol{e}_{1}\cdot \boldsymbol{e}_{2}\right ) - \left (\boldsymbol{e}_{1}\cdot \boldsymbol{e}_{3}\right ) \left (\boldsymbol{e}_{2}\cdot \boldsymbol{e}_{3}\right ) \right ) \boldsymbol{e}_{2} + \left ( - \left (\boldsymbol{e}_{1}\cdot \boldsymbol{e}_{2}\right ) \left (\boldsymbol{e}_{2}\cdot \boldsymbol{e}_{3}\right ) + \left (\boldsymbol{e}_{1}\cdot \boldsymbol{e}_{3}\right ) \right ) \boldsymbol{e}_{3} \end{equation*}
\begin{equation*} E2 =-(e1\W e3) E = \left ( \left (\boldsymbol{e}_{1}\cdot \boldsymbol{e}_{2}\right ) - \left (\boldsymbol{e}_{1}\cdot \boldsymbol{e}_{3}\right ) \left (\boldsymbol{e}_{2}\cdot \boldsymbol{e}_{3}\right ) \right ) \boldsymbol{e}_{1} + \left ( \left (\boldsymbol{e}_{1}\cdot \boldsymbol{e}_{3}\right ) ^{2} - 1\right ) \boldsymbol{e}_{2} + \left ( - \left (\boldsymbol{e}_{1}\cdot \boldsymbol{e}_{2}\right ) \left (\boldsymbol{e}_{1}\cdot \boldsymbol{e}_{3}\right ) + \left (\boldsymbol{e}_{2}\cdot \boldsymbol{e}_{3}\right ) \right ) \boldsymbol{e}_{3} \end{equation*}
\begin{equation*} E3 = (e1\W e2) E = \left ( - \left (\boldsymbol{e}_{1}\cdot \boldsymbol{e}_{2}\right ) \left (\boldsymbol{e}_{2}\cdot \boldsymbol{e}_{3}\right ) + \left (\boldsymbol{e}_{1}\cdot \boldsymbol{e}_{3}\right ) \right ) \boldsymbol{e}_{1} + \left ( - \left (\boldsymbol{e}_{1}\cdot \boldsymbol{e}_{2}\right ) \left (\boldsymbol{e}_{1}\cdot \boldsymbol{e}_{3}\right ) + \left (\boldsymbol{e}_{2}\cdot \boldsymbol{e}_{3}\right ) \right ) \boldsymbol{e}_{2} + \left ( \left (\boldsymbol{e}_{1}\cdot \boldsymbol{e}_{2}\right ) ^{2} - 1\right ) \boldsymbol{e}_{3} \end{equation*}
\begin{equation*} E1\cdot e2 = 0 \end{equation*}
\begin{equation*} E1\cdot e3 = 0 \end{equation*}
\begin{equation*} E2\cdot e1 = 0 \end{equation*}
\begin{equation*} E2\cdot e3 = 0 \end{equation*}
\begin{equation*} E3\cdot e1 = 0 \end{equation*}
\begin{equation*} E3\cdot e2 = 0 \end{equation*}
\begin{equation*} (E1\cdot e1)/E^{2} = 1 \end{equation*}
\begin{equation*} (E2\cdot e2)/E^{2} = 1 \end{equation*}
\begin{equation*} (E3\cdot e3)/E^{2} = 1 \end{equation*}
\end{document}
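A reading aid for the `reciprocal_frame_test` output above: up to the common factor $E^2$, the vectors $E_1, E_2, E_3$ are the reciprocal frame of $\{e_1, e_2, e_3\}$. Setting $e^i = E_i/E^2$, the printed relations are exactly

$$e^i\cdot e_j = \delta^i{}_j.$$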
```python
check_latex('matrix_latex')
```
\documentclass[10pt,fleqn]{report}
\usepackage[vcentering]{geometry}
\geometry{papersize={14in,11in},total={13in,10in}}
\pagestyle{empty}
\usepackage[latin1]{inputenc}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsbsy}
\usepackage{tensor}
\usepackage{listings}
\usepackage{color}
\usepackage{xcolor}
\usepackage{bm}
\usepackage{breqn}
\definecolor{gray}{rgb}{0.95,0.95,0.95}
\setlength{\parindent}{0pt}
\DeclareMathOperator{\Tr}{Tr}
\DeclareMathOperator{\Adj}{Adj}
\newcommand{\bfrac}[2]{\displaystyle\frac{#1}{#2}}
\newcommand{\lp}{\left (}
\newcommand{\rp}{\right )}
\newcommand{\paren}[1]{\lp {#1} \rp}
\newcommand{\half}{\frac{1}{2}}
\newcommand{\llt}{\left <}
\newcommand{\rgt}{\right >}
\newcommand{\abs}[1]{\left |{#1}\right | }
\newcommand{\pdiff}[2]{\bfrac{\partial {#1}}{\partial {#2}}}
\newcommand{\lbrc}{\left \{}
\newcommand{\rbrc}{\right \}}
\newcommand{\W}{\wedge}
\newcommand{\prm}[1]{{#1}'}
\newcommand{\ddt}[1]{\bfrac{d{#1}}{dt}}
\newcommand{\R}{\dagger}
\newcommand{\deriv}[3]{\bfrac{d^{#3}#1}{d{#2}^{#3}}}
\newcommand{\grade}[1]{\left < {#1} \right >}
\newcommand{\f}[2]{{#1}\lp{#2}\rp}
\newcommand{\eval}[2]{\left . {#1} \right |_{#2}}
\newcommand{\Nabla}{\boldsymbol{\nabla}}
\newcommand{\eb}{\boldsymbol{e}}
\usepackage{float}
\floatstyle{plain} % optionally change the style of the new float
\newfloat{Code}{H}{myc}
\lstloadlanguages{Python}
\begin{document}
\begin{equation*} \left[\begin{array}{cc}1 & 2\\3 & 4\end{array}\right] \left[\begin{array}{c}5\\6\end{array}\right] = \left[\begin{array}{c}17\\39\end{array}\right] \end{equation*}
\begin{equation*} \left[\begin{array}{cc}x^{3} & y^{3}\end{array}\right] \left[\begin{array}{cc}x^{2} & 2 x y\\2 x y & y^{2}\end{array}\right] = \left[\begin{array}{cc}x^{5} + 2 x y^{4} & 2 x^{4} y + y^{5}\end{array}\right] \end{equation*}
\end{document}
```python
run('mv_setup_options')
```
v__1*e_1 + v__2*e_2 + v__3*e_3
v__1*e_1 + v__2*e_2 + v__3*e_3
v__x*e_x + v__y*e_y + v__z*e_z
v__x*e_x + v__y*e_y + v__z*e_z
mv_setup_options.py:3: DeprecationWarning: The `galgebra.deprecated` module is deprecated
from galgebra.deprecated import MV
mv_setup_options.py:8: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
v = MV('v', 'vector')
mv_setup_options.py:12: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
v = MV('v', 'vector')
mv_setup_options.py:16: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
v = MV('v', 'vector')
mv_setup_options.py:21: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
v = MV('v', 'vector')
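The deprecation warnings above come from the old `galgebra.deprecated.MV` interface. As a rough guide only (a minimal sketch assuming a recent galgebra release where `galgebra.ga.Ga` and `galgebra.mv.Mv` are available; the `Ga` constructor arguments are an assumption, check the galgebra docs for exact signatures), the same vector can be declared through an algebra object instead of the module-level `MV.setup`/`MV` calls:

```python
# Minimal sketch of the non-deprecated galgebra API (assumed, not taken from this output).
from sympy import symbols
from galgebra.ga import Ga

# Declare a 3D Euclidean algebra; coords are only needed for field/derivative work.
xyz = symbols('x y z', real=True)
o3d = Ga('e_x e_y e_z', g=[1, 1, 1], coords=xyz)

# Modern replacement for MV('v', 'vector'):
v = o3d.mv('v', 'vector')
print(v)  # expected to print something like v__x*e_x + v__y*e_y + v__z*e_z
```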
```python
check_latex('physics_check_latex')
```
physics_check_latex.py:6: DeprecationWarning: The `galgebra.deprecated` module is deprecated
from galgebra.deprecated import MV
physics_check_latex.py:14: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
B = MV('B','vector',fct=True)
physics_check_latex.py:15: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
E = MV('E','vector',fct=True)
physics_check_latex.py:20: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
J = MV('J','vector',fct=True)
physics_check_latex.py:48: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
psi = MV('psi','spinor',fct=True)
physics_check_latex.py:49: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
A = MV('A','vector',fct=True)
(LaTeX preamble identical to the one shown above omitted.)
\begin{document}
\begin{lstlisting}[language=Python,showspaces=false,showstringspaces=false,backgroundcolor=\color{gray},frame=single]
def Maxwells_Equations_in_Geometric_Calculus():
Print_Function()
X = symbols('t x y z')
(g0,g1,g2,g3,grad) = MV.setup('gamma*t|x|y|z',metric='[1,-1,-1,-1]',coords=X)
I = MV.I
B = MV('B','vector',fct=True)
E = MV('E','vector',fct=True)
B.set_coef(1,0,0)
E.set_coef(1,0,0)
B *= g0
E *= g0
J = MV('J','vector',fct=True)
F = E+I*B
print(r'\text{Pseudo Scalar\;\;}I =',I)
print('\\text{Magnetic Field Bi-Vector\\;\\;} B = \\bm{B\\gamma_{t}} =',B)
print('\\text{Electric Field Bi-Vector\\;\\;} E = \\bm{E\\gamma_{t}} =',E)
print('\\text{Electromagnetic Field Bi-Vector\\;\\;} F = E+IB =',F)
print('%\\text{Four Current Density\\;\\;} J =',J)
gradF = grad*F
    print('#Geometric Derivative of Electromagnetic Field Bi-Vector')
gradF.Fmt(3,'grad*F')
print('#Maxwell Equations')
print('grad*F = J')
print('#Div $E$ and Curl $H$ Equations')
(gradF.grade(1)-J).Fmt(3,'%\\grade{\\nabla F}_{1} -J = 0')
print('#Curl $E$ and Div $B$ equations')
(gradF.grade(3)).Fmt(3,'%\\grade{\\nabla F}_{3} = 0')
return
\end{lstlisting}
Code Output:
\begin{equation*} \text{Pseudo Scalar\;\;}I = \boldsymbol{\gamma }_{t}\wedge \boldsymbol{\gamma }_{x}\wedge \boldsymbol{\gamma }_{y}\wedge \boldsymbol{\gamma }_{z} \end{equation*}
\begin{equation*} \text{Magnetic Field Bi-Vector\;\;} B = \bm{B\gamma_{t}} = - B^{x} \boldsymbol{\gamma }_{t}\wedge \boldsymbol{\gamma }_{x} - B^{y} \boldsymbol{\gamma }_{t}\wedge \boldsymbol{\gamma }_{y} - B^{z} \boldsymbol{\gamma }_{t}\wedge \boldsymbol{\gamma }_{z} \end{equation*}
\begin{equation*} \text{Electric Field Bi-Vector\;\;} E = \bm{E\gamma_{t}} = - E^{x} \boldsymbol{\gamma }_{t}\wedge \boldsymbol{\gamma }_{x} - E^{y} \boldsymbol{\gamma }_{t}\wedge \boldsymbol{\gamma }_{y} - E^{z} \boldsymbol{\gamma }_{t}\wedge \boldsymbol{\gamma }_{z} \end{equation*}
\begin{equation*} \text{Electromagnetic Field Bi-Vector\;\;} F = E+IB = - E^{x} \boldsymbol{\gamma }_{t}\wedge \boldsymbol{\gamma }_{x} - E^{y} \boldsymbol{\gamma }_{t}\wedge \boldsymbol{\gamma }_{y} - E^{z} \boldsymbol{\gamma }_{t}\wedge \boldsymbol{\gamma }_{z} - B^{z} \boldsymbol{\gamma }_{x}\wedge \boldsymbol{\gamma }_{y} + B^{y} \boldsymbol{\gamma }_{x}\wedge \boldsymbol{\gamma }_{z} - B^{x} \boldsymbol{\gamma }_{y}\wedge \boldsymbol{\gamma }_{z} \end{equation*}
\begin{equation*} \text{Four Current Density\;\;} J = J^{t} \boldsymbol{\gamma }_{t} + J^{x} \boldsymbol{\gamma }_{x} + J^{y} \boldsymbol{\gamma }_{y} + J^{z} \boldsymbol{\gamma }_{z} \end{equation*}
Geometric Derivative of Electromagnetic Field Bi-Vector
Maxwell Equations
\begin{equation*} \boldsymbol{\nabla} F = J \end{equation*}
Div $E$ and Curl $H$ Equations
Curl $E$ and Div $B$ equations
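To spell out what the two headings above refer to (commentary, not generated output): $F$ is a bivector and $J$ a vector, so the single equation $\nabla F = J$ splits by grade into

$$\langle\nabla F\rangle_1 = \nabla\cdot F = J \quad(\text{Gauss and Ampère–Maxwell}),\qquad \langle\nabla F\rangle_3 = \nabla\wedge F = 0 \quad(\text{Faraday and }\nabla\cdot\mathbf{B}=0),$$

which is what the `gradF.grade(1)` and `gradF.grade(3)` lines in the listing check.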
\begin{lstlisting}[language=Python,showspaces=false,showstringspaces=false,backgroundcolor=\color{gray},frame=single]
def Dirac_Equation_in_Geometric_Calculus():
Print_Function()
vars = symbols('t x y z')
(g0,g1,g2,g3,grad) = MV.setup('gamma*t|x|y|z',metric='[1,-1,-1,-1]',coords=vars)
I = MV.I
(m,e) = symbols('m e')
psi = MV('psi','spinor',fct=True)
A = MV('A','vector',fct=True)
sig_z = g3*g0
print('\\text{4-Vector Potential\\;\\;}\\bm{A} =',A)
print('\\text{8-component real spinor\\;\\;}\\bm{\\psi} =',psi)
dirac_eq = (grad*psi)*I*sig_z-e*A*psi-m*psi*g0
dirac_eq.simplify()
dirac_eq.Fmt(3,r'%\text{Dirac Equation\;\;}\nabla \bm{\psi} I \sigma_{z}-e\bm{A}\bm{\psi}-m\bm{\psi}\gamma_{t} = 0')
return
\end{lstlisting}
Code Output:
\begin{equation*} \text{4-Vector Potential\;\;}\bm{A} = A^{t} \boldsymbol{\gamma }_{t} + A^{x} \boldsymbol{\gamma }_{x} + A^{y} \boldsymbol{\gamma }_{y} + A^{z} \boldsymbol{\gamma }_{z} \end{equation*}
\begin{equation*} \text{8-component real spinor\;\;}\bm{\psi} = \psi + \psi ^{tx} \boldsymbol{\gamma }_{t}\wedge \boldsymbol{\gamma }_{x} + \psi ^{ty} \boldsymbol{\gamma }_{t}\wedge \boldsymbol{\gamma }_{y} + \psi ^{tz} \boldsymbol{\gamma }_{t}\wedge \boldsymbol{\gamma }_{z} + \psi ^{xy} \boldsymbol{\gamma }_{x}\wedge \boldsymbol{\gamma }_{y} + \psi ^{xz} \boldsymbol{\gamma }_{x}\wedge \boldsymbol{\gamma }_{z} + \psi ^{yz} \boldsymbol{\gamma }_{y}\wedge \boldsymbol{\gamma }_{z} + \psi ^{txyz} \boldsymbol{\gamma }_{t}\wedge \boldsymbol{\gamma }_{x}\wedge \boldsymbol{\gamma }_{y}\wedge \boldsymbol{\gamma }_{z} \end{equation*}
\begin{lstlisting}[language=Python,showspaces=false,showstringspaces=false,backgroundcolor=\color{gray},frame=single]
def Lorentz_Tranformation_in_Geometric_Algebra():
Print_Function()
(alpha,beta,gamma) = symbols('alpha beta gamma')
(x,t,xp,tp) = symbols("x t x' t'")
(g0,g1) = MV.setup('gamma*t|x',metric='[1,-1]')
from sympy import sinh,cosh
R = cosh(alpha/2)+sinh(alpha/2)*(g0^g1)
X = t*g0+x*g1
Xp = tp*g0+xp*g1
print('R =',R)
print(r"#%t\bm{\gamma_{t}}+x\bm{\gamma_{x}} = t'\bm{\gamma'_{t}}+x'\bm{\gamma'_{x}} = R\lp t'\bm{\gamma_{t}}+x'\bm{\gamma_{x}}\rp R^{\dagger}")
Xpp = R*Xp*R.rev()
Xpp = Xpp.collect()
Xpp = Xpp.subs({2*sinh(alpha/2)*cosh(alpha/2):sinh(alpha),sinh(alpha/2)**2+cosh(alpha/2)**2:cosh(alpha)})
print(r"%t\bm{\gamma_{t}}+x\bm{\gamma_{x}} =",Xpp)
Xpp = Xpp.subs({sinh(alpha):gamma*beta,cosh(alpha):gamma})
print(r'%\f{\sinh}{\alpha} = \gamma\beta')
print(r'%\f{\cosh}{\alpha} = \gamma')
print(r"%t\bm{\gamma_{t}}+x\bm{\gamma_{x}} =",Xpp.collect())
return
\end{lstlisting}
Code Output:
\begin{equation*} R = \cosh{\left (\frac{\alpha }{2} \right )} + \sinh{\left (\frac{\alpha }{2} \right )} \boldsymbol{\gamma }_{t}\wedge \boldsymbol{\gamma }_{x} \end{equation*}
\begin{equation*} t\bm{\gamma_{t}}+x\bm{\gamma_{x}} = t'\bm{\gamma'_{t}}+x'\bm{\gamma'_{x}} = R\lp t'\bm{\gamma_{t}}+x'\bm{\gamma_{x}}\rp R^{\dagger} \end{equation*}
\begin{equation*} t\bm{\gamma_{t}}+x\bm{\gamma_{x}} = \left ( 2 t' {\sinh{\left (\frac{\alpha }{2} \right )}}^{2} + t' - x' \sinh{\left (\alpha \right )}\right ) \boldsymbol{\gamma }_{t} + \left ( - t' \sinh{\left (\alpha \right )} + 2 x' {\sinh{\left (\frac{\alpha }{2} \right )}}^{2} + x'\right ) \boldsymbol{\gamma }_{x} \end{equation*}
\begin{equation*} \f{\sinh}{\alpha} = \gamma\beta \end{equation*}
\begin{equation*} \f{\cosh}{\alpha} = \gamma \end{equation*}
\begin{equation*} t\bm{\gamma_{t}}+x\bm{\gamma_{x}} = \left ( - \beta \gamma x' + 2 t' {\sinh{\left (\frac{\alpha }{2} \right )}}^{2} + t'\right ) \boldsymbol{\gamma }_{t} + \left ( - \beta \gamma t' + 2 x' {\sinh{\left (\frac{\alpha }{2} \right )}}^{2} + x'\right ) \boldsymbol{\gamma }_{x} \end{equation*}
\end{document}
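The last two equations of this output are easier to recognize after using the hyperbolic identity $2\sinh^2(\alpha/2) + 1 = \cosh\alpha$ together with the printed substitutions $\sinh\alpha = \gamma\beta$, $\cosh\alpha = \gamma$: the boost then takes its familiar form

$$t = t'\cosh\alpha - x'\sinh\alpha = \gamma\,(t' - \beta x'),\qquad x = x'\cosh\alpha - t'\sinh\alpha = \gamma\,(x' - \beta t').$$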
```python
check_latex('print_check_latex')
```
print_check_latex.py:5: DeprecationWarning: The `galgebra.deprecated` module is deprecated
from galgebra.deprecated import MV
print_check_latex.py:11: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
A = MV('A','mv')
print_check_latex.py:19: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
f = MV('f','scalar',fct=True)
print_check_latex.py:20: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
A = MV('A','vector',fct=True)
print_check_latex.py:21: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
B = MV('B','grade2',fct=True)
print_check_latex.py:88: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
f = MV('f','scalar',fct=True)
print_check_latex.py:89: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
A = MV('A','vector',fct=True)
print_check_latex.py:90: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
B = MV('B','grade2',fct=True)
print_check_latex.py:104: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
B = MV('B','vector',fct=True)
print_check_latex.py:105: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
E = MV('E','vector',fct=True)
print_check_latex.py:110: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
J = MV('J','vector',fct=True)
print_check_latex.py:152: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
psi = MV('psi','spinor',fct=True)
print_check_latex.py:153: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
A = MV('A','vector',fct=True)
(LaTeX preamble identical to the one shown above omitted.)
\begin{document}
\begin{equation*} \bm{A} = A + A^{x} \boldsymbol{e}_{x} + A^{y} \boldsymbol{e}_{y} + A^{z} \boldsymbol{e}_{z} + A^{xy} \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{y} + A^{xz} \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{z} + A^{yz} \boldsymbol{e}_{y}\wedge \boldsymbol{e}_{z} + A^{xyz} \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{y}\wedge \boldsymbol{e}_{z} \end{equation*}
\begin{equation*} \bm{A} = \begin{aligned}[t] & A \\ & + A^{x} \boldsymbol{e}_{x} + A^{y} \boldsymbol{e}_{y} + A^{z} \boldsymbol{e}_{z} \\ & + A^{xy} \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{y} + A^{xz} \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{z} + A^{yz} \boldsymbol{e}_{y}\wedge \boldsymbol{e}_{z} \\ & + A^{xyz} \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{y}\wedge \boldsymbol{e}_{z} \end{aligned} \end{equation*}
\begin{equation*} \bm{A} = \begin{aligned}[t] & A \\ & + A^{x} \boldsymbol{e}_{x} \\ & + A^{y} \boldsymbol{e}_{y} \\ & + A^{z} \boldsymbol{e}_{z} \\ & + A^{xy} \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{y} \\ & + A^{xz} \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{z} \\ & + A^{yz} \boldsymbol{e}_{y}\wedge \boldsymbol{e}_{z} \\ & + A^{xyz} \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{y}\wedge \boldsymbol{e}_{z} \end{aligned} \end{equation*}
\begin{equation*} \bm{A} = A^{x} \boldsymbol{e}_{x} + A^{y} \boldsymbol{e}_{y} + A^{z} \boldsymbol{e}_{z} \end{equation*}
\begin{equation*} \bm{B} = B^{xy} \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{y} + B^{xz} \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{z} + B^{yz} \boldsymbol{e}_{y}\wedge \boldsymbol{e}_{z} \end{equation*}
\begin{equation*} \boldsymbol{\nabla} f = \partial_{x} f \boldsymbol{e}_{x} + \partial_{y} f \boldsymbol{e}_{y} + \partial_{z} f \boldsymbol{e}_{z} \end{equation*}
\begin{equation*} \boldsymbol{\nabla} \cdot \bm{A} = \partial_{x} A^{x} + \partial_{y} A^{y} + \partial_{z} A^{z} \end{equation*}
\begin{equation*} \boldsymbol{\nabla} \bm{A} = \left ( \partial_{x} A^{x} + \partial_{y} A^{y} + \partial_{z} A^{z} \right ) + \left ( - \partial_{y} A^{x} + \partial_{x} A^{y} \right ) \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{y} + \left ( - \partial_{z} A^{x} + \partial_{x} A^{z} \right ) \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{z} + \left ( - \partial_{z} A^{y} + \partial_{y} A^{z} \right ) \boldsymbol{e}_{y}\wedge \boldsymbol{e}_{z} \end{equation*}
\begin{equation*} -I (\boldsymbol{\nabla} \W \bm{A}) = \left ( - \partial_{z} A^{y} + \partial_{y} A^{z} \right ) \boldsymbol{e}_{x} + \left ( \partial_{z} A^{x} - \partial_{x} A^{z} \right ) \boldsymbol{e}_{y} + \left ( - \partial_{y} A^{x} + \partial_{x} A^{y} \right ) \boldsymbol{e}_{z} \end{equation*}
\begin{equation*} \boldsymbol{\nabla} \bm{B} = \left ( - \partial_{y} B^{xy} - \partial_{z} B^{xz} \right ) \boldsymbol{e}_{x} + \left ( \partial_{x} B^{xy} - \partial_{z} B^{yz} \right ) \boldsymbol{e}_{y} + \left ( \partial_{x} B^{xz} + \partial_{y} B^{yz} \right ) \boldsymbol{e}_{z} + \left ( \partial_{z} B^{xy} - \partial_{y} B^{xz} + \partial_{x} B^{yz} \right ) \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{y}\wedge \boldsymbol{e}_{z} \end{equation*}
\begin{equation*} \boldsymbol{\nabla} \W \bm{B} = \left ( \partial_{z} B^{xy} - \partial_{y} B^{xz} + \partial_{x} B^{yz} \right ) \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{y}\wedge \boldsymbol{e}_{z} \end{equation*}
\begin{equation*} \boldsymbol{\nabla} \cdot \bm{B} = \left ( - \partial_{y} B^{xy} - \partial_{z} B^{xz} \right ) \boldsymbol{e}_{x} + \left ( \partial_{x} B^{xy} - \partial_{z} B^{yz} \right ) \boldsymbol{e}_{y} + \left ( \partial_{x} B^{xz} + \partial_{y} B^{yz} \right ) \boldsymbol{e}_{z} \end{equation*}
\begin{equation*} g_{ij} = \left[\begin{array}{cccc}\left (\boldsymbol{a}\cdot \boldsymbol{a}\right ) & \left (\boldsymbol{a}\cdot \boldsymbol{b}\right ) & \left (\boldsymbol{a}\cdot \boldsymbol{c}\right ) & \left (\boldsymbol{a}\cdot \boldsymbol{d}\right ) \\\left (\boldsymbol{a}\cdot \boldsymbol{b}\right ) & \left (\boldsymbol{b}\cdot \boldsymbol{b}\right ) & \left (\boldsymbol{b}\cdot \boldsymbol{c}\right ) & \left (\boldsymbol{b}\cdot \boldsymbol{d}\right ) \\\left (\boldsymbol{a}\cdot \boldsymbol{c}\right ) & \left (\boldsymbol{b}\cdot \boldsymbol{c}\right ) & \left (\boldsymbol{c}\cdot \boldsymbol{c}\right ) & \left (\boldsymbol{c}\cdot \boldsymbol{d}\right ) \\\left (\boldsymbol{a}\cdot \boldsymbol{d}\right ) & \left (\boldsymbol{b}\cdot \boldsymbol{d}\right ) & \left (\boldsymbol{c}\cdot \boldsymbol{d}\right ) & \left (\boldsymbol{d}\cdot \boldsymbol{d}\right ) \end{array}\right] \end{equation*}
\begin{equation*} \bm{a\cdot (b c)} = - \left (\boldsymbol{a}\cdot \boldsymbol{c}\right ) \boldsymbol{b} + \left (\boldsymbol{a}\cdot \boldsymbol{b}\right ) \boldsymbol{c} \end{equation*}
\begin{equation*} \bm{a\cdot (b\W c)} = - \left (\boldsymbol{a}\cdot \boldsymbol{c}\right ) \boldsymbol{b} + \left (\boldsymbol{a}\cdot \boldsymbol{b}\right ) \boldsymbol{c} \end{equation*}
\begin{equation*} \bm{a\cdot (b\W c\W d)} = \left (\boldsymbol{a}\cdot \boldsymbol{d}\right ) \boldsymbol{b}\wedge \boldsymbol{c} - \left (\boldsymbol{a}\cdot \boldsymbol{c}\right ) \boldsymbol{b}\wedge \boldsymbol{d} + \left (\boldsymbol{a}\cdot \boldsymbol{b}\right ) \boldsymbol{c}\wedge \boldsymbol{d} \end{equation*}
\begin{equation*} \bm{a\cdot (b\W c)+c\cdot (a\W b)+b\cdot (c\W a)} = 0 \end{equation*}
\begin{equation*} \bm{a (b\W c)-b (a\W c)+c (a\W b)} = 3 \boldsymbol{a}\wedge \boldsymbol{b}\wedge \boldsymbol{c} \end{equation*}
\begin{equation*} \bm{a (b\W c\W d)-b (a\W c\W d)+c (a\W b\W d)-d (a\W b\W c)} = 4 \boldsymbol{a}\wedge \boldsymbol{b}\wedge \boldsymbol{c}\wedge \boldsymbol{d} \end{equation*}
\begin{equation*} \bm{(a\W b)\cdot (c\W d)} = - \left (\boldsymbol{a}\cdot \boldsymbol{c}\right ) \left (\boldsymbol{b}\cdot \boldsymbol{d}\right ) + \left (\boldsymbol{a}\cdot \boldsymbol{d}\right ) \left (\boldsymbol{b}\cdot \boldsymbol{c}\right ) \end{equation*}
\begin{equation*} \bm{((a\W b)\cdot c)\cdot d} = - \left (\boldsymbol{a}\cdot \boldsymbol{c}\right ) \left (\boldsymbol{b}\cdot \boldsymbol{d}\right ) + \left (\boldsymbol{a}\cdot \boldsymbol{d}\right ) \left (\boldsymbol{b}\cdot \boldsymbol{c}\right ) \end{equation*}
\begin{equation*} \bm{(a\W b)\times (c\W d)} = - \left (\boldsymbol{b}\cdot \boldsymbol{d}\right ) \boldsymbol{a}\wedge \boldsymbol{c} + \left (\boldsymbol{b}\cdot \boldsymbol{c}\right ) \boldsymbol{a}\wedge \boldsymbol{d} + \left (\boldsymbol{a}\cdot \boldsymbol{d}\right ) \boldsymbol{b}\wedge \boldsymbol{c} - \left (\boldsymbol{a}\cdot \boldsymbol{c}\right ) \boldsymbol{b}\wedge \boldsymbol{d} \end{equation*}
\begin{equation*} E = \boldsymbol{e}_{1}\wedge \boldsymbol{e}_{2}\wedge \boldsymbol{e}_{3} \end{equation*}
\begin{equation*} E^{2} = \left (\boldsymbol{e}_{1}\cdot \boldsymbol{e}_{2}\right ) ^{2} - 2 \left (\boldsymbol{e}_{1}\cdot \boldsymbol{e}_{2}\right ) \left (\boldsymbol{e}_{1}\cdot \boldsymbol{e}_{3}\right ) \left (\boldsymbol{e}_{2}\cdot \boldsymbol{e}_{3}\right ) + \left (\boldsymbol{e}_{1}\cdot \boldsymbol{e}_{3}\right ) ^{2} + \left (\boldsymbol{e}_{2}\cdot \boldsymbol{e}_{3}\right ) ^{2} - 1 \end{equation*}
\begin{equation*} E1 = (e2\W e3) E = \left ( \left (\boldsymbol{e}_{2}\cdot \boldsymbol{e}_{3}\right ) ^{2} - 1\right ) \boldsymbol{e}_{1} + \left ( \left (\boldsymbol{e}_{1}\cdot \boldsymbol{e}_{2}\right ) - \left (\boldsymbol{e}_{1}\cdot \boldsymbol{e}_{3}\right ) \left (\boldsymbol{e}_{2}\cdot \boldsymbol{e}_{3}\right ) \right ) \boldsymbol{e}_{2} + \left ( - \left (\boldsymbol{e}_{1}\cdot \boldsymbol{e}_{2}\right ) \left (\boldsymbol{e}_{2}\cdot \boldsymbol{e}_{3}\right ) + \left (\boldsymbol{e}_{1}\cdot \boldsymbol{e}_{3}\right ) \right ) \boldsymbol{e}_{3} \end{equation*}
\begin{equation*} E2 =-(e1\W e3) E = \left ( \left (\boldsymbol{e}_{1}\cdot \boldsymbol{e}_{2}\right ) - \left (\boldsymbol{e}_{1}\cdot \boldsymbol{e}_{3}\right ) \left (\boldsymbol{e}_{2}\cdot \boldsymbol{e}_{3}\right ) \right ) \boldsymbol{e}_{1} + \left ( \left (\boldsymbol{e}_{1}\cdot \boldsymbol{e}_{3}\right ) ^{2} - 1\right ) \boldsymbol{e}_{2} + \left ( - \left (\boldsymbol{e}_{1}\cdot \boldsymbol{e}_{2}\right ) \left (\boldsymbol{e}_{1}\cdot \boldsymbol{e}_{3}\right ) + \left (\boldsymbol{e}_{2}\cdot \boldsymbol{e}_{3}\right ) \right ) \boldsymbol{e}_{3} \end{equation*}
\begin{equation*} E3 = (e1\W e2) E = \left ( - \left (\boldsymbol{e}_{1}\cdot \boldsymbol{e}_{2}\right ) \left (\boldsymbol{e}_{2}\cdot \boldsymbol{e}_{3}\right ) + \left (\boldsymbol{e}_{1}\cdot \boldsymbol{e}_{3}\right ) \right ) \boldsymbol{e}_{1} + \left ( - \left (\boldsymbol{e}_{1}\cdot \boldsymbol{e}_{2}\right ) \left (\boldsymbol{e}_{1}\cdot \boldsymbol{e}_{3}\right ) + \left (\boldsymbol{e}_{2}\cdot \boldsymbol{e}_{3}\right ) \right ) \boldsymbol{e}_{2} + \left ( \left (\boldsymbol{e}_{1}\cdot \boldsymbol{e}_{2}\right ) ^{2} - 1\right ) \boldsymbol{e}_{3} \end{equation*}
\begin{equation*} E1\cdot e2 = 0 \end{equation*}
\begin{equation*} E1\cdot e3 = 0 \end{equation*}
\begin{equation*} E2\cdot e1 = 0 \end{equation*}
\begin{equation*} E2\cdot e3 = 0 \end{equation*}
\begin{equation*} E3\cdot e1 = 0 \end{equation*}
\begin{equation*} E3\cdot e2 = 0 \end{equation*}
\begin{equation*} (E1\cdot e1)/E^{2} = 1 \end{equation*}
\begin{equation*} (E2\cdot e2)/E^{2} = 1 \end{equation*}
\begin{equation*} (E3\cdot e3)/E^{2} = 1 \end{equation*}
\begin{equation*} A = A^{r} \boldsymbol{e}_{r} + A^{\theta } \boldsymbol{e}_{\theta } + A^{\phi } \boldsymbol{e}_{\phi } \end{equation*}
\begin{equation*} B = B^{r\theta } \boldsymbol{e}_{r}\wedge \boldsymbol{e}_{\theta } + B^{r\phi } \boldsymbol{e}_{r}\wedge \boldsymbol{e}_{\phi } + B^{\theta \phi } \boldsymbol{e}_{\theta }\wedge \boldsymbol{e}_{\phi } \end{equation*}
\begin{equation*} \boldsymbol{\nabla} f = \partial_{r} f \boldsymbol{e}_{r} + \frac{\partial_{\theta } f }{r^{2}} \boldsymbol{e}_{\theta } + \frac{\partial_{\phi } f }{r^{2} {\sin{\left (\theta \right )}}^{2}} \boldsymbol{e}_{\phi } \end{equation*}
\begin{equation*} \boldsymbol{\nabla} \cdot A = \frac{A^{\theta } }{\tan{\left (\theta \right )}} + \partial_{\phi } A^{\phi } + \partial_{r} A^{r} + \partial_{\theta } A^{\theta } + \frac{2 A^{r} }{r} \end{equation*}
\begin{equation*} -I (\boldsymbol{\nabla} \W A) = \frac{\sqrt{r^{4} {\sin{\left (\theta \right )}}^{2}} \left(\frac{2 A^{\phi } }{\tan{\left (\theta \right )}} + \partial_{\theta } A^{\phi } - \frac{\partial_{\phi } A^{\theta } }{{\sin{\left (\theta \right )}}^{2}}\right)}{r^{2}} \boldsymbol{e}_{r} + \frac{- r^{2} {\sin{\left (\theta \right )}}^{2} \partial_{r} A^{\phi } - 2 r A^{\phi } {\sin{\left (\theta \right )}}^{2} + \partial_{\phi } A^{r} }{\sqrt{r^{4} {\sin{\left (\theta \right )}}^{2}}} \boldsymbol{e}_{\theta } + \frac{r^{2} \partial_{r} A^{\theta } + 2 r A^{\theta } - \partial_{\theta } A^{r} }{\sqrt{r^{4} {\sin{\left (\theta \right )}}^{2}}} \boldsymbol{e}_{\phi } \end{equation*}
\begin{equation*} \boldsymbol{\nabla} \W B = \frac{r^{2} \partial_{r} B^{\theta \phi } + 4 r B^{\theta \phi } - \frac{2 B^{r\phi } }{\tan{\left (\theta \right )}} - \partial_{\theta } B^{r\phi } + \frac{\partial_{\phi } B^{r\theta } }{{\sin{\left (\theta \right )}}^{2}}}{r^{2}} \boldsymbol{e}_{r}\wedge \boldsymbol{e}_{\theta }\wedge \boldsymbol{e}_{\phi } \end{equation*}
\begin{equation*} B = \bm{B\gamma_{t}} = - B^{x} \boldsymbol{\gamma }_{t}\wedge \boldsymbol{\gamma }_{x} - B^{y} \boldsymbol{\gamma }_{t}\wedge \boldsymbol{\gamma }_{y} - B^{z} \boldsymbol{\gamma }_{t}\wedge \boldsymbol{\gamma }_{z} \end{equation*}
\begin{equation*} E = \bm{E\gamma_{t}} = - E^{x} \boldsymbol{\gamma }_{t}\wedge \boldsymbol{\gamma }_{x} - E^{y} \boldsymbol{\gamma }_{t}\wedge \boldsymbol{\gamma }_{y} - E^{z} \boldsymbol{\gamma }_{t}\wedge \boldsymbol{\gamma }_{z} \end{equation*}
\begin{equation*} F = E+IB = - E^{x} \boldsymbol{\gamma }_{t}\wedge \boldsymbol{\gamma }_{x} - E^{y} \boldsymbol{\gamma }_{t}\wedge \boldsymbol{\gamma }_{y} - E^{z} \boldsymbol{\gamma }_{t}\wedge \boldsymbol{\gamma }_{z} - B^{z} \boldsymbol{\gamma }_{x}\wedge \boldsymbol{\gamma }_{y} + B^{y} \boldsymbol{\gamma }_{x}\wedge \boldsymbol{\gamma }_{z} - B^{x} \boldsymbol{\gamma }_{y}\wedge \boldsymbol{\gamma }_{z} \end{equation*}
\begin{equation*} J = J^{t} \boldsymbol{\gamma }_{t} + J^{x} \boldsymbol{\gamma }_{x} + J^{y} \boldsymbol{\gamma }_{y} + J^{z} \boldsymbol{\gamma }_{z} \end{equation*}
\begin{equation*} \boldsymbol{\nabla} F = J \end{equation*}
\begin{equation*} R = \cosh{\left (\frac{\alpha }{2} \right )} + \sinh{\left (\frac{\alpha }{2} \right )} \boldsymbol{\gamma }_{t}\wedge \boldsymbol{\gamma }_{x} \end{equation*}
\begin{equation*} t\bm{\gamma_{t}}+x\bm{\gamma_{x}} = t'\bm{\gamma'_{t}}+x'\bm{\gamma'_{x}} = R\lp t'\bm{\gamma_{t}}+x'\bm{\gamma_{x}}\rp R^{\dagger} \end{equation*}
\begin{equation*} t\bm{\gamma_{t}}+x\bm{\gamma_{x}} = \left ( 2 t' {\sinh{\left (\frac{\alpha }{2} \right )}}^{2} + t' - x' \sinh{\left (\alpha \right )}\right ) \boldsymbol{\gamma }_{t} + \left ( - t' \sinh{\left (\alpha \right )} + 2 x' {\sinh{\left (\frac{\alpha }{2} \right )}}^{2} + x'\right ) \boldsymbol{\gamma }_{x} \end{equation*}
\begin{equation*} \f{\sinh}{\alpha} = \gamma\beta \end{equation*}
\begin{equation*} \f{\cosh}{\alpha} = \gamma \end{equation*}
\begin{equation*} t\bm{\gamma_{t}}+x\bm{\gamma_{x}} = \left ( - \beta \gamma x' + 2 t' {\sinh{\left (\frac{\alpha }{2} \right )}}^{2} + t'\right ) \boldsymbol{\gamma }_{t} + \left ( - \beta \gamma t' + 2 x' {\sinh{\left (\frac{\alpha }{2} \right )}}^{2} + x'\right ) \boldsymbol{\gamma }_{x} \end{equation*}
\begin{equation*} \bm{A} = A^{t} \boldsymbol{\gamma }_{t} + A^{x} \boldsymbol{\gamma }_{x} + A^{y} \boldsymbol{\gamma }_{y} + A^{z} \boldsymbol{\gamma }_{z} \end{equation*}
\begin{equation*} \bm{\psi} = \psi + \psi ^{tx} \boldsymbol{\gamma }_{t}\wedge \boldsymbol{\gamma }_{x} + \psi ^{ty} \boldsymbol{\gamma }_{t}\wedge \boldsymbol{\gamma }_{y} + \psi ^{tz} \boldsymbol{\gamma }_{t}\wedge \boldsymbol{\gamma }_{z} + \psi ^{xy} \boldsymbol{\gamma }_{x}\wedge \boldsymbol{\gamma }_{y} + \psi ^{xz} \boldsymbol{\gamma }_{x}\wedge \boldsymbol{\gamma }_{z} + \psi ^{yz} \boldsymbol{\gamma }_{y}\wedge \boldsymbol{\gamma }_{z} + \psi ^{txyz} \boldsymbol{\gamma }_{t}\wedge \boldsymbol{\gamma }_{x}\wedge \boldsymbol{\gamma }_{y}\wedge \boldsymbol{\gamma }_{z} \end{equation*}
\end{document}
```python
run('prob_not_solenoidal')
```
A = z*e_x^e_y - y*e_x^e_z + x*e_y^e_z
grad^A = 3*e_x^e_y^e_z
f = (x**2 + y**2 + z**2)**(-1.5)
grad*f = -3.0*x*(x**2 + y**2 + z**2)**(-2.5)*e_x - 3.0*y*(x**2 + y**2 + z**2)**(-2.5)*e_y - 3.0*z*(x**2 + y**2 + z**2)**(-2.5)*e_z
B = z*(x**2 + y**2 + z**2)**(-1.5)*e_x^e_y - y*(x**2 + y**2 + z**2)**(-1.5)*e_x^e_z + x*(x**2 + y**2 + z**2)**(-1.5)*e_y^e_z
grad^B = 0
0
prob_not_solenoidal.py:5: DeprecationWarning: The `galgebra.deprecated` module is deprecated
from galgebra.deprecated import MV
prob_not_solenoidal.py:18: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
f = MV('f','scalar',fct=True)
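The point of `prob_not_solenoidal` is visible in the output above: $\nabla\wedge A = 3\,e_x\wedge e_y\wedge e_z \neq 0$, yet after scaling by $f = r^{-3}$ (with $r^2 = x^2+y^2+z^2$) the exterior derivative vanishes. A one-line check of why the two contributions cancel, using $\mathbf{r}\wedge A = r^2 I$ with $\mathbf{r} = x\,e_x + y\,e_y + z\,e_z$ and $I = e_x\wedge e_y\wedge e_z$:

$$\nabla\wedge(fA) = (\nabla f)\wedge A + f\,\nabla\wedge A = -3r^{-5}\,\mathbf{r}\wedge A + 3r^{-3}I = -3r^{-3}I + 3r^{-3}I = 0.$$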
```python
run('products_latex')
```
#3D Orthogonal Metric\newline
#Multivectors:
s = s
v = v__x*e_x + v__y*e_y + v__z*e_z
b = b__xy*e_x^e_y + b__xz*e_x^e_z + b__yz*e_y^e_z
#Products:
s*s = s**2
s^s = s**2
s<s = s**2
s>s = s**2
s*v = s*v__x*e_x + s*v__y*e_y + s*v__z*e_z
s^v = s*v__x*e_x + s*v__y*e_y + s*v__z*e_z
s<v = s*v__x*e_x + s*v__y*e_y + s*v__z*e_z
s>v = 0
s*b = b__xy*s*e_x^e_y + b__xz*s*e_x^e_z + b__yz*s*e_y^e_z
s^b = b__xy*s*e_x^e_y + b__xz*s*e_x^e_z + b__yz*s*e_y^e_z
s<b = b__xy*s*e_x^e_y + b__xz*s*e_x^e_z + b__yz*s*e_y^e_z
s>b = 0
v*s = s*v__x*e_x + s*v__y*e_y + s*v__z*e_z
v^s = s*v__x*e_x + s*v__y*e_y + s*v__z*e_z
v<s = 0
v>s = s*v__x*e_x + s*v__y*e_y + s*v__z*e_z
v*v = v__x**2 + v__y**2 + v__z**2
v^v = 0
v|v = v__x**2 + v__y**2 + v__z**2
v<v = v__x**2 + v__y**2 + v__z**2
v>v = v__x**2 + v__y**2 + v__z**2
v*b = (-b__xy*v__y - b__xz*v__z)*e_x + (b__xy*v__x - b__yz*v__z)*e_y + (b__xz*v__x + b__yz*v__y)*e_z + (b__xy*v__z - b__xz*v__y + b__yz*v__x)*e_x^e_y^e_z
v^b = (b__xy*v__z - b__xz*v__y + b__yz*v__x)*e_x^e_y^e_z
v|b = (-b__xy*v__y - b__xz*v__z)*e_x + (b__xy*v__x - b__yz*v__z)*e_y + (b__xz*v__x + b__yz*v__y)*e_z
v<b = (-b__xy*v__y - b__xz*v__z)*e_x + (b__xy*v__x - b__yz*v__z)*e_y + (b__xz*v__x + b__yz*v__y)*e_z
v>b = 0
b*s = b__xy*s*e_x^e_y + b__xz*s*e_x^e_z + b__yz*s*e_y^e_z
b^s = b__xy*s*e_x^e_y + b__xz*s*e_x^e_z + b__yz*s*e_y^e_z
b<s = 0
b>s = b__xy*s*e_x^e_y + b__xz*s*e_x^e_z + b__yz*s*e_y^e_z
b*v = (b__xy*v__y + b__xz*v__z)*e_x + (-b__xy*v__x + b__yz*v__z)*e_y + (-b__xz*v__x - b__yz*v__y)*e_z + (b__xy*v__z - b__xz*v__y + b__yz*v__x)*e_x^e_y^e_z
b^v = (b__xy*v__z - b__xz*v__y + b__yz*v__x)*e_x^e_y^e_z
b|v = (b__xy*v__y + b__xz*v__z)*e_x + (-b__xy*v__x + b__yz*v__z)*e_y + (-b__xz*v__x - b__yz*v__y)*e_z
b<v = 0
b>v = (b__xy*v__y + b__xz*v__z)*e_x + (-b__xy*v__x + b__yz*v__z)*e_y + (-b__xz*v__x - b__yz*v__y)*e_z
b*b = -b__xy**2 - b__xz**2 - b__yz**2
b^b = 0
b|b = -b__xy**2 - b__xz**2 - b__yz**2
b<b = -b__xy**2 - b__xz**2 - b__yz**2
b>b = -b__xy**2 - b__xz**2 - b__yz**2
#Multivector Functions:
s(X) = s
v(X) = v__x*e_x + v__y*e_y + v__z*e_z
b(X) = b__xy*e_x^e_y + b__xz*e_x^e_z + b__yz*e_y^e_z
#Products:
grad*s = D{x}s*e_x + D{y}s*e_y + D{z}s*e_z
grad^s = D{x}s*e_x + D{y}s*e_y + D{z}s*e_z
grad<s = 0
grad>s = D{x}s*e_x + D{y}s*e_y + D{z}s*e_z
grad*v = D{x}v__x + D{y}v__y + D{z}v__z + (-D{y}v__x + D{x}v__y)*e_x^e_y + (-D{z}v__x + D{x}v__z)*e_x^e_z + (-D{z}v__y + D{y}v__z)*e_y^e_z
grad^v = (-D{y}v__x + D{x}v__y)*e_x^e_y + (-D{z}v__x + D{x}v__z)*e_x^e_z + (-D{z}v__y + D{y}v__z)*e_y^e_z
grad|v = D{x}v__x + D{y}v__y + D{z}v__z
grad<v = D{x}v__x + D{y}v__y + D{z}v__z
grad>v = D{x}v__x + D{y}v__y + D{z}v__z
grad*b = (-D{y}b__xy - D{z}b__xz)*e_x + (D{x}b__xy - D{z}b__yz)*e_y + (D{x}b__xz + D{y}b__yz)*e_z + (D{z}b__xy - D{y}b__xz + D{x}b__yz)*e_x^e_y^e_z
grad^b = (D{z}b__xy - D{y}b__xz + D{x}b__yz)*e_x^e_y^e_z
grad|b = (-D{y}b__xy - D{z}b__xz)*e_x + (D{x}b__xy - D{z}b__yz)*e_y + (D{x}b__xz + D{y}b__yz)*e_z
grad<b = (-D{y}b__xy - D{z}b__xz)*e_x + (D{x}b__xy - D{z}b__yz)*e_y + (D{x}b__xz + D{y}b__yz)*e_z
grad>b = 0
s*grad = e_x*s*D{x} + e_y*s*D{y} + e_z*s*D{z}
s^grad = e_x*s*D{x} + e_y*s*D{y} + e_z*s*D{z}
s<grad = e_x*s*D{x} + e_y*s*D{y} + e_z*s*D{z}
s>grad = 0
s*s = s**2
s^s = s**2
s<s = s**2
s>s = s**2
s*v = s*v__x*e_x + s*v__y*e_y + s*v__z*e_z
s^v = s*v__x*e_x + s*v__y*e_y + s*v__z*e_z
s<v = s*v__x*e_x + s*v__y*e_y + s*v__z*e_z
s>v = 0
s*b = b__xy*s*e_x^e_y + b__xz*s*e_x^e_z + b__yz*s*e_y^e_z
s^b = b__xy*s*e_x^e_y + b__xz*s*e_x^e_z + b__yz*s*e_y^e_z
s<b = b__xy*s*e_x^e_y + b__xz*s*e_x^e_z + b__yz*s*e_y^e_z
s>b = 0
v*grad = v__x*D{x} + v__y*D{y} + v__z*D{z} + e_x^e_y*(-v__y*D{x} + v__x*D{y}) + e_x^e_z*(-v__z*D{x} + v__x*D{z}) + e_y^e_z*(-v__z*D{y} + v__y*D{z})
v^grad = e_x^e_y*(-v__y*D{x} + v__x*D{y}) + e_x^e_z*(-v__z*D{x} + v__x*D{z}) + e_y^e_z*(-v__z*D{y} + v__y*D{z})
v|grad = v__x*D{x} + v__y*D{y} + v__z*D{z}
v<grad = v__x*D{x} + v__y*D{y} + v__z*D{z}
v>grad = v__x*D{x} + v__y*D{y} + v__z*D{z}
v*s = s*v__x*e_x + s*v__y*e_y + s*v__z*e_z
v^s = s*v__x*e_x + s*v__y*e_y + s*v__z*e_z
v<s = 0
v>s = s*v__x*e_x + s*v__y*e_y + s*v__z*e_z
v*v = v__x**2 + v__y**2 + v__z**2
v^v = 0
v|v = v__x**2 + v__y**2 + v__z**2
v<v = v__x**2 + v__y**2 + v__z**2
v>v = v__x**2 + v__y**2 + v__z**2
v*b = (-b__xy*v__y - b__xz*v__z)*e_x + (b__xy*v__x - b__yz*v__z)*e_y + (b__xz*v__x + b__yz*v__y)*e_z + (b__xy*v__z - b__xz*v__y + b__yz*v__x)*e_x^e_y^e_z
v^b = (b__xy*v__z - b__xz*v__y + b__yz*v__x)*e_x^e_y^e_z
v|b = (-b__xy*v__y - b__xz*v__z)*e_x + (b__xy*v__x - b__yz*v__z)*e_y + (b__xz*v__x + b__yz*v__y)*e_z
v<b = (-b__xy*v__y - b__xz*v__z)*e_x + (b__xy*v__x - b__yz*v__z)*e_y + (b__xz*v__x + b__yz*v__y)*e_z
v>b = 0
b*grad = e_x*(b__xy*D{y} + b__xz*D{z}) + e_y*(-b__xy*D{x} + b__yz*D{z}) + e_z*(-b__xz*D{x} - b__yz*D{y}) + e_x^e_y^e_z*(b__yz*D{x} - b__xz*D{y} + b__xy*D{z})
b^grad = e_x^e_y^e_z*(b__yz*D{x} - b__xz*D{y} + b__xy*D{z})
b|grad = e_x*(b__xy*D{y} + b__xz*D{z}) + e_y*(-b__xy*D{x} + b__yz*D{z}) + e_z*(-b__xz*D{x} - b__yz*D{y})
b<grad = 0
b>grad = e_x*(b__xy*D{y} + b__xz*D{z}) + e_y*(-b__xy*D{x} + b__yz*D{z}) + e_z*(-b__xz*D{x} - b__yz*D{y})
b*s = b__xy*s*e_x^e_y + b__xz*s*e_x^e_z + b__yz*s*e_y^e_z
b^s = b__xy*s*e_x^e_y + b__xz*s*e_x^e_z + b__yz*s*e_y^e_z
b<s = 0
b>s = b__xy*s*e_x^e_y + b__xz*s*e_x^e_z + b__yz*s*e_y^e_z
b*v = (b__xy*v__y + b__xz*v__z)*e_x + (-b__xy*v__x + b__yz*v__z)*e_y + (-b__xz*v__x - b__yz*v__y)*e_z + (b__xy*v__z - b__xz*v__y + b__yz*v__x)*e_x^e_y^e_z
b^v = (b__xy*v__z - b__xz*v__y + b__yz*v__x)*e_x^e_y^e_z
b|v = (b__xy*v__y + b__xz*v__z)*e_x + (-b__xy*v__x + b__yz*v__z)*e_y + (-b__xz*v__x - b__yz*v__y)*e_z
b<v = 0
b>v = (b__xy*v__y + b__xz*v__z)*e_x + (-b__xy*v__x + b__yz*v__z)*e_y + (-b__xz*v__x - b__yz*v__y)*e_z
b*b = -b__xy**2 - b__xz**2 - b__yz**2
b^b = 0
b|b = -b__xy**2 - b__xz**2 - b__yz**2
b<b = -b__xy**2 - b__xz**2 - b__yz**2
b>b = -b__xy**2 - b__xz**2 - b__yz**2
#General 2D Metric\newline
#Multivector Functions:
s(X) = s
v(X) = v__x*e_x + v__y*e_y
b(X) = v__xy*e_x^e_y
#Products:
grad*s = (-(e_x.e_y)*D{y}s + (e_y.e_y)*D{x}s)*e_x/((e_x.e_x)*(e_y.e_y) - (e_x.e_y)**2) + ((e_x.e_x)*D{y}s - (e_x.e_y)*D{x}s)*e_y/((e_x.e_x)*(e_y.e_y) - (e_x.e_y)**2)
grad^s = (-(e_x.e_y)*D{y}s + (e_y.e_y)*D{x}s)*e_x/((e_x.e_x)*(e_y.e_y) - (e_x.e_y)**2) + ((e_x.e_x)*D{y}s - (e_x.e_y)*D{x}s)*e_y/((e_x.e_x)*(e_y.e_y) - (e_x.e_y)**2)
grad<s = 0
grad>s = (-(e_x.e_y)*D{y}s + (e_y.e_y)*D{x}s)*e_x/((e_x.e_x)*(e_y.e_y) - (e_x.e_y)**2) + ((e_x.e_x)*D{y}s - (e_x.e_y)*D{x}s)*e_y/((e_x.e_x)*(e_y.e_y) - (e_x.e_y)**2)
grad*v = D{x}v__x + D{y}v__y + (-(e_x.e_x)*D{y}v__x + (e_x.e_y)*D{x}v__x - (e_x.e_y)*D{y}v__y + (e_y.e_y)*D{x}v__y)*e_x^e_y/((e_x.e_x)*(e_y.e_y) - (e_x.e_y)**2)
grad^v = (-(e_x.e_x)*D{y}v__x + (e_x.e_y)*D{x}v__x - (e_x.e_y)*D{y}v__y + (e_y.e_y)*D{x}v__y)*e_x^e_y/((e_x.e_x)*(e_y.e_y) - (e_x.e_y)**2)
grad|v = D{x}v__x + D{y}v__y
grad<v = D{x}v__x + D{y}v__y
grad>v = D{x}v__x + D{y}v__y
s*grad = e_x*((e_y.e_y)*s/((e_x.e_x)*(e_y.e_y) - (e_x.e_y)**2)*D{x} - (e_x.e_y)*s/((e_x.e_x)*(e_y.e_y) - (e_x.e_y)**2)*D{y}) + e_y*(-(e_x.e_y)*s/((e_x.e_x)*(e_y.e_y) - (e_x.e_y)**2)*D{x} + (e_x.e_x)*s/((e_x.e_x)*(e_y.e_y) - (e_x.e_y)**2)*D{y})
s^grad = e_x*((e_y.e_y)*s/((e_x.e_x)*(e_y.e_y) - (e_x.e_y)**2)*D{x} - (e_x.e_y)*s/((e_x.e_x)*(e_y.e_y) - (e_x.e_y)**2)*D{y}) + e_y*(-(e_x.e_y)*s/((e_x.e_x)*(e_y.e_y) - (e_x.e_y)**2)*D{x} + (e_x.e_x)*s/((e_x.e_x)*(e_y.e_y) - (e_x.e_y)**2)*D{y})
s<grad = e_x*((e_y.e_y)*s/((e_x.e_x)*(e_y.e_y) - (e_x.e_y)**2)*D{x} - (e_x.e_y)*s/((e_x.e_x)*(e_y.e_y) - (e_x.e_y)**2)*D{y}) + e_y*(-(e_x.e_y)*s/((e_x.e_x)*(e_y.e_y) - (e_x.e_y)**2)*D{x} + (e_x.e_x)*s/((e_x.e_x)*(e_y.e_y) - (e_x.e_y)**2)*D{y})
s>grad = 0
s*s = s**2
s^s = s**2
s<s = s**2
s>s = s**2
s*v = s*v__x*e_x + s*v__y*e_y
s^v = s*v__x*e_x + s*v__y*e_y
s<v = s*v__x*e_x + s*v__y*e_y
s>v = 0
v*grad = v__x*D{x} + v__y*D{y} + e_x^e_y*(-((e_x.e_y)*v__x + (e_y.e_y)*v__y)/((e_x.e_x)*(e_y.e_y) - (e_x.e_y)**2)*D{x} + ((e_x.e_x)*v__x + (e_x.e_y)*v__y)/((e_x.e_x)*(e_y.e_y) - (e_x.e_y)**2)*D{y})
v^grad = e_x^e_y*(-((e_x.e_y)*v__x + (e_y.e_y)*v__y)/((e_x.e_x)*(e_y.e_y) - (e_x.e_y)**2)*D{x} + ((e_x.e_x)*v__x + (e_x.e_y)*v__y)/((e_x.e_x)*(e_y.e_y) - (e_x.e_y)**2)*D{y})
v|grad = v__x*D{x} + v__y*D{y}
v<grad = v__x*D{x} + v__y*D{y}
v>grad = v__x*D{x} + v__y*D{y}
v*s = s*v__x*e_x + s*v__y*e_y
v^s = s*v__x*e_x + s*v__y*e_y
v<s = 0
v>s = s*v__x*e_x + s*v__y*e_y
v*v = (e_x.e_x)*v__x**2 + 2*(e_x.e_y)*v__x*v__y + (e_y.e_y)*v__y**2
v^v = 0
v|v = (e_x.e_x)*v__x**2 + 2*(e_x.e_y)*v__x*v__y + (e_y.e_y)*v__y**2
v<v = (e_x.e_x)*v__x**2 + 2*(e_x.e_y)*v__x*v__y + (e_y.e_y)*v__y**2
v>v = (e_x.e_x)*v__x**2 + 2*(e_x.e_y)*v__x*v__y + (e_y.e_y)*v__y**2
products_latex.py:3: DeprecationWarning: The `galgebra.deprecated` module is deprecated
from galgebra.deprecated import MV
products_latex.py:13: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
s = MV('s','scalar')
products_latex.py:14: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
v = MV('v','vector')
products_latex.py:15: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
b = MV('b','bivector')
products_latex.py:38: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
fs = MV('s','scalar',fct=True)
products_latex.py:39: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
fv = MV('v','vector',fct=True)
products_latex.py:40: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
fb = MV('b','bivector',fct=True)
products_latex.py:71: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
s = MV('s','scalar',fct=True)
products_latex.py:72: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
v = MV('v','vector',fct=True)
products_latex.py:73: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
b = MV('v','bivector',fct=True)
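For readers decoding the product tables above: in galgebra's printer notation `*` is the geometric product, `^` the outer product, `|` the inner product, and `<` / `>` the left / right contractions, which is why, for instance, `s>v = 0` (a scalar right-contracted onto a vector) while `s<v` reproduces `s*v`. A minimal sketch of re-checking a few of those entries with the non-deprecated API (the `Ga` constructor and mode strings are assumptions; the operators themselves appear in the output above):

```python
# Hypothetical quick check of some product-table identities printed above.
from galgebra.ga import Ga

o3d = Ga('e_x e_y e_z', g=[1, 1, 1])
s = o3d.mv('s', 'scalar')
v = o3d.mv('v', 'vector')
b = o3d.mv('b', 'bivector')

print(s > v)   # right contraction of a scalar onto a vector: expected 0
print(s < v)   # left contraction: expected s*v__x*e_x + s*v__y*e_y + s*v__z*e_z
print(v ^ v)   # outer product of a vector with itself: expected 0
print(b | b)   # inner product of a bivector with itself: expected -b__xy**2 - b__xz**2 - b__yz**2
```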
```python
check_latex('reflect_test')
```
reflect_test.py:4: DeprecationWarning: The `galgebra.deprecated` module is deprecated
from galgebra.deprecated import MV
reflect_test.py:10: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
a = MV('a','vector')
reflect_test.py:13: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
c = MV('c','vector')
(LaTeX preamble identical to the one shown above omitted.)
\begin{document}
\begin{equation*} a = a^{x} \boldsymbol{e}_{x} + a^{y} \boldsymbol{e}_{y} + a^{z} \boldsymbol{e}_{z} \end{equation*}
\begin{equation*} b = \boldsymbol{e}_{x} + \boldsymbol{e}_{y} + \boldsymbol{e}_{z} \end{equation*}
\begin{equation*} c = c^{w} \boldsymbol{e}_{w} + c^{x} \boldsymbol{e}_{x} + c^{y} \boldsymbol{e}_{y} + c^{z} \boldsymbol{e}_{z} \end{equation*}
\begin{equation*} a\mbox{ reflect in }xy = a^{x} \boldsymbol{e}_{x} + a^{y} \boldsymbol{e}_{y} - a^{z} \boldsymbol{e}_{z} \end{equation*}
\begin{equation*} a\mbox{ reflect in }yz = - a^{x} \boldsymbol{e}_{x} + a^{y} \boldsymbol{e}_{y} + a^{z} \boldsymbol{e}_{z} \end{equation*}
\begin{equation*} a\mbox{ reflect in }zx = a^{x} \boldsymbol{e}_{x} - a^{y} \boldsymbol{e}_{y} + a^{z} \boldsymbol{e}_{z} \end{equation*}
\begin{equation*} a\mbox{ reflect in plane }(x=y) = a^{y} \boldsymbol{e}_{x} + a^{x} \boldsymbol{e}_{y} + a^{z} \boldsymbol{e}_{z} \end{equation*}
\begin{equation*} b\mbox{ reflect in plane }(x+y+z=0) = - \boldsymbol{e}_{x} - \boldsymbol{e}_{y} - \boldsymbol{e}_{z} \end{equation*}
\begin{equation*} \mbox{Reflect in }\bm{e}_{x} = a^{x} \boldsymbol{e}_{x} - a^{y} \boldsymbol{e}_{y} - a^{z} \boldsymbol{e}_{z} \end{equation*}
\begin{equation*} \mbox{Reflect in }\bm{e}_{y} = - a^{x} \boldsymbol{e}_{x} + a^{y} \boldsymbol{e}_{y} - a^{z} \boldsymbol{e}_{z} \end{equation*}
\begin{equation*} \mbox{Reflect in }\bm{e}_{z} = - a^{x} \boldsymbol{e}_{x} - a^{y} \boldsymbol{e}_{y} + a^{z} \boldsymbol{e}_{z} \end{equation*}
\begin{equation*} c\mbox{ reflect in }xy = - c^{w} \boldsymbol{e}_{w} + c^{x} \boldsymbol{e}_{x} + c^{y} \boldsymbol{e}_{y} - c^{z} \boldsymbol{e}_{z} \end{equation*}
\begin{equation*} c\mbox{ reflect in }xyz = - c^{w} \boldsymbol{e}_{w} + c^{x} \boldsymbol{e}_{x} + c^{y} \boldsymbol{e}_{y} + c^{z} \boldsymbol{e}_{z} \end{equation*}
\begin{equation*} wx\mbox{ reflect in }yz = \boldsymbol{e}_{w}\wedge \boldsymbol{e}_{x} \end{equation*}
\begin{equation*} wx\mbox{ reflect in }xy = - \boldsymbol{e}_{w}\wedge \boldsymbol{e}_{x} \end{equation*}
\end{document}
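The reflections in `reflect_test` follow the standard sandwich formulas (commentary): reflecting a vector $a$ along a unit vector $n$ (i.e., in the line it spans) is $a \mapsto n\,a\,n$, while reflecting in the hyperplane with unit normal $n$ is $a \mapsto -\,n\,a\,n$. For example

$$e_x\,a\,e_x = a^x e_x - a^y e_y - a^z e_z,\qquad -\,e_z\,a\,e_z = a^x e_x + a^y e_y - a^z e_z,$$

which match the "Reflect in $e_x$" and "reflect in $xy$" lines above.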
```python
check_latex('simple_check_latex')
```
simple_check_latex.py:3: DeprecationWarning: The `galgebra.deprecated` module is deprecated
from galgebra.deprecated import MV
simple_check_latex.py:10: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
A = MV('A','mv')
simple_check_latex.py:19: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
X = MV('X','vector')
simple_check_latex.py:20: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
Y = MV('Y','vector')
simple_check_latex.py:35: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
X = MV('X','vector')
simple_check_latex.py:36: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
A = MV('A','spinor')
(LaTeX preamble identical to the one shown above omitted.)
\begin{document}
\begin{equation*} g_{ij} = \left[\begin{array}{ccc}\left (\boldsymbol{e}_{x}\cdot \boldsymbol{e}_{x}\right ) & \left (\boldsymbol{e}_{x}\cdot \boldsymbol{e}_{y}\right ) & \left (\boldsymbol{e}_{x}\cdot \boldsymbol{e}_{z}\right ) \\\left (\boldsymbol{e}_{x}\cdot \boldsymbol{e}_{y}\right ) & \left (\boldsymbol{e}_{y}\cdot \boldsymbol{e}_{y}\right ) & \left (\boldsymbol{e}_{y}\cdot \boldsymbol{e}_{z}\right ) \\\left (\boldsymbol{e}_{x}\cdot \boldsymbol{e}_{z}\right ) & \left (\boldsymbol{e}_{y}\cdot \boldsymbol{e}_{z}\right ) & \left (\boldsymbol{e}_{z}\cdot \boldsymbol{e}_{z}\right ) \end{array}\right] \end{equation*}
\begin{equation*} A = A + A^{x} \boldsymbol{e}_{x} + A^{y} \boldsymbol{e}_{y} + A^{z} \boldsymbol{e}_{z} + A^{xy} \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{y} + A^{xz} \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{z} + A^{yz} \boldsymbol{e}_{y}\wedge \boldsymbol{e}_{z} + A^{xyz} \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{y}\wedge \boldsymbol{e}_{z} \end{equation*}
\begin{equation*} A = \begin{aligned}[t] & A \\ & + A^{x} \boldsymbol{e}_{x} + A^{y} \boldsymbol{e}_{y} + A^{z} \boldsymbol{e}_{z} \\ & + A^{xy} \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{y} + A^{xz} \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{z} + A^{yz} \boldsymbol{e}_{y}\wedge \boldsymbol{e}_{z} \\ & + A^{xyz} \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{y}\wedge \boldsymbol{e}_{z} \end{aligned} \end{equation*}
\begin{equation*} A = \begin{aligned}[t] & A \\ & + A^{x} \boldsymbol{e}_{x} \\ & + A^{y} \boldsymbol{e}_{y} \\ & + A^{z} \boldsymbol{e}_{z} \\ & + A^{xy} \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{y} \\ & + A^{xz} \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{z} \\ & + A^{yz} \boldsymbol{e}_{y}\wedge \boldsymbol{e}_{z} \\ & + A^{xyz} \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{y}\wedge \boldsymbol{e}_{z} \end{aligned} \end{equation*}
\begin{equation*} X = X^{x} \boldsymbol{e}_{x} + X^{y} \boldsymbol{e}_{y} + X^{z} \boldsymbol{e}_{z} \end{equation*}
\begin{equation*} Y = Y^{x} \boldsymbol{e}_{x} + Y^{y} \boldsymbol{e}_{y} + Y^{z} \boldsymbol{e}_{z} \end{equation*}
\begin{equation*} g_{ij} = \left[\begin{array}{cc}\left (\boldsymbol{e}_{x}\cdot \boldsymbol{e}_{x}\right ) & \left (\boldsymbol{e}_{x}\cdot \boldsymbol{e}_{y}\right ) \\\left (\boldsymbol{e}_{x}\cdot \boldsymbol{e}_{y}\right ) & \left (\boldsymbol{e}_{y}\cdot \boldsymbol{e}_{y}\right ) \end{array}\right] \end{equation*}
\begin{equation*} X = X^{x} \boldsymbol{e}_{x} + X^{y} \boldsymbol{e}_{y} \end{equation*}
\begin{equation*} A = A + A^{xy} \boldsymbol{e}_{x}\wedge \boldsymbol{e}_{y} \end{equation*}
\end{document}
```python
run('simple_check')
```
u__x*e_x + u__y*e_y + u__z*e_z
v__x*e_x + v__y*e_y + v__z*e_z
w__x*e_x + w__y*e_y + w__z*e_z
(u__x*v__y - u__y*v__x)*e_x^e_y + (u__x*v__z - u__z*v__x)*e_x^e_z + (u__y*v__z - u__z*v__y)*e_y^e_z
True
(u__x*v__y*w__z - u__x*v__z*w__y - u__y*v__x*w__z + u__y*v__z*w__x + u__z*v__x*w__y - u__z*v__y*w__x)*e_x^e_y^e_z
True
-u__x**2*v__y**2 - u__x**2*v__z**2 + 2*u__x*u__y*v__x*v__y + 2*u__x*u__z*v__x*v__z - u__y**2*v__x**2 - u__y**2*v__z**2 + 2*u__y*u__z*v__y*v__z - u__z**2*v__x**2 - u__z**2*v__y**2
simple_check.py:5: DeprecationWarning: The `galgebra.deprecated` module is deprecated
from galgebra.deprecated import MV
simple_check.py:13: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
u = MV('u','vector')
simple_check.py:14: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
v = MV('v','vector')
simple_check.py:15: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
w = MV('w','vector')
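The last scalar printed by `simple_check` is the component expansion of $(u\wedge v)^2$ (an inference from the output; the bivector being squared is the one printed two results earlier). It is Lagrange's identity in geometric-algebra form:

$$(u\wedge v)^2 = (u\cdot v)^2 - u^2\,v^2.$$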
```python
check_latex('spherical_latex')
```
spherical_latex.py:5: DeprecationWarning: The `galgebra.deprecated` module is deprecated
from galgebra.deprecated import MV
spherical_latex.py:14: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
f = MV('f','scalar',fct=True)
spherical_latex.py:15: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
A = MV('A','vector',fct=True)
spherical_latex.py:16: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
B = MV('B','grade2',fct=True)
(LaTeX preamble identical to the one shown above omitted.)
\begin{document}
\begin{lstlisting}[language=Python,showspaces=false,showstringspaces=false,backgroundcolor=\color{gray},frame=single]
def derivatives_in_spherical_coordinates():
Print_Function()
X = (r,th,phi) = symbols('r theta phi')
curv = [[r*cos(phi)*sin(th),r*sin(phi)*sin(th),r*cos(th)],[1,r,r*sin(th)]]
(er,eth,ephi,grad) = MV.setup('e_r e_theta e_phi',metric='[1,1,1]',coords=X,curv=curv)
f = MV('f','scalar',fct=True)
A = MV('A','vector',fct=True)
B = MV('B','grade2',fct=True)
print('f =',f)
print('A =',A)
print('B =',B)
print('grad*f =',grad*f)
print('grad|A =',grad|A)
print('-I*(grad^A) =',-MV.I*(grad^A))
print('grad^B =',grad^B)
return
\end{lstlisting}
Code Output:
\begin{equation*} f = f \end{equation*}
\begin{equation*} A = A^{r} \boldsymbol{e}_{r} + A^{\theta } \boldsymbol{e}_{\theta } + A^{\phi } \boldsymbol{e}_{\phi } \end{equation*}
\begin{equation*} B = B^{r\theta } \boldsymbol{e}_{r}\wedge \boldsymbol{e}_{\theta } + B^{r\phi } \boldsymbol{e}_{r}\wedge \boldsymbol{e}_{\phi } + B^{\theta \phi } \boldsymbol{e}_{\theta }\wedge \boldsymbol{e}_{\phi } \end{equation*}
\begin{equation*} \boldsymbol{\nabla} f = \partial_{r} f \boldsymbol{e}_{r} + \frac{\partial_{\theta } f }{r^{2}} \boldsymbol{e}_{\theta } + \frac{\partial_{\phi } f }{r^{2} {\sin{\left (\theta \right )}}^{2}} \boldsymbol{e}_{\phi } \end{equation*}
\begin{equation*} \boldsymbol{\nabla} \cdot A = \frac{A^{\theta } }{\tan{\left (\theta \right )}} + \partial_{\phi } A^{\phi } + \partial_{r} A^{r} + \partial_{\theta } A^{\theta } + \frac{2 A^{r} }{r} \end{equation*}
\begin{equation*} -I (\boldsymbol{\nabla} \W A) = \frac{\sqrt{r^{4} {\sin{\left (\theta \right )}}^{2}} \left(\frac{2 A^{\phi } }{\tan{\left (\theta \right )}} + \partial_{\theta } A^{\phi } - \frac{\partial_{\phi } A^{\theta } }{{\sin{\left (\theta \right )}}^{2}}\right)}{r^{2}} \boldsymbol{e}_{r} + \frac{- r^{2} {\sin{\left (\theta \right )}}^{2} \partial_{r} A^{\phi } - 2 r A^{\phi } {\sin{\left (\theta \right )}}^{2} + \partial_{\phi } A^{r} }{\sqrt{r^{4} {\sin{\left (\theta \right )}}^{2}}} \boldsymbol{e}_{\theta } + \frac{r^{2} \partial_{r} A^{\theta } + 2 r A^{\theta } - \partial_{\theta } A^{r} }{\sqrt{r^{4} {\sin{\left (\theta \right )}}^{2}}} \boldsymbol{e}_{\phi } \end{equation*}
\begin{equation*} \boldsymbol{\nabla} \W B = \frac{r^{2} \partial_{r} B^{\theta \phi } + 4 r B^{\theta \phi } - \frac{2 B^{r\phi } }{\tan{\left (\theta \right )}} - \partial_{\theta } B^{r\phi } + \frac{\partial_{\phi } B^{r\theta } }{{\sin{\left (\theta \right )}}^{2}}}{r^{2}} \boldsymbol{e}_{r}\wedge \boldsymbol{e}_{\theta }\wedge \boldsymbol{e}_{\phi } \end{equation*}
\end{document}
```python
run('terminal_check')
```
A = A + A__x*e_x + A__y*e_y + A__z*e_z + A__xy*e_x^e_y + A__xz*e_x^e_z + A__yz*e_y^e_z + A__xyz*e_x^e_y^e_z
A = A
+ A__x*e_x + A__y*e_y + A__z*e_z
+ A__xy*e_x^e_y + A__xz*e_x^e_z + A__yz*e_y^e_z
+ A__xyz*e_x^e_y^e_z
A = A
+ A__x*e_x
+ A__y*e_y
+ A__z*e_z
+ A__xy*e_x^e_y
+ A__xz*e_x^e_z
+ A__yz*e_y^e_z
+ A__xyz*e_x^e_y^e_z
g_{ij} =
Matrix([
[(e_x.e_x), (e_x.e_y), (e_x.e_z)],
[(e_x.e_y), (e_y.e_y), (e_y.e_z)],
[(e_x.e_z), (e_y.e_z), (e_z.e_z)]])
X = X__x*e_x + X__y*e_y + X__z*e_z
Y = Y__x*e_x + Y__y*e_y + Y__z*e_z
g_{ij} =
Matrix([
[(e_x.e_x), (e_x.e_y)],
[(e_x.e_y), (e_y.e_y)]])
X = X__x*e_x + X__y*e_y
A = A + A__xy*e_x^e_y
g_{ii} =
Matrix([
[1, 0],
[0, 1]])
X = X__x*e_x + X__y*e_y
A = A + A__xy*e_x^e_y
g_{ij} =
Matrix([
[(a.a), (a.b), (a.c), (a.d), (a.e)],
[(a.b), (b.b), (b.c), (b.d), (b.e)],
[(a.c), (b.c), (c.c), (c.d), (c.e)],
[(a.d), (b.d), (c.d), (d.d), (d.e)],
[(a.e), (b.e), (c.e), (d.e), (e.e)]])
a|(b*c) = -(a.c)*b + (a.b)*c
a|(b^c) = -(a.c)*b + (a.b)*c
a|(b^c^d) = (a.d)*b^c - (a.c)*b^d + (a.b)*c^d
a|(b^c)+c|(a^b)+b|(c^a) = 0
a*(b^c)-b*(a^c)+c*(a^b) = 3*a^b^c
a*(b^c^d)-b*(a^c^d)+c*(a^b^d)-d*(a^b^c) = 4*a^b^c^d
(a^b)|(c^d) = -(a.c)*(b.d) + (a.d)*(b.c)
((a^b)|c)|d = -(a.c)*(b.d) + (a.d)*(b.c)
(a^b)x(c^d) = -(b.d)*a^c + (b.c)*a^d + (a.d)*b^c - (a.c)*b^d
(a|(b^c))|(d^e) = (-(a.b)*(c.e) + (a.c)*(b.e))*d + ((a.b)*(c.d) - (a.c)*(b.d))*e
f = f
A = A__x*e_x + A__y*e_y + A__z*e_z
B = B__xy*e_x^e_y + B__xz*e_x^e_z + B__yz*e_y^e_z
C = C + C__x*e_x + C__y*e_y + C__z*e_z + C__xy*e_x^e_y + C__xz*e_x^e_z + C__yz*e_y^e_z + C__xyz*e_x^e_y^e_z
grad*f = D{x}f*e_x + D{y}f*e_y + D{z}f*e_z
grad|A = D{x}A__x + D{y}A__y + D{z}A__z
grad*A = D{x}A__x + D{y}A__y + D{z}A__z + (-D{y}A__x + D{x}A__y)*e_x^e_y + (-D{z}A__x + D{x}A__z)*e_x^e_z + (-D{z}A__y + D{y}A__z)*e_y^e_z
-I*(grad^A) = (-D{z}A__y + D{y}A__z)*e_x + (D{z}A__x - D{x}A__z)*e_y + (-D{y}A__x + D{x}A__y)*e_z
grad*B = (-D{y}B__xy - D{z}B__xz)*e_x + (D{x}B__xy - D{z}B__yz)*e_y + (D{x}B__xz + D{y}B__yz)*e_z + (D{z}B__xy - D{y}B__xz + D{x}B__yz)*e_x^e_y^e_z
grad^B = (D{z}B__xy - D{y}B__xz + D{x}B__yz)*e_x^e_y^e_z
grad|B = (-D{y}B__xy - D{z}B__xz)*e_x + (D{x}B__xy - D{z}B__yz)*e_y + (D{x}B__xz + D{y}B__yz)*e_z
grad<A = D{x}A__x + D{y}A__y + D{z}A__z
grad>A = D{x}A__x + D{y}A__y + D{z}A__z
grad<B = (-D{y}B__xy - D{z}B__xz)*e_x + (D{x}B__xy - D{z}B__yz)*e_y + (D{x}B__xz + D{y}B__yz)*e_z
grad>B = 0
grad<C = D{x}C__x + D{y}C__y + D{z}C__z + (-D{y}C__xy - D{z}C__xz)*e_x + (D{x}C__xy - D{z}C__yz)*e_y + (D{x}C__xz + D{y}C__yz)*e_z + D{z}C__xyz*e_x^e_y - D{y}C__xyz*e_x^e_z + D{x}C__xyz*e_y^e_z
grad>C = D{x}C__x + D{y}C__y + D{z}C__z + D{x}C*e_x + D{y}C*e_y + D{z}C*e_z
f = f
A = A__r*e_r + A__theta*e_theta + A__phi*e_phi
B = B__rtheta*e_r^e_theta + B__rphi*e_r^e_phi + B__thetaphi*e_theta^e_phi
grad*f = D{r}f*e_r + D{theta}f*e_theta/r**2 + D{phi}f*e_phi/(r**2*sin(theta)**2)
grad|A = A__theta/tan(theta) + D{phi}A__phi + D{r}A__r + D{theta}A__theta + 2*A__r/r
-I*(grad^A) = sqrt(r**4*sin(theta)**2)*(2*A__phi/tan(theta) + D{theta}A__phi - D{phi}A__theta/sin(theta)**2)*e_r/r**2 + (-r**2*sin(theta)**2*D{r}A__phi - 2*r*A__phi*sin(theta)**2 + D{phi}A__r)*e_theta/sqrt(r**4*sin(theta)**2) + (r**2*D{r}A__theta + 2*r*A__theta - D{theta}A__r)*e_phi/sqrt(r**4*sin(theta)**2)
grad^B = (r**2*D{r}B__thetaphi + 4*r*B__thetaphi - 2*B__rphi/tan(theta) - D{theta}B__rphi + D{phi}B__rtheta/sin(theta)**2)*e_r^e_theta^e_phi/r**2
X = 1.2*e_x + 2.34*e_y + 0.555*e_z
Nga(X,2) = 1.2*e_x + 2.3*e_y + 0.55*e_z
X*Y = 12.7011 + 4.02078*e_x^e_y + 6.175185*e_x^e_z + 10.182*e_y^e_z
Nga(X*Y,2) = 13.0 + 4.0*e_x^e_y + 6.2*e_x^e_z + 10.0*e_y^e_z
g_{ij} =
Matrix([
[1, 0, 0, 0, 0],
[0, 1, 0, 0, 0],
[0, 0, 1, 0, 0],
[0, 0, 0, 0, 2],
[0, 0, 0, 2, 0]])
F(a) = e_1 + n/2 - nbar/2
F(b) = e_2 + n/2 - nbar/2
F(c) = -e_1 + n/2 - nbar/2
F(d) = e_3 + n/2 - nbar/2
F(x) = x1*e_1 + x2*e_2 + x3*e_3 + (x1**2/2 + x2**2/2 + x3**2/2)*n - nbar/2
a = e1, b = e2, c = -e1, and d = e3
A = F(a) = 1/2*(a*a*n+2*a-nbar), etc.
Circle through a, b, and c
Circle: A^B^C^X = 0 = -x3*e_1^e_2^e_3^n + x3*e_1^e_2^e_3^nbar + (x1**2/2 + x2**2/2 + x3**2/2 - 1/2)*e_1^e_2^n^nbar
Line through a and b
Line : A^B^n^X = 0 = -x3*e_1^e_2^e_3^n + (x1/2 + x2/2 - 1/2)*e_1^e_2^n^nbar + x3*e_1^e_3^n^nbar/2 - x3*e_2^e_3^n^nbar/2
Sphere through a, b, c, and d
Sphere: A^B^C^D^X = 0 = (-x1**2/2 - x2**2/2 - x3**2/2 + 1/2)*e_1^e_2^e_3^n^nbar
Plane through a, b, and d
Plane : A^B^n^D^X = 0 = (-x1/2 - x2/2 - x3/2 + 1/2)*e_1^e_2^e_3^n^nbar
g_{ij} =
Matrix([
[(p1.p1), (p1.p2), (p1.p3), 0, 0],
[(p1.p2), (p2.p2), (p2.p3), 0, 0],
[(p1.p3), (p2.p3), (p3.p3), 0, 0],
[ 0, 0, 0, 0, 2],
[ 0, 0, 0, 2, 0]])
Extracting direction of line from L = P1^P2^n
(L|n)|nbar = 2*p1 - 2*p2
Extracting plane of circle from C = P1^P2^P3
((C^n)|n)|nbar = 2*p1^p2 - 2*p1^p3 + 2*p2^p3
(p2-p1)^(p3-p1) = p1^p2 - p1^p3 + p2^p3
g_{ij} =
Matrix([
[ 0, -1, (P1.a)],
[ -1, 0, (P2.a)],
[(P1.a), (P2.a), (a.a)]])
B**2 = 1
a' = a-(a^B)*B = -(P2.a)*P1 - (P1.a)*P2
A+ = a'+a'*B = -2*(P2.a)*P1
A- = a'-a'*B = -2*(P1.a)*P2
(A+)^2 = 0
(A-)^2 = 0
a|B = -(P2.a)*P1 + (P1.a)*P2
g_{ij} =
Matrix([
[ 1, (e1.e2), (e1.e3)],
[(e1.e2), 1, (e2.e3)],
[(e1.e3), (e2.e3), 1]])
E = e1^e2^e3
E**2 = (e1.e2)**2 - 2*(e1.e2)*(e1.e3)*(e2.e3) + (e1.e3)**2 + (e2.e3)**2 - 1
E1 = (e2^e3)*E = ((e2.e3)**2 - 1)*e1 + ((e1.e2) - (e1.e3)*(e2.e3))*e2 + (-(e1.e2)*(e2.e3) + (e1.e3))*e3
E2 =-(e1^e3)*E = ((e1.e2) - (e1.e3)*(e2.e3))*e1 + ((e1.e3)**2 - 1)*e2 + (-(e1.e2)*(e1.e3) + (e2.e3))*e3
E3 = (e1^e2)*E = (-(e1.e2)*(e2.e3) + (e1.e3))*e1 + (-(e1.e2)*(e1.e3) + (e2.e3))*e2 + ((e1.e2)**2 - 1)*e3
E1|e2 = 0
E1|e3 = 0
E2|e1 = 0
E2|e3 = 0
E3|e1 = 0
E3|e2 = 0
(E1|e1)/E**2 = 1
(E2|e2)/E**2 = 1
(E3|e3)/E**2 = 1
terminal_check.py:6: DeprecationWarning: The `galgebra.deprecated` module is deprecated
from galgebra.deprecated import MV
terminal_check.py:13: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
A = MV('A','mv')
terminal_check.py:19: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
X = MV('X','vector')
terminal_check.py:20: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
Y = MV('Y','vector')
terminal_check.py:35: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
X = MV('X','vector')
terminal_check.py:36: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
A = MV('A','spinor')
terminal_check.py:49: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
X = MV('X','vector')
terminal_check.py:50: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
A = MV('A','spinor')
terminal_check.py:90: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
f = MV('f','scalar',fct=True)
terminal_check.py:91: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
A = MV('A','vector',fct=True)
terminal_check.py:92: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
B = MV('B','grade2',fct=True)
terminal_check.py:93: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
C = MV('C','mv',fct=True)
terminal_check.py:123: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
f = MV('f','scalar',fct=True)
terminal_check.py:124: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
A = MV('A','vector',fct=True)
terminal_check.py:125: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
B = MV('B','grade2',fct=True)
terminal_check.py:267: DeprecationWarning: The `galgebra.deprecated.MV` class is deprecated in favor of `galgebra.mv.Mv`.
a = MV(sym_lst,'vector')
| 72caade86484e5b766085d9f1b96f7036bee8578 | 141,922 | ipynb | Jupyter Notebook | examples/ipython/Old Format.ipynb | waldyrious/galgebra | b5eb070340434d030dd737a5656fbf709538b0b1 | [
"BSD-3-Clause"
] | null | null | null | examples/ipython/Old Format.ipynb | waldyrious/galgebra | b5eb070340434d030dd737a5656fbf709538b0b1 | [
"BSD-3-Clause"
] | null | null | null | examples/ipython/Old Format.ipynb | waldyrious/galgebra | b5eb070340434d030dd737a5656fbf709538b0b1 | [
"BSD-3-Clause"
] | null | null | null | 77.722892 | 1,032 | 0.550218 | true | 49,098 | Qwen/Qwen-72B | 1. YES
2. YES | 0.798187 | 0.644225 | 0.514212 | __label__eng_Latn | 0.112134 | 0.033016 |
```python
# Front matter
import os
import glob
import re
import pandas as pd
import numpy as np
import scipy.constants as constants
import sympy as sp
from sympy import Matrix, Symbol
from sympy.utilities.lambdify import lambdify
import matplotlib
import matplotlib.pyplot as plt
from matplotlib.ticker import AutoMinorLocator
from matplotlib import gridspec
# Magic function to make matplotlib inline; other style specs must come AFTER
%matplotlib inline
# Seaborn, useful for graphics
import seaborn as sns
matplotlib.rc('xtick', labelsize=14)
matplotlib.rc('ytick', labelsize=14)
rc = {'lines.linewidth': 1,
'axes.labelsize': 20,
'axes.titlesize': 20,
'legend.fontsize': 26,
'xtick.direction': u'in',
'ytick.direction': u'in'}
sns.set_style('ticks', rc=rc)
```
```python
# Functions
# Numeric Vinet EOS, used for everything except calculating dP
def VinetEOS(V,V0,K0,Kprime0):
A = V/V0
P = 3*K0*A**(-2/3) * (1-A**(1/3)) * np.exp((3/2)*(Kprime0-1)*(1-A**(1/3)))
return P
# Symbolic Vinet EOS, needed to calculate dP
def VinetEOS_sym(V,V0,K0,Kprime0):
A = V/V0
P = 3*K0*A**(-2/3) * (1-A**(1/3)) * sp.exp((3/2)*(Kprime0-1)*(1-A**(1/3)))
return P
def getEOSparams(EOS_df, phase):
V0 = np.float(EOS_df[EOS_df['Phase'] == phase]['V0'])
K0 = np.float(EOS_df[EOS_df['Phase'] == phase]['K0'])
Kprime0 = np.float(EOS_df[EOS_df['Phase'] == phase]['Kprime0'])
return V0, K0, Kprime0
# Create a covariance matrix from EOS_df with V0, K0, and K0prime; used to get dP
def getCov3(EOS_df, phase):
dV0 = np.float(EOS_df[EOS_df['Phase'] == phase]['dV0'])
dK0 = np.float(EOS_df[EOS_df['Phase'] == phase]['dK0'])
dKprime0 = np.float(EOS_df[EOS_df['Phase'] == phase]['dKprime0'])
V0K0_corr = np.float(EOS_df[EOS_df['Phase'] == phase]['V0K0 corr'])
V0Kprime0_corr = np.float(EOS_df[EOS_df['Phase'] == phase]['V0Kprime0 corr'])
K0Kprime0_corr = np.float(EOS_df[EOS_df['Phase'] == phase]['K0Kprime0 corr'])
corr_matrix = np.eye(3)
corr_matrix[0,1] = V0K0_corr
corr_matrix[1,0] = V0K0_corr
corr_matrix[0,2] = V0Kprime0_corr
corr_matrix[2,0] = V0Kprime0_corr
corr_matrix[1,2] = K0Kprime0_corr
corr_matrix[2,1] = K0Kprime0_corr
# print(corr_matrix)
sigmas = np.array([[dV0,dK0,dKprime0]])
cov = (sigmas.T@sigmas)*corr_matrix
return cov
# Create a covariance matrix with V, V0, K0, and K0prime; used to get dP
def getVinetCov(dV, EOS_df, phase):
cov3 = getCov3(EOS_df, phase)
cov = np.eye(4)
cov[1:4,1:4] = cov3
cov[0,0] = dV**2
return cov
def calc_dP_VinetEOS(V, dV, EOS_df, phase):
# Create function for Jacobian of Vinet EOS
a,b,c,d = Symbol('a'),Symbol('b'),Symbol('c'),Symbol('d') # Symbolic variables V, V0, K0, K'0
Vinet_matrix = Matrix([VinetEOS_sym(a,b,c,d)]) # Create a symbolic Vinet EOS matrix
param_matrix = Matrix([a,b,c,d]) # Create a matrix of symbolic variables
# Symbolically take the Jacobian of the Vinet EOS and turn into a column matrix
J_sym = Vinet_matrix.jacobian(param_matrix).T
# Create a numpy function for the above expression
# (easier to work with numerically)
J_Vinet = lambdify((a,b,c,d), J_sym, 'numpy')
J = J_Vinet(V,*getEOSparams(EOS_df, phase)) # Calculate Jacobian
# print(J)
cov = getVinetCov(dV, EOS_df, phase) # Calculate covariance matrix
# print(cov)
dP = (J.T@cov@J).item() # Calculate uncertainty and convert to a scalar
# print(dP)
return dP
def plot_results(V_array,P_array,dP_array):
# Plot results
fig, (ax0, ax1) = plt.subplots(nrows = 2, ncols=1, sharex=True, figsize=(8, 8),
gridspec_kw = {'height_ratios':[3, 1]})
h0, = ax0.plot(V_array,P_array,'-',lw=1)
ax0.fill_between(V_array,P_array+dP_array,P_array-dP_array,alpha=0.3)
h1, = ax1.plot(V_array,np.zeros(len(P_array)),'-',lw=1)
ax1.fill_between(V_array,V_array-V_array-dP_array,dP_array,alpha=0.3)
ax0.set_ylabel(r'Pressure (GPa)', fontsize=18)
ax1.set_ylabel(r'Pressure Uncertainty (GPa)', fontsize=18)
ax1.set_xlabel(r'Volume ($\AA^3$)', fontsize=18)
# Fine-tune figure; make subplots close to each other and hide x ticks for
# all but bottom plot.
fig.subplots_adjust(hspace=0)
plt.setp([ax0.get_xticklabels() for a in fig.axes[:1]], visible=False);
```
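For context, `calc_dP_VinetEOS` above implements standard first-order (linearized) error propagation through the Vinet EOS: with the Jacobian $J = \partial P/\partial(V, V_0, K_0, K_0')$ evaluated at the measured volume and $\Sigma$ the covariance matrix assembled by `getVinetCov` from the tabulated uncertainties and correlations, the propagated pressure variance is

$$\sigma_P^2 = J^{\mathsf{T}} \Sigma\, J$$

The quantity returned as `dP` is exactly this product, so taking its square root gives the one-standard-deviation pressure uncertainty.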
```python
# Import EOS information
EOS_df = pd.read_csv('FeAlloyEOS.csv',engine='python')
```
```python
phase = 'hcp Fe'
# Define simple V and dV arrays to check calc_dP_VinetEOS()
V_array = np.linspace(20.5,16.5,41)
dV_array = np.zeros(len(V_array))
# Calculate P and dP
P_array = VinetEOS(V_array,*getEOSparams(EOS_df, phase))
dP_array =[calc_dP_VinetEOS(V,dV,EOS_df, phase) for V, dV in zip(V_array,dV_array)]
plot_results(V_array,P_array,dP_array)
```
```python
phase = 'hcp FeNi'
# Define simple V and dV arrays to check calc_dP_VinetEOS()
V_array = np.linspace(20.5,16.5,41)
dV_array = np.zeros(len(V_array))
# Calculate P and dP
P_array = VinetEOS(V_array,*getEOSparams(EOS_df, phase))
dP_array =[calc_dP_VinetEOS(V,dV,EOS_df, phase) for V, dV in zip(V_array,dV_array)]
plot_results(V_array,P_array,dP_array)
```
```python
phase = 'hcp FeNiSi'
# Define simple V and dV arrays to check calc_dP_VinetEOS()
V_array = np.linspace(20.5,16.0,41)
dV_array = np.zeros(len(V_array))
# Calculate P and dP
P_array = VinetEOS(V_array,*getEOSparams(EOS_df, phase))
dP_array =[calc_dP_VinetEOS(V,dV,EOS_df, phase) for V, dV in zip(V_array,dV_array)]
plot_results(V_array,P_array,dP_array)
```
```python
phase = 'hcp FeNi'
# Define simple V and dV arrays to check calc_dP_VinetEOS()
V_array = np.linspace(20.5,16.8,41)
dV_array = 0.07*np.ones(len(V_array))
# Calculate P and dP
P_array = VinetEOS(V_array,*getEOSparams(EOS_df, phase))
dP_array =[calc_dP_VinetEOS(V,dV,EOS_df, phase) for V, dV in zip(V_array,dV_array)]
plot_results(V_array,P_array,dP_array)
```
```python
phase = 'hcp FeNiSi'
# Define simple V and dV arrays to check calc_dP_VinetEOS()
V_array = np.linspace(20.5,16.0,41)
dV_array = 0.07*np.ones(len(V_array))
# Calculate P and dP
P_array = VinetEOS(V_array,*getEOSparams(EOS_df, phase))
dP_array =[calc_dP_VinetEOS(V,dV,EOS_df, phase) for V, dV in zip(V_array,dV_array)]
plot_results(V_array,P_array,dP_array)
```
```python
# Covariance agrees with MINUTI
dP = calc_dP_VinetEOS(V,dV,EOS_df, phase)
```
[[-11.7503395 ]
[ 10.49503136]
[ 0.16110025]
[ 1.11820322]]
[[ 1. -0.99 0.96]
[-0.99 1. -0.99]
[ 0.96 -0.99 1. ]]
```python
# Confirm Jacobian is working
phase = 'hcp FeNiSi'
V = 20.5
V0, K0, Kprime0 = getEOSparams(EOS_df, phase)
# Check Jacobian
dP_dV = (K0/(2*V))*(V/V0)**(-2/3) * np.exp((-3/2)*(Kprime0-1)*((V/V0)**(1/3)-1)) * (3*Kprime0*((V/V0)**(2/3)-(V/V0)**(1/3)) - 3*(V/V0)**(2/3) + 5*(V/V0)**(1/3) - 4)
print(dP_dV)
dP_dV0 = -(K0*V*np.exp((3/2)*(Kprime0 - 1)*(1 - (V/V0)**(1/3)))*(3*Kprime0*(V/V0)**(2/3) - 3*Kprime0*(V/V0)**(1/3) - 3*(V/V0)**(2/3) + 5*(V/V0)**(1/3) - 4))/(2*V0**2*(V/V0)**(5/3))
print(dP_dV0)
dP_dK0 = (V/V0)**(-2/3)*3*(1 - (V/V0)**(1/3))*np.exp((3/2)*(Kprime0 - 1)*(1 - (V/V0)**(1/3)))
print(dP_dK0)
dP_dKprime0 = (1/2)*(V/V0)**(-2/3)*(9*K0*(1 - (V/V0)**(1/3))**2*np.exp((3/2)*(Kprime0-1)*(1-(V/V0)**(1/3))))
print(dP_dKprime0)
A = (V/V0)**(1/3)
B = (V/V0)**(2/3)
C = (V/V0)**(-2/3)
D = (V/V0)**(-5/3)
dP_dV = (K0/(2*V))*C * np.exp((-3/2)*(Kprime0-1)*(A-1)) * (3*Kprime0*(B-A) - 3*B + 5*A - 4)
print(dP_dV)
dP_dV0 = (-K0*V/(2*V0**2))*D * np.exp((3/2)*(Kprime0-1)*(1-A)) * (3*Kprime0*B - 3*Kprime0*A - 3*B + 5*A - 4)
print(dP_dV0)
dP_dK0 = 3*C*(1-A) * np.exp((3/2)*(Kprime0-1)*(1-A))
print(dP_dK0)
dP_dKprime0 = (9*K0/2)*C*(1-A)**2 * np.exp((3/2)*(Kprime0-1)*(1-A))
print(dP_dKprime0)
```
-11.7503395013
10.4950313601
0.161100248313
1.11820322081
-11.7503395013
10.4950313601
0.161100248313
1.11820322081
```python
phase = 'hcp FeNiSi'
# Define simple V and dV arrays to check calc_dP_VinetEOS()
V_array = np.linspace(23.0,16.5,41)
dV_array = np.zeros(len(V_array))
# Calculate P and dP
P_array = VinetEOS(V_array,*getEOSparams(EOS_df, phase))
# dP_array =[calc_dP_VinetEOS(V,dV,EOS_df, phase) for V, dV in zip(V_array,dV_array)]
def calc_dP_VinetEOS_crop(V,dV,EOS_df, phase):
# Create function for Jacobian of Vinet EOS
a,b,c,d = Symbol('a'),Symbol('b'),Symbol('c'),Symbol('d') # Symbolic variables V, V0, K0, K'0
Vinet_matrix = Matrix([VinetEOS_sym(a,b,c,d)]) # Create a symbolic Vinet EOS matrix
param_matrix = Matrix([a,b,c,d]) # Create a matrix of symbolic variables
# Symbolically take the Jacobian of the Vinet EOS and turn into a column matrix
J_sym = Vinet_matrix.jacobian(param_matrix).T
# Create a numpy function for the above expression
# (easier to work with numerically)
J_Vinet = lambdify((a,b,c,d), J_sym, 'numpy')
J = J_Vinet(V,*getEOSparams(EOS_df, phase)) # Calculate Jacobian
J_crop = J[1:4]
print(J_crop)
cov = getVinetCov(dV, EOS_df, phase) # Calculate covariance matrix
print(cov)
cov_crop = cov[1:4,1:4]
dP = (J_crop.T@cov_crop@J_crop).item() # Calculate uncertainty and convert to a scalar
return dP
dP_array =[calc_dP_VinetEOS_crop(V,dV,EOS_df, phase) for V, dV in zip(V_array,dV_array)]
# plot_results(V_array,P_array,dP_array)
```
[[ 1. -0.99 0.96]
[-0.99 1. -0.99]
[ 0.96 -0.99 1. ]]
```python
```
| 61b2cb067d98b926c4332efd1a5a82ff9ce64202 | 245,704 | ipynb | Jupyter Notebook | 010_XRDAnalysis/Check_P_Error_Propagation.ipynb | r-a-morrison/fe_alloy_sound_velocities | 8da1b0d073e93fb4b4be3d61b73e58b7a7a3097b | [
"MIT"
] | null | null | null | 010_XRDAnalysis/Check_P_Error_Propagation.ipynb | r-a-morrison/fe_alloy_sound_velocities | 8da1b0d073e93fb4b4be3d61b73e58b7a7a3097b | [
"MIT"
] | null | null | null | 010_XRDAnalysis/Check_P_Error_Propagation.ipynb | r-a-morrison/fe_alloy_sound_velocities | 8da1b0d073e93fb4b4be3d61b73e58b7a7a3097b | [
"MIT"
] | null | null | null | 542.392936 | 50,176 | 0.929492 | true | 3,460 | Qwen/Qwen-72B | 1. YES
2. YES | 0.855851 | 0.72487 | 0.620381 | __label__eng_Latn | 0.279211 | 0.279684 |
# MACD Analysis and Buy/Sell Signals
> MACD, short for moving average convergence/divergence, is a trading indicator used in technical analysis of stock prices.
Source: https://en.wikipedia.org/wiki/MACD
We analyse stock data using MACD and generate buy and sell signals.
```python
# Parameters for MACD computation
# fast_window: window length of the fast EMA
# slow_window: window length of the slow EMA
# signal_window: window length of the signal EMA
fast_window = 12
slow_window = 26
signal_window = 9
# stock's data as csv file from ariva.de
input_data_file = '../data/external/data.csv'
# notebook stores the result in output_data_path
output_data_path = '../data/interim'
# notebook name; required for outfile naming convention
# Can't be acquired by javascript approaches when 'run all'
# see: https://github.com/jupyter/notebook/issues/1622
nb_file = 'MACD_BuySell'
```
#### Some installs and imports
```python
# Install a pip module in the current Jupyter kernel
import sys
try:
from stockstats import StockDataFrame as Sdf
except ImportError as err:
print("Handling run-time error: ", err)
print("Will now install missing module.")
!{sys.executable} -m pip install stockstats
try:
import seaborn as sns
except ImportError as err:
print("Handling run-time error: ", err)
print("Will now install missing module.")
!{sys.executable} -m pip install seaborn
try:
import papermill as pm
except ImportError as err:
print("Handling run-time error: ", err)
print("Will now install missing module.")
!{sys.executable} -m pip install papermill
```
```python
# other imports
# manipulating data
import pandas as pd
import numpy as np
from stockstats import StockDataFrame as Sdf
# plotting
from matplotlib import pyplot as plt
%matplotlib inline
import seaborn as sns
import matplotlib.style as style
# notebook parameterizing and automation
import papermill as pm
```
## MACD Function using Parameters
Below is an example of the MACD calculation:
The MACD line (macd): 12-day EMA - 26-day EMA
Signal Line (macds): 9-day EMA of MACD Line
The parameters are (fast, slow, signal) = (12, 26, 9).
We generalize the example above and compute the parameterized MACD as follows:
\begin{align}
MACD^{fast, slow}(t) & = EMA^{fast}(close, t) - EMA^{slow}(close, t) \\
Signal^{sig}(t) & = EMA^{sig}(MACD^{fast, slow}(t), t) \\
\end{align}
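To make the formulas concrete, here is a minimal, self-contained sketch of the same computation done directly with pandas' exponentially weighted means. The helper `macd_pandas` is hypothetical and not used elsewhere in this notebook; note also that `stockstats` initializes its EMAs slightly differently, so the first values may not match exactly.

```python
import pandas as pd

def macd_pandas(close: pd.Series, fast_window=12, slow_window=26, signal_window=9):
    """Compute the MACD line, signal line and histogram from a series of closing prices."""
    ema_fast = close.ewm(span=fast_window, adjust=False).mean()    # EMA^fast(close)
    ema_slow = close.ewm(span=slow_window, adjust=False).mean()    # EMA^slow(close)
    macd = ema_fast - ema_slow                                     # MACD line
    macds = macd.ewm(span=signal_window, adjust=False).mean()      # signal line
    macdh = macd - macds                                           # MACD histogram
    return macd, macds, macdh
```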
```python
#
# original from stockstats source
# modified to paramterize it
#
def macd_params(df, fast_window=12, slow_window=26, signal_window=9):
""" Moving Average Convergence Divergence
This function will initialize all following columns.
MACD Line (macd): (fast_window EMA - slow_window EMA)
Signal Line (macds): signal_window EMA of MACD Line
MACD Histogram (macdh): MACD Line - Signal Line
:param df: data
:param fast_window: window length of the fast EMA
:param slow_window: window length of the slow EMA
:param signal_window: window length of the signal EMA
:return: None
"""
# parameter check
if (fast_window > slow_window) or (signal_window > fast_window):
# error
raise ValueError('Parameter values do not fit.')
fast = df['close_' + str(fast_window) + '_ema']
slow = df['close_' + str(slow_window) + '_ema']
df['macd'] = fast - slow
df['macds'] = df['macd_' + str(signal_window) + '_ema']
df['macdh'] = (df['macd'] - df['macds'])
del df['macd_' + str(signal_window) + '_ema']
del fast
del slow
```
## Data Import
The data download is from ariva.de. The import adjusts some formatting so that the data works with Python's `stockstats` package.
* delimiter is ';'
* header line uses english idenifiers
* decimal separator is specified as ','
* thousands separator is '.'
```python
# Path to data is in parameter 'data_file'
# import data
colnames = ['date', 'open', 'high', 'low', 'close', 'shares', 'volume']
data = pd.read_csv(input_data_file, delimiter=';', header=0, names=colnames, decimal=',', thousands='.')
stock = Sdf.retype(data)
# sort by date ascending
stock.sort_index(ascending=True, inplace=True)
```
## Data Visualization
Let's quickly have an overview about the stock data.
```python
# reset the index to use it as x-axis data
stock.reset_index(inplace=True)
# reindex the dataframe
stock.index = [stock['date']]
style.use('seaborn-notebook')
style.use('seaborn-white')
# 2x1 plot
stock_fig = plt.figure()
stock_fig1 = stock_fig.add_subplot(211) # 3x1, fig.1
stock_fig2 = stock_fig.add_subplot(212) # 3x1, fig.2
# line plot for close
sns.lineplot(data=stock, x='date', y='close', label='Close', ax=stock_fig1)
# remove ticks and labels
stock_fig1.tick_params(labelbottom=False, bottom=False)
# box plot for open, close, high, low
stock_prices = pd.DataFrame({'price': stock['high'], 'label': 'high'})
stock_prices = pd.concat([stock_prices, pd.DataFrame({'price': stock['low'], 'label': 'low'})], sort=False)
stock_prices = pd.concat([stock_prices, pd.DataFrame({'price': stock['open'], 'label': 'open'})], sort=False)
stock_prices = pd.concat([stock_prices, pd.DataFrame({'price': stock['close'], 'label': 'close'})], sort=False)
sns.boxplot(x="label", y="price", data=stock_prices)
```
## Compute MACD and Visualize
```python
# compute macd with standard param (12 / 26 / 9)
macd_params(stock, fast_window, slow_window, signal_window)
```
```python
stock.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>date</th>
<th>open</th>
<th>high</th>
<th>low</th>
<th>close</th>
<th>shares</th>
<th>volume</th>
<th>close_12_ema</th>
<th>close_26_ema</th>
<th>macd</th>
<th>macds</th>
<th>macdh</th>
</tr>
<tr>
<th>date</th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<th>2017-08-21</th>
<td>2017-08-21</td>
<td>174.3375</td>
<td>175.6815</td>
<td>173.1374</td>
<td>174.0495</td>
<td>776936</td>
<td>135380672</td>
<td>174.049500</td>
<td>174.049500</td>
<td>0.000000</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>2017-08-22</th>
<td>2017-08-22</td>
<td>175.2015</td>
<td>176.1615</td>
<td>174.8655</td>
<td>175.5855</td>
<td>714906</td>
<td>125526592</td>
<td>174.881500</td>
<td>174.847038</td>
<td>0.034462</td>
<td>0.019145</td>
<td>0.015316</td>
</tr>
<tr>
<th>2017-08-23</th>
<td>2017-08-23</td>
<td>175.5375</td>
<td>176.0655</td>
<td>174.0014</td>
<td>174.3375</td>
<td>800080</td>
<td>139733784</td>
<td>174.669177</td>
<td>174.663966</td>
<td>0.005210</td>
<td>0.013434</td>
<td>-0.008224</td>
</tr>
<tr>
<th>2017-08-24</th>
<td>2017-08-24</td>
<td>174.4815</td>
<td>176.5935</td>
<td>174.4815</td>
<td>175.4415</td>
<td>725802</td>
<td>127510400</td>
<td>174.912969</td>
<td>174.881331</td>
<td>0.031638</td>
<td>0.019601</td>
<td>0.012037</td>
</tr>
<tr>
<th>2017-08-25</th>
<td>2017-08-25</td>
<td>175.3455</td>
<td>176.3535</td>
<td>174.8175</td>
<td>174.9135</td>
<td>673465</td>
<td>118082856</td>
<td>174.913113</td>
<td>174.888791</td>
<td>0.024322</td>
<td>0.021005</td>
<td>0.003317</td>
</tr>
</tbody>
</table>
</div>
```python
# visualize MACD and MACD signal
style.use('seaborn-notebook')
style.use('seaborn-white')
#stock.reset_index(inplace=True)
sns.lineplot(data=stock, x='date', y='macds', label='MACD Signal')
sns.lineplot(data=stock, x='date', y='macd', label='MACD')
l=stock.date.count()/12
plt.xticks(stock.date[0::int(l)], rotation=45)
plt.show()
```
## Compute Buy/Sell Signals
A buy or a sell signal is generated whenever the MACD and the MACD signal lines cross each other.
* MACD - Moving Average Convergence/Divergence
* MACD Signal - EMA of MACD
**BUY** Signal: $ MACD(t-1) < Signal(t-1) \textrm{ and } MACD(t) > Signal(t) $ <br>
**SELL** Signal: $ MACD(t-1) > Signal(t-1) \textrm{ and } MACD(t) < Signal(t) $ <br>
**HOLD** Signal: else
**Note:** In the code below the variables are assigned as follows:
* MACD -> macd
* MACD Signal -> macds
* BUY/SELL/HOLD -> signal
Finally, complement the dataframe with buy/sell/hold signal.
```python
macd = stock['macd']
macds = stock['macds']
macd.sort_index(ascending=True, inplace=True)
macds.sort_index(ascending=True, inplace=True)
# sort stock ascending and reset to standard integer index
stock.sort_index(ascending=True, inplace=True)
stock.reset_index(inplace=True, drop=True)
stock['signal'] = ""
stock.at[0, 'signal']="No data"
for t in range(1, len(macds)):
# # If the MACD crosses the signal line upward
if macd[t - 1] < macds[t - 1] and macd[t] > macds[t]:
stock.at[t, 'signal']="BUY"
# # The other way around
elif macd[t - 1] > macds[t - 1] and macd[t] < macds[t]:
stock.at[t, 'signal']="SELL"
# # Do nothing if not crossed
else:
stock.at[t, 'signal']="HOLD"
```
## Visualize all Data
Plot the MACD, MACD signal and the buy/sell/hold signals.
```python
# replace BUY/SELL/HOLD by 1/-1/0 for plotting
stock['signal'] = stock['signal'].replace('BUY', 1).replace('SELL', -1).replace('HOLD', 0).replace('No data', 0)
# set plot style
style.use('seaborn-poster')
style.use('seaborn-white')
# Create a figure instance, and the two subplots
fig = plt.figure()
ax1 = fig.add_subplot(311) # 3x1, fig.1
ax2 = fig.add_subplot(312) # 3x1, fig.2
ax3 = fig.add_subplot(313) # 3x1, fig.3
# Stock data
sns.lineplot(data=stock, x='date', y='close', label='Close', ax=ax1)
# MACD, MACD Signal
sns.lineplot(data=stock, x='date', y='macds', label='MACD Signal', ax=ax2)
sns.lineplot(data=stock, x='date', y='macd', label='MACD', ax=ax2)
# Buy/Sell Signal
sns.lineplot(data=stock, x='date', y='signal', label='Buy/Sell Signal', ax=ax3)
# remove ticks and labels
ax1.tick_params(labelbottom=False, bottom=False)
ax2.tick_params(labelbottom=False, bottom=False)
# format ticks
l=stock.date.count()/12
plt.xticks(stock.date[0::int(l)], rotation=45)
plt.show()
```
## Stats and Data stored for later Use
We store some statistics data.
Additionally, we record values in the notebook using Papermill to be consumed by other notebooks later on.
```python
# compute stats
BUY_cnt = stock[stock['signal']==1].shape[0]
SELL_cnt = stock[stock['signal']==-1].shape[0]
HOLD_cnt = stock[stock['signal']==0].shape[0]
STATS = stock.describe()
# print stats
print("BUY Signals: ", BUY_cnt)
print("SELL Signals: ", SELL_cnt)
print("HOLD Signals: ", HOLD_cnt)
print("Statistics:")
print(STATS)
```
BUY Signals: 11
SELL Signals: 12
HOLD Signals: 229
Statistics:
open high low close shares \
count 252.000000 252.000000 252.000000 252.000000 2.520000e+02
mean 184.629700 185.828262 183.221285 184.497138 1.323365e+06
std 6.438659 6.375852 6.556162 6.482004 5.376838e+05
min 171.073400 172.897400 169.057400 170.929400 6.302010e+05
25% 179.698575 181.010000 178.225500 179.375100 1.015912e+06
50% 185.070300 186.094400 183.720000 184.810350 1.212792e+06
75% 189.457200 191.262400 188.365600 189.436000 1.436933e+06
max 197.953600 198.577700 196.753600 197.521600 4.478718e+06
volume close_12_ema close_26_ema macd macds \
count 2.520000e+02 252.000000 252.000000 252.000000 252.000000
mean 2.438393e+08 184.247229 184.007997 0.239232 0.217932
std 9.737240e+07 5.889254 5.303585 1.775688 1.620324
min 1.098735e+08 172.884507 173.101244 -3.318409 -2.689569
25% 1.880567e+08 179.462166 179.949202 -1.379656 -1.221922
50% 2.241906e+08 184.368493 184.218385 0.344283 0.285781
75% 2.709752e+08 189.810828 188.696025 1.817561 1.607280
max 8.140323e+08 195.266718 193.579560 3.271141 2.991973
macdh signal
count 252.000000 252.000000
mean 0.021300 -0.003968
std 0.637962 0.302684
min -2.051876 -1.000000
25% -0.308985 0.000000
50% 0.056835 0.000000
75% 0.470143 0.000000
max 1.100467 1.000000
```python
# csv file naming convention
# <nb_name>_<data_name>_<fast>_<slow>_<signal>.csv
import os
data_name = os.path.basename(input_data_file).replace('.csv','')
nb_name = os.path.basename(nb_file).replace('.ipynb','')
csv = nb_name + '_' + data_name + '_' + str(fast_window) + '_' + str(slow_window) + '_' + str(signal_window) + '.csv'
```
```python
# here, we record the data for later use
pm.record("csv", csv)
# We don't need them, because pm stores parameters anyway
#pm.record("fast_window", fast_window)
#pm.record("slow_window", slow_window)
#pm.record("signal_window", signal_window)
```
## Save Results as CSV
We store the complete dataframe in the `output_data_path` directory.
```python
csv_file = os.path.join(output_data_path, csv )
stock.to_csv(csv_file, sep=';')
#csv_file
```
'../data/interim/MACD_BuySell_data_12_26_9.csv'
```python
```
| 52041c139bd9692820d9db7a6baeacd231b2898b | 241,126 | ipynb | Jupyter Notebook | LoSTanSiBLE/notebooks/MACD_BuySell.ipynb | cdeck3r/LoSTanSiBLE | 1bacee79ed6213b59ca4387f45ac539fb7ac9f16 | [
"MIT"
] | 1 | 2019-07-03T10:05:14.000Z | 2019-07-03T10:05:14.000Z | LoSTanSiBLE/notebooks/MACD_BuySell.ipynb | cdeck3r/LoSTanSiBLE | 1bacee79ed6213b59ca4387f45ac539fb7ac9f16 | [
"MIT"
] | null | null | null | LoSTanSiBLE/notebooks/MACD_BuySell.ipynb | cdeck3r/LoSTanSiBLE | 1bacee79ed6213b59ca4387f45ac539fb7ac9f16 | [
"MIT"
] | null | null | null | 318.108179 | 126,392 | 0.920237 | true | 4,551 | Qwen/Qwen-72B | 1. YES
2. YES | 0.853913 | 0.805632 | 0.68794 | __label__eng_Latn | 0.488516 | 0.436645 |
```python
from logicqubit.logic import *
from cmath import *
import numpy as np
import sympy as sp
import scipy
from random import randrange
from scipy.optimize import *
import matplotlib.pyplot as plt
```
```python
gates = Gates(1)
ID = gates.ID()
X = gates.X()
Y = gates.Y()
Z = gates.Z()
```
```python
III = ID.kron(ID).kron(ID)
XXX = X.kron(X).kron(X)
YYY = Y.kron(Y).kron(Y)
ZZZ = Z.kron(Z).kron(Z)
IZZ = ID.kron(Z).kron(Z)
ZZI = ID.kron(Z).kron(ID)
sig_izz = [IZZ.get()[i,i] for i in range(len(IZZ.get()))]
sig_zzi = [ZZI.get()[i,i] for i in range(len(ZZI.get()))]
sig_izz
```
[1, -1, -1, 1, 1, -1, -1, 1]
```python
H = III*2 + YYY*4 + IZZ*3 + ZZZ*2
min(scipy.linalg.eig(H.get())[0])
```
(-5.47213595499958+0j)
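The cell above builds the Hamiltonian $H = 2\,I\otimes I\otimes I + 4\,Y\otimes Y\otimes Y + 3\,I\otimes Z\otimes Z + 2\,Z\otimes Z\otimes Z$ and diagonalizes it exactly; the smallest eigenvalue $E_0 \approx -5.4721$ is the reference value that the variational minimization of $E(\theta) = \langle\psi(\theta)|H|\psi(\theta)\rangle$ below should approach.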
```python
def _ansatz(reg, params):
n_qubits = len(reg)
depth = n_qubits
for i in range(depth):
reg[1].CNOT(reg[0])
for j in range(n_qubits):
reg[i].RY(params[j])
def ansatz(reg, params):
n_qubits = len(reg)
depth = n_qubits
for i in range(depth):
for j in range(n_qubits):
if(j < n_qubits-1):
reg[j+1].CNOT(reg[j])
reg[i].RY(params[j])
def ansatz_3q(q1, q2, q3, params):
q1.RY(params[0])
q2.RY(params[1])
q3.RY(params[2])
q2.CNOT(q1)
q3.CNOT(q2)
q1.RX(params[3])
q2.RX(params[4])
q3.RX(params[5])
q2.CNOT(q1)
q3.CNOT(q2)
q1.RY(params[6])
q2.RY(params[7])
q3.RY(params[8])
q2.CNOT(q1)
q3.CNOT(q2)
```
```python
def expectation_3q(params):
logicQuBit = LogicQuBit(3)
q1 = Qubit()
q2 = Qubit()
q3 = Qubit()
ansatz_3q(q1,q2,q3,params)
#ansatz([q1,q2,q3],params)
psi = logicQuBit.getPsi()
return (psi.adjoint()*H*psi).get()[0][0]
minimum = minimize(expectation_3q, [0,0,0,0,0,0,0,0,0], method='Nelder-Mead', options={'xtol': 1e-10, 'ftol': 1e-10})
print(minimum)
```
final_simplex: (array([[ 0.12921139, -1.57086317, 1.57080437, -1.12420308, -3.03704134,
1.45367023, -0.29225308, 1.95750013, 1.5707652 ],
[ 0.12924501, -1.57078486, 1.57088293, -1.1242658 , -3.0369472 ,
1.4536376 , -0.29225486, 1.95729195, 1.57079917],
[ 0.12916145, -1.5708624 , 1.57083772, -1.12424803, -3.03717848,
1.45370456, -0.29226349, 1.95760535, 1.57082513],
[ 0.12919184, -1.57076394, 1.57080683, -1.12432742, -3.03703452,
1.45363379, -0.29218685, 1.95743054, 1.57083746],
[ 0.12922316, -1.5708506 , 1.57084737, -1.12431734, -3.03698036,
1.45369964, -0.29223934, 1.95744884, 1.57076656],
[ 0.1291532 , -1.57080607, 1.57081857, -1.12425794, -3.03700681,
1.45368743, -0.29227974, 1.95751193, 1.57072828],
[ 0.12924718, -1.57072935, 1.5707414 , -1.12422313, -3.03688703,
1.45354606, -0.29216359, 1.95727596, 1.57078718],
[ 0.12910942, -1.57073622, 1.57084631, -1.12426268, -3.03708219,
1.45366151, -0.29229058, 1.95749856, 1.57078757],
[ 0.12925603, -1.57089954, 1.57074983, -1.12423081, -3.03709666,
1.45364761, -0.29216064, 1.95753102, 1.57083743],
[ 0.12913261, -1.57076602, 1.57069762, -1.12420655, -3.03696704,
1.45362587, -0.29222644, 1.95754186, 1.57069371]]), array([-5.47213594, -5.47213594, -5.47213594, -5.47213593, -5.47213593,
-5.47213593, -5.47213593, -5.47213591, -5.47213591, -5.47213589]))
fun: -5.472135937489735
message: 'Maximum number of function evaluations has been exceeded.'
nfev: 1801
nit: 1237
status: 1
success: False
x: array([ 0.12921139, -1.57086317, 1.57080437, -1.12420308, -3.03704134,
1.45367023, -0.29225308, 1.95750013, 1.5707652 ])
```python
#ZZZ
#000 = 1
#001 = -1
#010 = -1
#011 = 1
#100 = -1
#101 = 1
#110 = 1
#111 = -1
def expectation_value(measurements, base = np.array([1,-1,-1,1,-1,1,1,-1])):
probabilities = np.array(measurements)
expectation = np.sum(base * probabilities)
return expectation
def sigma_xxx(params):
logicQuBit = LogicQuBit(3, first_left = False)
q1 = Qubit()
q2 = Qubit()
q3 = Qubit()
ansatz_3q(q1,q2,q3,params)
# medidas em XX
q1.RY(-pi/2)
q2.RY(-pi/2)
q3.RY(-pi/2)
result = logicQuBit.Measure([q1,q2,q3])
result = expectation_value(result)
return result
def sigma_yyy(params):
logicQuBit = LogicQuBit(3, first_left = False)
q1 = Qubit()
q2 = Qubit()
q3 = Qubit()
ansatz_3q(q1,q2,q3,params)
# medidas em YY
q1.RX(pi/2)
q2.RX(pi/2)
q3.RX(pi/2)
result = logicQuBit.Measure([q1,q2,q3])
result = expectation_value(result)
return result
def sigma_zzz(params):
logicQuBit = LogicQuBit(3, first_left = False)
q1 = Qubit()
q2 = Qubit()
q3 = Qubit()
ansatz_3q(q1,q2,q3,params)
result = logicQuBit.Measure([q1,q2,q3])
zzz = expectation_value(result)
izz = expectation_value(result, sig_izz) # [zzz, izz] = 0
return zzz, izz
def expectation_energy(params):
xxx = sigma_xxx(params)
yyy = sigma_yyy(params)
zzz, izz = sigma_zzz(params)
result = 2 + 4*yyy + 3*izz + 2*zzz
return result
```
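The `sigma_*` helpers rotate each qubit to the computational basis before measuring: $R_Y(-\pi/2)$ maps the $X$ eigenbasis onto $Z$ and $R_X(\pi/2)$ maps the $Y$ eigenbasis onto $Z$, so $\langle XXX\rangle$ and $\langle YYY\rangle$ are estimated from the same kind of bitstring statistics as $\langle ZZZ\rangle$. With $p(b)$ the probability of bitstring $b$ and $s(b)=\pm 1$ its eigenvalue (the `base` array), the estimator computed by `expectation_value` is $\langle O\rangle = \sum_b s(b)\,p(b)$.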
```python
#initial_values = [random.random() for _ in range(3)]
minimum = minimize(expectation_energy, [0,0,0,0,0,0,0,0,0], method='Nelder-Mead', options={'xtol': 1e-10, 'ftol': 1e-10})
print(minimum)
```
final_simplex: (array([[-1.02400892, 1.28055431, 0.05946761, 1.33300369, 0.08342998,
2.19934652, -6.29492554, 1.83668327, 0.42674918],
[-1.0045284 , 1.23991259, 0.10521914, 1.37250857, 0.10442777,
2.12314711, -6.24443198, 1.80795255, 0.41251561],
[-0.96316889, 1.28037413, -0.02258307, 1.37967455, 0.087312 ,
2.19233154, -6.29865919, 1.84036414, 0.42136336],
[-1.01088839, 1.26968456, 0.02772341, 1.47440312, 0.11925323,
2.16213358, -6.28100343, 1.81227668, 0.37292877],
[-0.93486007, 1.27946439, 0.01883379, 1.40376204, 0.10724767,
2.13684427, -6.31317357, 1.84579082, 0.39146723],
[-1.01355085, 1.25750614, 0.099545 , 1.45014256, 0.12627078,
2.12002938, -6.27548029, 1.81035501, 0.36927212],
[-1.07176354, 1.24202417, 0.03454186, 1.48001232, 0.11766726,
2.1641785 , -6.1354307 , 1.75219463, 0.36242693],
[-0.96319626, 1.2278143 , 0.03577467, 1.41344896, 0.10253174,
2.13218961, -6.27560476, 1.81349595, 0.43456215],
[-0.99780333, 1.2340127 , 0.00854529, 1.45831798, 0.10591945,
2.16471894, -6.27091469, 1.80239979, 0.42345559],
[-0.91029119, 1.2947923 , 0.07297279, 1.40533249, 0.11793036,
2.11702152, -6.44888308, 1.89306445, 0.39471469]]), array([-5.17369093, -5.15754491, -5.15406618, -5.14643163, -5.14535456,
-5.1451482 , -5.14303204, -5.13943651, -5.13679564, -5.13092301]))
fun: -5.173690931894985
message: 'Maximum number of function evaluations has been exceeded.'
nfev: 1800
nit: 1247
status: 1
success: False
x: array([-1.02400892, 1.28055431, 0.05946761, 1.33300369, 0.08342998,
2.19934652, -6.29492554, 1.83668327, 0.42674918])
```python
def gradient(params, evaluate):
n_params = params.shape[0]
shift = pi/2
gradients = np.zeros(n_params)
for i in range(n_params):
#parameter shift rule
shift_vect = np.array([shift if j==i else 0 for j in range(n_params)])
shift_right = params + shift_vect
shift_left = params - shift_vect
expectation_right = evaluate(shift_right)
expectation_left = evaluate(shift_left)
gradients[i] = expectation_right - expectation_left
return gradients
```
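A note on the shift used above: for the rotation gates in the ansatz, the parameter-shift rule gives the exact derivative $\partial E/\partial\theta_i = \tfrac{1}{2}\left[E(\theta_i+\pi/2) - E(\theta_i-\pi/2)\right]$. The `gradient` function omits the factor $\tfrac{1}{2}$, which only rescales the gradient and is effectively absorbed into the learning rate `lr` used in the descent loop below.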
```python
params = np.random.uniform(-np.pi, np.pi, 9)
last_params = np.zeros(9)
```
```python
lr = 0.1
err = 1
while err > 1e-3:
grad = gradient(params, expectation_energy)
params = params - lr*grad
err = abs(sum(params - last_params))
last_params = np.array(params)
print(err)
```
/tmp/ipykernel_12923/1088512639.py:15: ComplexWarning: Casting complex values to real discards the imaginary part
gradients[i] = expectation_right - expectation_left
2.269557430510688
0.43148743285348523
0.3173515038192558
0.1843803492700199
0.04327682033010166
0.0205755822034282
0.02982539400493711
0.0242148016084765
0.01835772396267603
0.013352718021686673
0.009994564789378567
0.007462727872315965
0.005675856339475829
0.004299655543890113
0.00327869541767134
0.002487918781161013
0.0018904697303590463
0.0014302939341544803
0.001081493614157969
0.00081494892435412
```python
expectation_energy(params)
```
(-5.472130682938133+0j)
```python
```
| 1c04d7cae718b046743bf9b2a2c28159b0c81eb7 | 13,842 | ipynb | Jupyter Notebook | vqe_3q.ipynb | clnrp/quantum_machine_learning | 5528a440d230b0613f1bd44a81a2a352441c76e5 | [
"MIT"
] | null | null | null | vqe_3q.ipynb | clnrp/quantum_machine_learning | 5528a440d230b0613f1bd44a81a2a352441c76e5 | [
"MIT"
] | null | null | null | vqe_3q.ipynb | clnrp/quantum_machine_learning | 5528a440d230b0613f1bd44a81a2a352441c76e5 | [
"MIT"
] | null | null | null | 31.316742 | 145 | 0.494148 | true | 3,646 | Qwen/Qwen-72B | 1. YES
2. YES | 0.865224 | 0.695958 | 0.60216 | __label__yue_Hant | 0.114032 | 0.237349 |
$$
\sqrt{2}+\sqrt{3}=\sqrt{\left(\sqrt{2}+\sqrt{3}\right)^2}=\sqrt{2\sqrt{6}+5}
=\sqrt{\sqrt{\left(2\sqrt{6}+5\right)^2}} = \sqrt{\sqrt{20\sqrt{6}+49}}
$$
```python
import sympy as S
S.init_printing()
a = S.sqrt( S.sqrt(49+20*S.sqrt(6)))
a
```
```python
S.sqrtdenest(a)
```
$$
\sqrt{2}+\sqrt{3}=\sqrt{2\sqrt{6}+5}=\sqrt{1+(2\sqrt{6}+4)}
=\sqrt{1+\sqrt{\left(2\sqrt{6}+4\right)^2}} = \sqrt{1+\sqrt{16\sqrt{6}+40}}
$$
```python
b = S.sqrt(1+S.sqrt(40+16*S.sqrt(6)))
b
```
```python
S.sqrtdenest(b)
```
| bfbf37d8ec53794a41d896a7ab159d1e9e10d977 | 11,020 | ipynb | Jupyter Notebook | sympy_sqrtdenest.ipynb | hamukazu/notebook-misc | 1b39d137f99dcf0495dc101f82997e669ff6dead | [
"MIT"
] | null | null | null | sympy_sqrtdenest.ipynb | hamukazu/notebook-misc | 1b39d137f99dcf0495dc101f82997e669ff6dead | [
"MIT"
] | null | null | null | sympy_sqrtdenest.ipynb | hamukazu/notebook-misc | 1b39d137f99dcf0495dc101f82997e669ff6dead | [
"MIT"
] | null | null | null | 72.5 | 2,510 | 0.80971 | true | 242 | Qwen/Qwen-72B | 1. YES
2. YES | 0.950411 | 0.857768 | 0.815232 | __label__azj_Latn | 0.230006 | 0.73239 |
Author: Drishika Nadella
Date: 4th March 2021
```python
import numpy as np
from sympy import *
```
```python
def func(x):
return x*(x-1)
```
```python
def derivative(x, delta):
f_ = (func(x+delta) - func(x))/delta
return f_
```
```python
# Analytical derivative
x = Symbol('x')
y = func(x)
yprime = y.diff(x)
f = lambdify(x, yprime, 'numpy')
f(1)
```
1
```python
print(derivative(1, 10**-2))
```
1.010000000000001
```python
print(derivative(1, 10**-4))
```
1.0000999999998899
```python
print(derivative(1, 10**-6))
```
1.0000009999177333
```python
print(derivative(1, 10**-8))
```
1.0000000039225287
```python
print(derivative(1, 10**-10))
```
1.000000082840371
```python
print(derivative(1, 10**-12))
```
1.0000889005833413
```python
print(derivative(1, 10**-14))
```
0.9992007221626509
Until $\delta = 10^{-8}$ the accuracy improves, and then it gets worse again. For very small $\delta$, $f(x+\delta)$ and $f(x)$ are nearly equal, so their subtraction cancels most of the significant digits; dividing that round-off-dominated difference by the tiny $\delta$ then amplifies the error. For large $\delta$, on the other hand, the truncation error of the forward difference dominates.
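The trade-off can be made explicit: the truncation error of the forward difference grows roughly like $\delta$, while the round-off error grows like $\varepsilon/\delta$ with $\varepsilon \approx 2.2\times10^{-16}$ for double precision, so the total error is smallest near $\delta \approx \sqrt{\varepsilon} \approx 10^{-8}$, which is where the values above turn around. A short check using the `derivative` function defined earlier (the exact derivative at $x=1$ is 1):

```python
import numpy as np

deltas = 10.0 ** -np.arange(2, 15, 2)   # 1e-2, 1e-4, ..., 1e-14
for d in deltas:
    err = abs(derivative(1, d) - 1.0)   # absolute error against the exact value f'(1) = 1
    print(f"delta = {d:.0e}   abs. error = {err:.3e}")
```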
| 35813893029e342accb4b028f4437cae4d4d00ca | 3,801 | ipynb | Jupyter Notebook | Week 2/HW2_3.ipynb | drkndl/PH354-IISc | e1b40a1ed11fb1967cfb5204d81ee237df453d39 | [
"MIT"
] | null | null | null | Week 2/HW2_3.ipynb | drkndl/PH354-IISc | e1b40a1ed11fb1967cfb5204d81ee237df453d39 | [
"MIT"
] | null | null | null | Week 2/HW2_3.ipynb | drkndl/PH354-IISc | e1b40a1ed11fb1967cfb5204d81ee237df453d39 | [
"MIT"
] | null | null | null | 17.356164 | 222 | 0.470666 | true | 375 | Qwen/Qwen-72B | 1. YES
2. YES | 0.933431 | 0.919643 | 0.858423 | __label__eng_Latn | 0.79141 | 0.832737 |
# Estimating alcohol content in red wines
* Author: Martin Rožnovják
* Last edited: 2019-02-11
* Organization: Metropolia University of Applied Sciences
## What is this?
This notebook is a school assignment for a course called *Cognitive Systems - Mathematics and Methods*.
Its objective is to conduct linear regression analysis on wine properties
from the following UCI dataset https://archive.ics.uci.edu/ml/datasets/Wine+Quality.
Personally, I prefer red wine over white so I'll analyze the red wine dataset.
I'll be interested in estimating the alcohol content of the wines instead
of their quality - just out of interest and objectivity of the measurements,
not because I'm a student... :)
## Let's begin
### All the necessary imports
```python
import IPython
import scipy
import scipy.stats  # used below for scipy.stats.zscore (outlier trimming)
import sklearn
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import metrics, preprocessing, model_selection  # model_selection provides train_test_split
from sklearn.linear_model import LinearRegression
```
Setting options, in particular those for plotting, can interfere with Jupyter's own initialization when done in the same cell as the imports, so I do it separately in the next cell.
```python
### plotting ###
# prettier and larger basic graphs
sns.set(rc={
'figure.figsize':(18,8),
'axes.titlesize':14,
})
### pandas ###
# no need to see many decimal places and makes nicer horizontal fits :-)
pd.options.display.float_format = '{:.3f}'.format
# pd.options.display.precision = 3
# make the tables more compact vertically, too
pd.options.display.max_rows = 20
### numpy ###
# same as for pandas - max. 3 decimal places
np.set_printoptions(formatter={'float_kind':'{:.3f}'.format})
# np.set_printoptions(precision=3)
```
### Fetching the dataset
```python
wines_df = pd.read_csv(
r'https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv',
sep=';',
)
# a peek whether it went alright
wines_df.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>fixed acidity</th>
<th>volatile acidity</th>
<th>citric acid</th>
<th>residual sugar</th>
<th>chlorides</th>
<th>free sulfur dioxide</th>
<th>total sulfur dioxide</th>
<th>density</th>
<th>pH</th>
<th>sulphates</th>
<th>alcohol</th>
<th>quality</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>7.400</td>
<td>0.700</td>
<td>0.000</td>
<td>1.900</td>
<td>0.076</td>
<td>11.000</td>
<td>34.000</td>
<td>0.998</td>
<td>3.510</td>
<td>0.560</td>
<td>9.400</td>
<td>5</td>
</tr>
<tr>
<th>1</th>
<td>7.800</td>
<td>0.880</td>
<td>0.000</td>
<td>2.600</td>
<td>0.098</td>
<td>25.000</td>
<td>67.000</td>
<td>0.997</td>
<td>3.200</td>
<td>0.680</td>
<td>9.800</td>
<td>5</td>
</tr>
<tr>
<th>2</th>
<td>7.800</td>
<td>0.760</td>
<td>0.040</td>
<td>2.300</td>
<td>0.092</td>
<td>15.000</td>
<td>54.000</td>
<td>0.997</td>
<td>3.260</td>
<td>0.650</td>
<td>9.800</td>
<td>5</td>
</tr>
<tr>
<th>3</th>
<td>11.200</td>
<td>0.280</td>
<td>0.560</td>
<td>1.900</td>
<td>0.075</td>
<td>17.000</td>
<td>60.000</td>
<td>0.998</td>
<td>3.160</td>
<td>0.580</td>
<td>9.800</td>
<td>6</td>
</tr>
<tr>
<th>4</th>
<td>7.400</td>
<td>0.700</td>
<td>0.000</td>
<td>1.900</td>
<td>0.076</td>
<td>11.000</td>
<td>34.000</td>
<td>0.998</td>
<td>3.510</td>
<td>0.560</td>
<td>9.400</td>
<td>5</td>
</tr>
</tbody>
</table>
</div>
No need for the "quality" column, let's forget it...
```python
wines_df.drop(columns=["quality"], inplace=True)
```
### Basic stats and info
```python
print('Summary about the DataFrame:')
wines_df.info();
```
Summary about the DataFrame:
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1599 entries, 0 to 1598
Data columns (total 11 columns):
fixed acidity 1599 non-null float64
volatile acidity 1599 non-null float64
citric acid 1599 non-null float64
residual sugar 1599 non-null float64
chlorides 1599 non-null float64
free sulfur dioxide 1599 non-null float64
total sulfur dioxide 1599 non-null float64
density 1599 non-null float64
pH 1599 non-null float64
sulphates 1599 non-null float64
alcohol 1599 non-null float64
dtypes: float64(11)
memory usage: 137.5 KB
All looks good so far, nothing missing (just as promised in the dataset description), correct data types...
```python
print('Basic descriptive statistics:')
wines_df.describe()
```
Basic descriptive statistics:
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>fixed acidity</th>
<th>volatile acidity</th>
<th>citric acid</th>
<th>residual sugar</th>
<th>chlorides</th>
<th>free sulfur dioxide</th>
<th>total sulfur dioxide</th>
<th>density</th>
<th>pH</th>
<th>sulphates</th>
<th>alcohol</th>
</tr>
</thead>
<tbody>
<tr>
<th>count</th>
<td>1599.000</td>
<td>1599.000</td>
<td>1599.000</td>
<td>1599.000</td>
<td>1599.000</td>
<td>1599.000</td>
<td>1599.000</td>
<td>1599.000</td>
<td>1599.000</td>
<td>1599.000</td>
<td>1599.000</td>
</tr>
<tr>
<th>mean</th>
<td>8.320</td>
<td>0.528</td>
<td>0.271</td>
<td>2.539</td>
<td>0.087</td>
<td>15.875</td>
<td>46.468</td>
<td>0.997</td>
<td>3.311</td>
<td>0.658</td>
<td>10.423</td>
</tr>
<tr>
<th>std</th>
<td>1.741</td>
<td>0.179</td>
<td>0.195</td>
<td>1.410</td>
<td>0.047</td>
<td>10.460</td>
<td>32.895</td>
<td>0.002</td>
<td>0.154</td>
<td>0.170</td>
<td>1.066</td>
</tr>
<tr>
<th>min</th>
<td>4.600</td>
<td>0.120</td>
<td>0.000</td>
<td>0.900</td>
<td>0.012</td>
<td>1.000</td>
<td>6.000</td>
<td>0.990</td>
<td>2.740</td>
<td>0.330</td>
<td>8.400</td>
</tr>
<tr>
<th>25%</th>
<td>7.100</td>
<td>0.390</td>
<td>0.090</td>
<td>1.900</td>
<td>0.070</td>
<td>7.000</td>
<td>22.000</td>
<td>0.996</td>
<td>3.210</td>
<td>0.550</td>
<td>9.500</td>
</tr>
<tr>
<th>50%</th>
<td>7.900</td>
<td>0.520</td>
<td>0.260</td>
<td>2.200</td>
<td>0.079</td>
<td>14.000</td>
<td>38.000</td>
<td>0.997</td>
<td>3.310</td>
<td>0.620</td>
<td>10.200</td>
</tr>
<tr>
<th>75%</th>
<td>9.200</td>
<td>0.640</td>
<td>0.420</td>
<td>2.600</td>
<td>0.090</td>
<td>21.000</td>
<td>62.000</td>
<td>0.998</td>
<td>3.400</td>
<td>0.730</td>
<td>11.100</td>
</tr>
<tr>
<th>max</th>
<td>15.900</td>
<td>1.580</td>
<td>1.000</td>
<td>15.500</td>
<td>0.611</td>
<td>72.000</td>
<td>289.000</td>
<td>1.004</td>
<td>4.010</td>
<td>2.000</td>
<td>14.900</td>
</tr>
</tbody>
</table>
</div>
```python
print('Almost the same thing again, this time graphically:')
wines_df.plot.box(subplots=True, layout=(2, 6), figsize=(20, 10));
```
There seem to be several outliers, I will get back to them soon.
```python
print('Yet different perspective, feature histograms:')
wines_df.hist(bins=20, figsize=(18, 14))
plt.tight_layout()
```
```python
ax = sns.heatmap(
wines_df.corr(),
annot=True,
vmin=-1,
vmax=1,
cmap="coolwarm",
fmt='0.2f',
linewidths=1,
)
ax.set_title('Correlation Matrix');
```
### Training-testing split
Error estimates are more credible when computed on "unseen" data, so I will set aside a randomly selected 30% of the data for testing.
I will also make a separate training set with outliers removed, to inspect their role in the regression - how they skew the fit and how they influence generalization.
I will remove every row that contains a value more than 3 standard deviations away from the column mean, in any of the columns.
```python
train_df, test_df = sklearn.model_selection.train_test_split(
wines_df,
test_size=0.3,
shuffle=True,
)
# z-score = (x - mean(x)) / std(x)
good_zscore = np.abs(scipy.stats.zscore(train_df)) <= 3
trim_index = good_zscore.all(axis=1) # all values in a row have good zscore
train_df_trimmed = train_df[trim_index]
```
Splitting into inputs (explanatory features) and results (responses).
```python
def xy_split(df):
x_labels = wines_df.columns[:-1]
y_label = wines_df.columns[-1]
return df[x_labels], df[y_label]
x_train, y_train = xy_split(train_df)
x_train_trim, y_train_trim = xy_split(train_df_trimmed)
x_test, y_test = xy_split(test_df)
```
```python
# a simple utility
def compare_distributions(
title='Kernel densities (distribution) of response values',
ax=None,
kde=True,
hist=False,
**label_value_pairs
):
for label, values in label_value_pairs.items():
ax = sns.distplot(
values,
kde=kde,
hist=hist,
label=label,
ax=ax,
axlabel=False
)
if title: ax.set_title(title)
# if hist it won't set the legend automatically
if hist: ax.legend(label_value_pairs.keys())
return ax
```
```python
# ordinary histogram would be quite confusing...
compare_distributions(
Training=y_train,
Trimmed_training=y_train_trim,
Validation=y_test
);
```
```python
print('Training data:')
train_df.plot.box(subplots=True, figsize=(18,5))
plt.tight_layout()
```
```python
print('Testing data:')
test_df.plot.box(subplots=True, figsize=(18,5))
plt.tight_layout()
```
### Model
The model and regression method are perhaps the simplest possible - i.e. a multilinear model
$\theta_0 + \theta_1 a_1 + \theta_2 a_2 + \dots + \theta_n a_n$
where $a_i$ is the $i$-th attribute (explanatory feature) and $\theta_i, i \in \{0, 1, \dots, n\}$
are the regression coefficients obtained using the least squares method.
If the data has unit variance and zero mean, the absolute value of the resulting coefficients can be
interpreted as the "importance/usefulness" of the corresponding attribute for the problem.
Therefore, I will shift and scale the training data accordingly.
<em>Note: it does not really matter whether the response (results) column is normalized.
If it is not, the regression coefficients are simply scaled by the standard deviation
of the response column, and the intercept (the constant term) equals its mean.
I find working with the original response values easier in this case (less to worry about).</em>
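To make the relationship explicit (assuming the usual standardization $\tilde a_i = (a_i - \mu_i)/s_i$ with feature means $\mu_i$ and standard deviations $s_i$), a fit on scaled features can be rewritten in the original units:
$$
\hat{y} = \theta_0 + \sum_{i=1}^n \theta_i \frac{a_i - \mu_i}{s_i}
        = \left(\theta_0 - \sum_{i=1}^n \frac{\theta_i \mu_i}{s_i}\right) + \sum_{i=1}^n \frac{\theta_i}{s_i}\, a_i ,
$$
so the coefficient for the unscaled attribute $a_i$ is $\theta_i / s_i$ and the constant term absorbs the means.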
```python
# some utils
def print_regr_coefs(lin_reg):
print(f'Intercept: {lin_reg.intercept_:.3f}')
print('Coefs:', lin_reg.coef_)
def print_metrics(y_true, y_pred, comment=None, show_dists=True):
mae = metrics.mean_absolute_error(y_true, y_pred)
mse = metrics.mean_squared_error(y_true, y_pred)
r2 = metrics.r2_score(y_true, y_pred)
desc = f'MAE: {mae:.3f} MSE: {mse:.3f} R2: {r2:.3f}'
if comment:
desc += f' [{comment}]'
if show_dists:
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(18, 3))
fig.suptitle(desc)
residuals = (y_true - y_pred).values
abs_residuals = np.abs(residuals)
# distribution of residuals
sns.distplot(residuals, ax=ax1)
sns.distplot(abs_residuals, ax=ax1)
start = np.floor(min(residuals))
end = np.ceil(max(abs_residuals))
ax1.xaxis.set_ticks(np.round(np.linspace(start, end, 11),2))
ax1.legend(['Residuals','Absolute residuals'])
# dist. of response values
compare_distributions(
title=None,
ax=ax2,
hist=True,
Measured=y_true,
Predicted=y_pred,
)
plt.show()
else:
print(desc)
def test_report(y_true, y_pred):
err_df = pd.DataFrame()
err_df['measured'] = y_true
err_df['predicted'] = y_pred
err_df['residual'] = err_df.measured - err_df.predicted
err_df['abs_residual'] = np.abs(err_df.residual)
# show statistics
print('Response statistics:')
IPython.display.display(err_df.describe())
## plot results and residuals
# true vs. predicted
ax1 = plt.subplot(1,2,1)
ax1.set_title('Measured values vs. predicted')
ax1.plot(y_pred, y_pred)
err_df.plot.scatter('measured', 'predicted', c='orange', ax=ax1)
# residuals
ax2 = plt.subplot(1,2,2)
ax2.set_title('Residuals')
# y=0 line
x_infimum = np.floor(err_df.measured.min())
x_supremum = np.ceil(err_df.measured.max())
ax2.plot([x_infimum, x_supremum], [0, 0])
err_df.plot.scatter('measured', 'residual', c='orange', ax=ax2)
plt.tight_layout()
plt.show()
```
```python
print('Regressing on all features, whole training set\n')
# scaling
x_scaler = sklearn.preprocessing.StandardScaler()
x_scaler.fit(x_train)
x_train_scaled = x_scaler.transform(x_train)
x_train_trim_scaled = x_scaler.transform(x_train_trim)
x_test_scaled = x_scaler.transform(x_test)
# regression
reg = LinearRegression().fit(x_train_scaled, y_train)
# reports
print_regr_coefs(reg)
y_pred_train = reg.predict(x_train_scaled)
print_metrics(y_train, y_pred_train, 'training')
y_pred_test = reg.predict(x_test_scaled)
print_metrics(y_test, y_pred_test, 'validation')
print()
test_report(y_test, y_pred_test)
```
The mean absolute error (the same as the mean absolute residual) tells us
that the model is on average off by 0.48% from the actual alcohol content by volume.
Considering that the alcohol content ranges from 8.4% to 14.9%,
this translates to a relative error of about 7.4%.
From the "Response statistics" table, based on the quantiles of the absolute residual,
we can see that a quarter of the predictions are within 0.2% (absolute error),
half of them within 0.4%, three quarters only within 0.7%,
and the worst prediction is 2.4% off.
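Spelled out, the relative error quoted above is the MAE divided by the observed range of the response:
$$
\frac{0.48}{14.9 - 8.4} \approx 0.074 \approx 7.4\,\%.
$$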
```python
print('Features by "importance" for the last regression:')
coef_df = pd.DataFrame()
coef_df['feature'] = x_train.columns
coef_df['coef'] = reg.coef_
coef_df['abs_coef'] = abs(coef_df['coef'])
coef_df['natural_coef'] = reg.coef_ / x_scaler.scale_  # coefficients rescaled to the original (unstandardized) feature units
coef_df.sort_values('abs_coef', ascending=False, inplace=True)
coef_df.reset_index(drop=True, inplace=True)
coef_df
```
Features by "importance" for the last regression:
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>feature</th>
<th>coef</th>
<th>abs_coef</th>
<th>natural_coef</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>density</td>
<td>-1.118</td>
<td>1.118</td>
<td>0.995</td>
</tr>
<tr>
<th>1</th>
<td>fixed acidity</td>
<td>0.904</td>
<td>0.904</td>
<td>9.912</td>
</tr>
<tr>
<th>2</th>
<td>pH</td>
<td>0.571</td>
<td>0.571</td>
<td>3.395</td>
</tr>
<tr>
<th>3</th>
<td>residual sugar</td>
<td>0.397</td>
<td>0.397</td>
<td>3.093</td>
</tr>
<tr>
<th>4</th>
<td>sulphates</td>
<td>0.200</td>
<td>0.200</td>
<td>0.690</td>
</tr>
<tr>
<th>5</th>
<td>citric acid</td>
<td>0.170</td>
<td>0.170</td>
<td>0.305</td>
</tr>
<tr>
<th>6</th>
<td>chlorides</td>
<td>-0.086</td>
<td>0.086</td>
<td>0.083</td>
</tr>
<tr>
<th>7</th>
<td>total sulfur dioxide</td>
<td>-0.076</td>
<td>0.076</td>
<td>43.908</td>
</tr>
<tr>
<th>8</th>
<td>volatile acidity</td>
<td>0.076</td>
<td>0.076</td>
<td>0.543</td>
</tr>
<tr>
<th>9</th>
<td>free sulfur dioxide</td>
<td>-0.025</td>
<td>0.025</td>
<td>15.461</td>
</tr>
</tbody>
</table>
</div>
```python
# I will leave this here, perhaps I had too long weekend...
print('Formula for estimating alcohol content in red wine:')
print('(features by contribution, no need to scale)')
IPython.display.Math(
r'\begin{align}' +
r'{:.3f}\ &+\ '.format(reg.intercept_) +
''.join([
r'\\& {:+.3f} \cdot [\verb|{}|]'.format(coef, ftr)
for coef, ftr in coef_df[['natural_coef', 'feature']].values
]) +
r'\end{align}'
)
```
Formula for estimating alcohol content in red wine:
(features by contribution, no need to scale)
$\displaystyle \begin{align}10.396\ &+\ \\& +0.995 \cdot [\verb|density|]\\& +9.912 \cdot [\verb|fixed acidity|]\\& +3.395 \cdot [\verb|pH|]\\& +3.093 \cdot [\verb|residual sugar|]\\& +0.690 \cdot [\verb|sulphates|]\\& +0.305 \cdot [\verb|citric acid|]\\& +0.083 \cdot [\verb|chlorides|]\\& +43.908 \cdot [\verb|total sulfur dioxide|]\\& +0.543 \cdot [\verb|volatile acidity|]\\& +15.461 \cdot [\verb|free sulfur dioxide|]\end{align}$
## Conclusion
I experimented with leaving out some columns and with different thresholds for trimming outliers;
the differences were, however, insignificant and inconsistent, so I won't clutter the notebook
with examples of those. I will just leave one fairly general cell for quickly playing
with these adjustments (comment out columns, change the training set, etc.).
This method minimizes the MSE metric.
In my opinion, it would be better to minimize an error measure that favors
the majority of the data and is not influenced as much by edge cases.
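As a sketch of that idea (a hypothetical follow-up, not part of the original analysis), scikit-learn's `HuberRegressor` minimizes a loss that is quadratic for small residuals and linear for large ones, so outliers pull the fit around less:
```python
from sklearn.linear_model import HuberRegressor

# epsilon controls where the loss switches from quadratic to linear;
# smaller values make the fit more robust to outliers.
huber = HuberRegressor(epsilon=1.35).fit(x_train_scaled, y_train)
y_pred_huber = huber.predict(x_test_scaled)
print_metrics(y_test, y_pred_huber, 'validation, Huber loss')
```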
```python
### tinkering cell ###
# scaling
columns = [
'density',
'fixed acidity',
'pH',
'residual sugar',
'sulphates',
'citric acid',
'volatile acidity',
'chlorides',
'total sulfur dioxide',
'free sulfur dioxide'
]
x_scaler = sklearn.preprocessing.StandardScaler()
x_scaler.fit(x_train_trim[columns])
x_train_scaled = x_scaler.transform(x_train[columns])
x_train_trim_scaled = x_scaler.transform(x_train_trim[columns])
x_test_scaled = x_scaler.transform(x_test[columns])
# regression
reg = LinearRegression().fit(x_train_trim_scaled, y_train_trim)
# reg.intercept_ = 10.2
# reports
print_regr_coefs(reg)
y_pred_train_trim = reg.predict(x_train_trim_scaled)
print_metrics(y_train_trim, y_pred_train_trim, 'training trimmed')
y_pred_train = reg.predict(x_train_scaled)
print_metrics(y_train, y_pred_train, 'whole training data')
y_pred_test = reg.predict(x_test_scaled)
print_metrics(y_test, y_pred_test, 'validation')
print()
test_report(y_test, y_pred_test)
```
| 9c2bd69d9657c7aec08e2234d834dc1893a0dd46 | 951,068 | ipynb | Jupyter Notebook | Cognitive_Systems-Mathematics_and_Methods/week04/Roznovjak_Assignment_4-Linear_regression_on_red_wines.ipynb | rozni/uni-ml | 0667c7504927ea3bd1850d118708ea72b4b43430 | [
"MIT"
] | null | null | null | Cognitive_Systems-Mathematics_and_Methods/week04/Roznovjak_Assignment_4-Linear_regression_on_red_wines.ipynb | rozni/uni-ml | 0667c7504927ea3bd1850d118708ea72b4b43430 | [
"MIT"
] | null | null | null | Cognitive_Systems-Mathematics_and_Methods/week04/Roznovjak_Assignment_4-Linear_regression_on_red_wines.ipynb | rozni/uni-ml | 0667c7504927ea3bd1850d118708ea72b4b43430 | [
"MIT"
] | null | null | null | 624.88042 | 128,920 | 0.939818 | true | 6,163 | Qwen/Qwen-72B | 1. YES
2. YES | 0.887205 | 0.793106 | 0.703647 | __label__eng_Latn | 0.679614 | 0.47314 |
```python
import numpy as np
import control
import matplotlib.pyplot as plt # plotting library
from sympy import symbols
from sympy.physics.control.lti import TransferFunction, Feedback, Series
from sympy.physics.control.control_plots import pole_zero_plot, step_response_plot
```
# Equations of motion
The EOM for $m_1$ is:
$m_1 \ddot{x}_1+c_s \dot{x}_1+k_s x_1 = c_s \dot{x}_2 + k_s x_2$
The EOM for $m_2$ is:
$m_2 \ddot{x}_2+c_s \dot{x}_2+(k_s +k_t) x_2 = c_s \dot{x}_1 + k_s x_1 + k_t y$
# Block diagram
# Transfer Function
## Sympy Implementation
```python
m1, m2, kt, ks, cs, s = symbols('m_1 m_2 k_t k_s c_s s')
# have python perform the block diagram operations symbolically
num1 = cs*s + ks
den1 = m1*s**2 + cs*s + ks
G1 = TransferFunction(num1, den1, s)
num2 = cs*s + ks
den2 = m2*s**2 + cs*s + (ks+kt)
G2 = TransferFunction(num2,den2, s)
num3 = kt
den3 = m2*s**2 + cs*s + (ks+kt)
G3 = TransferFunction(num3,den3, s)
G4 = Feedback(G1,G2,sign=1).doit()
TF = Series(G3,G4).doit()
TF=TF.simplify()
TF
```
$\displaystyle \frac{- k_{t} \left(c_{s} s + k_{s}\right)}{\left(c_{s} s + k_{s}\right)^{2} - \left(c_{s} s + k_{s} + m_{1} s^{2}\right) \left(c_{s} s + k_{s} + k_{t} + m_{2} s^{2}\right)}$
Expanding this TF gives us
```python
TF.expand()
```
$\displaystyle \frac{- c_{s} k_{t} s - k_{s} k_{t}}{- c_{s} k_{t} s - c_{s} m_{1} s^{3} - c_{s} m_{2} s^{3} - k_{s} k_{t} - k_{s} m_{1} s^{2} - k_{s} m_{2} s^{2} - k_{t} m_{1} s^{2} - m_{1} m_{2} s^{4}}$
```python
# sub in values for the variables
TF = TF.subs([(kt,190000),(cs,5000),(ks,20000),(m1,290),(m2,59)])
```
```python
# Plot the poles and zeros of the TF
pole_zero_plot(TF)
```
```python
step_response_plot(TF,grid=False,color='r')
```
## Controls Package Implementation
```python
# cs = 5000
# ks = 20000
cs = 10
ks = 5
kt = 190000
m1 = 290
m2 = 59
num1 = [cs, ks]
den1 = [m1, cs, ks]
G1 = control.tf(num1,den1)
num2 = [cs, ks]
den2 = [m2, cs, (ks+kt)]
G2 = control.tf(num2,den2)
num3 = [kt]
den3 = [m2, cs, (ks+kt)]
G3 = control.tf(num3,den3)
G4 = control.feedback(G1,G2,sign=1)
G = control.series(G4,G3)
G = control.minreal(G)
print(G)
```
2 states have been removed from the model
111 s + 55.52
------------------------------------------
s^4 + 0.204 s^3 + 3220 s^2 + 111 s + 55.52
```python
poles, zeros = control.pzmap(G)
```
```python
time = np.arange(0,100,0.1) # time samples from 0 to 20 seconds with time spacing of 0.05 seconds
# t, xout = control.step_response(G,T=time)
t, xout = control.impulse_response(G,T=time)
```
```python
#############################################################
fig = plt.figure(figsize=(6,4))
ax = plt.gca()
plt.subplots_adjust(bottom=0.17, left=0.17, top=0.96, right=0.96)
# Change the axis units font
plt.setp(ax.get_ymajorticklabels(),fontsize=18)
plt.setp(ax.get_xmajorticklabels(),fontsize=18)
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
# Turn on the plot grid and set appropriate linestyle and color
ax.grid(True,linestyle=':', color='0.75')
ax.set_axisbelow(True)
# Define the X and Y axis labels
plt.xlabel('Time (s)', fontsize=22, weight='bold', labelpad=5)
plt.ylabel('Displacement', fontsize=22, weight='bold', labelpad=10)
plt.plot(time, xout, linewidth=2, linestyle='-', label=r'Response')
# uncomment below and set limits if needed
# plt.xlim(0,5)
# plt.ylim(-0.01,2.5)
# # Create the legend, then fix the fontsize
# leg = plt.legend(loc='upper right', ncol = 1, fancybox=True, )
# ltext = leg.get_texts()
# plt.setp(ltext,fontsize=16)
# Adjust the page layout filling the page using the new tight_layout command
plt.tight_layout(pad=0.5)
# save the figure as a high-res pdf in the current folder
# plt.savefig('plot_filename.pdf')
# plt.show()
```
```python
```
| e73985f108c4f8949bda1d25928faeb549bbeb50 | 86,248 | ipynb | Jupyter Notebook | Jupyter Notebooks/Car_suspension.ipynb | gge0866/MCHE474---Control-Systems | 8b3c6212223d104d098e8f306d46ccbba2b5082f | [
"BSD-3-Clause"
] | null | null | null | Jupyter Notebooks/Car_suspension.ipynb | gge0866/MCHE474---Control-Systems | 8b3c6212223d104d098e8f306d46ccbba2b5082f | [
"BSD-3-Clause"
] | null | null | null | Jupyter Notebooks/Car_suspension.ipynb | gge0866/MCHE474---Control-Systems | 8b3c6212223d104d098e8f306d46ccbba2b5082f | [
"BSD-3-Clause"
] | null | null | null | 223.440415 | 29,624 | 0.917934 | true | 1,363 | Qwen/Qwen-72B | 1. YES
2. YES | 0.882428 | 0.76908 | 0.678658 | __label__eng_Latn | 0.373459 | 0.415081 |
# Bond Pricing with Vasicek Model
Author:<br>
Stanislav Khrapov<br>
<a href="mailto:khrapovs@gmail.com">khrapovs@gmail.com</a><br>
http://sites.google.com/site/khrapovs/<br>
## Introduction
The following code is the example of adapting methodology of<br>
<a href = "http://onlinelibrary.wiley.com/doi/10.1111/1468-0262.00164/abstract">Duffie, D., Pan, J., & Singleton, K. J. (2000). Transform Analysis and Asset Pricing for Affine Jump-Diffusions. Econometrica, 68(6), 1343–1376.</a>
<br>
to the Vasicek model of interest rates.
Set up environment
```
import scipy.integrate as si
import numpy as np
import matplotlib.pylab as plt
import functools
```
## The Model
\begin{equation}
dr_{t}=\kappa\left(\mu-r_{t}\right)dt+\sigma dW_{t}.
\end{equation}
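For reference, the conditional distribution of $r_T$ given $r_t$ over a horizon $h = T - t$ is Gaussian,
\begin{equation}
r_{T}\mid r_{t}\sim\mathcal{N}\left(\mu+\left(r_{t}-\mu\right)e^{-\kappa h},\ \frac{\sigma^{2}}{2\kappa}\left(1-e^{-2\kappa h}\right)\right),
\end{equation}
which is what the characteristic function below encodes.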
Set up parameters
```
h = 1
kappa, mu, sigma = 1.5, .5, .1
theta = [kappa, mu, sigma]
u = np.linspace(0, 1e3, 100)  # the number of points must be an integer
```
## Characteristic function
### Analytic solution
The analytic solution for the characteristic function is exponentially affine
<p>
\begin{equation}
E_{t}\left[\exp\left(iur_{T}\right)\right]=\exp\left(\alpha_{t}\left(u\right)+\beta_{t}\left(u\right)r_{t}\right),
\end{equation}
with
<p>
\begin{eqnarray*}
\beta_{t}\left(u\right) & = & iue^{-\kappa h}\\
\alpha_{t}\left(u\right) & = & \mu iu\left(1-e^{-\kappa h}\right)-\frac{\sigma^{2}}{4\kappa}u^{2}\left(1-e^{-2\kappa h}\right)
\end{eqnarray*}
```
def alpha(u, h, theta):
kappa, mu, sigma = theta
e = np.exp( - kappa * h )
return 1j * u * mu * (1 - e) - u ** 2 * sigma ** 2 * (1 - e ** 2) / kappa / 4
def beta(u, h, theta):
kappa, mu, sigma = theta
return 1j * u * np.exp( - kappa * h )
```
Evaluate these functions on the grid.
```
a = alpha(u, h, theta)
b = beta(u, h, theta)
sol_analytic = np.array([a, b])
```
### Numeric solution
We need to solve the following system of ODE:<p>
\begin{eqnarray*}
\dot{\alpha}_{t} & = & -\kappa\mu\beta_{t}-\frac{1}{2}\sigma^{2}\beta_{t}^{2},\\
\dot{\beta}_{t} & = & \kappa\beta_{t},
\end{eqnarray*}
<p>
with terminal conditions $\beta_{T}=iu$ and $\alpha_{T}=0$.
The following function is the right-hand side of the ODE system. Since the solution is written in terms of the distance $h$ to the terminal date rather than in calendar time $t$, the derivatives are returned with a minus sign.
```
def df(t, x, theta):
kappa, mu, sigma = theta
da = - kappa * mu * x[1] - .5 * sigma ** 2 * x[1] ** 2
db = kappa * x[1]
return [-da, -db]
```
This function solves the system of complex valued ODE.
```
def f(df, u, h, theta):
dt = h / 1e1
df_args = functools.partial(df, theta = theta)
r = si.complex_ode(df_args)
sol = []
for v in u:
x0 = [0., v * 1j]
r.set_initial_value(x0, 0)
while r.successful() and r.t < h:
r.integrate(r.t + dt)
sol.append(r.y)
return np.array(sol).T
```
Separate solution into functions $\alpha_t(u)$ and $\beta_t(u)$
```
alpha_num = lambda u, h, theta: f(df, u, h, theta)[0]
beta_num = lambda u, h, theta: f(df, u, h, theta)[1]
```
Evaluate numerical solutions at specific values.
```
a = alpha_num(u, h, theta)
b = beta_num(u, h, theta)
sol_integrate = np.array([a, b])
```
### Compare solutions
Plot them.
```
fig, axes = plt.subplots(2, 2, figsize = (8, 6), sharex = True)
for sol in [sol_analytic, sol_integrate]:
axes[0,0].plot(u, sol[0].real)
axes[1,0].plot(u, sol[0].imag)
axes[0,1].plot(u, sol[1].real)
axes[1,1].plot(u, sol[1].imag)
axes[0,0].set_title('Alpha real')
axes[1,0].set_title('Alpha imag')
axes[0,1].set_title('Beta real')
axes[1,1].set_title('Beta imag')
axes[1,0].set_xlabel('u')
axes[1,1].set_xlabel('u')
axes[0,1].legend(['Analytic','Numeric'])
plt.show()
```
### Compare characteristic functions
Define conditional characteristic functions.
```
rt = 1.5
psi_analytic = lambda u: np.exp( alpha(u, h, theta) + beta(u, h, theta) * rt )
psi_numeric = lambda u: np.exp( alpha_num(u, h, theta) + beta_num(u, h, theta) * rt )
```
Plot them.
```
x = np.linspace(0, 60, 100)
psi_a = psi_analytic(x)
psi_n = psi_numeric(x)
fig, axes = plt.subplots(2, 1, figsize = (8, 6), sharex = True)
for psi in [psi_a, psi_n]:
axes[0].plot(x, psi.real)
axes[1].plot(x, psi.imag)
axes[1].set_xlabel('u')
axes[0].set_title('Real')
axes[1].set_title('Imag')
axes[0].legend(['Analytic','Numeric'])
plt.show()
```
### Compare transition densities
Fast Fourier inversion of the characteristic function. Returns the density.
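The code below approximates the standard inversion formula for the density of a real-valued random variable, evaluated on a grid $x_k = A + \lambda k$, $v_k = \eta k$, so that the sum collapses to an FFT:
\begin{equation}
f(x)=\frac{1}{2\pi}\int_{-\infty}^{\infty}e^{-iux}\psi(u)du
=\frac{1}{\pi}\,\mathrm{Re}\int_{0}^{\infty}e^{-iux}\psi(u)du
\approx\frac{\eta}{\pi}\,\mathrm{Re}\sum_{k=0}^{N-1}e^{-iv_{k}x}\psi\left(v_{k}\right).
\end{equation}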
```
def CFinverse(psi, A = -1e2, B = 1e2, N = 1e5):
eta = (N - 1) / N * 2 * np.pi / (B - A)
lmbd = (B - A) / (N - 1)
k = np.arange(0, N)
x = A + lmbd * k
v = eta * k
y = psi(v) * np.exp(- 1j * A * v) * eta / np.pi
f = np.fft.fft(y)
return x, f.real
```
Compute densities. Note how much time was spent on each operation.
```
N = 1e4
%time x, density_analytic = CFinverse(psi_analytic, N = N)
%time x, density_numeric = CFinverse(psi_numeric, N = N)
```
CPU times: user 8.27 ms, sys: 7 µs, total: 8.27 ms
Wall time: 7.86 ms
CPU times: user 40.8 s, sys: 15.9 ms, total: 40.8 s
Wall time: 40.8 s
Plot transition densities.
```
plt.plot(x, density_analytic)
plt.plot(x, density_numeric)
plt.xlim([0, 2])
plt.legend(['Analytic','Numeric'])
plt.xlabel('x')
plt.ylabel('f(x)')
plt.show()
```
# Bond Pricing
Thanks to Duffie, Pan, & Singleton (2000, Econometrica) we know how to compute the following transform for AJD models:
<p>
\begin{equation}
\mathbb{T}^{\chi}\left(u,Y_{t},t,T\right)=E_{t}\left[\exp\left(-\int_{t}^{T}r_{s}ds\right)\exp\left(ur_{T}\right)\right]
=\exp\left(\alpha_{t}(u)+\beta_{t}(u)r_{t}\right).
\end{equation}
<p>
But in order to compute the bond price we obviously need to set $u=0$ in this transform:
<p>
\begin{equation}
B_{t,T}=E_{t}\left[\exp\left(-\int_{t}^{T}r_{s}ds\right)\right]=\mathbb{T}^{\chi}\left(0,r_{t},t,T\right)=\exp\left(\alpha_{t}\left(0\right)+\beta_{t}\left(0\right)r_{t}\right),
\end{equation}
<p>
In case of Vasicek model we need to solve the following system of ODE
<p>
\begin{eqnarray*}
\dot{\beta}_{t} & = & 1+\kappa\beta_{t},\\
\dot{\alpha}_{t} & = & -\kappa\mu\beta_{t}-\frac{1}{2}\sigma^{2}\beta_{t}^{2},
\end{eqnarray*}
with terminal conditions $\beta_{T}=u$ and $\alpha_{T}=0$.
## Analytic solution
The analytic solution to this problem is known to be
<p>
\begin{eqnarray*}
\beta_{t}\left(0\right) & = & \frac{1}{\kappa}\left(e^{-\kappa h}-1\right),\\
\alpha_{t}\left(0\right) & = & -\left(\mu-\frac{\sigma^{2}}{2\kappa^{2}}\right)\left(\beta_{t}\left(0\right)+h\right)-\frac{\sigma^{2}}{4\kappa}\beta_{t}^{2}\left(0\right).
\end{eqnarray*}
It is coded below.
```
def beta(h, theta):
kappa, mu, sigma = theta
return ( np.exp( - kappa * h ) - 1 ) / kappa
def alpha(h, theta):
kappa, mu, sigma = theta
a = - (mu - .5 * sigma ** 2 / kappa ** 2) * (beta(h, theta) + h)
a -= .25 * sigma ** 2 / kappa * beta(h, theta) ** 2
return a
```
Most of the time we are actually interested in yields, rather than in bond prices directly.
<p>
\begin{equation}
y_{t,T}= -\frac{1}{h}\log B_{t,T}=-\frac{1}{h}\alpha_{t}\left(0\right)-\frac{1}{h}\beta_{t}\left(0\right)r_{t}
=A\left(h\right) + B\left(h\right)r_t
\end{equation}
```
A_exact = lambda h, theta: - alpha(h, theta) / h
B_exact = lambda h, theta: - beta(h, theta) / h
```
We can compute these over a range of horizons $H$ using the analytic solution.
```
H = np.linspace(.1, 10, 100)
a = A_exact(H, theta)
b = B_exact(H, theta)
sol_analytic = np.array([a, b])
```
## Numeric solution
For numerical solution we need to set up the system of ODEs.
```
def df(t, x, theta):
kappa, mu, sigma = theta
da = - kappa * mu * x[1] - .5 * sigma ** 2 * x[1] ** 2
db = 1 + kappa * x[1]
return [-da, -db]
```
The solver is adjusted since, for bond pricing, we only need the values of the functions $\alpha_t(u)$ and $\beta_t(u)$ at $u = 0$, but for a variety of horizons $h$.
```
def f(df, H, theta):
df_args = functools.partial(df, theta = theta)
r = si.ode(df_args)
sol = []
for h in H:
dt = h / 1e2
x0 = [0, 0]
r.set_initial_value(x0, 0)
while r.successful() and r.t < h:
r.integrate(r.t + dt)
sol.append(r.y)
return np.array(sol).T
```
The solutions are wrapped in the functions below.
```
alpha_num = lambda H, theta: f(df, H, theta)[0]
beta_num = lambda H, theta: f(df, H, theta)[1]
A_numeric = lambda H, theta: - alpha_num(H, theta) / H
B_numeric = lambda H, theta: - beta_num(H, theta) / H
```
We can evaluate these at a specific time interval.
```
a = A_numeric(H, theta)
b = B_numeric(H, theta)
sol_integrate = np.array([a, b])
```
## Comparison
Now plot the solutions and the resulting yield.
```
fig, axes = plt.subplots(3, 1, figsize = (8, 6), sharex = True)
for sol in [sol_analytic, sol_integrate]:
axes[0].plot(H, sol[0])
axes[1].plot(H, sol[1])
axes[2].plot(H, sol[0] + sol[1] * rt)
axes[0].set_title('A')
axes[1].set_title('B')
axes[2].set_title('Yields conditional on $r_0=$ ' + str(rt))
axes[2].legend(['Analytic','Numeric'])
axes[2].set_xlabel('H')
axes[2].set_ylabel('%')
plt.show()
```
| baae5af6e65a4ec165129909f6ed9b7b00b44f1b | 173,016 | ipynb | Jupyter Notebook | Vasicek.ipynb | khrapovs/finmetrix-code | f278df1c15a225385846c2f0d7a6700c5737e901 | [
"MIT"
] | 4 | 2015-07-03T16:34:29.000Z | 2019-05-09T13:10:26.000Z | Vasicek.ipynb | khrapovs/finmetrix-code | f278df1c15a225385846c2f0d7a6700c5737e901 | [
"MIT"
] | null | null | null | Vasicek.ipynb | khrapovs/finmetrix-code | f278df1c15a225385846c2f0d7a6700c5737e901 | [
"MIT"
] | 2 | 2016-04-01T05:33:44.000Z | 2020-07-12T06:58:25.000Z | 222.671815 | 54,523 | 0.89431 | true | 3,222 | Qwen/Qwen-72B | 1. YES
2. YES | 0.951863 | 0.849971 | 0.809056 | __label__eng_Latn | 0.597799 | 0.718042 |
---
# Section 3.2: Orthogonal Matrices
---
## Inner-product notation
We will use the following notation for the **inner-product** between vectors $x, y \in \mathbb{R}^n$:
$$
\langle x, y \rangle = \sum_{i=1}^n x_i y_i = x^T y = \|x\|_2 \|y\|_2 \cos\theta,
$$
where $0 \leq \theta \leq \pi$ is the **angle** between $x$ and $y$.
**Note:** $\|x\|_2 = \sqrt{\langle x, x \rangle}$.
---
## Orthogonal matrix definition
$Q \in \mathbb{R}^{n \times n}$ is **orthogonal** if the columns of $Q$ are:
1. **unit-length**:
$$
\|q_i\|_2 = 1, \qquad \forall i,
$$
2. **mutually orthogonal**:
$$
\langle q_i, q_j \rangle = 0, \qquad i \neq j
$$
This is equivalent to saying that
$$Q^T Q = I$$
which is equivalent to
$$Q^{-1} = Q^T.$$
The rows of $Q$ are also unit-length and mutually orthogonal since $QQ^T = I$.
---
## Exercise
1. Prove that the product of orthogonal matrices is orthogonal.
2. Prove that the transpose of an orthogonal matrix is orthogonal.
### Part 1
Suppose that $Q_i \in \mathbb{R}^{n \times n}$ is orthogonal for $i = 1,\ldots,k$. Let
$$
Q = Q_1 Q_2 \cdots Q_k.
$$
Now we want to show that $Q$ is orthogonal. To show this, we compute
\begin{align}
Q^T Q
& = (Q_1 Q_2 \cdots Q_k)^T (Q_1 Q_2 \cdots Q_k) \\
& = (Q_k^T \cdots Q_2^T Q_1^T) (Q_1 Q_2 \cdots Q_k) \\
& = Q_k^T \cdots Q_2^T (Q_1^T Q_1) Q_2 \cdots Q_k \\
& = Q_k^T \cdots Q_2^T I Q_2 \cdots Q_k \\
& = Q_k^T \cdots Q_3^T (Q_2^T Q_2) Q_3 \cdots Q_k \\
& = Q_k^T \cdots Q_3^T Q_3 \cdots Q_k \\
& \quad \vdots \\
& = Q_k^T Q_k \\
& = I.
\end{align}
Therefore, $Q^T Q = I$, so $Q$ is orthogonal.
### Part 2
Suppose that $Q \in \mathbb{R}^{n \times n}$ is orthogonal. To show that $Q^T$ is orthogonal, we need to show that $(Q^T)^T (Q^T) = I$. So, we compute
$$
(Q^T)^T (Q^T) = Q Q^T = Q Q^{-1} = I.
$$
Therefore, $Q^T$ is also an orthogonal matrix.
---
> ## Theorem:
>
> If $Q \in \mathbb{R}^{n \times n}$ is orthogonal, then:
>
> 1. $\langle Qx, Qy \rangle = \langle x, y \rangle$
>
> 2. $\|Qx\|_2 = \|x\|_2$
>
> This theorem states that any **orthogonal transformation**, $x \mapsto Qx$, preserves angles and lengths.
---
## Exercise:
Prove the theorem.
### Part 1
Since $Q$ is an orthogonal matrix, we have
\begin{align}
\langle Qx, Qy \rangle
&= (Qx)^T (Qy) \\
&= x^T Q^T Q y \\
&= x^T I y \\
&= x^T y \\
&= \langle x, y \rangle.
\end{align}
### Part 2
Using part 1, we have
\begin{align}
\| Q x \|_2
&= \sqrt{ \langle Q x, Q x \rangle } \\
&= \sqrt{ \langle x, x \rangle } \\
&= \| x \|_2. \\
\end{align}
---
## Rotation matrices
A $2 \times 2$ rotation matrix has the form
$$
Q = \begin{bmatrix}
\cos\theta & -\sin\theta \\
\sin\theta & \cos\theta
\end{bmatrix}.
$$
We can use rotation matrices to introduce zeros into vectors.
---
## Exercise
1. Prove that $2 \times 2$ rotation matrices are orthogonal.
2. Find a rotation matrix $Q$ such that
$$
Q^T \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} y_1 \\ 0 \end{bmatrix}
$$
where $y_1 \geq 0$.
### Part 1
Let $Q$ be a $2 \times 2$ rotation matrix, as above. Then
$$
\begin{align}
Q^T Q
&= \begin{bmatrix}
\cos\theta & -\sin\theta \\
\sin\theta & \cos\theta
\end{bmatrix}^T
\begin{bmatrix}
\cos\theta & -\sin\theta \\
\sin\theta & \cos\theta
\end{bmatrix} \\
&= \begin{bmatrix}
\cos\theta & \sin\theta \\
-\sin\theta & \cos\theta
\end{bmatrix}
\begin{bmatrix}
\cos\theta & -\sin\theta \\
\sin\theta & \cos\theta
\end{bmatrix} \\
&= \begin{bmatrix}
\cos^2\theta + \sin^2\theta & -\cos\theta\sin\theta + \sin\theta\cos\theta \\
-\sin\theta\cos\theta + \cos\theta\sin\theta & \sin^2\theta + \cos^2\theta
\end{bmatrix} \\
&= \begin{bmatrix}
1 & 0 \\
0 & 1
\end{bmatrix} \\
&= I.
\end{align}
$$
Therefore, $Q$ is an orthogonal matrix.
### Part 2
Let
$$
Q =
\begin{bmatrix}
c & -s \\ s & c
\end{bmatrix},
$$
where $c^2 + s^2 = 1$. Then
$$
Q^T \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} y_1 \\ 0 \end{bmatrix}
$$
implies that
$$
\begin{bmatrix}
c & -s \\ s & c
\end{bmatrix}^T
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} y_1 \\ 0 \end{bmatrix}.
$$
Thus,
$$
\begin{bmatrix}
c x_1 + s x_2 \\ -s x_1 + c x_2
\end{bmatrix} = \begin{bmatrix} y_1 \\ 0 \end{bmatrix}.
$$
Also, since $Q^T$ is orthogonal,
$$
\left\| \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \right \|_2 = \left\| \begin{bmatrix} y_1 \\ 0 \end{bmatrix} \right \|_2,
$$
which implies that $y_1 = \sqrt{x_1^2 + x_2^2}$ since $y_1 \ge 0$.
If $x = 0$, then $y_1 = \sqrt{x_1^2 + x_2^2} = 0$, and we would let $c = 1$ and $s = 0$.
Now, we assume that $x \ne 0$. Thus, $y_1 > 0$.
Multiply the first equation by $s$ and the second equation by $c$. Then,
\begin{align}
c s x_1 + s^2 x_2 &= s y_1 \\
-c s x_1 + c^2 x_2 &= 0. \\
\end{align}
Summing these equations gives us
$$
(s^2 + c^2) x_2 = s y_1.
$$
Since we want $s^2 + c^2 = 1$, we have $x_2 = s y_1$, so
$$
s = \frac{x_2}{y_1}.
$$
In a similar way, we find that
$$
c = \frac{x_1}{y_1}.
$$
---
Another way to find $s$ and $c$ is to rewrite
$$
\begin{bmatrix}
c x_1 + s x_2 \\ -s x_1 + c x_2
\end{bmatrix} = \begin{bmatrix} y_1 \\ 0 \end{bmatrix}
$$
as
$$
\begin{bmatrix}
x_1 & x_2 \\ x_2 & -x_1
\end{bmatrix} \begin{bmatrix} c \\ s \end{bmatrix} = \begin{bmatrix} y_1 \\ 0 \end{bmatrix}.
$$
Then, just solve the above system for $c$ and $s$ by multiplying both sides by the inverse of the coefficient matrix.
Since
$$
\begin{bmatrix}
a & b \\ c & d
\end{bmatrix}^{-1} =
\frac{1}{ad - bc}
\begin{bmatrix}
d & -b \\ -c & a
\end{bmatrix},
$$
we have that
$$
\begin{bmatrix}
x_1 & x_2 \\ x_2 & -x_1
\end{bmatrix}^{-1} =
\frac{1}{-x_1^2 - x_2^2}
\begin{bmatrix}
-x_1 & -x_2 \\ -x_2 & x_1
\end{bmatrix} =
\frac{1}{x_1^2 + x_2^2}
\begin{bmatrix}
x_1 & x_2 \\ x_2 & -x_1
\end{bmatrix}.
$$
Therefore,
$$
\begin{bmatrix}
c \\ s
\end{bmatrix} =
\frac{1}{x_1^2 + x_2^2}
\begin{bmatrix}
x_1 & x_2 \\ x_2 & -x_1
\end{bmatrix}
\begin{bmatrix}
y_1 \\ 0
\end{bmatrix} =
\frac{1}{y_1^2}
\begin{bmatrix}
x_1 y_1 \\ x_2 y_1
\end{bmatrix} =
\begin{bmatrix}
x_1/y_1 \\ x_2/y_1
\end{bmatrix}.
$$
```julia
using LinearAlgebra
```
```julia
x = randn(2)
```
```julia
y1 = norm(x)
```
```julia
c, s = x[1]/y1, x[2]/y1
```
```julia
c, s = x/norm(x)
```
```julia
c^2 + s^2
```
```julia
Q = [c -s; s c]
```
```julia
Q'x
```
---
## $QR$-decomposition of a $2 \times 2$ matrix $A$
Suppose
$$
A = \begin{bmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{bmatrix}.
$$
Let $Q$ be the rotation matrix that introduces a zero in the first column of $A$:
$$
Q^T \begin{bmatrix} a_{11} \\ a_{21} \end{bmatrix} = \begin{bmatrix} r_{11} \\ 0 \end{bmatrix}.
$$
Let
$$
\begin{bmatrix} r_{12} \\ r_{22} \end{bmatrix} = Q^T \begin{bmatrix} a_{12} \\ a_{22} \end{bmatrix}.
$$
Let
$$
R = \begin{bmatrix}
r_{11} & r_{12} \\
0 & r_{22}
\end{bmatrix}.
$$
Then $Q^T A = R$, and since $Q$ is orthogonal,
$$
A = QR.
$$
---
## Exercise
Compute the $QR$-decomposition of
$$
A = \begin{bmatrix}
1 & 2 \\
1 & 3
\end{bmatrix}
$$
and check your answer using the `qr` function in Julia.
```julia
A = [1 2; 1 3.0]
```
```julia
function formQ(x)
c, s = x/norm(x)
Q = [c -s; s c]
end
```
```julia
Q = formQ(A[:,1])
```
```julia
R = Q'A
```
```julia
A - Q*R
```
```julia
F = qr(A)
```
```julia
A - F.Q*F.R
```
---
## Givens rotations
A **Givens rotation** matrix is
$$
Q =
\begin{bmatrix}
1 \\
&\ddots\\
&&1\\
&&&c&&&&-s\\
&&&&1\\
&&&&&\ddots\\
&&&&&&1\\
&&&s&&&&c\\
&&&&&&&&1\\
&&&&&&&&&\ddots\\
&&&&&&&&&&1\\
\end{bmatrix},
$$
where $c = \cos\theta$ and $s = \sin\theta$. This matrix rotates the $(x_i,x_j)$ plane by an angle of $\theta$.
These matrices can be used to introduce zeros in general $n \times n$ matrices.
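For example, to zero out entry $x_j$ against entry $x_i$ (assuming they are not both zero), choose
$$
c = \frac{x_i}{\sqrt{x_i^2 + x_j^2}}, \qquad s = \frac{x_j}{\sqrt{x_i^2 + x_j^2}},
$$
so that applying $Q^T$ maps $(x_i, x_j) \mapsto \left(\sqrt{x_i^2 + x_j^2},\, 0\right)$ and leaves all other entries unchanged, exactly as in the $2 \times 2$ exercise above.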
---
## Exercise
Use Givens rotations to compute the $QR$-decomposition of
$$
A = \begin{bmatrix}
1 & 2 & 0 \\
0 & 1 & 3 \\
1 & 3 & 0
\end{bmatrix}.
$$
```julia
A = [ 1 2 0; 0 1 3; 1 3 0.0 ]
```
```julia
x = A[[1,3],1]
```
```julia
c, s = x/norm(x)
```
```julia
Q1 = [
c 0 -s
0 1 0
s 0 c
]
```
```julia
A1 = Q1'A
```
```julia
x = A1[[2,3],2]
```
```julia
c, s = x/norm(x)
```
```julia
Q2 = [
1 0 0
0 c -s
0 s c
]
```
```julia
A2 = Q2'A1
```
```julia
R = A2
```
```julia
Q = Q1*Q2
```
```julia
A - Q*R
```
```julia
qr(A)
```
```julia
UpperTriangular(R)
```
---
## Solving $Ax = b$ using $QR$
If $Ax = b$ and $A = QR$, then
$$
Q(Rx) = b.
$$
If we let $c = Rx$, then we have $Qc = b$.
Thus, we have the following algorithm for solving $Ax = b$:
1. Let $c = Q^Tb$.
2. Solve $Rx = c$ using backward substitution.
---
## Exercise
Use the $QR$-decomposition of $A$ to solve $Ax = b$.
$$
A = \begin{bmatrix}
1 & 2 \\
1 & 3
\end{bmatrix},
\qquad
b = \begin{bmatrix} 1 \\ 2 \end{bmatrix}.
$$
```julia
A = [1 2; 1 3.0]
b = [1, 2.0]
Q, R = qr(A)
```
```julia
c = Q'b
```
```julia
x = R\c
```
```julia
A*x - b
```
---
## Reflection matrices
Another way to create zeros in a matrix is by the [Householder reflection transformation](https://en.wikipedia.org/wiki/Householder_transformation):
$$
Q = I - 2uu^T, \qquad \|u\|_2 = 1.
$$
Let $L$ be the set of vectors $v$ that are orthogonal to the unit vector $u$,
$$
L = \left\{ v \in \mathbb{R}^n : u^T v = 0 \right\}.
$$
Then $L$ is a **hyperplane** containing the origin, and $Q$ reflects vectors $x$ across $L$.
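To see why, split $x$ into its component along $u$ and its component in $L$:
$$
x = (u^T x)\,u + \left(x - (u^T x)\,u\right),
\qquad
Qx = (I - 2uu^T)x = -(u^T x)\,u + \left(x - (u^T x)\,u\right),
$$
so the component in $L$ is unchanged while the component along $u$ flips sign, which is precisely a reflection across $L$.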
---
## Properties of $Q = I - 2uu^T$
1. $Qu = -u$
2. If $v \in L$, then $Qv = v$.
3. $Q = Q^T$
4. $Q^TQ = I$
5. $Q^{-1} = Q$
---
## Exercise
Prove the above properties.
### Part 1
Since $\|u\|_2 = 1$, we have
\begin{align}
Q u
&= (I - 2 u u^T) u \\
&= u - 2 u (u^T u) \\
&= u - 2 u \|u\|_2^2 \\
&= u - 2 u \\
&= -u.
\end{align}
### Part 2
Since $v \in L$, we have that $u^T v = 0$. Thus,
\begin{align}
Q v
&= (I - 2 u u^T) v \\
&= v - 2 u (u^T v) \\
&= v - 2 u (0) \\
&= v.
\end{align}
### Part 3
We have that
\begin{align}
Q^T
&= (I - 2 u u^T)^T \\
&= I^T - 2 (u u^T)^T \\
&= I - 2 (u^T)^T u^T \\
&= I - 2 u u^T \\
&= Q. \\
\end{align}
Therefore, $Q$ is symmetric.
### Part 4
Since $Q^T = Q$, we have
\begin{align}
Q^T Q
&= Q Q \\
&= (I - 2 u u^T)(I - 2 u u^T) \\
&= I - 2 u u^T - 2 u u^T + 4 u (u^T u) u^T \\
&= I - 2 u u^T - 2 u u^T + 4 u u^T \\
&= I. \\
\end{align}
Therefore, $Q$ is orthogonal.
### Part 5
Since $QQ = I$, the matrix $Q$ is its own inverse, so $Q^{-1} = Q$.
---
## Reflecting $x$ to $y$
If $\|u\|_2 \neq 1$, then the **Householder reflector** is
$$
Q = I - \gamma uu^T, \qquad \gamma = \frac{2}{\|u\|_2^2}.
$$
If $x, y \in \mathbb{R}^n$ such that $\|x\|_2 = \|y\|_2$, then the reflector $Q$ using
$$u = x - y$$
satisfies
$$
Qx = y.
$$
---
## Exercise
Test that $Qx = y$ on random vectors $x$ and $y$.
```julia
n = 4
x = randn(n)
y = randn(n)
L = 10*rand()
x *= L/norm(x)
y *= L/norm(y)
norm(x) ≈ norm(y)
```
```julia
u = x - y
γ = 2/dot(u,u)
Q = I - γ*(u*u')
```
```julia
Qmap(v) = v - (γ*dot(u,v))*u
```
```julia
[Q*x y]
```
```julia
norm(Q*x - y)
```
```julia
[Qmap(x) y]
```
```julia
norm(Qmap(x) - y)
```
---
## Creating zeros using reflectors
We want the reflector $Q$ that reflects $x$ to $y$, where
$$
x =
\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix},
\qquad
y =
\begin{bmatrix} -\tau \\ 0 \\ \vdots \\ 0 \end{bmatrix},
\qquad
\tau = \mathrm{sign}(x_1)\|x\|_2.
$$
We define $u$ as
$$
u = \frac{x - y}{\tau + x_1} =
\begin{bmatrix} 1 \\ x_2/(\tau + x_1) \\ \vdots \\ x_n/(\tau + x_1) \end{bmatrix}.
$$
Note that we have divided by $\tau + x_1$ to ensure that $u_1 = 1$.
Since $\tau$ and $x_1$ have the same sign, the calculation $\tau + x_1$ avoids catastrophic cancellation.
Letting
$$
Q = I - \gamma uu^T,
\qquad
\gamma = \frac{2}{\|u\|_2^2},
$$
we have
$$
Qx = \begin{bmatrix} -\tau \\ 0 \\ \vdots \\ 0 \end{bmatrix}.
$$
---
## Exercise
Prove that
$$\gamma = \frac{\tau + x_1}{\tau}.$$
### Proof.
First note that
$$
(\tau + x_1) u =
\begin{bmatrix} \tau + x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}.
$$
Now, taking the norm squared of both sides, we have
$$
\|(\tau + x_1) u \|_2^2 = (\tau + x_1)^2 + x_2^2 + \cdots + x_n^2.
$$
Thus,
$$
|\tau + x_1|^2 \|u\|_2^2 = \tau^2 + 2 \tau x_1 + x_1^2 + \cdots + x_n^2.
$$
Then we have
$$
(\tau + x_1)^2 \|u\|_2^2 = \tau^2 + 2 \tau x_1 + \|x\|_2^2.
$$
Since $\tau = \mathrm{sign}(x_1) \|x\|_2$, we have that $\tau^2 = \|x\|_2^2$. Therefore,
$$
(\tau + x_1)^2 \|u\|_2^2 = \tau^2 + 2 \tau x_1 + \tau^2.
$$
Thus,
$$
(\tau + x_1)^2 \|u\|_2^2 = 2 \tau( \tau + x_1 ).
$$
Since $\tau + x_1 \ne 0$, we can divide both sides by $\tau + x_1$, and we get
$$
(\tau + x_1) \|u\|_2^2 = 2 \tau.
$$
Rearranging, we have that
$$
\frac{\tau + x_1}{\tau} = \frac{2}{\|u\|_2^2}.
$$
Since $\gamma = 2/\|u\|_2^2$, we have that
$$
\frac{\tau + x_1}{\tau} = \gamma.
$$
Q.E.D.
---
## Exercise
Test above method for generating $Q$ on a random vector $x$.
```julia
n = 5
x = randn(5)
```
```julia
τ = sign(x[1])*norm(x)
```
```julia
u = [1; x[2:end]/(τ + x[1])]
```
```julia
γ = (τ + x[1])/τ
```
```julia
2/dot(u,u)
```
```julia
Q = I - (γ*u)*u'
```
```julia
Q*x
```
```julia
Qmap(v) = v - (γ*dot(u,v))*u
```
```julia
Qmap(x)
```
---
## `house`
We can now write a function to compute the $u$ and $\gamma$ of the Householder reflector $Q = I - \gamma uu^T$:
```julia
u, γ, τ = house(x)
```
```julia
function house(x)
u = copy(x)
τ = norm(x)
if τ == 0.0
γ = 0.0
else
if x[1] < 0
τ = -τ # τ = sign(x[1])*norm(x)
end
γ = τ + x[1] # γ temporarily stores τ + x[1]
u[1] = 1.0 # u normalized to u[1] = 1
u[2:end] /= γ # divide u[2:end] by τ + x[1]
γ /= τ # γ = (τ + x[1])/τ
end
return u, γ, τ
end
```
```julia
n = 5
x = randn(n)
u, γ, τ = house(x)
Q = I - γ*(u*u')
```
```julia
issymmetric(Q)
```
```julia
Q*Q
```
```julia
[x Q*x]
```
```julia
Qmap(x)
```
---
## `housetimes`
The way we computed $Qx$ in the above numerical example was inefficient. Note that
$$
Qx = \left(I - \gamma uu^T\right)x = x - \left[\gamma \left(u^T x\right)\right] u.
$$
```julia
housetimes(x::Vector, u, γ) = x - (γ*dot(u, x))*u
```
```julia
u, γ, τ = house(x)
housetimes(x, u, γ)
```
---
## Exercise
Count the number of flops:
1. To form $Q$ and compute $Qx$.
2. To compute $x - \left[\gamma \left(u^T x\right)\right] u$.
**Solution:**
1. $3n^2 + 2n$ flops
2. $4n + 1$ flops
### Part 1
Recall that $Q = I - \gamma u u^T$.
1. Computing $y = (-\gamma) u$ requires $n$ multiplications.
2. Computing $Q = y u^T$ requires $n^2$ multiplications.
3. Computing $Q = I + Q$ reqires $n$ additions (along the diagonal).
4. Computing $Qx$ requires $2n^2$ operations.
So, in total we have $3n^2 + 2n$ flops for forming the $n \times n$ matrix $Q$ and computing matrix-vector multiplication $Qx$.
### Part 2
To compute $x - \left[\gamma \left(u^T x\right)\right] u$, we do the following.
1. Computing $\delta = u^T x$ requires $n$ multiplications and $n$ additions.
2. Computing $\mu = \gamma \delta$ requires one multiplication.
3. Computing $y = \mu u$ requires $n$ multiplications.
4. Computing $z = x - y$ requires $n$ subtractions.
So, in total we have $4n + 1$ flops for computing $x - \left[\gamma \left(u^T x\right)\right] u$.
---
In the algorithm for computing the $QR$ decomposition of a matrix $A$, we will need to compute $QB$ where $B$ is a matrix.
$$
QB = B - (\gamma u) \left(u^TB\right)
$$
```julia
housetimes(B::Matrix, u, γ) = B - (γ*u)*(u'*B)
```
```julia
methods(housetimes)
```
```julia
n = 5
B = rand(n,n)
```
```julia
u, γ, τ = house(B[:,1])
housetimes(B, u, γ)
```
---
## Exercise
Use Householder reflectors to numerically compute the $QR$-decomposition of
$$
A = \begin{bmatrix}
1 & 2 & 0 \\
0 & 1 & 3 \\
1 & 3 & 0
\end{bmatrix}.
$$
Check your answer using the `qr` function in Julia.
```julia
A = [1 2 0; 0 1 3; 1 3 0.0]
R = copy(A)
```
```julia
u, γ, τ = house(R[:,1])
R[1,1] = -τ
R[2:3,1] .= 0
R[:,2:3] = housetimes(R[:,2:3], u, γ)
R
```
```julia
u, γ, τ = house(R[2:3,2])
R[2,2] = -τ
R[3,2] = 0
R[2:3,3] = housetimes(R[2:3,3], u, γ)
R
```
```julia
qr(A)
```
---
## The $QR$ Decomposition Algorithm
$$A_0 = A$$
$$
A_1 = Q_1A =
\left[\begin{array}{c|c}
-\tau_1 & a_1^T \\ \hline
0 & \hat{A}_1
\end{array}\right],
\qquad
Q_1 = I_n - \gamma_1 u_1 u_1^T
$$
$$
A_2 = Q_2Q_1A =
\left[\begin{array}{c|c}
-\tau_1 & a_1^T \\ \hline
0 &
\begin{array}{c|c}
-\tau_2 & a_2^T \\ \hline
0 & \hat{A}_2
\end{array}
\end{array}\right],
\qquad
Q_2 =
\left[\begin{array}{c|c}
1 & \\\hline
& I_{n-1} - \gamma_2 u_2 u_2^T
\end{array}\right]
$$
$$
A_3 = Q_3Q_2Q_1A =
\left[\begin{array}{c|c}
-\tau_1 & a_1^T \\ \hline
0 &
\begin{array}{c|c}
-\tau_2 & a_2^T \\ \hline
0 &
\begin{array}{c|c}
-\tau_3 & a_3^T \\ \hline
0 & \hat{A}_3
\end{array}
\end{array}
\end{array}\right],
\qquad
Q_3 =
\left[\begin{array}{c|c}
I_2 & \\\hline
& I_{n-2} - \gamma_3 u_3 u_3^T
\end{array}\right]
$$
$$\vdots$$
$$
A_{n-1} = Q_{n-1} \cdots Q_1A =
\left[\begin{array}{c|c}
-\tau_1 & a_1^T \\ \hline
0 &
\begin{array}{c|c}
-\tau_2 & a_2^T \\ \hline
0 &
\begin{array}{c|c}
-\tau_3 & a_3^T \\ \hline
0 &
\begin{array}{c|c}
\ddots & \ddots \\ \hline
0 &
\begin{array}{c|c}
-\tau_{n-1} & a_{n-1}^T \\ \hline
0& \hat{A}_{n-1}
\end{array}
\end{array}
\end{array}
\end{array}
\end{array}\right] = R
$$
We then let $Q = Q_1Q_2 \cdots Q_{n-1}$ and obtain $A = QR$.
---
## Storing $u_i$'s and $\gamma_i$'s
Each $u_i$ is normalized so that
$$
u_i = \begin{bmatrix} 1\\*\\\vdots\\* \end{bmatrix}.
$$
Thus we do not need to store the first entry since it is always $1$.
The rest of the entries of $u_i$ can be stored where the zeros are created.
To store the $\gamma_i$'s, we create a separate vector
$$
\gamma = \begin{bmatrix} \gamma_1\\\vdots\\\gamma_{n-1} \end{bmatrix}.
$$
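For instance, after the algorithm finishes, the array that originally held $A$ contains $R$ on and above the diagonal and the stored tails of the $u_k$ strictly below it (sketched here for $n = 4$):
$$
V =
\begin{bmatrix}
r_{11} & r_{12} & r_{13} & r_{14} \\
(u_1)_2 & r_{22} & r_{23} & r_{24} \\
(u_1)_3 & (u_2)_2 & r_{33} & r_{34} \\
(u_1)_4 & (u_2)_3 & (u_3)_2 & r_{44}
\end{bmatrix},
\qquad r_{kk} = -\tau_k \ \text{for } k < n.
$$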
---
## `myqr`
```julia
struct myQRfactorization
V::Matrix{Float64}
γ::Vector{Float64}
end
function myqr(A::Matrix{Float64})
m, n = size(A)
m == n || error("This QR decomposition algorithm requires a square input matrix.")
V = copy(A)
γ = zeros(n-1)
for k = 1:n-1
u, γ[k], τ = house(V[k:n,k]) # compute the Householder reflector I - γuu'
V[k,k] = -τ # diagonal entries become -τ
V[k+1:n,k] = u[2:end] # store u's in the strictly lower-triangular part of V
V[k:n,k+1:n] -= (γ[k]*u)*(u'*V[k:n,k+1:n]) # housetimes
end
myQRfactorization(V, γ)
end
```
```julia
n = 5
A = rand(n, n)
myF = myqr(A)
```
```julia
F = qr(A)
```
```julia
UpperTriangular(myF.V)
```
---
## Flop count of $QR$ Decomposition Algorithm
In each iteration, we need to compute `housetimes`:
1. $\left(I_n - \gamma_1 u_1 u_1^T\right) A_{0}[1:n,\ 2:n]$
2. $\left(I_{n-1} - \gamma_2 u_2 u_2^T\right) A_{1}[2:n,\ 3:n]$
3. $\left(I_{n-2} - \gamma_3 u_3 u_3^T\right) A_{2}[3:n,\ 4:n]$
$\qquad\vdots$
In iteration $k$, we compute
$$
\left(I_{n-k+1} - \gamma_k u_k u_k^T\right) A_{k-1}[k:n,\ k+1:n].
$$
This operation requires approximately $4(n - k + 1)^2$ flops if done efficiently.
We also need to compute $\gamma_k$ and $u_k$ from $A_{k-1}[k:n, k]$, but this is only $O(n)$.
Therefore, the $QR$ Decomposition Algorithm requires
$$
\sum_{k=1}^{n-1} 4(n-k+1)^2 \approx \int_1^{n-1} 4(n-x+1)^2 dx = \frac{4}{3}n^3.
$$
This does not include forming $Q$ though.
---
## Forming $Q$
We can use the $u_i$'s and the $\gamma_i$'s to compute
$$QB = Q_1Q_2\cdots Q_{n-1} B$$
$$Q^T B = Q_{n-1}\cdots Q_2 Q_1 B$$
efficiently without forming $Q$.
We can form $Q$ by computing $QI_n$:
$$
\begin{align}
Q = QI_n &= Q_1 Q_2\cdots Q_{n-1} I_n\\
&=
\left[\begin{array}{c}
I_{n} - \gamma_1 u_1 u_1^T
\end{array}\right]
\left[\begin{array}{c|c}
I_1 & \\\hline
& I_{n-1} - \gamma_2 u_2 u_2^T
\end{array}\right]
\cdots
\left[\begin{array}{c|c}
I_{n-2} & \\\hline
& I_{2} - \gamma_{n-1} u_{n-1} u_{n-1}^T
\end{array}\right] I_n
\end{align}
$$
Done efficiently (from right to left), this calculation requires an additional $\frac43n^3$ flops.
---
## `Qtimes` and `formQ`
```julia
function Qtimes!(F::myQRfactorization, B::Matrix; T=false)
n = size(F.V, 1)
cols = T ? (1:n-1) : (n-1:-1:1)
for k = cols
γk = F.γ[k]
uk = [1.0; F.V[k+1:n,k]]
B[k:n,k:n] -= (γk*uk)*(uk'*B[k:n,k:n])
end
B
end
Qtimes(F::myQRfactorization, B::Matrix) = Qtimes!(F, copy(B))
QTtimes!(F::myQRfactorization, B::Matrix) = Qtimes!(F, B, T=true)
QTtimes(F::myQRfactorization, B::Matrix) = QTtimes!(F, copy(B))
formQ(F::myQRfactorization) = Qtimes(F, Matrix{Float64}(I, size(F.V)))
```
```julia
methods(formQ)
```
```julia
n = 5
A = rand(n, n)
F = myqr(A)
Q = formQ(F)
```
```julia
Q'*A
```
```julia
QTtimes(F, A)
```
---
## Flop count summary for matrix factorizations
Let $A \in \mathbb{R}^{n \times n}$.
`chol(A)`: $\frac13n^3$ flops
`lu(A)`: $\frac23n^3$ flops
`F = qr(A)` does not form $Q$: $\frac43n^3$ flops
`Q = F.Q*Matrix(I,n,n)` forms $Q$: $\frac83n^3$ flops
---
## Flop count to solve $Ax = b$ by $QR$
**Algorithm:**
1. Compute the $QR$ Decomposition of $A$, but do not form $Q$.
2. $c = Q^Tb$
3. Use backward substitution to solve $Rx = c$
The cost of the $QR$ Decomposition is $\frac43n^3$.
The cost of computing $c = Q^Tb$ efficiently (using the $u_i$'s and the $\gamma_i$'s) is about $2n^2$ flops.
Backward substitution is $n^2$ flops.
Therefore, in total we have
$$\frac43n^3 + O(n^2)$$
flops to solve $Ax = b$ by $QR$.
---
## `x = F\b`
```julia
function Qtimes!(F::myQRfactorization, b::Vector; T=false)
n = size(F.V, 1)
cols = T ? (1:n-1) : (n-1:-1:1)
for k = cols
γk = F.γ[k]
uk = [1.0; F.V[k+1:n,k]]
b[k:n] -= (γk*uk)*dot(uk, b[k:n])
end
b
end
Qtimes(F::myQRfactorization, b::Vector) = Qtimes!(F, copy(b))
QTtimes!(F::myQRfactorization, b::Vector) = Qtimes!(F, b, T=true)
QTtimes(F::myQRfactorization, b::Vector) = QTtimes!(F, copy(b))
```
```julia
methods(Qtimes)
```
```julia
n = 5
A = rand(n, n)
b = rand(n)
F = myqr(A)
c = QTtimes(F, b)
```
```julia
R = UpperTriangular(F.V)
```
```julia
x = R\c
```
```julia
b - A*x
```
```julia
b - A*(A\b)
```
---
```julia
import Base.\
\(F::myQRfactorization, b::Vector) = UpperTriangular(F.V)\QTtimes(F, b)
```
```julia
methods(\)
```
```julia
x = F\b
```
```julia
b - A*x
```
---
> ## $QR$ Decomposition Theorem
>
> Let $A \in \mathbb{R}^{n \times n}$. Then the following hold.
>
> 1. There exists $Q, R \in \mathbb{R}^{n \times n}$ such that $Q$ is orthogonal, $R$ is upper-triangular, and $A = QR$.
>
> 2. If $A$ is **nonsingular**, then $\exists$ **unique** $Q, R \in \mathbb{R}^{n \times n}$ such that $Q$ is orthogonal, $R$ is upper-triangular with **positive diagonal entries**, and $A = QR$.
### Proof.
1. Hint: Use Householder reflectors and induction on $n$.
2. Hint: Let $D$ be diagonal with $d_{ii} = \mathrm{sign}(r_{ii})$.
### Part 1
If $n = 1$ then $A$ is a $1 \times 1$ matrix. Thus, $A = [a_{11}]$. Let $Q = [1]$ and $R = [a_{11}]$. Then $Q$ is orthogonal, $R$ is upper-triangular, and $A = QR$.
Now suppose that all $k \times k$ matrices have a $QR$ decomposition, for some positive integer $k$.
Let $n = k+1$ and $A \in \mathbb{R}^{n \times n}$. We partition $A$ as
$$
A =
\begin{bmatrix}
a_{11} & b^T \\ c & D
\end{bmatrix},
$$
where $D$ is a $k \times k$ matrix.
Let $Q_1$ be a Householder reflector for the first column of $A$. Then
$$
Q_1 A =
\begin{bmatrix}
\hat{a}_{11} & \hat{b}^T \\ 0 & \hat{D}
\end{bmatrix}.
$$
Note that $\hat{D}$ is a $k \times k$ matrix, so, by our induction hypothesis, $\hat{D}$ has a $QR$ decomposition: $\hat{D} = \hat{Q} \hat{R}$. Thus,
$$
Q_1 A =
\begin{bmatrix}
\hat{a}_{11} & \hat{b}^T \\ 0 & \hat{Q} \hat{R}
\end{bmatrix}.
$$
Let
$$
Q_2 =
\begin{bmatrix}
1 & 0 \\ 0 & \hat{Q}^T
\end{bmatrix},
$$
and note that $Q_2$ is orthogonal. Then
$$
Q_2 Q_1 A =
\begin{bmatrix}
1 & 0 \\ 0 & \hat{Q}^T
\end{bmatrix}
\begin{bmatrix}
\hat{a}_{11} & \hat{b}^T \\ 0 & \hat{Q} \hat{R}
\end{bmatrix} =
\begin{bmatrix}
\hat{a}_{11} & \hat{b}^T \\ 0 & \hat{Q}^T \hat{Q} \hat{R}
\end{bmatrix} =
\begin{bmatrix}
\hat{a}_{11} & \hat{b}^T \\ 0 & \hat{R}
\end{bmatrix}.
$$
Let $R = Q_2 Q_1 A$. Note that $R$ is upper-triangular. Let $Q = Q_1^T Q_2^T$. Note that $Q$ is orthogonal since it is the product of orthogonal matrices. Finally, we have that $A = Q_1^T Q_2^T R = Q R$.
### Part 2
First, we let $A$ be an $n \times n$ nonsingular matrix. Then, by part 1, the matrix $A$ has a $QR$ decomposition: $A = QR$. Note that the diagonal entries of $R$ are nonzero since, otherwise, $\det(R) = 0$ and that would imply that $\det(A) = \det(Q) \det(R) = 0$, contradicting our assumption that $A$ is nonsingular.
Let $D$ be the diagonal matrix with diagonal entries $d_{ii} = \mathrm{sign}(r_{ii})$. Thus, $D$ has diagonal entries that are $\pm 1$, so $D^2 = I$. That implies that
$$
A = Q R = Q D^2 R = (Q D) (D R).
$$
Let $\hat{Q} = Q D$ and $\hat{R} = D R$. Then $\hat{Q}$ is orthogonal since $Q$ and $D$ are orthogonal. Also, $\hat{R}$ is upper-triangular with positive diagonal entries, and $A = \hat{Q} \hat{R}$.
Now suppose that $A = Q_1 R_1 = Q_2 R_2$, where $R_1$ and $R_2$ have positive diagonal entries. Then,
$$
A^T A = R_1^T Q_1^T Q_1 R_1 = R_1^T R_1
$$
and $A^T A = R_2^T R_2$ are both Cholesky decompositions of the symmetric positive definite matrix $A^T A$. But the Cholesky decomposition is unique, so $R_1 = R_2$. Then,
$$
Q_1 = A R_1^{-1} = A R_2^{-1} = Q_2.
$$
---
## Stability
Multiplication by rotators or reflectors is stable:
$$
\mathrm{fl}(QA) = Q(A + E)
$$
where $\frac{\|E\|_2}{\|A\|_2}$ is tiny.
Also,
\begin{align}
\mathrm{fl}(Q_2 Q_1 A)
&= Q_2( Q_1(A + E_1) + E_2 ) \\
&= Q_2( Q_1(A + E_1) + Q_1 Q_1^T E_2 ) \\
&= Q_2 Q_1( A + E_1 + Q_1^T E_2 ) \\
&= Q_2 Q_1( A + E ) \\
\end{align}
where $E = E_1 + Q_1^T E_2$.
Thus,
$$
\mathrm{fl}(Q_2Q_1A) = Q_2Q_1(A + E),
$$
where $\|E\|_2 = \left\|E_1 + Q_1^T E_2\right\|_2 \leq \|E_1\|_2 + \left\|Q_1^TE_2\right\|_2 = \|E_1\|_2 + \left\|E_2\right\|_2$. Therefore, $\frac{\|E\|_2}{\|A\|_2}$ is tiny.
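Applying the same argument inductively to all $n-1$ reflectors (or rotators) used in the $QR$ algorithm gives
$$
\mathrm{fl}(Q_{n-1} \cdots Q_1 A) = Q_{n-1} \cdots Q_1 (A + E), \qquad \frac{\|E\|_2}{\|A\|_2} \ \text{tiny},
$$
so the computed $R$ is the exact triangular factor of a slightly perturbed matrix $A + E$.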
---
| 0868e17bff84f605c38891d3566e551015c61ef4 | 54,463 | ipynb | Jupyter Notebook | Section 3.2 - Orthogonal Matrices.ipynb | math434/fall2021math434 | 6317ce76de1eb7dbfdc3ea37a21dc5e1e3228316 | [
"MIT"
] | 1 | 2021-08-31T21:01:22.000Z | 2021-08-31T21:01:22.000Z | Section 3.2 - Orthogonal Matrices.ipynb | math434/fall2021math434 | 6317ce76de1eb7dbfdc3ea37a21dc5e1e3228316 | [
"MIT"
] | null | null | null | Section 3.2 - Orthogonal Matrices.ipynb | math434/fall2021math434 | 6317ce76de1eb7dbfdc3ea37a21dc5e1e3228316 | [
"MIT"
] | 1 | 2021-11-16T19:28:56.000Z | 2021-11-16T19:28:56.000Z | 21.324589 | 334 | 0.424967 | true | 11,090 | Qwen/Qwen-72B | 1. YES
2. YES | 0.879147 | 0.872347 | 0.766921 | __label__eng_Latn | 0.69783 | 0.620148 |
<font size = 12> Calibration Notebook </font>
Author: Leonardo Assis Morais
Calibrate the TES detector using area measurements.
The output of this notebook is a .csv file with the counting thresholds <br>
required to convert TES area information to photon-number information.
Use the notebook Counting Photons.ipynb to obtain photon-number <br>
information from your raw TES area data.
Summary:
1) Getting data
2) Importing data
3) Calibration
4) Sanity Check
5) Additional Feature: plotting other characteristics
# Getting the calibration data
You will need to have the calibration data on your computer <br>
for this notebook to work properly.
In order to get the data, you will need to use an FTP connection.
On a Mac, press CMD + K in Finder and type the following address:<br> ftp://smp-qtlab11.instrument.net.uq.edu.au
This will grant you access to the folder 'captures', where all <br>
measurements made with the FPGA are stored. Copy the folder with <br>
your data set to a folder on your machine that you know. <br>
You will need this path to run the notebook below.
```python
import numpy as np
import matplotlib.pyplot as plt
import tes
```
```python
# for development/debugging of the tes package
#%load_ext autoreload
%reload_ext autoreload
```
# Importing data
```python
from tes.data import CaptureData
# edit the path here with the folder where your data is located
datapath = ('/Users/leo/TES_package_and_data/TES3_80mK_820nm_10khz_'
'50ns_2_6V_1MBW_peak1_h078df2b2/drive-pulse_BL-pulse_'
'threshold-slope_max')
# function to get the data from files
data = CaptureData(datapath)
# boolean mask for events in channel 0 - the photon detections
ch0 = data.mask(0)
# boolean mask for events in channel 1 - the laser drive pulses.
ch1 = data.mask(1)
# this is the relative time since the last event
times = data.time
heights = data.rise[ch0]['height'][:,0]
areas = data.area[ch0]
minima = data.rise[ch0]['minimum'][:,0]
lengths = data.pulse_length[ch0]
rise_time = data.rise[ch0]['rise_time'][:,0]
```
# Calibration
Use the cell below to find the positions of the peaks of the histogram.
The data and the model will be plotted so the user can sanity-check this initial guess for the fit.
```python
%matplotlib inline
from tes.calibration import guess_histogram, plot_guess
# histogram parameters
BIN_NUMBER = 1000
WIN_LENGTH = 25
MINIMUM = 7
MAX_AREA = 1e7
# figure parameters
FIG_WIDTH = 15
FIG_HEIGHT = 9
YLIM = [1e0, 3.5e5] # limits for y axis
# calculating educated fitting
counts, smooth_hist, bin_centre, max_i, max_list = guess_histogram(areas,
BIN_NUMBER,
WIN_LENGTH,
MINIMUM,
MAX_AREA)
# plotting educated fitting with maxima points
fig, ax = plt.subplots(figsize=[FIG_WIDTH,FIG_HEIGHT])
plot_guess(ax, counts, smooth_hist, bin_centre, max_i)
print("The number of peaks found in this data set was {}.".format(len(max_i)))
# plotting details
ax.set_ylim(YLIM)
ax.set_xlim(-0.1e6,np.max(ax.get_xticks()))
ax.grid('on')
```
## Fitting the data with a mixture model of gaussian distributions
We use the lmfit package for fitting. Documentation, tutorials, and further information here:
https://lmfit.github.io/lmfit-py/
The model is composed of a sum of 16 Gaussian distributions:
\begin{equation}
M(A_1, \mu_1, \sigma_1, ..., A_{16}, \mu_{16}, \sigma_{16}) = \sum_{n=1}^{16} G_n(a),
\end{equation}
where
\begin{equation}
G_n(a) = \frac{A_n}{\sqrt{2\pi} \sigma_n} \exp \bigg( \frac{-(a - \mu_n)^2}{2 \sigma_n^2}\bigg),
\end{equation}
$a$ is the area, $\sigma_n^2$ is the variance of the $n$-th Gaussian, $A_n$ is its amplitude, and $\mu_n$ is its mean.
## Creating the histogram
```python
from tes.calibration import area_histogram
MAX_AREA = 1e7
BIN_NUMBER = 45000
bin_centre, counts, error, bin_width = area_histogram(MAX_AREA, BIN_NUMBER, areas)
```
The number of points in the analysed data is: 14884856 .
We analyse data up to: 10000000.0 area units.
The number of bins in the analysed data is: 45000 .
The size of each bin is: 222.22222222222223 .
## Fitting
```python
from tes.calibration import residual_gauss
import lmfit
# requires max_list from guess_histogram
MAX_IDX = len(max_i) # number of distributions to be used+1
INIT_SCALE = 40889
INIT_AMP = 75751
params = lmfit.Parameters()
[params.add(r'scale{}'.format(idx), value=INIT_SCALE)
for idx in range(1, MAX_IDX)]
[params.add(r'loc{}'.format(idx), value=max_list[idx])
for idx in range(1,MAX_IDX)]
[params.add(r'amp{}'.format(idx), value=INIT_AMP)
for idx in range(1,MAX_IDX)]
gauss_fit = lmfit.minimize(residual_gauss, params, args=(bin_centre, counts, error,MAX_IDX))
```
```python
# uncomment to see fit report
# print(lmfit.fit_report(gauss_fit))
```
```python
# requires area_histogram
from tes.calibration import plot_area, gaussian_model
%matplotlib inline
# creating the model
model_gauss = gaussian_model(gauss_fit, counts, bin_centre, MAX_IDX)
# plotting the figure
width = 11
golden_ratio = (np.sqrt(5.0)-1.0)/2.0
fig, ax = plt.subplots(figsize=[width, width*golden_ratio])
# edit plot steps to avoid plotting all bins
plot_steps = 1
plot_area(ax, bin_centre, counts, error, model_gauss, plot_steps)
# uncomment to save fit/edit figure path
# fig.savefig('./Chapter3/Figures/area_histogram_16.pdf', bbox_inches='tight',
# dpi=300)
```
Given the area histogram above, the function below will find the positions of the counting thresholds by placing them at the intersection point of two adjacent normalised distributions.
```python
from tes.calibration import find_thresholds
# requires: gauss_fit, max_i
# returns: dist, thresholds
# objective: from fitting obtained in last section, obtain normalised distributions
# for plots and position of counting thresholds
# normally you won't need to change the variable below.
# see help(find_thresholds) for more info.
const = 3e4
dist, thresholds = find_thresholds(gauss_fit, max_i, bin_centre, counts, const)
```
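Conceptually, each counting threshold sits where two neighbouring normalised peaks have equal density. A rough numerical sketch of that idea (illustrative only, not the `find_thresholds` implementation):
```python
import numpy as np
from scipy.stats import norm

def crossing_point(mu1, sigma1, mu2, sigma2, n_grid=10000):
    # scan the area axis between two adjacent peak centres and return the
    # point where the two normalised Gaussians intersect
    grid = np.linspace(mu1, mu2, n_grid)
    diff = norm.pdf(grid, mu1, sigma1) - norm.pdf(grid, mu2, sigma2)
    return grid[np.argmin(np.abs(diff))]
```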
# Saving counting thresholds
Save all counting thresholds in a .csv file to be used for converting area information into photon-number information.
```python
import csv
folder_name = '/Users/leo/Desktop/'
file_name = 'calibration'
with open(folder_name+file_name+'.csv', mode='w') as count_file:
fock_numbers = [i for i in range(0, len(thresholds)+1)]
writer = csv.writer(count_file)
count_file.write(str(datapath) + "\n")
writer.writerow(fock_numbers)
writer.writerow(thresholds)
```
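Once saved, the thresholds can be used to map each pulse area to a photon number by simple binning, for example (an illustrative one-liner, not part of the `tes` package):
```python
import numpy as np
# the bin index returned by digitize is the assigned photon number:
# areas below the first threshold map to 0, between the first and second to 1, etc.
photon_numbers = np.digitize(areas, np.sort(thresholds))
```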
# Sanity Check: Plotting thresholds and normalised distributions
```python
from tes.calibration import correct_xticks, plot_normalised
# Creating the figure
GOLDEN_RATIO = (np.sqrt(5.0)-1.0)/2.0
FIG_WIDTH = 11
FIG_HEIGHT = FIG_WIDTH*GOLDEN_RATIO
STEP_LABEL = 2 # every other threshold has tick label (x-axis)
LAST_THRESH = 5.6e6
YMIN = -0.001e-5
YMAX = 1.1e-5
fig, ax = plt.subplots(figsize=[FIG_WIDTH, FIG_HEIGHT])
plot_normalised(ax, max_i, bin_centre, dist, thresholds)
# fixing xtick labels
expx = correct_xticks(ax)
expx = expx-1
ax.set_xticks(thresholds[::STEP_LABEL])
tick_values = thresholds/10**expx
tick_labels = [r'${:2.1f}$'.format(tick) for tick in tick_values]
ax.set_xticklabels(tick_labels[::STEP_LABEL]);
ax.set_xlabel(r'Pulse Area $(\times 10^{{ {} }})$ (arb. un.)'.format(expx))
ax.set_xlim(0, LAST_THRESH)
ax.set_ylim(YMIN, YMAX)
```
# Other components
If you want to see the data for different components, you can use the code below.
```python
%matplotlib inline
from tes.calibration import plot_histogram
# histogram parameters
BIN_NUMBER = 300
measurement = 'Length'
# figure parameters
FIG_WIDTH = 15
FIG_HEIGHT = 9
# plotting educated fitting with maxima points
fig, ax = plt.subplots(figsize=[FIG_WIDTH,FIG_HEIGHT])
plot_histogram(ax, lengths, BIN_NUMBER, measurement)
```
<font size = 12>END OF NOTEBOOK </font>
| 005ca05c7dd497fc9c001ca8af510906f9089877 | 256,655 | ipynb | Jupyter Notebook | Jupyter Notebooks/TES Calibration.ipynb | Leo-am/tespackage | 1e3447951532411eb3596c6dbeaf781c4b006676 | [
"MIT"
] | null | null | null | Jupyter Notebooks/TES Calibration.ipynb | Leo-am/tespackage | 1e3447951532411eb3596c6dbeaf781c4b006676 | [
"MIT"
] | null | null | null | Jupyter Notebooks/TES Calibration.ipynb | Leo-am/tespackage | 1e3447951532411eb3596c6dbeaf781c4b006676 | [
"MIT"
] | null | null | null | 507.22332 | 95,964 | 0.946952 | true | 2,166 | Qwen/Qwen-72B | 1. YES
2. YES | 0.774583 | 0.749087 | 0.580231 | __label__eng_Latn | 0.936751 | 0.1864 |
```python
%config InlineBackend.figure_format = 'retina'
from matplotlib import rcParams
rcParams["savefig.dpi"] = 96
rcParams["figure.dpi"] = 96
```
# The Shin (2015) model
## Introduction
The model proposed by [Shin (2015)](http://dx.doi.org/10.1007/s12665-015-4588-z)
is an equivalent circuit that aims to reproduce SIP data. This model
predicts that the complex resistivity spectrum $\rho^*$ of a
polarizable rock sample can be described by
\begin{equation}
\rho^* = \sum_{i=1}^2 \frac{\rho_i}{(i\omega)^{n_i} \rho_iQ_i + 1}
\end{equation}
where $\omega$ is the angular measurement frequency
($\omega=2\pi f$) and $i$ is the imaginary unit.
Here, $\rho^*$ depends on 3 pairs of parameters:
- $\rho_i \in [0, \infty)$, the resistivity of the resistance element in Shin's circuit.
- $Q_i \in [0, \infty)$, the capacitance of the CPE.
- $n_i \in [0, 1]$, the exponent of the CPE impedance (0 = resistor, 0.5 = Warburg, 1.0 = capacitor).
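A direct numpy transcription of the equation above looks as follows (illustration only; the `model.forward(theta, w)` call used later in this notebook takes a single stacked parameter vector, apparently with log-transformed $Q_i$):
```python
import numpy as np

def shin_resistivity(w, rho, Q, n):
    # rho, Q, n: length-2 arrays of the circuit parameters; w: angular frequencies
    iw = 1j * w[:, None]
    return (rho / (iw**n * rho * Q + 1.0)).sum(axis=1)
```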
In this tutorial we will perform batch inversion of all SIP data files provided with BISIP using the Shin (2015) and double Cole-Cole models, and we will compare their respective relaxation time ($\tau$) parameters.
## Exploring the parameter space
First import the required packages.
```python
import numpy as np
from bisip import PeltonColeCole
from bisip import Shin2015
from bisip import DataFiles
np.random.seed(42)
```
```python
# Load the data file paths
data_files = DataFiles()
results = {'Shin': {},
'Pelton': {},
}
nsteps = 1000
for fname, fpath in data_files.items():
if fname == 'SIP-K389175':
model = Shin2015(fpath, nsteps=nsteps)
model.fit()
results['Shin'][fname] = model
# model = PeltonColeCole(fpath, nsteps=nsteps, n_modes=2)
# model.fit()
# results['Pelton'][fname] = model
```
100%|██████████| 1000/1000 [00:01<00:00, 724.53it/s]
```python
fig = results['Shin']['SIP-K389175'].plot_traces()
```
```python
fig = results['Shin']['SIP-K389175'].plot_fit(discard=500)
```
```python
import matplotlib.pyplot as plt
freq = np.logspace(-2, 100, 10000)
w = 2*np.pi*freq
theta = np.array([8.16E02, 3.12E03, np.log(1.80E-14), np.log(2.25E-06), 0.3098, 0.4584])
Z = model.forward(theta, w)
plt.plot(*Z)
```
### Conclusions
```python
```
```python
```
```python
```
| cebfef328a4048205ef3a4db570cdac5cd94e40b | 566,535 | ipynb | Jupyter Notebook | docs/tutorials/shin.ipynb | clberube/BISIP2 | 810c70bc04cba016b6f3fbe6e2412bd689acf1a8 | [
"MIT"
] | 9 | 2017-04-21T20:17:05.000Z | 2021-12-03T07:06:02.000Z | docs/tutorials/shin.ipynb | clberube/BISIP2 | 810c70bc04cba016b6f3fbe6e2412bd689acf1a8 | [
"MIT"
] | 4 | 2017-09-28T07:06:56.000Z | 2019-08-24T18:36:25.000Z | docs/tutorials/shin.ipynb | clberube/BISIP2 | 810c70bc04cba016b6f3fbe6e2412bd689acf1a8 | [
"MIT"
] | 2 | 2018-03-05T14:52:32.000Z | 2018-03-26T14:19:33.000Z | 1,967.135417 | 465,052 | 0.960906 | true | 735 | Qwen/Qwen-72B | 1. YES
2. YES | 0.843895 | 0.737158 | 0.622084 | __label__eng_Latn | 0.746143 | 0.28364 |
# Modeling the Time Evolution of the Annualized Rate of Public Mass Shootings with Gaussian Processes
Nathan Sanders, Victor Lei (Legendary Entertainment)
January, 2017
## Abstract
Much of the public policy debate over gun control and gun rights in the United States hinges on the alarming incidence of public mass shootings, here defined as attacks killing four or more victims. Several times in recent years, individual, highly salient public mass shooting incidents have galvanized public discussion of reform efforts. But deliberative legislative action proceeds over a much longer timescale that should be informed by knowledge of the long term evolution of these events. We have used *Stan* to develop a new model for the annualized rate of public mass shootings in the United States based on a Gaussian process with a time-varying mean function. This design yields a predictive model with the full non-parametric flexibility of a Gaussian process, while retaining the direct interpretability of a parametric model for long-term evolution of the mass shooting rate. We apply this model to the Mother Jones database of public mass shootings and explore the posterior consequences of different prior choices and of correlations between hyperparameters. We reach conclusions about the long term evolution of the rate of public mass shootings in the United States and short-term periods deviating from this trend.
## Background
Tragic, high profile public events over the past few years like the shootings at the Washington Navy Yard; the Emanuel AME Church in Charleston; San Bernardino, CA; and Orlando, FL have raised public awareness of the dangers posed by public mass shooting events and sociological interest in understanding the motivations and occurrence rates of such events. There is no commonly accepted definition of a public mass shooting, but such an event is generally understood to be the simultaneous homicide of multiple people perpetrated by an individual or coordinated group via firearm.
A particular question facing elevated public, political, and scholarly scrutiny is whether the rate of public mass shootings has increased significantly over recent years. Lott (2014) responded to a [September, 2014 FBI report](https://www.fbi.gov/news/stories/2014/september/fbi-releases-study-on-active-shooter-incidents/pdfs/a-study-of-active-shooter-incidents-in-the-u.s.-between-2000-and-2013) on public mass shootings by re-evaluating sources of bias, reviewing data consistency, and redefining the period under consideration to conclude that no statistically significant increase is identifiable. Lott's work has been the subject of persistent controversy (see e.g. Johnson et al. 2012). In contrast, Cohen et al. (2014) claim that the rate of public mass shootings tripled over the four year period 2011-2014 based on a Statistical Process Control (SPC) analysis of the duration between successive events.
In this study, we present a new statistical approach to evaluating the time evolution of the rate of public mass shootings. We do not present original data on occurrences in the United States, address the myriad considerations inherent in defining a "mass shooting" event, or seek to resolve the causal issues of why the growth rate may have changed over time. We do adopt a commonly cited public mass shooting dataset and definition from Mother Jones.
We develop a Gaussian process-based model for the time evolution of the occurrence rate of public mass shootings and demonstrate inference under this model by straightforward application of the probabilistic programming language *Stan*. We use this case to explore the intersection of parametric and non-parametric models. We seek to merge a parametric model, with straightforward interpretations of posterior marginalized parameter inferences, with a non-parametric model that captures and permits discovery of unspecified trends. *Stan's* flexible modeling language permits rapid model design and iteration, while the No-U-Turn sampler allows us to fully explore the model posterior and understand the dependence between the parametric and non-parametric components of our model and the implications of our prior assumptions.
In the following notebook, we describe the Mother Jones dataset on US public mass shootings and lay out our statistical model and inference scheme. We then discuss the results from this inference, how they depend on choices for the prior distribution, and explore correlations between hyperparameters. Finally, we discuss the conclusions that can be reached from inspection of the marginal posterior distributions.
```python
## Notebook setup
%matplotlib inline
import pandas as pd
import numpy as np
import pickle, os, copy
import scipy
from matplotlib import pyplot as plt
from matplotlib import cm
from matplotlib.ticker import FixedLocator, MaxNLocator, AutoMinorLocator
## NOTE: We encounter an error with this model using PyStan 2.14,
## so for now we will wrap cmdstan using stanhelper instead.
#import pystan
## See https://github.com/akucukelbir/stanhelper
import stanhelper
import subprocess
cmdstan_path = os.path.expanduser('~/Stan/cmdstan_2.14.0/')
from scipy import stats as sstats
```
### Package versions
```python
%load_ext watermark
%watermark -v -m -p pandas,numpy,scipy,matplotlib,pystan
```
CPython 2.7.6
IPython 5.1.0
pandas 0.18.1
numpy 1.11.3
scipy 0.18.1
matplotlib 1.4.3
pystan 2.14.0.0
compiler : GCC 4.8.4
system : Linux
release : 3.16.0-38-generic
machine : x86_64
processor : x86_64
CPU cores : 4
interpreter: 64bit
```python
print subprocess.check_output(cmdstan_path+'bin/stanc --version', shell=1)
```
stanc version 2.14.0
## Data
For this study, we consider the [database published by Mother Jones](http://www.motherjones.com/politics/2012/12/mass-shootings-mother-jones-full-data) (retrieved for this study on October 16, 2016; as of January 14, 2017, Mother Jones had not added any further events to its database for 2016), compiling incidents of public mass shootings in the United States from 1982 through the end of 2016. The database includes rich (quantitative and qualitative) metadata on the effects of the incidents, the mental health condition of the perpetrators, weapon type, how the perpetrators obtained their weapons, and more; however, we focus primarily on the dates of incident occurrence.
The definition of a public mass shooting is not universally agreed upon, and even when a firm definition is adopted there can be ambiguity in how to apply it to the complex and uncertain circumstances of these chaotic events. See Fox & Levin (2015) for a recent discussion. The criteria for inclusion in the Mother Jones database were described in a [2014 article by Mark Follman](http://www.motherjones.com/politics/2014/10/mass-shootings-rising-harvard):
> [The database] includes attacks in public places with four or more victims killed, a baseline established by the FBI a decade ago. We excluded mass murders in private homes related to domestic violence, as well as shootings tied to gang or other criminal activity.''
Follman discusses their motivations for these criteria and provides some examples of prominent incidents excluded by the criteria, such as the shooting at Ft. Hood in April, 2014. Note that the federal threshold for investigation of public mass shootings was lowered to three victim fatalities in January of 2013, and the Mother Jones database includes shootings under this more expansive definition starting from that date. To maintain a consistent definition for public mass shootings throughout the studied time period, we only consider shootings with four or more victim fatalities.
Our primary dataset is the count of incidents reported in this database per calendar year. We include incidents labeled as either "Mass" or "Spree" by Mother Jones.
```python
## Load data
data = pd.read_excel('MotherJonesData_2016_10_16.xlsx','US mass shootings')
## Standardize on definition of fatalities at 4. Mother Jones changed it to 3 in 2013.
data = data[data.Fatalities > 3]
## Prepare data
# Aggregate data anually
data_annual = data.groupby('Year')
# Count cases by year and fill in empty years
cases_resamp = data_annual.count().Case.ix[np.arange(1982,2017)].fillna(0)
# Enumerate years in range
data_years = cases_resamp.index.values
# Enumerate quarters across daterange for later plotting
data_years_samp = np.arange(min(data_years), max(data_years)+10, .25)
# Format for Stan
stan_data = {
'N1': len(cases_resamp),
'x1': data_years - min(data_years),
'z1': cases_resamp.values.astype(int),
'N2': len(data_years_samp),
'x2': data_years_samp - min(data_years),
}
```
```python
## Print the stan model inputs
for key in stan_data:
print key
print stan_data[key]
print '\n'
```
x2
[ 0. 0.25 0.5 0.75 1. 1.25 1.5 1.75 2. 2.25
2.5 2.75 3. 3.25 3.5 3.75 4. 4.25 4.5 4.75 5.
5.25 5.5 5.75 6. 6.25 6.5 6.75 7. 7.25 7.5
7.75 8. 8.25 8.5 8.75 9. 9.25 9.5 9.75 10. 10.25
10.5 10.75 11. 11.25 11.5 11.75 12. 12.25 12.5 12.75 13.
13.25 13.5 13.75 14. 14.25 14.5 14.75 15. 15.25 15.5
15.75 16. 16.25 16.5 16.75 17. 17.25 17.5 17.75 18. 18.25
18.5 18.75 19. 19.25 19.5 19.75 20. 20.25 20.5 20.75 21.
21.25 21.5 21.75 22. 22.25 22.5 22.75 23. 23.25 23.5
23.75 24. 24.25 24.5 24.75 25. 25.25 25.5 25.75 26. 26.25
26.5 26.75 27. 27.25 27.5 27.75 28. 28.25 28.5 28.75 29.
29.25 29.5 29.75 30. 30.25 30.5 30.75 31. 31.25 31.5
31.75 32. 32.25 32.5 32.75 33. 33.25 33.5 33.75 34. 34.25
34.5 34.75 35. 35.25 35.5 35.75 36. 36.25 36.5 36.75 37.
37.25 37.5 37.75 38. 38.25 38.5 38.75 39. 39.25 39.5
39.75 40. 40.25 40.5 40.75 41. 41.25 41.5 41.75 42. 42.25
42.5 42.75 43. 43.25 43.5 43.75]
N1
35
N2
176
x1
[ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
25 26 27 28 29 30 31 32 33 34]
z1
[1 0 2 0 1 1 1 2 1 3 2 4 1 1 1 2 3 5 1 1 0 1 1 2 3 4 3 4 1 3 7 5 3 4 4]
```python
## Number of years with data
print len(stan_data['x1'])
```
35
```python
## Number of interpolated points to do prediction for
print len(stan_data['x2'])
```
176
## Statistical Model
We adopt a univariate Gaussian process model (see e.g. Rasmussen & Williams 2006) as a non-parametric description of the time evolution of the annualized occurrence rate. The Gaussian process describes deviations from a mean function by a covariance matrix that controls the probability of the deviation as a function of the time differential between points. Roberts et al. (2012) surveyed applications of Gaussian process models to time-series data, and explored the implications of different choices for the mean and covariance functions.
We adopt the following system of units for the Gaussian Process model. The time vector $x$ is measured in years since 1982 and the outcome vector $z$ is defined as the number of occurrences per year.
Many applications of Gaussian processes adopt a constant, zero mean function. In that case, the relationship between the dependent variable(s) and the predictors is described entirely by the non-parametric family of functions generated from the Gaussian process covariance function.
We adopt a linear mean function and a squared-exponential covariance function. The mean function $\mu(x)$ is simply:
\begin{equation}
\mu(x) = \mu_0 + \mu_b~x
\end{equation}
Note that we use a logarithmic parameterization of the likelihood for the occurence rate (see below), so the linear mean function corresponds to an exponential function for the evolution of the rate of shootings per year.
The familiar squared-exponential covariance function, which generates infinitely-differentiable functions from the Gaussian process, is:
\begin{equation}
k(x)_{i,j} = \eta^2~\exp \big( -\rho^2 \sum_{d=1}^{D}(x_{i,d} - x_{j,d})^2 \big) + \delta_{i,j}~\sigma^2
\end{equation}
where the hyperparameter $\eta$ controls the overall strength of covariance, $\rho$ controls the timescale over which functions drawn from the process vary, and $\sigma$ controls the baseline level of variance.
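For reference, the same kernel can be written in a few lines of numpy (the fitted model uses *Stan*'s `cov_exp_quad`, shown later; this is only an illustration of the formula):
```python
import numpy as np

def sq_exp_cov(x, eta_sq, rho, sigma_sq):
    # k_ij = eta^2 exp(-rho^2 (x_i - x_j)^2) + delta_ij sigma^2
    d2 = (x[:, None] - x[None, :])**2
    return eta_sq * np.exp(-rho**2 * d2) + sigma_sq * np.eye(len(x))
```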
Our likelihood assumes that the annual occurrence rate is given by exponentiating draws of the latent process $y$ from the mean and covariance functions, and that the observed outcome data are negative binomial-distributed according to this rate.
\begin{align}
y(x) \sim \rm{N}(\mu(x), k(x)^2) \\
z(x) \sim \rm{NB}(\exp(y(x)), \phi)
\end{align}
where $\rm{N}$ is the normal (parameterized by the standard deviation rather than the variance, per *Stan* standard syntax) and $\rm{NB}$ is the negative binomial distribution. We use the "alternative" parameterization of the negative binomial distribution described in the *Stan* manual, where the second parameter directly scales the overdispersion relative to a Poisson distribution. While we choose the negative binomial to permit overdispersion in the annualized mass shooting rate beyond counting noise, as we will see, the data provide strong evidence for small values of $\phi^{-1}$, consistent with Poisson noise.
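For intuition, draws from this "alternative" negative binomial (mean $\mu$, dispersion $\phi$, variance $\mu + \mu^2/\phi$) can be generated from numpy's $(n, p)$ parameterization; the mapping below is a sketch under that assumption:
```python
import numpy as np

def neg_binomial_2_rng(mu, phi, size=None):
    # numpy's negative_binomial(n, p) has mean n(1-p)/p and variance n(1-p)/p^2;
    # setting n = phi and p = phi/(phi + mu) gives mean mu and variance mu + mu**2/phi
    return np.random.negative_binomial(phi, phi / (phi + mu), size=size)
```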
The role of each component of the Gaussian process will depend largely on the timescale parameter $\rho$. When the timescale is short, the model effectively divides the response into a long-term (timescale of the range of the data; in this case, decades) parametric effect and a short-term (timescale of e.g. years) non-parametric effect. This approach gives us the full flexibility of the Gaussian process for predictive applications, while still allowing us to make interpretable, parametric inferences on the long-term evolution of the system.
We apply the following prior and hyperprior distributions to provide weak information about the scale of the relevant parameters in the adopted unit system:
\begin{align*}
\rho^{-1} \sim \Gamma(\alpha_{\rho}, \beta_{\rho}) \\
\eta^2 \sim \rm{C}(0, 2.5) \\
\sigma^2 \sim \rm{C}(0, 2.5) \\
\mu_0 \sim \rm{N}(0, 2) \\
\mu_b \sim \rm{N}(0, 0.2) \\
\phi^{-1} \sim \rm{C}(0, 5)
\end{align*}
where $\Gamma$ is the gamma distribution; $\rm{C}$ is the half-Cauchy distribution; the parameters $\eta^2$, $\sigma^2$, and $\phi^{-1}$ are constrained to be positive; and we apply the constraint $\rho^{-1} > 1$ to enforce timescales $>1$ yr (the spacing of our data).
Below we explore different choices for the $\alpha$ and $\beta$ parameters of the gamma hyperprior on $\rho^{-1}$, labeled as $\alpha_{\rho}$ and $\beta_{\rho}$. In particular, we explore $(\alpha_{\rho},\beta_{\rho}) = (4,1)$ and $(1,1/100)$. These correspond to prior distributions with standard deviations of $2$ and $100$ years, respectively. On top of the linear trend in the mean function, the former represents a strong prior expectation that the annualized rate of public mass shootings evolves on a timescale of a few years, and the latter represents a nearly-flat expectation for variations on timescales from a few years to a few centuries.
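These moments can be checked directly with scipy (treating $\beta_{\rho}$ as a rate parameter, i.e. scale $= 1/\beta_{\rho}$, matching *Stan*'s convention):
```python
from scipy import stats
for a, b in [(4, 1.), (1, 1/100.)]:
    g = stats.gamma(a, scale=1./b)
    print 'alpha=%g, beta=%g: mean=%.0f yr, sd=%.0f yr' % (a, b, g.mean(), g.std())
```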
We implement the Gaussian process model in *Stan*, adapting the logistic classification example in Section 14.5 of the *Stan* manual. *Stan's* *NUTS* sampler performs full joint Bayesian estimation of all parameters, including the mean function parameters $\mu_0$ and $\mu_b$, the Gaussian process hyperparameters $\eta$, $\rho$, and $\sigma$, and the negative binomial over-dispersion $\phi^{-1}$. The $\alpha_{\rho}$ and $\beta_{\rho}$ hyperparameters of the $\rho$ hyperprior distribution are fixed. We use the Cholesky factor transformed implementation of the normal distribution to calculate the likelihood.
We expect these hyperparameters to be at least somewhat correlated and not well-identified, introducing significant curvature in the model posterior, indicating that Hamiltonian Monte Carlo (HMC) would be a particularly effective sampling strategy for this model (Betancourt & Girolami 2013). We fit the model to the 35 annual observations of the Mother Jones dataset and do model interpolation and prediction over a grid of 176 quarters from 1980 to 2024.
We typically fit 8 independent chains of length 2000 iterations (following an equal number of NUTS warmup samples) in parallel using *Stan* and observe a typical execution time of ~1 min. For the purposes of this notebook, we obtain a larger number of samples by fitting 20 chains of 4000 samples in order to improve the resolution of 2D posterior histograms.
```python
with open('gp_model_final.stan', 'r') as f:
stan_code = f.read()
print stan_code
```
data {
int<lower=1> N1;
vector[N1] x1;
int z1[N1];
int<lower=1> N2;
vector[N2] x2;
real<lower=0> alpha_rho;
real<lower=0> beta_rho;
}
transformed data {
int<lower=1> N;
vector[N1+N2] x;
// cov_exp_quad wants real valued inputs
real rx[N1+N2];
real rx1[N1];
real rx2[N2];
N = N1 + N2;
x = append_row(x1, x2);
rx = to_array_1d(x);
rx1 = to_array_1d(x1);
rx2 = to_array_1d(x2);
}
parameters {
vector[N1] y_tilde1;
real<lower=0> eta_sq;
real<lower=1> inv_rho;
real<lower=0> sigma_sq;
real mu_0;
real mu_b;
real<lower=0> NB_phi_inv;
}
model {
vector[N1] mu1;
vector[N1] y1;
matrix[N1,N1] Sigma1;
matrix[N1,N1] L1;
// Calculate mean function
mu1 = mu_0 + mu_b * x1;
// GP hyperpriors
eta_sq ~ cauchy(0, 1);
sigma_sq ~ cauchy(0, 1);
inv_rho ~ gamma(alpha_rho, beta_rho); // Gamma prior with mean of 4 and std of 2
// Calculate covariance matrix using new optimized function
Sigma1 = cov_exp_quad(rx1, sqrt(eta_sq), sqrt(0.5) * inv_rho);
for (n in 1:N1) Sigma1[n,n] = Sigma1[n,n] + sigma_sq;
// Decompose
L1 = cholesky_decompose(Sigma1);
// We're using a the non-centered parameterization, so rescale y_tilde
y1 = mu1 + L1 * y_tilde1;
// Mean model priors
mu_0 ~ normal(0, 2);
mu_b ~ normal(0, 0.2);
// Negative-binomial prior
// For neg_binomial_2, phi^-1 controls the overdispersion.
// phi^-1 ~ 0 reduces to the poisson. phi^-1 = 1 represents variance = mu+mu^2
NB_phi_inv ~ cauchy(0, 5);
// Generate non-centered parameterization
y_tilde1 ~ normal(0, 1);
// Likelihood
z1 ~ neg_binomial_2_log(y1, inv(NB_phi_inv));
}
generated quantities {
vector[N1] y1;
vector[N2] y2;
vector[N] y;
int z_rep[N];
{
// Don't save these parameters
matrix[N,N] Sigma;
matrix[N,N] L;
vector[N] y_tilde;
Sigma = cov_exp_quad(rx, sqrt(eta_sq), sqrt(0.5) * inv_rho);
for (n in 1:N) Sigma[n,n] = Sigma[n,n] + sigma_sq;
for (n in 1:N1) y_tilde[n] = y_tilde1[n];
for (n in (N1 + 1):N) y_tilde[n] = normal_rng(0,1);
// Decompose
L = cholesky_decompose(Sigma);
y = mu_0 + mu_b * x + L * y_tilde;
for (n in 1:N1) y1[n] = y[n];
for (n in 1:N2) y2[n] = y[N1+n];
for (n in 1:N) z_rep[n] = neg_binomial_2_log_rng(y[n], inv(NB_phi_inv));
}
}
Note that we use the newly introduced *cov_exp_quad* function to implement the squared exponential covariance function, and we rescale $\rho^{-1}$ by $2^{-1/2}$ to accommodate the difference between this implementation and our definition above. Moreover, we use a non-centered parameterization (see e.g. Papaspiliopoulos et al. 2003) for the Gaussian process, modeling the latent parameter $\tilde{y}$ as standard normal and then transforming to a sampled value for $y$ by rescaling by the covariance matrix.
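The non-centered construction amounts to the following transformation (a toy numpy illustration, not part of the fit):
```python
import numpy as np

x = np.arange(5.)                                            # toy inputs
Sigma = np.exp(-0.5 * (x[:, None] - x[None, :])**2) + 1e-6 * np.eye(5)
L = np.linalg.cholesky(Sigma)
y_tilde = np.random.standard_normal(5)                       # y_tilde ~ N(0, I)
y = 0.1 * x + L.dot(y_tilde)                                 # y = mu(x) + L * y_tilde
```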
## Model fitting
```python
## Compile using pystan
#stan_model_compiled = pystan.StanModel(model_code=stan_code)
### Compile using cmdstan
### Script expects cmdstan installation at cmdstan_path
subprocess.call("mkdir "+cmdstan_path+"user-models", shell=1)
subprocess.call("cp gp_model_final.stan " + cmdstan_path+"user-models/", shell=1)
subprocess.call("make user-models/gp_model_final", cwd=cmdstan_path, shell=1)
```
0
Below we explore the consequences of different choices for the prior distribution on $\rho^{-1}$. To facilitate that analysis, here we fit the model twice with two different hyperparameter specifications provided as data. We will visualize and discuss these hyperprior choices in the next section. When not explicitly making comparisons between the two models, we focus on the model with the stronger prior on $\rho^{-1}$.
```python
## Sampling parameters
Nchains = 20
Niter = 8000
cdic = {'max_treedepth': 15, 'adapt_delta': 0.95}
```
```python
## Sample with strong prior on rho
stan_data_rho_strong = copy.copy(stan_data)
stan_data_rho_strong['alpha_rho'] = 4
stan_data_rho_strong['beta_rho'] = 1
## Sample with pystan
#stan_model_samp_rho_strong = stan_model_compiled.sampling(
# data = stan_data_rho_strong, iter=Niter,
# chains=Nchains, control=cdic, seed=1002
# )
## Sample with cmdstan
## Delete any old samples first
os.system('rm output_cmdstan_gp_rhostrong_samples*.csv')
stanhelper.stan_rdump(stan_data_rho_strong, 'input_data_rhostrong_final.R')
p = []
for i in range(Nchains):
cmd = """
{0}user-models/gp_model_final \
data file='input_data_rhostrong_final.R' \
sample num_warmup={2} num_samples={2} \
adapt delta={4} \
algorithm=hmc engine=nuts max_depth={3} \
random seed=1002 id={1} \
output file=output_cmdstan_gp_rhostrong_samples{1}.csv
""".format(cmdstan_path, i+1, Niter/2, cdic['max_treedepth'], cdic['adapt_delta'])
p += [subprocess.Popen(cmd, shell=True)]
## Don't move on until sampling is complete.
for i in range(Nchains):
p[i].wait()
## Write out results if using pystan
#stan_model_ext_rho_strong = stan_model_samp_rho_strong.extract()
#with open('stan_model_ext_rho_strong.p','w') as f: pickle.dump(stan_model_ext_rho_strong,f)
```
```python
## Sample with weak prior on rho
stan_data_rho_weak = copy.copy(stan_data)
stan_data_rho_weak['alpha_rho'] = 1
stan_data_rho_weak['beta_rho'] = 1/100.
## Sample with pystan
#stan_model_samp_rho_weak = stan_model_compiled.sampling(data = stan_data_rho_weak, iter=Niter, chains=Nchains, control=cdic)
## Sample with cmdstan
## Delete any old samples first
os.system('rm output_cmdstan_gp_rhoweak_samples*.csv')
stanhelper.stan_rdump(stan_data_rho_weak, 'input_data_rhoweak_final.R')
p = []
for i in range(Nchains):
cmd = """
{0}user-models/gp_model_final \
data file='input_data_rhoweak_final.R' \
sample num_warmup={2} num_samples={2} \
adapt delta={4} \
algorithm=hmc engine=nuts max_depth={3} \
random seed=1002 id={1} \
output file=output_cmdstan_gp_rhoweak_samples{1}.csv
""".format(cmdstan_path, i+1, Niter/2, cdic['max_treedepth'], cdic['adapt_delta'])
p += [subprocess.Popen(cmd, shell=True)]
## Don't move on until sampling is complete.
for i in range(Nchains):
p[i].wait()
## Write out results if using pystan
#stan_model_ext_rho_weak = stan_model_samp_rho_weak.extract()
#with open('stan_model_ext_rho_weak.p','w') as f: pickle.dump(stan_model_ext_rho_weak,f)
```
```python
def stan_read_csv_multi(path):
"""
Wrap the stanhelper.stan_read_csv function to load outputs
from multiple chains.
Parameters:
* path: file path for cmdstan output files including wildcard (*)
"""
## Enumerate files
from glob import glob
files = glob(path)
## Read in each file
result = {}
for file in files:
result[file] = stanhelper.stan_read_csv(file)
## Combine dictionaries
result_out = {}
keys = result[files[0]]
for key in keys:
result_out[key] = result[files[0]][key]
for f in files:
result_out[key] = np.append(result_out[key], result[f][key], axis=0)
## Remove extraneous dimension
for key in keys:
if result_out[key].shape[-1] == 1:
result_out[key] = np.squeeze(result_out[key], -1)
return result_out
stan_model_ext_rho_strong = stan_read_csv_multi('output_cmdstan_gp_rhostrong_samples*.csv')
stan_model_ext_rho_weak = stan_read_csv_multi('output_cmdstan_gp_rhoweak_samples*.csv')
```
The MCMC trace illustrates the high independence of samples achieved after the *NUTS* algorithm warm-up period, and the low variance in sampling distributions between chains.
```python
## Traceplot
trace_pars = [('eta_sq','$\\eta^2$'),
('inv_rho','$\\rho^{-1}$'),
('sigma_sq','$\\sigma^2$'),
('mu_0','$\\mu_0$'),
('mu_b','$\\mu_b$'),
('NB_phi_inv','$\\rm{NB}_\\phi^{-1}$')]
fig,axs = plt.subplots(len(trace_pars),2, figsize=(8,8), sharex='all', sharey='row')
exts = [stan_model_ext_rho_strong, stan_model_ext_rho_weak]
exts_names = [r'Strong $\rho$ prior', r'Weak $\rho$ prior']
for j in range(2):
axs[0,j].set_title(exts_names[j])
for i,par in enumerate(trace_pars):
axs[i,j].plot(exts[j][par[0]], color='.5')
if j==0: axs[i,j].set_ylabel(par[1])
for k in range(1, Nchains+1):
axs[i,j].axvline(Niter/2 * k, c='r', zorder=-1)
axs[len(trace_pars) - 1,j].set_xticks(np.arange(0, (Niter/2)*Nchains+1, Niter*2))
```
We assess MCMC convergence quantitatively using the Gelman-Rubin convergence diagnostic, $\hat{R}$, a comparison of within- to between-chain variance. We find that $\hat{R} \ll 1.05$ for all parameters, indicating a negligible discrepancy in the sampling distributions between chains.
```python
def read_stansummary(path, cmdstan_path=cmdstan_path):
"""
Wrapper for the cmdstan program stan_summary to calculate
sampling summary statistics across multiple MCMC chains.
Args:
path (str): Path, with a wildcard (*) for the id number
of each output chain
cmdstan_path (str): Path to the stan home directory
Returns:
out: A pandas dataframe with the summary statistics provided
by stan_summary. Note that each element of array variables
are provided on separate lines
"""
from StringIO import StringIO
summary_string = subprocess.check_output(cmdstan_path + 'bin/stansummary --sig_figs=5 '+path, shell=1)
out = pd.read_table(StringIO(summary_string), sep='\s+', header=4, skip_footer=6, engine='python')
return out
## Use cmdstan's stansummary command to calculate rhat
stan_model_sum_rho_strong = read_stansummary('output_cmdstan_gp_rhostrong*.csv')
stan_model_sum_rho_weak = read_stansummary('output_cmdstan_gp_rhoweak*.csv')
```
```python
## Get summary statistics using pystan
#model_summary = stan_model_samp_rho_strong.summary()
#Rhat_vec = model_summary['summary'][:,array(model_summary['summary_colnames'])=='Rhat']
#pars = model_summary['summary_rownames']
## Get summary statistics using cmdstan wrapper
model_summary = stan_model_sum_rho_strong
Rhat_vec = stan_model_sum_rho_strong['R_hat'].values
pars = stan_model_sum_rho_strong.index
## Replace y1, y2 with summaries
sel_pars = ['y1', 'y2', u'eta_sq', u'inv_rho', u'sigma_sq', u'mu_0', u'mu_b', 'NB_phi_inv']
Rhat_dic = {}
for spar in sel_pars:
if spar in ('y1','y2'):
sel = np.where([True if p.startswith(spar) else False for p in pars])
Rhat_dic[spar] = np.percentile(Rhat_vec[sel], [5,50,95])
else:
Rhat_dic[spar] = [Rhat_vec[[pars==spar]],]*3
plt.figure(figsize=(5,6))
plt.errorbar(np.array(Rhat_dic.values())[:,1], np.arange(len(sel_pars)), \
xerr= [np.array(Rhat_dic.values())[:,1] - np.array(Rhat_dic.values())[:,0],\
np.array(Rhat_dic.values())[:,2] - np.array(Rhat_dic.values())[:,1]],\
capsize=0, marker='o', color='k', lw=0)
plt.yticks(np.arange(len(sel_pars)), Rhat_dic.keys(), size=11)
plt.xlabel('$\hat{R}$')
plt.axvline(1.0, color='.5', ls='solid', zorder=-2)
plt.axvline(1.05, color='.5', ls='dashed', zorder=-2)
plt.ylim(-.5, len(sel_pars)-.5)
plt.xlim(0.99, 1.06)
```
## Posterior Simulations and Predictive Checks
To assess goodness of fit, we inspect simulated draws of the Gaussian process from the posterior and perform posterior predictive checks.
### Simulated draws
First we perform a posterior predictive check by visualizing the sampled values of $z$, which realizes both a draw from the latent Gaussian process for the public mass shootings rate and the overdispersed counting noise of the negative binomial distribution.
```python
N_samp = Niter / 2
print len(stan_model_ext_rho_strong['z_rep'])
print Niter
fig, axs = plt.subplots(5,5, figsize=(7,7), sharex='all', sharey='all')
po = axs[0,0].plot(data_years, stan_data['z1'], 'o', c='k', mfc='k', label='Observations', zorder=2, lw=1, ms=4)
axs[0,0].legend(numpoints=1, prop={'size':6})
for i in range(1,25):
draw = np.random.randint(0, N_samp)
py = stan_model_ext_rho_strong['z_rep'][draw][:stan_data['N1']]
axs.flatten()[i].plot(data_years, py, mfc='k', marker='o',
lw=.5, mec='none', ms=2, color='.5', label='GP realization')
axs[0,1].legend(numpoints=1, prop={'size':6})
axs[0,0].set_ylim(0,15)
axs[0,0].set_xticks([1980, 1990, 2000, 2010, 2020])
for ax in axs.flatten():
plt.setp(ax.get_xticklabels(), rotation='vertical', fontsize=9)
plt.setp(ax.get_yticklabels(), fontsize=9)
axs[2,0].set_ylabel('public mass shootings per year', size=9)
```
Visual inspection suggests that the observations simulated under the model show similar variation over time as the actual observations (first panel). We note that some realizations have annual counts at the later end of the modeled time range that exceed the largest observed annual count (7 public mass shootings). Some exceedance is expected given the counting noise, but this posterior predictive check could guide revision of the prior on the over-dispersion parameter or the choice of the negative binomial likelihood.
Because the relative variance in the annualized counting statistics is high (i.e. public mass shootings are generally infrequent on an annual basis), it is also helpful to examine the model for the underlying shooting rate in detail. Next we plot the posterior distribution of the Gaussian process for the annualized mass shooting rate simulated across a grid of timepoints subsampled between years and extending beyond the current year (2016), effectively interpolating and extrapolating from the observations. The mean of the posterior predictive distribution of the Gaussian process is shown with the solid blue line, and the shaded region shows the 16th to 84th percentile interval of the posterior (i.e. the "$1\sigma$ range").
```python
def plot_GP(stan_model_ext):
y2_sum = np.percentile(np.exp(stan_model_ext['y2']), [16,50,84], axis=0)
plt.figure(figsize=(7,5))
pfb = plt.fill_between(data_years_samp, y2_sum[0], y2_sum[2], color='b', alpha=.5)
pfg = plt.plot(data_years_samp, y2_sum[1], c='b', lw=2, label='GP model', zorder=0)
po = plt.plot(data_years, stan_data['z1'], 'o', c='k', label='Observations', zorder=2)
plt.xlabel('Year')
plt.ylabel('Annual rate of public mass shootings')
plt.legend(prop={'size':10}, loc=2)
plt.ylim(0,15)
plt.gca().xaxis.set_minor_locator(FixedLocator(np.arange(min(data_years_samp), max(data_years_samp))))
plt.gca().set_xlim(min(data_years_samp) - 1, max(data_years_samp) + 1)
return pfb, pfg, po
pfb, pfg, po = plot_GP(stan_model_ext_rho_strong)
```
The Gaussian process captures an increase in the mass shooting rate over the decades and some fluctuations against that trend during certain periods, as we will explore in more detail below. The model does not show any visually apparent deviations from the evolution of the observational time series, although comparison to the data highlights several years with substantially outlying mass shooting totals (e.g. 1993 and 1999). The extrapolated period ($>2016$) suggests a range of possible future rates of growth from the 2016 level.
We add random draws from the mean function to visualize our inferences on the long-term time evolution of the mass shooting rate.
```python
def plot_GP_mu_draws(stan_model_ext):
plot_GP(stan_model_ext)
N_samp = len(stan_model_ext['mu_0'])
px = np.linspace(min(data_years_samp), max(data_years_samp), 100)
pfms = []
for i in range(20):
draw = np.random.randint(0, N_samp)
py = np.exp(stan_model_ext['mu_0'][draw] + (px - min(data_years)) * stan_model_ext['mu_b'][draw])
pfms.append(plt.plot(px, py, c='r',
zorder = 1, label = 'Mean function draws' if i==0 else None))
plt.legend(prop={'size':10}, loc=2)
plot_GP_mu_draws(stan_model_ext_rho_strong)
```
The comparison between draws of the mean functions (red) and the model posterior (blue) suggests that the mean function captures most of the modeled variation in the shooting rate over time.
We can understand the behavior of the Gaussian process covariance function by isolating it from the mean function. We do so by subtracting the linear component of the mean function from the simulated Gaussian process rates ($y_2$) and plotting against the observations.
```python
y2_gp_rho_strong = np.percentile(np.exp(
stan_model_ext_rho_strong['y2'] -
np.dot(stan_model_ext_rho_strong['mu_b'][:,np.newaxis], (data_years_samp[np.newaxis,:] - min(data_years)))
), [16,25,50,75,84], axis=0)
fig, axs = plt.subplots(2, figsize=(7,7), sharex='all')
pfb = axs[1].fill_between(data_years_samp, y2_gp_rho_strong[1], y2_gp_rho_strong[3], color='b', alpha=.25)
pfb2 = axs[1].fill_between(data_years_samp, y2_gp_rho_strong[0], y2_gp_rho_strong[4], color='b', alpha=.25)
pfg = axs[1].plot(data_years_samp, y2_gp_rho_strong[2], c='b', lw=2, label='GP model (covariance only)', zorder=0)
po = axs[0].plot(data_years, stan_data['z1'], 'o', c='k', label='Observations', zorder=2)
axs[1].axhline(np.exp(stan_model_ext_rho_strong['mu_0'].mean()), color='orange', label='$\mu_0$')
axs[0].set_ylabel('Annual rate of \npublic mass shootings\n(observations)')
axs[1].legend(prop={'size':8}, loc=2, ncol=2)
axs[1].set_ylabel('Annual rate of \npublic mass shootings\n(model)')
axs[1].set_ylim(0, 2.2)
axs[1].xaxis.set_minor_locator(FixedLocator(np.arange(min(data_years_samp), max(data_years_samp))))
axs[1].set_xlim(min(data_years_samp) - 1, max(data_years_samp) + 1)
```
In this plot, the shaded regions show the interquartile and $[16-84]$th percentile ranges. The fact that the interquartile contours never cross the mean ($\mu_0$) indicates that there is never $>75\%$ probability that the annualized trend deviates from the linear mean function. However, there are times when the interquartile range approaches the mean.
Perhaps the most salient feature captured by the covariance function of the Gaussian process is a dip in the annualized rate of public mass shootings in the years from about 2000 to 2005. The model has no features that would seek to explain the causal origin of this dip, although many readers may be surprised by its juxtaposition with the Columbine High School massacre (1999), which is understood to have spawned dozens of "copycat" attacks over time (see e.g. Follman & Andrews 2015).
The largest positive deviation from the mean function occurs between about 1988 and 1993. During that time, the mean function itself is very small (see previous figure), so this does not represent a large absolute deviation.
### Gaussian process with weak $\rho^{-1}$ prior
For comparison, we visualize the latent Gaussian process under a weak prior for $\rho^{-1}$.
```python
plot_GP(stan_model_ext_rho_weak)
```
It's clear from this visualization that the Gaussian process does not capture significant short-timescale variations when the timescale prior is loosened. This model also generally expresses lower uncertainty in the annual public mass shootings rate. Consistent with the reliance on the parametric, linear mean function, the extrapolated predictions do not account for any substantial probability of decrease in the rate of public mass shootings after 2016.
We can see the dominance of the mean function over the covariance function directly by again visualizing the isolated Gaussian process covariance function, which shows virtually no deviation from the mean:
```python
y2_gp_rho_weak = np.percentile(np.exp(
stan_model_ext_rho_weak['y2'] -
np.dot(stan_model_ext_rho_weak['mu_b'][:,np.newaxis], (data_years_samp[np.newaxis,:] - min(data_years)))
), [16,25,50,75,84], axis=0)
fig, axs = plt.subplots(1, figsize=(7,5), sharex='all')
pfb = axs.fill_between(data_years_samp, y2_gp_rho_weak[1], y2_gp_rho_weak[3], color='b', alpha=.25)
pfb2 = axs.fill_between(data_years_samp, y2_gp_rho_weak[0], y2_gp_rho_weak[4], color='b', alpha=.25)
pfg = axs.plot(data_years_samp, y2_gp_rho_weak[2], c='b', lw=2, label='GP model (covariance only)', zorder=0)
axs.axhline(np.exp(stan_model_ext_rho_weak['mu_0'].mean()), color='orange', label='$\mu_0$')
axs.legend(prop={'size':8}, loc=2, ncol=2)
axs.set_ylabel('Annual rate of \npublic mass shootings\n(model)')
axs.set_title(r'Weak $\rho$ prior')
axs.set_ylim(0, 2.2)
axs.xaxis.set_minor_locator(FixedLocator(np.arange(min(data_years_samp), max(data_years_samp))))
axs.set_xlim(min(data_years_samp) - 1, max(data_years_samp) + 1)
```
## Inspection of posterior correlations
Before we explore the marginalized posterior distributions of the parameters in our model, we take advantage of the fully Bayesian posterior samples generated by the NUTS simulations to understand the correlations between parameters in the posterior distribution.
First we note that the parameters of the linearized mean function are highly correlated:
```python
plt.figure()
pa = plt.hist2d(stan_model_ext_rho_strong['mu_0'],
stan_model_ext_rho_strong['mu_b'],
bins=100, cmap=cm.Reds, cmin=4)
plt.xlabel(r'$\mu_0$ (log shootings)')
plt.ylabel(r'$\mu_b$ (log shootings per year)')
plt.axvline(0, color='k', ls='dashed')
plt.axhline(0, color='k', ls='dashed')
plt.axis([-1.5,1.5,-0.05,.1])
cb = plt.colorbar()
cb.set_label('Number of posterior samples')
```
If the mean rate of public mass shootings at the beginning of the time series ($\mu_0$) is inferred to be higher, then the increase in the mean function over time needed to explain the observations ($\mu_b$) would be lower. However, at all probable values of $\mu_0$, the distribution of $\mu_b$ is predominantly positive.
We can fit a simple linear model to understand more subtle correlations in the multivariate posterior distribution. Here we fit a model for $\rho^{-1}$ as a function of the other major parameters of the model. We standardize the predictors so that we can directly compare the coefficients on the linear model.
```python
import statsmodels.api as sm
## Assemble data matrices
y = pd.Series(stan_model_ext_rho_strong['inv_rho']); y.name = 'inv_rho'
X = pd.DataFrame({
'eta':np.sqrt(stan_model_ext_rho_strong['eta_sq']),
'mu_0':stan_model_ext_rho_strong['mu_0'],
'mu_b':stan_model_ext_rho_strong['mu_b'],
'sigma':np.sqrt(stan_model_ext_rho_strong['sigma_sq']),
'NB_phi_inv':np.sqrt(stan_model_ext_rho_strong['NB_phi_inv']),
})
## Standardize
X = X - X.mean()
X = X / X.std()
X = sm.add_constant(X)
y = (y - y.mean()) / y.std()
## Fit linear model using stats models
est = sm.OLS(y, X).fit()
## Print summary
print est.summary2()
```
Results: Ordinary least squares
====================================================================
Model: OLS Adj. R-squared: 0.054
Dependent Variable: inv_rho AIC: 233702.4431
Date: 2017-01-14 23:07 BIC: 233758.4745
No. Observations: 84000 Log-Likelihood: -1.1685e+05
Df Model: 5 F-statistic: 964.7
Df Residuals: 83994 Prob (F-statistic): 0.00
R-squared: 0.054 Scale: 0.94575
----------------------------------------------------------------------
Coef. Std.Err. t P>|t| [0.025 0.975]
----------------------------------------------------------------------
const -0.0000 0.0034 -0.0000 1.0000 -0.0066 0.0066
NB_phi_inv 0.0170 0.0034 5.0545 0.0000 0.0104 0.0236
eta 0.2318 0.0034 68.7066 0.0000 0.2252 0.2384
mu_0 0.0585 0.0062 9.4253 0.0000 0.0463 0.0706
mu_b 0.0647 0.0062 10.4458 0.0000 0.0525 0.0768
sigma 0.0172 0.0034 5.0956 0.0000 0.0106 0.0238
--------------------------------------------------------------------
Omnibus: 11670.523 Durbin-Watson: 2.016
Prob(Omnibus): 0.000 Jarque-Bera (JB): 18349.721
Skew: 0.978 Prob(JB): 0.000
Kurtosis: 4.189 Condition No.: 3
====================================================================
We see that the most significant correlation is between $\rho^{-1}$ and $\eta$. When we visualize this correlation, we observe that the level of posterior curvature associated with these two variables is small, though significant.
```python
plt.figure()
pa = plt.hist2d(np.sqrt(stan_model_ext_rho_strong['eta_sq']),
stan_model_ext_rho_strong['inv_rho'],
bins=40, cmap=cm.Reds, cmin=4,
range = [[0,1],[1,12]])
plt.xlabel(r'$\eta$ (log shootings per year)')
plt.ylabel(r'$\rho^{-1}$ (years)')
sqrt_eta = np.sqrt(stan_model_ext_rho_strong['eta_sq'])
px = np.linspace(min(sqrt_eta), max(sqrt_eta), 10)
px_std = (px - np.mean(sqrt_eta)) / np.std(sqrt_eta)
plt.plot(px,
# Constant term
(est.params[est.model.exog_names.index('const')] +
# Linear term
px * est.params[est.model.exog_names.index('eta')]
# Standardization adjustment
* stan_model_ext_rho_strong['inv_rho'].std()) + stan_model_ext_rho_strong['inv_rho'].mean())
plt.axis()
cb = plt.colorbar()
cb.set_label('Number of posterior samples')
plt.title(r'Strong prior on $\rho^{-1}$')
```
When we explore the same correlation in the posterior of the model with a weak prior specified on the timescale hyperparameter, we see somewhat different results:
```python
## Assemble data matrices
y = pd.Series(np.log(stan_model_ext_rho_weak['inv_rho'])); y.name = 'inv_rho'
X = pd.DataFrame({
'eta':np.sqrt(stan_model_ext_rho_weak['eta_sq']),
'mu_0':stan_model_ext_rho_weak['mu_0'],
'mu_b':stan_model_ext_rho_weak['mu_b'],
'sigma':np.sqrt(stan_model_ext_rho_weak['sigma_sq']),
'NB_phi_inv':np.sqrt(stan_model_ext_rho_weak['NB_phi_inv']),
})
## Standardize
X = X - X.mean()
X = X / X.std()
X = sm.add_constant(X)
y = (y - y.mean()) / y.std()
## Fit linear model using stats models
est = sm.OLS(y, X).fit()
## Print summary
print est.summary2()
plt.figure()
pa = plt.hist2d(np.sqrt(stan_model_ext_rho_weak['eta_sq']),
stan_model_ext_rho_weak['inv_rho'],
bins=40, cmap=cm.Reds, cmin=4,
range = [[0,4],[1,300]])
plt.xlabel(r'$\eta$ (log shootings per year)')
plt.ylabel(r'$\rho^{-1}$ (years)')
sqrt_eta = np.sqrt(stan_model_ext_rho_weak['eta_sq'])
px = np.linspace(min(sqrt_eta), max(sqrt_eta), 10)
px_std = (px - np.mean(sqrt_eta)) / np.std(sqrt_eta)
plt.plot(px,
# Constant term
(est.params[est.model.exog_names.index('const')] +
# Linear term
px * est.params[est.model.exog_names.index('eta')]
# Standardization adjustment
* stan_model_ext_rho_weak['inv_rho'].std()) + stan_model_ext_rho_weak['inv_rho'].mean())
plt.axis()
cb = plt.colorbar()
cb.set_label('Number of posterior samples')
plt.title(r'Weak prior on $\rho^{-1}$')
```
Again, $\eta$ is the parameter most significantly correlated with $\rho^{-1}$, but now the 2D posterior visualization shows that this correlation is substantially non-linear. In particular for the model with the weak prior on $\rho$, $\eta$ is constrained to much smaller values when the timescale $\rho^{-1}$ is small. In other words, in models that permit variations from the mean function on timescales smaller than the observational range ($\sim35$ years), the amplitude of those variations is constrained to be very small. In any scenario, as we have seen, the importance of the covariance function is minimal under this prior.
## Parameter inferences
Below we show the marginalized posterior distributions of the parameters of the Gaussian process under the strong prior on $\rho$.
```python
def gt0(y, x, lbound=0, ubound=np.inf):
    y[(x<lbound) | (x>ubound)] = 0
return y
def marg_post_plot(stan_model_ext, alpha_rho, beta_rho, Nhist=25):
hyp_dic = {
'eta_sq': ('$\\eta$', np.sqrt, 'log shootings per year', lambda x: sstats.cauchy.pdf(x**2, 0, 1)),
'inv_rho': ('$\\rho^{-1}$', lambda x: x, 'years', lambda x: gt0(sstats.gamma.pdf(x, alpha_rho, scale=beta_rho), x, lbound=1)),
'sigma_sq': ('$\\sigma$', np.sqrt, 'log shootings per year', lambda x: sstats.cauchy.pdf(x**2, 0, 1)),
'NB_phi_inv':('$\\rm{NB}_\\phi^{-1}$', lambda x:x, '', lambda x: sstats.cauchy.pdf(x**2, 0, 0.5)),
}
meanfunc_dic = {
'mu_0': ('$\\mu_0$', lambda x: x, 'log shootings per year, '+str(np.min(data_years)), lambda x: sstats.norm.pdf(x, 0,2)),
'mu_b': ('$\\mu_b$', lambda x: x, 'annual increase in\nlog shootings per year', lambda x: sstats.norm.pdf(x, 0,0.2)),
}
for name,pdic in (('hyper', hyp_dic), ('meanfunc', meanfunc_dic)):
fig,axs = plt.subplots(1,len(pdic), figsize=(2.5*len(pdic), 2.5), sharey='all')
axs[0].set_ylabel('HMC samples ({} total)'.format(N_samp))
for i,hyp in enumerate(pdic.keys()):
samps = pdic[hyp][1](stan_model_ext[hyp])
hn, hb, hp = axs[i].hist(samps, Nhist, edgecolor='none', facecolor='.5', label='Posterior samples')
ppx = np.linspace(np.min(samps), np.max(samps), 10000)
ppy = pdic[hyp][1]( pdic[hyp][3](ppx) )
## Normalize
ppy *= len(samps) / np.sum(ppy) * len(ppy) / len(hn)
axs[i].plot(ppx, ppy, color='b', zorder=2, label='Hyperprior')
axs[i].xaxis.set_major_locator(MaxNLocator(3))
axs[i].xaxis.set_minor_locator(AutoMinorLocator(3))
axs[i].set_xlabel(pdic[hyp][0] + ' ({})'.format(pdic[hyp][2]), ha='center')
axs[i].axvline(0, ls='dashed', color='.2')
axs[-1].legend(prop={'size':9})
print "Strong prior on rho:"
marg_post_plot(stan_model_ext_rho_strong, stan_data_rho_strong['alpha_rho'], 1/stan_data_rho_strong['beta_rho'], Nhist=100)
```
The comparison of the posterior and prior distributions shows strong evidence from the data to identify most hyperparameters. The posterior for $\mu_0$ shows a concentration around a baseline rate of $\exp(-1)\sim0.4$ to $\exp(1)\sim 3$ public mass shootings per year at the start of the dataset, 1982, reflecting a variance much smaller than that of the corresponding prior. The negative binomial overdispersion parameter ($\phi^{-1}$) is concentrated towards very small values $\ll 1$, indicating that the Poisson distribution is a good approximation to the variance in the observations. The amplitude of the Gaussian process covariance function, $\eta$, is strongly shifted from the mode of the prior distribution, to a mean of $\exp(0.5)\sim1.6$ public mass shootings per year. The variance of the Gaussian process covariance function, $\sigma$, has a posterior variance much smaller than the prior distribution.
The posterior distribution of $\rho^{-1}$ is a notable exception. It shows no visual deviation from the prior distribution, indicating that this parameter is not identified by the observations.
Next we explore the same marginalized posteriors under the weak prior on $\rho$.
```python
print "Weak prior on rho:"
marg_post_plot(stan_model_ext_rho_weak, stan_data_rho_weak['alpha_rho'], 1/stan_data_rho_weak['beta_rho'], Nhist=100)
```
With the weak prior on $\rho$, most parameters have posterior distributions nearly identical to their distributions under the strong prior on $\rho$. In particular, the conclusions about the mean function parameters ($\mu_0$ and $\mu_b$), $\phi$, and $\sigma$ seem robust to the choice of prior.
Importantly, the $\rho$ parameter is again largely non-identified. Its posterior distribution generally follows the weaker prior, although it shows a posterior probability less than the prior for the very smallest values. The consequence is that the models sampled from the Gaussian process have very long timescales for their covariance function. The distribution of the amplitude, $\eta$, is skewed to larger values under the weaker prior, although the amplitude of the covariance function has little consequence when the time variation is negligible (as discussed in the previous section).
## Model predictions
We calculate the posterior probability that the annualized rate of public mass shootings has increased in the US since 1982 ($\mu_b > 0$).
```python
print_ext_names = ['...with strong prior on rho: ', '...with weak prior on rho: ']
print 'p(mu_b > 0):'
for i in range(2):
print print_ext_names[i]+'%0.0f'%(np.mean(exts[i]['mu_b'] > 0)*100)+'%'
```
p(mu_b > 0):
...with strong prior on rho: 97%
...with weak prior on rho: 97%
This indicates strong statistical evidence for a long term increase in the annualized rate of public mass shootings over the past three decades, regardless of our choice of prior for the timescale parameter, $\rho$. In linear terms, the mean percentage increase in the rate of public mass shootings is found to be,
```python
zincreaseraw = {}
for i in range(2):
zincreaseraw[i] = (np.exp((2016 - np.min(data_years)) * exts[i]['mu_b']) - 1) * 100
zincrease = np.percentile(zincreaseraw[i], [16,50,84])
print print_ext_names[i]+'%0.0f'%round(zincrease[1], -1)+'^{+%0.0f'%round(zincrease[2]-zincrease[1], -1)+'}_{-%0.0f'%round(zincrease[1]-zincrease[0], -1)+'}'
```
...with strong prior on rho: 350^{+500}_{-230}
...with weak prior on rho: 360^{+490}_{-230}
While the uncertainty interval is large, the $1\sigma$ estimate suggests at least a doubling in the annualized rate of public mass shootings over these three decades, and more likely a quadrupling or greater increase.
For comparison, the US population has grown from $\sim231$ million to $318$ million residents according to [World Bank data](http://data.worldbank.org/indicator/SP.POP.TOTL?cid=GPD_1), an increase of $38\%$, over that same period. The model posterior suggests that the rate of public mass shootings has surpassed the rate of population growth with high confidence:
```python
for i in range(2):
print print_ext_names[i]+'%0.0f'%(np.mean(zincreaseraw[i] > 38)*100)+'%'
```
...with strong prior on rho: 94%
...with weak prior on rho: 94%
Cohen et al. (2014) reported a tripling in the rate of mass shootings between 2011 and 2014 on the basis of an SPC methodology. Our inference on the mean function of the Gaussian process, because it is parameterized as linear over the full time extent of the modeled period, does not directly address this claim. But the simulated predictions of the Gaussian process, including the covariance component, can generate relevant comparisons.
```python
i1 = np.argmin(abs(data_years_samp - 2011.5))
i2 = np.argmin(abs(data_years_samp - 2014.5))
py = np.exp(stan_model_ext_rho_strong['y2'][:,i2]) / np.exp(stan_model_ext_rho_strong['y2'][:,i1])
plt.figure()
ph = plt.hist(py, 50, edgecolor='none', facecolor='.5', range=[0,8], normed=1)
plt.xlabel('Relative rate of public mass shootings in 2014 versus 2011')
plt.ylabel('Posterior probability')
plt.axvline(1, color='k', label='Unity')
plt.axvline(np.mean(py), color='b', label='Mean posterior estimate', ls='dashed')
plt.axvline(3, color='g', label='Cohen et al. estimate', lw=2, ls='dotted')
plt.legend()
print "Probability that rate increased: ", '%0.0f'%(np.mean(py > 1) * 100), '%'
print "Mean predicted level of increase: ", '%0.1f'%(np.mean(py)), 'X'
print "Probability of increase by at least 3X: ", '%0.2f'%(np.mean(py > 3)), '%'
```
While we have reported that the increase in the rate of public mass shootings over the past three decades is likely to be a factor of several, we find much less evidence for such a dramatic increase over the time period from 2011 to 2014. As reported above, our model predicts better than even odds that there was an increase during that three year period, but the probability that it was as high as a tripling is small. Our model suggests that the increase was more likely to be $\sim30\%$.
## Conclusions
We have used Stan to implement and estimate a negative binomial regression model for the annualized rate of public mass shootings in the United States based on a Gaussian process with a time-varying mean function. When combined with a strong prior on the timescale of the Gaussian process covariance function, this design yields a predictive model with the full non-parametric flexibility of a Gaussian process to explain short timescale variation, while retaining the clear interpretability of a parametric model by isolating and jointly modeling long-term (timescale of decades) evolution in the shooting rate. Applying this model to the Mother Jones database of US public mass shootings, our major conclusions are as follows,
* We use posterior simulations and predictive checks to demonstrate the efficacy of the Gaussian process model in generating and fitting the observations of the annual mass shooting rate from the Mother Jones database. We explore the effects of prior choices on the posterior and visualize posterior curvature between hyperparameters.
* We use the non-parametric Gaussian process predictive model to identify an apparent dip in the mass shooting rate in the first few years of the new millennium.
* With a 97% probability, we find that the annualized rate of public mass shootings has risen over the past three decades. This finding is robust to the choice of prior on the timescale parameter.
* The posterior mean estimate for the increase in the shooting rate since 1982 is $\sim300\%$.
* We compare to an independent, 2014 analysis of the increase in the rate of public mass shootings between 2011 and 2014 by Cohen et al. Our model predicts a smaller rate of increase for the underlying rate of public mass shootings than those authors over this period, closer to 30% over the 4 year period.
## Acknowledgements
The authors would like to thank the anonymous reviewers, Michael Betancourt, Amy Cohen, Deb Azrael, and Matt Miller for very helpful feedback on this manuscript and the statistical techniques applied.
## References
* Betancourt & Girolami 2013: Michael Betancourt & Mark Girolami 2013, [Hamiltonian Monte Carlo for Hierarchical Models](https://arxiv.org/abs/1312.0906)
* Cohen et al. 2014: Amy Cohen et al. 2014, [Rate of public mass shootings Has Tripled Since 2011, Harvard Research Shows](http://www.motherjones.com/politics/2014/10/mass-shootings-increasing-harvard-research)
* Follman & Andrews 2015: Mark Follman and Becca Andrews 2015, [How Columbine Spawned Dozens of Copycats](http://www.motherjones.com/politics/2015/10/columbine-effect-mass-shootings-copycat-data)
* Fox & Levin 2015: James Alan Fox and Jack Levin 2015, Mass confusion concerning mass murder, The Criminologist, Vol. 40, No. 1
* Johnson et al. 2012: Johnson et al. 2012, [Who Is Gun Advocate John Lott?](http://mediamatters.org/research/2012/12/17/who-is-gun-advocate-john-lott/191885) http://mediamatters.org/research/2012/12/17/who-is-gun-advocate-john-lott/191885
* Lott 2014: John R. Lott 2014, [The FBI's Misrepresentation of the Change in Mass Public Shootings](http://dx.doi.org/10.2139/ssrn.2524731)
* Papaspiliopoulos et al. 2003: Non-Centered Parameterisations for Hierarchical Models and Data Augmentation in Bayesian Statistics 7, Oxford University Press, p. 307–326.
* Rasmussen & Williams 2006: Carl Edward Rasmussen & CKI Williams 2006, [Gaussian processes for machine learning](http://www.gaussianprocess.org/gpml/)
* Roberts et al. 2012: S. Roberts et al 2012, [Gaussian processes for time-series modelling](http://www.robots.ox.ac.uk/~sjrob/Pubs/Phil.%20Trans.%20R.%20Soc.%20A-2013-Roberts-.pdf)
* *Stan* Manual: [Stan Modeling Language Users Guide and Reference Manual](http://mc-stan.org/), Version 2.8.0
| 699caf3d0e6c0e451ae1db5e1c76331752db605a | 872,717 | ipynb | Jupyter Notebook | 2017/Contributed-Talks/09_sanders/Annualized Rate of Mass Shootings; Sanders & Lei (StanCon2017 - revised).ipynb | simeond/stancon_talks | 5a2a94ea056dd3c05c4a0e48532769dc8dc7f9ac | [
"CC-BY-4.0",
"BSD-3-Clause"
] | 238 | 2017-01-23T23:15:19.000Z | 2022-03-06T09:26:49.000Z | 2017/Contributed-Talks/09_sanders/Annualized Rate of Mass Shootings; Sanders & Lei (StanCon2017 - revised).ipynb | simeond/stancon_talks | 5a2a94ea056dd3c05c4a0e48532769dc8dc7f9ac | [
"CC-BY-4.0",
"BSD-3-Clause"
] | 5 | 2017-01-24T01:57:34.000Z | 2020-10-22T23:13:04.000Z | 2017/Contributed-Talks/09_sanders/Annualized Rate of Mass Shootings; Sanders & Lei (StanCon2017 - revised).ipynb | simeond/stancon_talks | 5a2a94ea056dd3c05c4a0e48532769dc8dc7f9ac | [
"CC-BY-4.0",
"BSD-3-Clause"
] | 103 | 2017-01-24T04:10:41.000Z | 2022-02-05T16:39:40.000Z | 481.100882 | 132,450 | 0.922103 | true | 15,594 | Qwen/Qwen-72B | 1. YES
2. YES | 0.909907 | 0.835484 | 0.760212 | __label__eng_Latn | 0.966887 | 0.60456 |
$ \newcommand{\pd}[2]{ \frac{\partial #1}{\partial #2} }
\newcommand{\od}[2]{\frac{d #1}{d #2}}
\newcommand{\td}[2]{\frac{D #1}{D #2}}
\newcommand{\ab}[1]{\langle #1 \rangle}
\newcommand{\bss}[1]{\textsf{\textbf{#1}}}
\newcommand{\ol}{\overline}
\newcommand{\olx}[1]{\overline{#1}^x}
$
# Advection, Diffusion, and Conservation Laws
## Basin Scale Budgets
As an introduction to this somewhat technical topic, we will first examine the _basin scale_ budgets of _mass, heat, and salt_. These are usually a bit more intuitive to understand.
Consider the volume shown below. It shows the side view of an ocean basin (like the North Atlantic) bounded at the bottom by the sea floor, at the top by the ocean surface, and at the southern edge by an open boundary through which water can flow. The central question here is, _what controls the rate of change of total mass, heat content, and salt inside this volume_?
```python
from IPython.display import Image
Image('images/3D_Basin_Diagram.png')
```
### Mass Conservation
Let $M$ be the total mass of the ocean in this basin. The rate of change of mass is given by
$$ \od{M}{t} = F_m^{surf} + F_{m}^{side} $$
The two terms on the right represent the mass flux into the volume at the surface and the mass flux into the volume at the side boundary. (Note that the sign conventions we chose for the $F_m$'s were arbitrary: we defined them such that mass will increase if they are positive. This choice dictates the sign in what follows.) At this point the equation is a bit obvious, but let's write what each term is a bit more explicitly.
The total mass is given by the integral of the density over the volume (denoted by $V$):
$$ M = \iiint_V \rho dV \ . $$
The mass flux at the surface is given by the integral over the surface of evaporation ($E$, measured in m/s), precipitation ($P$) and runoff ($R$), multiplied by the density of fresh water:
$$ F_m^{surf} = -\iint_{surf} \rho_{fw} (E - P - R) dA \ . $$
The mass flux at the open boundary is given by the area integral of the velocity $\mathbf{u}$ times the density, in the direction normal to the boundary:
$$ F_m^{side} = - \iint_{side} \rho \mathbf{u} \cdot \mathbf{dA} \ .$$
Here $\mathbf{dA}$ is the vector area element which points _out_ of the volume normal to the boundary.
This equation suggests several different possibilities:
- If the mass inflow at the side boundary is _not_ in balance with the net E-P-R, then the total ocean mass in the basin has to increase or decrease.
- In steady state (i.e. time derivative is zero), the net evaporation minus precipitation must balance the mass inflow at the boundary.
It is very common in oceanography to take the mass of the ocean, or of a basin, as constant in time. Although the ocean mass does change slightly on various timescales (most notably through the formation and melting of terrestrial ice sheets), these changes are extremely slight compared to the total ocean mass. In this case, we find
$$ F_m^{surf} = - F_{m}^{side} $$
### Mass Conservation in Boussinesq Approximation
The Boussinesq approximation is that density variations $\delta \rho$ are very small compared to the background density $\rho_0$, i.e.
$$ \rho = \rho _0 + \delta \rho \ ; \ \ |\delta \rho| \ll \rho_0 \ . $$
This approximation simplifies conservations equations considerably. The mass can now be approximated as
$$ M \simeq \iiint_V \rho_0 dV = \rho_0 \iiint_V dV \ . $$
Since $\rho_0$ is a constant, we can divide it out of the mass equation and work with volume instead:
$$ V = \iiint_V dV \ . $$
The mass equation becomes a volume equation:
$$ \begin{align}
\od{V}{t} =& \frac{F_m^{surf}}{\rho_0} + \frac{F_{m}^{side}}{\rho_0} \\
=& F_v^{surf} + F_v^{side}
\end{align} $$
with
$$ F_v^{surf} = -\frac{\rho_{fw}}{\rho_0} \iint_{surf} (E - P - R) dA $$
and
$$ F_v^{side} = - \iint_{side} \mathbf{u} \cdot \mathbf{dA} \ . $$
### Heat Conservation
The total heat content of the basin is given by
$$ H = c_p^0 \iiint_V \rho \Theta dV \ . $$
where $\Theta$ is the conservative temperature (proportional to potential enthalpy).
Some would say that $\Theta$ should be measured in Kelvins. However, the [first law of thermodynamics](https://en.wikipedia.org/wiki/First_law_of_thermodynamics) only describes the _changes_ in heat content of a system; the absolute value of $H$ is meaningless.
The rate of change of heat content depends only on external fluxes into the system:
$$ \od{H}{t} = F_H^{surf} + F_H^{side} \ . $$
The surface flux is the air-sea flux $Q$ we discussed earlier, plus the advection of heat by the evaporating / precipitating water:
$$ F_H^{surf} = \iint_{surf} Q dA - c_p^0 \iint_{surf} \rho_{fw} \Theta (E - P - R) dA $$
where $Q$ is defined as positive for downward heat flux. The second term represents the advection of heat into / out of the ocean surface by evaporating / precipitating water. Neglecting diffusive fluxes (for now), the flux through the side is
$$ F_H^{side} = - c_p^0 \iint \mathbf{u} \rho \Theta \cdot \mathbf{dA} $$
#### Simultaneous Changes of Heat and Mass
It becomes difficult to reason about heat content when the mass of the system is also changing. For this reason, we will mostly talk about heat transport in situations where the total ocean mass, or the mass of an individual basin under consideration, is constant in time (i.e. $d M / d t = 0$).
If the mass of the system is changing, it may be useful to distinguish between the mean temperature of the body $\overline{\Theta}$ and a fluctuation from this mean $\Theta'$. Expanding $H$, we obtain:
$$
\od{H}{t} = \od{}{t} \left [ c_p^0 \iiint_V \rho (\overline{\Theta} + \Theta') dV \right ]
= c_p^0 \od{}{t} \iiint_V \rho \Theta' dV + c_p^0 \overline{\Theta} \od{M}{t}\ .
$$
We make the same substitution in the surface flux
$$ F_H^{surf} = \iint_{surf} Q dA - c_p^0 \iint_{surf} \rho_{fw} \Theta'(E - P - R) dA -
c_p^0 \overline{\Theta} \iint_{surf} \rho_{fw} (E - P - R) $$
and the side flux
$$ F_H^{side} = - c_p^0 \iint_{side} \mathbf{u} \rho \Theta' \cdot \mathbf{dA} -
c_p^0 \overline{\Theta} \iint_{side} \mathbf{u} \rho \cdot \mathbf{dA}$$
Summing all these, we can obtain a formula of the form
$$
\od{H'}{t} = F_H^{'surf} + F_H^{'side} + c_p^0 \overline{\Theta} (\od{M}{t} - F_m^{surf} - F_m^{side})
$$
Through mass conservation, the final term proportional to $\overline{\Theta}$ cancels out to zero.
All that remains are the terms proportional to $\Theta'$, which we have defined as
\begin{align}
H' &= c_p^0 \od{}{t} \iiint_V \rho \Theta' dV \\
F_H^{'surf} &= \iint_{surf} Q dA - c_p^0 \iint_{surf} \rho_{fw} \Theta'(E - P - R) dA \\
F_H^{'side} &= - c_p^0 \iint_{side} \mathbf{u} \rho \Theta' \cdot \mathbf{dA} \ .
\end{align}
This gives us a template to follow when trying to calculate heat budgets for bodies whose mass is changing: redefine the temperature as relative to the mean temperature.
This form of the budget also reveals something important: even with $dM/dt = 0$, the magnitude of the surface and side fluxes is indeterminate up to a constant.
We can add _any_ constant temperature to these components, and it will cancel out in the heat budget.
Attempting to break down the heat budget into a sum of different components can lead to spurious conclusions, since these individual components potentially depend on an arbitrary reference temperature.
We can avoid this indeterminacy only if:
- There are no net mass fluxes at all through any of the boundaries ($F_m^{surf} = F_m^{side} = 0$) OR
- We reference all our temperature measurements at all boundaries to the same constant and, using the same reference temperature, cancel the terms related to net mass flux.
### Heat Conservation in Boussinesq Approximation
Under the Boussinesq approximation,
$$ H = c_p^0 \rho_0 \iiint_V \Theta dV \ . $$
The fluxes simplify to
$$ F_H^{surf} = \iint_{surf} Q dA - c_p^0 \frac{ \rho_{fw} }{\rho_0} \iint_{surf} \Theta (E - P - R) dA $$
and
$$ F_H^{side} = - c_p^0 \rho_0 \iint_{side} \mathbf{u} \Theta \cdot \mathbf{dA} \ . $$
Volume conservation ($d V / dt = 0$) accompanies heat conservation to render it independent of $\Theta_{ref}$.
### Salt and Freshwater Conservation
Since negligible salt is exchanged with the atmosphere (or sea ice), the basin-scale budget for salinity is
$$\mathcal{S} = \iiint_V \rho S dV $$
$$ \frac{d \mathcal{S}}{dt} = F_S^{side} $$
with
$$ F_S^{side} = - \iint_{side} \mathbf{u} \rho S \cdot \mathbf{dA} \ . $$
In steady state this simply states that the net salt flux through a closed section is zero.
We can construct the freshwater balance by subtracting the mass balance. Assuming steady state, we find
$$ F_M^{side} - F_S^{side} = -F_M^{surf}. $$
This equation can be simplified to
$$ F_{fw}^{side} = - \iint_{side} \mathbf{u} \rho(1 - S) \cdot \mathbf{dA}
= \iint_{surf} \rho_{fw} (E - P - R) dA $$
which represents the net freshwater flux through the basin. In the Boussinesq approximation, we have
$$ F_{fw}^{side} = - \rho_0 \iint_{side} \mathbf{u} (1 - S) \cdot \mathbf{dA}
= \rho_{fw} \iint_{surf}(E - P - R) dA $$
For a basin with a closed volume budget ($F_V^{side} = 0$), the first term in parentheses vanishes, and we obtain
$$ F_{fw}^{side} = - F_S^{side} $$
i.e. the freshwater transport is equal and opposite to the salt transport.
### Layered Models
```python
Image('images/3D_Basin_Diagram_layers.png')
```
A very common framework for thinking about basin-scale budgets is the use of _layered models_.
By this we mean models where the quantities of interest---$\mathbf{u}$, $\Theta$, etc.---are represented as piecewise-constant functions.
In these cases, we can replace many of the integrals in the budget with simple sums.
#### Example: Two Layer Flow
As an example, let's develop a model very loosely inspired by the North Atlantic: flow coming into the basin in layer 1 and out of the basin in layer 2. Let's use the Boussinesq assumption and additionally assume steady-state volume ($dV/dt = 0$) and no net volume flux through the surface $F_v^{surf} = 0$. In this case, the volume budget just becomes
$$
F_v^{side} = - \iint_{side} \mathbf{u} \cdot \mathbf{dA} = 0
$$
Now let's divide the integral into two separate components:
- In layer 1, $\mathbf{u} \cdot \mathbf{dA} < 0$ (inflow)
- In layer 2, $\mathbf{u} \cdot \mathbf{dA} > 0$ (outflow)
We now separate the integral into two parts
\begin{align}
F_v^{side} &= - \iint_1 \mathbf{u} \cdot \mathbf{dA} - \iint_2 \mathbf{u} \cdot \mathbf{dA} \\
&= \psi_1 + \psi_2 = 0
\end{align}
Clearly the two components must be equal and opposite, i.e.
$$ \psi_1 = \psi \ , \ \ \psi_2 = -\psi $$
In this simple two-layer model, the quantity $\psi$ represents the strength of the overturning circulation (covered much more in-depth in {doc}`overturning_circulation`).
It is a volume flux, measured in m$^3$/s, or in Sv (1 Sv = 10$^6$ m$^3$/s).
Note that we did not need to specify the exact details of $\mathbf{u}$ on the boundary or the shape of the boundary.
Now we will look at heat transport by this circulation.
The advective heat transport through the side boundary is given by
$$ F_H^{side} = - c_p^0 \rho_0 \iint_{side} \mathbf{u} \Theta \cdot \mathbf{dA} \ . $$
This is where the piecewise-constant approximations of the layered model come in.
_We will assume that the temperature is uniform in each layer on the boundary._
I.e., on the side, we have $\Theta = \Theta_1$ in layer 1 and $\Theta = \Theta_2$ in layer 2.
The heat transport becomes
\begin{align}
F_H^{side} &= - c_p^0 \rho_0 \Theta_1 \iint_1 \mathbf{u} \cdot \mathbf{dA} -
c_p^0 \rho_0 \Theta_2 \iint_2 \mathbf{u} \cdot \mathbf{dA} \\
&= c_p^0 \rho_0 ( \Theta_1 \psi_1 + \Theta_2 \psi_2 )
\end{align}
Recognizing that $\psi_1 = \psi$ and $\psi_2 = -\psi$, we obtain
$$
F_H^{side} = c_p^0 \rho_0 \psi ( \Theta_1 - \Theta_2 ) \ ,
$$
where $\Delta \Theta = \Theta_1 - \Theta_2$ is the difference in temperature between the two layers.
This simple yet powerful formula expresses some important fundamental truths about how ocean heat transport works:
- Heat transport can clearly occur even when there is no net volume transport, due to correlations between the strength / direction of the flow and the water temperature.
- The transport is proportional to the strength of the circulation, as measured by $\psi$.
- The transport is proportional to the temperature _difference_ between the inflowing and outflowing water.
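As a rough back-of-the-envelope check (the numbers below are illustrative assumptions, not values derived in the text), we can plug typical Atlantic overturning values into this formula: $\psi \approx 15$ Sv and $\Delta\Theta \approx 15$ K give a heat transport of order 1 PW.
```python
# illustrative values only (assumed, not derived above)
rho_0 = 1025.0       # reference density [kg / m^3]
cp0 = 3990.0         # heat capacity c_p^0 [J / (kg K)]
psi = 15e6           # overturning strength [m^3 / s] = 15 Sv
delta_Theta = 15.0   # temperature difference between layers [K]

F_H_side = cp0 * rho_0 * psi * delta_Theta
print(F_H_side / 1e15, 'PW')   # roughly 0.9 PW
```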
## Micro-scale View
The integral budgets above illustrate the concept of heat conservation at the basin scale. We now move down-scale to consider an _infinitesimally small_ element of fluid. These differential equations are the most general and flexible way to describe the conservation of mass, heat, and salt.
## Material Derivative #
The "water parcel" is a hypothetical, infinitesimally small fluid element. We imagine the fluid to be composed of an infinite continuum of such parcels, each following its own unique path.
A fluid parcel position at a given time $t$ is given by a three dimensional vector $\mathbf{x}(x,y,z,t)$. The instantaneous rate of change of the position of such a fluid element defines the fluid velocity:
$$
\mathbf{u} = (u, v, w)\equiv \od{\mathbf{x}}{t}
$$
One way to do fluid mechanics is to keep track of all such fluid elements as they move around; this is called the _Lagrangian_ approach. In Lagrangian fluid mechanics, the rate of change of some fluid property $c$ is just $dc / dt$; it is implicit that we are following the water parcel along its path through space. As you might imagine, Lagrangian fluid mechanics is very difficult because there is actually an infinite number of such paths, and the trajectories quickly become extremely complex. This approach is also incompatible with the way we usually measure the ocean, which involves taking samples at a fixed point in space.
The alternative is the _Eulerian_ approach. Under this approach, the rate of change of a fluid parcel property is proportional to the _local_ rate of change of that property, plus a contribution from the fluid flow transporting that property around (called advection). We can see this mathematically by using the chain rule:
$$ \od{}{t} c(\mathbf{x}, t) = \pd{c}{t} + \od{\mathbf{x}}{t} \cdot \nabla c
= \left ( \pd{}{t} + \mathbf{u} \cdot \nabla \right ) c \ . $$
The quantity in parentheses is called the _material derivative_. To distinguish it from an ordinary or partial derivative, we often denote it with a capital $D$:
$$ \td{}{t} = \pd{}{t} + \mathbf{u} \cdot \nabla $$
It represents a rate of change following a fluid parcel, but in the Eulerian frame of reference.
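As a small symbolic illustration (added here, not part of the original text), we can write the material derivative of a generic scalar field with sympy, treating the velocity components as given functions of space and time.
```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
u, v, w = [sp.Function(name)(x, y, z, t) for name in ('u', 'v', 'w')]
c = sp.Function('c')(x, y, z, t)

# D/Dt = partial_t + u . grad
Dc_Dt = sp.diff(c, t) + u*sp.diff(c, x) + v*sp.diff(c, y) + w*sp.diff(c, z)
Dc_Dt
```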
### Mass Conservation: Continuity Equation
The mass budget of an infinitesimally small fluid parcel is
$$ \pd{\rho}{t} + \nabla \cdot (\rho \mathbf{u}) = 0 \ . $$
This equation is called the _continuity equation_.
Using some [vector calculus identities](https://en.wikipedia.org/wiki/Vector_calculus_identities), it can be rewritten as.
$$ \pd{\rho}{t} + \mathbf{u} \cdot \nabla \rho + \rho \nabla \cdot \mathbf{u} = 0 $$
or, using the definition of material derivative
$$ \td{\rho}{t} + \rho \nabla \cdot \mathbf{u} = 0 $$
### Continuity Equation in Boussinesq Approximation
In the Boussinesq approximation, fluctuations in density are much smaller than the background density. If we expand the continuity equation in terms of background and fluctuations, we obtain
$$ \td{\rho_0}{t} + \td{\delta \rho}{t} + \rho_0 \nabla \cdot \mathbf{u} + \delta \rho \nabla \cdot \mathbf{u} = 0 $$
The first term is zero, because $\rho_0$ is constant. Of the remaining nonzero terms, only one has a $\rho_0$ in it. As a "first order" approximation, we find that
$$ \rho_0 \nabla \cdot \mathbf{u} \simeq 0 \ .$$
At this point, we are free to drop the $\rho_0$ factor to give the Boussinesq volume continuity equation:
$$ \nabla \cdot \mathbf{u} = 0 \ .$$
This is also called the incompressibility equation. In the Boussinesq approximation, volume conservation replaces mass conservation.
### Heat Conservation
The left-hand side of the heat conservation equation, representing the rate of change of the heat content of a fluid parcel is
$$ c_p^0 \left [ \pd{}{t}(\rho \Theta) + \nabla \cdot ( \rho \Theta \mathbf{u} ) \right ]$$
which we can rewrite using the continuity equation as
$$ c_p^0 \rho \td{\Theta}{t} \ . $$
The right hand side, representing all non-conservative effects, can be written as the sum of the convergence of a molecular diffusive flux $\mathbf{Q}_{diff}$ and a radiative flux $Q_{rad}$:
$$ - \nabla \cdot \mathbf{Q}_{diff} - \pd{Q_{rad}}{z} $$
(Note we have assumed that the radiative flux is only in the vertical direction, a very reasonable approximation for large-scale processes.)
The diffusive flux is given by [Fick's law of diffusion](https://en.wikipedia.org/wiki/Fick%27s_laws_of_diffusion):
$$ \mathbf{Q}_{diff} = - c_p^0 \rho \kappa_T \nabla T $$
where $T$ is the _in-situ_ temperature and $\kappa_T$ is the molecular diffusivity of heat. The air-sea heat flux acts as a boundary condition for the diffusive flux.
In the Boussinesq approximation, we replace the $\rho$'s above with $\rho_0$ to obtain:
$$ \td{\Theta}{t} = \kappa_T \nabla^2 T - \frac{1}{\rho_0 c_p^0}\pd{Q_{rad}}{z} \ .$$
One complication of this equation is that molecular diffusion acts on $T$, not $\Theta$. However, we will mostly be thinking about turbulent, rather than molecular, diffusion, for which this is not an issue.
### Salt Conservation
Neglecting the exchange of salt with the surface, salinity is governed just by advection and diffusion
$$ \rho \td{S}{t} = \nabla \cdot ( \rho \kappa_S \nabla S ) $$
which reduces to
$$ \td{S}{t} = \kappa_S \nabla^2 S $$
in the Boussinesq approximation.
```python
```
| 788c20239bdecf3b25643397548436de734f5f17 | 208,976 | ipynb | Jupyter Notebook | book/04_advection_diffusion_continuity.ipynb | monocilindro/intro_to_physical_oceanography | 1cd76829d94dcbd13e5e81c923db924ff0798c1b | [
"MIT"
] | 82 | 2015-09-18T02:01:53.000Z | 2022-02-28T01:43:48.000Z | book/04_advection_diffusion_continuity.ipynb | monocilindro/intro_to_physical_oceanography | 1cd76829d94dcbd13e5e81c923db924ff0798c1b | [
"MIT"
] | 5 | 2015-09-19T01:35:28.000Z | 2022-02-28T17:23:53.000Z | book/04_advection_diffusion_continuity.ipynb | monocilindro/intro_to_physical_oceanography | 1cd76829d94dcbd13e5e81c923db924ff0798c1b | [
"MIT"
] | 51 | 2015-09-12T00:30:33.000Z | 2022-02-08T19:37:51.000Z | 408.95499 | 115,524 | 0.928408 | true | 5,205 | Qwen/Qwen-72B | 1. YES
2. YES | 0.867036 | 0.79053 | 0.685418 | __label__eng_Latn | 0.994886 | 0.430787 |
```python
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('apw-notebook.mplstyle')
%matplotlib inline
from scipy.integrate import quad
from scipy.interpolate import interp1d
```
Computers can generate (pseudo)-random, uniformly distributed random numbers (using, e.g., the [Mersenne Twister](https://en.wikipedia.org/wiki/Mersenne_Twister)). We often want to generate random samples from distributions, either with analytic forms or complex posterior probabilities. For some distributions, there are specific algorithms that efficiently generate samples (e.g., Box-Mueller for the Gaussian) -- for named distributions, you should almost always use [scipy.stats](https://docs.scipy.org/doc/scipy-0.18.1/reference/stats.html)! -- but in other cases, you'll need to generate samples numerically. Here we'll go through a few methods for doing that.
---
# Inverse transform sampling
Only works for 1D or separable ND distributions.
Given some (probability, number, mass) density function $p(x)$ compute the cumulative distribution:
$$
F(x) = \int_{-\infty}^x \, p(z) \, {\rm d}z
$$
To generate samples from $x \sim p(x)$, generate uniform random samples $u$ and invert the cdf:
$$
x = F^{-1}(u)
$$
Let's see this in action.
### Example 1: points uniformly distributed on the surface of a unit sphere
Generate 1000 angular positions $(\phi, \theta)$ on the surface of the unit sphere, uniformly distributed over the surface.
Do this by analytically inverting the cdf, generating uniform random samples, and evaluating:
```python
phi = np.random.uniform(0, 2*np.pi, size=10000)
theta = np.arccos(2*np.random.uniform(size=10000) - 1)
```
```python
plt.figure(figsize=(5,5))
plt.scatter(np.cos(phi)*np.sin(theta),
np.sin(phi)*np.sin(theta),
alpha=0.1, marker='.')
```
### Example 2: points drawn from a power-law distribution
$$
\begin{align}
p(x) &= C \, x^n\\
\quad x &\in (a, b)\\
n&<0\\
C &= \frac{1 + n}{b^{1+n} - a^{1+n}}
\end{align}
$$
Generate 1000 points drawn from a power-law distribution with the following parameters:
$$
n = -2\\
a = 0.1\\
b = 10.
$$
Do this by _numerically_ computing the cdf along a grid of values over the domain $(0.1,10.)$.
Hint: use scipy's integration function `quad()` and use an interpolating function, scipy's `interp1d()`, to invert the cdf. Both are already imported.
```python
a = 0.1
b = 10.
n = -2.
```
```python
def power_law_pdf(x):
C = (1+n) / (b**(1+n) - a**(1+n))
return C*x**n
```
```python
quad(power_law_pdf, a, b)
```
(0.9999999999999999, 3.038479026730819e-09)
```python
x_grid = np.linspace(a, b, 1024)
power_law_cdf = [quad(power_law_pdf, a, x)[0] for x in x_grid]
```
```python
plt.plot(x_grid, power_law_cdf, marker='');
plt.xlabel(r'$x$')
plt.ylabel(r'$F(x)$')
```
```python
func = interp1d(power_law_cdf, x_grid)
```
```python
xs = func(np.random.uniform(size=10000))
```
```python
plt.hist(np.log10(xs), bins='auto');
plt.yscale('log')
```
### Example 3: radii drawn from a Hernquist profile
$$
\begin{align}
\rho(r) &= \frac{M_{tot}}{2\pi\,a^3} \, \left[\frac{r}{a} \, \left(1+\frac{r}{a}\right)^3\right]^{-1} \\
M(<r) &= M_{tot} \, \frac{\left(r/a\right)^2}{\left(1 + r/a\right)^2}
\end{align}
$$
Use whatever method you prefer to sample 1000 radii from a Hernquist profile.
```python
```
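One possible solution (a sketch, not part of the original notebook): the cumulative mass profile can be inverted analytically, so inverse transform sampling works directly. With $u = M(<r)/M_{tot}$ and taking $a = 1$, solving for $r$ gives $r = a\sqrt{u}/(1-\sqrt{u})$.
```python
# inverse transform sampling of Hernquist radii, assuming a = 1 and M_tot = 1
a_hern = 1.0
u_samples = np.random.uniform(size=1000)
r_samples = a_hern * np.sqrt(u_samples) / (1 - np.sqrt(u_samples))

plt.hist(np.log10(r_samples), bins='auto')
plt.xlabel(r'$\log_{10}\, r$')
plt.ylabel('count');
```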
---
# Rejection sampling
Only practical for low dimensionality.
### Example 1: sampling from a mixture of two Gaussians
Use rejection sampling to generate approximate samples from the mixture of Gaussians $\mathcal{N}(\mu,\sigma^2)$ defined by (and implemented below):
$$
p(x) = 0.5\,\mathcal{N}(1,0.25) + 0.5\,\mathcal{N}(2.5,1)
$$
```python
def gaussian(x, mu, var):
return 1/np.sqrt(2*np.pi*var) * np.exp(-0.5*(x-mu)**2/var)
def gaussian_mixture(x):
return 0.5*gaussian(x, 1., 0.25) + 0.5*gaussian(x, 2.5, 1)
```
First plot the pdf of the mixture
```python
x_grid = np.linspace(-2, 10, 1024)
plt.plot(x_grid, gaussian_mixture(x_grid), marker='')
```
```python
import numpy.random as rand
n_sample = 500000
x_sample = 6.*rand.random(n_sample)
y_sample = 0.5*rand.random(n_sample)
x_to_keep=[]
for i in range(n_sample):
if y_sample[i] < gaussian_mixture(x_sample[i]):
x_to_keep.append(x_sample[i])
plt.hist(x_to_keep,bins=120,normed=True);
```
```python
N = 500000
x_sample = np.random.uniform(-2, 6, size=N)
y_sample = np.random.uniform(0, 0.5, size=N)
idx = y_sample < gaussian_mixture(x_sample)
x = x_sample[idx]
n,_,_ = plt.hist(x, bins='auto')
```
```python
```
(Comment on adaptive rejection sampling methods)
___
# Markov Chain Monte Carlo (MCMC): Metropolis-Hastings
Lose independent samples, gain in scalability to higher dimensions
The simplest MCMC algorithm is "Metropolis-Hastings". I'm not going to explain it in detail, but in pseudocode, it looks like this:
- Start from some position in the space of variables you are sampling over, $\theta_0$ with probability $\pi_0$
- Iterate from 1 to $N_{\rm steps}$:
    - Sample an offset $\delta\theta_0$ from some proposal distribution
- Compute a new parameter value using this offset, $\theta_{\rm new} = \theta_0 + \delta\theta_0$
- Evaluate the probability at the new parameter vector, $\pi_{\rm new}$
- Sample a uniform random number, $r \sim \mathcal{U}(0,1)$
- if $\pi_{\rm new}/\pi_0 > 1$ or $\pi_{\rm new}/\pi_0 > r$:
- store $\theta_{\rm new}$
- replace $\theta_0,\pi_0$ with $\theta_{\rm new},\pi_{\rm new}$
- else:
- store $\theta_0$ again
The proposal distribution has to be chosen and tuned by hand. We'll use a spherical / uncorrelated Gaussian distribution with root-variances set by hand.
### Example 1: implement and use Metropolis-Hastings MCMC to sample from the above mixture of Gaussians
```python
from scipy.misc import logsumexp
```
```python
# First, here's a log version of the mixture of Gaussians above that
# is less sensitive to numerical issues
def ln_gaussian(x, mu, var):
return -0.5*np.log(2*np.pi*var) - 0.5*(x-mu)**2/var
def ln_gaussian_mixture(x):
x = np.atleast_1d(x)
# weights
w = np.array([0.5, 0.5])[:,np.newaxis]
X = np.vstack((ln_gaussian(x, 1., 0.25), ln_gaussian(x, 2.5, 1)))
return logsumexp(X, b=w, axis=0)
```
```python
ln_gaussian_mixture(1.)
```
array([-0.76851516])
```python
# We're sampling from a 1D distribution, so we only need to propose in 1 dimension
def sample_proposal(sigma):
return np.random.normal(0., sigma)
```
```python
def run_metropolis_hastings_1d(p0, n_steps, ln_prob_func, proposal_sigma):
"""
Run a Metropolis-Hastings MCMC sampler to generate samples from the input
log-posterior function, starting from some initial parameter vector.
Parameters
----------
p0 : numeric
Initial value.
n_steps : int
Number of steps to run the sampler for.
ln_prob_func : function
        A callable object that takes a parameter vector and computes
        the log of the posterior pdf.
    proposal_sigma : numeric
Standard-deviation passed to the sample_proposal function.
"""
# the objects we'll fill and return:
chain = np.zeros(n_steps) # value at each step
ln_probs = np.zeros(n_steps) # log-probability value at each step
# we'll keep track of how many steps we accept to compute the acceptance fraction
n_accept = 0
# evaluate the log-posterior at the initial position and store starting position in chain
ln_probs[0] = ln_prob_func(p0)
chain[0] = p0
# loop through the number of steps requested and run MCMC
for i in range(1,n_steps):
# proposed new parameters
step = sample_proposal(proposal_sigma)
new_p = chain[i-1] + step
# compute log-posterior at new parameter values
new_ln_prob = ln_prob_func(new_p)
# log of the ratio of the new log-posterior to the previous log-posterior value
ln_prob_ratio = new_ln_prob - ln_probs[i-1]
if (ln_prob_ratio > 0) or (ln_prob_ratio > np.log(np.random.uniform())):
chain[i] = new_p
ln_probs[i] = new_ln_prob
n_accept += 1
else:
chain[i] = chain[i-1]
ln_probs[i] = ln_probs[i-1]
acc_frac = n_accept / n_steps
return chain, ln_probs, acc_frac
```
```python
# Run the MCMC sampler to generate samples from the mixture of Gaussians
```
```python
chain,ln_probs,acc_frac = run_metropolis_hastings_1d(-5.,
10000,
ln_gaussian_mixture,
2.5)
```
```python
acc_frac
```
0.4278
```python
plt.plot(chain[1000:], marker='', drawstyle='steps-mid')
```
```python
plt.hist(chain[1000:], bins='auto');
```
```python
```
| 700f47d13728e3f9715a91bab4e106d159fb38a3 | 509,117 | ipynb | Jupyter Notebook | notebooks/Sampling from probability distributions.ipynb | adrn/AST542 | 3a633cde68235cae95093e9f080dc0f3429705cd | [
"MIT"
] | 17 | 2017-03-30T20:13:38.000Z | 2021-07-12T00:55:13.000Z | notebooks/Sampling from probability distributions.ipynb | adrn/AST542 | 3a633cde68235cae95093e9f080dc0f3429705cd | [
"MIT"
] | null | null | null | notebooks/Sampling from probability distributions.ipynb | adrn/AST542 | 3a633cde68235cae95093e9f080dc0f3429705cd | [
"MIT"
] | 2 | 2021-05-28T15:26:52.000Z | 2021-07-09T15:34:41.000Z | 641.20529 | 332,976 | 0.938154 | true | 2,581 | Qwen/Qwen-72B | 1. YES
2. YES | 0.867036 | 0.899121 | 0.77957 | __label__eng_Latn | 0.902479 | 0.649536 |
<b>Sketch the graph and find an equation of the parabola that satisfies the given conditions.</b>
<b>25. Vertex: $V(4,-3)$, axis parallel to the x-axis, passing through the point $P(2,1)$</b>
<b>Since the parabola's axis is parallel to the $x$-axis, its equation has the form $(y-k)^2 = 2p(x-h)$</b><br><br>
<b>Finding the value of $p$ by substituting the vertex $V(4,-3)$ and the point $P(2,1)$</b><br><br>
$(1-(-3))^2 = 2p(2-4)$<br><br>
$(4)^2 = 2p(-2)$<br><br>
$16 = -4p$<br><br>
<b>Note that $p$ comes out negative, as it must for the parabola to pass through $P(2,1)$, which lies to the left of the vertex</b><br><br>
$p = -4$<br><br>
<b>Substituting into the equation of the parabola</b><br><br>
$(y-(-3))^2 = 2\cdot (-4) \cdot (x-4)$<br><br>
$(y+3)^2 = -8(x-4)$<br><br>
$y^2 + 6y + 9 = -8x + 32$<br><br>
$y^2 + 6y + 8x - 23 = 0$<br><br>
<b>Graph of the parabola</b>
```python
from sympy import *
from sympy.plotting import plot_implicit
x, y = symbols("x y")
plot_implicit(Eq((y+3)**2, -8*(x-4)), (x,-20,20), (y,-20,20),
title=u'Gráfico da parábola', xlabel='x', ylabel='y');
```
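As a quick check (added here, not part of the original solution), we can verify symbolically that both the vertex $V(4,-3)$ and the point $P(2,1)$ satisfy the final equation.
```python
# both substitutions should evaluate to zero
expr = y**2 + 6*y + 8*x - 23
print(expr.subs({x: 4, y: -3}), expr.subs({x: 2, y: 1}))
```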
| 5d77878cc93d8801d2f4befa1eba77f2ccba0c40 | 14,612 | ipynb | Jupyter Notebook | Problemas Propostos. Pag. 172 - 175/25.ipynb | mateuschaves/GEOMETRIA-ANALITICA | bc47ece7ebab154e2894226c6d939b7e7f332878 | [
"MIT"
] | 1 | 2020-02-03T16:40:45.000Z | 2020-02-03T16:40:45.000Z | Problemas Propostos. Pag. 172 - 175/25.ipynb | mateuschaves/GEOMETRIA-ANALITICA | bc47ece7ebab154e2894226c6d939b7e7f332878 | [
"MIT"
] | null | null | null | Problemas Propostos. Pag. 172 - 175/25.ipynb | mateuschaves/GEOMETRIA-ANALITICA | bc47ece7ebab154e2894226c6d939b7e7f332878 | [
"MIT"
] | null | null | null | 176.048193 | 12,512 | 0.897208 | true | 453 | Qwen/Qwen-72B | 1. YES
2. YES | 0.948155 | 0.859664 | 0.815094 | __label__por_Latn | 0.977394 | 0.732069 |
# Showcase
This notebook shows general features of the interface. For examples please see:
1. [Quantum Stadium](examples/stadium.ipynb)
2. [Edge states in HgTe](examples/qsh.ipynb)
```python
import sympy
sympy.init_printing(use_latex='mathjax')
```
# Import discretizer
```python
from discretizer import Discretizer
from discretizer import momentum_operators
from discretizer import coordinates
```
```python
kx, ky, kz = momentum_operators
x, y, z = coordinates
A, B, C = sympy.symbols('A B C', commutative=False)
hamiltonian = sympy.Matrix([[kx * A(x) * kx, B(x,y)*kx], [kx*B(x,y), C*ky**2]],)
```
```python
hamiltonian
```
$$\left[\begin{matrix}k_{x} A{\left (x \right )} k_{x} & B{\left (x,y \right )} k_{x}\\k_{x} B{\left (x,y \right )} & C k_{y}^{2}\end{matrix}\right]$$
# class interface
```python
tb = Discretizer(hamiltonian, discrete_coordinates={'x', 'y'}, lattice_constant=2.0, verbose=True)
```
Discrete coordinates set to: ['x', 'y']
Function generated for (0, 1):
def _anonymous_func(site1, site2, p):
(x, y, ) = site2.pos
C = p.C
return (np.array([[0, 0], [0, -0.25*C]]))
Function generated for (1, 0):
def _anonymous_func(site1, site2, p):
(x, y, ) = site2.pos
A, B = p.A, p.B
return (np.array([[-0.25*A(1.0 + x), 0.25*1.j*B(2.0 + x, y)], [0.25*1.j*B(x, y), 0]]))
Function generated for (0, 0):
def _anonymous_func(site, p):
(x, y, ) = site.pos
C = p.C
A = p.A
return (np.array([[0.25*A(-1.0 + x) + 0.25*A(1.0 + x), 0], [0, 0.5*C]]))
```python
tb.input_hamiltonian
```
$$\left[\begin{matrix}k_{x} A{\left (x \right )} k_{x} & B{\left (x,y \right )} k_{x}\\k_{x} B{\left (x,y \right )} & C k_{y}^{2}\end{matrix}\right]$$
```python
tb.symbolic_hamiltonian
```
$$\left \{ \left ( 0, \quad 0\right ) : \left[\begin{matrix}\frac{1}{a^{2}} A{\left (- \frac{a}{2} + x \right )} + \frac{1}{a^{2}} A{\left (\frac{a}{2} + x \right )} & 0\\0 & \frac{2 C}{a^{2}}\end{matrix}\right], \quad \left ( 0, \quad 1\right ) : \left[\begin{matrix}0 & 0\\0 & - \frac{C}{a^{2}}\end{matrix}\right], \quad \left ( 1, \quad 0\right ) : \left[\begin{matrix}- \frac{1}{a^{2}} A{\left (\frac{a}{2} + x \right )} & \frac{i}{2 a} B{\left (a + x,y \right )}\\\frac{i}{2 a} B{\left (x,y \right )} & 0\end{matrix}\right]\right \}$$
```python
tb.lattice
```
kwant.lattice.Monatomic([[2.0, 0.0], [0.0, 2.0]], [0.0, 0.0], '')
```python
tb.onsite, tb.hoppings
```
(<function _anonymous_func>,
{HoppingKind((0, 1), kwant.lattice.Monatomic([[2.0, 0.0], [0.0, 2.0]], [0.0, 0.0], '')): <function _anonymous_func>,
HoppingKind((1, 0), kwant.lattice.Monatomic([[2.0, 0.0], [0.0, 2.0]], [0.0, 0.0], '')): <function _anonymous_func>})
| 872d508d0c8bada97d533e5dac57d29ebd63df45 | 7,740 | ipynb | Jupyter Notebook | examples/showcase.ipynb | basnijholt/discretizer | 0866107994282c39fd84d712373e7e55fe15cfa2 | [
"BSD-2-Clause"
] | null | null | null | examples/showcase.ipynb | basnijholt/discretizer | 0866107994282c39fd84d712373e7e55fe15cfa2 | [
"BSD-2-Clause"
] | null | null | null | examples/showcase.ipynb | basnijholt/discretizer | 0866107994282c39fd84d712373e7e55fe15cfa2 | [
"BSD-2-Clause"
] | 1 | 2020-04-05T03:08:37.000Z | 2020-04-05T03:08:37.000Z | 26.597938 | 601 | 0.391473 | true | 1,087 | Qwen/Qwen-72B | 1. YES
2. YES | 0.894789 | 0.731059 | 0.654144 | __label__eng_Latn | 0.344946 | 0.358125 |
## Single Index Quantile Regression
Author: @Suoer Xu (Supervised by Prof. J. Zhang)
August 18th, 2019
This is a tutorial on how to use the Single Index Quantile Regression model package. The package almost identically replicates the profile optimization discussed in Ma and He (2016). See the paper at
> https://pdfs.semanticscholar.org/9324/e31866435d446f147320f80acde682e8e614.pdf
Environment: Python 3
Package requirements: Numpy 1.16.3, Pandas 0.22.0, Scipy 1.3.1, Matplotlib 3.1.1.
### 1. Generate B-spline
According to Ma and He (2016, p4) and de Boor (2001), the nonparametric single index function $G_{\tau}(\cdot)$ can be approximated well
by a spline function such that $G_{\tau}(\cdot) \approx B(\cdot)^T\theta_{\tau}$, where $B(\cdot)$ is the vector of basis splines for a given degree, smoothness, and domain partition.
**Part I** provides Python code to generate the B-splines for any given interval and knots. The construction of B-splines follows the Cox-de Boor recursion formula. See the description at
> https://en.wikipedia.org/wiki/B-spline
\begin{align}
B_{i,1}(x) &= \left\{
\begin{array}{rl}
1 & \text{if } t_i \le x < t_{i+1} \\
0 & \text{if otherwise}.
\end{array} \right. \\
B_{i,k+1}(x) &= \dfrac{x-t_i}{t_{i+k}-t_i}B_{i,k}(x)+\dfrac{t_{i+k+1}-x}{t_{i+k+1}-t_{i+1}}B_{i+1,k}(x)
\end{align}
The derivatives which might be used latter in the sensitivity analysis can be easily put in
\begin{align}
B'_{i,k+1}(x) = \dfrac{1}{t_{i+k}-t_i}B_{i,k}(x)+\dfrac{x-t_i}{t_{i+k}-t_i}B'_{i,k}(x)+\dfrac{-1}{t_{i+k+1}-t_{i+1}}B_{i+1,k}(x)+\dfrac{t_{i+k+1}-x}{t_{i+k+1}-t_{i+1}}B'_{i+1,k}(x)
\end{align}
```python
import numpy as np; import pandas as pd
from pandas import DataFrame
import matplotlib.pyplot as plt
from scipy.optimize import minimize, Bounds, LinearConstraint, NonlinearConstraint, BFGS
```
```python
def indicator(x,a,b,right_close=False):
if not right_close:
if (x >= a) & (x < b):
return 1
else:
return 0
else:
if (x >= a) & (x <= b):
return 1
else:
return 0
def I(a,b,right_close=False):
'''
return indicator function for a <= x < (or <=) b
'''
def inter(x):
return indicator(x,a,b,right_close)
return inter
def add(I1,I2):
'''
define the addition
'''
def inter(x):
return I1(x) + I2(x)
return inter
def mult(I1,I2):
'''
define the multiplication
'''
def inter(x):
return I1(x)*I2(x)
return inter
def scalar(I,alpha):
'''
define the scalar multiplication
'''
def inter(x):
return alpha*I(x)
return inter
def f(x, t_1, t_2, x_large = True):
if t_1 != t_2:
if x_large:
return (x - t_1)/(t_2 - t_1)
else:
return (t_2 - x)/(t_2 - t_1)
else:
return 0
def recur(t_1, t_2, x_large = True):
'''
return the recursion polynomial in the Cox-de Boor's algorithm
'''
def inter(x):
return f(x, t_1, t_2, x_large)
return inter
```
```python
def partition(a,b,N):
'''
interval [a,b] is evenly partitioned into a = t_0 < t_1 < ... < t_N < b = t_N+1
return the knots [t_0, t_1, ..., t_N+1]
'''
h = (b - a)/(N + 1)
return [a + i*h for i in range(0,N+2)]
def extend(t, p):
'''
    extend the original t of length N+1 into a dictionary of length N+1+2p, convenient for the de Boor algorithm
p is the final degree of the polynomials, i.e., p = m - 1 where m is the order of B-splines
'''
dic = {}
N = len(t) - 1
for i in range(p):
dic[i-p] = t[0]
for i in range(N+1):
dic[i] = t[i]
for i in range(p):
dic[N+i+1] = t[N]
return dic
```
```python
def deBoor(a, b, m, N, deri = False):
'''
a, b : the infimum and supremum , or minimum and maximum, of the scalar product <X, \beta>
m : the order of B-spline (>= 2)
N : the number of partition, i.e., [t0(=a), t1], [t1, t2], ... , [tN, tN+1(=b)]
deri : when True, return the derivatives. Default is False
'''
    # the choice of N follows the implementation in Ma and He (2016, p9)
p = m - 1
t = partition(a,b,N)
t = extend(t,p)
if not deri:
B_k_1 = {}
for i in range(-p, N + p + 1) :
B_k_1[i] = I(t[i],t[i+1])
for k in range(1, p + 1):
B_k_0 = B_k_1
B_k_1 = {}
for i in range(-p, N + p + 1 - k):
recursion0 = mult( B_k_0[i] , recur(t[i], t[i+k], True) )
recursion1 = mult( B_k_0[i+1] , recur(t[i+1], t[i+k+1], False) )
B_k_1[i] = add(recursion0, recursion1)
return B_k_1
else:
B_k_1 = {}
b_k_1 = {}
for i in range(-p, N + p + 1) :
B_k_1[i] = I(t[i],t[i+1])
b_k_1[i] = I(0.,0.)
for k in range(1, p + 1):
B_k_0 = B_k_1
b_k_0 = b_k_1
B_k_1 = {}
b_k_1 = {}
for i in range(-p, N + p + 1 - k):
recursion0 = mult( B_k_0[i] , recur(t[i], t[i+k], True) )
recursion1 = mult( B_k_0[i+1] , recur(t[i+1], t[i+k+1], False) )
B_k_1[i] = add(recursion0, recursion1)
deri1 = mult( b_k_0[i] , recur(t[i], t[i+k], True) )
deri2 = mult( b_k_0[i+1] , recur(t[i+1], t[i+k+1], False) )
deri3 = scalar( B_k_0[i] , recur(t[i], t[i+k], True)(t[i]+1) )
deri4 = scalar( B_k_0[i+1] , recur(t[i+1], t[i+k+1], False)(t[i+k+1]+1) )
b_k_1[i] = add( add(deri1,deri2) , add(deri3,deri4) )
return B_k_1, b_k_1
```
```python
# an example is provided
a, b, m, N = 0, 12, 4, 3
```
```python
B_spline, b_deri = deBoor(a, b, m, N, True)
B_spline
```
{-3: <function __main__.add.<locals>.inter(x)>,
-2: <function __main__.add.<locals>.inter(x)>,
-1: <function __main__.add.<locals>.inter(x)>,
0: <function __main__.add.<locals>.inter(x)>,
1: <function __main__.add.<locals>.inter(x)>,
2: <function __main__.add.<locals>.inter(x)>,
3: <function __main__.add.<locals>.inter(x)>}
```python
plt.figure(figsize=(20,16))
for i in list(B_spline.keys()):
plt.subplot(3,3,i - list(B_spline.keys())[0] + 1)
X = np.arange(0,12,0.05)
Y = [B_spline[i](j) for j in X]
l = 'B(' + str(i) + ',' + str(m) + ')'
plt.plot(X,Y,label=l)
plt.legend()
plt.show()
```
```python
plt.figure(figsize=(20,16))
for i in list(b_deri.keys()):
plt.subplot(3,3,i - list(b_deri.keys())[0] + 1)
X = np.arange(0,12,0.05)
Y = [b_deri[i](j) for j in X]
l = 'b(' + str(i) + ',' + str(m) + ')'
plt.plot(X,Y,label=l)
plt.legend()
plt.show()
```
```python
# sanity check: the sum of B-splines should be 1 over the domain
ss = lambda x : 0
for i in list(B_spline.keys()):
ss = add(ss,B_spline[i])
x = np.arange(0,12,0.05)
y = [ss(j) for j in x]
plt.figure(figsize=(4,4))
plt.plot(x,y)
plt.show()
```
```python
```
### 2. Determine the infimum and supremum of $x^T\beta$
Y:
+ the log return of 000001.SZ
X:
+ the log return of other main commercial banks (listed before 2011)
+ 000001.SZ specific characteristics
+ macro state variables
```python
data1 = pd.read_csv('results/log_return.csv').dropna()
data2 = pd.read_csv('results/000001_specific.csv').dropna()
data3 = pd.read_csv('results/macro_state.csv').dropna()
```
```python
X = pd.concat([data1[data1.columns[2:]], data2[data2.columns[1:]], data3[data3.columns[1:]]], axis = 1)
Y = data1[data1.columns[1]]
X = np.array(X)
Y = np.array(Y)
```
```python
# sanity check
print(X.shape)
print(Y.shape)
```
```python
def u(x):
def inter(beta):
return np.dot(x,beta)/np.sqrt((beta**2).sum())
return inter
def v(x):
def inter(beta):
return -1. * np.dot(x,beta)/np.sqrt((beta**2).sum())
return inter
def min_max(x, min_ = True):
d = len(x)
beta0 = np.ones(d)
# define the linear constraint beta_0 > 0
ub = np.ones(d)*np.inf
lb = - np.ones(d)*np.inf
lb[0] = 0.
bou = Bounds(lb, ub)
if min_:
res = minimize(u(x), beta0, method='L-BFGS-B',bounds = bou)
else:
res = minimize(v(x), beta0, method='L-BFGS-B',bounds = bou)
return u(x)(res.x)
def inf_sup(X):
n = X.shape[0]
d = X.shape[1]
inf, sup = [], []
for i in range(n):
inf = inf + [min_max(X[i], min_ = True)]
sup = sup + [min_max(X[i], min_ = False)]
return np.array(inf).min(),np.array(sup).max()
```
```python
a, b = inf_sup(X)
```
```python
print(a,b)
```
### 3. Define the loss function
```python
n = X.shape[0]
m = 4
N = round(n**(1/(2*m+1))) + 1
dB = deBoor(a, b, m, N)
B = [i for i in dB.values()]
tau = 0.95
```
```python
def linear(B,theta):
'''
B : list of basis splines, dimension J = N + m
theta : control points of basis splines, (J,) array
'''
J = len(theta)
lin = scalar(B[0],theta[0])
for i in range(1,J):
lin = add(lin, scalar(B[i],theta[i]))
return lin
def rho(s,tau):
'''
define the pinball loss
'''
if s >= 0:
return tau*s
if s < 0:
return (tau - 1)*s
def SIQ_loss(X,Y,beta,B,theta,tau):
'''
X : sample input, (n, d) array
Y : sample output, (n,) array
beta : index, (d,) array
B : list of basis splines, dimension J = N + m
theta : control points of basis splines, (J,) array
tau : quantile to be estimated
'''
n = X.shape[0]
L = 0.
for i in range(n):
lin = linear(B, theta)
s = Y[i] - lin( u(X[i])(beta) )
L += rho(s,tau)
return L/n
```
### 4. Optimization for nonparametric function $G(\cdot)$ given index $\beta$
```python
def loss_on_theta(X,Y,beta,B,tau):
def inter(theta):
return SIQ_loss(X,Y,beta,B,theta,tau)
return inter
def theta_on_others(X,Y,beta,B,tau,theta0):
J = len(B)
res = minimize(loss_on_theta(X,Y,beta,B,tau), theta0, method='BFGS')
return res.x
```
### 5. Optimization for Index $\beta$
```python
def loss_on_beta_(X,Y,B,theta,tau):
def inter(beta):
return SIQ_loss(X,Y,beta,B,theta,tau)
return inter
def beta_on_others_(X,Y,B,theta,tau,beta0):
d = X.shape[1]
# define the linear constraint beta_0 > 0
ub = np.ones(d)*np.inf
lb = - np.ones(d)*np.inf
lb[0] = 0.
bou = Bounds(lb, ub)
res = minimize(loss_on_beta_(X,Y,B,theta,tau), beta0, method='L-BFGS-B',bounds = bou)
return res.x
```
### 4*. Optimization for both $\beta$ and $G(\cdot)$
```python
```
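A minimal sketch of how this step could be filled in (an illustrative alternating scheme built from the two profile steps defined above, not necessarily the exact algorithm of Ma and He 2016): alternate between updating $\theta$ given $\beta$ and updating $\beta$ given $\theta$, renormalizing the index after each pass.
```python
def alternate_optimization(X, Y, B, tau, n_iter=5):
    '''
    Alternate the two profile optimizations defined above.
    The starting values (ones for beta, zeros for theta) are arbitrary choices.
    '''
    d = X.shape[1]
    J = len(B)
    beta = np.ones(d)
    theta = np.zeros(J)
    for _ in range(n_iter):
        theta = theta_on_others(X, Y, beta, B, tau, theta)
        beta = beta_on_others_(X, Y, B, theta, tau, beta)
        beta = beta / np.sqrt((beta**2).sum())  # keep the index on the unit sphere
    return beta, theta

# beta, theta = alternate_optimization(X, Y, B, tau)  # slow: each pass runs several optimizations
```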
### 6. Selecting the number of knots for $G(\cdot)$ via BIC
\begin{align}
BIC(N_n) = \log\{\text{Loss}\} + \dfrac{\log n}{2n}(N_n+m)
\end{align}
```python
record = {}
for N_n in range(2,2*N):
dB_n = deBoor(a, b, m, N_n)
B_n = [i for i in dB_n.values()]
theta_n = theta_on_others(X,Y,beta,B_n,tau)
BIC_n = np.log(SIQ_loss(X,Y,beta,B_n,theta_n,tau)) + np.log(n)/(2*n)*(N_n+m)
record[N_n] = [theta, BIC_n]
```
```python
record_df = DataFrame(record, index=['theta','BIC']).T
record_df
```
| 3aa276d55755dbe5c21989d67575c1287a93c1b3 | 192,144 | ipynb | Jupyter Notebook | Single Index Quantile Regression.ipynb | Topaceminem/SIQ | e20dce1cbae7fb253fbf6c75160f0eba09c6fd8a | [
"Apache-2.0"
] | null | null | null | Single Index Quantile Regression.ipynb | Topaceminem/SIQ | e20dce1cbae7fb253fbf6c75160f0eba09c6fd8a | [
"Apache-2.0"
] | null | null | null | Single Index Quantile Regression.ipynb | Topaceminem/SIQ | e20dce1cbae7fb253fbf6c75160f0eba09c6fd8a | [
"Apache-2.0"
] | null | null | null | 280.912281 | 84,808 | 0.918759 | true | 3,811 | Qwen/Qwen-72B | 1. YES
2. YES | 0.782662 | 0.766294 | 0.599749 | __label__eng_Latn | 0.546431 | 0.231749 |
# Chapter 4
______
## The greatest theorem never told
This chapter focuses on an idea that is always bouncing around our minds, but is rarely made explicit outside books devoted to statistics. In fact, we've been using this simple idea in every example thus far.
### The Law of Large Numbers
Let $Z_i$ be $N$ independent samples from some probability distribution. According to *the Law of Large numbers*, so long as the expected value $E[Z]$ is finite, the following holds,
$$\frac{1}{N} \sum_{i=1}^N Z_i \rightarrow E[ Z ], \;\;\; N \rightarrow \infty.$$
In words:
> The average of a sequence of random variables from the same distribution converges to the expected value of that distribution.
This may seem like a boring result, but it will be the most useful tool you use.
### Intuition
If the above Law is somewhat surprising, it can be made clearer by examining a simple example.
Consider a random variable $Z$ that can take only two values, $c_1$ and $c_2$. Suppose we have a large number of samples of $Z$, denoting a specific sample $Z_i$. The Law says that we can approximate the expected value of $Z$ by averaging over all samples. Consider the average:
$$ \frac{1}{N} \sum_{i=1}^N \;Z_i $$
By construction, $Z_i$ can only take on $c_1$ or $c_2$, hence we can partition the sum over these two values:
\begin{align}
\frac{1}{N} \sum_{i=1}^N \;Z_i
& =\frac{1}{N} \big( \sum_{ Z_i = c_1}c_1 + \sum_{Z_i=c_2}c_2 \big) \\\\[5pt]
& = c_1 \sum_{ Z_i = c_1}\frac{1}{N} + c_2 \sum_{ Z_i = c_2}\frac{1}{N} \\\\[5pt]
& = c_1 \times \text{ (approximate frequency of $c_1$) } \\\\
& \;\;\;\;\;\;\;\;\; + c_2 \times \text{ (approximate frequency of $c_2$) } \\\\[5pt]
& \approx c_1 \times P(Z = c_1) + c_2 \times P(Z = c_2 ) \\\\[5pt]
& = E[Z]
\end{align}
Equality holds in the limit, but we can get closer and closer by using more and more samples in the average. This Law holds for almost *any distribution*, minus some important cases we will encounter later.
##### Example
____
Below is a diagram of the Law of Large numbers in action for three different sequences of Poisson random variables.
We sample `sample_size = 100000` Poisson random variables with parameter $\lambda = 4.5$. (Recall the expected value of a Poisson random variable is equal to its parameter.) We calculate the average for the first $n$ samples, for $n=1$ to `sample_size`.
```python
%matplotlib inline
import numpy as np
from IPython.core.pylabtools import figsize
import matplotlib.pyplot as plt
figsize(12.5, 5)
import pymc as pm
sample_size = 100000
expected_value = lambda_ = 4.5
poi = pm.rpoisson
N_samples = range(1, sample_size, 100)
for k in range(3):
samples = poi(lambda_, size=sample_size)
partial_average = [samples[:i].mean() for i in N_samples]
plt.plot(N_samples, partial_average, lw=1.5, label="average \
of $n$ samples; seq. %d" % k)
plt.plot(N_samples, expected_value * np.ones_like(partial_average),
ls="--", label="true expected value", c="k")
plt.ylim(4.35, 4.65)
plt.title("Convergence of the average of \n random variables to its \
expected value")
plt.ylabel("average of $n$ samples")
plt.xlabel("# of samples, $n$")
plt.legend();
```
Looking at the above plot, it is clear that when the sample size is small, there is greater variation in the average (compare how *jagged and jumpy* the average is initially, then *smooths* out). All three paths *approach* the value 4.5, but just flirt with it as $N$ gets large. Mathematicians and statisticians have another name for *flirting*: convergence.
Another very relevant question we can ask is *how quickly am I converging to the expected value?* Let's plot something new. For a specific $N$, let's do the above trials thousands of times and compute how far away we are from the true expected value, on average. But wait — *compute on average*? This is simply the law of large numbers again! For example, we are interested in, for a specific $N$, the quantity:
$$D(N) = \sqrt{ \;E\left[\;\; \left( \frac{1}{N}\sum_{i=1}^NZ_i - 4.5 \;\right)^2 \;\;\right] \;\;}$$
The above formulae is interpretable as a distance away from the true value (on average), for some $N$. (We take the square root so the dimensions of the above quantity and our random variables are the same). As the above is an expected value, it can be approximated using the law of large numbers: instead of averaging $Z_i$, we calculate the following multiple times and average them:
$$ Y_k = \left( \;\frac{1}{N}\sum_{i=1}^NZ_i - 4.5 \; \right)^2 $$
By computing the above many, $N_y$, times (remember, it is random), and averaging them:
$$ \frac{1}{N_Y} \sum_{k=1}^{N_Y} Y_k \rightarrow E[ Y_k ] = E\;\left[\;\; \left( \frac{1}{N}\sum_{i=1}^NZ_i - 4.5 \;\right)^2 \right]$$
Finally, taking the square root:
$$ \sqrt{\frac{1}{N_Y} \sum_{k=1}^{N_Y} Y_k} \approx D(N) $$
```python
figsize(12.5, 4)
N_Y = 250 # use this many to approximate D(N)
N_array = np.arange(1000, 50000, 2500) # use this many samples in the approx. to the variance.
D_N_results = np.zeros(len(N_array))
lambda_ = 4.5
expected_value = lambda_ # for X ~ Poi(lambda) , E[ X ] = lambda
def D_N(n):
"""
This function approx. D_n, the average variance of using n samples.
"""
Z = poi(lambda_, size=(n, N_Y))
average_Z = Z.mean(axis=0)
return np.sqrt(((average_Z - expected_value) ** 2).mean())
for i, n in enumerate(N_array):
D_N_results[i] = D_N(n)
plt.xlabel("$N$")
plt.ylabel("expected squared-distance from true value")
plt.plot(N_array, D_N_results, lw=3,
label="expected distance between\n\
expected value and \naverage of $N$ random variables.")
plt.plot(N_array, np.sqrt(expected_value) / np.sqrt(N_array), lw=2, ls="--",
label=r"$\frac{\sqrt{\lambda}}{\sqrt{N}}$")
plt.legend()
plt.title("How 'fast' is the sample average converging? ");
```
As expected, the expected distance between our sample average and the actual expected value shrinks as $N$ grows large. But also notice that the *rate* of convergence decreases, that is, we need only 10 000 additional samples to move from 0.020 to 0.015, a difference of 0.005, but *20 000* more samples to again decrease from 0.015 to 0.010, again only a 0.005 decrease.
It turns out we can measure this rate of convergence. Above I have plotted a second line, the function $\sqrt{\lambda}/\sqrt{N}$. This was not chosen arbitrarily. In most cases, given a sequence of random variables distributed like $Z$, the rate of convergence to $E[Z]$ of the Law of Large Numbers is
$$ \frac{ \sqrt{ \; Var(Z) \; } }{\sqrt{N} }$$
This is useful to know: for a given large $N$, we know (on average) how far away we are from the estimate. On the other hand, in a Bayesian setting, this can seem like a useless result: Bayesian analysis is OK with uncertainty so what's the *statistical* point of adding extra precise digits? Though drawing samples can be so computationally cheap that having a *larger* $N$ is fine too.
### How do we compute $Var(Z)$ though?
The variance is simply another expected value that can be approximated! Consider the following, once we have the expected value (by using the Law of Large Numbers to estimate it, denote it $\mu$), we can estimate the variance:
$$ \frac{1}{N}\sum_{i=1}^N \;(Z_i - \mu)^2 \rightarrow E[ \;( Z - \mu)^2 \;] = Var( Z )$$
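For example (a quick check, not in the original text), we can estimate the variance of the same Poisson samples this way and compare it to the known value $Var(Z) = \lambda = 4.5$:
```python
samples = poi(lambda_, size=sample_size)
mu_hat = samples.mean()                     # Law of Large Numbers estimate of E[Z]
var_hat = ((samples - mu_hat) ** 2).mean()  # Law of Large Numbers estimate of Var(Z)
print(mu_hat, var_hat)                      # both should be close to 4.5
```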
### Expected values and probabilities
There is an even less explicit relationship between expected value and estimating probabilities. Define the *indicator function*
$$\mathbb{1}_A(x) =
\begin{cases} 1 & x \in A \\\\
0 & else
\end{cases}
$$
Then, by the law of large numbers, if we have many samples $X_i$, we can estimate the probability of an event $A$, denoted $P(A)$, by:
$$ \frac{1}{N} \sum_{i=1}^N \mathbb{1}_A(X_i) \rightarrow E[\mathbb{1}_A(X)] = P(A) $$
Again, this is fairly obvious after a moment's thought: the indicator function is only 1 if the event occurs, so we are summing only the times the event occurs and dividing by the total number of trials (consider how we usually approximate probabilities using frequencies). For example, suppose we wish to estimate the probability that a $Z \sim Exp(.5)$ is greater than 10, and we have many samples from an $Exp(.5)$ distribution.
$$ P( Z > 10 ) = \frac{1}{N} \sum_{i=1}^N \mathbb{1}_{z > 10 }(Z_i) $$
```python
import pymc as pm
N = 10000
print(np.mean([pm.rexponential(0.5) > 10 for i in range(N)]))
```
0.0061
### What does this all have to do with Bayesian statistics?
*Point estimates*, to be introduced in the next chapter, are computed in Bayesian inference using expected values. In more analytical Bayesian inference, we would have been required to evaluate complicated expected values represented as multi-dimensional integrals. No longer. If we can sample from the posterior distribution directly, we simply need to evaluate averages. Much easier. If accuracy is a priority, plots like the ones above show how fast you are converging. And if further accuracy is desired, just take more samples from the posterior.
When is enough enough? When can you stop drawing samples from the posterior? That is the practitioner's decision, and also dependent on the variance of the samples (recall from above that a high variance means the average will converge slower).
We also should understand when the Law of Large Numbers fails. As the name implies, and comparing the graphs above for small $N$, the Law is only true for large sample sizes. Without this, the asymptotic result is not reliable. Knowing in what situations the Law fails can give us *confidence in how unconfident we should be*. The next section deals with this issue.
## The Disorder of Small Numbers
The Law of Large Numbers is only valid as $N$ gets *infinitely* large: never truly attainable. While the law is a powerful tool, it is foolhardy to apply it liberally. Our next example illustrates this.
##### Example: Aggregated geographic data
Often data comes in aggregated form. For instance, data may be grouped by state, county, or city level. Of course, the population numbers vary per geographic area. If the data is an average of some characteristic of each of the geographic areas, we must be conscious of the Law of Large Numbers and how it can *fail* for areas with small populations.
We will observe this on a toy dataset. Suppose there are five thousand counties in our dataset. Furthermore, the population of each county is uniformly distributed between 100 and 1500. The way the population numbers are generated is irrelevant to the discussion, so we do not justify this. We are interested in measuring the average height of individuals per county. Unbeknownst to us, height does **not** vary across counties, and each individual, regardless of the county he or she is currently living in, has the same distribution of what their height may be:
$$ \text{height} \sim \text{Normal}(150, 15 ) $$
We aggregate the individuals at the county level, so we only have data for the *average in the county*. What might our dataset look like?
```python
figsize(12.5, 4)
std_height = 15
mean_height = 150
n_counties = 5000
pop_generator = pm.rdiscrete_uniform
norm = pm.rnormal
# generate some artificial population numbers
population = pop_generator(100, 1500, size=n_counties)
average_across_county = np.zeros(n_counties)
for i in range(n_counties):
# generate some individuals and take the mean
average_across_county[i] = norm(mean_height, 1. / std_height ** 2,
size=population[i]).mean()
# locate the counties with the apparently most extreme average heights.
i_min = np.argmin(average_across_county)
i_max = np.argmax(average_across_county)
# plot population size vs. recorded average
plt.scatter(population, average_across_county, alpha=0.5, c="#7A68A6")
plt.scatter([population[i_min], population[i_max]],
[average_across_county[i_min], average_across_county[i_max]],
s=60, marker="o", facecolors="none",
edgecolors="#A60628", linewidths=1.5,
label="extreme heights")
plt.xlim(100, 1500)
plt.title("Average height vs. County Population")
plt.xlabel("County Population")
plt.ylabel("Average height in county")
plt.plot([100, 1500], [150, 150], color="k", label="true expected \
height", ls="--")
plt.legend(scatterpoints=1);
```
What do we observe? *Without accounting for population sizes* we run the risk of making an enormous inference error: if we ignored population size, we would say that the counties with the shortest and tallest individuals have been correctly circled. But this inference is wrong for the following reason. These two counties do *not* necessarily have the most extreme heights. The error results from the calculated average of smaller populations not being a good reflection of the true expected value of the population (which in truth should be $\mu =150$). The sample size/population size/$N$, whatever you wish to call it, is simply too small to invoke the Law of Large Numbers effectively.
We provide more damning evidence against this inference. Recall the population numbers were uniformly distributed over 100 to 1500. Our intuition should tell us that the population sizes of the counties with the most extreme heights should also be uniformly spread over 100 to 1500 — extreme heights should be independent of a county's population. Not so. Below are the population sizes of the counties with the most extreme heights.
```python
print("Population sizes of 10 'shortest' counties: ")
print(population[np.argsort(average_across_county)[:10]])
print("\nPopulation sizes of 10 'tallest' counties: ")
print(population[np.argsort(-average_across_county)[:10]])
```
Population sizes of 10 'shortest' counties:
[100 103 138 182 194 100 118 161 156 186]
Population sizes of 10 'tallest' counties:
[100 147 132 193 270 130 414 101 150 109]
Not at all uniform over 100 to 1500. This is an absolute failure of the Law of Large Numbers.
##### Example: Kaggle's *U.S. Census Return Rate Challenge*
Below is data from the 2010 US census, which partitions populations beyond counties to the level of block groups (which are aggregates of city blocks or equivalents). The dataset is from a Kaggle machine learning competition some colleagues and I participated in. The objective was to predict the census letter mail-back rate of a block group, measured between 0 and 100, using census variables (median income, number of females in the block group, number of trailer parks, average number of children, etc.). Below we plot the census mail-back rate versus block group population:
```python
figsize(12.5, 6.5)
data = np.genfromtxt("./data/census_data.csv", skip_header=1,
delimiter=",")
plt.scatter(data[:, 1], data[:, 0], alpha=0.5, c="#7A68A6")
plt.title("Census mail-back rate vs Population")
plt.ylabel("Mail-back rate")
plt.xlabel("population of block-group")
plt.xlim(-100, 15e3)
plt.ylim(-5, 105)
i_min = np.argmin(data[:, 0])
i_max = np.argmax(data[:, 0])
plt.scatter([data[i_min, 1], data[i_max, 1]],
[data[i_min, 0], data[i_max, 0]],
s=60, marker="o", facecolors="none",
edgecolors="#A60628", linewidths=1.5,
label="most extreme points")
plt.legend(scatterpoints=1);
```
The above is a classic phenomenon in statistics. I say *classic* referring to the "shape" of the scatter plot above. It follows a classic triangular form that tightens as we increase the sample size (as the Law of Large Numbers becomes more exact).
I am perhaps overstressing the point and maybe I should have titled the book *"You don't have big data problems!"*, but here again is an example of the trouble with *small datasets*, not big ones. Simply, small datasets cannot be processed using the Law of Large Numbers, whereas the Law applies without hassle to big datasets (e.g. big data). I mentioned earlier that paradoxically big data prediction problems are solved by relatively simple algorithms. The paradox is partially resolved by understanding that the Law of Large Numbers creates solutions that are *stable*, i.e. adding or subtracting a few data points will not affect the solution much. On the other hand, adding or removing data points to a small dataset can create very different results.
For further reading on the hidden dangers of the Law of Large Numbers, I would highly recommend the excellent manuscript [The Most Dangerous Equation](http://nsm.uh.edu/~dgraur/niv/TheMostDangerousEquation.pdf).
##### Example: How to order Reddit submissions
You may have disagreed with the original statement that the Law of Large Numbers is known to everyone, but only implicitly in our subconscious decision making. Consider ratings on online products: how often do you trust an average 5-star rating if there is only 1 reviewer? 2 reviewers? 3 reviewers? We implicitly understand that with so few reviewers the average rating is **not** a good reflection of the true value of the product.
This has created flaws in how we sort items, and more generally, how we compare items. Many people have realized that sorting online search results by their rating, whether the objects be books, videos, or online comments, returns poor results. Often the seemingly top videos or comments have perfect ratings only from a few enthusiastic fans, and genuinely higher-quality videos or comments are hidden in later pages with *falsely-substandard* ratings of around 4.8. How can we correct this?
Consider the popular site Reddit (I purposefully did not link to the website as you would never come back). The site hosts links to stories or images, and a very popular part of the site are the comments associated with each link. Redditors can vote up or down on each submission (called upvotes and downvotes). Reddit, by default, will sort submissions to a given subreddit by Hot, that is, the submissions that have the most upvotes recently.
How would you determine which submissions are the best? There are a number of ways to achieve this:
1. *Popularity*: A submission is considered good if it has many upvotes. A problem with this model is a submission with hundreds of upvotes but thousands of downvotes: while very popular, it is likely more controversial than good.
2. *Difference*: Use the *difference* of upvotes and downvotes. This solves the above problem, but fails when we consider the temporal nature of submissions. Depending on when a submission is posted, the website may be experiencing high or low traffic. The difference method will bias the Top submissions to be those made during high traffic periods, which have accumulated more upvotes than submissions that were not so graced, but are not necessarily the best.
3. *Time adjusted*: Consider using Difference divided by the age of the submission. This creates a *rate*, something like *difference per second*, or *per minute*. An immediate counter-example is, if we use per second, a 1 second old submission with 1 upvote would be better than a 100 second old submission with 99 upvotes. One can avoid this by only considering submissions that are at least t seconds old. But what is a good t value? Does this mean no submission younger than t is good? We end up comparing unstable quantities with stable quantities (young vs. old submissions).
4. *Ratio*: Rank submissions by the ratio of upvotes to total number of votes (upvotes plus downvotes). This solves the temporal issue, such that new submissions that score well can be considered Top just as likely as older submissions, provided they have many upvotes relative to total votes. The problem here is that a submission with a single upvote (ratio = 1.0) will beat a submission with 999 upvotes and 1 downvote (ratio = 0.999), but clearly the latter submission is *more likely* to be better.
I used the phrase *more likely* for good reason. It is possible that the former submission, with a single upvote, is in fact a better submission than the latter with 999 upvotes. The hesitation to agree with this is because we have not seen the other 999 potential votes the former submission might get. Perhaps it will achieve an additional 999 upvotes and 0 downvotes and be considered better than the latter, though not likely.
What we really want is an estimate of the *true upvote ratio*. Note that the true upvote ratio is not the same as the observed upvote ratio: the true upvote ratio is hidden, and we only observe upvotes vs. downvotes (one can think of the true upvote ratio as "what is the underlying probability someone gives this submission an upvote, versus a downvote"). So the 999 upvote/1 downvote submission probably has a true upvote ratio close to 1, which we can assert with confidence thanks to the Law of Large Numbers, but on the other hand we are much less certain about the true upvote ratio of the submission with only a single upvote. Sounds like a Bayesian problem to me.
One way to determine a prior on the upvote ratio is to look at the historical distribution of upvote ratios. This can be accomplished by scraping Reddit's submissions and determining a distribution. There are a few problems with this technique though:
1. Skewed data: The vast majority of submissions have very few votes, hence there will be many submissions with ratios near the extremes (see the "triangular plot" in the above Kaggle dataset), effectively skewing our distribution to the extremes. One could try to only use submissions with votes greater than some threshold. Again, problems arise: there is a tradeoff between the number of submissions available to use and the ratio precision gained from a higher threshold.
2. Biased data: Reddit is composed of different subpages, called subreddits. Two examples are *r/aww*, which posts pics of cute animals, and *r/politics*. It is very likely that user behaviour towards submissions of these two subreddits is very different: visitors are likely to be more friendly and affectionate in the former, and would therefore upvote submissions more, compared to the latter, where submissions are likely to be controversial and disagreed upon. Therefore not all submissions are the same.
In light of these, I think it is better to use a `Uniform` prior.
With our prior in place, we can find the posterior of the true upvote ratio. The Python script `top_showerthoughts_submissions.py` will scrape the best posts from the `showerthoughts` community on Reddit. This is a text-only community so the title of each post *is* the post. Below is the top post as well as some other sample posts:
```python
# adding a number to the end of the %run call will get the ith top post.
%run top_showerthoughts_submissions.py 2
print("Post contents: \n")
print(top_post)
```
Post contents:
Toilet paper should be free and have advertising printed on it.
```python
"""
contents: an array of the text from the last 100 top submissions to a subreddit
votes: a 2d numpy array of upvotes, downvotes for each submission.
"""
n_submissions = len(votes)
submissions = np.random.randint( n_submissions, size=4)
print("Some Submissions (out of %d total) \n-----------"%n_submissions)
for i in submissions:
print('"' + contents[i] + '"')
print("upvotes/downvotes: ",votes[i,:], "\n")
```
Some Submissions (out of 98 total)
-----------
"You will never feel how long time is until you have allergies and snot slowly dripping out of your nostrils, while sitting in a classroom with no tissues."
upvotes/downvotes: [71 6]
"What if porn ads weren't fake and all these years I've been missing out on these local mums in my area that want to fuck?"
upvotes/downvotes: [43 11]
"You'll be real lucky to find a Penny in Canada."
upvotes/downvotes: [28 11]
""Smells Like Teen Spirit" is as old to listeners of today as "Yellow Submarine" was to listeners of 1991."
upvotes/downvotes: [92 10]
For a given true upvote ratio $p$ and $N$ votes, the number of upvotes will look like a Binomial random variable with parameters $p$ and $N$. (This is because of the equivalence between upvote ratio and probability of upvoting versus downvoting, out of $N$ possible votes/trials). We create a function that performs Bayesian inference on $p$, for a particular comment's upvote/downvote pair.
```python
import pymc as pm
def posterior_upvote_ratio(upvotes, downvotes, samples=20000):
"""
This function accepts the number of upvotes and downvotes a particular submission received,
and the number of posterior samples to return to the user. Assumes a uniform prior.
"""
N = upvotes + downvotes
upvote_ratio = pm.Uniform("upvote_ratio", 0, 1)
observations = pm.Binomial("obs", N, upvote_ratio, value=upvotes, observed=True)
# do the fitting; first do a MAP as it is cheap and useful.
map_ = pm.MAP([upvote_ratio, observations]).fit()
mcmc = pm.MCMC([upvote_ratio, observations])
mcmc.sample(samples, samples / 4)
return mcmc.trace("upvote_ratio")[:]
```
Below are the resulting posterior distributions.
```python
figsize(11., 8)
posteriors = []
colours = ["#348ABD", "#A60628", "#7A68A6", "#467821", "#CF4457"]
for i in range(len(submissions)):
j = submissions[i]
posteriors.append(posterior_upvote_ratio(votes[j, 0], votes[j, 1]))
plt.hist(posteriors[i], bins=18, normed=True, alpha=.9,
histtype="step", color=colours[i % 5], lw=3,
label='(%d up:%d down)\n%s...' % (votes[j, 0], votes[j, 1], contents[j][:50]))
plt.hist(posteriors[i], bins=18, normed=True, alpha=.2,
histtype="stepfilled", color=colours[i], lw=3, )
plt.legend(loc="upper left")
plt.xlim(0, 1)
plt.title("Posterior distributions of upvote ratios on different submissions");
```
Some distributions are very tight, others have very long tails (relatively speaking), expressing our uncertainty with what the true upvote ratio might be.
### Sorting!
We have been ignoring the goal of this exercise: how do we sort the submissions from *best to worst*? Of course, we cannot sort distributions, we must sort scalar numbers. There are many ways to distill a distribution down to a scalar: expressing the distribution through its expected value, or mean, is one way. The mean is a bad choice though, because it does not take into account the uncertainty of the distribution.
I suggest using the *95% least plausible value*, defined as the value such that there is only a 5% chance the true parameter is lower (think of the lower bound on the 95% credible region). Below are the posterior distributions with the 95% least-plausible value plotted:
```python
N = posteriors[0].shape[0]
lower_limits = []
for i in range(len(submissions)):
j = submissions[i]
plt.hist(posteriors[i], bins=20, normed=True, alpha=.9,
histtype="step", color=colours[i], lw=3,
label='(%d up:%d down)\n%s...' % (votes[j, 0], votes[j, 1], contents[j][:50]))
plt.hist(posteriors[i], bins=20, normed=True, alpha=.2,
histtype="stepfilled", color=colours[i], lw=3, )
v = np.sort(posteriors[i])[int(0.05 * N)]
# plt.vlines( v, 0, 15 , color = "k", alpha = 1, linewidths=3 )
plt.vlines(v, 0, 10, color=colours[i], linestyles="--", linewidths=3)
lower_limits.append(v)
plt.legend(loc="upper left")
plt.title("Posterior distributions of upvote ratios on different submissions");
order = np.argsort(-np.array(lower_limits))
print(order, lower_limits)
```
The best submissions, according to our procedure, are the submissions that are *most-likely* to score a high percentage of upvotes. Visually those are the submissions with the 95% least plausible value close to 1.
Why is sorting based on this quantity a good idea? By ordering by the 95% least plausible value, we are being the most conservative with what we think is best. That is, even in the worst case scenario, when we have severely overestimated the upvote ratio, we can be sure the best comments are still on top. Under this ordering, we impose the following very natural properties (checked numerically just after the list):
1. given two submissions with the same observed upvote ratio, we will assign the submission with more votes as better (since we are more confident it has a higher ratio).
2. given two submissions with the same number of votes, we still assign the submission with more upvotes as *better*.
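Below is a quick numerical check of these two properties, using the `posterior_upvote_ratio` function defined earlier (the vote counts are invented for illustration):

```python
# property 1: same observed upvote ratio, different number of votes
few_votes = posterior_upvote_ratio(10, 10)
many_votes = posterior_upvote_ratio(100, 100)
print(np.percentile(few_votes, 5), np.percentile(many_votes, 5))

# property 2: same number of votes, different number of upvotes
more_up = posterior_upvote_ratio(15, 5)
fewer_up = posterior_upvote_ratio(10, 10)
print(np.percentile(more_up, 5), np.percentile(fewer_up, 5))
```

In both cases the submission we intuitively prefer receives the larger 95% least plausible value.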
### But this is too slow for real-time!
I agree, computing the posterior of every submission takes a long time, and by the time you have computed it, likely the data has changed. I delay the mathematics to the appendix, but I suggest using the following formula to compute the lower bound very fast.
$$ \frac{a}{a + b} - 1.65\sqrt{ \frac{ab}{ (a+b)^2(a + b +1 ) } }$$
where
\begin{align}
& a = 1 + u \\\\
& b = 1 + d \\\\
\end{align}
$u$ is the number of upvotes, and $d$ is the number of downvotes. The formula is a shortcut in Bayesian inference, which will be further explained in Chapter 6 when we discuss priors in more detail.
```python
def intervals(u, d):
a = 1. + u
b = 1. + d
mu = a / (a + b)
std_err = 1.65 * np.sqrt((a * b) / ((a + b) ** 2 * (a + b + 1.)))
return (mu, std_err)
print("Approximate lower bounds:")
posterior_mean, std_err = intervals(votes[:, 0], votes[:, 1])
lb = posterior_mean - std_err
print(lb)
print("\n")
print("Top 40 Sorted according to approximate lower bounds:")
print("\n")
order = np.argsort(-lb)
ordered_contents = []
for i in order[:40]:
ordered_contents.append(contents[i])
print(votes[i, 0], votes[i, 1], contents[i])
print("-------------")
```
Approximate lower bounds:
[ 0.9335036 0.95310536 0.94166971 0.90854227 0.88683909 0.85564276
0.85607414 0.93758888 0.95697574 0.91015237 0.9112593 0.91305389
0.91341024 0.83335231 0.87543995 0.87081169 0.92748782 0.90747915
0.89063214 0.89804044 0.91295322 0.78329196 0.91901344 0.79950031
0.84776174 0.83540757 0.77406294 0.81391583 0.7296015 0.79338766
0.82895671 0.85331368 0.81849519 0.72362912 0.83662174 0.81019924
0.78564811 0.84570434 0.8400282 0.76944053 0.85827725 0.74417233
0.8189683 0.8027221 0.79190256 0.9033107 0.81639188 0.76627386
0.8010596 0.63657302 0.62988646 0.75041771 0.85355829 0.84522753
0.75627191 0.8458571 0.80877728 0.66764706 0.69623887 0.71480224
0.72921035 0.86797314 0.73955911 0.90742546 0.80364062 0.72331349
0.79249393 0.72708753 0.81109538 0.66235556 0.80480879 0.72039455
0.73945971 0.83846154 0.69 0.70597731 0.68175931 0.59412132
0.6011942 0.73158407 0.69121436 0.68134548 0.87746603 0.79809005
0.6296728 0.87152685 0.81814153 0.86498277 0.81018384 0.54207776
0.6296728 0.74107856 0.53025484 0.71034959 0.80149882 0.85773646
0.58343356 0.62971097]
Top 40 Sorted according to approximate lower bounds:
586 18 Someone should develop an AI specifically for reading Terms & Conditions and flagging dubious parts.
-------------
2354 98 Porn is the only industry where it is not only acceptable but standard to separate people based on race, sex and sexual preference.
-------------
1924 101 All polls are biased towards people who are willing to take polls
-------------
949 50 They should charge less for drinks in the drive-thru because you can't refill them.
-------------
3726 238 When I was in elementary school and going through the DARE program, I was positive a gang of older kids was going to corner me and force me to smoke pot. Then I became an adult and realized nobody is giving free drugs to somebody that doesn't want them.
-------------
164 7 "Noted" is the professional way of saying "K".
-------------
100 4 The best answer to the interview question "What is your greatest weakness?" is "interviews".
-------------
267 17 At some point every parent has stopped wiping their child's butt and hoped for the best.
-------------
291 19 You've been doing weird cameos in your friends' dreams since kindergarten.
-------------
121 6 Is it really fair to say a person over 85 has heart failure? Technically, that heart has done exceptionally well.
-------------
523 39 I wonder if America's internet is censored in a similar way that North Korea's is, but we have no idea of it happening.
-------------
539 41 It's surreal to think that the sun and moon and stars we gaze up at are the same objects that have been observed for millenia, by everyone in the history of humanity from cavemen to Aristotle to Jesus to George Washington.
-------------
1509 131 Kenny's family is poor because they're always paying for his funeral.
-------------
164 10 Black hair ties are probably the most popular bracelets in the world.
-------------
26 0 Now that I am a parent of multiple children I have realized that my parents were lying through their teeth when they said they didn't have a favorite.
-------------
41 1 If I was as careful with my whole paycheck as I am with my last $20 I'd be a whole lot better off
-------------
125 8 Surfing the internet without ads feels like a summer evening without mosquitoes
-------------
157 12 I wonder if Superman ever put a pair of glasses on Lois Lane's dog, and she was like "what's this Clark? Did you get me a new dog?"
-------------
1411 157 My life is really like Rihanna's song, "just work work work work work" and the rest of it I can't really understand.
-------------
19 0 Binoculars are like walkie talkies for the deaf.
-------------
221 22 I'm honestly slightly concerned how often Reddit commenters make me laugh compared to my real life friends.
-------------
18 0 Living on the coast is having the window seat of the land you live on.
-------------
188 19 I have not been thankful enough in the last few years that the Black Eyed Peas are no longer ever on the radio
-------------
29 1 Rewatching Mr. Bean, I've realised that the character is an eccentric genius and not a blithering idiot.
-------------
17 0 Sitting on a cold toilet seat or a warm toilet seat both suck for different reasons.
-------------
54 4 You will never feel how long time is until you have allergies and snot slowly dripping out of your nostrils, while sitting in a classroom with no tissues.
-------------
16 0 I sneer at people who read tabloids, but every time I look someone up on Wikipedia the first thing I look for is what controversies they've been involved in.
-------------
1485 222 Kid's menus at restaurants should be smaller portions of the same adult dishes at lower prices and not the junk food that they usually offer.
-------------
1417 212 Eventually once all phones are waterproof we'll be able to push people into pools again
-------------
35 2 Childhood and adolescence are thinking that no one has ever felt the way you do and that no one has ever experienced the things that you have. Adulthood is realizing that almost everyone has felt and experienced something similar.
-------------
60 5 Myspace is so outdated that jokes about it being outdated has become outdated
-------------
87 9 Yahoo!® is the RadioShack® of the Internet.
-------------
33 2 People who "tell it like it is" rarely do so to say something nice
-------------
49 4 The world must have been a spookier place altogether when candles and gas lamps were the only sources of light at night besides the moon and the stars.
-------------
41 3 Closing your eyes after turning off your alarm is a very dangerous game.
-------------
47 4 As a kid, seeing someone step on a banana peel and not slip was a disappointment.
-------------
23 1 The phonebook was the biggest invasion of privacy that everyone was oddly ok with.
-------------
53 5 I'm actually the most productive when I procrastinate because I'm doing everything I possibly can to avoid the main task at hand.
-------------
86 10 "Smells Like Teen Spirit" is as old to listeners of today as "Yellow Submarine" was to listeners of 1991.
-------------
240 36 if an ocean didnt stop immigrants from coming to America what makes us think a wall will?
-------------
We can view the ordering visually by plotting the posterior mean and bounds, and sorting by the lower bound. In the plot below, notice that the left error-bar is sorted (as we suggested this is the best way to determine an ordering), so the means, indicated by dots, do not follow any strong pattern.
```python
r_order = order[::-1][-40:]
plt.errorbar(posterior_mean[r_order], np.arange(len(r_order)),
xerr=std_err[r_order], capsize=0, fmt="o",
color="#7A68A6")
plt.xlim(0.3, 1)
plt.yticks(np.arange(len(r_order) - 1, -1, -1), list(map(lambda x: x[:30].replace("\n", ""), ordered_contents)));
```
In the graphic above, you can see why sorting by mean would be sub-optimal.
### Extension to Starred rating systems
The above procedure works well for upvote-downvote schemes, but what about systems that use star ratings, e.g. 5-star rating systems? Similar problems apply if we simply take the average: an item with two perfect ratings would beat an item with thousands of perfect ratings and a single sub-perfect rating.
We can consider the upvote-downvote problem above as binary: 0 is a downvote, 1 is an upvote. An $N$-star rating system can be seen as a more continuous version of the above, and we can treat a rating of $n$ stars as a reward of $\frac{n}{N}$. For example, in a 5-star system, a 2-star rating corresponds to 0.4. A perfect rating is a 1. We can use the same formula as before, but with $a,b$ defined differently:
$$ \frac{a}{a + b} - 1.65\sqrt{ \frac{ab}{ (a+b)^2(a + b +1 ) } }$$
where
\begin{align}
& a = 1 + S \\\\
& b = 1 + N - S \\\\
\end{align}
where $N$ is the number of users who rated, and $S$ is the sum of all the ratings, under the equivalence scheme mentioned above.
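A minimal sketch of this star-rating version (the helper function and the example ratings below are mine, not from the original analysis):

```python
def star_lower_bound(ratings, n_star=5):
    """Approximate 95% least plausible value for a list of star ratings."""
    ratings = np.asarray(ratings, dtype=float)
    N = len(ratings)                # number of users who rated
    S = np.sum(ratings / n_star)    # sum of the equivalent rewards in [0, 1]
    a = 1. + S
    b = 1. + N - S
    return a / (a + b) - 1.65 * np.sqrt((a * b) / ((a + b) ** 2 * (a + b + 1.)))

# two perfect ratings vs. a thousand perfect ratings plus one 4-star rating
print(star_lower_bound([5, 5]))
print(star_lower_bound([5] * 1000 + [4]))
```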
##### Example: Counting Github stars
What is the average number of stars a Github repository has? How would you calculate this? There are over 6 million repositories, so there is more than enough data to invoke the Law of Large Numbers. Let's start pulling some data. TODO
### Conclusion
While the Law of Large Numbers is cool, it is only valid, as its name implies, for large sample sizes. We have seen how our inference can be affected by not considering *how the data is shaped*.
1. By (cheaply) drawing many samples from the posterior distributions, we can ensure that the Law of Large Numbers applies as we approximate expected values (which we will do in the next chapter).
2. Bayesian inference understands that with small sample sizes, we can observe wild randomness. Our posterior distribution will reflect this by being more spread rather than tightly concentrated. Thus, our inference should be correctable.
3. There are major implications of not considering the sample size, and trying to sort objects that are unstable leads to pathological orderings. The method provided above solves this problem.
### Appendix
##### Derivation of sorting comments formula
Basically what we are doing is using a Beta prior (with parameters $a=1, b=1$, which is a uniform distribution), and using a Binomial likelihood with observations $u, N = u+d$. This means our posterior is a Beta distribution with parameters $a' = 1 + u, b' = 1 + (N - u) = 1+d$. We then need to find the value $x$ such that only 0.05 of the probability lies below $x$. This is usually done by inverting the CDF ([Cumulative Distribution Function](http://en.wikipedia.org/wiki/Cumulative_Distribution_Function)), but the CDF of the Beta distribution, for integer parameters, is a large, unwieldy sum [3].
We instead use a Normal approximation. The mean of the Beta is $\mu = a'/(a'+b')$ and the variance is
$$\sigma^2 = \frac{a'b'}{ (a' + b')^2(a'+b'+1) }$$
Hence we solve the following equation for $x$ and have an approximate lower bound.
$$ 0.05 = \Phi\left( \frac{(x - \mu)}{\sigma}\right) $$
$\Phi$ being the [cumulative distribution for the normal distribution](http://en.wikipedia.org/wiki/Normal_distribution#Cumulative_distribution)
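Solving for $x$ gives $x = \mu + \sigma\,\Phi^{-1}(0.05) \approx \mu - 1.65\,\sigma$, which is exactly the formula used earlier. A small sketch of the approximation versus the exact Beta quantile (the example counts are arbitrary):

```python
import scipy.stats as stats

def approx_lower_bound(u, d, q=0.05):
    a, b = 1. + u, 1. + d
    mu = a / (a + b)
    sigma = np.sqrt(a * b / ((a + b) ** 2 * (a + b + 1.)))
    return mu + sigma * stats.norm.ppf(q)   # ppf is the inverse CDF, i.e. Phi^{-1}

# compare against the exact Beta quantile for the 999-upvote/1-downvote example
print(approx_lower_bound(999, 1), stats.beta.ppf(0.05, 1 + 999, 1 + 1))
```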
##### Exercises
1\. How would you estimate the quantity $E\left[ \cos{X} \right]$, where $X \sim \text{Exp}(4)$? What about $E\left[ \cos{X} | X \lt 1\right]$, i.e. the expected value *given* we know $X$ is less than 1? Would you need more samples than the original sample size to be equally accurate?
```python
# Enter code here
import scipy.stats as stats
exp = stats.expon(scale=4)
N = int(1e5)
X = exp.rvs(N)
# ...
```
2\. The following table was located in the paper "Going for Three: Predicting the Likelihood of Field Goal Success with Logistic Regression" [2]. The table ranks football field-goal kickers by their percent of non-misses. What mistake have the researchers made?
-----
#### Kicker Careers Ranked by Make Percentage
<table><tbody><tr><th>Rank </th><th>Kicker </th><th>Make % </th><th>Number of Kicks</th></tr><tr><td>1 </td><td>Garrett Hartley </td><td>87.7 </td><td>57</td></tr><tr><td>2</td><td> Matt Stover </td><td>86.8 </td><td>335</td></tr><tr><td>3 </td><td>Robbie Gould </td><td>86.2 </td><td>224</td></tr><tr><td>4 </td><td>Rob Bironas </td><td>86.1 </td><td>223</td></tr><tr><td>5</td><td> Shayne Graham </td><td>85.4 </td><td>254</td></tr><tr><td>… </td><td>… </td><td>…</td><td> </td></tr><tr><td>51</td><td> Dave Rayner </td><td>72.2 </td><td>90</td></tr><tr><td>52</td><td> Nick Novak </td><td>71.9 </td><td>64</td></tr><tr><td>53 </td><td>Tim Seder </td><td>71.0 </td><td>62</td></tr><tr><td>54 </td><td>Jose Cortez </td><td>70.7</td><td> 75</td></tr><tr><td>55 </td><td>Wade Richey </td><td>66.1</td><td> 56</td></tr></tbody></table>
3\. In August 2013, [a popular post](http://bpodgursky.wordpress.com/2013/08/21/average-income-per-programming-language/) on the average income per programmer of different languages was trending. Here's the summary chart (reproduced without permission, cause when you lie with stats, you gunna get the hammer). What do you notice about the extremes?
------
#### Average household income by programming language
<table >
<tr><td>Language</td><td>Average Household Income ($)</td><td>Data Points</td></tr>
<tr><td>Puppet</td><td>87,589.29</td><td>112</td></tr>
<tr><td>Haskell</td><td>89,973.82</td><td>191</td></tr>
<tr><td>PHP</td><td>94,031.19</td><td>978</td></tr>
<tr><td>CoffeeScript</td><td>94,890.80</td><td>435</td></tr>
<tr><td>VimL</td><td>94,967.11</td><td>532</td></tr>
<tr><td>Shell</td><td>96,930.54</td><td>979</td></tr>
<tr><td>Lua</td><td>96,930.69</td><td>101</td></tr>
<tr><td>Erlang</td><td>97,306.55</td><td>168</td></tr>
<tr><td>Clojure</td><td>97,500.00</td><td>269</td></tr>
<tr><td>Python</td><td>97,578.87</td><td>2314</td></tr>
<tr><td>JavaScript</td><td>97,598.75</td><td>3443</td></tr>
<tr><td>Emacs Lisp</td><td>97,774.65</td><td>355</td></tr>
<tr><td>C#</td><td>97,823.31</td><td>665</td></tr>
<tr><td>Ruby</td><td>98,238.74</td><td>3242</td></tr>
<tr><td>C++</td><td>99,147.93</td><td>845</td></tr>
<tr><td>CSS</td><td>99,881.40</td><td>527</td></tr>
<tr><td>Perl</td><td>100,295.45</td><td>990</td></tr>
<tr><td>C</td><td>100,766.51</td><td>2120</td></tr>
<tr><td>Go</td><td>101,158.01</td><td>231</td></tr>
<tr><td>Scala</td><td>101,460.91</td><td>243</td></tr>
<tr><td>ColdFusion</td><td>101,536.70</td><td>109</td></tr>
<tr><td>Objective-C</td><td>101,801.60</td><td>562</td></tr>
<tr><td>Groovy</td><td>102,650.86</td><td>116</td></tr>
<tr><td>Java</td><td>103,179.39</td><td>1402</td></tr>
<tr><td>XSLT</td><td>106,199.19</td><td>123</td></tr>
<tr><td>ActionScript</td><td>108,119.47</td><td>113</td></tr>
</table>
### References
1. Wainer, Howard. *The Most Dangerous Equation*. American Scientist, Volume 95.
2. Clark, Torin K., Aaron W. Johnson, and Alexander J. Stimpson. "Going for Three: Predicting the Likelihood of Field Goal Success with Logistic Regression." (2013): n. pag. [Web](http://www.sloansportsconference.com/wp-content/uploads/2013/Going%20for%20Three%20Predicting%20the%20Likelihood%20of%20Field%20Goal%20Success%20with%20Logistic%20Regression.pdf). 20 Feb. 2013.
3. http://en.wikipedia.org/wiki/Beta_function#Incomplete_beta_function
```python
from IPython.core.display import HTML
def css_styling():
styles = open("../styles/custom.css", "r").read()
return HTML(styles)
css_styling()
```
<style>
@font-face {
font-family: "Computer Modern";
src: url('http://9dbb143991406a7c655e-aa5fcb0a5a4ec34cff238a2d56ca4144.r56.cf5.rackcdn.com/cmunss.otf');
}
@font-face {
font-family: "Computer Modern";
font-weight: bold;
src: url('http://9dbb143991406a7c655e-aa5fcb0a5a4ec34cff238a2d56ca4144.r56.cf5.rackcdn.com/cmunsx.otf');
}
@font-face {
font-family: "Computer Modern";
font-style: oblique;
src: url('http://9dbb143991406a7c655e-aa5fcb0a5a4ec34cff238a2d56ca4144.r56.cf5.rackcdn.com/cmunsi.otf');
}
@font-face {
font-family: "Computer Modern";
font-weight: bold;
font-style: oblique;
src: url('http://9dbb143991406a7c655e-aa5fcb0a5a4ec34cff238a2d56ca4144.r56.cf5.rackcdn.com/cmunso.otf');
}
div.cell{
width:800px;
margin-left:16% !important;
margin-right:auto;
}
h1 {
font-family: Helvetica, serif;
}
h4{
margin-top:12px;
margin-bottom: 3px;
}
div.text_cell_render{
font-family: Computer Modern, "Helvetica Neue", Arial, Helvetica, Geneva, sans-serif;
line-height: 145%;
font-size: 130%;
width:800px;
margin-left:auto;
margin-right:auto;
}
.CodeMirror{
font-family: "Source Code Pro", source-code-pro,Consolas, monospace;
}
.prompt{
display: None;
}
.text_cell_render h5 {
font-weight: 300;
font-size: 22pt;
color: #4057A1;
font-style: italic;
margin-bottom: .5em;
margin-top: 0.5em;
display: block;
}
.warning{
color: rgb( 240, 20, 20 )
}
</style>
<style>
img{
max-width:800px}
</style>
# 10 Ordinary Differential Equations (ODEs)
[ODE](http://mathworld.wolfram.com/OrdinaryDifferentialEquation.html)s describe many phenomena in physics. They describe the changes of a **dependent variable** $y(t)$ as a function of a **single independent variable** (e.g. $t$ or $x$).
An ODE of **order** $n$
$$
F(t, y^{(0)}, y^{(1)}, ..., y^{(n)}) = 0
$$
contains derivatives $y^{(k)}(t) \equiv y^{(k)} \equiv \frac{d^{k}y(t)}{dt^{k}}$ up to the $n$-th derivative (and $y^{(0)} \equiv y$).
### Initial and boundary conditions
* $n$ **initial conditions** are needed to *uniquely determine* the solution of an $n$-th order ODE, e.g., initial positions and velocities.
* **Boundary conditions** (values of the solution on the domain boundaries) can additionally restrict solutions, but the resulting *eigenvalue problems* are more difficult, e.g., the wavefunction must go to 0 at $\pm\infty$.
### Linear ODEs
A **linear** ODE contains no higher powers than 1 of any of the $y^{(k)}$.
*Superposition principle*: Linear combinations of solutions are also solutions.
#### Example: First order linear ODE
\begin{align}
\frac{dy}{dt} &= f(t)y + g(t)\\
y^{(1)} &= f(t)y + g(t)\\
% y^{(1)} - f(t)y - g(t) &= 0
\end{align}
##### Radioactive decay
$$
\frac{dN}{dt} = -k N
$$
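As an aside, this linear ODE is simple enough to solve symbolically, e.g. with `sympy` (a sketch; `sympy` is not otherwise used in this notebook):

```python
import sympy as sp

t, k = sp.symbols('t k', positive=True)
N = sp.Function('N')

# dN/dt = -k N   =>   N(t) = C1 * exp(-k t)
sp.dsolve(sp.Eq(N(t).diff(t), -k * N(t)), N(t))
```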
### Non-linear ODEs
**Non-linear** ODEs can contain any powers in the dependent variable and its derivatives.
No superposition of solutions. Often impossible to solve analytically.
#### Example: Second order (general) ODE
\begin{gather}
\frac{d^2 y}{dt^2} + \lambda(t) \frac{dy}{dt} = f\left(t, y, \frac{dy}{dt}\right)\\
\end{gather}
##### Newton's equations of motion
$$
m\frac{d^2 x}{dt^2} = F(x) + F_\text{ext}(x, t) \quad \text{with}
\quad F(x) = -\frac{dU}{dx}
$$
(Force is often derived from a potential energy $U(x)$ and may contain non-linear terms such as $x^{-2}$ or $x^3$.)
## Partial differential equations (PDEs)
* more than one independent variable (e.g. $x$ and $t$)
* partial derivatives
* much more difficult than ODEs
#### Example: Schrödinger equation (Quantum Mechanics)
$$
i\hbar \frac{\partial\psi(\mathbf{x}, t)}{\partial t} = -\frac{\hbar^2}{2m}
\left(\frac{\partial^2 \psi}{\partial x^2} +
\frac{\partial^2 \psi}{\partial y^2} +
\frac{\partial^2 \psi}{\partial z^2}
\right) + V(\mathbf{x})\, \psi(\mathbf{x}, t)
$$
## Harmonic and anharmonic oscillator
* particle with mass $m$ connected to a spring
* spring described by a harmonic potential or anharmonic ones in the displacements from equilibrium $x$
\begin{align}
U_1(x) &= \frac{1}{2} k x^2, \quad k=1\\
U_2(x) &= \frac{1}{2} k x^2 \left(1 - \frac{2}{3}\alpha x\right), \quad k=1,\ \alpha=\frac{1}{2}\\
U_3(x) &= \frac{1}{p} k x^p, \quad k=1,\ p=6
\end{align}
1. What do these potentials look like? Sketch or plot.
2. Calculate the forces.
#### Potentials
```python
import numpy as np
def U1(x, k=1):
return 0.5 * k * x*x
def U2(x, k=1, alpha=0.5):
return 0.5 * k * x*x * (1 - (2/3)*alpha*x)
def U3(x, k=1, p=6):
return (k/p) * np.power(x, p)
```
```python
import matplotlib
import matplotlib.pyplot as plt
matplotlib.style.use('seaborn-talk')
%matplotlib inline
```
```python
X = np.linspace(-3, 3, 100)
ax = plt.subplot(1,1,1)
ax.plot(X, U1(X), label=r"$U_1$")
ax.plot(X, U2(X), label=r"$U_2$")
ax.plot(X, U3(X), label=r"$U_3$")
ax.set_ylim(-0.5, 10)
ax.legend(loc="upper center");
```
#### Forces
\begin{align}
F_1(x) &= -kx\\
F_2(x) &= -kx(1 - \alpha x)\\
F_3(x) &= -k x^{p-1}
\end{align}
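A minimal sketch of the corresponding force functions, matching the potentials coded above (only `F1` is reused later in this notebook):

```python
def F1(x, k=1):
    """Force from the harmonic potential U1: F = -dU1/dx."""
    return -k * x

def F2(x, k=1, alpha=0.5):
    """Force from the anharmonic potential U2: F = -dU2/dx."""
    return -k * x * (1 - alpha * x)

def F3(x, k=1, p=6):
    """Force from the power-law potential U3: F = -dU3/dx."""
    return -k * np.power(x, p - 1)
```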
## ODE Algorithms
Basic idea:
1. Start with initial conditions, $y_0 \equiv y(t=0)$
2. Use $\frac{dy}{dt} = f(t, y)$ (the RHS!) to advance solution a small step $h$ forward in time: $y(t=h) \equiv y_1$
3. Repeat with $y_1$ to obtain $y_2 \equiv y(t=2h)$... and for all future values of $t$.
Possible issues
* small differences: subtractive cancellation and round-off error accumulation
* extrapolation: numerical "solution" can deviate wildly from exact
* possibly need adaptive $h$
### Euler's rule
Simple: forward difference
\begin{align}
f(t, y) = \frac{dy(t)}{dt} &\approx \frac{y(t_{n+1}) - y(t_n)}{h}\\
y_{n+1} &\approx y_n + h f(t_n, y_n) \quad \text{with} \quad y_n := y(t_n)
\end{align}
Error will be $\mathcal{O}(h^2)$ (bad!).
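As a quick illustration (a sketch with made-up numbers, not part of the original notebook), Euler's rule applied to the radioactive decay example from above:

```python
k = 1.0      # decay constant (assumed)
N0 = 1000.0  # initial amount (assumed)
h = 0.1
t = np.arange(0, 5, h)

N = np.empty_like(t)
N[0] = N0
for n in range(len(t) - 1):
    # Euler step: y_{n+1} = y_n + h * f(t_n, y_n) with f = -k N
    N[n+1] = N[n] + h * (-k * N[n])

plt.plot(t, N, 'o-', label=r"Euler, $h=0.1$")
plt.plot(t, N0 * np.exp(-k * t), 'k--', label="exact")
plt.legend();
```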
Also: what if we have a second order ODE ?!?! We only used $dy/dt$.
### Convert 2nd order ODE to 2 coupled 1st order ODEs
The 2nd order ODE is
$$
\frac{d^2 y}{dt^2} = f(t, y)
$$
Introduce "dummy" dependent variables $y_i$ with $y_0 \equiv y$ and
\begin{alignat}{1}
\frac{dy}{dt} &= \frac{dy_0}{dt} &= y_1\\
\frac{d^2y}{dt^2} &= \frac{dy_1}{dt} &= {} f(t, y_0).
\end{alignat}
The first equation defines the velocity $y_1 = v$ and the second one is the original ODE.
### $n$-th order ODE to $n$ coupled 1st order ODEs
The $n$-th order ODE is
$$
\frac{d^n y}{dt^n} = f(t, y, \frac{d y}{dt}, \frac{d^2 y}{dt^2}, \dots, \frac{d^{n-1} y}{dt^{n-1}})
$$
Introduce "dummy" dependent variables $y^{(i)}$ with $y^{(0)} \equiv y$ and
\begin{align}
\frac{dy^{(0)}}{dt} &= y^{(1)}\\
\frac{dy^{(1)}}{dt} &= y^{(2)}\\
\dots & \\
\frac{dy^{(n-1)}}{dt} &= f(t, y^{(0)}, y^{(1)}, y^{(2)}, \dots, y^{(n-1)}).
\end{align}
### General standard (dynamic) form of ODEs
1 ODE of *any order* $n$ $\rightarrow$ $n$ coupled simultaneous first-order ODEs in $n$ unknowns $y^{(0)}, \dots, y^{(n-1)}$:
\begin{align}
\frac{dy^{(0)}}{dt} &= f^{(0)}(t, y^{(0)}, \dots, y^{(n-1)})\\
\frac{dy^{(1)}}{dt} &= f^{(1)}(t, y^{(0)}, \dots, y^{(n-1)})\\
\vdots & \\
\frac{dy^{(n-1)}}{dt} &= f^{(n-1)}(t, y^{(0)}, \dots, y^{(n-1)})\\
\end{align}
In $n$-dimensional vector notation:
\begin{align}
\frac{d\mathbf{y}(t)}{dt} &= \mathbf{f}(t, \mathbf{y})\\
\mathbf{y} &= \left(\begin{array}{c}
y^{(0)}(t) \\
y^{(1)}(t) \\
\vdots \\
y^{(n-1)}(t)
\end{array}\right),
\quad
\mathbf{f} = \left(\begin{array}{c}
f^{(0)}(t, \mathbf{y}) \\
f^{(1)}(t, \mathbf{y}) \\
\vdots \\
f^{(n-1)}(t, \mathbf{y})
\end{array}\right)
\end{align}
#### Example: Convert Newton's EOMs to standard form
$$
\frac{d^2 x}{dt^2} = m^{-1} F\Big(t, x, \frac{dx}{dt}\Big)
$$
RHS may *not contain any explicit derivatives* but components of $\mathbf{y}$ can represent derivatives.
* position $x$ as first dependent variable $y^{(0)}$ (as usual).
* velocity $dx/dt$ as second dependent variable $y^{(1)}$
\begin{align}
y^{(0)}(t) &:= x(t)\\
y^{(1)}(t) &:= \frac{dx}{dt} = \frac{dy^{(0)}}{dt}
\end{align}
One 2nd order ODE
$$
\frac{d^2 x}{dt^2} = m^{-1} F\Big(t, x, \frac{dx}{dt}\Big)
$$
to two simultaneous 1st order ODEs:
\begin{align}
\frac{dy^{(0)}}{dt} &= y^{(1)}(t)\\
\frac{dy^{(1)}}{dt} &= m^{-1} F\Big(t, y^{(0)}, y^{(1)}\Big)
\end{align}
\begin{align}
\frac{d\mathbf{y}(t)}{dt} &= \mathbf{f}(t, \mathbf{y})\\
\mathbf{y} &= \left(\begin{array}{c}
y^{(0)} \\
y^{(1)}
\end{array}\right) =
\left(\begin{array}{c}
x(t) \\
\frac{dx}{dt}
\end{array}\right),\\
\mathbf{f} &= \left(\begin{array}{c}
y^{(1)}(t) \\
m^{-1} F\Big(t, y^{(0)}, y^{(1)}\Big)
\end{array}\right) =
\left(\begin{array}{c}
\frac{dx}{dt} \\
m^{-1} F\Big(t, x(t), \frac{dx}{dt}\Big)
\end{array}\right)
\end{align}
#### Example: 1D harmonic oscillator in standard form
With $F_1 = -k x$:
$$
\frac{d^2 x}{dt^2} = -m^{-1}k x
$$
convert to
\begin{align}
\frac{dy^{(0)}}{dt} &= y^{(1)}(t) \\
\frac{dy^{(1)}}{dt} &= -m^{-1}k y^{(0)}
\end{align}
Force (or derivative) function $\mathbf{f}$ and initial conditions:
\begin{alignat}{3}
f^{(0)}(t, \mathbf{y}) &= y^{(1)},
&\quad y^{(0)}(0) &= x_0,\\
f^{(1)}(t, \mathbf{y}) &= -m^{-1} k y^{(0)},
&\quad y^{(1)}(0) &= v_0.
\end{alignat}
### Euler's rule (standard form)
Given the $n$-dimensional vectors from the ODE standard form
$$
\frac{d\mathbf{y}}{dt} = \mathbf{f}(t, \mathbf{y})
$$
the **Euler rule** amounts to
\begin{align}
\mathbf{f}(t, \mathbf{y}) = \frac{d\mathbf{y}(t)}{dt} &\approx \frac{\mathbf{y}(t_{n+1}) - \mathbf{y}(t_n)}{\Delta t}\\
\mathbf{y}_{n+1} &\approx \mathbf{y}_n + \Delta t \mathbf{f}(t_n, \mathbf{y}_n) \quad \text{with} \quad \mathbf{y}_n := \mathbf{y}(t_n)
\end{align}
## Problem: Numerically integrate the 1D harmonic oscillator with Euler
\begin{alignat}{3}
f^{(0)}(t, \mathbf{y}) &= y^{(1)},
&\quad y^{(0)}(0) &= x_0,\\
f^{(1)}(t, \mathbf{y}) &= - \frac{k}{m} y^{(0)},
&\quad y^{(1)}(0) &= v_0.
\end{alignat}
with $k=1$; $x_0 = 0$ and $v_0 = +1$.
### Explicit implementation:
* Note how in `f_harmonic` we are constructing the force vector of the standard ODE representation
* `y` is the vector of dependents in the standard representation
* We pre-allocate the array for `y` and then assign to individual elements with the
```python
y[:] = ...
```
notation, which has higher performance than creating the array anew every time.
```python
import numpy as np
def F1(x, k=1):
"""Harmonic force"""
return -k*x
def f_harmonic(t, y, k=1, m=1):
"""Force vector in standard ODE form (n=2)"""
return np.array([y[1], F1(y[0], k=k)/m])
t_max = 100
h = 0.01
Nsteps = int(t_max/h)
t_range = h * np.arange(Nsteps)
x = np.empty_like(t_range)
y = np.zeros(2)
# initial conditions
x0, v0 = 0.0, 1.0
y[:] = x0, v0
for i, t in enumerate(t_range):
# store position that corresponds to time t_i
x[i] = y[0]
# Euler integrator
y[:] = y + h * f_harmonic(t, y)
```
Plot the position $x(t)$ (which is $y_0$) against time:
```python
plt.plot(t_range, x)
```
Although we see oscillations in $x(t)$, the fact that the *amplitude increases with time* instead of staying constant (as required by *energy conservation*) indicates that the Euler integrator has a serious problem. We will come back to this point later.
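A quick way to see the problem (a sketch reusing the variables from the cell above) is to track the total energy $E = \frac{1}{2}v^2 + \frac{1}{2}kx^2$ along the trajectory; it should be constant, but the Euler trajectory steadily gains energy:

```python
# re-run the Euler loop, this time recording the total energy (k = m = 1)
E = np.empty_like(t_range)
y[:] = x0, v0
for i, t in enumerate(t_range):
    E[i] = 0.5 * y[1]**2 + 0.5 * y[0]**2
    y[:] = y + h * f_harmonic(t, y)

plt.plot(t_range, E)
plt.xlabel("t")
plt.ylabel("total energy");
```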
### Modular solution with functions
We can make the Euler integrator a function, which makes the code more readable and modular and we can make the whole integration a function, too. This will allow us to easily run the integration with different initial values or `h` steps.
```python
import numpy as np
def F1(x, k=1):
"""Harmonic force"""
return -k*x
def f_harmonic(t, y, k=1, m=1):
"""Force vector in standard ODE form (n=2)"""
return np.array([y[1], F1(y[0], k=k)/m])
def euler(y, f, t, h):
"""Euler integrator.
Returns new y at t+h.
"""
return y + h * f(t, y)
def integrate(x0=0, v0=1, t_max=100, h=0.001):
"""Integrate the harmonic oscillator with force F1.
Note that the spring constant k and particle mass m are currently
pre-defined.
Arguments
---------
x0 : float
initial position
v0 : float
initial velocity
t_max : float
time to integrate out to
h : float, default 0.001
integration time step
Returns
-------
Tuple ``(t, x)`` with times and positions.
"""
Nsteps = t_max/h
t_range = h * np.arange(Nsteps)
x = np.empty_like(t_range)
y = np.zeros(2)
# initial conditions
y[:] = x0, v0
for i, t in enumerate(t_range):
# store position that corresponds to time t_i
x[i] = y[0]
# Euler integrator
y[:] = euler(y, f_harmonic, t, h)
return t_range, x
```
Plot the position as a function of time, $x(t)$.
```python
t, x = integrate(t_max=300, h=0.001)
plt.plot(t, x)
```
### Compare to analytical solution
Analytical solution:
$$
\omega = \sqrt{\frac{k}{m}}
$$
and
$$
x(t) = A \sin(\omega t)
$$
here with $A=1$ and $\omega = 1$:
```python
plt.plot(t, x)
plt.plot(t, np.sin(t), color="red")
```
Note the increase in amplitude. Explore if smaller $h$ fixes this obvious problem.
```python
t, x = integrate(h=0.001)
plt.plot(t, x)
plt.plot(t, np.sin(t), color="red")
```
Smaller $h$ improves the integration (but Euler is still a bad algorithm... just run out for longer, i.e., higher `t_max`.)
### Note on Euler's global error
How does the global error after time $t$ behave, i.e., what is its dependency on step size $h$?
* Per-step (**local**) error is $\mathcal{O}(h^2)$.
* For time $t = N h$ we have $N \propto 1/h$ steps.
We assume that we linearly accumulate local error for the **global error** for $N$ steps:
$$
\mathcal{O}(h^2) \times h^{-1} \propto \mathcal{O}(h)
$$
Thus, halving $h$ should halve the global error.
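A rough numerical check of this scaling, using the `integrate()` function defined above and the exact solution $x(t) = \sin t$:

```python
for h in (0.01, 0.005, 0.0025):
    t, x = integrate(t_max=10, h=h)
    err = np.abs(x[-1] - np.sin(t[-1]))
    print("h = {0:<8g} error at end of run: {1:.3e}".format(h, err))
```

Each halving of `h` should roughly halve the error.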
## Optional: Semi-implicit Euler
We can apply a simple trick to turn standard (forward) Euler into a _much_ better algorithm but this only works nicely when we solve Hamilton's equations of motion (Newton's equation without velocity-dependent potentials) and it also does not work nicely with the standard form.
In the following, $x$ is the position, $v = \dot{x} = \frac{dx}{dt}$ the velocity, and $f = F/m$ is the acceleration (but keeping the suggestive letter "f" for the force). Note that the force only depends on positions, $f(x) = F(x)/m$.
### Standard Euler for Hamiltonian dynamics
\begin{align}
x_{n+1} &= x_n + h v_n\\
v_{n+1} &= v_n + h f(x_n)
\end{align}
This is the same algorithm that we discussed in a more general form above.
### Semi-implicit Euler for Hamiltonian dynamics
The trick is to use the _updated_ positions $x_{n+1}$ for the velocity $v_{n+1}$ update, which gives the [semi-implicit Euler](https://en.wikipedia.org/wiki/Semi-implicit_Euler_method) method:
\begin{align}
x_{n+1} &= x_n + h v_n\\
v_{n+1} &= v_n + h f(x_{n+1})
\end{align}
This algorithm _does_ conserve energy over the long term because it is a symplectic algorithm (it maintains the deeper mathematical structures and invariants of Hamiltonian dynamics) although its error is still the same as for standard Euler.
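Here is a minimal sketch of the semi-implicit Euler integrator applied to the same harmonic oscillator (reusing `f_harmonic` from above; this cell is an illustration, not part of the original notebook):

```python
def euler_semi_implicit(y, f, t, h):
    """One semi-implicit Euler step for y = [x, v]."""
    x_new = y[0] + h * y[1]
    # evaluate the acceleration at the *updated* position x_{n+1}
    a_new = f(t, np.array([x_new, y[1]]))[1]
    v_new = y[1] + h * a_new
    return np.array([x_new, v_new])

t_max, h = 100, 0.01
t_range = h * np.arange(int(t_max / h))
x = np.empty_like(t_range)
y = np.array([0.0, 1.0])   # x0 = 0, v0 = 1
for i, t in enumerate(t_range):
    x[i] = y[0]
    y = euler_semi_implicit(y, f_harmonic, t, h)

plt.plot(t_range, x, label="semi-implicit Euler")
plt.plot(t_range, np.sin(t_range), "r--", label="exact")
plt.legend();
```

The amplitude now stays bounded instead of growing, in line with the energy-conservation argument above.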
```python
```
- - - -
# Mechpy Tutorials
a mechanical engineering toolbox
source code - https://github.com/nagordon/mechpy
documentation - https://nagordon.github.io/mechpy/web/
- - - -
Neal Gordon
2017-02-20
- - - -
## Composite Plate Mechanics with Python
reference: hyer page 584. 617
The motivation behind this talk is to explore the capability of python as a scientific computation tool as well as solve a typical calculation that could either be done by hand, or coded. I find coding to be a convenient way to learn challenging mathematics because I can easily replicate my work in the future when I can't remember the details of the calculation or, if there are any errors, they can typically be easily fixed and the other calculations re-run without a lot of duplication of effort.
Composite mechanics is very iterative by nature, and it is easiest to employ linear algebra to find displacements, strains and stresses of composites. Coding solutions is also handy when visualizations are required.
For this example, we are interested in calculating the stress-critical ply in a simple asymmetric composite plate with a pressure load applied. We can choose a variety of boundary conditions for our plate, but this solution is limited to 2-dimensional displacements, x and z. If we are interested in 3-dimensional displacements, the problem becomes much more challenging, as partial differentiation of the governing equations gives us a PDE, which is more challenging to solve.
The steps to solving are
- Identify governing and equilibrium equations
- import python required libraries
- declare symbolic variables
- declare numeric variables, including material properties, plate dimensions, and plate pressure
- solve 4th order differntial equation with 7 constants
- apply plate boundary conditions and acquire u(x) and w(x) displacement functions
- acquire strain equations from displacement
- acquire stress equations from strain
- determine critical ply from highest ply stress ratio
```python
# Import Python modules and
import numpy as np
from sympy import *
from pprint import pprint
# printing and plotting settings
init_printing(use_latex='mathjax')
get_ipython().magic('matplotlib inline') # inline plotting
x,y,q = symbols('x,y,q')
```
As mentioned before, if we want to perform a 3-dimensional displacement model of the composite plate, we would have 6 reaction forces that are a function of x and y. Those 6 reaction forces are related by 3 equilibrium equations
```python
# # hyer page 584
# # Equations of equilibrium
# Nxf = Function('N_x')(x,y)
# Nyf = Function('N_y')(x,y)
# Nxyf = Function('N_xy')(x,y)
# Mxf = Function('M_x')(x,y)
# Myf = Function('M_y')(x,y)
# Mxyf = Function('M_xy')(x,y)
# symbols for force and moments
Nx,Ny,Nxy,Mx,My,Mxy = symbols('N_x,N_y,N_xy,M_x,M_y,M_xy')
Nxf,Nyf,Nxyf,Mxf,Myf,Mxyf = symbols('Nxf,Nyf,Nxyf,Mxf,Myf,Mxyf')
```
```python
Eq(0,diff(Nx(x,y), x)+diff(Nxy(x,y),y))
```
$$0 = \frac{\partial}{\partial x} \operatorname{N_{x}}{\left (x,y \right )} + \frac{\partial}{\partial y} \operatorname{N_{xy}}{\left (x,y \right )}$$
```python
Eq(0,diff(Nxy(x,y), x)+diff(Ny(x,y),y))
```
$$0 = \frac{\partial}{\partial x} \operatorname{N_{xy}}{\left (x,y \right )} + \frac{\partial}{\partial y} \operatorname{N_{y}}{\left (x,y \right )}$$
```python
Eq(0, diff(Mx(x,y),x,2) + 2*diff(Mxy(x,y),x,y) + diff(My(x,y) ,y,2)+ q )
```
$$0 = q + \frac{\partial^{2}}{\partial x^{2}} \operatorname{M_{x}}{\left (x,y \right )} + 2 \frac{\partial^{2}}{\partial x\partial y} \operatorname{M_{xy}}{\left (x,y \right )} + \frac{\partial^{2}}{\partial y^{2}} \operatorname{M_{y}}{\left (x,y \right )}$$
What makes composite plates special is the fact that they are typically not isotropic. This is handled by the 6x6 ABD matrix that defines the composite's properties axially, in bending, and the coupling between the two.
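As an illustrative sketch of that structure (the notation is assumed here, chosen to match the symbols declared below), the classical lamination theory relation between the force and moment resultants and the mid-plane strains and curvatures can be written out symbolically:

```python
# build symmetric A, B, D blocks with the usual 1, 2, 6 laminate indices
idx = [1, 2, 6]
Amat = Matrix(3, 3, lambda i, j: Symbol('A%d%d' % (min(idx[i], idx[j]), max(idx[i], idx[j]))))
Bmat = Matrix(3, 3, lambda i, j: Symbol('B%d%d' % (min(idx[i], idx[j]), max(idx[i], idx[j]))))
Dmat = Matrix(3, 3, lambda i, j: Symbol('D%d%d' % (min(idx[i], idx[j]), max(idx[i], idx[j]))))
ABD = Amat.row_join(Bmat).col_join(Bmat.row_join(Dmat))   # 6x6 [[A, B], [B, D]]

eps0 = Matrix(symbols('epsilon_x epsilon_y gamma_xy'))    # mid-plane strains
kappa = Matrix(symbols('kappa_x kappa_y kappa_xy'))       # curvatures
ABD * eps0.col_join(kappa)    # -> the six force and moment resultants N_x ... M_xy
```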
```python
# composite properties
A11,A22,A66,A12,A16,A26,A66 = symbols('A11,A22,A66,A12,A16,A26,A66')
B11,B22,B66,B12,B16,B26,B66 = symbols('B11,B22,B66,B12,B16,B26,B66')
D11,D22,D66,D12,D16,D26,D66 = symbols('D11,D22,D66,D12,D16,D26,D66')
## constants of integration when solving differential equation
C1,C2,C3,C4,C5,C6 = symbols('C1,C2,C3,C4,C5,C6')
# plate and composite parameters
th,a,b = symbols('th,a,b')
# displacement functions
u0 = Function('u0')(x,y)
v0 = Function('v0')(x,y)
w0 = Function('w0')(x,y)
```
Let's compute our 6 displacement conditions, which is where our PDEs show up
```python
Nxf = A11*diff(u0,x) + A12*diff(v0,y) + A16*(diff(u0,y) + diff(v0,x)) - B11*diff(w0,x,2) - B12*diff(w0,y,2) - 2*B16*diff(w0,x,y)
Eq(Nx, Nxf)
```
$$N_{x} = A_{11} \frac{\partial}{\partial x} \operatorname{u_{0}}{\left (x,y \right )} + A_{12} \frac{\partial}{\partial y} \operatorname{v_{0}}{\left (x,y \right )} + A_{16} \left(\frac{\partial}{\partial y} \operatorname{u_{0}}{\left (x,y \right )} + \frac{\partial}{\partial x} \operatorname{v_{0}}{\left (x,y \right )}\right) - B_{11} \frac{\partial^{2}}{\partial x^{2}} \operatorname{w_{0}}{\left (x,y \right )} - B_{12} \frac{\partial^{2}}{\partial y^{2}} \operatorname{w_{0}}{\left (x,y \right )} - 2 B_{16} \frac{\partial^{2}}{\partial x\partial y} \operatorname{w_{0}}{\left (x,y \right )}$$
```python
Nyf = A12*diff(u0,x) + A22*diff(v0,y) + A26*(diff(u0,y) + diff(v0,x)) - B12*diff(w0,x,2) - B22*diff(w0,y,2) - 2*B26*diff(w0,x,y)
Eq(Ny,Nyf)
```
$$N_{y} = A_{12} \frac{\partial}{\partial x} \operatorname{u_{0}}{\left (x,y \right )} + A_{22} \frac{\partial}{\partial y} \operatorname{v_{0}}{\left (x,y \right )} + A_{26} \left(\frac{\partial}{\partial y} \operatorname{u_{0}}{\left (x,y \right )} + \frac{\partial}{\partial x} \operatorname{v_{0}}{\left (x,y \right )}\right) - B_{12} \frac{\partial^{2}}{\partial x^{2}} \operatorname{w_{0}}{\left (x,y \right )} - B_{22} \frac{\partial^{2}}{\partial y^{2}} \operatorname{w_{0}}{\left (x,y \right )} - 2 B_{26} \frac{\partial^{2}}{\partial x\partial y} \operatorname{w_{0}}{\left (x,y \right )}$$
```python
Nxyf = A16*diff(u0,x) + A26*diff(v0,y) + A66*(diff(u0,y) + diff(v0,x)) - B16*diff(w0,x,2) - B26*diff(w0,y,2) - 2*B66*diff(w0,x,y)
Eq(Nxy,Nxyf)
```
$$N_{xy} = A_{16} \frac{\partial}{\partial x} \operatorname{u_{0}}{\left (x,y \right )} + A_{26} \frac{\partial}{\partial y} \operatorname{v_{0}}{\left (x,y \right )} + A_{66} \left(\frac{\partial}{\partial y} \operatorname{u_{0}}{\left (x,y \right )} + \frac{\partial}{\partial x} \operatorname{v_{0}}{\left (x,y \right )}\right) - B_{16} \frac{\partial^{2}}{\partial x^{2}} \operatorname{w_{0}}{\left (x,y \right )} - B_{26} \frac{\partial^{2}}{\partial y^{2}} \operatorname{w_{0}}{\left (x,y \right )} - 2 B_{66} \frac{\partial^{2}}{\partial x\partial y} \operatorname{w_{0}}{\left (x,y \right )}$$
```python
Mxf = B11*diff(u0,x) + B12*diff(v0,y) + B16*(diff(u0,y) + diff(v0,x)) - D11*diff(w0,x,2) - D12*diff(w0,y,2) - 2*D16*diff(w0,x,y)
Eq(Mx,Mxf)
```
$$M_{x} = B_{11} \frac{\partial}{\partial x} \operatorname{u_{0}}{\left (x,y \right )} + B_{12} \frac{\partial}{\partial y} \operatorname{v_{0}}{\left (x,y \right )} + B_{16} \left(\frac{\partial}{\partial y} \operatorname{u_{0}}{\left (x,y \right )} + \frac{\partial}{\partial x} \operatorname{v_{0}}{\left (x,y \right )}\right) - D_{11} \frac{\partial^{2}}{\partial x^{2}} \operatorname{w_{0}}{\left (x,y \right )} - D_{12} \frac{\partial^{2}}{\partial y^{2}} \operatorname{w_{0}}{\left (x,y \right )} - 2 D_{16} \frac{\partial^{2}}{\partial x\partial y} \operatorname{w_{0}}{\left (x,y \right )}$$
```python
Myf = B12*diff(u0,x) + B22*diff(v0,y) + B26*(diff(u0,y) + diff(v0,x)) - D12*diff(w0,x,2) - D22*diff(w0,y,2) - 2*D26*diff(w0,x,y)
Eq(My,Myf)
```
$$M_{y} = B_{12} \frac{\partial}{\partial x} \operatorname{u_{0}}{\left (x,y \right )} + B_{22} \frac{\partial}{\partial y} \operatorname{v_{0}}{\left (x,y \right )} + B_{26} \left(\frac{\partial}{\partial y} \operatorname{u_{0}}{\left (x,y \right )} + \frac{\partial}{\partial x} \operatorname{v_{0}}{\left (x,y \right )}\right) - D_{12} \frac{\partial^{2}}{\partial x^{2}} \operatorname{w_{0}}{\left (x,y \right )} - D_{22} \frac{\partial^{2}}{\partial y^{2}} \operatorname{w_{0}}{\left (x,y \right )} - 2 D_{26} \frac{\partial^{2}}{\partial x\partial y} \operatorname{w_{0}}{\left (x,y \right )}$$
```python
Mxyf = B16*diff(u0,x) + B26*diff(v0,y) + B66*(diff(u0,y) + diff(v0,x)) - D16*diff(w0,x,2) - D26*diff(w0,y,2) - 2*D66*diff(w0,x,y)
Eq(Mxy,Mxyf)
```
$$M_{xy} = B_{16} \frac{\partial}{\partial x} \operatorname{u_{0}}{\left (x,y \right )} + B_{26} \frac{\partial}{\partial y} \operatorname{v_{0}}{\left (x,y \right )} + B_{66} \left(\frac{\partial}{\partial y} \operatorname{u_{0}}{\left (x,y \right )} + \frac{\partial}{\partial x} \operatorname{v_{0}}{\left (x,y \right )}\right) - D_{16} \frac{\partial^{2}}{\partial x^{2}} \operatorname{w_{0}}{\left (x,y \right )} - D_{26} \frac{\partial^{2}}{\partial y^{2}} \operatorname{w_{0}}{\left (x,y \right )} - 2 D_{66} \frac{\partial^{2}}{\partial x\partial y} \operatorname{w_{0}}{\left (x,y \right )}$$
Now, combine our 6 displacement conditions with our 3 equilibrium equations to get the three governing equations
```python
eq1 = diff(Nxf,x) + diff(Nxyf,y)
eq1
```
$$A_{11} \frac{\partial^{2}}{\partial x^{2}} \operatorname{u_{0}}{\left (x,y \right )} + A_{12} \frac{\partial^{2}}{\partial x\partial y} \operatorname{v_{0}}{\left (x,y \right )} + A_{16} \left(\frac{\partial^{2}}{\partial x\partial y} \operatorname{u_{0}}{\left (x,y \right )} + \frac{\partial^{2}}{\partial x^{2}} \operatorname{v_{0}}{\left (x,y \right )}\right) + A_{16} \frac{\partial^{2}}{\partial x\partial y} \operatorname{u_{0}}{\left (x,y \right )} + A_{26} \frac{\partial^{2}}{\partial y^{2}} \operatorname{v_{0}}{\left (x,y \right )} + A_{66} \left(\frac{\partial^{2}}{\partial y^{2}} \operatorname{u_{0}}{\left (x,y \right )} + \frac{\partial^{2}}{\partial x\partial y} \operatorname{v_{0}}{\left (x,y \right )}\right) - B_{11} \frac{\partial^{3}}{\partial x^{3}} \operatorname{w_{0}}{\left (x,y \right )} - B_{12} \frac{\partial^{3}}{\partial x\partial y^{2}} \operatorname{w_{0}}{\left (x,y \right )} - 3 B_{16} \frac{\partial^{3}}{\partial x^{2}\partial y} \operatorname{w_{0}}{\left (x,y \right )} - B_{26} \frac{\partial^{3}}{\partial y^{3}} \operatorname{w_{0}}{\left (x,y \right )} - 2 B_{66} \frac{\partial^{3}}{\partial x\partial y^{2}} \operatorname{w_{0}}{\left (x,y \right )}$$
```python
eq2 = diff(Nxyf,x) + diff(Nyf,y)
eq2
```
$$A_{12} \frac{\partial^{2}}{\partial x\partial y} \operatorname{u_{0}}{\left (x,y \right )} + A_{16} \frac{\partial^{2}}{\partial x^{2}} \operatorname{u_{0}}{\left (x,y \right )} + A_{22} \frac{\partial^{2}}{\partial y^{2}} \operatorname{v_{0}}{\left (x,y \right )} + A_{26} \left(\frac{\partial^{2}}{\partial y^{2}} \operatorname{u_{0}}{\left (x,y \right )} + \frac{\partial^{2}}{\partial x\partial y} \operatorname{v_{0}}{\left (x,y \right )}\right) + A_{26} \frac{\partial^{2}}{\partial x\partial y} \operatorname{v_{0}}{\left (x,y \right )} + A_{66} \left(\frac{\partial^{2}}{\partial x\partial y} \operatorname{u_{0}}{\left (x,y \right )} + \frac{\partial^{2}}{\partial x^{2}} \operatorname{v_{0}}{\left (x,y \right )}\right) - B_{12} \frac{\partial^{3}}{\partial x^{2}\partial y} \operatorname{w_{0}}{\left (x,y \right )} - B_{16} \frac{\partial^{3}}{\partial x^{3}} \operatorname{w_{0}}{\left (x,y \right )} - B_{22} \frac{\partial^{3}}{\partial y^{3}} \operatorname{w_{0}}{\left (x,y \right )} - 3 B_{26} \frac{\partial^{3}}{\partial x\partial y^{2}} \operatorname{w_{0}}{\left (x,y \right )} - 2 B_{66} \frac{\partial^{3}}{\partial x^{2}\partial y} \operatorname{w_{0}}{\left (x,y \right )}$$
```python
eq3 = diff(Mxf,x,2) + 2*diff(Mxyf,x,y) + diff(Myf,y,2) + q
eq3
```
$$B_{11} \frac{\partial^{3}}{\partial x^{3}} \operatorname{u_{0}}{\left (x,y \right )} + B_{12} \frac{\partial^{3}}{\partial x\partial y^{2}} \operatorname{u_{0}}{\left (x,y \right )} + B_{12} \frac{\partial^{3}}{\partial x^{2}\partial y} \operatorname{v_{0}}{\left (x,y \right )} + B_{16} \left(\frac{\partial^{3}}{\partial x^{2}\partial y} \operatorname{u_{0}}{\left (x,y \right )} + \frac{\partial^{3}}{\partial x^{3}} \operatorname{v_{0}}{\left (x,y \right )}\right) + 2 B_{16} \frac{\partial^{3}}{\partial x^{2}\partial y} \operatorname{u_{0}}{\left (x,y \right )} + B_{22} \frac{\partial^{3}}{\partial y^{3}} \operatorname{v_{0}}{\left (x,y \right )} + B_{26} \left(\frac{\partial^{3}}{\partial y^{3}} \operatorname{u_{0}}{\left (x,y \right )} + \frac{\partial^{3}}{\partial x\partial y^{2}} \operatorname{v_{0}}{\left (x,y \right )}\right) + 2 B_{26} \frac{\partial^{3}}{\partial x\partial y^{2}} \operatorname{v_{0}}{\left (x,y \right )} + 2 B_{66} \left(\frac{\partial^{3}}{\partial x\partial y^{2}} \operatorname{u_{0}}{\left (x,y \right )} + \frac{\partial^{3}}{\partial x^{2}\partial y} \operatorname{v_{0}}{\left (x,y \right )}\right) - D_{11} \frac{\partial^{4}}{\partial x^{4}} \operatorname{w_{0}}{\left (x,y \right )} - 2 D_{12} \frac{\partial^{4}}{\partial x^{2}\partial y^{2}} \operatorname{w_{0}}{\left (x,y \right )} - 4 D_{16} \frac{\partial^{4}}{\partial x^{3}\partial y} \operatorname{w_{0}}{\left (x,y \right )} - D_{22} \frac{\partial^{4}}{\partial y^{4}} \operatorname{w_{0}}{\left (x,y \right )} - 4 D_{26} \frac{\partial^{4}}{\partial x\partial y^{3}} \operatorname{w_{0}}{\left (x,y \right )} - 4 D_{66} \frac{\partial^{4}}{\partial x^{2}\partial y^{2}} \operatorname{w_{0}}{\left (x,y \right )} + q$$
Yikes, I do not want to solve that (at least right now). If we assume the displacements vary only in the x direction, then we can simplify things A LOT! These simplifications are valid for cross-ply unsymmetric laminated plates, Hyer pg 616. This is applied by setting some of our material properties to zero: $ A16=A26=D16=D26=B16=B26=B12=B66=0 $
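One way to apply these simplifications in sympy, instead of re-typing each resultant by hand, is to substitute zeros for the vanishing stiffness terms. The snippet below is an illustrative sketch (not part of the original derivation) and assumes the full expressions `Nxf` and `Mxf` defined above are still in scope:
```python
# substitute 0 for the stiffness terms that vanish for a
# cross-ply unsymmetric laminate (A16=A26=B12=B16=B26=B66=D16=D26=0)
zeroed = {A16: 0, A26: 0, B12: 0, B16: 0, B26: 0, B66: 0, D16: 0, D26: 0}
Nxf.subs(zeroed)   # reduced in-plane force resultant
Mxf.subs(zeroed)   # reduced moment resultant
```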
Almost like magic, we now have some equations that aren't so scary.
```python
u0 = Function('u0')(x)
v0 = Function('v0')(x)
w0 = Function('w0')(x)
```
```python
Nxf = A11*diff(u0,x) + A12*diff(v0,y) - B11*diff(w0,x,2)
Eq(Nx, Nxf)
```
$$N_{x} = A_{11} \frac{d}{d x} \operatorname{u_{0}}{\left (x \right )} - B_{11} \frac{d^{2}}{d x^{2}} \operatorname{w_{0}}{\left (x \right )}$$
```python
Nyf = A12*diff(u0,x) + A22*diff(v0,y) - B22*diff(w0,y,2)
Eq(Ny,Nyf)
```
$$N_{y} = A_{12} \frac{d}{d x} \operatorname{u_{0}}{\left (x \right )}$$
```python
Nxyf = A66*(diff(u0,y) + diff(v0,x))
Eq(Nxy,Nxyf)
```
$$N_{xy} = A_{66} \frac{d}{d x} \operatorname{v_{0}}{\left (x \right )}$$
```python
Mxf = B11*diff(u0,x) - D11*diff(w0,x,2) - D12*diff(w0,y,2)
Eq(Mx,Mxf)
```
$$M_{x} = B_{11} \frac{d}{d x} \operatorname{u_{0}}{\left (x \right )} - D_{11} \frac{d^{2}}{d x^{2}} \operatorname{w_{0}}{\left (x \right )}$$
```python
Myf = B22*diff(v0,y) - D12*diff(w0,x,2) - D22*diff(w0,y,2)
Eq(My,Myf)
```
$$M_{y} = - D_{12} \frac{d^{2}}{d x^{2}} \operatorname{w_{0}}{\left (x \right )}$$
```python
Mxyf = 0
Eq(Mxy,Mxyf)
```
$$M_{xy} = 0$$
Now we are getting somewhere. Finally we can solve the differential equations
```python
dsolve(diff(Nx(x)))
```
$$\operatorname{N_{x}}{\left (x \right )} = C_{1}$$
```python
dsolve(diff(Mx(x),x,2)+q)
```
$$\operatorname{M_{x}}{\left (x \right )} = C_{1} + C_{2} x - \frac{q x^{2}}{2}$$
Now solve for u0 and w0 with some pixie dust
```python
eq4 = (Nxf-C1)
eq4
```
$$A_{11} \frac{d}{d x} \operatorname{u_{0}}{\left (x \right )} - B_{11} \frac{d^{2}}{d x^{2}} \operatorname{w_{0}}{\left (x \right )} - C_{1}$$
```python
eq5 = Mxf -( -q*x**2 + C2*x + C3 )
eq5
```
$$B_{11} \frac{d}{d x} \operatorname{u_{0}}{\left (x \right )} - C_{2} x - C_{3} - D_{11} \frac{d^{2}}{d x^{2}} \operatorname{w_{0}}{\left (x \right )} + q x^{2}$$
```python
eq6 = Eq(solve(eq4,diff(u0,x))[0] , solve(eq5, diff(u0,x))[0])
eq6
```
$$\frac{1}{A_{11}} \left(B_{11} \frac{d^{2}}{d x^{2}} \operatorname{w_{0}}{\left (x \right )} + C_{1}\right) = \frac{1}{B_{11}} \left(C_{2} x + C_{3} + D_{11} \frac{d^{2}}{d x^{2}} \operatorname{w_{0}}{\left (x \right )} - q x^{2}\right)$$
```python
w0f = dsolve(eq6, w0)
w0f
```
$$\operatorname{w_{0}}{\left (x \right )} = - \frac{A_{11} C_{2} x^{3}}{6 A_{11} D_{11} - 6 B_{11}^{2}} + \frac{A_{11} q x^{4}}{12 A_{11} D_{11} - 12 B_{11}^{2}} + C_{1} + C_{5} x + \frac{x^{2} \left(- A_{11} C_{3} + B_{11} C_{4}\right)}{2 A_{11} D_{11} - 2 B_{11}^{2}}$$
```python
eq7 = Eq(solve(eq6, diff(w0,x,2))[0] , solve(eq4,diff(w0,x,2))[0])
eq7
```
$$\frac{1}{A_{11} D_{11} - B_{11}^{2}} \left(- A_{11} C_{2} x - A_{11} C_{3} + A_{11} q x^{2} + B_{11} C_{1}\right) = \frac{1}{B_{11}} \left(A_{11} \frac{d}{d x} \operatorname{u_{0}}{\left (x \right )} - C_{1}\right)$$
```python
u0f = dsolve(eq7)
u0f
```
$$\operatorname{u_{0}}{\left (x \right )} = \frac{1}{A_{11} D_{11} - B_{11}^{2}} \left(- \frac{B_{11} C_{1}}{2} x^{2} - B_{11} C_{3} x - B_{11} C_{4} D_{11} + \frac{B_{11} q}{3} x^{3} + C_{2} D_{11} x + \frac{B_{11}^{3} C_{4}}{A_{11}}\right)$$
- - - -
# Network Models
Probably the easiest kinds of statistical models for us to think about are the *network models*. These types of models (like the name implies) describe the random processes which you'd find when you're only looking at one network. We can have models which assume all of the nodes connect to each other essentially randomly, models which assume that the nodes are in distinct *communities*, and many more.
The important realization to make about statistical models is that a model is *not* a network: it's the random process that *creates* a network. You can sample from a model a bunch of times, and because it's a random process, you'll end up with networks that look a little bit different each time -- but if you sampled a lot of networks and then averaged them, then you'd likely be able to get a reasonable ballpark estimation of what the model that they come from looks like.
Let's pretend that we have a network, and the network is unweighted (meaning, we only have edges or not-edges) and undirected (meaning, edges connect nodes both ways). It'd have an adjacency matrix which consists of only 1's and 0's, because the only information we care about is whether there's an edge or not. The model that generated this network is pretty straightforward: there's just some universal probability that each node connects to each other node, and there are 10 nodes.
```python
import matplotlib.pyplot as plt
from graspologic.simulations import er_np
from graspologic.plot import binary_heatmap
%config InlineBackend.figure_format = 'retina'
fig, ax = plt.subplots(figsize=(4,4))
n = 10
p = .5
A = er_np(n, p)
binary_heatmap(A, ax=ax, yticklabels=5, linewidths=1, linecolor="black", title="A small, simple network");
```
This small, simple network is one of many possible networks that we can generate with this model. Here are some more:
```python
fig, axs = plt.subplots(nrows=1, ncols=3, figsize=(12, 4))
for ax in axs.flat:
A = er_np(n, p)
hmap = binary_heatmap(A, ax=ax, yticklabels=5, linewidths=1, linecolor="black")
plt.suptitle("Three small, simple networks", fontsize=20)
```
One reasonable question to ask is how *many* possible networks we could make in this simple scenario? We've already made four, and it seems like there are more that this model could potentially generate.
As it turns out, "more" is a pretty massive understatement. To actually figure out the number, think about the first potential edge: there are two possibilities (the edge exists or it doesn't), so you can generate two networks from a model with a single potential edge. Now, let's add an additional potential edge. For each of the first two possibilities, there are two more -- so there are $2 \times 2 = 4$ total possible networks. Every potential edge that we add doubles the number of networks - and since an adjacency matrix for $n$ nodes has $n \times n$ entries, the total number of possible networks ends up being $2^{n \times n} = 2^{n^2}$! So this ten-node model can generate $2^{10^2} = 2^{100}$ networks, which is, when you think carefully, an absurdly, ridiculously big number.
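To get a sense of scale, here is a quick computation of that count (an illustrative sketch added here, not part of the original text):
```python
# number of binary 10 x 10 adjacency matrices, counting every entry independently
n = 10
count = 2 ** (n * n)
print(f"{count:.3e}")  # about 1.268e+30 possible networks
```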
Throughout many of the succeeding sections, we will attempt to make the content accessible to readers with, and without, a more technical background. To this end, we have added sections with trailing asterisks (\*). While we believe these sections build technical depth, we don't think they are critical to understanding many of the core ideas for network machine learning. In contrast with unstarred sections, these sections will assume familiarity with more advanced mathematical and probability concepts.
## Foundation*
To understand network models, it is crucial to understand the concept of a network as a random quantity, taking a probability distribution. We have a realization $A$, and we think that this realization is random in some way. Stated another way, we think that there exists a network-valued random variable $\mathbf A$ that governs the realizations we get to see. Since $\mathbf A$ is a random variable, we can describe it using a probability distribution. The distribution of the random network $\mathbf A$ is the function $\mathbb P$ which assigns probabilities to every possible configuration that $\mathbf A$ could take. Notationally, we write that $\mathbf A \sim \mathbb P$, which is read in words as "the random network $\mathbf A$ is distributed according to $\mathbb P$."
In the preceding description, we made a fairly substantial claim: $\mathbb P$ assigns probabilities to every possible configuration that realizations of $\mathbf A$, denoted by $A$, could take. How many possibilities are there for a network with $n$ nodes? Let's limit ourselves to simple networks: that is, $A$ takes values that are unweighted ($A$ is *binary*), undirected ($A$ is *symmetric*), and loopless ($A$ is *hollow*). In words, $\mathcal A_n$ is the set of all possible adjacency matrices $A$ that correspond to simple networks with $n$ nodes. Stated another way: every $A$ that is found in $\mathcal A$ is a *binary* $n \times n$ matrix ($A \in \{0, 1\}^{n \times n}$), $A$ is symmetric ($A = A^\top$), and $A$ is *hollow* ($diag(A) = 0$, or $A_{ii} = 0$ for all $i = 1,...,n$). Formally, we describe $\mathcal A_n$ as:
\begin{align*}
\mathcal A_n \triangleq \left\{A : A \textrm{ is an $n \times n$ matrix with $0$s and $1$s}, A\textrm{ is symmetric}, A\textrm{ is hollow}\right\}
\end{align*}
To summarize the statement that $\mathbb P$ assigns probabilities to every possible configuration that realizations of $\mathbf A$ can take, we write that $\mathbb P : \mathcal A_n \rightarrow [0, 1]$. This means that for any $A \in \mathcal A_n$ which is a possible realization of a random network $\mathbf A$, that $\mathbb P(\mathbf A = A)$ is a probability (it takes a value between $0$ and $1$). If it is completely unambiguous what the random variable $\mathbf A$ refers to, we might abbreviate $\mathbb P(\mathbf A = A)$ with $\mathbb P(A)$. This statement can alternatively be read that the probability that the random variable $\mathbf A$ takes the value $A$ is $\mathbb P(A)$. Finally, let's address that question we had in the previous paragraph. How many possible adjacency matrices are in $\mathcal A_n$?
Let's imagine what just one $A \in \mathcal A_n$ can look like. Note that each matrix $A$ has $n \times n = n^2$ possible entries, in total, since $A$ is an $n \times n$ matrix. There are $n$ possible self-loops for a network, but since $\mathbf A$ is simple, it is loopless. This means that we can subtract $n$ possible edges from $n^2$, leaving us with $n^2 - n = n(n-1)$ possible edges that might or might not be connected. If we think in terms of a realization $A$, this means that we are ignoring the diagonal entries $a_{ii}$, for all $i \in [n]$. Remember that a simple network is also undirected. In terms of the realization $A$, this means that for every pair $i$ and $j$, that $a_{ij} = a_{ji}$. If we were to learn about an entry in the upper triangle of $A$ where $a_{ij}$ is such that $j > i$, note that we have also learned what $a_{ji}$ is, too. This symmetry of $A$ means that of the $n(n-1)$ entries that are not on the diagonal of $A$, we would, in fact, "double count" the possible number of unique values that $A$ could have. This means that $A$ has a total of $\frac{1}{2}n(n - 1)$ possible entries which are *free*, which is equal to the expression $\binom{n}{2}$. Finally, note that for each entry of $A$, that the adjacency can take one of two possible values: $0$ or $1$. To write this down formally, for every possible edge which is randomly determined, we have *two* possible values that edge could take. Let's think about building some intuition here:
1. If $A$ is $2 \times 2$, there are $\binom{2}{2} = 1$ unique entry of $A$, which takes one of $2$ values. There are $2$ possible ways that $A$ could look:
\begin{align*}
\begin{bmatrix}
0 & 1 \\
1 & 0
\end{bmatrix}\textrm{ or }
\begin{bmatrix}
0 & 0 \\
0 & 0
\end{bmatrix}
\end{align*}
2. If $A$ is $3 \times 3$, there are $\binom{3}{2} = \frac{3 \times 2}{2} = 3$ unique entries of $A$, each of which takes one of $2$ values. There are $8$ possible ways that $A$ could look:
\begin{align*}
&\begin{bmatrix}
0 & 1 & 1 \\
1 & 0 & 1 \\
1 & 1 & 0
\end{bmatrix}\textrm{ or }
\begin{bmatrix}
0 & 1 & 0 \\
1 & 0 & 1 \\
0 & 1 & 0
\end{bmatrix}\textrm{ or }
\begin{bmatrix}
0 & 0 & 1 \\
0 & 0 & 1 \\
1 & 1 & 0
\end{bmatrix}
\textrm{ or }\\
&\begin{bmatrix}
0 & 1 & 1 \\
1 & 0 & 0 \\
1 & 0 & 0
\end{bmatrix}\textrm{ or }
\begin{bmatrix}
0 & 0 & 1 \\
0 & 0 & 0 \\
1 & 0 & 0
\end{bmatrix}\textrm{ or }
\begin{bmatrix}
0 & 0 & 0 \\
0 & 0 & 1 \\
0 & 1 & 0
\end{bmatrix}\textrm{ or }\\
&\begin{bmatrix}
0 & 1 & 0 \\
1 & 0 & 0 \\
0 & 0 & 0
\end{bmatrix}\textrm{ or }
\begin{bmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0
\end{bmatrix}
\end{align*}
How do we generalize this to an arbitrary choice of $n$? The answer is to use *combinatorics*. Basically, the approach is to look at each entry of $A$ which can take different values, and multiply the total number of possibilities by $2$ for every element which can take different values. Stated another way, if there are $2$ choices for each one of $x$ possible items, we have $2^x$ possible ways in which we could select those $x$ items. But we already know how many different elements there are in $A$, so we are ready to come up with an expression for the number. In total, there are $2^{\binom n 2}$ unique adjacency matrices in $\mathcal A_n$. Stated another way, the *cardinality* of $\mathcal A_n$, described by the expression $|\mathcal A_n|$, is $2^{\binom n 2}$. The **cardinality** here just means the number of elements that the set $\mathcal A_n$ contains. When $n$ is just $15$, note that $\left|\mathcal A_{15}\right| = 2^{\binom{15}{2}} = 2^{105}$, which when expressed as a power of $10$, is more than $10^{30}$ possible networks that can be realized with just $15$ nodes! As $n$ increases, how many unique possible networks are there? In the below figure, look at the value of $|\mathcal A_n| = 2^{\binom n 2}$ as a function of $n$. As we can see, as $n$ gets big, $|\mathcal A_n|$ grows really really fast!
```python
import seaborn as sns
import numpy as np
from math import comb
n = np.arange(2, 51)
logAn = np.array([comb(ni, 2) for ni in n])*np.log10(2)
ax = sns.lineplot(x=n, y=logAn)
ax.set_title("")
ax.set_xlabel("Number of Nodes")
ax.set_ylabel("Number of Possible Graphs $|A_n|$ (log scale)")
ax.set_yticks([50, 100, 150, 200, 250, 300, 350])
ax.set_yticklabels(["$10^{{{pow:d}}}$".format(pow=d) for d in [50, 100, 150, 200, 250, 300, 350]])
ax;
```
So, now we know that we have probability distributions on networks, and a set $\mathcal A_n$ which defines all of the adjacency matrices that every probability distribution must assign a probability to. Now, just what is a network model? A **network model** is a set $\mathcal P$ of probability distributions on $\mathcal A_n$. Stated another way, we can describe $\mathcal P$ to be:
\begin{align*}
\mathcal P &\subseteq \{\mathbb P: \mathbb P\textrm{ is a probability distribution on }\mathcal A_n\}
\end{align*}
In general, we will simplify $\mathcal P$ through something called *parametrization*. We define $\Theta$ to be the set of all possible parameters of the random network model, and $\theta \in \Theta$ is a particular parameter choice that governs the parameters of a specific random network $\mathbf A$. In this case, we will write $\mathcal P$ as the set:
\begin{align*}
\mathcal P(\Theta) &\triangleq \left\{\mathbb P_\theta : \theta \in \Theta\right\}
\end{align*}
If $\mathbf A$ is a random network that follows a network model, we will write that $\mathbf A \sim \mathbb P_\theta$, for some choice $\theta$. We will often use the shorthand $\mathbf A \sim \mathbb P$.
If you are used to traditional univariate or multivariate statistical modelling, an extremely natural choice for when you have a discrete sample space (like $\mathcal A_n$, which is discrete because we can count it) would be to use a categorical model. In the categorical model, we would have a single parameter for each possible configuration of an $n$-node network; that is, $|\theta| = \left|\mathcal A_n\right| = 2^{\binom n 2}$. What is wrong with this model? The limitations are two-fold:
1. As we explained previously, when $n$ is just $15$, we would need over $10^{30}$ bits of storage just to define $\theta$. This amounts to more than $10^{8}$ zetabytes, which exceeds the storage capacity of *the entire world*.
2. With a single network observed (or really, any number of networks we could collect in the real world) we would never be able to estimate $2^{\binom n 2}$ parameters for any reasonably non-trivial number of nodes $n$. For the case of one observed network $A$, an estimate of $\theta$ (referred to as $\hat\theta$) would simply be for $\hat\theta$ to have a $1$ in the entry corresponding to our observed network, and a $0$ everywhere else. Inferentially, this would imply that the network-valued random variable $\mathbf A$ which governs realizations $A$ is deterministic, even if this is not the case. Even if we collected potentially *many* observed networks, we would still (with very high probability) just get $\hat \theta$ as a series of point masses on the observed networks we see, and $0$s everywhere else. This would mean our parameter estimates $\hat\theta$ would not generalize to new observations at *all*, with high probability.
So, what are some more reasonable descriptions of $\mathcal P$? We explore some choices below. Particularly, we will be most interested in the *independent-edge* networks. These are the families of networks in which the generative procedure which governs the random networks assume that the edges of the network are generated *independently*. **Statistical Independence** is a property which greatly simplifies many of the modelling assumptions which are crucial for proper estimation and rigorous statistical inference, which we will learn more about in the later chapters.
### Equivalence Classes*
In all of the below models, we will explore the concept of the **likelihood equivalence class**, or an *equivalence class*, for short. The likelihood $\mathcal L$ is a function which, in general, describes how effectively a particular observation can be described by a random variable $\mathbf A$ with parameters $\theta$, written $\mathbf A \sim F(\theta)$. Formally, the likelihood is the function where $\mathcal L_\theta(A) \propto \mathbb P_\theta(A)$; that is, the likelihood is proportional to the probability of observing the realization $A$ if the underlying random variable $\mathbf A$ has parameters $\theta$. Why does this matter when it comes to equivalence classes? An equivalence class is a subset of the sample space $E \subseteq \mathcal A_n$, which has the following properties. Holding the parameters $\theta$ fixed:
1. If $A$ and $A'$ are members of the same equivalence class $E$ (written $A, A' \in E$), then $\mathcal L_\theta(A) = \mathcal L_\theta(A')$.
2. If $A$ and $A''$ are members of different equivalence classes; that is, $A \in E$ and $A'' \in E'$ where $E, E'$ are equivalence classes, then $\mathcal L_\theta(A) \neq \mathcal L_\theta(A'')$.
3. Using points 1 and 2, we can establish that if $E$ and $E'$ are two different equivalence classes, then $E \cap E' = \varnothing$. That is, the equivalence classes are **mutually disjoint**.
4. We can use the preceding properties to deduce that given the sample space $\mathcal A_n$ and a likelihood function $\mathcal L_\theta$, we can define a partition of the sample space into equivalence classes $E_i$, where $i \in \mathcal I$ is an arbitrary indexing set. A **partition** of $\mathcal A_n$ is a sequence of sets which are mutually disjoint, and whose union is the whole space. That is, $\bigcup_{i \in \mathcal I} E_i = \mathcal A_n$.
We will see more below about how the equivalence classes come into play with network models, and in a later section, we will see their relevance to the estimation of the parameters $\theta$.
### Independent-Edge Random Networks*
The below models are all special families of something called **independent-edge random networks**. An independent-edge random network is a network-valued random variable, in which the collection of edges are all independent. In words, this means that for every adjacency $\mathbf a_{ij}$ of the network-valued random variable $\mathbf A$, that $\mathbf a_{ij}$ is independent of $\mathbf a_{i'j'}$, any time that $(i,j) \neq (i',j')$. When the networks are simple, the easiest thing to do is to assume that each edge $(i,j)$ is connected with some probability (which may be different for each edge) $p_{ij}$. We use the $ij$ subscript to denote that this probability is not necessarily the same for each edge. This simple model can be described as $\mathbf a_{ij}$ has the distribution $Bern(p_{ij})$, for every $j > i$, and is independent of every other edge in $\mathbf A$. We only look at the entries $j > i$, since our networks are simple. This means that knowing a realization of $\mathbf a_{ij}$ also gives us the realization of $\mathbf a_{ji}$ (and thus $\mathbf a_{ji}$ is a *deterministic* function of $\mathbf a_{ij}$). Further, we know that the random network is loopless, which means that every $\mathbf a_{ii} = 0$. We will call the matrix $P = (p_{ij})$ the **probability matrix** of the network-valued random variable $\mathbf A$. In general, we will see a common theme for the likelihoods of a realization $A$ of a network-valued random variable $\mathbf A$, which is that it will greatly simplify our computation. Remember that if $\mathbf x$ and $\mathbf y$ are binary variables which are independent, that $\mathbb P(\mathbf x = x, \mathbf y = y) = \mathbb P(\mathbf x = x) \mathbb P(\mathbf y = y)$. Using this fact:
\begin{align*}
\mathcal L_\theta(A) &= \mathbb P(\mathbf A = A) \\
&= \mathbb P(\mathbf a_{11} = a_{11}, \mathbf a_{12} = a_{12}, ..., \mathbf a_{nn} = a_{nn}) \\
&= \mathbb P(\mathbf a_{ij} = a_{ij} \text{ for all }j > i) \\
&= \prod_{j > i}\mathbb P(\mathbf a_{ij} = a_{ij}), \;\;\;\;\textrm{Independence Assumption}
\end{align*}
Next, we will use the fact that if a random variable $\mathbf a_{ij}$ has the Bernoulli distribution with probability $p_{ij}$, that $\mathbb P(\mathbf a_{ij} = a_{ij}) = p_{ij}^{a_{ij}}(1 - p_{ij})^{1 - a_{ij}}$:
\begin{align*}
\mathcal L_\theta(A) &= \prod_{j > i}p_{ij}^{a_{ij}}(1 - p_{ij})^{1 - a_{ij}}
\end{align*}
Now that we've specified a likelihood and a very generalizable model, we've learned the full story behind network models and are ready to skip to estimating parameters, right? *Wrong!* Unfortunately, if we tried to estimate each $p_{ij}$ individually, we would obtain that $\hat p_{ij} = a_{ij}$ if we only have one realization $A$. Even if we had many realizations of $\mathbf A$, this still would not be very interesting, since we have a *lot* of $p_{ij}$s to estimate, and we've ignored any sort of structural model that might give us deeper insight into $\mathbf A$. In the below sections, we will learn successively less restrictive (and hence, *more expressive*) assumptions about the $p_{ij}$s, which will allow us to convey fairly complex random networks, but *still* leave us with plenty of interesting things to learn about later on.
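As a small illustration (a sketch added here, not from the original text), the likelihood above is straightforward to evaluate numerically for any probability matrix $P$ and realization $A$; the helper below is hypothetical and simply computes the product over the upper triangle on a log scale for numerical stability:
```python
import numpy as np

def indep_edge_log_likelihood(A, P):
    """Log-likelihood of a simple network A under an independent-edge model
    with probability matrix P: sum over j > i of the Bernoulli log-pmf."""
    iu = np.triu_indices_from(A, k=1)  # strictly upper-triangular entries
    a, p = A[iu], P[iu]
    return np.sum(a * np.log(p) + (1 - a) * np.log(1 - p))

# example: a 3-node network under a constant probability matrix
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]])
P = np.full((3, 3), 0.3)
print(indep_edge_log_likelihood(A, P))
```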
## Erdös-Rényi (ER)
The simplest random network model is called the Erdös Rényi (ER) model<sup>1</sup>. Consider a social network, where nodes represent students, and edges represent whether a pair of students are friends. The simplest possible thing to do with our network would be to assume that a given pair of students within our network have the same chance of being friends as any other pair of people we select. The Erdös Rényi model formalizes this relatively simple model with a single parameter:
| Parameter | Space | Description |
| --- | --- | --- |
| $p$ | $[0, 1]$ | Probability that an edge exists between a pair of nodes |
In an Erdös Rényi network, each pair of nodes is connected with probability $p$, and therefore not connected with probability $1-p$. Statistically, we say that for each edge $\mathbf{a}_{ij}$, that $\mathbf{a}_{ij}$ is sampled independently and identically from a $Bern(p)$ distribution, whenever $j > i$. The word "independent" means that edges in the network occurring or not occurring do not affect one another. For instance, this means that if we knew a student named Alice was friends with Bob, and Alice was also friends with Chadwick, that we do not learn any information about whether Bob is friends with Chadwick. The word "identical" means that every edge in the network has the same probability $p$ of being connected. If Alice and Bob are friends with probability $p$, then Alice and Chadwick are friends with probability $p$, too. When $i > j$, we allow $\mathbf a_{ij} = \mathbf a_{ji}$. This means that the connections *across the diagonal* of the adjacency matrix are all equal, which means that we have built-in the property of undirectedness into our networks. Also, we let $\mathbf a_{ii} = 0$, which means that all self-loops are always unconnected. This means that all the networks are loopless, and consequently the adjacency matrices are hollow. If $\mathbf A$ is the adjacency matrix for an ER network with probability $p$, we write that $\mathbf A \sim ER_n(p)$.
### Practical Utility
In practice, the ER model seems like it might be a little too simple to be useful. Why would it ever be useful to think that the best we can do to describe our network is to say that connections exist with some probability? Does this miss a *lot* of useful questions we might want to answer? Fortunately, there are a number of ways in which the simplicity of the ER model is useful. Given a probability and a number of nodes, we can easily describe the properties we would expect to see in a network if that network were ER. For instance, we know what the degree distribution of an ER network can look like. We can reverse this idea, too: given a network we think might *not* be ER, we could check whether it's different in some way from a network which is ER. For instance, if we see that half of the nodes have a very high degree, and the rest of the nodes have a much lower degree, we can reasonably conclude the network might be more complex than can be described by the ER model. If this is the case, we might look for other, more complex, models that could describe our network.
```{admonition} Working Out the Expected Degree in an Erdös-Rényi Network
Suppose that $\mathbf A$ is a simple network which is random. The network has $n$ nodes $\mathcal V = (v_i)_{i = 1}^n$. Recall that in a simple network, the node degree is $deg(v_i) = \sum_{j = 1}^n \mathbf a_{ij}$. What is the expected degree of a node $v_i$ of a random network $\mathbf A$ which is Erdös-Rényi?
To describe this, we will compute the expected value of the degree $deg(v_i)$, written $\mathbb E\left[deg(v_i)\right]$. Let's see what happens:
\begin{align*}
\mathbb E\left[deg(v_i)\right] &= \mathbb E\left[\sum_{j = 1}^n \mathbf a_{ij}\right] \\
&= \sum_{j = 1}^n \mathbb E[\mathbf a_{ij}]
\end{align*}
We use the *linearity of expectation* in the line above, which means that the expectation of a sum with a finite number of terms being summed over ($n$, in this case) is the sum of the expectations. Since the network is loopless, the diagonal term is not random at all: $\mathbf a_{ii} = 0$, so $\mathbb E[\mathbf a_{ii}] = 0$. Every other edge $\mathbf a_{ij}$ with $j \neq i$ has the same distribution, $Bern(p)$, and the expected value of a random quantity which takes a Bernoulli distribution is just the probability $p$. This means every term $\mathbb E[\mathbf a_{ij}] = p$ whenever $j \neq i$. Therefore:
\begin{align*}
\mathbb E\left[deg(v_i)\right] &= \sum_{j \neq i} p = (n - 1)\cdot p
\end{align*}
since the $n - 1$ off-diagonal terms being summed all have the same expected value. This holds for *every* node $v_i$, which means that the expected degree of every node in an undirected ER network is the same, $(n - 1) \cdot p$.
```
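To sanity-check this result numerically (an illustrative sketch, not part of the original text), we can sample several ER networks with graspologic and compare the average node degree against $(n-1)\cdot p$:
```python
import numpy as np
from graspologic.simulations import er_np

n, p = 50, 0.3
# average node degree over 100 sampled ER_{50}(0.3) networks
avg_degrees = [er_np(n=n, p=p).sum(axis=1).mean() for _ in range(100)]
print(np.mean(avg_degrees), (n - 1) * p)  # both should be close to 14.7
```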
<!-- The ER model is also useful for the development of new computational techniques to use on random networks. This is because even if the "best" model for a network is something much more complex, we can still calculate an edge probability $p$ for the network without needing any information but the adjacency matrix. Consider, for instance, a case where we design a new algorithm for a social network, and we want to know how much more RAM we might need as the social network grows. We might want to investigate how the algorithm scales to networks with different numbers of people and different connection probabilities that might be realistic as our social network expands in popularity. Examining how the algorithm operates on ER networks with different values of $n$ and $p$ might be helpful. This is an especially common approach when people deal with networks that are said to be *sparse*. A **sparse network** is a network in which the number of edges is much less than the total possible number of edges. This contrasts with a **dense network**, which is a network in which the number of edges is close to the maximum number of possible edges. In the case of an $ER_{n}(p)$ network, the network is sparse when $p$ is small (closer to $0$), and dense when $p$ is large (closer to $1$). -->
### Code Examples
In the next code block, we look to sample a single ER network with $50$ nodes and an edge probability $p$ of $0.3$:
```python
from graspologic.plot import heatmap
from graspologic.simulations import er_np
n = 50 # network with 50 nodes
p = 0.3 # probability of an edge existing is .3
# sample a single simple adjacency matrix from ER(50, .3)
A = er_np(n=n, p=p, directed=False, loops=False)
# and plot it
binary_heatmap(A, title="$ER_{50}(0.3)$ Simulation")
plt.show()
```
Above, we visualize the network using a heatmap. The dark red squares indicate that an edge exists between a pair of nodes, and white squares indicate that an edge does not exist between a pair of nodes.
Next, let's see what happens when we use a higher edge probability, like $p=0.7$:
```python
p = 0.7 # network has an edge probability of 0.7
# sample a single adjacency matrix from ER(50, 0.7)
A = er_np(n=n, p=p, directed=False, loops=False)
# and plot it
binary_heatmap(A, title="$ER_{50}(0.7)$ Simulation")
plt.show()
```
As the edge probability increases, the sampled adjacency matrix tends to indicate that there are more connections in the network. This is because there is a higher chance of an edge existing when $p$ is larger.
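We can make that statement concrete by computing the empirical edge density (the fraction of the $\binom{n}{2}$ possible edges that are present) of a sample at each probability. This is an illustrative sketch added here, using a hypothetical helper `edge_density`:
```python
import numpy as np
from graspologic.simulations import er_np

def edge_density(A):
    # fraction of the possible edges of a simple network that are present
    n = A.shape[0]
    return A[np.triu_indices(n, k=1)].mean()

print(edge_density(er_np(n=50, p=0.3)))  # roughly 0.3
print(edge_density(er_np(n=50, p=0.7)))  # roughly 0.7
```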
### Likelihood*
What is the likelihood for realizations of Erdös-Rényi networks? Remember that for independent-edge networks, the likelihood can be written:
\begin{align*}
\mathcal L_{\theta}(A) &= \prod_{j > i} \mathbb P_\theta(\mathbf{a}_{ij} = a_{ij})
\end{align*}
Next, we recall that by assumption of the ER model, that the probability matrix $P = (p)$, or that $p_{ij} = p$ for all $i,j$. Therefore:
\begin{align*}
\mathcal L_\theta(A) &\propto \prod_{j > i} p^{a_{ij}}(1 - p)^{1 - a_{ij}} \\
&= p^{\sum_{j > i} a_{ij}} \cdot (1 - p)^{\binom{n}{2} - \sum_{j > i}a_{ij}} \\
&= p^{m} \cdot (1 - p)^{\binom{n}{2} - m}
\end{align*}
This means that the likelihood $\mathcal L_\theta(A)$ is a function *only* of the number of edges $m = \sum_{j > i}a_{ij}$ in the network represented by adjacency matrix $A$. The equivalence class on the Erdös-Rényi networks are the sets:
\begin{align*}
E_{i} &= \left\{A \in \mathcal A_n : m = i\right\}
\end{align*}
where $i$ indexes from $0$ (the minimum number of edges possible) all the way up to $\binom{n}{2}$ (the maximum number of edges possible in a simple network). All of the relationships for equivalence classes discussed above apply to the sets $E_i$.
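As an illustrative sketch (not part of the original text), we can check numerically that the ER likelihood depends on a realization only through its edge count $m$: two different networks with the same number of edges receive exactly the same likelihood, so they sit in the same equivalence class.
```python
import numpy as np
from math import comb

def er_log_likelihood(A, p):
    # the ER likelihood depends on A only through m, the number of edges
    n = A.shape[0]
    m = A[np.triu_indices(n, k=1)].sum()
    return m * np.log(p) + (comb(n, 2) - m) * np.log(1 - p)

# two different 3-node networks, each with exactly one edge
A1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]])
A2 = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]])
print(er_log_likelihood(A1, 0.3) == er_log_likelihood(A2, 0.3))  # True
```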
## Stochastic Block Model (SBM)
Imagine that we are flipping a single fair coin. A *fair* coin is a coin in which the probability of seeing either a heads or a tails on a coin flip is $\frac{1}{2}$. Let's imagine we flip the coin $20$ times, and we see $10$ heads and $10$ tails.
What would happen if we were to flip $2$ coins, which had a different probability of seeing heads or tails? Imagine that we flip each coin 10 times. The first 10 flips are with a fair coin, and we might see an outcome of five heads and five tails. On the other hand, the second ten flips are not with a fair coin, but with a coin that has a $\frac{4}{5}$ probability to land on heads, and a $\frac{1}{5}$ probability of landing on tails. In the second set of $10$ flips, we might see an outcome of nine heads and one tails.
In the first set of 20 coin flips, all of the coin flips are performed with the same coin. Stated another way, we have a single *cluster*, or a set of coin flips which are similar. On the other hand, in the second set of twenty coin flips, ten of the coin flips are performed with a fair coin, and ten of the coin flips are performed with a different coin which is not fair. Here, we have two *clusters* of coin flips, those that occur with the first coin, and those that occur with the second coin. Since the first cluster of coin flips are with a fair coin, we expect that coin flips from the first cluster will not necessarily have an identical number of heads and tails, but at least a similar number of heads and tails. On the other hand, coin flips from the second cluster will tend to have more heads than tails.
What does this example have to do with networks? In the above examples, the two sets of coin flips differ in the number of coins with different probabilities that we use for the example. The first example has only one coin, whereas the second example has two coins with different probabilities of heads or tails. If we were to assume that the second example had been performed with only a single coin when in reality it was performed with two different coins, we would be unable to capture that the second 10 coin flips had a substantially different chance of landing on heads than the first ten coin flips. Just like coin flips can be performed with fundamentally different coins, the nodes of a network could also be fundamentally different. The way in which two nodes differ (or do not differ) sometimes holds value in determining the probability that an edge exists between them.
To generalize this example to a network, let's imagine that we have $100$ students, each of whom can go to one of two possible schools: school $1$ or school $2$. Our network has $100$ nodes, and each node represents a single student. The edges of this network represent whether a pair of students are friends. Intuitively, if two students go to the same school, it might make sense to say that they have a higher chance of being friends than if they do not go to the same school. If we were to try to characterize this network using an ER network, we would run into a problem very similar to when we tried to capture the two cluster coin flip example with only a single coin. Intuitively, there must be a better way!
The Stochastic Block Model, or SBM, captures this idea by assigning each of the $n$ nodes in the network to one of $K$ communities. A **community** is a group of nodes within the network. In our example case, the communities would represent the schools that students are able to attend in our network. In an SBM, instead of describing all pairs of nodes with a fixed probability like with the ER model, we instead describe properties that hold for edges between *pairs of communities*. In this sense, for a given school, we could think of the network that describes that school's students as ER. There are two types of SBMs: one in which the node-assignment vector is treated as *unknown* and one in which the node-assignment vector is treated as *known* (it is a *node attribute* for the network).
### *A Priori* Stochastic Block Model
The *a priori* SBM is an SBM in which we know *a priori* (that is, ahead of time) which nodes are in which communities. Here, we will use the variable $K$ to denote the maximum number of different communities. The ordering of the communities does not matter; the community we call $1$ versus $2$ versus $K$ is largely a symbolic distinction (the only thing that matters is that they are *different*). The *a priori* SBM has the following parameter:
| Parameter | Space | Description |
| --- | --- | --- |
| $B$ | [0,1]$^{K \times K}$ | The block matrix, which assigns edge probabilities for pairs of communities |
To describe the *a priori* SBM, we will use a latent variable model. To do so, we will assume there is some vector-valued random variable, $\vec{\pmb \tau}$, which we will call the **node assignment vector**. This random variable takes values $\vec\tau$ which are in the space $\{1,...,K\}^n$. That means for each $\tau_i$ that is an element of a realization of $\vec{\pmb \tau}$, that $\tau_i$ takes one of $K$ possible values. Each node receives a community assignment, so we say that $i$ goes from $1$ to $n$. Stated another way, each node $i$ of our network receives an assignment $\tau_i$ to one of the $K$ communities. This model is called the *a priori* SBM because we use it when we have a realization $\vec\tau$ that we know ahead of time. In our social network example, for instance, $\tau_i$ would reflect that each student can attend one of two possible schools. For a single node $i$ that is in community $\ell$, where $\ell \in \{1, ..., K\}$, we write that $\tau_i = \ell$.
Next, let's discuss the matrix $B$, which is known as the **block matrix** of the SBM. We write down that $B \in [0, 1]^{K \times K}$, which means that the block matrix is a matrix with $K$ rows and $K$ columns. If we have a pair of nodes and know which of the $K$ communities each node is from, the block matrix tells us the probability that those two nodes are connected. If our networks are simple, the matrix $B$ is also symmetric, which means that if $b_{kk'} = p$ where $p$ is a probability, that $b_{k'k} = p$, too. The requirement of $B$ to be symmetric exists *only* if we are dealing with simple networks, since they are undirected; if we relax the requirement of undirectedness (and allow directed networks) $B$ no longer need be symmetric.
Finally, let's think about how to write down the generative model for the *a priori* SBM. Intuitively, what we want to reflect is that, if we know that node $i$ is in community $\ell$ and node $j$ is in community $k$, the $(\ell, k)$ entry of the block matrix is the probability that $i$ and $j$ are connected. We say that given $\tau_i = k'$ and $\tau_j = k$, $\mathbf a_{ij}$ is sampled independently from a $Bern(b_{k' k})$ distribution for all $j > i$. Note that the adjacencies $\mathbf a_{ij}$ are not *necessarily* identically distributed. Consider, for instance, another pair of nodes, $i'$ and $j'$, where $\tau_{i'} = k'$ and $\tau_{j'} = k'$. Then $\mathbf a_{i'j'}$ would have probability $b_{k'k'}$ instead of $b_{k'k}$, which specifies a different Bernoulli distribution (since the probabilities are different). If $\mathbf A$ is an *a priori* SBM network with parameter $B$, and $\vec{\tau}$ is a realization of the node-assignment vector, we write that $\mathbf A \sim SBM_{n,\vec \tau}(B)$.
### Code Examples
We just covered a lot of intuition! This intuition will come in handy later, but let's take a break from the theory by working through an example. Say we have $300$ students, and we know that each student goes to one of two possible schools. We will begin by thinking about the *a priori* SBM, since it's a little more straightforward to generate samples. Remember the *a priori* SBM is the SBM where we already have a realization of $\vec{\pmb \tau}$ ahead of time. We don't really care too much about the ordering of the students for now, so let's just assume that the first $150$ students all go to school $1$, and the second $150$ students all go to school $2$. Let's assume that the students from school $1$ are better friends in general than the students from school $2$, so we'll say that the probability of two students who both go to school $1$ being friends is $0.5$, and the probability of two students who both go to school $2$ being friends is $0.3$. Finally, let's assume that if one student goes to school $1$ and the other student goes to school $2$, that the probability that they are friends is $0.2$.
```{admonition} Thought Exercise
Before you read on, try to think to yourself about what the node-assignment vector $\vec \tau$ and the block matrix $B$ look like.
```
Next, let's plot what $\vec \tau$ and $B$ look like:
```python
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import matplotlib
def plot_tau(tau, title="", xlab="Node"):
cmap = matplotlib.colors.ListedColormap(["skyblue", 'blue'])
fig, ax = plt.subplots(figsize=(10,2))
with sns.plotting_context("talk", font_scale=1):
ax = sns.heatmap((tau - 1).reshape((1,tau.shape[0])), cmap=cmap,
ax=ax, cbar_kws=dict(shrink=1), yticklabels=False,
xticklabels=False)
ax.set_title(title)
cbar = ax.collections[0].colorbar
cbar.set_ticks([0.25, .75])
cbar.set_ticklabels(['School 1', 'School 2'])
ax.set(xlabel=xlab)
ax.set_xticks([.5,149.5,299.5])
ax.set_xticklabels(["1", "150", "300"])
cbar.ax.set_frame_on(True)
return
n = 300 # number of students
# tau is a column vector of 150 1s followed by 150 2s
# this vector gives the school each of the 300 students are from
tau = np.vstack((np.ones((int(n/2),1)), np.full((int(n/2),1), 2)))
plot_tau(tau, title="Tau, Node Assignment Vector",
xlab="Student")
```
So as we can see, the first $150$ students are from school $1$, and the second $150$ students are from school $2$. Next, let's look at the block matrix $B$:
```python
K = 2 # 2 communities in total
# construct the block matrix B as described above
B = np.zeros((K, K))
B[0,0] = .5
B[0,1] = B[1,0] = .2
B[1,1] = .3
```
```python
def plot_block(X, title="", blockname="School", blocktix=[0.5, 1.5],
blocklabs=["School 1", "School 2"]):
fig, ax = plt.subplots(figsize=(8, 6))
with sns.plotting_context("talk", font_scale=1):
ax = sns.heatmap(X, cmap="Purples",
ax=ax, cbar_kws=dict(shrink=1), yticklabels=False,
xticklabels=False, vmin=0, vmax=1)
ax.set_title(title)
cbar = ax.collections[0].colorbar
ax.set(ylabel=blockname, xlabel=blockname)
ax.set_yticks(blocktix)
ax.set_yticklabels(blocklabs)
ax.set_xticks(blocktix)
ax.set_xticklabels(blocklabs)
cbar.ax.set_frame_on(True)
return
plot_block(B, title="Block Matrix")
plt.show()
```
As we can see, the matrix $B$ is a symmetric block matrix, since our network is undirected. Finally, let's sample a single network from the SBM with parameters $\vec \tau$ and $B$:
```python
from graspologic.simulations import sbm
from graspologic.plot import adjplot
import pandas as pd
# sample a graph from SBM_{300}(tau, B)
A = sbm(n=[int(n/2), int(n/2)], p=B, directed=False, loops=False)
meta = pd.DataFrame(
data = {"School": tau.reshape((n)).astype(int)}
)
ax=adjplot(A, meta=meta, color="School", palette="Blues")
```
The above network shows students, ordered by the school they are in (school 1 and school 2, respectively). As we can see in the above network, people from school $1$ are more connected than people from school $2$. We notice this from the fact that there are more connections between people from school $1$ than from school $2$. Also, the connections between people from different schools appear to be a bit *more sparse* (fewer edges) than connections within schools. The above heatmap can be described as **modular**: it has clear communities, which are the nodes that comprise the obvious "squares" in the above adjacency matrix.
Something easy to mistake about the SBM is that the SBM will *not always* have the obvious modular structure defined above when we look at a heatmap. Rather, this modular structure is *only* made obvious because the students are ordered according to the school in which they are in. What do you think will happen if we look at the students in a random order? Do you think it will be obvious that the network will have a modular structure?
The answer is: *No!* Let's see what happens when we use a reordering, called a *permutation* of the nodes, to reorder the nodes from the network into a random order:
```python
import numpy as np
# generate a permutation of the n nodes
vtx_perm = np.random.choice(n, size=n, replace=False)
meta = pd.DataFrame(
data = {"School": tau[vtx_perm].reshape((n)).astype(int)}
)
# same adjacency matrix (up to reorder of the nodes)
ax=adjplot(A[vtx_perm][:,vtx_perm], meta=meta, color="School", palette="Blues")
```
Notice that now, the students are *not* organized according to school. We can see this by looking at the school assignment vector, shown at the left and top, of the network. It becomes pretty tough to figure out whether there are communities in our network just by looking at an adjacency matrix, unless you are looking at a network in which the nodes are *already arranged* in an order which respects the community structure.
In practice, this means that if you know ahead of time what natural groupings of the nodes might be (such knowing which school each student goes to) by way of your node attributes, you can visualize your data according to that grouping. If you don't know anything about natural groupings of nodes, however, we are left with the problem of *estimating community structure*. A later method, called the *spectral embedding*, will be paired with clustering techniques to allow us to estimate node assignment vectors.
#### Likelihood*
What does the likelihood for the *a priori* SBM look like? Fortunately, since $\vec \tau$ is a *parameter* of the *a priori* SBM, the likelihood is a bit simpler than for the *a posteriori* SBM. This is because the *a posteriori* SBM requires a marginalization over potential realizations of $\vec{\pmb \tau}$, whereas the *a priori* SBM does not, since we already know that $\vec{\pmb \tau}$ was realized as $\vec\tau$.
Putting these steps together gives us that:
\begin{align*}
\mathcal L_\theta(A) &\propto \mathbb P_{\theta}(\mathbf A = A | \vec{\pmb \tau} = \vec\tau) \\
&= \prod_{j > i} \mathbb P_\theta(\mathbf a_{ij} = a_{ij} | \vec{\pmb \tau} = \vec\tau),\;\;\;\;\textrm{Independence Assumption}
\end{align*}
Next, for the *a priori* SBM, we know that each edge $\mathbf a_{ij}$ only *actually* depends on the community assignments of nodes $i$ and $j$, so we know that $\mathbb P_{\theta}(\mathbf a_{ij} = a_{ij} | \vec{\pmb \tau} = \vec\tau) = \mathbb P(\mathbf a_{ij} = a_{ij} | \tau_i = k', \tau_j = k)$, where $k$ and $k'$ are any of the $K$ possible communities. This is because the community assignments of nodes that are not nodes $i$ and $j$ do not matter for edge $ij$, due to the independence assumption.
Next, let's think about the probability matrix $P = (p_{ij})$ for the *a priori* SBM. We know that, given that $\tau_i = k'$ and $\tau_j = k$, each adjacency $\mathbf a_{ij}$ is sampled independently and identically from a $Bern(b_{k',k})$ distribution. This means that $p_{ij} = b_{k',k}$. Completing our analysis from above:
\begin{align*}
\mathcal L_\theta(A) &\propto \prod_{j > i} b_{k'k}^{a_{ij}}(1 - b_{k'k})^{1 - a_{ij}} \\
&= \prod_{k,k' \in [K]}b_{k'k}^{m_{k'k}}(1 - b_{k'k})^{n_{k'k} - m_{k'k}}
\end{align*}
Where $n_{k' k}$ denotes the total number of edges possible between nodes assigned to community $k'$ and nodes assigned to community $k$. That is, $n_{k' k} = \sum_{j > i} \mathbb 1_{\tau_i = k'}\mathbb 1_{\tau_j = k}$. Further, we will use $m_{k' k}$ to denote the total number of edges observed between these two communities. That is, $m_{k' k} = \sum_{j > i}\mathbb 1_{\tau_i = k'}\mathbb 1_{\tau_j = k}a_{ij}$. Note that for a single $(k',k)$ community pair, that the likelihood is analogous to the likelihood of a realization of an ER random variable.
<!--- We can formalize this a bit more explicitly. If we let $A^{\ell k}$ be defined as the subgraph *induced* by the edges incident nodes in community $\ell$ and those in community $k$, then we can say that $A^{\ell k}$ is a directed ER random network, --->
Like the ER model, there are again equivalence classes of the sample space $\mathcal A_n$ in terms of their likelihood. For a two-community setting, with $\vec \tau$ and $B$ given, the equivalence classes are the sets:
\begin{align*}
E_{a,b,c}(\vec \tau, B) &= \left\{A \in \mathcal A_n : m_{11} = a, m_{21}=m_{12} = b, m_{22} = c\right\}
\end{align*}
The number of equivalence classes possible scales with the number of communities, and the manner in which nodes are assigned to communities (particularly, the number of nodes in each community).
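To make this concrete (an illustrative sketch added here, not from the original text), the block-wise counts $m_{k'k}$ and $n_{k'k}$ can be tallied directly from a realization $A$ and a known assignment vector $\vec\tau$, and then plugged into the likelihood above; the helper below is hypothetical:
```python
import numpy as np

def a_priori_sbm_log_likelihood(A, tau, B):
    """Log-likelihood of a simple network A under an a priori SBM with known
    community assignments tau (values 1..K) and block matrix B."""
    n, K = A.shape[0], B.shape[0]
    m = np.zeros((K, K))       # m_{k'k}: observed edges per community pair
    n_poss = np.zeros((K, K))  # n_{k'k}: possible edges per community pair
    for i in range(n):
        for j in range(i + 1, n):
            k1, k2 = tau[i] - 1, tau[j] - 1
            n_poss[k1, k2] += 1
            m[k1, k2] += A[i, j]
    # one ER-like Bernoulli term for each pair of communities
    return np.sum(m * np.log(B) + (n_poss - m) * np.log(1 - B))

# tiny example: 4 nodes, the first two in community 1, the last two in community 2
A_ex = np.array([[0, 1, 0, 0],
                 [1, 0, 0, 1],
                 [0, 0, 0, 1],
                 [0, 1, 1, 0]])
tau_ex = np.array([1, 1, 2, 2])
B_ex = np.array([[0.5, 0.2],
                 [0.2, 0.3]])
print(a_priori_sbm_log_likelihood(A_ex, tau_ex, B_ex))
```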
### *A Posteriori* Stochastic Block Model
In the *a posteriori* Stochastic Block Model (SBM), we consider that node assignment to one of $K$ communities is a random variable that we *don't* already know, unlike the *a priori* SBM. We're going to see a funky word come up, that you're probably not familiar with, the **$K$ probability simplex**. What the heck is a probability simplex?
The intuition for a simplex is probably something you're very familiar with, but just haven't seen a word describe. Let's say I have a vector, $\vec\pi = (\pi_k)_{k \in [K]}$, which has a total of $K$ elements. $\vec\pi$ will be a vector, which indicates the *probability* that a given node is assigned to each of our $K$ communities, so we need to impose some additional constraints. Symbolically, we would say that, for all $i$, and for all $k$:
\begin{align*}
\pi_k = \mathbb P(\pmb\tau_i = k)
\end{align*}
The $\vec \pi$ we're going to use has a very special property: all of its elements are non-negative: for all $\pi_k$, $\pi_k \geq 0$. This makes sense since $\pi_k$ is being used to represent the probability of a node $i$ being in group $k$, so it certainly can't be negative. Further, there's another thing that we want our $\vec\pi$ to have: in order for each element $\pi_k$ to indicate the probability of something to be assigned to $k$, we need all of the $\pi_k$s to sum up to one. This is because of something called the Law of Total Probability. If we have $K$ total values that $\pmb \tau_i$ could take, then it is the case that:
\begin{align*}
\sum_{k=1}^K \mathbb P(\pmb \tau_i = k) = \sum_{k = 1}^K \pi_k = 1
\end{align*}
So, back to our question: how does a probability simplex fit in? Well, the $K$ probability simplex describes all of the possible values that our vector $\vec\pi$ could possibly take! In symbols, the $K$ probability simplex is:
\begin{align*}
\left\{\vec\pi : \text{for all $k$ }\pi_k \geq 0, \sum_{k = 1}^K \pi_k = 1 \right\}
\end{align*}
So the $K$ probability simplex is just the space for all possible vectors which could indicate assignment probabilities to one of $K$ communities.
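These two conditions are easy to check numerically; the following is a minimal sketch (with a made-up helper name) that tests whether a vector lies in the $K$ probability simplex:
```python
import numpy as np

def in_prob_simplex(pi, tol=1e-12):
    """True when pi has non-negative entries that sum to one."""
    pi = np.asarray(pi, dtype=float)
    return bool(np.all(pi >= -tol) and abs(pi.sum() - 1.0) <= tol)

print(in_prob_simplex([0.3, 0.7]))    # True
print(in_prob_simplex([0.5, 0.6]))    # False: sums to 1.1
print(in_prob_simplex([1.2, -0.2]))   # False: has a negative entry
```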
What does the probability simplex look like? Below, we take a look at the $2$-probability simplex ($2$-dimensional $\vec\pi$s) and the $3$-probability simplex ($3$-dimensional $\vec\pi$s):
```python
from mpl_toolkits.mplot3d import Axes3D
from mpl_toolkits.mplot3d.art3d import Poly3DCollection
import matplotlib.pyplot as plt
fig=plt.figure(figsize=plt.figaspect(.5))
fig.suptitle("Probability Simplexes")
ax=fig.add_subplot(1,2,1)
x=[1,0]
y=[0,1]
ax.plot(x,y)
ax.set_xticks([0,.5,1])
ax.set_yticks([0,.5,1])
ax.set_xlabel("$\pi_1$")
ax.set_ylabel("$\pi_2$")
ax.set_title("2-probability simplex")
ax=fig.add_subplot(1,2,2,projection='3d')
x = [1,0,0]
y = [0,1,0]
z = [0,0,1]
verts = [list(zip(x,y,z))]
ax.add_collection3d(Poly3DCollection(verts, alpha=.6))
ax.view_init(elev=20,azim=10)
ax.set_xticks([0,.5,1])
ax.set_yticks([0,.5,1])
ax.set_zticks([0,.5,1])
ax.set_xlabel("$\pi_1$")
ax.set_ylabel("$\pi_2$")
h=ax.set_zlabel("$\pi_3$", rotation=0)
ax.set_title("3-probability simplex")
plt.show()
```
The values of $\vec\pi$ that are in the $K$-probability simplex are indicated by the shaded region of each figure. This comprises the $(\pi_1, \pi_2)$ pairs that fall along the diagonal line from $(0,1)$ to $(1,0)$ for the $2$-simplex, and the $(\pi_1, \pi_2, \pi_3)$ tuples that fall on the surface of the triangle above with vertices at $(1,0,0)$, $(0,1,0)$, and $(0,0,1)$.
This model has the following parameters:
| Parameter | Space | Description |
| --- | --- | --- |
| $\vec \pi$ | the $K$ probability simplex | The probability of a node being assigned to each of the $K$ communities |
| $B$ | [0,1]$^{K \times K}$ | The block matrix, which assigns edge probabilities for pairs of communities |
The *a posteriori* SBM is a bit more complicated than the *a priori* SBM. We will think about the *a posteriori* SBM as a variation of the *a priori* SBM, where instead of the node-assignment vector being treated as a vector-valued random variable which takes a known fixed value, we will treat it as *unknown*. $\vec{\pmb \tau}$ is still a *latent variable* like it was before. In this case, $\vec{\pmb \tau}$ takes values in the space $\{1,...,K\}^n$. This means that for a given realization of $\vec{\pmb \tau}$, denoted by $\vec \tau$, that for each of the $n$ nodes in the network, we suppose that an integer value between $1$ and $K$ indicates which community a node is from. Statistically, we write that the node assignment for node $i$, denoted by $\pmb \tau_i$, is sampled independently and identically from $Categorical(\vec \pi)$. Stated another way, the vector $\vec\pi$ indicates the probability $\pi_k$ of assignment to each community $k$ in the network.
The matrix $B$ behaves exactly the same as it did for the *a priori* SBM. Finally, let's think about how to write down the generative model in the *a posteriori* SBM. The generative model for the *a posteriori* SBM is, in fact, nearly the same as for the *a priori* SBM: we still say that given $\tau_i = k'$ and $\tau_j = k$, the $\mathbf a_{ij}$ are independent $Bern(b_{k'k})$. Here, however, we also describe that the $\pmb \tau_i$ are sampled independently and identically from $Categorical(\vec\pi)$, as we learned above. If $\mathbf A$ is the adjacency matrix for an *a posteriori* SBM network with parameters $\vec \pi$ and $B$, we write that $\mathbf A \sim SBM_n(\vec \pi, B)$.
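Putting the two sampling steps together, a minimal numpy sketch of this generative process (all names and parameter values below are made up for illustration) could look like:
```python
import numpy as np

def sample_a_posteriori_sbm(n, pi, B, seed=None):
    """Sample (A, tau): tau_i ~ Categorical(pi) i.i.d., then
    a_ij | tau ~ Bern(B[tau_i, tau_j]) independently for j > i."""
    rng = np.random.default_rng(seed)
    tau = rng.choice(len(pi), size=n, p=pi)   # community assignments (0-indexed here)
    P = B[tau][:, tau]                        # n x n matrix of edge probabilities
    A = (rng.uniform(size=(n, n)) < P).astype(int)
    A = np.triu(A, k=1)                       # keep only j > i (hollow upper triangle)
    return A + A.T, tau                       # symmetrize: undirected and loopless

pi = np.array([0.5, 0.5])
B = np.array([[0.5, 0.2],
              [0.2, 0.3]])
A, tau = sample_a_posteriori_sbm(300, pi, B, seed=0)
```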
#### Likelihood*
What does the likelihood for the *a posteriori* SBM look like? In this case, $\theta = (\vec \pi, B)$ are the parameters for the model, so the likelihood for a realization $A$ of $\mathbf A$ is:
\begin{align*}
\mathcal L_\theta(A) &\propto \mathbb P_\theta(\mathbf A = A)
\end{align*}
Next, we use the fact that the probability that $\mathbf A = A$ is, in fact, the *marginalization* (over realizations of $\vec{\pmb \tau}$) of the joint $(\mathbf A, \vec{\pmb \tau})$. In this case, we will let $\mathcal T = \{1,...,K\}^n$ be the space of all possible realizations that $\vec{\pmb \tau}$ could take:
\begin{align}
\mathcal L_\theta(A)&\propto \sum_{\vec \tau \in \mathcal T} \mathbb P_\theta(\mathbf A = A, \vec{\pmb \tau} = \vec \tau)
\end{align}
Next, remember that by definition of a conditional probability for a random variable $\mathbf x$ taking value $x$ conditioned on random variable $\mathbf y$ taking the value $y$, that $\mathbb P(\mathbf x = x | \mathbf y = y) = \frac{\mathbb P(\mathbf x = x, \mathbf y = y)}{\mathbb P(\mathbf y = y)}$. Note that by multiplying through by $\mathbf P(\mathbf y = y)$, we can see that $\mathbb P(\mathbf x = x, \mathbf y = y) = \mathbb P(\mathbf x = x| \mathbf y = y)\mathbb P(\mathbf y = y)$. Using this logic for $\mathbf A$ and $\vec{\pmb \tau}$:
\begin{align*}
\mathcal L_\theta(A) &\propto\sum_{\vec \tau \in \mathcal T} \mathbb P_\theta(\mathbf A = A| \vec{\pmb \tau} = \vec \tau)\mathbb P(\vec{\pmb \tau} = \vec \tau)
\end{align*}
Intuitively, for each term in the sum, we are treating $\vec{\pmb \tau}$ as taking a fixed value, $\vec\tau$, to evaluate this probability statement.
We will start by describing $\mathbb P(\vec{\pmb \tau} = \vec\tau)$. Remember that for $\vec{\pmb \tau}$, each entry $\pmb \tau_i$ is sampled *independently and identically* from $Categorical(\vec \pi)$. The probability mass for a $Categorical(\vec \pi)$-valued random variable is $\mathbb P(\pmb \tau_i = \tau_i; \vec \pi) = \pi_{\tau_i}$. Finally, note that if we are taking the products of $n$ $\pi_{\tau_i}$ terms, many of these values will end up being the same. Consider, for instance, the vector $\vec\tau = [1,2,1,2,1]$. We end up with three terms of $\pi_1$ and two terms of $\pi_2$, and it does not matter which order we multiply them in. Rather, all we need to keep track of are the counts of each $\pi_k$ term. Written another way, we can use the indicator that $\tau_i = k$, given by $\mathbb 1_{\tau_i = k}$, and a running counter over all of the community probability assignments $\pi_k$ to make this expression a little more sensible. We will use the symbol $n_k = \sum_{i = 1}^n \mathbb 1_{\tau_i = k}$ to denote this value, which is the number of nodes in community $k$:
\begin{align*}
\mathbb P_\theta(\vec{\pmb \tau} = \vec \tau) &= \prod_{i = 1}^n \mathbb P_\theta(\pmb \tau_i = \tau_i),\;\;\;\;\textrm{Independence Assumption} \\
&= \prod_{i = 1}^n \pi_{\tau_i} ,\;\;\;\;\textrm{p.m.f. of a Categorical R.V.}\\
&= \prod_{k = 1}^K \pi_{k}^{n_k},\;\;\;\;\textrm{Reorganizing what we are taking products of}
\end{align*}
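As a quick illustration of this reorganization, the following sketch evaluates $\prod_k \pi_k^{n_k}$ for the example $\vec\tau = [1,2,1,2,1]$ mentioned above (the helper name is made up):
```python
import numpy as np

def prob_tau(tau, pi):
    """P(tau) = prod_k pi_k^{n_k}, where n_k is the number of nodes assigned to community k."""
    tau = np.asarray(tau)
    n_k = np.array([(tau == k).sum() for k in range(1, len(pi) + 1)])
    return np.prod(pi ** n_k)

print(prob_tau([1, 2, 1, 2, 1], pi=np.array([0.7, 0.3])))  # 0.7**3 * 0.3**2 ≈ 0.0309
```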
Next, let's think about the conditional probability term, $\mathbb P_\theta(\mathbf A = A \big | \vec{\pmb \tau} = \vec \tau)$. Remember that the entries are all independent conditional on $\vec{\pmb \tau}$ taking the value $\vec\tau$. It turns out this is exactly the same result that we obtained for the *a priori* SBM:
\begin{align*}
\mathbb P_\theta(\mathbf A = A \big | \vec{\pmb \tau} = \vec \tau)
&= \prod_{k',k} b_{k' k}^{m_{k' k}}(1 - b_{k' k})^{n_{k' k} - m_{k' k}}
\end{align*}
Combining these terms in the sum gives:
\begin{align*}
\mathcal L_\theta(A) &\propto \sum_{\vec \tau \in \mathcal T} \mathbb P_\theta(\mathbf A = A \big | \vec{\pmb \tau} = \vec \tau) \mathbb P_\theta(\vec{\pmb \tau} = \vec \tau) \\
&= \sum_{\vec \tau \in \mathcal T} \prod_{k = 1}^K \left[\pi_k^{n_k}\cdot \prod_{k'=1}^K b_{k' k}^{m_{k' k}}(1 - b_{k' k})^{n_{k' k} - m_{k' k}}\right]
\end{align*}
Evaluating this sum explicitly proves to be relatively tedious and is a bit outside of the scope of this book, so we will omit it here.
<!-- TODO: return to add equivalence classes -->
## Random Dot Product Graph (RDPG)
Let's imagine that we have a network which follows the *a priori* Stochastic Block Model. To make this example a little more concrete, let's borrow the code example from above. The nodes of our network represent each of the $300$ students. The node assignment vector represents which of the two schools each student attends, where the first $150$ students attend school $1$, and the second $150$ students attend school $2$. Remember that $\tau$ and $B$ look like:
```python
plot_tau(tau, title="Tau, Node Assignment Vector",
xlab="Student");
```
```python
plot_block(B, title="Block Matrix");
```
Are there any other ways to describe this scenario, other than using both $\tau$ and $B$?
What if we were to look at the probabilities for *every* pair of nodes? Remember, for a given $\tau$ and $B$, an SBM network can be generated as follows: given that $\tau_i = k'$ and $\tau_j = k$, $\mathbf a_{ij} \sim Bern(b_{k' k})$. That is, every entry is Bernoulli, with the probability indicated by the entry of the block matrix corresponding to the pair of communities the two nodes are in. However, there's another way we could write down this generative model. Suppose we had an $n \times n$ probability matrix, where for every $j > i$:
\begin{align*}
p_{ji} = p_{ij}, \;\;\;\; p_{ij} = \begin{cases}
b_{11} & \tau_i = 1, \tau_j = 1 \\
b_{12} & \tau_i = 1, \tau_j = 2 \\
b_{21} & \tau_i = 2, \tau_j = 1 \\
b_{22} & \tau_i = 2, \tau_j = 2
\end{cases}
\end{align*}
We will call the matrix $P$ the *probability matrix* whose $i^{th}$ row and $j^{th}$ column is the entry $p_{ij}$, as defined above. If you've been following the advanced sections, you will already be familiar with this term. What does $P$ look like?
```python
def plot_prob(X, title="", nodename="Student", nodetix=None,
nodelabs=None):
fig, ax = plt.subplots(figsize=(8, 6))
with sns.plotting_context("talk", font_scale=1):
ax = sns.heatmap(X, cmap="Purples",
ax=ax, cbar_kws=dict(shrink=1), yticklabels=False,
xticklabels=False, vmin=0, vmax=1)
ax.set_title(title)
cbar = ax.collections[0].colorbar
ax.set(ylabel=nodename, xlabel=nodename)
if (nodetix is not None) and (nodelabs is not None):
ax.set_yticks(nodetix)
ax.set_yticklabels(nodelabs)
ax.set_xticks(nodetix)
ax.set_xticklabels(nodelabs)
cbar.ax.set_frame_on(True)
return
P = np.zeros((n,n))
P[0:150,0:150] = .5
P[150:300, 150:300] = .3
P[0:150,150:300] = .2
P[150:300,0:150] = .2
ax = plot_prob(P, title="Probability Matrix", nodetix=[0,299],
nodelabs=["1", "300"])
plt.show()
```
As we can see, $P$ captures a similar modular structure to the actual adjacency matrix corresponding to the SBM network. Also, $P$ captures the probability of connections between each pair of students. Indeed, it is the case that $P$ contains the information of both $\vec\tau$ and $B$. This means that we can write down a generative model by specifying *only* $P$, and we no longer need to specify $\vec\tau$ and $B$ at all. To write down the generative model in this way, we say that for all $j > i$, that $\mathbf a_{ij} \sim Bern(p_{ij})$ independently, where $\mathbf a_{ji} = \mathbf a_{ij}$, and $\mathbf a_{ii} = 0$.
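Since this generative model is stated purely in terms of $P$, it translates directly into code; here is a minimal sketch (re-using the $P$ constructed above) of sampling a network this way:
```python
import numpy as np

def sample_from_P(P, seed=None):
    """Sample a simple network where a_ij ~ Bern(p_ij) independently for j > i,
    a_ji = a_ij, and a_ii = 0."""
    rng = np.random.default_rng(seed)
    A = (rng.uniform(size=P.shape) < P).astype(int)
    A = np.triu(A, k=1)      # keep only the upper triangle (j > i)
    return A + A.T           # undirected and loopless

A = sample_from_P(P, seed=0)
```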
What is so special about this formulation of the SBM problem? As it turns out, for a *positive semi-definite* probability matrix $P$, $P$ can be decomposed using a matrix $X$, where $P = X X^\top$. We will call a single row of $X$ the vector $\vec x_i$. Remember, using this expression, each entry $p_{ij}$ is the product $\vec x_i^\top \vec x_j$, for all $i, j$. Like $P$, $X$ has $n$ rows, each of which corresponds to a single node in our network. However, the special property of $X$ is that it doesn't *necessarily* have $n$ columns: rather, $X$ often will have many fewer columns than rows. For instance, with $P$ defined as above, there in fact exists an $X$ with just $2$ columns that can be used to describe $P$. This matrix $X$ will be called the *latent position matrix*, and each row $\vec x_i$ will be called the *latent position of a node*. Like previously, there are two types of RDPGs: one in which $X$ is treated as *known*, and another in which $X$ is treated as *unknown*.
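As a sketch of what this decomposition looks like numerically (assuming, as stated, that $P$ is positive semi-definite, which holds for the $P$ above), one way to obtain such an $X$ is through an eigendecomposition:
```python
import numpy as np

# factor the block-constant P from above as P = X X^T
evals, evecs = np.linalg.eigh(P)            # eigendecomposition of the symmetric P
order = np.argsort(evals)[::-1]             # sort eigenvalues from largest to smallest
evals, evecs = evals[order], evecs[:, order]
d = int(np.sum(evals > 1e-10))              # numerical rank of P; 2 for this example
X = evecs[:, :d] * np.sqrt(evals[:d])       # latent position matrix, n x d
print(d)                                    # 2
print(np.allclose(X @ X.T, P))              # True (up to floating-point error)
```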
Now, your next thought might be that this requires a *lot* more space to represent an SBM network, and you'd be right: $\vec \tau$ has $n$ entries, and $B$ has $K \times K$ entries, where $K$ is typically much smaller than $n$. On the other hand, in this formulation, $P$ has $\binom{n}{2}$ entries, which is much bigger than $n + K \times K$ (since $K$ is usually much smaller than $n$). The advantage is that under this formulation, $P$ doesn't need to have this rigorous modular structure characteristic of SBM networks, and can look a *lot* more interesting. As we will see in later chapters, this network representation will prove extremely flexible for allowing us to capture networks that are fairly complex. Further, we can also perform analysis on the matrix $X$ itself, which will prove very useful for estimation of SBMs.
### *A Priori* RDPG
The *a priori* Random Dot Product Graph is an RDPG in which we know *a priori* the latent position matrix $X$. The *a priori* RDPG has the following parameter:
| Parameter | Space | Description |
| --- | --- | --- |
| $X$ | $ \mathbb R^{n \times d}$ | The matrix of latent positions for each of the $n$ nodes. |
$X$ is called the **latent position matrix** of the RDPG. We write that $X \in \mathbb R^{n \times d}$, which means that it is a matrix with real values, $n$ rows, and $d$ columns. We will use the notation $\vec x_i$ to refer to the $i^{th}$ row of $X$. $\vec x_i$ is referred to as the **latent position** of a node $i$. Visually, this looks something like this:
\begin{align*}
X = \begin{bmatrix}
\vec x_{1}^\top \\
\vdots \\
\vec x_n^\top
\end{bmatrix}
\end{align*}
Noting that $X$ has $d$ columns, this implies that $\vec x_i \in \mathbb R^d$, or that each node's latent position is a real-valued $d$-dimensional vector.
What is the generative model for the *a priori* RDPG? As we discussed above, given $X$, for all $j > i$, $\mathbf a_{ij} \sim Bern(\vec x_i^\top \vec x_j)$ independently. If $i < j$, $\mathbf a_{ji} = \mathbf a_{ij}$ (the network is *undirected*), and $\mathbf a_{ii} = 0$ (the network is *loopless*). If $\mathbf A$ is an *a priori* RDPG with parameter $X$, we write that $\mathbf A \sim RDPG_n(X)$.
#### Code Examples
We will let $X$ be a little more complex than in our preceding example. Our $X$ will produce a $P$ that still *somewhat* has a modular structure, but not quite as much as before. Let's assume that we have $300$ people who live along a very long road that is $100$ miles long, and each person is $\frac{1}{3}$ of a mile apart. The nodes of our network represent the people who live along our assumed street. If two people are closer to one another, it might make sense to think that they have a higher probability of being friends. If two people are neighbors, we think that they will have a very high probability of being friends (almost $1$) and when people are very far apart, we think that they will have a very low probability of being friends (almost $0$). What could we use for $X$?
One possible approach would be to let each $\vec x_i$ be defined as follows:
\begin{align*}
\vec x_i = \begin{bmatrix}
\frac{300 - i}{300} \\
\frac{i}{300}
\end{bmatrix}
\end{align*}
For instance, $\vec x_1 = \begin{bmatrix}1 \\ 0\end{bmatrix}$, and $\vec x_{300} = \begin{bmatrix} 0 \\ 1\end{bmatrix}$. Note that:
\begin{align*}
p_{1,300} = \vec x_1^\top \vec x_{300} = 1 \cdot 0 + 0 \cdot 1 = 0
\end{align*}
What happens in between?
Let's consider another person, person $100$. Note that person $100$ lives closer to person $1$ than to person $300$. Here, $\vec x_{100} = \begin{bmatrix} \frac{2}{3}\\ \frac{1}{3}\end{bmatrix}$. This gives us that:
\begin{align*}
p_{1,100} &= \vec x_1^\top \vec x_{100} = \frac{2}{3}\cdot 1 + 0 \cdot \frac{1}{3} = \frac{2}{3} \\
p_{100, 300} &= \vec x_{100}^\top x_{300} = \frac{2}{3} \cdot 0 + \frac 1 3 \cdot 1 = \frac 1 3
\end{align*}
So this means that person $1$ and person $100$ have about a $67\%$ probability of being friends, but person $100$ and $300$ have about a $33\%$ probability of being friends.
Let's consider another person, person $200$. Person $200$ lives closer to person $300$ than person $100$. With $\vec x_{200} = \begin{bmatrix}\frac{1}{3} \\ \frac{2}{3} \end{bmatrix}$, we obtain that:
\begin{align*}
p_{1,200} &= \vec x_1^\top \vec x_{200} = \frac{1}{3}\cdot 1 + 0 \cdot \frac{2}{3} = \frac{1}{3} \\
p_{200, 300} &= \vec x_{200}^\top \vec x_{300} = \frac{1}{3} \cdot 0 + \frac 2 3 \cdot 1 = \frac 2 3 \\
p_{100,200} &= \vec x_{100}^\top x_{200} = \frac{2}{3} \cdot \frac 1 3 + \frac 1 3 \cdot \frac 2 3 = \frac 4 9
\end{align*}
Again, remember that these fractions capture the probability that two people will be friends. So, intuitively, it seems like our probability matrix $P$ will capture the intuitive idea we described above. First, we'll take a look at $X$, and then we'll look at $P$:
```python
n = 300 # the number of nodes in our network
# design the latent position matrix X according to
# the rules we laid out previously
X = np.zeros((n,2))
for i in range(0, n):
X[i,:] = [(n - i)/n, i/n]
```
```python
def plot_lp(X, title="", ylab="Student"):
fig, ax = plt.subplots(figsize=(4, 10))
with sns.plotting_context("talk", font_scale=1):
ax = sns.heatmap(X, cmap="Purples",
ax=ax, cbar_kws=dict(shrink=1), yticklabels=False,
xticklabels=False)
ax.set_title(title)
cbar = ax.collections[0].colorbar
ax.set(ylabel=ylab)
ax.set_yticks([0, 99, 199, 299])
ax.set_yticklabels(["1", "100", "200", "300"])
ax.set_xticks([.5, 1.5])
ax.set_xticklabels(["Dimension 1", "Dimension 2"])
cbar.ax.set_frame_on(True)
return
plot_lp(X, title="Latent Position Matrix, X")
```
The latent position matrix $X$ that we plotted above is $n \times d$ dimensions. There are a number of approaches, other than looking at a heatmap of $X$, with which we can visualize $X$ to derive insights as to its structure. When $d=2$, another popular visualization is to look at the latent positions, $\vec x_i$, as individual points in $2$-dimensional space. This will give us a scatter plot of $n$ points, each of which has two coordinates. Each point is the latent position for a single node:
```python
def plot_latents(latent_positions, title=None, labels=None, **kwargs):
fig, ax = plt.subplots(figsize=(6, 6))
if ax is None:
ax = plt.gca()
ss = 6*np.arange(0, 50)
plot = sns.scatterplot(x=latent_positions[ss, 0], y=latent_positions[ss, 1], hue=labels,
s=10, ax=ax, palette="Set1", color='k', **kwargs)
    ax.set_title(title)
    # dimension 1 is on the horizontal axis, dimension 2 on the vertical axis
    ax.set(xlabel="Dimension 1", ylabel="Dimension 2")
return plot
# plot
plot_latents(X, title="Latent Position Matrix, X");
```
The above scatter plot has been subsampled to show only every $6^{th}$ latent position, so that the individual $2$-dimensional latent positions are discernible. Due to the way we constructed $X$, the scatter plot would otherwise appear to be a single line (the points overlap one another). The reason the points fall along a straight line from $(1, 0)$ to $(0, 1)$ is the method we used to construct the entries of $X$, described above: the two coordinates of each latent position always sum to one. Next, we will look at the probability matrix:
```python
plot_prob(X.dot(X.transpose()), title="Probability Matrix, P=$XX^T$",
nodelabs=["1", "100", "200", "300"], nodetix=[0,99,199,299])
```
Finally, we will sample an RDPG:
```python
from graspologic.simulations import rdpg
# sample an RDPG with the latent position matrix
# created above
A = rdpg(X, loops=False, directed=False)
# and plot it
ax = binary_heatmap(A, title="$RDPG_{300}(X)$ Simulation")
```
### Likelihood*
Given $X$, the likelihood for an RDPG is relatively straightforward, as an RDPG is another Independent-Edge Random Graph. The independence assumption vastly simplifies our resulting expression. We will also use many of the results we've identified above, such as the p.m.f. of a Bernoulli random variable. Finally, we'll note that the probability matrix $P = (\vec x_i^\top \vec x_j)$, so $p_{ij} = \vec x_i^\top \vec x_j$:
\begin{align*}
\mathcal L_\theta(A) &\propto \mathbb P_\theta(A) \\
&= \prod_{j > i}\mathbb P(\mathbf a_{ij} = a_{ij}),\;\;\;\; \textrm{Independence Assumption} \\
&= \prod_{j > i}(\vec x_i^\top \vec x_j)^{a_{ij}}(1 - \vec x_i^\top \vec x_j)^{1 - a_{ij}},\;\;\;\; a_{ij} \sim Bern(\vec x_i^\top \vec x_j)
\end{align*}
Unfortunately, the likelihood equivalence classes are a bit harder to understand intuitively here compared to the ER and SBM examples, so we won't write them down, but they still exist!
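As a minimal sketch, the log of this likelihood is easy to evaluate given $X$ and a realization $A$ (probabilities are clipped away from exactly $0$ and $1$ so the logarithms stay finite):
```python
import numpy as np

def rdpg_log_likelihood(A, X, eps=1e-12):
    """Log-likelihood (up to an additive constant) of adjacency A under RDPG_n(X)."""
    P = np.clip(X @ X.T, eps, 1 - eps)   # p_ij = x_i^T x_j, kept strictly inside (0, 1)
    iu = np.triu_indices_from(A, k=1)    # only the terms with j > i
    p, a = P[iu], A[iu]
    return np.sum(a * np.log(p) + (1 - a) * np.log(1 - p))
```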
### *A Posteriori* RDPG
Like for the *a posteriori* SBM, the *a posteriori* RDPG introduces another strange set: the **intersection of the unit ball and the non-negative orthant**. Huh? This sounds like a real mouthful, but it turns out to be rather straightforward. You are probably already very familiar with a particular orthant: in two-dimensions, an orthant is called a quadrant. Basically, an orthant just extends the concept of a quadrant to spaces which might have more than $2$ dimensions. The non-negative orthant happens to be the orthant where all of the entries are non-negative. We call the **$K$-dimensional non-negative orthant** the set of points in $K$-dimensional real space, where:
\begin{align*}
\left\{\vec x \in \mathbb R^K : x_k \geq 0\text{ for all $k$}\right\}
\end{align*}
In two dimensions, this is the traditional upper-right portion of the standard coordinate axis. To give you a picture, the $2$-dimensional non-negative orthant is the blue region of the following figure:
```python
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axisartist import SubplotZero
import matplotlib.patches as patch
class Axes():
def __init__(self, xlim=(-5,5), ylim=(-5,5), figsize=(6,6)):
self.xlim = xlim
self.ylim = ylim
self.figsize = figsize
self.__scale_arrows__()
def __drawArrow__(self, x, y, dx, dy, width, length):
plt.arrow(
x, y, dx, dy,
color = 'k',
clip_on = False,
head_width = self.head_width,
head_length = self.head_length
)
def __scale_arrows__(self):
""" Make the arrows look good regardless of the axis limits """
xrange = self.xlim[1] - self.xlim[0]
yrange = self.ylim[1] - self.ylim[0]
self.head_width = min(xrange/30, 0.25)
self.head_length = min(yrange/30, 0.3)
def __drawAxis__(self):
"""
Draws the 2D cartesian axis
"""
# A subplot with two additional axis, "xzero" and "yzero"
# corresponding to the cartesian axis
ax = SubplotZero(self.fig, 1, 1, 1)
self.fig.add_subplot(ax)
# make xzero axis (horizontal axis line through y=0) visible.
for axis in ["xzero","yzero"]:
ax.axis[axis].set_visible(True)
# make the other axis (left, bottom, top, right) invisible
for n in ["left", "right", "bottom", "top"]:
ax.axis[n].set_visible(False)
# Plot limits
plt.xlim(self.xlim)
plt.ylim(self.ylim)
ax.set_yticks([-1, 1, ])
ax.set_xticks([-2, -1, 0, 1, 2])
# Draw the arrows
self.__drawArrow__(self.xlim[1], 0, 0.01, 0, 0.3, 0.2) # x-axis arrow
self.__drawArrow__(0, self.ylim[1], 0, 0.01, 0.2, 0.3) # y-axis arrow
self.ax=ax
def draw(self):
# First draw the axis
self.fig = plt.figure(figsize=self.figsize)
self.__drawAxis__()
axes = Axes(xlim=(-2.5,2.5), ylim=(-2,2), figsize=(9,7))
axes.draw()
rectangle =patch.Rectangle((0,0), 3, 3, fc='blue',ec="blue", alpha=.2)
axes.ax.add_patch(rectangle)
plt.show()
```
Now, what is the unit ball? You are probably familiar with the idea of the unit ball, even if you haven't heard it called that specifically. Remember that the Euclidean norm for a point $\vec x$ which has coordinates $x_i$ for $i=1,...,K$ is given by the expression:
\begin{align*}
\left|\left|\vec x\right|\right|_2 \triangleq \sqrt{\sum_{i = 1}^K x_i^2}
\end{align*}
The Euclidean unit ball is just the set of points whose Euclidean norm is at most $1$. To be more specific, the **closed unit ball** with the Euclidean norm is the set of points:
\begin{align*}
\left\{\vec x \in \mathbb R^K :\left|\left|\vec x\right|\right|_2 \leq 1\right\}
\end{align*}
We draw the $2$-dimensional unit ball with the Euclidean norm below, where the points that make up the unit ball are shown in red:
```python
axes = Axes(xlim=(-2.5,2.5), ylim=(-2,2), figsize=(9,7))
axes.draw()
circle =patch.Circle((0,0), 1, fc='red',ec="red", alpha=.3)
axes.ax.add_patch(circle)
plt.show()
```
Now what is their intersection? Remember that the intersection of two sets $A$ and $B$ is the set:
\begin{align*}
A \cap B &= \{x : x \in A, x \in B\}
\end{align*}
That is, each element must be in *both* sets to be in the intersection. Formally, the intersection of the unit ball and the non-negative orthant will be the set:
\begin{align*}
\mathcal X_K \triangleq \left\{\vec x \in \mathbb R^K :\left|\left|\vec x\right|\right|_2 \leq 1, x_k \geq 0 \textrm{ for all $k$}\right\}
\end{align*}
Visually, this will be the set of points in the *overlap* of the unit ball and the non-negative orthant, which we show below in purple:
```python
axes = Axes(xlim=(-2.5,2.5), ylim=(-2,2), figsize=(9,7))
axes.draw()
circle =patch.Circle((0,0), 1, fc='red',ec="red", alpha=.3)
axes.ax.add_patch(circle)
rectangle =patch.Rectangle((0,0), 3, 3, fc='blue',ec="blue", alpha=.2)
axes.ax.add_patch(rectangle)
plt.show()
```
This space has an *incredibly* important corollary. It turns out that if $\vec x$ and $\vec y$ are both elements of $\mathcal X_K$, then $\left\langle \vec x, \vec y \right \rangle = \vec x^\top \vec y$, the **inner product**, is at most $1$ and at least $0$. Without getting too technical, this is because of the Cauchy-Schwarz inequality and the properties of $\mathcal X_K$. If you remember from linear algebra, the Cauchy-Schwarz inequality states that $\left\langle \vec x, \vec y \right \rangle$ can be at most the product of $\left|\left|\vec x\right|\right|_2$ and $\left|\left|\vec y\right|\right|_2$. Since $\vec x$ and $\vec y$ both have norms less than or equal to $1$ (they are in the *unit ball*), their inner product is at most $1$. Further, since $\vec x$ and $\vec y$ are in the non-negative orthant, their inner product can never be negative.
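A quick numerical illustration of this fact: the sketch below samples random points of $\mathcal X_2$ and checks that every pairwise inner product lands in $[0, 1]$:
```python
import numpy as np

rng = np.random.default_rng(0)
# draw points in the non-negative unit square, keep those inside the unit ball
pts = rng.uniform(0, 1, size=(10000, 2))
pts = pts[np.linalg.norm(pts, axis=1) <= 1]
# every pairwise inner product is a valid probability
G = pts @ pts.T
print(G.min() >= 0, G.max() <= 1 + 1e-12)   # True True
```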
The *a posteriori* RDPG is to the *a priori* RDPG what the *a posteriori* SBM was to the *a priori* SBM. We instead suppose that we do *not* know the latent position matrix $X$, but instead know how we can characterize the individual latent positions. We have the following parameter:
| Parameter | Space | Description |
| --- | --- | --- |
| F | inner-product distributions | A distribution which governs each latent position. |
The parameter $F$ is what is known as an **inner-product distribution**. In the simplest case, we will assume that $F$ is a distribution on a subset of the possible real vectors that have $d$-dimensions with an important caveat: for any two vectors within this subset, their inner product *must* be a probability. We will refer to the subset of the possible real vectors as $\mathcal X_K$, which we learned about above. This means that for any $\vec x_i, \vec x_j$ that are in $\mathcal X_K$, it is always the case that $\vec x_i^\top \vec x_j$ is between $0$ and $1$. This is essential because like previously, we will describe the distribution of each edge in the adjacency matrix using $\vec x_i^\top \vec x_j$ to represent a probability. Next, we will treat the latent position matrix as a matrix-valued random variable which is *latent* (remember, *latent* means that we don't get to see it in our real data). Like before, we will call $\vec{\mathbf x}_i$ the random latent positions for the nodes of our network. In this case, each $\vec {\mathbf x}_i$ is sampled independently and identically from the inner-product distribution $F$ described above. The latent-position matrix is the matrix-valued random variable $\mathbf X$ whose entries are the latent vectors $\vec {\mathbf x}_i$, for each of the $n$ nodes.
The model for edges of the *a posteriori* RDPG can be described by conditioning on this unobserved latent-position matrix. We write down that, conditioned on $\vec {\mathbf x}_i = \vec x$ and $\vec {\mathbf x}_j = \vec y$, that if $j > i$, then $\mathbf a_{ij}$ is sampled independently from a $Bern(\vec x^\top \vec y)$ distribution. As before, if $i < j$, $\mathbf a_{ji} = \mathbf a_{ij}$ (the network is *undirected*), and $\mathbf a_{ii} = 0$ (the network is *loopless*). If $\mathbf A$ is the adjacency matrix for an *a posteriori* RDPG with parameter $F$, we write that $\mathbf A \sim RDPG_n(F)$.
#### Likelihood*
The likelihood for the *a posteriori* RDPG is fairly complicated. This is because, like the *a posteriori* SBM, we do not actually get to see the latent position matrix $\mathbf X$, so we need to use *marginalization* to obtain an expression for the likelihood. Here, we are concerned with realizations of $\mathbf X$. Remember that $\mathbf X$ is just a matrix whose rows are $\vec {\mathbf x}_i$, each of which individually has the distribution $F$; that is, $\vec{\mathbf x}_i \sim F$ independently. For simplicity, we will assume that $F$ is a discrete distribution on $\mathcal X_K$. This makes the logic of what is going on below much simpler since the notation gets less complicated, but does not detract from the generalizability of the result (the only difference is that sums would be replaced by multivariate integrals, and probability mass functions replaced by probability density functions).
We will let $p$ denote the probability mass function (p.m.f.) of this discrete distribution function $F$. The strategy will be to use the independence assumption, followed by marginalization over the relevant rows of $\mathbf X$:
\begin{align*}
\mathcal L_\theta(A) &\propto \mathbb P_\theta(\mathbf A = A) \\
&= \prod_{j > i} \mathbb P(\mathbf a_{ij} = a_{ij}), \;\;\;\;\textrm{Independence Assumption} \\
\mathbb P(\mathbf a_{ij} = a_{ij})&= \sum_{\vec x \in \mathcal X_K}\sum_{\vec y \in \mathcal X_K}\mathbb P(\mathbf a_{ij} = a_{ij}, \vec{\mathbf x}_i = \vec x, \vec{\mathbf x}_j = \vec y),\;\;\;\;\textrm{Marginalization over }\vec {\mathbf x}_i \textrm{ and }\vec {\mathbf x}_j
\end{align*}
Next, we will simplify this expression a little bit more, using the definition of a conditional probability like we did before for the SBM:
\begin{align*}
\mathbb P(\mathbf a_{ij} = a_{ij}, \vec{\mathbf x}_i = \vec x, \vec{\mathbf x}_j = \vec y) &= \mathbb P(\mathbf a_{ij} = a_{ij}| \vec{\mathbf x}_i = \vec x, \vec{\mathbf x}_j = \vec y) \mathbb P(\vec{\mathbf x}_i = \vec x, \vec{\mathbf x}_j = \vec y)
\end{align*}
Further, remember that if $\mathbf a$ and $\mathbf b$ are independent, then $\mathbb P(\mathbf a = a, \mathbf b = b) = \mathbb P(\mathbf a = a)\mathbb P(\mathbf b = b)$. Using that $\vec x_i$ and $\vec x_j$ are independent, by definition:
\begin{align*}
\mathbb P(\vec{\mathbf x}_i = \vec x, \vec{\mathbf x}_j = \vec y) &= \mathbb P(\vec{\mathbf x}_i = \vec x) \mathbb P(\vec{\mathbf x}_j = \vec y)
\end{align*}
Which means that:
\begin{align*}
\mathbb P(\mathbf a_{ij} = a_{ij}, \vec{\mathbf x}_i = \vec x, \vec{\mathbf x}_j = \vec y) &= \mathbb P(\mathbf a_{ij} = a_{ij} | \vec{\mathbf x}_i = \vec x, \vec{\mathbf x}_j = \vec y)\mathbb P(\vec{\mathbf x}_i = \vec x) \mathbb P(\vec{\mathbf x}_j = \vec y)
\end{align*}
Finally, we note that conditional on $\vec{\mathbf x}_i = \vec x_i$ and $\vec{\mathbf x}_j = \vec x_j$, $\mathbf a_{ij}$ is $Bern(\vec x_i^\top \vec x_j)$. This means that in terms of our probability matrix, each entry $p_{ij} = \vec x_i^\top \vec x_j$. Therefore:
\begin{align*}
\mathbb P(\mathbf a_{ij} = a_{ij}| \vec{\mathbf x}_i = \vec x, \vec{\mathbf x}_j = \vec y) &= (\vec x^\top \vec y)^{a_{ij}}(1 - \vec x^\top\vec y)^{1 - a_{ij}}
\end{align*}
This implies that:
\begin{align*}
\mathbb P(\mathbf a_{ij} = a_{ij}, \vec{\mathbf x}_i = \vec x, \vec{\mathbf x}_j = \vec y) &= (\vec x^\top \vec y)^{a_{ij}}(1 - \vec x^\top\vec y)^{1 - a_{ij}}\mathbb P(\vec{\mathbf x}_i = \vec x) \mathbb P(\vec{\mathbf x}_j = \vec y)
\end{align*}
Here, $p(\vec x)$ is the p.m.f. of $F$; that is, $\mathbb P(\vec{\mathbf x}_i = \vec x) = p(\vec x)$ for every $\vec x \in \mathcal X_K$.
So our complete expression for the likelihood is:
\begin{align*}
\mathcal L_\theta(A) &\propto \prod_{j > i}\sum_{\vec x \in \mathcal X_K}\sum_{\vec y \in \mathcal X_K} (\vec x^\top \vec y)^{a_{ij}}(1 - \vec x^\top\vec y)^{1 - a_{ij}}\mathbb P(\vec{\mathbf x}_i = \vec x) \mathbb P(\vec{\mathbf x}_j = \vec y)
\end{align*}
## Inhomogeneous Erdös-Rényi (IER)
In the preceding models, we typically made assumptions about how we could characterize the edge-existence probabilities using fewer than $\binom n 2$ unique probabilities (one for each edge). The reason for this is that $n$ is usually relatively large, so attempting to actually learn $\binom n 2$ unique probabilities is not, in general, going to be very feasible (it is *never* feasible when we have a single network, since a single network gives only one observation for each independent edge). Further, it is relatively rare to ask questions for which it is favorable to assume that the edges share *nothing* in common: even if the edges don't share the same probabilities, there may be properties underlying those probabilities, such as the *latent positions* we saw above with the RDPG, that we might still want to characterize.
Nonetheless, the most general model for an independent-edge random network is known as the Inhomogeneous Erdös-Rényi (IER) Random Network. An IER Random Network is characterized by the following parameters:
| Parameter | Space | Description |
| --- | --- | --- |
| $P$ | [0,1]$^{n \times n}$ | The edge probability matrix. |
The probability matrix $P$ is an $n \times n$ matrix, where each entry $p_{ij}$ is a probability (a value between $0$ and $1$). Further, if we restrict ourselves to the case of simple networks like we have done so far, $P$ will also be symmetric ($p_{ij} = p_{ji}$ for all $i$ and $j$). The generative model is similar to the preceding models we have seen: given the $(i, j)$ entry of $P$, denoted $p_{ij}$, the edges $\mathbf a_{ij}$ are independent $Bern(p_{ij})$, for any $j > i$. Further, $\mathbf a_{ii} = 0$ for all $i$ (the network is *loopless*), and $\mathbf a_{ji} = \mathbf a_{ij}$ (the network is *undirected*). If $\mathbf A$ is the adjacency matrix for an IER network with probability matrix $P$, we write that $\mathbf A \sim IER_n(P)$.
It is worth noting that *all* of the preceding models we have discussed so far are special cases of the IER model. This means that, for instance, if we were to consider only the probability matrices where all of the entries are the same, we could represent the ER models. Similarly, if we were to consider only the probability matrices $P$ where $P = XX^\top$, we could represent any RDPG.
### Likelihood*
The likelihood for a network which is IER is very straightforward. We use the independence assumption, and the p.m.f. of a Bernoulli-distributed random-variable $\mathbf a_{ij}$:
\begin{align*}
\mathcal L_\theta(A) &\propto \mathbb P(\mathbf A = A) \\
&= \prod_{j > i}p_{ij}^{a_{ij}}(1 - p_{ij})^{1 - a_{ij}}
\end{align*}
## Degree-Corrected Stochastic Block Model (DCSBM)
Let's think back to our school example for the Stochastic Block Model. Remember, we had 100 students, each of whom could go to one of two possible schools: school one or school two. Our network had 100 nodes, representing each of the students. We said that the school each student attended was represented by their node assignment $\tau_i$ to one of two possible communities. The matrix $B$ was the block probability matrix, where $b_{11}$ was the probability that students in school one were friends, $b_{22}$ was the probability that students in school two were friends, and $b_{12} = b_{21}$ was the probability that students were friends if they did not go to the same school. In this case, we said that $\mathbf A \sim SBM_n(\tau, B)$.
When would this setup not make sense? Let's say that Alice and Bob both go to the same school, but Alice is more popular than Bob. If we were to look at a schoolmate Chadwick, it might not make sense to say that both Alice and Bob have the *same* probability of being friends with Chadwick. Rather, we might want to reflect that Alice has a higher probability of being friends with an arbitrary schoolmate than Bob. The problem here is that the SBM assumes that the expected **node degree** (the number of nodes each node is connected to) is the *same* for all nodes within a single community.
```{admonition} Degree Homogeneity in a Stochastic Block Model Network
Suppose that $\mathbf A \sim SBM_{n, \vec\tau}(B)$, where $\mathbf A$ has $K=2$ communities. What is the node degree of each node in $\mathbf A$?
For an arbitrary node $v_i$ which is in community $k$ (either $1$ or $2$), we will compute the expected value of the degree $deg(v_i)$, written $\mathbb E\left[deg(v_i); \tau_i = k\right]$. We will let $n_k$ represent the number of nodes whose node assignments $\tau_i$ are to community $k$. Let's see what happens:
\begin{align*}
\mathbb E\left[deg(v_i); \tau_i = k\right] &= \mathbb E\left[\sum_{j = 1}^n \mathbf a_{ij}\right] \\
&= \sum_{j = 1}^n \mathbb E[\mathbf a_{ij}]
\end{align*}
We use the *linearity of expectation* again to get from the top line to the second line. Next, instead of summing over all the nodes, we'll break the sum up into the nodes which are in the same community as node $i$, and the ones in the *other* community $k'$. We use the notation $k'$ to emphasize that $k$ and $k'$ are different values:
\begin{align*}
\mathbb E\left[deg(v_i); \tau_i = k\right] &= \sum_{j : i \neq j, \tau_j = k} \mathbb E\left[\mathbf a_{ij}\right] + \sum_{j : \tau_j =k'} \mathbb E[\mathbf a_{ij}]
\end{align*}
In the first sum, we have $n_k-1$ total terms (the number of nodes that aren't node $i$ but are in the same community), and in the second sum, we have $n_{k'}$ total terms (the number of nodes that are in the other community). Next, we will use that the probability of an edge within community $k$ is $b_{kk}$, while the probability of an edge between the two communities is $b_{kk'}$. Finally, we will use that the expected value of an adjacency $\mathbf a_{ij}$ which is Bernoulli distributed is its probability:
\begin{align*}
\mathbb E\left[deg(v_i); \tau_i = k\right] &= \sum_{j : i \neq j, \tau_j = k} b_{kk} + \sum_{j : \tau_j = k'} b_{kk'},\;\;\;\;\mathbf a_{ij}\textrm{ are Bernoulli distributed} \\
&= (n_k - 1)b_{kk} + n_{k'} b_{kk'}
\end{align*}
This holds for any node $i$ which is in community $k$. Therefore, the expected node degree is the same, or **homogeneous**, within a community of an SBM.
```
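We can also check this result empirically. The sketch below samples a single SBM network with the block matrix from the school example and compares the average degree within each community to the formula above (helper names and values are for illustration only):
```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
B = np.array([[0.5, 0.2],
              [0.2, 0.3]])
tau = np.repeat([1, 2], n // 2)        # first half in community 1, second half in community 2
P = B[tau - 1][:, tau - 1]
A = np.triu((rng.uniform(size=(n, n)) < P).astype(int), k=1)
A = A + A.T
deg = A.sum(axis=1)
print(deg[tau == 1].mean(), (n / 2 - 1) * 0.5 + (n / 2) * 0.2)  # ≈ 104.5 vs 104.5
print(deg[tau == 2].mean(), (n / 2 - 1) * 0.3 + (n / 2) * 0.2)  # ≈ 74.7 vs 74.7
```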
To address this limitation, we turn to the Degree-Corrected Stochastic Block Model, or DCSBM. As with the Stochastic Block Model, there is both a *a priori* and *a posteriori* DCSBM.
### *A Priori* DCSBM
Like the *a priori* SBM, the *a priori* DCSBM is where we know which nodes are in which node communities ahead of time. Here, we will use the variable $K$ to denote the maximum number of communities that nodes could be assigned to. The *a priori* DCSBM has the following two parameters:
| Parameter | Space | Description |
| --- | --- | --- |
| $B$ | [0,1]$^{K \times K}$ | The block matrix, which assigns edge probabilities for pairs of communities |
| $\vec\theta$ | $\mathbb R^n_+$ | The degree-correction vector, which adjusts the expected degree of each node |
The latent community assignment vector $\vec{\pmb \tau}$ with a known *a priori* realization $\vec{\tau}$ and the block matrix $B$ are exactly the same for the *a priori* DCSBM as they were for the *a priori* SBM.
The vector $\vec\theta$ is the degree-correction vector. Each entry $\theta_i$ is a positive scalar, and it indicates the factor by which the probability of every adjacency incident to node $i$ is adjusted.
Finally, let's think about how to write down the generative model for the *a priori* DCSBM. We say that, given $\tau_i = k'$ and $\tau_j = k$, $\mathbf a_{ij}$ is sampled independently from a $Bern(\theta_i \theta_j b_{k'k})$ distribution for all $j > i$. As we can see, $\theta_i$ in a sense "corrects" the probability of each adjacency to node $i$ to be higher or lower than the value given by the block probability $b_{k'k}$, depending on the value of $\theta_i$. If $\mathbf A$ is an *a priori* DCSBM network with parameters $\vec\theta$ and $B$, we write that $\mathbf A \sim DCSBM_{n,\vec\tau}(\vec \theta, B)$.
#### Likelihood*
The derivation for the likelihood is the same as for the *a priori* SBM, with the change that $p_{ij} = \theta_i \theta_j b_{k'k}$ instead of just $b_{k'k}$. This gives that the likelihood turns out to be:
\begin{align*}
\mathcal L_\theta(A) &\propto \prod_{j > i} \left(\theta_i \theta_j b_{k'k}\right)^{a_{ij}}\left(1 - \theta_i \theta_j b_{k'k}\right)^{1 - a_{ij}}
\end{align*}
The expression doesn't simplify much more due to the fact that the probabilities are dependent on the particular $i$ and $j$, so we can't just reduce the statement in terms of $n_{k'k}$ and $m_{k'k}$ like for the SBM.
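Sampling from the *a priori* DCSBM is also a one-line change from the SBM: build the degree-corrected probability matrix first. Here is a minimal sketch; the particular $\vec\theta$ below is a made-up example, chosen so that every $\theta_i \theta_j b_{k'k}$ stays in $[0, 1]$:
```python
import numpy as np

def sample_dcsbm(tau, B, theta, seed=None):
    """Sample an a priori DCSBM: a_ij ~ Bern(theta_i * theta_j * B[tau_i, tau_j]) for j > i."""
    rng = np.random.default_rng(seed)
    n = len(tau)
    P = np.outer(theta, theta) * B[tau - 1][:, tau - 1]   # degree-corrected probabilities
    A = np.triu((rng.uniform(size=(n, n)) < P).astype(int), k=1)
    return A + A.T                                        # undirected and loopless

tau = np.repeat([1, 2], 150)
theta = np.linspace(1.5, 0.5, 300)    # earlier nodes are "more popular"
B = np.array([[0.4, 0.1],
              [0.1, 0.3]])
A = sample_dcsbm(tau, B, theta, seed=0)
```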
### *A Posteriori* DCSBM
The *a posteriori* DCSBM is to the *a posteriori* SBM what the *a priori* DCSBM was to the *a priori* SBM. The changes are very minimal, so we will omit explicitly writing it all down here so we can get this section wrapped up, with the idea that the preceding section on the *a priori* DCSBM should tell you what needs to change.
## Network models for networks which aren't simple
To make the discussion a little easier to handle, in the above descriptions we described network models for simple networks, which, to recap, are binary networks which are both loopless and undirected. Stated another way, simple networks are networks whose adjacency matrices contain only $0$s and $1$s, are hollow, and are symmetric. What happens when our networks don't quite look this way?
For now, we'll keep the assumption that the networks are binary, but we will discuss non-binary network models in a later chapter. We have three possibilities we can consider, and we will show how the "relaxations" of the assumptions change a description of a network model. We split these out so we can be as clear as possible about how the generative model changes.
We will compare each relaxation to the statement about the generative model for the ER generative model. To recap, for a simple network, we wrote:
Statistically, we say that for each edge $\mathbf{a}_{ij}$, that $\mathbf{a}_{ij}$ is sampled independently and identically from a $Bern(p)$ distribution, whenever $j > i$. When $i > j$, we allow $\mathbf a_{ij} = \mathbf a_{ji}$. Also, we let $\mathbf a_{ii} = 0$, which means that all self-loops are always unconnected.
### Binary network model which has loops, but is undirected
Here, all we want to do is relax the assumption that the network is loopless. We simply ignore the statement that $\mathbf a_{ii} = 0$, and allow that the $\mathbf a_{ij}$ which follow a Bernoulli distribution (with some probability which depends on the network model choice) *now* applies to $j \geq i$, and not just $j > i$. We keep that $\mathbf a_{ji} = \mathbf a_{ij}$, which maintains the symmetry of $\mathbf A$ (and consequently, the undirectedness of the network).
Our description of the ER network changes to:
Statistically, we say that for each edge $\mathbf{a}_{ij}$, that $\mathbf{a}_{ij}$ is sampled independently and identically from a $Bern(p)$ distribution, whenever $j \geq i$. When $i > j$, we allow $\mathbf a_{ij} = \mathbf a_{ji}$.
### Binary network model which is loopless, but directed
Like above, we simply ignore the statement that $\mathbf a_{ji} = \mathbf a_{ij}$, which removes the symmetry of $\mathbf A$ (and consequently, the undirectedness of the network). We allow that the $\mathbf a_{ij}$ which follow a Bernoulli distribution now apply to $j \neq i$, and not just $j > i$. We keep that $\mathbf a_{ii} = 0$, which maintains the hollowness of $\mathbf A$ (and consequently, the looplessness of the network).
Our description of the ER network changes to:
Statistically, we say that for each edge $\mathbf{a}_{ij}$, that $\mathbf{a}_{ij}$ is sampled independently and identically from a $Bern(p)$ distribution, whenever $j \neq i$. Also, we let $\mathbf a_{ii} = 0$, which means that all self-loops are always unconnected.
### Binary network model which has loops and is directed
Finally, for a network which has loops and is directed, we combine the above two approaches. We ignore the statements that $\mathbf a_{ji} = \mathbf a_{ij}$ and that $\mathbf a_{ii} = 0$.
Our description of the ER network changes to:
Statistically, we say that for each edge $\mathbf{a}_{ij}$, that $\mathbf{a}_{ij}$ is sampled independently and identically from a $Bern(p)$ distribution, for all possible combinations of nodes $j$ and $i$.
## Generalized Random Dot Product Graph (GRDPG)
The Generalized Random Dot Product Graph, or GRDPG, is the most general random network model we will consider in this book. Note that for the RDPG, the probability matrix $P$ had entries $p_{ij} = \vec x_i^\top \vec x_j$. What about $p_{ji}$? Well, $p_{ji} = \vec x_j^\top \vec x_i$, which is exactly the same as $p_{ij}$! This means that even if we were to consider a directed RDPG, the probabilities that can be captured are *always* going to be symmetric. The Generalized Random Dot Product Graph relaxes this assumption. This is achieved by using *two* latent position matrices, $X$ and $Y$, and letting $P = X Y^\top$. Now, the entries are $p_{ij} = \vec x_i^\top \vec y_j$, but $p_{ji} = \vec x_j^\top \vec y_i$, which might be different.
### *A Priori* GRDPG
The *a priori* GRDPG is a GRDPG in which we know *a priori* the latent position matrices $X$ and $Y$. The *a priori* GRDPG has the following parameters:
| Parameter | Space | Description |
| --- | --- | --- |
| $X$ | $ \mathbb R^{n \times d}$ | The matrix of left latent positions for each of the $n$ nodes. |
| $Y$ | $ \mathbb R^{n \times d}$ | The matrix of right latent positions for each of the $n$ nodes. |
$X$ and $Y$ behave nearly the same as the latent position matrix $X$ for the *a priori* RDPG, with the exception that they will be called the **left latent position matrix** and the **right latent position matrix** respectively. Further, the vectors $\vec x_i$ will be the left latent positions, and $\vec y_i$ will be the right latent positions, for a given node $i$, for each node $i=1,...,n$.
What is the generative model for the *a priori* GRDPG? As we discussed above, given $X$ and $Y$, for all $j \neq i$, $\mathbf a_{ij} \sim Bern(\vec x_i^\top \vec y_j)$ independently. If we consider only loopless networks, $\mathbf a_{ii} = 0$. If $\mathbf A$ is an *a priori* GRDPG with left and right latent position matrices $X$ and $Y$, we write that $\mathbf A \sim GRDPG_n(X, Y)$.
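A minimal sketch of sampling from this model, assuming $X$ and $Y$ have been chosen so that every $\vec x_i^\top \vec y_j$ is a valid probability:
```python
import numpy as np

def sample_grdpg(X, Y, seed=None):
    """Sample a loopless, directed network with a_ij ~ Bern(x_i^T y_j) for j != i."""
    rng = np.random.default_rng(seed)
    P = X @ Y.T                       # p_ij = x_i^T y_j; in general p_ij != p_ji
    A = (rng.uniform(size=P.shape) < P).astype(int)
    np.fill_diagonal(A, 0)            # loopless
    return A
```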
### *A Posteriori* GRDPG
The *A Posteriori* GRDPG is very similar to the *a posteriori* RDPG. We have two parameters:
| Parameter | Space | Description |
| --- | --- | --- |
| F | inner-product distributions | A distribution for the left latent positions. |
| G | inner-product distributions | A distribution for the right latent positions. |
Here, we treat the left and right latent position matrices as latent variable matrices, like we did for *a posteriori* RDPG. That is, the left latent positions are sampled independently and identically from $F$, and the right latent positions $\vec y_i$ are sampled independently and identically from $G$.
The model for edges of the *a posteriori* GRDPG can be described by conditioning on the unobserved left and right latent-position matrices. We write down that, conditioned on $\vec {\mathbf x}_i = \vec x$ and $\vec {\mathbf y}_j = \vec y$, if $j \neq i$, then $\mathbf a_{ij}$ is sampled independently from a $Bern(\vec x^\top \vec y)$ distribution. As before, assuming the network is loopless, $\mathbf a_{ii} = 0$. If $\mathbf A$ is the adjacency matrix for an *a posteriori* GRDPG with parameters $F$ and $G$, we write that $\mathbf A \sim GRDPG_n(F, G)$.
# References
[1] Erdös P, Rényi A. 1959. "On random graphs, I." Publ. Math. Debrecen 6:290–297.
# Non-trivial Band Topology and the Chern Number
### Christina Lee
### Category: Graduate
### Topological Physics Series
* [Quantum Anomalous Hall Effect and the Chern Number](../Graduate/Chern-Number.ipynb)
* [SSH Model and the Winding Number](../Graduate/Winding-Number.ipynb)
## Overview
A Chern number tells us whether something non-trivial is going on in the wavefunction and lets us distinguish between different topological phases.
Now let me clarify what I mean when I say "non-trivial".
Normally, when we are studying materials, we move from a spatial dependence for the wavefunction to a momentum dependence across a Brillouin Zone.
When we are in a non-trivial phase, we can't define a wavefunction across the entire Brillouin Zone at the same time. We can rewrite the wavefunction to cover the area that didn't work before, but then some other section isn't well-defined.
To be clear, the physics is well defined everywhere, and every way we write the wavefunction gives the same physics. The problem lies in our inability to write down a single "chart" for the whole Zone. This conundrum is similar to the problem with plotting a globe in 2 dimensions. We always have to make cuts, but the entire globe can be covered by "charts" that make up an "atlas".
## Our Model
A page of a Review of Modern Physics behemoth, "Classification of topological quantum matter with symmetries" [1], presented the model below. It inspired me to dig deeper and fill in the details. They attributed this model as describing the Quantum Anomalous Hall Effect, QAHE, but this form describes a wide variety of topological phenomena.
\begin{equation}
H(k) = R_0(k) \sigma_0 + \vec{R}(k) \cdot \vec{\sigma}
\end{equation}
\begin{equation}
= R_0(k) \sigma_0 + R_1(k) \sigma_1 + R_2(k) \sigma_2 + R_3(k) \sigma_3
\end{equation}
where $\sigma_0$ is the identity matrix and $\sigma_i$ are simply the Pauli matrices.
Combining the terms, $H(k)$ is a 2x2 matrix.
\begin{equation}=
\begin{pmatrix}
R_0 + R_3 & R_1 -i R_2 \\
R_1 + i R_2 & R_0 - R_3
\end{pmatrix}
\end{equation}
The wavefunction will then have two components. These could denote two sublattices, or two different types of particles.
While we could use any values for $\vec{R}(k)$, we will use
\begin{equation}
\vec{R}(k) = \begin{pmatrix}
-2 \sin k_x \\
-2 \sin k_y \\
\mu +2 \sum_{x,y} \cos k_i \\
\end{pmatrix}
\end{equation}
as it's a fairly simple form that gives us the physics we want and exhibits phase transitions depending on the value of $\mu$. I set $\mu$ as $1$ early on. Go through the notebook with that value, then go through the notebook with $\mu = -5, -1, 1, 5$.
```julia
using Plots
pyplot()
```
Plots.PyPlotBackend()
```julia
μ=1
labels=["-π","-π/2","0","π/2","π"]
ticks=[-π,-π/2,0,π/2,π];
ks=range(-π,stop=π,length=314)
l=length(ks)
ka=Array{Array{Float64},2}(undef,l,l)
for ii in 1:l
x=ks[ii]
for jj in 1:l
ka[ii,jj]=[x,ks[jj]]
end
end
```
```julia
function R0(k::Array)
return 0
end
function R1(k::Array)
return -2*sin(k[1])
end
function R2(k::Array)
return -2*sin(k[2])
end
function R3(k::Array)
return μ+2*cos(k[1])+2*cos(k[2])
end
```
R3 (generic function with 1 method)
```julia
function R(k::Array)
return sqrt(R1(k)^2+R2(k)^2+R3(k)^2)
end
```
R (generic function with 1 method)
## Band Diagram
First, let's just take a look at the energy spectrum. For each $k$, we calculate the eigenvalues of the 2x2 matrix Hamiltonian.
To make the calculation simpler, I denote the function $R$ as
\begin{equation}
R=\sqrt{R_1^2+R_2^2+R_3^2}
\end{equation}
With that, energy can simply be written as
\begin{equation}
\lambda= R_0 \pm R
\end{equation}
$R_0$ moves the entire spectrum up and down, but it doesn't affect the gap and won't affect the physics. That term won't even factor into the eigenvectors.
```julia
function λp(k::Array{Float64})
return R0(k)+R(k)
end
function λm(k::Array{Float64})
return R0(k)-R(k)
end
```
λm (generic function with 1 method)
Notation Note: I denote the Array evaluation of a function as the function name followed by 'a'.
```julia
λpa=zeros(Float64,l,l)
λma=zeros(Float64,l,l)
for ii in 1:l
for jj in 1:l
λpa[ii,jj]=λp(ka[ii,jj])
λma[ii,jj]=λm(ka[ii,jj])
end
end
```
```julia
surface(ks,ks,λpa)
surface!(ks,ks,λma)
plot!(xticks= (ticks,labels),
yticks=(ticks,labels),
xlabel="kx",
ylabel="ky",
zlabel="Energy",
title="Band Diagram")
```
## Eigenvectors
To find the eigenvectors, we find the nullspace of the following matrix,
\begin{equation}
H - \lambda_\pm I =
\begin{pmatrix}
R_0 +R_3 - R_0 \mp R & R_1 - i R_2 \\
R_1 + i R_2 & R_0 - R_3 -R_0 \mp R \\
\end{pmatrix}
=
\begin{pmatrix}
R_3 \mp R & R_1 - i R_2 \\
R_1 + i R_2 & -R_3 \mp R \\
\end{pmatrix}
\end{equation}
Take the bottom row, $\left(R_1 + i R_2,\; -R_3 \mp R\right)$. A vector in its nullspace is found by swapping the two entries and flipping the sign of one: the negative of the second entry becomes the first component, and the first entry becomes the second component.
\begin{equation}
\begin{pmatrix}
\pm R + R_3 \\
R_1 + i R_2
\end{pmatrix}
\end{equation}
And now we need to normalize the state
\begin{equation}
\frac{1}{\sqrt{2 R \left( R \pm R_3 \right) }}
\begin{pmatrix}
\pm R + R_3\\
R_1 + i R_2
\end{pmatrix}
\end{equation}
At low energy, only the minus state (the lower band) will be occupied, but its wavefunction has a singularity at $\vec{R} = ( 0 ,0, R) $.
We have two complex numbers in this vector, and thus four things to plot $r_1, \theta_1, r_2, \theta_2$,
\begin{equation}
|\psi \rangle =
\begin{pmatrix}
r_1 e^{i \theta_1} \\
r_2 e^{i \theta_2}
\end{pmatrix}
\end{equation}
If we chose to create our vector from the top row instead,
\begin{equation}
\begin{pmatrix}
R_1 -i R_2 \\
- R_3 \mp R
\end{pmatrix}
\end{equation}
and normalized
\begin{equation}
\frac{1}{\sqrt{2 R \left( R \mp R_3 \right) }}
\begin{pmatrix}
R_1 - i R_2 \\
\pm R - R_3
\end{pmatrix}
\end{equation}
But the minus state for this one has a singularity at $\vec{R} = (0,0, -R )$.
We can move where the singularity is, but we can't get rid of it. The problem-point doesn't show up in the physics, only in the wavefunction representation of it. We cannot well represent the state across the entire Brillouin Zone at the same time. This problem occurs because we have a topologically non-trivial state and a non-zero Chern number.
```julia
function up1(k::Array)
front=1/sqrt(2*R(k)*(R(k)+R3(k)))
return front*(R(k)+R3(k))
end
function up2(k::Array)
front=1/sqrt(2*R(k)*(R(k)+R3(k)))
return front*(R1(k)+im*R2(k))
end
up=Function[]
push!(up,up1)
push!(up,up2)
```
2-element Array{Function,1}:
up1
up2
```julia
function um1(k::Array)
front=1/sqrt(2*R(k)*(R(k)-R3(k)))
return front*(-R(k)+R3(k))
end
function um2(k::Array)
front=1/sqrt(2*R(k)*(R(k)-R3(k)))
return front*(R1(k)+im*R2(k))
end
um=Function[]
push!(um,um1)
push!(um,um2)
```
2-element Array{Function,1}:
um1
um2
```julia
upa=zeros(Complex{Float64},2,l,l)
uma=zeros(Complex{Float64},2,l,l)
for ii in 1:l
for jj in 1:l
upa[1,ii,jj]=up[1](ka[ii,jj])
upa[2,ii,jj]=up[2](ka[ii,jj])
uma[1,ii,jj]=um[1](ka[ii,jj])
uma[2,ii,jj]=um[2](ka[ii,jj])
end
end
```
```julia
# Plotting θ2
heatmap(ks,ks,angle.(uma[2,:,:])-angle.(uma[1,:,:]))
plot!(xticks= (ticks,labels),
yticks=(ticks,labels),
xlabel="kx",
ylabel="ky",
title="difference in phase, lower energy")
```
```julia
surface(ks,ks,abs2.(uma[2,:,:]))
plot!(xticks= (ticks,labels),
yticks=(ticks,labels),
xlabel="kx", ylabel="ky",
title="magnitude,second component, lower energy")
```
```julia
surface(ks,ks,abs2.(uma[1,:,:]))
plot!(xticks= (ticks,labels),
yticks=(ticks,labels),
xlabel="kx",
ylabel="ky",
title="magnitude,first component, lower energy")
```
## Calculating the Connection
The first step in calculating the Chern number is evaluating the Berry Connection.
\begin{equation}
\mathcal{A}^{i} = \langle u (k,r) | d_{i} u (k,r) \rangle
\end{equation}
Though $\mathcal{A}^i$ looks like a vector, it is not invariant under gauge transformation. If a wavefunction transforms as
\begin{equation}
| u(k,r) \rangle \rightarrow e^{-i \phi} | u(k,r) \rangle
\end{equation}
then the connection transforms as
\begin{equation}
\mathcal{A}^i \rightarrow \mathcal{A}^i -i \partial_i \phi
\end{equation}
Near the Dirac points in the Brillouin Zone, the wavefunction has quite a high curvature, which makes the numerical computation of the derivative prone to errors. I tried numerical differentiation but was unable to arrive at a correct, stable answer. Therefore, I'm using the 'ForwardDiff' package to take derivatives via forward-mode automatic differentiation, which avoids the finite-difference truncation error.
```julia
using ForwardDiff
```
Before dealing with the physics, let's just look at the syntax for the package.
Here's just an example of taking the gradient of $x^2$.
```julia
ex=x->ForwardDiff.gradient(t->t[1]^2,x)
ex([1])
```
1-element Array{Int64,1}:
2
Now let's apply that syntax to our wavefunctions.
The 'ForwardDiff' package seems to only work on purely real functions, so we have to take the derivative of the real and imaginary parts separately.
```julia
dum1=kt->ForwardDiff.gradient(um1,kt)
Rdum2=kt->ForwardDiff.gradient(t->real(um2(t)),kt)
Idum2=kt->ForwardDiff.gradient(t->imag(um2(t)),kt)
dum2(k)=Rdum2(k)+im*Idum2(k)
```
dum2 (generic function with 1 method)
With the derivatives, we can now calculate the connection.
```julia
Amkx(k)=conj(um1(k))*dum1(k)[1]+conj(um2(k))*dum2(k)[1]
Amky(k)=conj(um1(k))*dum1(k)[2]+conj(um2(k))*dum2(k)[2]
```
Amky (generic function with 1 method)
```julia
Akxa=Array{Complex{Float64}}(undef,l,l)
Akya=Array{Complex{Float64}}(undef,l,l)
for ii in 1:l
for jj in 1:l
Akxa[ii,jj]=Amkx(ka[ii,jj])
Akya[ii,jj]=Amky(ka[ii,jj])
end
end
```
```julia
heatmap(ks,ks,log.(abs.(Akxa)))
plot!(xticks= (ticks,labels),
yticks=(ticks,labels),
xlabel="kx",ylabel="ky",
title="A_kx Magnitude Berry Connection")
```
```julia
heatmap(ks,ks,log.(abs.(Akya)))
plot!(xticks= (ticks,labels),
yticks=(ticks,labels),
xlabel="kx",ylabel="ky",
title="A_ky Magnitude Berry Connection")
```
```julia
heatmap(ks,ks,angle.(Akxa))
plot!(xticks= (ticks,labels),
yticks=(ticks,labels),
xlabel="kx",ylabel="ky",
title="A_kx Berry Connection angle")
```
```julia
heatmap(ks,ks,angle.(Akya))
plot!(xticks= (ticks,labels),
yticks=(ticks,labels),
xlabel="kx",ylabel="ky",
title="A_ky Berry Connection angle")
```
## Curvature
I previously had a section here about Berry phases, but I need to fix it. Until then, please believe the mass of physics literature that from the gauge-dependent Berry connection we can get a gauge-independent quantity called the <i>Berry Curvature</i>. Hopefully I can clarify this mathematical magic when I understand it myself.
Of course it's called Berry. Everything in this calculation is named after Berry. At least it's not Euler. Done with the side note, back to equations.
Here's our version of Stokes' Theorem:
\begin{equation}
\oint \mathcal{A} \cdot ds = \iint F_{n}^{xy} d^2 k
\end{equation}
And $F_n^{xy}$ will take on this form:
\begin{equation}
F_{n}^{xy} = \nabla \times \mathcal{A} = \partial_{k_x} \mathcal{A}^y_{n} - \partial_{k_y} \mathcal{A}^x_{n}
\end{equation}
\begin{equation}
= \partial_{k_x} \langle u | \partial_{k_y} u \rangle - \partial_{k_y} \langle u | \partial_{k_x} u \rangle
\end{equation}
TIME FOR MORE DERIVATIVES!
```julia
DRAmkx=kt->ForwardDiff.gradient(t->(real(Amkx(t))),kt )
DImAmkx=kt->ForwardDiff.gradient(t->(imag(Amkx(t))),kt )
DRAmky=kt->ForwardDiff.gradient(t->(real(Amky(t))),kt )
DImAmky=kt->ForwardDiff.gradient(t->(imag(Amky(t))),kt )
```
#29 (generic function with 1 method)
```julia
F(k)=DRAmky(k)[1]+im*DImAmky(k)[1]-DRAmkx(k)[2]-im*DImAmkx(k)[2]
```
F (generic function with 1 method)
```julia
Fa=Array{Complex{Float64}}(undef,l,l)
for ii in 1:l
for jj in 1:l
Fa[ii,jj]=F(ka[ii,jj])
end
end
```
```julia
heatmap(ks,ks,abs.(Fa))
plot!(xticks= (ticks,labels),
yticks=(ticks,labels),
xlabel="kx",ylabel="ky",
title="F Berry curvature norm")
```
```julia
heatmap(ks,ks,angle.(Fa))
plot!(xticks= (ticks,labels),
yticks=(ticks,labels),
xlabel="kx",ylabel="ky",
title="F Berry Curvature phase")
```
## Chern Number
Last Step!
The calculation is now easy in terms of code, but this number has a great deal of significance. I'm still trying to wrap my head around it. The Chern number seems to pop up in a variety of obscure mathematical stuff over this physicist's head, but hopefully none of that is necessary to grasp its incredible, mind-blowing usefulness.
This single integer not only separates topological phases from topologically trivial phases, but also separates different topological phases from each other. And it always evaluates to an integer.
\begin{equation}
Ch = \frac{1}{2 \pi i} \iint_{BZ} F^{xy}_n d^2 k
\end{equation}
If you don't get approximately an integer, try a finer mesh.
```julia
sum(Fa)*(ks[2]-ks[1])^2/(2π*im)
```
1.0250189329637458 + 1.0118627829401277e-34im
## Victory !!!!!!!!!!
You made it!
Even if you didn't understand everything, or really understand much at all, pat yourself on the back. This is the very frontier of science.
[1] Ching Kai Chiu, Jeffrey C.Y. Teo, Andreas P. Schnyder, and Shinsei Ryu, Classification of topological quantum matter with symmetries, Reviews of Modern Physics 88 2016, no. 3, 1–70.
@article{RevelsLubinPapamarkou2016,
title = {Forward-Mode Automatic Differentiation in Julia},
author = {{Revels}, J. and {Lubin}, M. and {Papamarkou}, T.},
journal = {arXiv:1607.07892 [cs.MS]},
year = {2016},
url = {https://arxiv.org/abs/1607.07892}
}
```julia
```
| 60e5c242b036542362a33ae42d3d242aca54b8f2 | 972,689 | ipynb | Jupyter Notebook | Graduate/Chern-Number.ipynb | albi3ro/M4 | ccd27d4b8b24861e22fe806ebaecef70915081a8 | [
"MIT"
] | 22 | 2015-11-15T08:47:04.000Z | 2022-02-25T10:47:12.000Z | Graduate/Chern-Number.ipynb | albi3ro/M4 | ccd27d4b8b24861e22fe806ebaecef70915081a8 | [
"MIT"
] | 11 | 2016-02-23T12:18:26.000Z | 2019-09-14T07:14:26.000Z | Graduate/Chern-Number.ipynb | albi3ro/M4 | ccd27d4b8b24861e22fe806ebaecef70915081a8 | [
"MIT"
] | 6 | 2016-02-24T03:08:22.000Z | 2022-03-10T18:57:19.000Z | 1,050.420086 | 215,389 | 0.955452 | true | 4,434 | Qwen/Qwen-72B | 1. YES
2. YES | 0.913677 | 0.828939 | 0.757382 | __label__eng_Latn | 0.913609 | 0.597984 |
### Example 3 , part B: Diffusion for non uniform material properties
In this example we will look at the diffusion equation with non-uniform material properties and how to handle second-order derivatives. For this, we will reuse Devito's `.laplace` short-hand expression outlined in the previous example and demonstrate it using the examples from step 7 of the original tutorial. This example enhances `03_diffusion` by replacing the constant $\nu$ with a non-uniform viscosity, and it introduces `Function` to represent this spatially varying field.
So, the equation we are now trying to implement is
$$\frac{\partial u}{\partial t} = \nu(x,y) \frac{\partial ^2 u}{\partial x^2} + \nu(x,y) \frac{\partial ^2 u}{\partial y^2}$$
In our case $\nu$ is not uniform and $\nu(x,y)$ represents spatially variable viscosity.
To discretize this equation we will use central differences; reorganizing the terms yields
\begin{align}
u_{i,j}^{n+1} = u_{i,j}^n &+ \frac{\nu(x,y) \Delta t}{\Delta x^2}(u_{i+1,j}^n - 2 u_{i,j}^n + u_{i-1,j}^n) \\
&+ \frac{\nu(x,y) \Delta t}{\Delta y^2}(u_{i,j+1}^n-2 u_{i,j}^n + u_{i,j-1}^n)
\end{align}
As usual, we establish our baseline experiment by re-creating some of the original example runs. So let's start by defining some parameters.
```python
from examples.cfd import plot_field, init_hat
import numpy as np
%matplotlib inline
# Some variable declarations
nx = 100
ny = 100
nt = 1000
nu = 0.15 #the value of base viscosity
offset = 1 # Used for field definition
visc = np.full((nx, ny), nu) # Initialize viscosity
visc[nx//4-offset:nx//4+offset, 1:-1] = 0.0001 # Adding a material with different viscosity
visc[1:-1,nx//4-offset:nx//4+offset ] = 0.0001
visc[3*nx//4-offset:3*nx//4+offset, 1:-1] = 0.0001
visc_nb = visc[1:-1,1:-1]
dx = 2. / (nx - 1)
dy = 2. / (ny - 1)
sigma = .25
dt = sigma * dx * dy / nu
# Initialize our field
# Initialise u with hat function
u_init = np.empty((nx, ny))
init_hat(field=u_init, dx=dx, dy=dy, value=1)
u_init[10:-10, 10:-10] = 1.5
zmax = 2.5 # zmax for plotting
```
We now set up the diffusion operator as a separate function, so that we can re-use it for several runs.
```python
def diffuse(u, nt ,visc):
for n in range(nt + 1):
un = u.copy()
u[1:-1, 1:-1] = (un[1:-1,1:-1] +
visc*dt / dy**2 * (un[1:-1, 2:] - 2 * un[1:-1, 1:-1] + un[1:-1, 0:-2]) +
visc*dt / dx**2 * (un[2:,1: -1] - 2 * un[1:-1, 1:-1] + un[0:-2, 1:-1]))
u[0, :] = 1
u[-1, :] = 1
u[:, 0] = 1
u[:, -1] = 1
```
Now let's take this for a spin. In the next two cells we run the same diffusion operator for a varying number of timesteps to see our "hat function" dissipate to varying degrees.
```python
#NBVAL_IGNORE_OUTPUT
# Plot material according to viscosity, uncomment to plot
import matplotlib.pyplot as plt
plt.imshow(visc_nb, cmap='Greys', interpolation='nearest')
# Field initialization
u = u_init
print ("Initial state")
plot_field(u, zmax=zmax)
diffuse(u, nt , visc_nb )
print ("After", nt, "timesteps")
plot_field(u, zmax=zmax)
diffuse(u, nt, visc_nb)
print ("After another", nt, "timesteps")
plot_field(u, zmax=zmax)
```
Notice that the regions with lower viscosity do not diffuse heat as quickly as the region with higher viscosity.
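A quick numerical check of this (a minimal sketch, re-using `u`, `visc_nb` and `nu` from the cells above) compares the mean field value inside the low-viscosity strips with the rest of the interior:
```python
# Sketch: mean field value in the low-viscosity strips vs. the background material
low = visc_nb < nu                      # True where the modified material sits
print ("mean u, low-viscosity strips: ", u[1:-1, 1:-1][low].mean())
print ("mean u, background material:  ", u[1:-1, 1:-1][~low].mean())
```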
```python
#NBVAL_IGNORE_OUTPUT
# Field initialization
u = u_init
diffuse(u, nt , visc_nb)
print ("After", nt, "timesteps")
plot_field(u, zmax=zmax)
```
Excellent. Now for the Devito part, we need to note one important difference from our previous examples: we now have second-order derivatives. So, when creating our `TimeFunction` object we need to tell it about our spatial discretization by setting `space_order=2`. We also use the notation `u.laplace` outlined previously to denote all second-order derivatives in space, allowing us to reuse this code for 2D and 3D examples.
```python
from devito import Grid, TimeFunction, Eq, solve, Function
from sympy.abc import a
from sympy import nsimplify
# Initialize `u` for space order 2
grid = Grid(shape=(nx, ny), extent=(2., 2.))
# Create an operator with second-order derivatives
a = Function(name='a',grid = grid) # Define as Function
a.data[:]= visc # Pass the viscosity in order to be used in the operator.
u = TimeFunction(name='u', grid=grid, space_order=2)
# Create an equation with second-order derivatives
eq = Eq(u.dt, a * u.laplace)
stencil = solve(eq, u.forward)
eq_stencil = Eq(u.forward, stencil)
print(nsimplify(eq_stencil))
```
Eq(u(t + dt, x, y), dt*((-2*u(t, x, y)/h_y**2 + u(t, x, y - h_y)/h_y**2 + u(t, x, y + h_y)/h_y**2 - 2*u(t, x, y)/h_x**2 + u(t, x - h_x, y)/h_x**2 + u(t, x + h_x, y)/h_x**2)*a(x, y) + u(t, x, y)/dt))
Great. Now all that is left is to put it all together to build the operator and use it on our examples. For illustration purposes we will do this in one cell, including the update equation and boundary conditions.
```python
#NBVAL_IGNORE_OUTPUT
from devito import Operator, Constant, Eq, solve, Function
# Reset our data field and ICs
init_hat(field=u.data[0], dx=dx, dy=dy, value=1.)
# Field initialization
u.data[0] = u_init
# Create an operator with second-order derivatives
a = Function(name='a',grid = grid)
a.data[:]= visc
eq = Eq(u.dt, a * u.laplace, subdomain=grid.interior)
stencil = solve(eq, u.forward)
eq_stencil = Eq(u.forward, stencil)
# Create boundary condition expressions
x, y = grid.dimensions
t = grid.stepping_dim
bc = [Eq(u[t+1, 0, y], 1.)] # left
bc += [Eq(u[t+1, nx-1, y], 1.)] # right
bc += [Eq(u[t+1, x, ny-1], 1.)] # top
bc += [Eq(u[t+1, x, 0], 1.)] # bottom
op = Operator([eq_stencil] + bc)
op(time=nt, dt=dt, a = a)
print ("After", nt, "timesteps")
plot_field(u.data[0], zmax=zmax)
op(time=nt, dt=dt, a = a)
print ("After another", nt, "timesteps")
plot_field(u.data[0], zmax=zmax)
```
```python
```
| 311e1da8a4a3b919849faea87deaeb9516816838 | 955,112 | ipynb | Jupyter Notebook | examples/cfd/03_diffusion_nonuniform.ipynb | kristiantorres/devito | 9357d69448698fd2b7a57be6fbb400058716b532 | [
"MIT"
] | 1 | 2020-01-31T10:35:49.000Z | 2020-01-31T10:35:49.000Z | examples/cfd/03_diffusion_nonuniform.ipynb | kristiantorres/devito | 9357d69448698fd2b7a57be6fbb400058716b532 | [
"MIT"
] | 53 | 2020-11-30T07:50:14.000Z | 2022-03-10T17:06:03.000Z | examples/cfd/03_diffusion_nonuniform.ipynb | kristiantorres/devito | 9357d69448698fd2b7a57be6fbb400058716b532 | [
"MIT"
] | 1 | 2020-06-02T03:31:11.000Z | 2020-06-02T03:31:11.000Z | 2,062.87689 | 168,644 | 0.964415 | true | 1,876 | Qwen/Qwen-72B | 1. YES
2. YES | 0.893309 | 0.927363 | 0.828422 | __label__eng_Latn | 0.952097 | 0.763036 |
# Integrating electrically with coils and capacitors
```python
# Bibliotheken importieren
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('classic')
```
Mathematically, the behavior of current and voltage at capacitors and inductors (coils) can be described with integration and differentiation. In this notebook you can "play" with the functions to understand their behavior better.
### Capacitor
A capacitor is charged by a current up to a certain voltage, i.e. it integrates the current signal. If the capacitor has already been charged to a voltage $u(t=0)$ beforehand, this value must be added in order to obtain the correct final value.
\begin{equation}
u_C(t_1)=\frac{1}{C}\int_0^{t_1} i_C(t)dt + u_C(t=0)
\end{equation}
Accordingly, from a voltage curve $u_C(t)$ at the capacitor, the current curve $i_C(t)$ is obtained by differentiation.
\begin{equation}
i_C(t)=C\cdot \frac{du_C(t)}{dt}
\end{equation}
Since resistances (at least those of the wiring) are always present in a real system, the behavior of $u_C(t)$ and $i_C(t)$ can be studied in the following figure.
### Inductor
For an inductor (coil), self-induction causes a voltage to appear whenever the current changes.
\begin{equation}
u_L(t)=L\cdot\frac{di_L(t)}{dt}
\end{equation}
The current can therefore also be determined by integration, whereby any direct current $i(t=0)$ already flowing beforehand must be added, since only the **change** in current induces a voltage.
\begin{equation}
i_L(t_1)=i_L(t=0)+\frac{1}{L}\int_0^{t_1}u_L(t)dt
\end{equation}
Since resistances (at least those of the wiring) are always present in a real system, the behavior of $u_L(t)$ and $i_L(t)$ can be studied in the following figure (there, since it comes from an English-language source, $V$ is used for the voltage).
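As a side note before the capacitor examples, here is a minimal sketch of the corresponding inductor calculation (the inductance and the test voltage are assumed values chosen only for illustration):
```python
# Minimal sketch (assumed values): inductor current obtained from the voltage by integration
L = 1e-3                               # 1 mH (assumed)
t = np.linspace(0, 20e-3, 1000)        # time from 0 to 20 ms
uL = 0.5*np.cos(2*np.pi*50*t)          # example voltage across the inductor
i0 = 0                                 # current already flowing at t = 0
iL = i0 + (1/L)*np.cumsum(uL*(t[1]-t[0]))
```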
## Capacitor behavior
### Constant charging current
```python
# Zeitvektor
t = np.linspace(0,20e-3,1000) # von 0 bis 20 ms in 1000 Schritten
# konstanter Strom
i1 = 0.1 # 100 mA
i1 = i1*np.ones(len(t)) # zu jedem Zeitpunkt konstant
# Kapazität
C = 1e-3 # 1 mF
# Spannung zu Beginn
u0 = 1 # 1 Volt
# Spannung bestimmen
u1 = u0 + (1/C)*np.cumsum(i1*(t[1]-t[0]))
# zwei Diagramme in einem mit subplot
plt.figure(figsize=(8,5))
plt.subplot(2,1,1)
# im Diagramm darstellen
plt.plot(1000*t,1000*i1)
# Diagramm beschriften
plt.ylabel('Strom $i_C$ [mA]')
# zweites Diagramm
plt.subplot(2,1,2)
# Spannung im Diagramm darstellen
plt.plot(1000*t,u1)
# Diagramm beschriften
plt.xlabel('Zeit $t$ [ms]')
plt.ylabel('Spannung $u_C$ [V]')
```
**Task: Set the initial voltage so that the capacitor is discharged at the end of the time interval under consideration.**
*Enter your result here.*
### Arbitrary charging current
```python
# Zeitvektor
t = np.linspace(0,20e-3,1000) # von 0 bis 20 ms in 1000 Schritten
# beliebiger zeitabhängiger Ladestrom
# - zum Auswählen des jeweiligen Stroms das # entfernen und
# - beim anderen Strom jeweils das # am Anfang ergänzen (auskommentieren)
i2 = 0.2*(np.random.rand(len(t))-.5) # Zufallszahlen um Nullpunkt herum
# i2 = 0.2*np.cos(2*np.pi*50*t) # Cosinus
# i2 = t**2 # Quadratische Abhängigkeit von der Zeit
# Kapazität
C = 1e-3 # 1 mF
# Spannung zu Beginn
u0 = 0 # 0 Volt
# Spannung bestimmen
u2 = u0 + (1/C)*np.cumsum(i2*(t[1]-t[0]))
# zwei Diagramme in einem mit subplot
plt.figure(figsize=(8,5))
plt.subplot(2,1,1)
# im Diagramm darstellen
plt.plot(1000*t,1000*i2)
# Diagramm beschriften
plt.ylabel('Strom $i_C$ [mA]')
# zweites Diagramm
plt.subplot(2,1,2)
# Spannung im Diagramm darstellen
plt.plot(1000*t,1000*u2)
# Diagramm beschriften
plt.xlabel('Zeit $t$ [ms]')
plt.ylabel('Spannung $u_C$ [mV]')
```
**Task: Find a function for the current for which the capacitor voltage periodically returns to zero.**
*Enter your result here.*
### Linearly increasing capacitor voltage
```python
# Zeitvektor
t = np.linspace(0,20e-3,1000) # von 0 bis 20 ms in 1000 Schritten
# linear steigende Kondensatorspannung
u3 = 0.1 + 0.001*t # 0,1 zu Beginn, danach mit 0,001 steigend
# Kapazität
C = 1e-3 # 1 mF
# Strom bestimmen
i3 = C*np.diff(u3)/(t[1]-t[0])   # i = C * du/dt
# noch einen Wert anhängen, da diff einen um 1 verkürzten Vektor ergibt
i3 = np.append(1e-12,i3)
# zwei Diagramme in einem mit subplot
plt.figure(figsize=(8,5))
plt.subplot(2,1,1)
# im Diagramm darstellen
plt.plot(1000*t,1e3*u3)
# Diagramm beschriften
plt.ylabel('Spannung $u_C$ [mV]')
# zweites Diagramm
plt.subplot(2,1,2)
# Spannung im Diagramm darstellen
plt.plot(1000*t,1e9*i3)
# Diagramm beschriften
plt.xlabel('Zeit $t$ [ms]')
plt.ylabel('Strom $i_C$ [nA]')
```
**Task: Produce the plot when the time axis runs to 100 ms instead of 20 ms. Does the current change?**
*Enter your result here.*
**Task: Specify an arbitrary function for the capacitor voltage and determine the corresponding current.**
*Enter your observations here.*
### Square-wave voltage as input
```python
t = np.linspace(0,20e-3,1000)
tau = 2e-3
u1 = .2
u2 = 1
tp = t[500]
u = np.piecewise(t,
[t <=tp,
(t>tp)
],
[lambda t: u1+(u2-u1)*(1-np.exp(-t/tau)),
lambda t: u1+(u2-u1)*np.exp(-(t-tp)/tau)]
)
utau1 = u1 + (u2-u1)*(1-np.exp(-tau/tau))
u3tau1 = u1 + (u2-u1)*(1-np.exp(-3*tau/tau))
utau2 = u1 + (u2-u1)*np.exp(-(tau)/tau)
u3tau2 = u1 + (u2-u1)*np.exp(-(3*tau)/tau)
plt.figure(figsize=(8,5))
plt.plot(1000*t,u,linewidth=2)
plt.plot(1000*t,u1*np.ones(len(t)),'k--')
plt.plot(1000*t,u2*np.ones(len(t)),'k--')
plt.text(0,-.08,'$0$')
plt.text(-1,u1,r'$U_1$')
plt.text(-1,u2,r'$U_2$')
plt.plot(1000*tau*np.ones(1000),np.linspace(0,1.1*u2,1000),'k--')
plt.text(900*tau,-.08,r'$\tau$')
plt.plot(3000*tau*np.ones(1000),np.linspace(0,1.1*u2,1000),'k--')
plt.text(2900*tau,-.08,r'$3\tau$')
plt.plot(1000*tp*np.ones(1000),np.linspace(0,1.1*u2,1000),'k--')
plt.text(1030*tp,.03,r'$t_p$')
plt.plot(1000*(tp+tau)*np.ones(1000),np.linspace(0,1.1*u2,1000),'k--')
plt.text(1000*tp+900*tau,-.08,r'$t_p+\tau$')
plt.annotate(s='', xy=(1000*tau,u2), xytext=(1000*tau,utau1), arrowprops=dict(arrowstyle='<->'))
plt.text(1100*tau,0.9*u2,r'$\frac{1}{e}\approx\frac{1}{3}$')
plt.annotate(s='', xy=(2400*tau,u2), xytext=(2400*tau,u1), arrowprops=dict(arrowstyle='<->',color='green'))
deltau1str=r"$\Delta U=$"
deltau1str+="\n"
deltau1str+=r"$=U_0 - U_\infty=$"
deltau1str+="\n"
deltau1str+="=$ U_1 - U_2$"
plt.text(2500*tau,2*u1,deltau1str,c='green')
plt.annotate(s='', xy=(3000*tau,u3tau1), xytext=(3000*tau,0.8*u3tau1), arrowprops=dict(arrowstyle='->'))
plt.annotate(s='', xy=(3000*tau,1.1*u2), xytext=(3000*tau,u2), arrowprops=dict(arrowstyle='<-'))
plt.text(3100*tau,1.02*u2,r'$\approx5\%$')
plt.text(450*tau,1.1*u1,r'$U_0$')
plt.text(880*tp,1.02*u2,r'$U_\infty$')
plt.text(1100*tp,1.02*u2,r'$U_0$')
plt.annotate(s='', xy=(1000*(tau+tp),utau2), xytext=(1000*(tau+tp),u2), arrowprops=dict(arrowstyle='<->'))
plt.text(1000*tp+1100*tau,0.9*u2,r'$1-\frac{1}{e}\approx\frac{2}{3}$')
plt.annotate(s='', xy=(1000*tp+2800*tau,u2), xytext=(1000*tp+2800*tau,u1), arrowprops=dict(arrowstyle='<->',color='green'))
deltau2str=r"$\Delta U=$"
deltau2str+="\n"
deltau2str+=r"$=U_0 - U_\infty=$"
deltau2str+="\n"
deltau2str+="=$ U_2 - U_1$"
plt.text(1000*tp+2900*tau,2*u1,deltau2str,c='green')
plt.xlabel('Zeit $t$')
plt.ylabel('Spannung $u_C(t)$')
plt.ylim(0,1.2*u2)
plt.xticks([])
plt.yticks([])
plt.grid()
plt.tight_layout()
```
**Task: In this figure, which you know from the lecture notes, you can now make changes: $\tau$ and $t_p$ can take on other values. How must you choose $\tau$ so that the voltage across the capacitor becomes practically rectangular? How must you choose it so that $U_2$ is not reached at the capacitor?**
*Enter your observations here.*
**Task: Transfer the sections on capacitor behavior to a new section that describes the behavior of an inductor.**
*Add your observations in each case as well.*
| 6d811f0868180c156fd498693e717ffeaa64e991 | 26,630 | ipynb | Jupyter Notebook | 03GE2_elektrotechnisch_integrieren_differenzieren.ipynb | johannamay/GE2 | 63958cc1fd0500814aa5f701f84c63f996b28baf | [
"MIT"
] | null | null | null | 03GE2_elektrotechnisch_integrieren_differenzieren.ipynb | johannamay/GE2 | 63958cc1fd0500814aa5f701f84c63f996b28baf | [
"MIT"
] | null | null | null | 03GE2_elektrotechnisch_integrieren_differenzieren.ipynb | johannamay/GE2 | 63958cc1fd0500814aa5f701f84c63f996b28baf | [
"MIT"
] | 3 | 2020-03-14T22:27:31.000Z | 2020-08-20T16:41:48.000Z | 55.711297 | 12,852 | 0.748329 | true | 2,996 | Qwen/Qwen-72B | 1. YES
2. YES | 0.90053 | 0.73412 | 0.661096 | __label__deu_Latn | 0.906835 | 0.37428 |
# Generating the input-output function $P(g\mid R, c)$ for varying repressor copy number $R$.
```python
import pickle
import os
import glob
import datetime
# Our numerical workhorses
import numpy as np
from sympy import mpmath
import scipy.optimize
import scipy.special
import scipy.integrate
import pandas as pd
import itertools
# Import libraries to parallelize processes
from joblib import Parallel, delayed
# Import the utils for this project
import chann_cap_utils as chann_cap
```
# Pre-computing analytical distributions of gene expession.
Since the computation of the mRNA and protein steady-state probability distributions is computationally expensive, we can pre-compute the distributions for different repressor copy numbers and save the results as lookup tables, from which we can compute any desired quantity, including the channel capacity and the variability in gene expression due to the stochasticity of the allosteric molecules.
This notebook achieves the simple task of computing the mRNA and protein distributions for different repressor copy numbers, saving the results into CSV files that we can read back with `numpy` (and, for the protein tables, `pandas`).
The mRNA matrices are arranged such that each column index corresponds to a repressor copy number and each row index to an mRNA copy number; the protein distributions are saved in long format, with one row per (repressor, protein) pair.
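For example, a saved mRNA table can be read back and sliced like this (a sketch; the file name follows the pattern used in the cells below):
```python
# Sketch: rows index the mRNA copy number, columns index the repressor copy number
QmR = np.loadtxt('../../tmp/QmR_O2_0_1000_literature_param.csv', delimiter=',')
p_m_R50 = QmR[:, 50]   # P(m | R = 50 repressors, c = 0)
```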
## Pre-computing the mRNA distribution
Let's start by saving the distribution for mRNA molecules.
```python
# Define the parameters
k0 = 2.7E-3 # Used by Jones and Brewster
# The MWC parameters come from the global fit to the O2 data
mRNA_params = dict(ka=0.199, ki=0.00064, omega=np.exp(-4.5),
k0=k0, gamma=0.00284, r_gamma=15.7)
```
```python
# Define the mRNA copy numbers to evaluate
# It is break up in blocks to run the process in parallel
mRNA_grid = np.reshape(np.arange(0, 50), [-1, 10])
# define the array of repressor copy numbers to evaluate the function in
R_array = np.arange(0, 1001)
kon_array = [chann_cap.kon_fn(-17, mRNA_params['k0']),
chann_cap.kon_fn(-15.3, mRNA_params['k0']),
chann_cap.kon_fn(-13.9, mRNA_params['k0']),
chann_cap.kon_fn(-9.7, mRNA_params['k0'])]
kon_operators = ['Oid', 'O1', 'O2', 'O3']
compute_matrix = True
if compute_matrix:
for j, kon in enumerate(kon_array):
print('operator : ' + kon_operators[j])
# Set the value for the kon
mRNA_params['kon'] = kon
# Initialize transition matrix
QmR = np.zeros([mRNA_grid.size, len(R_array)])
for i, r in enumerate(R_array):
if r%100==0:
print('repressors : {:d}'.format(r))
mRNA_params['rep'] = r * 1.66
# -- Parallel computation of distribution -- #
lnm_list = list()
# loop through the concentrations
# define a function to run in parallel the computation
def lnm_parallel(m):
lnm = chann_cap.log_p_m_mid_C(C=0, mRNA=m, **mRNA_params)
return lnm
lnm_list.append(Parallel(n_jobs=7)(delayed(lnm_parallel)(m) \
for m in mRNA_grid))
# -- Building and cleaning the transition matrix -- #
for k, lnm in enumerate(lnm_list):
# Initialize the matrix of zeros where the normalized
# distribution will live
p_norm = np.zeros_like(lnm)
p = np.exp(lnm)
# Compute the cumulative sum of the protein copy number
p_sum = np.cumsum(np.sum(p, axis=1))
# Find the first block that is already normalized given
# the tolerance value
norm_idx = np.where((p_sum <= 1 + 1E-5) & \
(p_sum >= 1 - 1E-5))[0][-1]
# add all the probability values of these blocks to our matrix
p_norm[0:norm_idx, :] = p[0:norm_idx, :]
QmR[:, i] = p_norm.ravel()
# Check that all distributions for each concentration are normalized
np.savetxt('../../tmp/QmR_' + kon_operators[j] +\
'_0_1000_literature_param.csv', QmR, delimiter=",")
```
### Pre-computing the protein distribution
```python
# Protein parameters
k0 = 2.7E-3 # From Jones & Brewster
prot_params = dict(ka=141.52, ki=0.56061, epsilon=4.5,
kon=chann_cap.kon_fn(-9.7, k0),
k0=k0,
gamma_m=0.00284, r_gamma_m=15.7,
gamma_p=0.000277, r_gamma_p=100)
```
```python
# Define the protein blocks to evaluate in parallel
# Break into blocks to compute the distributions in parallel
prot_grid = np.reshape(np.arange(0, 4000), [-1, 50])
# define the array of repressor copy numbers to evaluate the function in
R_array = np.arange(0, 1050)
# Setting the kon parameter based on k0 and the binding energies form stat. mech.
kon_array = [chann_cap.kon_fn(-13.9, prot_params['k0']),
chann_cap.kon_fn(-15.3, prot_params['k0']),
chann_cap.kon_fn(-9.7, prot_params['k0']),
chann_cap.kon_fn(-17, prot_params['k0'])]
kon_operators = ['O2', 'O1', 'O3', 'Oid']
kon_dict = dict(zip(kon_operators, kon_array))
compute_matrix = True
if compute_matrix:
for kon, op in enumerate(kon_operators):
print('operator : ' + op)
# Set the value for the kon
prot_params['kon'] = kon_dict[op]
# Define filename
file = '../../data/csv_protein_dist/lnp_' + op + '_DJ_RB.csv'
# If the file exists read the file, find the maximum number of repressors
# And compute from this starting point.
if os.path.isfile(file):
df = pd.read_csv(file, index_col=0)
max_rep = df.repressor.max()
df = df[df.repressor != max_rep]
df.to_csv(file)
r_array = np.arange(max_rep, np.max(R_array) + 1)
else:
r_array = R_array
# Loop through repressor copy numbers
for i, r in enumerate(r_array):
if r%50==0:
print('repressors : {:d}'.format(r))
prot_params['rep'] = r * 1.66
# -- Parallel computation of distribution -- #
# define a function to run in parallel the computation
def lnp_parallel(p):
lnp = chann_cap.log_p_p_mid_C(C=0, protein=p, **prot_params)
df = pd.DataFrame([r] * len(p), index=p, columns=['repressor'])
df.loc[:, 'protein'] = pd.Series(p, index=df.index)
df.loc[:, 'lnp'] = lnp
# if file does not exist write header
if not os.path.isfile(file):
df.to_csv(file)
else: # else it exists so append without writing the header
df.to_csv(file, mode='a', header=False)
Parallel(n_jobs=40)(delayed(lnp_parallel)(p) for p in prot_grid)
```
operator : O2
repressors : 0
# Cleaning up the lookup tables
These calculations can sometimes be numerically unstable due to the complicated confluent hypergeometric function. What can happen is that by the time the probability is essentially zero (i.e. $\ln P \ll 0$) there can be some "jumps" where the calculation overshoots. But this happens for probability values that should be very close to zero, so it is easy to discard these values.
We will define a function to pre-process these lookup tables.
```python
def pre_process_lnp(df, group_col='repressor', lnp_col='lnp',
output_col='prob', tol=-20):
'''
Pre-processes the lookup tables containing the log probability of a protein
copy number for different repressor copy numbers eliminating the values
that were numerically unstable, and returning the data frame with a column
containing the processed probability.
Parameters
----------
filename : df
Data frame containing the log probabilities.
group_col : str.
Name of the column in the data frame to be used to group the distributions
lnp_col : str.
Name of the column containing the log probability
output_col : str.
Name of the column that will contain the processed probability
tol : float.
log probability under which to consider values as probability zero.
This is important since some of the calculations go to < -300
Returns
-------
Pandas dataframe containing the processed probability.
'''
# Remove duplicated rows
df = df[[not x for x in df.duplicated()]]
# Group by group_col
df_group = df.groupby(group_col)
# Initialize data frame where to save the processed data
df_clean = pd.DataFrame(columns=df.columns)
# Loop through each group, computing the log probability making sure that
# There is no numerical overshoot and that the very small lnp are set to 0
# probability
for group, data in df_group:
data.sort(columns='protein', inplace=True)
# Set the new column to be all probability zero
data.loc[:, output_col] = [0.0] * len(data)
# Exponentiate the good log probabilities
data.loc[(data.lnp > tol) & (data.lnp < 0), output_col] =\
pd.Series(np.exp(data.loc[(data.lnp > tol) & (data.lnp < 0), lnp_col]))
        # Zero out values once the cumulative sum exceeds one
cumsum = np.cumsum(data[output_col])
data.loc[cumsum > 1, output_col] = 0
# Append to the clean data frame
df_clean = pd.concat([df_clean, data])
return df_clean
```
Having defined the function let's pre-process the matrices we generated.
```python
files = glob.glob('../../data/csv_protein_dist/*O3_all*.csv')
for f in files:
print(f)
df = pd.read_csv(f, header=0, index_col=0, comment='#')
df_clean = pre_process_lnp(df)
df_clean.to_csv(f)
```
../../data/csv_protein_dist/lnp_O3_all_RBS1027_fit.csv
/Users/razo/anaconda/lib/python3.5/site-packages/ipykernel/__main__.py:38: FutureWarning: sort(columns=....) is deprecated, use sort_values(by=.....)
/Users/razo/anaconda/lib/python3.5/site-packages/pandas/core/frame.py:3304: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
na_position=na_position)
/Users/razo/anaconda/lib/python3.5/site-packages/pandas/core/indexing.py:297: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
self.obj[key] = _infer_fill_value(value)
/Users/razo/anaconda/lib/python3.5/site-packages/pandas/core/indexing.py:477: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
self.obj[item] = s
```python
```
| 5929e76aed11c9ee0e0c49af4f6a9292f817e2e3 | 15,517 | ipynb | Jupyter Notebook | src/theory/sandbox/generating_input_output_matrix.ipynb | RPGroup-PBoC/chann_cap | f2a826166fc2d47c424951c616c46d497ed74b39 | [
"MIT"
] | 2 | 2020-08-21T04:06:12.000Z | 2022-02-09T07:36:58.000Z | src/theory/sandbox/generating_input_output_matrix.ipynb | RPGroup-PBoC/chann_cap | f2a826166fc2d47c424951c616c46d497ed74b39 | [
"MIT"
] | null | null | null | src/theory/sandbox/generating_input_output_matrix.ipynb | RPGroup-PBoC/chann_cap | f2a826166fc2d47c424951c616c46d497ed74b39 | [
"MIT"
] | 2 | 2020-04-29T17:43:28.000Z | 2020-09-09T00:20:16.000Z | 39.184343 | 418 | 0.552555 | true | 2,775 | Qwen/Qwen-72B | 1. YES
2. YES | 0.743168 | 0.692642 | 0.514749 | __label__eng_Latn | 0.940568 | 0.034264 |
# Implementing Walsh and Haar Transforms Using Python
## Table of Contents
* [Walsh Transform](#Walsh)
* [Introduction](#WalshIntroduction)
* [Python Implementation](#WalshImplementation)
* [Testing](#WalshTesting)
* [Haar Transform](#Haar)
* [Introduction](#HaarIntroduction)
* [Python Implementation](#HaarImplementation)
* [Testing](#HaarTesting)
<a name="Walsh"></a>
## The Walsh Transform
<a name="WalshIntroduction"></a>
### Introduction
The Walsh transform is one way of transforming a signal/image from the space domain to its corresponding frequency domain. The DFT is a complex, sinusoidal transform, while the DCT is a transform where the imaginary values are eliminated and the real ones are retained. The Walsh transform, in contrast, is a square-waveform transform. In addition, the basis/kernel functions in the Walsh transformation are orthogonal, orthonormal and symmetric; those in the DFT are also symmetric. The forward Walsh transform is as follows (assume that $N = 2^n$):
$
\begin{align}
H(u,v) = \frac{1}{N}\sum_{x=0}^{N-1}\sum_{y=0}^{N-1}f(x,y) (-1)^{ \beta } \;\\ where \; u,v=0,1,2,...N-1 \\
\beta = \sum_{i=0}^{n-1}b_i(x)b_i(u) + \sum_{i=0}^{n-1}b_i(y)b_i(v)
\end{align}
$
Since the kernels of the walsh transform are real and orthogonal, they are separable. The forward transform can be rewritten as:
$H(u,v) = k.f(x,y).k $
$ Where \; u,v=0,1,2,...N-1 \; and \; k = \sum_{i=0}^{n-1}b_i(x)b_i(u) $
Similarly, the inverse walsh transform will look as follows:
$f(x,y) = k.H(u,v).k $
This is exactly the same as that of the forward transformation as the kernels are real and symmetric.
<a name="WalshImplementation"></a>
### Python Implementation
First of all, let's import the necessary python libraries
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
#import matplotlib.image as img
import PIL.Image as Image
import math
import cmath
import time
import csv
from numpy import binary_repr
```
Now let's start with creating common image functions.
```python
def generateImagesWithResizedWhite(imge):
"""
Generates images with the same size as the original but with a resized white part of them.
"""
N = imge.shape[0]
imges = []
i = N/2
while i >= 4:
j = (N - i)/2
#Starting and ending indices for the white part.
indx1 = j
indx2 = j+i
#Draw the image.
imgeNew = np.zeros([N, N],dtype=int)
imgeNew[indx1:indx2, indx1:indx2] = np.ones([i, i], dtype=int)*255
#Add the image to the list.
imges.append(imgeNew)
i = i/2
return imges
def generateBlackAndWhiteSquareImage(imgSize):
"""
Generates a square-sized black and white image with a given input size.
Parameters
----------
imgSize : int
Input number that stores the dimension of the square image to be generated.
Returns
-------
imge : ndarray
The generated black and white square image.
"""
#Creating a matrix with a given size where all the stored values are only zeros (for initialization)
imge = np.zeros([imgSize, imgSize], dtype=int)
#Starting and ending indices of the white part of the image.
ind1 = imgSize/4
ind2 = ind1 + (imgSize/2)
#Make a part of the image as white (255)
imge[ind1:ind2, ind1:ind2] = np.ones([imgSize/2, imgSize/2], dtype=int)*255
#return the resulting image
return imge
def generateImages(imgSizes=[128, 64, 32, 16, 8]):
"""
Generates images of different sizes."""
#Create an empty list of images to save the generated images with different sizes.
images = []
#Generate the first and biggest image
imge = generateBlackAndWhiteSquareImage(imgSizes[0])
#Add to the images list
images.append(imge)
#Generate the resized and smaller images with different sizes.
for i in range(1, len(imgSizes)):
size = imgSizes[i]
images.append(resizeImage(imge, size))
return images
def resizeImage(imge, newSize):
"""
Reduces the size of the given image.
Parameters
----------
imge : ndarray
Input array that stores the image to be resized.
Returns
-------
newSize : int
The size of the newly generated image.
"""
#Compute the size of the original image (in this case, only # of rows as it is square)
N = imge.shape[0]
#The ratio of the original image as compared to the new one.
stepSize = N/newSize
#Creating a new matrix (image) with a black color (values of zero)
newImge = np.zeros([N/stepSize, N/stepSize])
#Average the adjacent four pixel values to compute the new intensity value for the new image.
for i in xrange(0, N, stepSize):
for j in xrange(0, N, stepSize):
newImge[i/stepSize, j/stepSize] = np.mean(imge[i:i+stepSize, j:j+stepSize])
#Return the new image
return newImge
```
Next, we are going to implement the walsh transform:
```python
class Walsh(object):
"""
This class Walsh implements all the procedures for transforming a given 2D digital image
into its corresponding frequency-domain image (Walsh Transform)
"""
@classmethod
def __computeBeta(self, u, x, n):
uBin = binary_repr(u, width=n)
xBin = binary_repr(x, width=n)
beta = 0
for i in xrange(n):
beta += (int(xBin[i])*int(uBin[i]))
return beta
#Compute walsh kernel (there is only a single kernel for forward and inverse transform
#as it is both orthogonal and symmetric).
@classmethod
def computeKernel(self, N):
"""
Computes/generates the walsh kernel function.
Parameters
----------
N : int
Size of the kernel to be generated.
Returns
-------
kernel : ndarray
The generated kernel as a matrix.
"""
#Initialize the kernel
kernel = np.zeros([N, N])
#Compute each value of the kernel...
n = int(math.log(N, 2))
for u in xrange(N):
for x in xrange(N):
beta = Walsh.__computeBeta(u, x, n)
kernel[u, x] = (-1)**beta
#To make the kernel orthonormal, we can divide it by sqrt(N)
#kernel /= math.sqrt(N)
#Return the resulting kernel
return kernel
@classmethod
def computeForward2DWalshTransform(self, imge):
"""
Computes/generates the 2D Walsh transform.
Parameters
----------
imge : ndarray
The input image to be transformed.
Returns
-------
final2DWalsh : ndarray
The transformed image.
"""
N = imge.shape[0]
kernel = Walsh.computeKernel(N)
imge1DWalsh = np.dot(kernel, imge)
final2DWalsh = np.dot(imge1DWalsh, kernel)
return final2DWalsh/N
@classmethod
def computeInverse2DWalshTransform(self, imgeWalsh):
"""
Computes/generates the inverse of 2D Walsh transform.
Parameters
----------
imgeWalsh : ndarray
The Walsh transformed image.
Returns
-------
imgeInverse : ndarray
The inverse of the transformed image.
"""
N = imgeWalsh.shape[0]
kernel = Walsh.computeKernel(N)
imge1DInverse = np.dot(kernel, imgeWalsh)
imgeInverse = np.dot(imge1DInverse, kernel)
return imgeInverse/N
```
<a name="WalshTesting"></a>
### Testing the Code
Let's try to compute the 4x4 walsh kernel.
```python
Walsh.computeKernel(4)
```
array([[ 1., 1., 1., 1.],
[ 1., -1., 1., -1.],
[ 1., 1., -1., -1.],
[ 1., -1., -1., 1.]])
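Since the kernel entries are $\pm1$ and the rows are mutually orthogonal, we have $K K^T = N I$; a quick sanity check (sketch):
```python
K = Walsh.computeKernel(8)
print(np.allclose(np.dot(K, K.T), 8*np.identity(8)))
```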
Note: if we want to make the kernel orthonormal, we can divide it by $\sqrt N$. Now, we will compute the Walsh transform of a given image. First, we will read an image from a file.
```python
#Read an image file
imgeCameraman = Image.open("Images/cameraman.tif") # open an image
#Convert the image file to a matrix
imgeCameraman = np.array(imgeCameraman)
```
```python
imgeWalsh = Walsh.computeForward2DWalshTransform(imgeCameraman)
```
The inverse of the transformed image can also be computed as follows:
```python
imgeInverse = Walsh.computeInverse2DWalshTransform(imgeWalsh)
```
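Because the kernel is real and symmetric with $KK = NI$, the inverse should reproduce the original image up to floating-point round-off. A quick check (sketch):
```python
print(np.allclose(imgeCameraman, imgeInverse))
```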
The minimum and maximum values of the transformation are:
```python
np.min(np.absolute(imgeWalsh)), np.max(np.absolute(imgeWalsh))
```
(0.0, 60576.76953125)
We can visualize the original, the transformed and inverse transformed images as follows:
```python
fig, axarr = plt.subplots(1, 3, figsize=(10,7))
axarr[0].imshow(imgeCameraman, cmap=plt.get_cmap('gray'))
axarr[0].set_title('Original Image')
axarr[1].imshow(np.absolute(imgeWalsh), cmap=plt.get_cmap('gray'))
axarr[1].set_title('Forward Transformed Image')
axarr[2].imshow(imgeInverse, cmap=plt.get_cmap('gray'))
axarr[2].set_title("Inverse Transformed Image")
plt.show()
```
Now, let's compute the running time of the Walsh transform using images of different sizes.
```python
#Generate images
imgSizes = [128, 64, 32, 16, 8]
images = generateImages(imgSizes)
```
```python
# A list that stores the running time of the Walsh transform for images of different sizes.
runningTimeWalsh = []
#For each image...
for i, imge in enumerate(images):
#Compute the image size
N = imge.shape[0]
print "Computing for ", N, "x", N, "image..."
#Save the starting time.
startTime = time.time()
    #Compute the Walsh transform of the image.
walshImge = Walsh.computeForward2DWalshTransform(imge)
#Save the running time
runningTimeWalsh.append((time.time() - startTime)/60.0)
```
Computing for 128 x 128 image...
Computing for 64 x 64 image...
Computing for 32 x 32 image...
Computing for 16 x 16 image...
Computing for 8 x 8 image...
```python
result = zip(imgSizes, runningTimeWalsh)
np.savetxt("RunningTimes/runningTimeWalsh.csv", np.array(result), delimiter=',')
```
The running time will be visualized and compared with the other transformation methods in the next section. As will be shown, Walsh is faster than the DFT because it uses only square waveforms, whereas the DFT is a complex, sinusoidal transformation.
<a name="Haar"></a>
## The Haar Transform
<a name="HaarIntroduction"></a>
### Introduction
The Haar transform is another transformation method. In contrast to Walsh, the Haar kernel functions are not symmetric, so we need to use the transpose of the kernel for the second multiplication in the 2D transform.
The forward transform is computed as follows:
$ H(0, 0, x) = \frac{1}{N} $
$
H(r, m, x) = \left\{
\begin{array}{l l}
\frac{2^{r/2}}{\sqrt N} & \quad \text{$ \frac{m-1}{2^r} \leq x \leq \frac{m-1/2}{2^r}$ } \\
-\frac{2^{r/2}}{\sqrt N} & \quad \text{$ \frac{m-1/2}{2^r} \leq x \leq \frac{m}{2^r}$ }\\
0 & \quad \text{otherwise}
\end{array} \right.
$
Where $0 \leq x < 1$, $0 \leq r < log_2 N $ and $1 \leq m \leq 2^r$
<a name="HaarImplementation"></a>
### Python Implementation
It's implementation is as follows:
```python
class Haar(object):
"""
This class Haar implements all the procedures for transforming a given 2D digital image
into its corresponding frequency-domain image (Haar Transform)
"""
#Compute the Haar kernel.
@classmethod
def computeKernel(self, N):
"""
Computes/generates the haar kernel function.
Parameters
----------
N : int
Size of the kernel to be generated.
Returns
-------
kernel : ndarray
The generated kernel as a matrix.
"""
i = 0
kernel = np.zeros([N, N])
n = int(math.log(N, 2))
#Fill for the first row of the kernel
for j in xrange(N):
kernel[i, j] = 1.0/math.sqrt(N)
# For the other rows of the kernel....
i += 1
for r in xrange(n):
for m in xrange(1, (2**r)+1):
j=0
for x in np.arange(0, 1, 1.0/N):
if (x >= (m-1.0)/(2**r)) and (x < (m-0.5)/(2**r)):
kernel[i, j] = (2.0**(r/2.0))/math.sqrt(N)
elif (x >= (m-0.5)/(2**r)) and (x < m/(2.0**r)):
kernel[i, j] = -(2.0**(r/2.0))/math.sqrt(N)
else:
kernel[i, j] = 0
j += 1
i += 1
return kernel
@classmethod
def computeForward2DHaarTransform(self, imge):
"""
Computes/generates the 2D Haar transform.
Parameters
----------
imge : ndarray
The input image to be transformed.
Returns
-------
final2DHaar : ndarray
The transformed image.
"""
N = imge.shape[0]
kernel = Haar.computeKernel(N)
imge1DHaar = np.dot(kernel, imge)
#Transpose the kernel as it is not symmetric
final2DHaar = np.dot(imge1DHaar, kernel.T)
return final2DHaar/N
@classmethod
def computeInverse2DHaarTransform(self, imgeHaar):
"""
Computes/generates the inverse of 2D Haar transform.
Parameters
----------
imgeHaar : ndarray
The Haar transformed image.
Returns
-------
imgeInverse : ndarray
The inverse of the transformed image.
"""
N = imgeHaar.shape[0]
kernel = Haar.computeKernel(N)
imge1DInverse = np.dot(kernel.T, imgeHaar)
imgeInverse = np.dot(imge1DInverse, kernel)
return imgeInverse/N
```
<a name="HaarTesting"></a>
### Testing the Code
Now let's try the 4x4 haar kernel:
```python
xKernel = Haar.computeKernel(4)
xKernel
```
array([[ 0.5 , 0.5 , 0.5 , 0.5 ],
[ 0.5 , 0.5 , -0.5 , -0.5 ],
[ 0.70710678, -0.70710678, 0. , 0. ],
[ 0. , 0. , 0.70710678, -0.70710678]])
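The rows of this kernel are orthonormal, i.e. $K K^T = I$, which we can verify numerically (a quick sketch):
```python
print(np.allclose(np.dot(xKernel, xKernel.T), np.identity(4)))
```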
Now, we will compute the haar transform of a given image.
```python
imgeHaar = Haar.computeForward2DHaarTransform(imgeCameraman)
```
Now, let's return back to its original form by using the inverse Haar transform
```python
imgeHaarInverse = Haar.computeInverse2DHaarTransform(imgeHaar)
```
The range of the results of the Haar transform is:
```python
np.min(np.absolute(imgeHaar)), np.max(np.absolute(imgeHaar))
```
(0.0, 118.31400299072266)
We can now visualize the original, forward and inverse transformed images as follows:
```python
fig, axarr = plt.subplots(1, 3, figsize=(10,7))
axarr[0].imshow(imgeCameraman, cmap=plt.get_cmap('gray'))
axarr[0].set_title('Original Image')
axarr[1].imshow(np.absolute(imgeHaar), cmap=plt.get_cmap('gray'))
axarr[1].set_title('Forward Transformed Image')
axarr[2].imshow(imgeHaarInverse, cmap=plt.get_cmap('gray'))
axarr[2].set_title("Inverse Transformed Image")
plt.show()
```
Now, we will compute the running time of Haar Transform using images with different sizes:
```python
#Generate images
imgSizes = [128, 64, 32, 16, 8]
images = generateImages(imgSizes)
# A list that stores the running time of the Haar transform for images of different sizes.
runningTimeHaar = []
#For each image...
for i, imge in enumerate(images):
#Compute the image size
N = imge.shape[0]
print "Computing for ", N, "x", N, "image..."
#Save the starting time.
startTime = time.time()
#Compute the Haar of the image.
haarImge = Haar.computeForward2DHaarTransform(imge)
#Save the running time
runningTimeHaar.append((time.time() - startTime)/60.0)
result = zip(imgSizes, runningTimeHaar)
np.savetxt("RunningTimes/runningTimeHaar.csv", np.array(result), delimiter=',')
```
Computing for 128 x 128 image...
Computing for 64 x 64 image...
Computing for 32 x 32 image...
Computing for 16 x 16 image...
Computing for 8 x 8 image...
```python
#Load the running time for DFT, Walsh, Haar Transforms
runningTimeDFT = np.loadtxt("RunningTimes/runningTimeDFT.csv", delimiter =',')
runningTimeWalsh = np.loadtxt("RunningTimes/runningTimeWalsh.csv", delimiter =',')
runningTimeHaar = np.loadtxt("RunningTimes/runningTimeHaar.csv", delimiter =',')
```
```python
#Plot the running times
plt.plot(xrange(runningTimeDFT.shape[0]), runningTimeDFT[:,1], '-d')
plt.hold
plt.plot(xrange(runningTimeWalsh.shape[0]), runningTimeWalsh[:,1], '-d')
plt.plot(xrange(runningTimeHaar.shape[0]), runningTimeHaar[:,1], '-d')
xlabels = [str(int(imge)) + 'x' + str(int(imge)) for imge in runningTimeDFT[:, 0]]
print xlabels
plt.xticks(xrange(len(runningTimeDFT[:, 0])), xlabels)
plt.xlabel("Image Size(Pixels)")
plt.ylabel("Time(Sec)")
plt.legend(['DFT', 'Walsh', 'Haar'])
plt.show()
```
As we can see from the graph above, both Walsh and Haar are faster than the DFT, but they are not as precise, because the DFT uses complex sinusoidal basis functions.
```python
```
| 901b0b0f78853e41f5a20f2ab469e74ab1b44bc7 | 179,921 | ipynb | Jupyter Notebook | Notebooks_Teoricos/Image-Processing-Operations/04-Implementing-Walsh-Haar-Transform-Using-Python.ipynb | lucas-althoff/PDI-UnB | eae5de886739807bd7f66d5cb9dbe7b541efa4ff | [
"MIT"
] | null | null | null | Notebooks_Teoricos/Image-Processing-Operations/04-Implementing-Walsh-Haar-Transform-Using-Python.ipynb | lucas-althoff/PDI-UnB | eae5de886739807bd7f66d5cb9dbe7b541efa4ff | [
"MIT"
] | null | null | null | Notebooks_Teoricos/Image-Processing-Operations/04-Implementing-Walsh-Haar-Transform-Using-Python.ipynb | lucas-althoff/PDI-UnB | eae5de886739807bd7f66d5cb9dbe7b541efa4ff | [
"MIT"
] | null | null | null | 175.704102 | 67,674 | 0.882471 | true | 4,679 | Qwen/Qwen-72B | 1. YES
2. YES | 0.879147 | 0.903294 | 0.794128 | __label__eng_Latn | 0.82109 | 0.683358 |
---
title: Monte Carlo Integration
summary: working out a variation metric for the IC using monte carlo integration of a toy problem
---
# toy 2D → 3D problem
```python
import numpy as np
import numba
```
### exact solution
Our map is a simple one, from 2D $\boldsymbol{z}$ space to 3D $\boldsymbol{x}$ space. We endow the 2D latent space with a Gaussian density, i.e.
\begin{align}
\rho(\boldsymbol{z}) &= \frac{1}{2\pi} e^{-\tfrac{1}{2} |\boldsymbol{z}|^2 }
\end{align}
As a toy map, take
\begin{align}
\boldsymbol{z} &= (z_1, z_2) \\
\boldsymbol{x}(\boldsymbol{z}) &= \tfrac{1}{\sqrt[4]{2}} \left(\tfrac{1}{\sqrt{2}}z_1^2, \tfrac{1}{\sqrt{2}}z_2^2, z_1 z_2 \right)
\end{align}
The Jacobian of this transformation is
\begin{align}
J(\boldsymbol{x}) &= \frac{d \boldsymbol{x}}{d \boldsymbol{z}} \\
&= \sqrt[4]{2} \pmatrix{ z_1 & 0 \cr
0 & z_2 \cr
\tfrac{1}{\sqrt{2}}z_2 & \tfrac{1}{\sqrt{2}}z_1 }
\end{align}
A scalar measure of the Jacobian we can use is $D(\boldsymbol{z}) = \sqrt{ \det{ \left( J(\boldsymbol{z})^T J(\boldsymbol{z}) \right)} }$ (see https://en.wikipedia.org/w/index.php?title=Determinant).
\begin{align}
J(\boldsymbol{x})^T J(\boldsymbol{x}) &= \sqrt{2}
\pmatrix{z_1 & 0 & \tfrac{1}{\sqrt{2}}z_2 \cr
0 & z_2 & \tfrac{1}{\sqrt{2}}z_1}
\pmatrix{z_1 & 0 \cr 0 & z_2 \cr
\tfrac{1}{\sqrt{2}}z_2 & \tfrac{1}{\sqrt{2}}z_1} \\
&= \pmatrix{\sqrt{2}z_1^2 + \tfrac{1}{\sqrt{2}}z_2^2 & \tfrac{1}{\sqrt{2}} z_1 z_2 \cr
\tfrac{1}{\sqrt{2}} z_1 z_2 & \tfrac{1}{\sqrt{2}}z_1^2 + \sqrt{2} z_2^2 }
\end{align}
Taking the determinant we get
\begin{align}
\det{ \left( J(\boldsymbol{z})^T J(\boldsymbol{z}) \right)} &= z_1^4 + 2 z_1^2 z_2^2 + z_2^4 \\
&= \left( z_1^2 + z_2^2 \right)^2
\end{align}
Our measure is then simply
\begin{align}
D(\boldsymbol{z}) &= \sqrt{ \det{ \left( J(\boldsymbol{z})^T J(\boldsymbol{z}) \right)} } \\
&= \sqrt{\left( z_1^2 + z_2^2 \right)^2} \\
&= z_1^2 + z_2^2
\end{align}
To find the expectation of this measure over the entire $\boldsymbol{z}$ space, we integrate over the space, weighting by the density of $\boldsymbol{z}$ in that space:
\begin{align}
\Bbb E [ D(\boldsymbol{z}) ] &= \int D(\boldsymbol{z}) \rho(\boldsymbol{z}) d\boldsymbol{z} \\
&= \int_{-\infty}^\infty \int_{-\infty}^\infty \left( z_1^2 + z_2^2 \right)
\frac{1}{2\pi} e^{-\tfrac{1}{2} (z_1^2 + z_2^2) } d z_1 d z_2 \\
&= \int_{0}^{2\pi} \int_{0}^\infty \frac{1}{2\pi} e^{ -\tfrac{1}{2} r^2 } r^2 r dr d\theta \\
&= \int_{0}^\infty e^{ -\tfrac{1}{2} r^2 } r^3 dr \\
&= 2
\end{align}
### numerical solution
```python
@numba.jit(nopython=True)
def x(z):
z1,z2 = z
return np.array( [z1**2/np.sqrt(2.0), z2**2/np.sqrt(2.0), z1*z2] ) / np.sqrt(np.sqrt(2.0))
```
```python
@numba.jit(nopython=True)
def sqrt_det_jacobian(z, delta=1e-6):
J = np.zeros((len(x(z)),len(z)))
for i,z_i in enumerate(z):
z_delta = z.copy()
z_delta[i] += delta
J[:,i] = (x(z_delta)-x(z))/delta
det_JtJ = np.linalg.det( np.dot(J.transpose(), J) )
return np.sqrt(det_JtJ)
```
```python
z_points = np.random.randn(1024,2)
jac_samples = np.zeros(len(z_points))
for i,zi in enumerate(z_points):
jac_samples[i] = sqrt_det_jacobian(zi,delta=1e-9)
np.mean(jac_samples)
```
2.0324136516733455
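Since the exact measure for this toy map is $D(\boldsymbol{z}) = z_1^2 + z_2^2$, we can also estimate the expectation directly from the same samples as a cross-check (sketch):
```python
np.mean(np.sum(z_points**2, axis=1))   # should also be close to 2
```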
### notes re pytorch
https://discuss.pytorch.org/t/clarification-using-backward-on-non-scalars/1059
https://discuss.pytorch.org/t/more-efficient-implementation-of-jacobian-matrix-computation/6960
```python
```
| e1679e51ebfdc1179f6a11e189046fd69aea1d0d | 6,693 | ipynb | Jupyter Notebook | assets/notebooks/2017-10-28-Monte_Carlo_Integration.ipynb | AllenCellModeling/AllenCellModeling.github.io | fcda8609d4840f5329560524516eab59a1699bc8 | [
"MIT"
] | 9 | 2018-07-21T14:16:23.000Z | 2020-08-10T20:52:55.000Z | assets/notebooks/2017-10-28-Monte_Carlo_Integration.ipynb | AllenCellModeling/AllenCellModeling.github.io | fcda8609d4840f5329560524516eab59a1699bc8 | [
"MIT"
] | 3 | 2018-08-15T17:37:13.000Z | 2020-07-09T08:49:11.000Z | assets/notebooks/2017-10-28-Monte_Carlo_Integration.ipynb | AllenCellModeling/AllenCellModeling.github.io | fcda8609d4840f5329560524516eab59a1699bc8 | [
"MIT"
] | 5 | 2018-08-21T19:44:29.000Z | 2021-03-12T19:43:24.000Z | 32.64878 | 208 | 0.452861 | true | 1,434 | Qwen/Qwen-72B | 1. YES
2. YES | 0.924142 | 0.857768 | 0.792699 | __label__kor_Hang | 0.133353 | 0.680039 |
# Nonlinear Programming
xyfJASON
## 1 Overview
If the objective function or the constraints contain nonlinear functions, the problem is called a nonlinear programming problem.
There is no universal algorithm; each method has its own specific range of applicability.
## 2 Algorithms and Code
We use `scipy.optimize.minimize`, which provides many optimization methods (a minimal call pattern is sketched after the table below).
Documentation: https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html
| Method | Constraints | Algorithm | Needs gradient?<br>Needs Hessian? | Notes |
| :----------: | :------: | :---------------------------------------: | :------------------------------: | :----------------------------------------------------------: |
| CG | unconstrained | nonlinear conjugate<br>gradient | yes; no | |
| BFGS | unconstrained | quasi-Newton | yes; no | performs well even on non-smooth problems<br>also returns an approximate inverse Hessian |
| Newton-CG | unconstrained | truncated Newton | yes; yes | suitable for large-scale problems |
| dogleg | unconstrained | dog-leg trust-region | yes; yes<br>(must be positive definite) | |
| trust-ncg | unconstrained | Newton conjugate<br>gradient trust-region | yes; yes | suitable for large-scale problems |
| trust-krylov | unconstrained | Newton GLTR<br>trust-region | yes; yes | suitable for large-scale problems<br>recommended for medium and large problems |
| trust-exact | unconstrained | trust-exact | yes; yes | most recommended for small and medium problems |
| Nelder-Mead | bound constraints | Simplex | no; no | works for many applications<br>less accurate than derivative-based methods |
| L-BFGS-B | bound constraints | L-BFGS-B | yes; no | also returns an approximate inverse Hessian |
| Powell | bound constraints | conjugate direction | no; no | objective function need not be differentiable |
| TNC | bound constraints | truncated Newton | yes; no | wraps a C implementation<br>bound-constrained version of Newton-CG |
| COBYLA | general constraints | COBYLA | no; no | wraps a FORTRAN implementation<br>supports only inequality (greater-or-equal) constraints |
| SLSQP | general constraints | Sequential Least<br>SQuares Programming | yes; no<br>also needs constraint gradients | |
| trust-constr | general constraints | trust-region | yes; yes | switches automatically between two methods depending on the problem<br>the most versatile constrained-optimization implementation<br>the most suitable method for large-scale problems |
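For reference, a minimal call looks like the sketch below (a toy quadratic objective of our own choosing); the worked examples that follow show the full patterns with gradients, Hessians, bounds and constraints.
```python
from scipy.optimize import minimize

# Sketch: unconstrained minimization of a toy quadratic
res = minimize(fun=lambda x: (x[0] - 1)**2 + (x[1] + 2)**2,
               x0=[0, 0],
               method='BFGS')
print(res.x)   # approximately [1, -2]
```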
## 3 Examples
### 3.1 Example 1
Find the minimum of the function $f(x)=100(x_2-x_1^2)^2+(1-x_1)^2$.
This is an unconstrained problem and both the gradient and the Hessian are easy to compute, so we use the `trust-exact` method:
$$
\nabla f(x)=\begin{bmatrix}-400(x_2-x_1^2)x_1-2(1-x_1)\\200(x_2-x_1^2)\end{bmatrix}
$$
$$
Hessian(f)=\begin{bmatrix}
-400(x_2-x_1^2)+800x_1^2+2&-400x_1\\
-400x_1&200
\end{bmatrix}
$$
```python
import numpy as np
from scipy.optimize import minimize
def f(x):
return 100*(x[1]-x[0]*x[0])**2+(1-x[0])**2
def grad(x):
g = np.zeros(2)
g[0] = -400*(x[1]-x[0]*x[0])*x[0]-2*(1-x[0])
g[1] = 200*(x[1]-x[0]*x[0])
return g
def hessian(x):
h = np.zeros((2, 2))
h[0, 0] = -400*(x[1]-x[0]*x[0])+800*x[0]*x[0]+2
h[0, 1] = -400 * x[0]
h[1, 0] = -400 * x[0]
h[1, 1] = 200
return h
res = minimize(fun=f,
x0=np.zeros(2),
method='trust-exact',
jac=grad,
hess=hessian)
print(res)
```
fun: 1.1524542015768823e-13
hess: array([[ 801.99967846, -399.99991693],
[-399.99991693, 200. ]])
jac: array([ 1.03264509e-05, -5.37090168e-06])
message: 'Optimization terminated successfully.'
nfev: 18
nhev: 18
nit: 17
njev: 15
status: 0
success: True
x: array([0.99999979, 0.99999956])
### 3.2 Example 2
Find the minimum of $f(x)=(x-3)^2-1$ for $x\in[0,5]$.
Compute the gradient:
$$
\nabla f(x)=2(x-3)
$$
This is a bound-constrained problem, so we can use the L-BFGS-B method:
```python
def f(x):
return (x-3)**2-1
def grad(x):
return 2*(x-3)
res = minimize(fun=f,
x0=np.array([0]),
method='L-BFGS-B',
jac=grad,
bounds=[(0, 5)])
print(res)
```
fun: array([-1.])
hess_inv: <1x1 LbfgsInvHessProduct with dtype=float64>
jac: array([0.])
message: 'CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL'
nfev: 3
nit: 2
njev: 3
status: 0
success: True
x: array([3.])
### 3.3 Example 3
Given $f(x)=e^{x_1}(4x_1^2+2x_2^2+4x_1x_2+2x_2+1)$, solve
$$
\begin{align}
&\min f(x)\\
&\text{s.t.}\begin{cases}
x_1x_2-x_1-x_2\leqslant -1.5\\
x_1x_2\geqslant -10
\end{cases}
\end{align}
$$
The gradient is straightforward, so let's compute it first:
$$
\nabla f(x)=
\begin{bmatrix}
e^{x_1}(4x_1^2+2x_2^2+4x_1x_2+8x_1+6x_2+1)\\
e^{x_1}(4x_1+4x_2+2)
\end{bmatrix}
$$
This is a constrained problem, so we can use the SLSQP method:
```python
def f(x):
return np.e ** x[0] * (4 * x[0] * x[0] + 2 * x[1] * x[1] + 4 * x[0] * x[1] + 2 * x[1] + 1)
def grad(x):
g = np.zeros(2)
g[0] = np.e ** x[0] * (4 * x[0] * x[0] + 2 * x[1] * x[1] + 4 * x[0] * x[1] + 8 * x[0] + 6 * x[1] + 1)
g[1] = np.e ** x[0] * (4 * x[0] + 4 * x[1] + 2)
return g
def get_constr():
def constr_f1(x):
return x[0] + x[1] - x[0] * x[1] - 1.5
def constr_grad1(x):
return np.array([1 - x[1], 1 - x[0]])
def constr_f2(x):
return x[0] * x[1] + 10
def constr_grad2(x):
return np.array([x[1], x[0]])
c = [
dict(type='ineq',
fun=constr_f1,
jac=constr_grad1),
dict(type='ineq',
fun=constr_f2,
jac=constr_grad2)
]
return c
constr = get_constr()
res = minimize(fun=f,
x0=np.array([-2, 2]),
method='SLSQP',
jac=grad,
constraints=constr)
print(res)
```
fun: 0.02355037962417156
jac: array([ 0.01839705, -0.00228436])
message: 'Optimization terminated successfully'
nfev: 9
nit: 8
njev: 8
status: 0
success: True
x: array([-9.54740503, 1.04740503])
| cea86800769d30a017beaf353bca6aa5701480eb | 9,536 | ipynb | Jupyter Notebook | Mathematical Programming/Nonlinear Programming.ipynb | FinCreWorld/Mathematical-Modeling-with-Python | d5206309bce32f2aa64fe94ab4e8a576add0e628 | [
"MIT"
] | null | null | null | Mathematical Programming/Nonlinear Programming.ipynb | FinCreWorld/Mathematical-Modeling-with-Python | d5206309bce32f2aa64fe94ab4e8a576add0e628 | [
"MIT"
] | 1 | 2021-08-21T09:36:54.000Z | 2021-08-21T09:36:54.000Z | Mathematical Programming/Nonlinear Programming.ipynb | FinCreWorld/Mathematical-Modeling-with-Python | d5206309bce32f2aa64fe94ab4e8a576add0e628 | [
"MIT"
] | 3 | 2021-08-21T09:25:22.000Z | 2021-08-29T12:04:49.000Z | 31.061889 | 178 | 0.369232 | true | 2,422 | Qwen/Qwen-72B | 1. YES
2. YES | 0.861538 | 0.857768 | 0.739 | __label__yue_Hant | 0.37403 | 0.555276 |
###### Content under Creative Commons Attribution license CC-BY 4.0, code under MIT license (c)2014 L.A. Barba, G.F. Forsyth, C. Cooper. Based on [CFDPython](https://github.com/barbagroup/CFDPython), (c)2013 L.A. Barba, also under CC-BY license.
# Space & Time
## Burgers' Equation
Hi there! We have reached the final lesson of the series *Space and Time — Introduction to Finite-difference solutions of PDEs*, the second module of ["Practical Numerical Methods with Python"](https://openedx.seas.gwu.edu/courses/course-v1:MAE+MAE6286+2017/about).
We have learned about the finite-difference solution for the linear and non-linear convection equations and the diffusion equation. It's time to combine all these into one: *Burgers' equation*. The wonders of *code reuse*!
Before you continue, make sure you have completed the previous lessons of this series, it will make your life easier. You should have written your own versions of the codes in separate, clean Jupyter Notebooks or Python scripts.
You can read about Burgers' Equation on its [wikipedia page](http://en.wikipedia.org/wiki/Burgers'_equation).
Burgers' equation in one spatial dimension looks like this:
$$
\begin{equation}
\frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} = \nu \frac{\partial ^2u}{\partial x^2}
\end{equation}
$$
As you can see, it is a combination of non-linear convection and diffusion. It is surprising how much you learn from this neat little equation!
We can discretize it using the methods we've already detailed in the previous notebooks of this module. Using forward difference for time, backward difference for space and our 2nd-order method for the second derivatives yields:
$$
\begin{equation}
\frac{u_i^{n+1}-u_i^n}{\Delta t} + u_i^n \frac{u_i^n - u_{i-1}^n}{\Delta x} = \nu \frac{u_{i+1}^n - 2u_i^n + u_{i-1}^n}{\Delta x^2}
\end{equation}
$$
As before, once we have an initial condition, the only unknown is $u_i^{n+1}$. We will step in time as follows:
$$
\begin{equation}
u_i^{n+1} = u_i^n - u_i^n \frac{\Delta t}{\Delta x} (u_i^n - u_{i-1}^n) + \nu \frac{\Delta t}{\Delta x^2}(u_{i+1}^n - 2u_i^n + u_{i-1}^n)
\end{equation}
$$
### Initial and Boundary Conditions
To examine some interesting properties of Burgers' equation, it is helpful to use different initial and boundary conditions than we've been using for previous steps.
The initial condition for this problem is going to be:
$$
\begin{eqnarray}
u &=& -\frac{2 \nu}{\phi} \frac{\partial \phi}{\partial x} + 4 \\\
\phi(t=0) = \phi_0 &=& \exp \bigg(\frac{-x^2}{4 \nu} \bigg) + \exp \bigg(\frac{-(x-2 \pi)^2}{4 \nu} \bigg)
\end{eqnarray}
$$
This has an analytical solution, given by:
$$
\begin{eqnarray}
u &=& -\frac{2 \nu}{\phi} \frac{\partial \phi}{\partial x} + 4 \\\
\phi &=& \exp \bigg(\frac{-(x-4t)^2}{4 \nu (t+1)} \bigg) + \exp \bigg(\frac{-(x-4t -2 \pi)^2}{4 \nu(t+1)} \bigg)
\end{eqnarray}
$$
The boundary condition will be:
$$
\begin{equation}
u(0) = u(2\pi)
\end{equation}
$$
This is called a *periodic* boundary condition. Pay attention! This will cause you a bit of headache if you don't tread carefully.
### Saving Time with SymPy
The initial condition we're using for Burgers' Equation can be a bit of a pain to evaluate by hand. The derivative $\frac{\partial \phi}{\partial x}$ isn't too terribly difficult, but it would be easy to drop a sign or forget a factor of $x$ somewhere, so we're going to use SymPy to help us out.
[SymPy](http://sympy.org/en/) is the symbolic math library for Python. It has a lot of the same symbolic math functionality as Mathematica with the added benefit that we can easily translate its results back into our Python calculations (it is also free and open source).
Start by loading the SymPy library, together with our favorite library, NumPy.
```python
import numpy
import sympy
from matplotlib import pyplot
%matplotlib inline
```
```python
# Set the font family and size to use for Matplotlib figures.
pyplot.rcParams['font.family'] = 'serif'
pyplot.rcParams['font.size'] = 16
```
We're also going to tell SymPy that we want all of its output to be rendered using $\LaTeX$. This will make our Notebook beautiful!
```python
sympy.init_printing()
```
Start by setting up symbolic variables for the three variables in our initial condition. It's important to recognize that once we've defined these symbolic variables, they function differently than "regular" Python variables.
If we type `x` into a code block, we'll get an error:
```python
x
```
`x` is not defined, so this shouldn't be a surprise. Now, let's set up `x` as a *symbolic* variable:
```python
x = sympy.symbols('x')
```
Now let's see what happens when we type `x` into a code cell:
```python
x
```
The value of `x` is $x$. Sympy is also referred to as a computer algebra system -- normally the value of `5*x` will return the product of `5` and whatever value `x` is pointing to. But, if we define `x` as a symbol, then something else happens:
```python
5 * x
```
This will let us manipulate an equation with unknowns using Python! Let's start by defining symbols for $x$, $\nu$ and $t$ and then type out the full equation for $\phi$. We should get a nicely rendered version of our $\phi$ equation.
```python
x, nu, t = sympy.symbols('x nu t')
phi = (sympy.exp(-(x - 4 * t)**2 / (4 * nu * (t + 1))) +
sympy.exp(-(x - 4 * t - 2 * sympy.pi)**2 / (4 * nu * (t + 1))))
phi
```
It's maybe a little small, but that looks right. Now to evaluate our partial derivative $\frac{\partial \phi}{\partial x}$ is a trivial task. To take a derivative with respect to $x$, we can just use:
```python
phiprime = phi.diff(x)
phiprime
```
If you want to see the non-rendered version, just use the Python print command.
```python
print(phiprime)
```
-(-8*t + 2*x)*exp(-(-4*t + x)**2/(4*nu*(t + 1)))/(4*nu*(t + 1)) - (-8*t + 2*x - 4*pi)*exp(-(-4*t + x - 2*pi)**2/(4*nu*(t + 1)))/(4*nu*(t + 1))
### Now what?
Now that we have the Pythonic version of our derivative, we can finish writing out the full initial condition equation and then translate it into a usable Python expression. For this, we'll use the *lambdify* function, which takes a SymPy symbolic equation and turns it into a callable function.
```python
from sympy.utilities.lambdify import lambdify
u = -2 * nu * (phiprime / phi) + 4
print(u)
```
-2*nu*(-(-8*t + 2*x)*exp(-(-4*t + x)**2/(4*nu*(t + 1)))/(4*nu*(t + 1)) - (-8*t + 2*x - 4*pi)*exp(-(-4*t + x - 2*pi)**2/(4*nu*(t + 1)))/(4*nu*(t + 1)))/(exp(-(-4*t + x - 2*pi)**2/(4*nu*(t + 1))) + exp(-(-4*t + x)**2/(4*nu*(t + 1)))) + 4
### Lambdify
To lambdify this expression into a usable function, we tell lambdify which variables to request and the function we want to plug them into.
```python
u_lamb = lambdify((t, x, nu), u)
print('The value of u at t=1, x=4, nu=3 is {}'.format(u_lamb(1, 4, 3)))
```
The value of u at t=1, x=4, nu=3 is 3.49170664206445
### Back to Burgers' Equation
Now that we have the initial conditions set up, we can proceed and finish setting up the problem. We can generate the plot of the initial condition using our lambdify-ed function.
```python
# Set parameters.
nx = 101 # number of spatial grid points
L = 2.0 * numpy.pi # length of the domain
dx = L / (nx - 1) # spatial grid size
nu = 0.07 # viscosity
nt = 100 # number of time steps to compute
sigma = 0.1 # CFL limit
dt = sigma * dx**2 / nu # time-step size
# Discretize the domain.
x = numpy.linspace(0.0, L, num=nx)
```
We have a function `u_lamb` but we need to create an array `u0` with our initial conditions. `u_lamb` will return the value for any given time $t$, position $x$ and viscosity $\nu$. We can use a `for`-loop to cycle through values of `x` to generate the `u0` array. That code would look something like this:
```Python
u0 = numpy.empty(nx)
for i, x0 in enumerate(x):
u0[i] = u_lamb(t, x0, nu)
```
But there's a cleaner, more beautiful way to do this -- *list comprehension*.
We can create a list of all of the appropriate `u` values by typing
```Python
[u_lamb(t, x0, nu) for x0 in x]
```
You can see that the syntax is similar to the `for`-loop, but it only takes one line. Using a list comprehension will create... a list. This is different from an *array*, but converting a list to an array is trivial using `numpy.asarray()`.
With the list comprehension in place, the three lines of code above become one:
```Python
u = numpy.asarray([u_lamb(t, x0, nu) for x0 in x])
```
```python
# Set initial conditions.
t = 0.0
u0 = numpy.array([u_lamb(t, xi, nu) for xi in x])
u0
```
array([4. , 4.06283185, 4.12566371, 4.18849556, 4.25132741,
4.31415927, 4.37699112, 4.43982297, 4.50265482, 4.56548668,
4.62831853, 4.69115038, 4.75398224, 4.81681409, 4.87964594,
4.9424778 , 5.00530965, 5.0681415 , 5.13097336, 5.19380521,
5.25663706, 5.31946891, 5.38230077, 5.44513262, 5.50796447,
5.57079633, 5.63362818, 5.69646003, 5.75929189, 5.82212374,
5.88495559, 5.94778745, 6.0106193 , 6.07345115, 6.136283 ,
6.19911486, 6.26194671, 6.32477856, 6.38761042, 6.45044227,
6.51327412, 6.57610598, 6.63893783, 6.70176967, 6.76460125,
6.82742866, 6.89018589, 6.95176632, 6.99367964, 6.72527549,
4. , 1.27472451, 1.00632036, 1.04823368, 1.10981411,
1.17257134, 1.23539875, 1.29823033, 1.36106217, 1.42389402,
1.48672588, 1.54955773, 1.61238958, 1.67522144, 1.73805329,
1.80088514, 1.863717 , 1.92654885, 1.9893807 , 2.05221255,
2.11504441, 2.17787626, 2.24070811, 2.30353997, 2.36637182,
2.42920367, 2.49203553, 2.55486738, 2.61769923, 2.68053109,
2.74336294, 2.80619479, 2.86902664, 2.9318585 , 2.99469035,
3.0575222 , 3.12035406, 3.18318591, 3.24601776, 3.30884962,
3.37168147, 3.43451332, 3.49734518, 3.56017703, 3.62300888,
3.68584073, 3.74867259, 3.81150444, 3.87433629, 3.93716815,
4. ])
Now that we have the initial conditions set up, we can plot it to see what $u(x,0)$ looks like:
```python
# Plot the initial conditions.
pyplot.figure(figsize=(6.0, 4.0))
pyplot.title('Initial conditions')
pyplot.xlabel('x')
pyplot.ylabel('u')
pyplot.grid()
pyplot.plot(x, u0, color='C0', linestyle='-', linewidth=2)
pyplot.xlim(0.0, L)
pyplot.ylim(0.0, 10.0);
```
This is definitely not the hat function we've been dealing with until now. We call it a "saw-tooth function". Let's proceed forward and see what happens.
### Periodic Boundary Conditions
We will implement Burgers' equation with *periodic* boundary conditions. If you experiment with the linear and non-linear convection notebooks and make the simulation run longer (by increasing `nt`) you will notice that the wave will keep moving to the right until it no longer even shows up in the plot.
With periodic boundary conditions, when a point gets to the right-hand side of the frame, it *wraps around* back to the front of the frame.
Recall the discretization that we worked out at the beginning of this notebook:
$$
\begin{equation}
u_i^{n+1} = u_i^n - u_i^n \frac{\Delta t}{\Delta x} (u_i^n - u_{i-1}^n) + \nu \frac{\Delta t}{\Delta x^2}(u_{i+1}^n - 2u_i^n + u_{i-1}^n)
\end{equation}
$$
What does $u_{i+1}^n$ *mean* when $i$ is already at the end of the frame?
Think about this for a minute before proceeding.
```python
# Integrate the Burgers' equation in time.
u = u0.copy()
for n in range(nt):
un = u.copy()
# Update all interior points.
u[1:-1] = (un[1:-1] -
un[1:-1] * dt / dx * (un[1:-1] - un[:-2]) +
nu * dt / dx**2 * (un[2:] - 2 * un[1:-1] + un[:-2]))
# Update boundary points.
u[0] = (un[0] -
un[0] * dt / dx * (un[0] - un[-1]) +
nu * dt / dx**2 * (un[1] - 2 * un[0] + un[-1]))
u[-1] = (un[-1] -
un[-1] * dt / dx * (un[-1] - un[-2]) +
nu * dt / dx**2 * (un[0] - 2 * un[-1] + un[-2]))
```
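As a side note, the same periodic update can be written without the two explicit boundary lines by letting `numpy.roll` handle the wrap-around. The sketch below is an equivalent alternative formulation (not part of the original lesson), starting again from the initial condition and stored in a separate array so it does not interfere with `u` above:
```python
# Equivalent update using numpy.roll for the periodic wrap-around.
u_roll = u0.copy()
for n in range(nt):
    un = u_roll.copy()
    u_roll = (un -
              un * dt / dx * (un - numpy.roll(un, 1)) +
              nu * dt / dx**2 *
              (numpy.roll(un, -1) - 2 * un + numpy.roll(un, 1)))
```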
```python
# Compute the analytical solution.
u_analytical = numpy.array([u_lamb(nt * dt, xi, nu) for xi in x])
```
```python
# Plot the numerical solution along with the analytical solution.
pyplot.figure(figsize=(6.0, 4.0))
pyplot.xlabel('x')
pyplot.ylabel('u')
pyplot.grid()
pyplot.plot(x, u, label='Numerical',
color='C0', linestyle='-', linewidth=2)
pyplot.plot(x, u_analytical, label='Analytical',
color='C1', linestyle='--', linewidth=2)
pyplot.legend()
pyplot.xlim(0.0, L)
pyplot.ylim(0.0, 10.0);
```
Let's now create an animation with the `animation` module of Matplotlib to observe how the numerical solution changes over time compared to the analytical solution.
We start by importing the module from Matplotlib as well as the special `HTML` display method.
```python
from matplotlib import animation
from IPython.display import HTML
```
We create a function `burgers` that computes the numerical solution of the 1D Burgers' equation over time.
(The function returns the history of the solution: a list with `nt + 1` elements, the initial condition followed by the solution in the domain after each time step.)
```python
def burgers(u0, dx, dt, nu, nt=20):
"""
Computes the numerical solution of the 1D Burgers' equation
over the time steps.
Parameters
----------
u0 : numpy.ndarray
The initial conditions as a 1D array of floats.
dx : float
The grid spacing.
dt : float
The time-step size.
nu : float
The viscosity.
nt : integer, optional
The number of time steps to compute;
default: 20.
Returns
-------
u_hist : list of numpy.ndarray objects
The history of the numerical solution.
"""
u_hist = [u0.copy()]
u = u0.copy()
for n in range(nt):
un = u.copy()
# Update all interior points.
u[1:-1] = (un[1:-1] -
un[1:-1] * dt / dx * (un[1:-1] - un[:-2]) +
nu * dt / dx**2 * (un[2:] - 2 * un[1:-1] + un[:-2]))
# Update boundary points.
u[0] = (un[0] -
un[0] * dt / dx * (un[0] - un[-1]) +
nu * dt / dx**2 * (un[1] - 2 * un[0] + un[-1]))
u[-1] = (un[-1] -
un[-1] * dt / dx * (un[-1] - un[-2]) +
nu * dt / dx**2 * (un[0] - 2 * un[-1] + un[-2]))
u_hist.append(u.copy())
return u_hist
```
```python
# Compute the history of the numerical solution.
u_hist = burgers(u0, dx, dt, nu, nt=nt)
```
```python
# Compute the history of the analytical solution.
u_analytical = [numpy.array([u_lamb(n * dt, xi, nu) for xi in x])
for n in range(nt)]
```
```python
fig = pyplot.figure(figsize=(6.0, 4.0))
pyplot.xlabel('x')
pyplot.ylabel('u')
pyplot.grid()
u0_analytical = numpy.array([u_lamb(0.0, xi, nu) for xi in x])
line1 = pyplot.plot(x, u0, label='Numerical',
color='C0', linestyle='-', linewidth=2)[0]
line2 = pyplot.plot(x, u0_analytical, label='Analytical',
color='C1', linestyle='--', linewidth=2)[0]
pyplot.legend()
pyplot.xlim(0.0, L)
pyplot.ylim(0.0, 10.0)
fig.tight_layout()
```
```python
def update_plot(n, u_hist, u_analytical):
"""
Update the lines y-data of the Matplotlib figure.
Parameters
----------
n : integer
The time-step index.
u_hist : list of numpy.ndarray objects
The history of the numerical solution.
u_analytical : list of numpy.ndarray objects
The history of the analytical solution.
"""
fig.suptitle('Time step {:0>2}'.format(n))
line1.set_ydata(u_hist[n])
line2.set_ydata(u_analytical[n])
```
```python
# Create an animation.
anim = animation.FuncAnimation(fig, update_plot,
frames=nt, fargs=(u_hist, u_analytical),
interval=100)
```
```python
# Display the video.
HTML(anim.to_html5_video())
```
## Array Operation Speed Increase
Coding up discretization schemes using array operations can be a bit of a pain. It requires much more mental effort on the front-end than using two nested `for` loops. So why do we do it? Because it's fast. Very, very fast.
Here's what the Burgers code looks like using two nested `for` loops. It's easier to write out, plus we only have to add one "special" condition to implement the periodic boundaries.
At the top of the cell, you'll see the decorator `%%timeit`.
This is called a "cell magic". It runs the cell several times and returns the average execution time for the contained code.
Let's see how long the nested `for` loops take to finish.
```python
%%timeit
# Set initial conditions.
u = numpy.array([u_lamb(t, x0, nu) for x0 in x])
# Integrate in time using a nested for loop.
for n in range(nt):
un = u.copy()
# Update all interior points and the left boundary point.
for i in range(nx - 1):
u[i] = (un[i] -
un[i] * dt / dx *(un[i] - un[i - 1]) +
nu * dt / dx**2 * (un[i + 1] - 2 * un[i] + un[i - 1]))
# Update the right boundary.
u[-1] = (un[-1] -
un[-1] * dt / dx * (un[-1] - un[-2]) +
nu * dt / dx**2 * (un[0]- 2 * un[-1] + un[-2]))
```
23.1 ms ± 344 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
About 23 milliseconds. Not bad, really.
Now let's look at the array operations code cell. Notice that we haven't changed anything, except we've added the `%%timeit` magic and we're also resetting the array `u` to its initial conditions.
This takes longer to code and we have to add two special conditions to take care of the periodic boundaries. Was it worth it?
```python
%%timeit
# Set initial conditions.
u = numpy.array([u_lamb(t, xi, nu) for xi in x])
# Integrate in time using array operations.
for n in range(nt):
un = u.copy()
# Update all interior points.
u[1:-1] = (un[1:-1] -
un[1:-1] * dt / dx * (un[1:-1] - un[:-2]) +
nu * dt / dx**2 * (un[2:] - 2 * un[1:-1] + un[:-2]))
# Update boundary points.
u[0] = (un[0] -
un[0] * dt / dx * (un[0] - un[-1]) +
nu * dt / dx**2 * (un[1] - 2 * un[0] + un[-1]))
u[-1] = (un[-1] -
un[-1] * dt / dx * (un[-1] - un[-2]) +
nu * dt / dx**2 * (un[0] - 2 * un[-1] + un[-2]))
```
2.52 ms ± 64.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Yes, it is absolutely worth it. That's a nine-fold speed increase. For this exercise, you probably won't miss the extra 20 milliseconds if you use the nested `for` loops, but what about a simulation that has to run through millions and millions of iterations? Then that little extra effort at the beginning will definitely pay off.
---
###### The cell below loads the style of the notebook.
```python
from IPython.core.display import HTML
css_file = '../../styles/numericalmoocstyle.css'
HTML(open(css_file, 'r').read())
```
<link href='http://fonts.googleapis.com/css?family=Alegreya+Sans:100,300,400,500,700,800,900,100italic,300italic,400italic,500italic,700italic,800italic,900italic' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Arvo:400,700,400italic' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=PT+Mono' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Shadows+Into+Light' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Nixie+One' rel='stylesheet' type='text/css'>
<link href='https://fonts.googleapis.com/css?family=Source+Code+Pro' rel='stylesheet' type='text/css'>
<style>
@font-face {
font-family: "Computer Modern";
src: url('http://mirrors.ctan.org/fonts/cm-unicode/fonts/otf/cmunss.otf');
}
#notebook_panel { /* main background */
background: rgb(245,245,245);
}
div.cell { /* set cell width */
width: 750px;
}
div #notebook { /* centre the content */
background: #fff; /* white background for content */
width: 1000px;
margin: auto;
padding-left: 0em;
}
#notebook li { /* More space between bullet points */
margin-top:0.8em;
}
/* draw border around running cells */
div.cell.border-box-sizing.code_cell.running {
border: 1px solid #111;
}
/* Put a solid color box around each cell and its output, visually linking them*/
div.cell.code_cell {
background-color: rgb(256,256,256);
border-radius: 0px;
padding: 0.5em;
margin-left:1em;
margin-top: 1em;
}
div.text_cell_render{
font-family: 'Alegreya Sans' sans-serif;
line-height: 140%;
font-size: 125%;
font-weight: 400;
width:600px;
margin-left:auto;
margin-right:auto;
}
/* Formatting for header cells */
.text_cell_render h1 {
font-family: 'Nixie One', serif;
font-style:regular;
font-weight: 400;
font-size: 45pt;
line-height: 100%;
color: rgb(0,51,102);
margin-bottom: 0.5em;
margin-top: 0.5em;
display: block;
}
.text_cell_render h2 {
font-family: 'Nixie One', serif;
font-weight: 400;
font-size: 30pt;
line-height: 100%;
color: rgb(0,51,102);
margin-bottom: 0.1em;
margin-top: 0.3em;
display: block;
}
.text_cell_render h3 {
font-family: 'Nixie One', serif;
margin-top:16px;
font-size: 22pt;
font-weight: 600;
margin-bottom: 3px;
font-style: regular;
color: rgb(102,102,0);
}
.text_cell_render h4 { /*Use this for captions*/
font-family: 'Nixie One', serif;
font-size: 14pt;
text-align: center;
margin-top: 0em;
margin-bottom: 2em;
font-style: regular;
}
.text_cell_render h5 { /*Use this for small titles*/
font-family: 'Nixie One', sans-serif;
font-weight: 400;
font-size: 16pt;
color: rgb(163,0,0);
font-style: italic;
margin-bottom: .1em;
margin-top: 0.8em;
display: block;
}
.text_cell_render h6 { /*use this for copyright note*/
font-family: 'PT Mono', sans-serif;
font-weight: 300;
font-size: 9pt;
line-height: 100%;
color: grey;
margin-bottom: 1px;
margin-top: 1px;
}
.CodeMirror{
font-family: "Source Code Pro";
font-size: 90%;
}
.alert-box {
padding:10px 10px 10px 36px;
margin:5px;
}
.success {
color:#666600;
background:rgb(240,242,229);
}
</style>
| 42ec8836a11109329a82b4121962cac7af8b07b5 | 254,165 | ipynb | Jupyter Notebook | lessons/02_spacetime/02_04_1DBurgers.ipynb | mcarpe/numerical-mooc | 62b3c14c2c56d85d65c6075f2d7eb44266b49c17 | [
"CC-BY-3.0"
] | 748 | 2015-01-04T22:50:56.000Z | 2022-03-30T20:42:16.000Z | lessons/02_spacetime/02_04_1DBurgers.ipynb | mcarpe/numerical-mooc | 62b3c14c2c56d85d65c6075f2d7eb44266b49c17 | [
"CC-BY-3.0"
] | 62 | 2015-02-02T01:06:07.000Z | 2020-11-09T12:27:41.000Z | lessons/02_spacetime/02_04_1DBurgers.ipynb | mcarpe/numerical-mooc | 62b3c14c2c56d85d65c6075f2d7eb44266b49c17 | [
"CC-BY-3.0"
] | 1,270 | 2015-01-02T19:19:52.000Z | 2022-02-27T01:02:44.000Z | 88.590101 | 19,340 | 0.818055 | true | 6,996 | Qwen/Qwen-72B | 1. YES
2. YES | 0.835484 | 0.815232 | 0.681113 | __label__eng_Latn | 0.931911 | 0.420786 |
NYC-TAXI-EDA-FEATURE-ENGINEERING<br>
https://www.kaggle.com/frednavruzov/nyc-taxi-eda-feature-engineering
```python
import pandas as pd
import numpy as np
import sympy
import datetime as dt
import time
from math import *
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
from ipyleaflet import *
import folium
import json
import geopy.distance
from haversine import haversine
from tqdm import tqdm_notebook
sns.set()
%matplotlib inline
%config InlineBackend.figure_formats = {'png', 'retina'}
from matplotlib import font_manager, rc
plt.rcParams['axes.unicode_minus'] = False
import platform
if platform.system() == 'Darwin':
rc('font', family='AppleGothic')
elif platform.system() == 'Windows':
path = "c:/Windows/Fonts/malgun.ttf"
font_name = font_manager.FontProperties(fname=path).get_name()
rc('font', family=font_name)
```
```python
train = pd.read_csv("../dataset/train.csv")  # path to your own dataset folder
```
### Basic exploration of the data
- Number of rows and columns in the training data
```python
print("traing data의 row 수 : {}, column 수 : {}".format(train.shape[0], train.shape[1]))
```
```python
train.info()
```
```python
train.describe().round(2)
```
```python
# taxi['count'] = 1
```
# Preprocessing
- Outlier removal
1. Time
- pickup_datetime
2. Location
3. Passenger count
### Outlier removal
- Time
- Passenger count
- Location
```python
# Remove trips with duration below 0 or above 2 hours
# taxi = taxi[taxi["trip_duration"] >= 0]
# taxi = taxi[taxi["trip_duration"] <= 60*60*2]
```
```python
# Remove trips with 0 passengers
# taxi = taxi[taxi["passenger_count"] != 0]
```
## Examining pickup_datetime
```python
# pd.to_datetime is needed so the pickup_datetime column gets a datetime dtype
pickup_datetime_dt = pd.to_datetime(train["pickup_datetime"])
dropoff_datetime_dt = pd.to_datetime(train["dropoff_datetime"])
```
```python
train["pickup_datetime"] = pickup_datetime_dt
train["dropoff_datetime"] = dropoff_datetime_dt
train["pickup_date"] = train["pickup_datetime"].dt.date
train["dropoff_date"] = train["dropoff_datetime"].dt.date
train["pickup_month"] = train["pickup_datetime"].dt.month
train["dropoff_month"] = train["dropoff_datetime"].dt.month
train["pickup_weekday"] = train["pickup_datetime"].dt.weekday
train["dropoff_weekday"] = train["dropoff_datetime"].dt.weekday
train["pickup_hour"] = train["pickup_datetime"].dt.hour
train["dropoff_hour"] = train["dropoff_datetime"].dt.hour
```
```python
train.info()
```
```python
train.describe().round(2)
```
### EDA of pickup_datetime in train data
- year: 2016
- month: 1-7
- hour: 0-23
- weekday: Monday-Sunday
```python
# # year
# print("Year the data was recorded")
# print("Oldest record year: {}".format(taxi_df1["pickup_datetime"].dt.year.min()))
# print("Most recent record year: {}".format(taxi_df1["pickup_datetime"].dt.year.max()))
# print('')
# # month
# print("Month the data was recorded")
# print("Oldest record month: {}".format(taxi_df1["pickup_month"].min()))
# print("Most recent record month: {}".format(taxi_df1["pickup_month"].max()))
```
```python
train[train["trip_duration"] >= 60*60*24]
```
```python
train = train[train["trip_duration"] <= 60*60*24]
```
```python
plt.figure(figsize=(7, 5))
sns.distplot(train["trip_duration"], color="r")
plt.xlabel("Trip Duration")
plt.show()
```
```python
sns.set()
plt.figure(figsize=(7, 5))
sns.distplot(np.log(train['trip_duration']+1), color="r")
plt.xlabel("Log of Trip Duration")
plt.show()
```
```python
plt.figure(figsize=(7, 4))
sns.countplot(x="vendor_id", data=train, palette="husl")
plt.title("Vendor Distribution", fontsize=13)
plt.xlabel("Vendor")
plt.ylabel("Number of Trips")
plt.show()
```
```python
plt.figure(figsize=(7, 4))
sns.countplot(x="store_and_fwd_flag", data=train, palette="husl")
plt.title("Store & FWD Flag Distribution", fontsize=13)
plt.xlabel("Store & FWD Flag")
plt.ylabel("Number of Trips")
plt.show()
```
```python
plt.figure(figsize=(13, 4))
plt.subplot(121)
sns.countplot(x="pickup_month", data=train, palette="husl")
plt.title("Pickups Month Distribution", fontsize=13)
plt.xlabel("Pickup Months (January-June)")
plt.ylabel("Number of Trips")
plt.subplot(122)
sns.countplot(x="pickup_month", data=train, palette="husl", hue="vendor_id")
plt.title("Pickups Month Distribution", fontsize=13)
plt.xlabel("Pickup Months (January-June)")
plt.ylabel("Number of Trips")
plt.legend(loc=(1.04,0))
plt.show()
```
```python
plt.figure(figsize=(13, 4))
plt.subplot(121)
sns.countplot(x="pickup_weekday", data=train, palette="husl")
plt.title("Pickups Weekday Distribution", fontsize=13)
plt.xlabel("Pickup Weekday (Mon-Sun)")
plt.ylabel("Number of Trips")
plt.subplot(122)
sns.countplot(x="pickup_weekday", data=train, palette="husl", hue="vendor_id")
plt.title("Pickups Weekday Distribution", fontsize=13)
plt.xlabel("Pickup Weekday (Mon-Sun)")
plt.ylabel("Number of Trips")
plt.legend(loc=(1.04,0))
plt.show()
```
```python
plt.figure(figsize=(8, 4))
sns.countplot(x="pickup_hour", data=train, palette="husl")
plt.title("Pickups Hour Distribution", fontsize=13)
plt.xlabel("Pickup Hours (0-23)")
plt.ylabel("Number of Trips")
plt.show()
```
```python
# For trip_duration the top 4 values are extremely large (using the mode instead of the mean would show an even bigger gap)
# A boxplot would also work well here (but without a log transform it would be heavily affected by outliers)
data = train.loc[:, ["pickup_hour", "trip_duration"]].groupby("pickup_hour").mean()
plt.figure(figsize=(17, 4))
plt.subplot(121)
sns.barplot(x=data.index, y=data.trip_duration, data=data, palette="husl")
plt.title("Pickups Hours & Trip Duration", fontsize=13)
plt.xlabel("Pickup Hours (0-23)")
plt.ylabel("Trip Duration")
plt.subplot(122)
sns.barplot(x=data.index, y=np.log(data.trip_duration+1), data=data, palette="husl")
plt.title("Pickups Hours & Log Trip Duration", fontsize=13)
plt.xlabel("Pickup Hours (0-23)")
plt.ylabel("Log of Trip Duration")
plt.show()
```
```python
data = train.loc[:, ["pickup_weekday", "trip_duration"]].groupby("pickup_weekday").mean()
plt.figure(figsize=(17, 4))
plt.subplot(121)
sns.barplot(x=data.index, y=data.trip_duration, data=data, palette="husl")
plt.title("Pickups Weekday & Trip Duration", fontsize=13)
plt.xlabel("Pickup Weekday (Mon-Sun)")
plt.ylabel("Trip Duration")
plt.subplot(122)
sns.barplot(x=data.index, y=np.log(data.trip_duration+1), data=data, palette="husl")
plt.title("Pickups Weekday & Log Trip Duration", fontsize=13)
plt.xlabel("Pickup Weekday (Mon-Sun)")
plt.ylabel("Log of Trip Duration")
plt.show()
```
```python
plt.figure(figsize=(7, 5))
sns.boxplot(x=train["pickup_weekday"],
y=train["trip_duration"].apply(np.log1p),
data=train, palette="husl")
plt.title("Pickups Weekday & Log of Trip Duration", fontsize=13)
plt.xlabel("Pickup Weekday (Mon-Sun)")
plt.ylabel("Log of Trip Duration")
plt.show()
```
```python
data = train.loc[:, ["pickup_month", "trip_duration"]].groupby("pickup_month").mean()
plt.figure(figsize=(17, 4))
plt.subplot(121)
sns.barplot(x=data.index, y=data.trip_duration, data=data, palette="husl")
plt.title("Pickups Months & Trip Duration", fontsize=13)
plt.xlabel("Pickup Months (January-June)")
plt.ylabel("Trip Duration")
plt.subplot(122)
sns.barplot(x=data.index, y=np.log(data.trip_duration+1), data=data, palette="husl")
plt.title("Pickups Months & Log Trip Duration", fontsize=13)
plt.xlabel("Pickup Months (January-June)")
plt.ylabel("Log of Trip Duration")
plt.show()
```
```python
month = ["January", "February", "March", "April", "May", "June"]
weekday = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"]
working_day = [0, 1, 2, 3, 4] # Mon-Fri
wd = train.loc[:, ["pickup_weekday", "pickup_month", "trip_duration"]]
wd["working_day"] = wd["pickup_weekday"].isin(working_day)
plt.figure(figsize=(11, 5))
sns.violinplot(x=wd["pickup_month"],
y=wd["trip_duration"].apply(np.log1p),
hue="working_day",
data=wd, palette="husl")
plt.title("Pickups Month & Log of Trip Duration", fontsize=13)
plt.xlabel("Pickup Month")
plt.ylabel("Log of Trip Duration")
plt.xticks(range(0, 6), month)
plt.show()
```
```python
plt.figure(figsize=(11, 7))
sns.countplot(x="pickup_weekday", data=train, hue="pickup_hour")
plt.xlabel("Pickup Weekday (Mon-Sun)")
plt.ylabel("Number of Trips")
plt.xticks(range(0,7), weekday)
plt.legend(loc=(1.04,0))
plt.show()
```
```python
plt.figure(figsize=(11,2))
sns.heatmap(data=pd.crosstab(train["pickup_weekday"],
train["pickup_hour"],
values=train["vendor_id"],
aggfunc="count",
normalize="index"), cmap="RdPu")
plt.title("Pickup Weekday vs. Hours", fontsize=13)
plt.xlabel("Pickup Hours (0-23)")
plt.ylabel("Pickup Weekday")
plt.yticks(range(0,7), weekday, rotation="horizontal")
plt.show()
```
```python
plt.figure(figsize=(11,2))
sns.heatmap(data=pd.crosstab(train["pickup_month"],
train["pickup_hour"],
values=train["vendor_id"],
aggfunc="count",
normalize="index"), cmap="RdPu")
plt.title("Pickup Month vs. Hours", fontsize=13)
plt.xlabel("Pickup Hours (0-23)")
plt.ylabel("Pickup Month")
plt.yticks(range(0,6), month, rotation="horizontal")
plt.show()
```
```python
plt.figure(figsize=(11,2))
sns.heatmap(data=pd.crosstab(train["pickup_month"],
train["pickup_weekday"],
values=train["vendor_id"],
aggfunc="count",
normalize="index"), cmap="RdPu")
plt.title("Pickup Month vs. Weekday", fontsize=13)
plt.xlabel("Pickup Weekday (Mon-Sun)")
plt.ylabel("Pickup Month")
plt.xticks(range(0,7), weekday, rotation=30)
plt.yticks(range(0,6), month, rotation="horizontal")
plt.show()
```
## Passenger count
```python
train['passenger_count'].value_counts()
```
```python
print("가장 적은 탑승 인원: {}명".format(train["passenger_count"].min()))
print("가장 많은 탑승 인원: {}명".format(train["passenger_count"].max()))
```
```python
# Remove trips with 0 passengers
# train = train[train["passenger_count"] != 0]
```
```python
plt.figure(figsize=(7, 5))
sns.countplot(x='passenger_count', data=train, palette="husl")
plt.title("Passenger Count Distribution", fontsize=13)
plt.xlabel("Passenger Count")
plt.ylabel("Number of Trips")
plt.show()
```
```python
# Outliers have not been removed yet, so these plots are strongly affected by them
plt.figure(figsize=(17, 4))
plt.subplot(121)
sns.boxplot(x="passenger_count",
y=train["trip_duration"],
data=train,
palette="husl")
plt.title("Passenger Count & Trip Duration", fontsize=13)
plt.xlabel("Passenger Count")
plt.ylabel("Trip Duration")
plt.subplot(122)
sns.boxplot(x="passenger_count",
y=train["trip_duration"].apply(np.log1p),
data=train,
palette="husl")
plt.title("Passenger Count & Log Trip Duration", fontsize=13)
plt.xlabel("Passenger Count")
plt.ylabel("Log of Trip Duration")
plt.show()
```
```python
f, (ax1, ax2) = plt.subplots(ncols=2, sharey=True, figsize=(14, 4))
sns.boxplot(x="passenger_count",
y=train["trip_duration"].apply(np.log1p),
hue="vendor_id",
data=train,
palette="husl", ax=ax1)
ax1.set_xlabel(""); ax1.set_ylabel("Log of Trip Duration")
sns.boxplot(x="passenger_count",
y=train["trip_duration"].apply(np.log1p),
hue="store_and_fwd_flag",
data=train,
palette="husl", ax=ax2)
ax2.set_xlabel(""); ax2.set_ylabel("")
plt.suptitle("Passenger Count & Log of Trip Duration", y=1.05, fontsize=13)
plt.tight_layout()
f.text(0.5, -0.01, "Passenger Count", ha="center")
plt.show()
```
```python
plt.figure(figsize=(11,3))
sns.heatmap(data=pd.crosstab(train["passenger_count"],
train["pickup_hour"],
values=train["vendor_id"],
aggfunc="count",
normalize=False), cmap="RdPu")
plt.title("Passenger Count vs. Hours", fontsize=13)
plt.xlabel("Pickup Hours (0-23)")
plt.ylabel("Passenger Count")
plt.show()
```
```python
plt.figure(figsize=(11,3))
sns.heatmap(data=pd.crosstab(train["passenger_count"],
train["pickup_month"],
values=train["vendor_id"],
aggfunc="count",
normalize=False), cmap="RdPu")
plt.title("Passenger Count vs. Month", fontsize=13)
plt.xlabel("Pickup Month")
plt.ylabel("Passenger Count")
plt.xticks(range(0,6), month, rotation=30)
plt.show()
```
### EDA findings
- Passenger count ranges from 0 to 9: consider dropping trips with 0 or 7+ passengers (65 records in total)
- Passenger count distribution: single riders account for roughly 1.03M of the ~1.46M trips, followed by 2-passenger trips (~210K), then 5- and 3-passenger trips
- Passenger count vs. pickup hour: after 8 a.m. trips are spread fairly evenly across all hours, dominated by single riders
- Passenger count vs. pickup weekday: fairly even overall, with Wednesday-Friday somewhat higher than the weekend
- Passenger count vs. trip duration: single riders run about 1-2 minutes shorter on average; most groups sit around 1000-1100 seconds (roughly 16-18 minutes); only the 0-passenger trips average about 30 minutes (airport runs?)
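A quick check of the first bullet (a sketch using the `train` dataframe already loaded; the exact count may differ slightly depending on the rows filtered above):
```python
mask = (train["passenger_count"] == 0) | (train["passenger_count"] >= 7)
print("Trips with 0 or 7+ passengers:", mask.sum())
```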
```python
import folium
import json
geo_path = '../dataset/geojson/state.geo.json'
geo_str = json.load(open(geo_path, encoding='utf-8'))
```
```python
# Remove pickups/dropoffs with coordinates outside the NYC bounding box
# city_long_border = (-74.03, -73.75)
# city_lat_border = (40.63, 40.85)
train = train[train['pickup_longitude'] <= -73.75]
train = train[train['pickup_longitude'] >= -74.03]
train = train[train['pickup_latitude'] <= 40.85]
train = train[train['pickup_latitude'] >= 40.63]
train = train[train['dropoff_longitude'] <= -73.75]
train = train[train['dropoff_longitude'] >= -74.03]
train = train[train['dropoff_latitude'] <= 40.85]
train = train[train['dropoff_latitude'] >= 40.63]
```
```python
f, (ax1, ax2) = plt.subplots(1, 2, sharey=True, figsize=(13,8))
train.plot(kind="scatter", x="pickup_longitude", y="pickup_latitude",
color="yellow", grid=False, s=.02, alpha=.6, subplots=True, ax=ax1)
ax1.set_title("Pickups")
ax1.set_facecolor("black")
train.plot(kind="scatter", x="dropoff_longitude", y="dropoff_latitude",
color="yellow", grid=False, s=.02, alpha=.6, subplots=True, ax=ax2)
ax2.set_title("Dropoffs")
ax2.set_facecolor("black")
```
```python
pickup_lat = tuple(train["pickup_latitude"])
pickup_lng = tuple(train["pickup_longitude"])
dropoff_lat = tuple(train["dropoff_latitude"])
dropoff_lng = tuple(train["dropoff_longitude"])
```
```python
pickup_loc = tuple(zip(pickup_lat, pickup_lng))
dropoff_loc = tuple(zip(dropoff_lat, dropoff_lng))
```
```python
print(len(pickup_loc))
print(len(dropoff_loc))
```
```python
# (note) this could be computed directly as a whole column instead of appending row by row
# Vincenty distance
import geopy.distance
from tqdm import tqdm_notebook
vincenty_distance = []
for i in tqdm_notebook(range(len(pickup_loc))):
vincenty_distance.append(geopy.distance.vincenty(pickup_loc[i], dropoff_loc[i]).km)
```
```python
train.loc[:, "vincenty_distance"] = vincenty_distance
train.loc[:, ["id", "vincenty_distance"]].tail()
```
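For reference, a vectorized haversine computation (a sketch, not part of the original analysis; it approximates the Vincenty result but runs on whole columns at once) would avoid the per-row loop above:
```python
# Vectorized great-circle (haversine) distance in km for all rows at once.
def haversine_np(lat1, lng1, lat2, lng2):
    lat1, lng1, lat2, lng2 = map(np.radians, (lat1, lng1, lat2, lng2))
    dlat, dlng = lat2 - lat1, lng2 - lng1
    a = np.sin(dlat / 2)**2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlng / 2)**2
    return 6371.0 * 2 * np.arcsin(np.sqrt(a))

train["haversine_distance"] = haversine_np(train["pickup_latitude"],
                                           train["pickup_longitude"],
                                           train["dropoff_latitude"],
                                           train["dropoff_longitude"])
```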
```python
train["log_duration"] = np.log1p(train["trip_duration"])
train["log_vincenty_distance"] = np.log1p(train["vincenty_distance"])
```
```python
plt.figure(figsize=(7, 5))
plt.scatter(train.log_vincenty_distance, train.log_duration, alpha=0.05)
plt.ylabel("log(Trip Duration)")
plt.xlabel("log(Vincenty Distance)")
plt.title("log(Vincenty Distance) vs log(Trip Duration)");
```
```python
sns.jointplot(x="log_vincenty_distance", y="log_duration", data=train);
```
```python
# taxi["pickup_datetime"] = pd.to_datetime(taxi.pickup_datetime)
# taxi.loc[:, "pickup_weekday"] = taxi["pickup_datetime"].dt.weekday
# taxi.loc[:, "pickup_hour_weekofyear"] = taxi["pickup_datetime"].dt.weekofyear
# taxi.loc[:, "pickup_hour"] = taxi["pickup_datetime"].dt.hour
# taxi.loc[:, "pickup_minute"] = taxi["pickup_datetime"].dt.minute
# taxi.loc[:, "pickup_dt"] = (taxi["pickup_datetime"] - taxi["pickup_datetime"].min()).dt.total_seconds()
```
```python
train.loc[:, "pickup_week_hour"] = train["pickup_weekday"] * 24 + train["pickup_hour"]
train.loc[:, "avg_speed_h"] = 1000 * train["vincenty_distance"] / train["trip_duration"]
fig, (ax1, ax2, ax3) = plt.subplots(ncols=3, sharey=True, figsize=(13, 6))
ax1.plot(train.groupby("pickup_hour").mean()["avg_speed_h"], 'bo-', lw=2, alpha=0.7)
ax2.plot(train.groupby("pickup_weekday").mean()["avg_speed_h"], 'go-', lw=2, alpha=0.7)
ax3.plot(train.groupby("pickup_week_hour").mean()["avg_speed_h"], 'ro-', lw=2, alpha=0.7)
ax1.set_xlabel("Hour")
ax2.set_xlabel("Weekday")
ax3.set_xlabel("Weekhour")
ax1.set_ylabel("Average Speed (km/h)")
fig.suptitle("Rush hour average traffic speed")
plt.show()
```
Parameters
- Trip distance
- Time of day
- Pickup/dropoff location (inside Manhattan / outside Manhattan, especially airports)
- Speed
- Passenger count
- Identify the most influential variables (not covered yet)
- Tidy up the code
### Correlation between speed and trip duration
```python
train_hour = train.loc[:, ["pickup_hour", "avg_speed_h", "trip_duration"]]
train_hour["trip_duration"] = train_hour["trip_duration"] / 60
train_hour = train_hour.groupby("pickup_hour").mean()
train_hour.tail()
```
```python
train_hour.plot(figsize=(7, 5))
plt.legend(loc='best')
plt.show()
```
```python
corr = train_hour.corr()
sns.heatmap(corr, cmap="RdPu");
```
```python
np.corrcoef(train_hour.avg_speed_h, train_hour.trip_duration)
```
```python
corr_matt = train[["pickup_datetime", "pickup_hour",
"pickup_month", "pickup_weekday",
"log_vincenty_distance", "log_duration"]]
```
```python
corr_matt.head()
```
```python
corr_matt = corr_matt.corr()
mask = np.array(corr_matt)
mask[np.tril_indices_from(mask)] = False
```
```python
corr_matt.head()
```
```python
fig, ax = plt.subplots()
fig.set_size_inches(14, 7)
sns.heatmap(corr_matt, mask=mask, vmax=.8, annot=True, square=True, cmap="RdPu")
plt.show()
```
```python
tmp = train[["trip_duration", "pickup_datetime"]]
tmp.head()
```
| 54ec0bf4196309fae8f63c6c072cbfb96ea99959 | 44,314 | ipynb | Jupyter Notebook | individual_dir/KSW/EDA_KSW.ipynb | novdov/dss7b5-nyctaxi | 2f1e538d0a25a9c299310b24564da71e9fe9e689 | [
"MIT"
] | null | null | null | individual_dir/KSW/EDA_KSW.ipynb | novdov/dss7b5-nyctaxi | 2f1e538d0a25a9c299310b24564da71e9fe9e689 | [
"MIT"
] | null | null | null | individual_dir/KSW/EDA_KSW.ipynb | novdov/dss7b5-nyctaxi | 2f1e538d0a25a9c299310b24564da71e9fe9e689 | [
"MIT"
] | null | null | null | 36.989983 | 1,489 | 0.568782 | true | 5,355 | Qwen/Qwen-72B | 1. YES
2. YES | 0.819893 | 0.699254 | 0.573314 | __label__kor_Hang | 0.276167 | 0.170331 |
```python
from IPython.display import Image
```
# 2D Turbulent Hot Free Jet
## Literature
[**"Physical and computational aspects of convective heat transfer"**](http://link.springer.com/book/10.1007%2F978-1-4612-3918-5)
T. CEBECI, P. BRADSHAW, Springer 1984
## Equations for the 2D hot free jet
Applying assumptions from [Prandtl's boundary layer](https://en.wikipedia.org/wiki/Boundary_layer) theorem, the [Navier-Stokes equations](https://en.wikipedia.org/wiki/Navier%E2%80%93Stokes_equations) can simplified for the 2D turbulent free jet. The follwoing set of equations depict the continuity, momentum and energy equations, respectively.
\begin{align}
\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} & = 0 \\
u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} & = \frac{1}{\rho}\frac{\partial \tau}{\partial y} \\
u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} & = - \frac{1}{\rho c_P}\frac{\partial \dot q}{\partial y}
\end{align}
With
\begin{equation}
\tau = \mu \frac{\partial u}{\partial y}, \qquad \tau = \mu \frac{\partial u}{\partial y} - \rho \overline{u'v'}
\end{equation}
for the laminar and turbulent case, and similarly we have
\begin{equation}
\dot q = -\lambda \frac{\partial T}{\partial y}, \qquad \dot q = -\lambda \frac{\partial T}{\partial y} + \rho c_P \overline{T'v'}
\end{equation}
for the laminar and turbulent cases of the energy equation. A proper turbulence model has to be selected for the *Reynolds' stresses* and *turbulent diffusion*. In this case the **Cebeci-Smith** model, derived from the Schlichting formula which is based on the mixing length is applied.
Analogy considerations to the molecular viscosity lead to
\begin{equation}
\mu_t = \frac{- \rho \overline{u'v'}} {\partial u/\partial y} \qquad \text{so that} \qquad \mu_{eff} = \mu + \mu_t
\end{equation}
and for the turbulent diffussivity
\begin{equation}
\kappa_t = \frac{-\rho c_P \overline{T'v'}} {\partial T/\partial y} \qquad \text{so that} \qquad \kappa_{eff} = \kappa + \kappa_t \qquad \text{with} \qquad \kappa = \frac{\lambda}{\rho c_P} \qquad \text{being the } \textit {thermal diffusivity}.
\end{equation}
With
\begin{equation}
\epsilon_m = \frac{\mu_t}{\rho} \quad \text{and} \quad \epsilon_h = \frac{\kappa_t}{\rho c_P},\quad \text {we define the } \textit{eddy kinematic viscosity} \text{ and } \textit{eddy diffusivity of heat} \text{, respectively}.
\end{equation}
By analogy with the molecular Prandtl number:
\begin{equation}
Pr \equiv \frac{\nu}{\lambda} \equiv \frac{\mu c_P}{\lambda}
\end{equation}
a *turbulent Prandtl* number can be defined
\begin{equation}
Pr_t \equiv \frac{\epsilon_m}{\epsilon_h} \equiv \frac{\epsilon_m^+}{\epsilon_h^+} \equiv \frac{\rho \overline{u'v'} / (\partial u/\partial y)}{\overline{T'v'} / (\partial T/\partial y)}
\end{equation}
with
\begin{equation}
\epsilon_m^+ \equiv \frac{\epsilon_m}{\nu} \quad \text{and} \quad \epsilon_h^+ \equiv \frac{\epsilon_h}{\nu}.
\end{equation}
Applying the **Falkner-Skan** transformation:
\begin{equation}
\eta \equiv \frac{y}{\delta} = \sqrt{\frac{u_0}{\nu L}}\frac{y}{3\xi^{2/3}}, \qquad \xi=\frac{x}{L}, \qquad \psi = \sqrt{u_0 \nu L}\xi^{1/3}f(\xi, \eta), \quad u = \frac{\partial \psi}{\partial y} = \frac{\partial \psi}{\partial \eta}\frac{\partial \eta}{\partial y}, \quad v = -\frac{\partial \psi}{\partial x} = -\frac{\partial \psi}{\partial \eta}\frac{\partial \eta}{\partial x}
\end{equation}
\begin{equation}
u = \frac{1}{3} \frac{u_0 f'}{\xi^{1/3}} \quad \text{with} \quad f' \equiv \frac{\partial f}{\partial \eta}
\end{equation}
the following equations for the 2D laminar heated free jet can be derived:
\begin{align}
f'''+(f')^2 +ff'' &= 3 \xi \left( f' \frac{\partial f'}{\partial \xi} - f'' \frac{\partial f}{\partial \xi}\right) \\
\frac{1}{Pr}g'' + (fg)' &= 3 \xi \left( f' \frac{\partial g}{\partial \xi} - g' \frac{\partial f}{\partial \xi}\right)
\end{align}
The boundary conditions are:
\begin{align}
\eta = 0&: \quad f = f'' = 0, \quad g' = 0 \\
\eta = \eta_e&: \quad f' = 0, \quad g = 0
\end{align}
## Keller's BOX method
The BOX-method of H.B. Keller is a second order accurate discretization scheme for parabolic partial differential equations. Since the boundary layer equations are of parabolic type the scheme is well suited.
The BOX-method consists of four steps:
1. Reduction of the order of the PDE by substitution to get a first order equation system
2. Use central differences for the discretization of the first order equations
3. Linearize the resulting algebraic equations (in case they are non-linear)
4. Solve the linear system
The order reduction is done by substitution, which typically leads to a set of ODEs and one PDE. The ODEs are discretized at the new, unknown position (e.g. space or time), whereas the first-order PDE is discretized halfway between known and unknown quantities.
In this case, steps 3 and 4 are treated in a different way and a solver for a system of nonlinear equations is employed instead.
1. Reduction of the order of the PDE by substitution to get a first order equation system
2. Use central differences for the discretization of the first order equations
3. Rewrite linear and nonlinear discretized equations in the form F(x) = 0 and solve the system using SciPy.
The solver applied is **fsolve** from **scipy.optimize**, a general-purpose solver for systems of nonlinear equations.
### Step 1 - Order Reduction
Applying the order reduction of the BOX-method to the equations for the free jet and also introducing turbulence yields the following set of equations:
\begin{align}
f' &= u(\xi, \eta) \\
u' &= v(\xi, \eta) \\
g' &= p(\xi, \eta) \\
(bv)'+u^2 +fv &= 3 \xi \left( u \frac{\partial u}{\partial \xi} - v \frac{\partial f}{\partial \xi}\right) \\
(ep)' + fp &= 3 \xi \left( u \frac{\partial g}{\partial \xi} - p \frac{\partial f}{\partial \xi}\right)
\end{align}
which is a set of 3 ODEs and 2 PDEs, where
\begin{equation}
b = 1 + \epsilon_m^+ \quad \text{and} \quad e = \frac{1}{Pr} + \frac{\epsilon_m^+}{Pr_t}
\end{equation}
The boundary conditions after the substitution are:
\begin{align}
\eta = 0&: \quad f = v = 0, \quad p = 0 \\
\eta = \eta_e&: \quad u = 0, \quad g = 0
\end{align}
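As a small helper sketch (illustrative names only; $Pr \approx 0.72$ for air and $Pr_t \approx 0.9$ are assumed typical values, not taken from the original code), the effective coefficients $b$ and $e$ follow directly from the definitions above:
```python
def effective_coefficients(eps_m_plus, Pr=0.72, Pr_t=0.9):
    """Return b = 1 + eps_m+ and e = 1/Pr + eps_m+/Pr_t."""
    b = 1.0 + eps_m_plus
    e = 1.0 / Pr + eps_m_plus / Pr_t
    return b, e
```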
### Step 2 - Discretization
The figures below depict the transformed computational mesh at the nozzle exit and the nomenclature of the *Keller BOX* and associated discretization points for the ODEs and PDEs. In the lower sketch the solution propagation of the PDE is from left to right which corresponds to the flow direction and the parabolic nature of the flow type (downstream disturbances do not affect the upstream solution).
```python
import os
path = 'C:\Dropbox\JET\Documentation'
_file = 'Nozzle_Meshgrid.png'
filename = os.path.join(path, _file)
Image(filename=filename, width=600)
```
```python
_file = 'Keller_BOX.png'
filename = os.path.join(path, _file)
Image(filename=filename, width=600)
```
#### ODE Discretization
\begin{equation}
f' = \frac{df}{d\eta} = u
\end{equation}
Using central differences to discretize this ODE at the unknown location (see *ODE discretization point* in the figure above) results in:
\begin{equation}
\frac{1}{h_j}(f_{j}^{n}-f_{j-1}^{n}) = \frac{1}{2}(u_{j}^{n}+u_{j-1}^{n})
\end{equation}
and likewise for the remaining two ODEs:
\begin{equation}
\frac{1}{h_j}(u_{j}^{n}-u_{j-1}^{n}) = \frac{1}{2}(v_{j}^{n}+v_{j-1}^{n})
\end{equation}
\begin{equation}
\frac{1}{h_j}(g_{j}^{n}-g_{j-1}^{n}) = \frac{1}{2}(p_{j}^{n}+p_{j-1}^{n})
\end{equation}
#### PDE Discretization
##### Discretization of the Momentum Equation
\begin{equation}
(bv)'+u^2 +fv = 3 \xi \left( u \frac{\partial u}{\partial \xi} - v \frac{\partial f}{\partial \xi}\right)
\end{equation}
The PDE is again discretized using central differences, but now at a point halfway between known and unknown quantities in the center of the "Keller BOX" (see *PDE discretization point* in the figure above).
The discretization is done one by one for each individual term. For the first term of the momentum equation it is done in a quite verbose way in order to show the process in every detail. In the last line the variables are already separated in terms of known (superscript $n-1$) and unknown (superscript $n$) positions. This is needed later for grouping the coefficients of the unknowns as well as the known terms for the matrix-vector solution procedure ($\matrix{A}\vec{x}=\vec{b}$) for the linear equation system.
\begin{align}
(bv)' = \frac{\partial (bv)}{\partial \eta} &\approx \frac{1}{h_j}\left [\left(bv\right)_{j}^{n-1/2}-\left(bv\right)_{j-1}^{n-1/2}\right ] =\\
&= \frac{1}{h_j}\left [ \frac{1}{2}\left((bv)_{j}^{n} + (bv)_{j}^{n-1}\right) - \frac{1}{2}\left((bv)_{j-1}^{n} + (bv)_{j-1}^{n-1}\right)\right] = \\
&= \frac{1}{h_j}\left [ \frac{1}{2}(b_{j}^{n}v_{j}^{n} + b_{j}^{n-1}v_{j}^{n-1}) - \frac{1}{2}(b_{j-1}^{n}v_{j-1}^{n} + b_{j-1}^{n-1}v_{j-1}^{n-1})\right] = \\
&= \frac{1}{2 h_j} \left(b_{j}^{n}v_{j}^{n} - b_{j-1}^{n}v_{j-1}^{n}\right) + \frac{1}{2 h_j}\left(b_{j}^{n-1}v_{j}^{n-1} - b_{j-1}^{n-1}v_{j-1}^{n-1}\right) \\
\quad \\
u^2 &\approx \left (u^2 \right )_{j-1/2}^{n-1/2} = \\
& = \frac{1}{2} \left[\left ( u^2 \right)_{j-1/2}^{n} + \left(u^2\right)_{j-1/2}^{n-1}\right] \\
& = \frac{1}{4}\left[\left ( u^2 \right)_{j}^{n} + \left ( u^2 \right)_{j-1}^{n}\right] + \frac{1}{4}\left[\left ( u^2 \right)_{j}^{n-1} + \left ( u^2 \right)_{j-1}^{n-1}\right]\\
\quad \\
fv &\approx \left (fv \right )_{j-1/2}^{n-1/2} = \\
& = \frac{1}{2} \left[ \left(fv\right)_{j-1/2}^{n} + \left(fv\right)_{j-1/2}^{n-1}\right] \\
& = \frac{1}{4}\left(f_{j}^{n}v_{j}^{n} + f_{j-1}^{n}v_{j-1}^{n}\right) + \frac{1}{4}\left ( f_{j}^{n-1}v_{j}^{n-1} + f_{j-1}^{n-1}v_{j-1}^{n-1}\right) \\
\quad \\
3\xi &= 3\xi^{n-1/2} = \frac{3}{2}\left(\xi^n+\xi^{n-1}\right)
\quad \\
u &\approx u_{j-1/2}^{n-1/2} = \\
& = \frac{1}{2} \left(u_{j-1/2}^{n} + u_{j-1/2}^{n-1}\right) \\
& = \frac{1}{4}\left(u_{j}^{n} + u_{j-1}^{n}\right) + \frac{1}{4}\left(u_{j}^{n-1} + u_{j-1}^{n-1}\right)\\
\quad \\
\frac{\partial u}{\partial \xi} &\approx \frac{1}{k_n}\left (u_{j-1/2}^{n}-u_{j-1/2}^{n-1}\right ) =\\
&= \frac{1}{2 k_n}\left (u_{j}^{n}+u_{j-1}^{n}\right) - \frac{1}{2 k_n}\left (u_{j}^{n-1}+u_{j-1}^{n-1}\right) = \\
\quad \\
v &\approx v_{j-1/2}^{n-1/2} = \\
& = \frac{1}{2} \left(v_{j-1/2}^{n} + v_{j-1/2}^{n-1}\right) \\
& = \frac{1}{4}\left(v_{j}^{n} + v_{j-1}^{n}\right) + \frac{1}{4}\left(v_{j}^{n-1} + v_{j-1}^{n-1}\right)\\
\quad \\
\frac{\partial f}{\partial \xi} &\approx \frac{1}{k_n}\left (f_{j-1/2}^{n}-f_{j-1/2}^{n-1}\right ) =\\
&= \frac{1}{2 k_n}\left (f_{j}^{n}+f_{j-1}^{n}\right) - \frac{1}{2 k_n}\left (f_{j}^{n-1}+f_{j-1}^{n-1}\right)
\end{align}
Keeping the $( )_{j-1/2}$ terms, the complete discretized reduced order momentum equation now is:
\begin{equation}
\frac{1}{2 h_j} \left(b_{j}^{n}v_{j}^{n} - b_{j-1}^{n}v_{j-1}^{n}\right) + \frac{1}{2 h_j}\left(b_{j}^{n-1}v_{j}^{n-1} - b_{j-1}^{n-1}v_{j-1}^{n-1}\right) + \frac{1}{2} \left[\left ( u^2 \right)_{j-1/2}^{n} + \left(u^2\right)_{j-1/2}^{n-1}\right] + \frac{1}{2} \left[\left ( fv \right)_{j-1/2}^{n} + \left(fv\right)_{j-1/2}^{n-1}\right] = \frac{3}{2 k_n}\left(\xi^n+\xi^{n-1}\right)\left \{ \frac{1}{2} \left(u_{j-1/2}^{n} + u_{j-1/2}^{n-1}\right)\left (u_{j-1/2}^{n}-u_{j-1/2}^{n-1}\right) + \frac{1}{2} \left(v_{j-1/2}^{n} + v_{j-1/2}^{n-1}\right) \left(f_{j-1/2}^{n}-f_{j-1/2}^{n-1}\right ) \right\}
\end{equation}
Multiply by 2 and regroup LHS by unknowns and knowns:
\begin{equation}
\frac{1}{h_j} \left(b_{j}^{n}v_{j}^{n} - b_{j-1}^{n}v_{j-1}^{n}\right) + \left ( u^2 \right)_{j-1/2}^{n} + \left ( fv \right)_{j-1/2}^{n} + \frac{1}{h_j}\left(b_{j}^{n-1}v_{j}^{n-1} - b_{j-1}^{n-1}v_{j-1}^{n-1}\right) + \left(u^2\right)_{j-1/2}^{n-1} + \left(fv\right)_{j-1/2}^{n-1} = \frac{3}{2 k_n}\left(\xi^n+\xi^{n-1}\right) \left\{ \left(u_{j-1/2}^{n} + u_{j-1/2}^{n-1}\right)\left (u_{j-1/2}^{n}-u_{j-1/2}^{n-1}\right) - \left(v_{j-1/2}^{n} + v_{j-1/2}^{n-1}\right) \left(f_{j-1/2}^{n}-f_{j-1/2}^{n-1}\right ) \right\}
\end{equation}
The prefactor on the RHS is substitued by $\alpha$:
\begin{equation}
\alpha = \frac{3}{2 k_n}\left(\xi^n+\xi^{n-1}\right) = 3 \frac{\xi^{n-1/2}}{\xi^n-\xi^{n-1}}
\end{equation}
\begin{equation}
\frac{1}{h_j} \left(b_{j}^{n}v_{j}^{n} - b_{j-1}^{n}v_{j-1}^{n}\right) + \left ( u^2 \right)_{j-1/2}^{n} + \left ( fv \right)_{j-1/2}^{n} + \frac{1}{h_j}\left(b_{j}^{n-1}v_{j}^{n-1} - b_{j-1}^{n-1}v_{j-1}^{n-1}\right) + \left(u^2\right)_{j-1/2}^{n-1} + \left(fv\right)_{j-1/2}^{n-1} = \alpha \left\{ \left(u_{j-1/2}^{n} + u_{j-1/2}^{n-1}\right)\left (u_{j-1/2}^{n}-u_{j-1/2}^{n-1}\right) - \left(v_{j-1/2}^{n} + v_{j-1/2}^{n-1}\right) \left(f_{j-1/2}^{n}-f_{j-1/2}^{n-1}\right ) \right\}
\end{equation}
Resolving the terms on the RHS and indicating knowns and unknowns at the LHS:
\begin{equation}
\underbrace{ \frac{1}{h_j} \left(b_{j}^{n}v_{j}^{n} - b_{j-1}^{n}v_{j-1}^{n}\right) + \left ( u^2 \right)_{j-1/2}^{n} + \left ( fv \right)_{j-1/2}^{n} }_\text{unknown} + \underbrace{ \frac{1}{h_j}\left(b_{j}^{n-1}v_{j}^{n-1} - b_{j-1}^{n-1}v_{j-1}^{n-1}\right) + \left(u^2\right)_{j-1/2}^{n-1} + \left(fv\right)_{j-1/2}^{n-1} }_\text{known} = \alpha \left\{ \left(u^2\right)_{j-1/2}^{n} - u_{j-1/2}^{n}u_{j-1/2}^{n-1}+ u_{j-1/2}^{n-1}u_{j-1/2}^{n} - \left(u^2\right)_{j-1/2}^{n-1} - \left(fv\right)_{j-1/2}^{n} + v_{j-1/2}^{n}f_{j-1/2}^{n-1} - v_{j-1/2}^{n-1}f_{j-1/2}^{n} + \left(fv\right)_{j-1/2}^{n-1} \right\}
\end{equation}
Futher rerarrangement and putting all unknowns to the LHS (also those, where knowns and unknowns are combined in a term) and all knowns to the RHS yields the final discretization of the reduced order momentum equation:
\begin{equation}
\frac{1}{h_j} \left(b_{j}^{n}v_{j}^{n} - b_{j-1}^{n}v_{j-1}^{n}\right) + \left(1-\alpha\right) \left( u^2 \right)_{j-1/2}^{n} + \left(1+\alpha\right) \left( fv \right)_{j-1/2}^{n} + \alpha \left( v_{j-1/2}^{n-1}f_{j-1/2}^{n} - v_{j-1/2}^{n}f_{j-1/2}^{n-1} \right) = - \frac{1}{h_j}\left(b_{j}^{n-1}v_{j}^{n-1} - b_{j-1}^{n-1}v_{j-1}^{n-1}\right) - \left(1+\alpha\right) \left( u^2 \right)_{j-1/2}^{n-1} - \left(1-\alpha\right)\left(fv\right)_{j-1/2}^{n-1}
\end{equation}
##### Discretization of the Energy Equation
\begin{equation}
(ep)' + fp = 3 \xi \left( u \frac{\partial g}{\partial \xi} - p \frac{\partial f}{\partial \xi}\right)
\end{equation}
The discretization is done again one by one for each individual term, but now without being too verbose.
\begin{align}
(ep)' = \frac{\partial (ep)}{\partial \eta} &\approx \frac{1}{h_j}\left [\left(ep\right)_{j}^{n-1/2}-\left(ep\right)_{j-1}^{n-1/2}\right ] =\\
&= \frac{1}{2 h_j} \left(e_{j}^{n}p_{j}^{n} - e_{j-1}^{n}p_{j-1}^{n}\right) + \frac{1}{2 h_j}\left(e_{j}^{n-1}p_{j}^{n-1} - e_{j-1}^{n-1}p_{j-1}^{n-1}\right) \\
\quad \\
fp &\approx \left (fp \right )_{j-1/2}^{n-1/2} = \\
& = \frac{1}{2} \left[ \left(fp\right)_{j-1/2}^{n} + \left(fp\right)_{j-1/2}^{n-1}\right] \\
\quad \\
3\xi &= 3\xi^{n-1/2} = \frac{3}{2}\left(\xi^n+\xi^{n-1}\right)
\quad \\
u &\approx u_{j-1/2}^{n-1/2} = \\
& = \frac{1}{2} \left(u_{j-1/2}^{n} + u_{j-1/2}^{n-1}\right) \\
\quad \\
\frac{\partial g}{\partial \xi} &\approx \frac{1}{k_n}\left (g_{j-1/2}^{n}-g_{j-1/2}^{n-1}\right )\\
\quad \\
p &\approx p_{j-1/2}^{n-1/2} = \\
& = \frac{1}{2} \left(p_{j-1/2}^{n} + p_{j-1/2}^{n-1}\right) \\
\quad \\
\frac{\partial f}{\partial \xi} &\approx \frac{1}{k_n}\left (f_{j-1/2}^{n}-f_{j-1/2}^{n-1}\right )
\end{align}
Discretized energy equation:
\begin{equation}
\frac{1}{2 h_j} \left(e_{j}^{n}p_{j}^{n} - e_{j-1}^{n}p_{j-1}^{n}\right) + \frac{1}{2 h_j}\left(e_{j}^{n-1}p_{j}^{n-1} - e_{j-1}^{n-1}p_{j-1}^{n-1}\right) + \frac{1}{2} \left[ \left(fp\right)_{j-1/2}^{n} + \left(fp\right)_{j-1/2}^{n-1}\right] = \frac{3}{2 k_n}\left(\xi^n+\xi^{n-1}\right) \left[\frac{1}{2} \left(u_{j-1/2}^{n} + u_{j-1/2}^{n-1}\right) \left (g_{j-1/2}^{n}-g_{j-1/2}^{n-1}\right ) - \frac{1}{2} \left(p_{j-1/2}^{n} + p_{j-1/2}^{n-1}\right)\left (f_{j-1/2}^{n}-f_{j-1/2}^{n-1}\right ) \right]
\end{equation}
Final discretization of the reduced order energy equation with separated knowns and unknowns.
\begin{equation}
\frac{1}{h_j} \left(e_{j}^{n}p_{j}^{n} - e_{j-1}^{n}p_{j-1}^{n}\right) + \left(1+\alpha\right) \left( fp \right)_{j-1/2}^{n} - \alpha \left[\left( ug \right)_{j-1/2}^{n} + u_{j-1/2}^{n-1}g_{j-1/2}^{n} - u_{j-1/2}^{n}g_{j-1/2}^{n-1} + p_{j-1/2}^{n}f_{j-1/2}^{n-1} - p_{j-1/2}^{n-1}f_{j-1/2}^{n} \right] = - \frac{1}{h_j}\left(e_{j}^{n-1}p_{j}^{n-1} - e_{j-1}^{n-1}p_{j-1}^{n-1}\right) - \left(1-\alpha\right)\left(fp\right)_{j-1/2}^{n-1} - \alpha \left( ug \right)_{j-1/2}^{n-1}
\end{equation}
**All discretized ODEs and PDEs together:**
\begin{equation}
\frac{1}{h_j}(f_{j}^{n}-f_{j-1}^{n}) = \frac{1}{2}(u_{j}^{n}+u_{j-1}^{n}) \\
\quad \\
\frac{1}{h_j}(u_{j}^{n}-u_{j-1}^{n}) = \frac{1}{2}(v_{j}^{n}+v_{j-1}^{n}) \\
\quad \\
\frac{1}{h_j}(g_{j}^{n}-g_{j-1}^{n}) = \frac{1}{2}(p_{j}^{n}+p_{j-1}^{n}) \\
\quad \\
\frac{1}{h_j} \left(b_{j}^{n}v_{j}^{n} - b_{j-1}^{n}v_{j-1}^{n}\right) + \left(1-\alpha\right) \left( u^2 \right)_{j-1/2}^{n} + \left(1+\alpha\right) \left( fv \right)_{j-1/2}^{n} + \alpha \left( v_{j-1/2}^{n-1}f_{j-1/2}^{n} - v_{j-1/2}^{n}f_{j-1/2}^{n-1} \right) = - \frac{1}{h_j}\left(b_{j}^{n-1}v_{j}^{n-1} - b_{j-1}^{n-1}v_{j-1}^{n-1}\right) - \left(1+\alpha\right) \left( u^2 \right)_{j-1/2}^{n-1} - \left(1-\alpha\right)\left(fv\right)_{j-1/2}^{n-1} \\
\quad \\
\frac{1}{h_j} \left(e_{j}^{n}p_{j}^{n} - e_{j-1}^{n}p_{j-1}^{n}\right) + \left(1+\alpha\right) \left( fp \right)_{j-1/2}^{n} - \alpha \left[\left( ug \right)_{j-1/2}^{n} + u_{j-1/2}^{n-1}g_{j-1/2}^{n} - u_{j-1/2}^{n}g_{j-1/2}^{n-1} + p_{j-1/2}^{n}f_{j-1/2}^{n-1} - p_{j-1/2}^{n-1}f_{j-1/2}^{n} \right] = - \frac{1}{h_j}\left(e_{j}^{n-1}p_{j}^{n-1} - e_{j-1}^{n-1}p_{j-1}^{n-1}\right) - \left(1-\alpha\right)\left(fp\right)_{j-1/2}^{n-1} - \alpha \left( ug \right)_{j-1/2}^{n-1}
\end{equation}
And the boundary conditions become:
\begin{equation}
f_0 = 0, \quad v_0 = 0, \quad p_0 = 0 \\
u_J = 0, \quad g_J = 0
\end{equation}
### Step 3 - Rewrite Equations for SciPy Solver
If we simply rearrange the equations in the form $f(x)=0$, where $f$ is a vector valued function consisting of:
\begin{equation}
f_1(x_1, x_2, x_3, . . . , x_n) = 0 \\
f_2(x_1, x_2, x_3, . . . , x_n) = 0 \\
. \\
. \\
f_n(x_1, x_2, x_3, . . . , x_n) = 0
\end{equation}
and $(x_1, x_2, x_3, . . . , x_n)$ being the solution vector $x$ we can apply the solver [*scipy.optimize.fsolve*](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fsolve.html#scipy.optimize.fsolve) from the [scipy](http://www.scipy.org/) package.
\begin{equation}
\frac{1}{h_j}(f_{j}^{n}-f_{j-1}^{n}) - \frac{1}{2}(u_{j}^{n}+u_{j-1}^{n}) = 0\\
\quad \\
\frac{1}{h_j}(u_{j}^{n}-u_{j-1}^{n}) - \frac{1}{2}(v_{j}^{n}+v_{j-1}^{n}) = 0\\
\quad \\
\frac{1}{h_j}(g_{j}^{n}-g_{j-1}^{n}) - \frac{1}{2}(p_{j}^{n}+p_{j-1}^{n}) = 0\\
\quad \\
\frac{1}{h_j} \left(b_{j}^{n}v_{j}^{n} - b_{j-1}^{n}v_{j-1}^{n}\right) + \left(1-\alpha\right) \left( u^2 \right)_{j-1/2}^{n} + \left(1+\alpha\right) \left( fv \right)_{j-1/2}^{n} + \alpha \left( v_{j-1/2}^{n-1}f_{j-1/2}^{n} - v_{j-1/2}^{n}f_{j-1/2}^{n-1} \right) + \frac{1}{h_j}\left(b_{j}^{n-1}v_{j}^{n-1} - b_{j-1}^{n-1}v_{j-1}^{n-1}\right) + \left(1+\alpha\right) \left( u^2 \right)_{j-1/2}^{n-1} + \left(1-\alpha\right)\left(fv\right)_{j-1/2}^{n-1} = 0 \\
\quad \\
\frac{1}{h_j} \left(e_{j}^{n}p_{j}^{n} - e_{j-1}^{n}p_{j-1}^{n}\right) + \left(1+\alpha\right) \left( fp \right)_{j-1/2}^{n} - \alpha \left[\left( ug \right)_{j-1/2}^{n} + u_{j-1/2}^{n-1}g_{j-1/2}^{n} - u_{j-1/2}^{n}g_{j-1/2}^{n-1} + p_{j-1/2}^{n}f_{j-1/2}^{n-1} - p_{j-1/2}^{n-1}f_{j-1/2}^{n} \right] + \frac{1}{h_j}\left(e_{j}^{n-1}p_{j}^{n-1} - e_{j-1}^{n-1}p_{j-1}^{n-1}\right) + \left(1-\alpha\right)\left(fp\right)_{j-1/2}^{n-1} + \alpha \left( ug \right)_{j-1/2}^{n-1} = 0
\end{equation}
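Before implementing the full system above, it can help to see the solver pattern on a toy problem. The sketch below is **not** the jet equations: it applies the same centered (box) discretization to the simple system $f'=u$, $u'=-f$ on $[0,\pi/2]$ with $f(0)=0$, $u(0)=1$ (exact solution $f=\sin$, $u=\cos$), stacks all unknowns into one vector, and hands the residual vector to *scipy.optimize.fsolve*. The grid size and the toy equations are illustrative assumptions.
```python
# Minimal sketch of the f(x) = 0 pattern with scipy.optimize.fsolve.
# Toy problem only: f' = u, u' = -f, discretized with the same centered scheme.
import numpy as np
from scipy.optimize import fsolve

J = 50                                   # number of intervals (illustrative)
eta = np.linspace(0.0, np.pi / 2, J + 1)
h = np.diff(eta)                         # h_j = eta_j - eta_{j-1}

def residuals(x):
    f, u = x[:J + 1], x[J + 1:]
    res = np.empty(2 * (J + 1))
    # (f_j - f_{j-1})/h_j - (u_j + u_{j-1})/2 = 0
    res[:J] = (f[1:] - f[:-1]) / h - 0.5 * (u[1:] + u[:-1])
    # (u_j - u_{j-1})/h_j + (f_j + f_{j-1})/2 = 0   (since u' = -f)
    res[J:2 * J] = (u[1:] - u[:-1]) / h + 0.5 * (f[1:] + f[:-1])
    # boundary conditions written as residuals
    res[2 * J] = f[0]
    res[2 * J + 1] = u[0] - 1.0
    return res

solution = fsolve(residuals, np.zeros(2 * (J + 1)))
print("max error vs sin(eta):", np.abs(solution[:J + 1] - np.sin(eta)).max())
```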
```python
```
```python
```
```python
```
| f2021bec78fbd038ec25e61e5b8f29637689e893 | 111,825 | ipynb | Jupyter Notebook | jet.ipynb | chiefenne/jet | 82ae734186ced455b87182676ec714d834ae6b3e | ["MIT"] | null | null | null | jet.ipynb | chiefenne/jet | 82ae734186ced455b87182676ec714d834ae6b3e | ["MIT"] | null | null | null | jet.ipynb | chiefenne/jet | 82ae734186ced455b87182676ec714d834ae6b3e | ["MIT"] | null | null | null | 133.125 | 56,408 | 0.831898 | true | 8,916 | Qwen/Qwen-72B | 1. YES 2. YES | 0.661923 | 0.760651 | 0.503492 | __label__eng_Latn | 0.5154 | 0.00811
```
%matplotlib inline
from sympy import var, Matrix, eye, init_printing, roots
init_printing()
```
```
var("a:5")
var("b:5")
var("s");
```
# Observer and observability canonical forms
We have a fourth-order system with the state-space representation:
```
Ao = Matrix([[0, 0, 0, -a4], [1, 0, 0, -a3], [0, 1, 0, -a2], [0, 0, 1, -a1]])
bo = Matrix([[b4], [b3], [b2], [b1]])
co = Matrix([[0], [0], [0], [1]])
```
```
Ao
```
```
bo
```
```
co
```
Since it is already in observer canonical form, we can obtain its characteristic polynomial:
```
PCo = (s*eye(4) - Ao).det()
PCo
```
The system matrix is the following:
```
MSo = (s*eye(4) - Ao).row_join(bo).col_join(-co.T.row_join(Matrix([0])))
MSo
```
and its polynomial is:
```
pMSo = MSo.det()
pMSo
```
It has the observability matrix:
```
MOo = ((co.T.col_join(co.T*Ao)).col_join(co.T*Ao*Ao)).col_join(co.T*Ao*Ao*Ao)
MOo
```
```
MOo.det()
```
Given this system, we can transform it into its observability form:
```
Aob = Matrix([[-a1, -a2, -a3, -a4], [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]])
beta = Matrix([[1, 0, 0, 0], [a1, 1, 0, 0], [a2, a1, 1, 0], [a3, a2, a1, 1]]).inv()*Matrix([[b1], [b2], [b3], [b4]])
bob = Matrix([[beta[3]], [beta[2]], [beta[1]], [beta[0]]])
cob = co
```
```
Aob
```
```
bob
```
```
cob
```
This form has the characteristic polynomial:
```
PCob = (s*eye(4) - Aob).det()
PCob
```
and a system matrix:
```
MSob = (s*eye(4) - Aob).row_join(bob).col_join(-co.T.row_join(Matrix([0])))
MSob
```
with an observability matrix:
```
MOob = ((cob.T.col_join(cob.T*Aob)).col_join(cob.T*Aob*Aob)).col_join(cob.T*Aob*Aob*Aob)
MOob
```
```
MOob.det()
```
## Example
Consider the following system:
```
from numpy import matrix
```
```
A = Matrix([[0, 1, 0], [0, 0 , 1], [0, 0, 0]])
b = Matrix([[0], [0], [1]])
c = Matrix([[-1], [0], [1]])
x0 = Matrix ([[1], [0], [0]])
```
Its characteristic polynomial is:
```
PC = (s*eye(3) - A).det()
PC
```
Its system matrix is:
```
MS =(s*eye(3) - A).row_join(b).col_join(-c.T.row_join(Matrix([0])))
MS
```
and its polynomial is:
```
pMS = MS.det()
pMS.factor()
```
Its controllability matrix is:
```
MC = (b.row_join(A*b)).row_join(A*A*b)
MC
```
and its determinant:
```
MC.det()
```
which means that this system is controllable.
Now we compute its observability matrix.
```
MO = (c.T.col_join(c.T*A)).col_join(c.T*A*A)
MO
```
and its determinant:
```
MO.det()
```
which means that it is observable.
We know that we have 3 poles at the origin, but we want them at $-1$; we can compute the required coefficients:
```
((s+1)**3).expand()
```
which implies the following state feedback:
```
f = Matrix([[-1], [-3], [-3]])
```
and the following closed-loop (state-feedback) system:
```
Af = A + b*f.T
Af
```
```
A.col_join(c.T)
```
However, to compute the output injection we want to work in the observer canonical form, so we first write it down and then compute the required change of basis:
```
Ao = Matrix([[0, 0, 0], [1, 0, 0], [0, 1, 0]])
bo = Matrix([[-1], [0], [1]])
co = Matrix([[0], [0], [1]])
```
whose transfer function is:
```
(co.T*((s*eye(3) - Ao).inv())*bo)[0].factor()
```
its observability matrix:
```
MOo = (co.T.col_join(co.T*Ao)).col_join(co.T*Ao*Ao)
MOo
```
and the change-of-basis matrix is:
```
T = MO.inv()* MOo
T
```
With this in hand, we only need to design an output injection.
In this case we want a faster response, so we will place these poles at $-10$:
```
((s+10)**3).expand()
```
so our output injection will be:
```
ko = Matrix([[-1000], [-300], [-30]])
```
and the system under output injection will be:
```
Ako = Ao + ko*co.T
Ako
```
and its characteristic polynomial will be:
```
(s*eye(3) - Ako).det()
```
However, this output injection is defined in the new basis, so we have to convert it back to the original basis:
```
k = T*ko
k
```
If we now combine the state feedback and the output injection, we get:
```
Alc = (A.row_join(b*f.T)).col_join((-k*c.T).row_join(Af + k*c.T))
Alc
```
```
blc = b.col_join(b)
blc
```
```
clc = c.col_join(Matrix([[0], [0], [0]]))
clc
```
The characteristic polynomial of this system is:
```
(s*eye(6) - Alc).det().factor()
```
and the polynomial of the system matrix:
```
MSlc = ((s*eye(6) - Alc).row_join(blc)).col_join(-clc.T.row_join(Matrix([[0]])))
MSlc
```
```
MSlc.det().factor()
```
Its controllability matrix will be:
```
MClc = ((((blc.row_join(Alc*blc))
.row_join(Alc*Alc*blc))
.row_join(Alc*Alc*Alc*blc))
.row_join(Alc*Alc*Alc*Alc*blc)).row_join(Alc*Alc*Alc*Alc*Alc*blc)
MClc
```
```
MClc.det()
```
```
MOlc = ((((clc.T.col_join(clc.T*Alc))
.col_join(clc.T*Alc*Alc))
.col_join(clc.T*Alc*Alc*Alc))
.col_join(clc.T*Alc*Alc*Alc*Alc)).col_join(clc.T*Alc*Alc*Alc*Alc*Alc)
MOlc
```
```
MOlc.det()
```
```
Tlc, Jlc = Alc.jordan_form()
```
```
Jlc
```
```
Tlc
```
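As a quick numerical cross-check of the design above, the sketch below re-enters the matrices with numpy, recomputes the change of basis $T$ and the output injection $k$, and prints the closed-loop eigenvalues; up to the numerical sensitivity of repeated roots they should sit at $-1$ (state feedback) and $-10$ (output injection).
```
import numpy as np

A_n  = np.array([[0., 1., 0.], [0., 0., 1.], [0., 0., 0.]])
b_n  = np.array([[0.], [0.], [1.]])
c_n  = np.array([[-1.], [0.], [1.]])
f_n  = np.array([[-1.], [-3.], [-3.]])
Ao_n = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
co_n = np.array([[0.], [0.], [1.]])
ko_n = np.array([[-1000.], [-300.], [-30.]])

# observability matrices and the change of basis T = MO^{-1} MOo
MO_n  = np.vstack([c_n.T, c_n.T @ A_n, c_n.T @ A_n @ A_n])
MOo_n = np.vstack([co_n.T, co_n.T @ Ao_n, co_n.T @ Ao_n @ Ao_n])
T_n   = np.linalg.inv(MO_n) @ MOo_n
k_n   = T_n @ ko_n

print(np.linalg.eigvals(A_n + b_n @ f_n.T))   # expected: three poles near -1
print(np.linalg.eigvals(A_n + k_n @ c_n.T))   # expected: three poles near -10
```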
| 5b03b41dc5f4046f09ea3d528c9c85b4c036c7d5 | 145,650 | ipynb | Jupyter Notebook | IPythonNotebooks/Teoria de Control I/Formas canonicas observador y observabilidad.ipynb | chelizalde/DCA | 34fd4d500117a9c0a75b979b8b0f121c1992b9dc | ["MIT"] | null | null | null | IPythonNotebooks/Teoria de Control I/Formas canonicas observador y observabilidad.ipynb | chelizalde/DCA | 34fd4d500117a9c0a75b979b8b0f121c1992b9dc | ["MIT"] | null | null | null | IPythonNotebooks/Teoria de Control I/Formas canonicas observador y observabilidad.ipynb | chelizalde/DCA | 34fd4d500117a9c0a75b979b8b0f121c1992b9dc | ["MIT"] | 1 | 2021-03-20T12:44:13.000Z | 2021-03-20T12:44:13.000Z | 84.239445 | 21,973 | 0.770717 | true | 1,888 | Qwen/Qwen-72B | 1. YES 2. YES | 0.872347 | 0.865224 | 0.754776 | __label__spa_Latn | 0.786382 | 0.59193
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W1D5-DimensionalityReduction/student/W1D5_Tutorial3.ipynb" target="_parent">Open in Colab</a>
# Neuromatch Academy: Week 1, Day 5, Tutorial 3
# Dimensionality Reduction and reconstruction
---
In this notebook we'll learn to apply PCA for dimensionality reduction, using a classic dataset that is often used to benchmark machine learning algorithms: the MNIST dataset of handwritten digits. We'll also learn how to use PCA for reconstruction and denoising.
Steps:
1. Perform PCA on MNIST dataset.
2. Calculate the variance explained.
3. Reconstruct data with different numbers of PCs.
4. Examine denoising using PCA.
To learn more about MNIST:
* https://en.wikipedia.org/wiki/MNIST_database
---
```python
#@title Video: Logistic regression
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="ew0-P7-6Nho", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```
Video available at https://youtube.com/watch?v=ew0-P7-6Nho
# Setup
Run these cells to get the tutorial started.
```python
#import libraries
import time # import time
import numpy as np # import numpy
import scipy as sp # import scipy
import math # import basic math
import random # import basic random number generator functions
import matplotlib.pyplot as plt # import matplotlib
from IPython import display
```
```python
# @title Figure Settings
fig_w, fig_h = (10, 4)
plt.rcParams.update({'figure.figsize': (fig_w, fig_h)})
plt.style.use('ggplot')
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
```
```python
# @title Helper Functions
# New helper functions
def plot_variance_explained(variance_explained):
"""
Plots eigenvalues.
Args:
variance_explained (numpy array of floats) : Vector of variance explained for each PC
Returns:
Nothing.
"""
plt.figure()
plt.plot(np.arange(1,len(variance_explained)+1),variance_explained,'o-k')
plt.xlabel('Number of components')
plt.ylabel('Variance explained')
def plot_MNIST_reconstruction(X,X_reconstructed):
"""
Plots 9 images in the MNIST dataset side-by-side with the reconstructed images.
Args:
X (numpy array of floats): Data matrix
each column corresponds to a different random variable
X_reconstructed (numpy array of floats): Data matrix
each column corresponds to a different random variable
Returns:
Nothing.
"""
plt.figure()
ax = plt.subplot(1,2,1)
k=0
for k1 in range(3):
for k2 in range(3):
k = k+1
plt.imshow(np.reshape(X[k,:],(28,28)),extent=[(k1+1)*28,k1*28,(k2+1)*28,k2*28],vmin=0,vmax=255)
plt.xlim((3*28,0))
plt.ylim((3*28,0))
plt.tick_params(axis='both',which='both',bottom=False,top=False,labelbottom=False)
ax.set_xticks([])
ax.set_yticks([])
plt.title('Data')
plt.clim([0,250])
ax = plt.subplot(1,2,2)
k=0
for k1 in range(3):
for k2 in range(3):
k = k+1
plt.imshow(np.reshape(np.real(X_reconstructed[k,:]),(28,28)),extent=[(k1+1)*28,k1*28,(k2+1)*28,k2*28],vmin=0,vmax=255)
plt.xlim((3*28,0))
plt.ylim((3*28,0))
plt.tick_params(axis='both',which='both',bottom=False,top=False,labelbottom=False)
ax.set_xticks([])
ax.set_yticks([])
plt.clim([0,250])
plt.title('Reconstructed')
def plot_MNIST_sample(X):
"""
Plots 9 images in the MNIST dataset.
Args:
X (numpy array of floats): Data matrix
each column corresponds to a different random variable
Returns:
Nothing.
"""
plt.figure()
fig, ax = plt.subplots()
k=0
for k1 in range(3):
for k2 in range(3):
k = k+1
plt.imshow(np.reshape(X[k,:],(28,28)),extent=[(k1+1)*28,k1*28,(k2+1)*28,k2*28],vmin=0,vmax=255)
plt.xlim((3*28,0))
plt.ylim((3*28,0))
plt.tick_params(axis='both',which='both',bottom=False,top=False,labelbottom=False)
plt.clim([0,250])
ax.set_xticks([])
ax.set_yticks([])
def plot_MNIST_weights(weights):
"""
Visualize PCA basis vector weights for MNIST. Red = positive weights, blue =
negative weights, white = zero weight.
Args:
weights (numpy array of floats) : PCA basis vector
Returns:
Nothing.
"""
plt.figure()
fig, ax = plt.subplots()
cmap = plt.cm.get_cmap('seismic')
plt.imshow(np.real(np.reshape(weights,(28,28))),cmap=cmap)
plt.tick_params(axis='both',which='both',bottom=False,top=False,labelbottom=False)
plt.clim(-.15,.15)
plt.colorbar(ticks=[-.15,-.1,-.05,0,.05,.1,.15])
ax.set_xticks([])
ax.set_yticks([])
def add_noise(X,frac_noisy_pixels):
"""
Randomly corrupts a fraction of the pixels by setting them to random values.
Args:
X (numpy array of floats) : Data matrix
frac_noisy_pixels (scalar) : Fraction of noisy pixels
Returns:
(numpy array of floats) : Data matrix + noise
"""
X_noisy = np.reshape(X,(X.shape[0]*X.shape[1])).copy() # copy so that the original data matrix X is not modified in place
N_noise_ixs = int(X_noisy.shape[0] * frac_noisy_pixels)
noise_ixs = np.random.choice(X_noisy.shape[0],size= N_noise_ixs,replace=False)
X_noisy[noise_ixs] = np.random.uniform(0,255,noise_ixs.shape)
X_noisy = np.reshape(X_noisy,(X.shape[0],X.shape[1]))
return X_noisy
# Old helper functions from Tutorial 1-2
def change_of_basis(X,W):
"""
Projects data onto a new basis.
Args:
X (numpy array of floats) : Data matrix
each column corresponding to a different random variable
W (numpy array of floats): new orthonormal basis
columns correspond to basis vectors
Returns:
(numpy array of floats) : Data matrix expressed in new basis
"""
Y = np.matmul(X,W)
return Y
def get_sample_cov_matrix(X):
"""
Returns the sample covariance matrix of data X
Args:
X (numpy array of floats): Data matrix
each column corresponds to a different random variable
Returns:
(numpy array of floats) : Covariance matrix
"""
X = X - np.mean(X,0)
cov_matrix = 1./X.shape[0]*np.matmul(X.T,X)
return cov_matrix
def sort_evals_descending(evals,evectors):
"""
Sorts eigenvalues and eigenvectors in decreasing order. Also aligns first two
eigenvectors to be in first two quadrants (if 2D).
Args:
evals (numpy array of floats): Vector of eigenvalues
evectors (numpy array of floats): Corresponding matrix of eigenvectors
each column corresponds to a different eigenvalue
Returns:
(numpy array of floats) : Vector of eigenvalues after sorting
(numpy array of floats) : Matrix of eigenvectors after sorting
"""
index = np.flip(np.argsort(evals))
evals = evals[index]
evectors = evectors[:,index]
if evals.shape[0] == 2:
if np.arccos(np.matmul(evectors[:,0], 1./np.sqrt(2)*np.array([1,1]))) > np.pi/2.:
evectors[:,0] = -evectors[:,0]
if np.arccos(np.matmul(evectors[:,1], 1./np.sqrt(2)*np.array([-1,1]))) > np.pi/2.:
evectors[:,1] = -evectors[:,1]
return evals, evectors
def pca(X):
"""
Performs PCA on multivariate data. Eigenvalues are sorted in decreasing order.
Args:
X (numpy array of floats): Data matrix
each column corresponds to a different random variable
Returns:
(numpy array of floats) : Data projected onto the new basis
(numpy array of floats) : Vector of eigenvalues
(numpy array of floats) : Corresponding matrix of eigenvectors
"""
X = X - np.mean(X,0)
cov_matrix = get_sample_cov_matrix(X)
evals, evectors = np.linalg.eig(cov_matrix)
evals, evectors = sort_evals_descending(evals,evectors)
score = change_of_basis(X,evectors)
return score, evectors, evals
def plot_eigenvalues(evals):
"""
Plots eigenvalues.
Args:
(numpy array of floats) : Vector of eigenvalues
Returns:
Nothing.
"""
plt.figure()
plt.plot(np.arange(1,len(evals)+1),evals,'o-k')
plt.xlabel('Component')
plt.ylabel('Eigenvalue')
plt.title('Scree plot')
```
# Perform PCA on MNIST dataset.
The MNIST dataset consists of 70,000 images of individual handwritten digits. Each image is a 28x28 pixel grayscale image. For convenience, each 28x28 pixel image is often unravelled into a single 784 (=28*28) element vector, so that the whole dataset is represented as a 70,000 x 784 matrix. Each row represents a different image, and each column represents a different pixel.
Run the following cell to load the MNIST dataset and plot the first nine images.
```python
from sklearn.datasets import fetch_openml
mnist = fetch_openml(name = 'mnist_784')
X = mnist.data
plot_MNIST_sample(X)
```
The MNIST dataset has an extrinsic dimensionality of 784, much higher than the 2-dimensional examples used in the previous tutorials! To make sense of this data, we'll use dimensionality reduction. But first, we need to determine the intrinsic dimensionality $K$ of the data. One way to do this is to look for an "elbow" in the scree plot, to determine which eigenvalues are significant.
#### Exercise
In this exercise you will examine the scree plot in the MNIST dataset.
**Suggestions**
* Perform PCA on the dataset and examine the scree plot.
* When do the eigenvalues appear (by eye) to reach zero? (Hint: use `plt.xlim` to zoom into a section of the plot).
```python
help(pca)
help(plot_eigenvalues)
```
Help on function pca in module __main__:
pca(X)
Performs PCA on multivariate data. Eigenvalues are sorted in decreasing order.
Args:
X (numpy array of floats): Data matrix
each column corresponds to a different random variable
Returns:
(numpy array of floats) : Data projected onto the new basis
(numpy array of floats) : Vector of eigenvalues
(numpy array of floats) : Corresponding matrix of eigenvectors
Help on function plot_eigenvalues in module __main__:
plot_eigenvalues(evals)
Plots eigenvalues.
Args:
(numpy array of floats) : Vector of eigenvalues
Returns:
Nothing.
```python
###################################################################
## Insert your code here to:
## perform PCA
## plot the eigenvalues
###################################################################
# score, evectors, evals = ...YOUR CODE HERE to perform PCA
# plot_eigenvalues(evals)
# YOUR CODE HERE to limit the x-axis for zooming
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D5-DimensionalityReduction/solutions/W1D5_Tutorial3_Solution_f0861370.py)
*Example output:*
# Calculate the variance explained.
The scree plot suggests that most of the eigenvalues are near zero, with fewer than 100 having large values. Another common way to determine the intrinsic dimensionality is by considering the variance explained. This can be examined with a cumulative plot of the fraction of the total variance explained by the top $K$ components, i.e.:
\begin{equation}
\text{var explained} = \frac{\sum_{i=1}^K \lambda_i}{\sum_{i=1}^N \lambda_i}
\end{equation}
The intrinsic dimensionality is often quantified by the $K$ necessary to explain a large proportion of the total variance of the data (often a defined threshold, e.g., 90%).
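For reference, once the eigenvalues are available (e.g., from the `pca` helper defined above), the cumulative fraction can be computed directly; this is a minimal sketch of the same computation the exercise below asks for, so skip it if you want to work it out yourself.
```python
# Minimal sketch (assumes `evals` was computed with pca(X) above)
csum = np.cumsum(evals)
variance_explained_sketch = csum / csum[-1]
K90 = np.argmax(variance_explained_sketch >= 0.9) + 1
print(f"{K90} components explain 90% of the variance")
```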
#### Exercise
In this exercise you will plot the explained variance.
**Suggestions**
* Fill in the function below to calculate the fraction of variance explained as a function of the number of principal components. **Hint:** use `np.cumsum`.
* Plot the variance explained using `plot_variance_explained`.
* How many principal components are required to explain 90% of the variance?
* How does the intrinsic dimensionality of this dataset compare to its extrinsic dimensionality?
```python
help(plot_variance_explained)
```
Help on function plot_variance_explained in module __main__:
plot_variance_explained(variance_explained)
Plots eigenvalues.
Args:
variance_explained (numpy array of floats) : Vector of variance explained for each PC
Returns:
Nothing.
```python
def get_variance_explained(evals):
"""
Calculates variance explained from the eigenvalues.
Args:
evals (numpy array of floats) : Vector of eigenvalues
Returns:
(numpy array of floats) : Vector of variance explained
"""
###################################################################
## Insert your code here to:
## cumulatively sum the eigenvalues
## normalize by the sum of eigenvalues
#uncomment once you've filled in the function
raise NotImplementedError("Student exercise: calculate explained variance!")
###################################################################
return variance_explained
###################################################################
## Insert your code here to:
## calculate and plot the variance explained
###################################################################
# variance_explained = ...
# plot_variance_explained(variance_explained)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D5-DimensionalityReduction/solutions/W1D5_Tutorial3_Solution_7af6bcb7.py)
*Example output:*
# Reconstruct data with different numbers of PCs.
```python
#@title Video: Geometric view of data
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="A_a7_hMhjfc", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```
Video available at https://youtube.com/watch?v=A_a7_hMhjfc
Now we have seen that the top 100 or so principal components of the data can explain most of the variance. We can use this fact to perform *dimensionality reduction*, i.e., by storing the data using only the top 100 components rather than all 784 pixel values. Remarkably, we will be able to reconstruct much of the structure of the data using only the top 100 components. To see this, recall that to perform PCA we projected the data $\bf X$ onto the eigenvectors of the covariance matrix:
\begin{equation}
\bf S = X W
\end{equation}
Since $\bf W$ is an orthogonal matrix, ${\bf W}^{-1} = {\bf W}^T$. So by multiplying by ${\bf W}^T$ on each side we can rewrite this equation as
\begin{equation}
{\bf X = S W}^T.
\end{equation}
This now gives us a way to reconstruct the data matrix from the scores and loadings. To reconstruct the data from a low-dimensional approximation, we just have to truncate these matrices. Let ${\bf S}_{1:K}$ and ${\bf W}_{1:K}$ denote these matrices with only their first $K$ columns kept. Then our reconstruction is:
\begin{equation}
{\bf \hat X = S}_{1:K} ({\bf W}_{1:K})^T.
\end{equation}
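A minimal sketch of this truncated reconstruction (assuming `score` and `evectors` come from the earlier `pca(X)` call; the value of `K` is illustrative, and the exercise below asks you to wrap the same idea in a function):
```python
# Truncated reconstruction: X_hat = S_{1:K} (W_{1:K})^T + mean
K = 100
X_mean = np.mean(X, 0)
X_hat = np.real(score[:, :K] @ evectors[:, :K].T) + X_mean
```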
#### Exercise
Fill in the function below to reconstruct the data using different numbers of principal components.
**Suggestions**
* Fill in the following function to reconstruct the data based on the weights and scores. Don't forget to add the mean!
* Make sure your function works by reconstructing the data with all $K=784$ components. The two images should look identical.
```python
help(plot_MNIST_reconstruction)
```
Help on function plot_MNIST_reconstruction in module __main__:
plot_MNIST_reconstruction(X, X_reconstructed)
Plots 9 images in the MNIST dataset side-by-side with the reconstructed images.
Args:
X (numpy array of floats): Data matrix
each column corresponds to a different random variable
X_reconstructed (numpy array of floats): Data matrix
each column corresponds to a different random variable
Returns:
Nothing.
```python
def reconstruct_data(score,evectors,X_mean,K):
"""
Reconstruct the data based on the top K components.
Args:
score (numpy array of floats) : Score matrix
evectors (numpy array of floats) : Matrix of eigenvectors
X_mean (numpy array of floats) : Vector corresponding to data mean
K (scalar) : Number of components to include
Returns:
(numpy array of floats) : Matrix of reconstructed data
"""
###################################################################
## Insert your code here to:
## Reconstruct the data from the score and eigenvectors
## Don't forget to add the mean!!
#X_reconstructed = Your code here
#uncomment once you've filled in the function
raise NotImplementedError("Student exercise: finish the data reconstruction function!")
###################################################################
return X_reconstructed
K = 784
## Uncomment below to to:
## Reconstruct the data based on all components
## Plot the data and reconstruction
# X_mean = ...
# X_reconstructed = ...
# plot_MNIST_reconstruction(X ,X_reconstructed)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D5-DimensionalityReduction/solutions/W1D5_Tutorial3_Solution_0edb6db6.py)
*Example output:*
#### Exercise:
Now run the code below and experiment with the slider to reconstruct the data matrix using different numbers of principal components.
**Questions:**
* How many principal components are necessary to reconstruct the numbers (by eye)? How does this relate to the intrinsic dimensionality of the data?
* Do you see any information in the data with only a single principal component?
```python
###### MAKE SURE TO RUN THIS CELL VIA THE PLAY BUTTON TO ENABLE SLIDERS ########
import ipywidgets as widgets
def refresh(K = 100):
X_reconstructed = reconstruct_data(score,evectors,X_mean,K)
plot_MNIST_reconstruction(X ,X_reconstructed)
plt.title('Reconstructed, K={}'.format(K))
_ = widgets.interact(refresh,
K = (1, 784, 10))
```
#### Exercise:
Next, let's take a closer look at the first principal component by visualizing its corresponding weights.
**Questions**
* Use `plot_MNIST_weights` to visualize the weights of the first basis vector.
* What structure do you see? Which pixels have a strong positive weighting? Which have a strong negative weighting? What kinds of images would this basis vector differentiate?
* Try visualizing the second and third basis vectors. Do you see any structure? What about the 100th basis vector? 500th? 700th?
```python
help(plot_MNIST_weights)
```
Help on function plot_MNIST_weights in module __main__:
plot_MNIST_weights(weights)
Visualize PCA basis vector weights for MNIST. Red = positive weights, blue =
negative weights, white = zero weight.
Args:
weights (numpy array of floats) : PCA basis vector
Returns:
Nothing.
```python
###################################################################
## Insert your code here to:
## Plot the weights of the first principal component
#plot_MNIST_weights(Your code here)
###################################################################
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D5-DimensionalityReduction/solutions/W1D5_Tutorial3_Solution_d07d8802.py)
*Example output:*
# (Optional Exploration): Examine denoising using PCA.
Finally, we will test how PCA can be used to denoise data. We will add salt-and-pepper noise to the original data and see how that affects the eigenvalues. To do this, we'll use the function `add_noise`, starting with adding noise to 20% of the pixels.
Then we'll perform PCA and plot the variance explained. How many principal components are required to explain 90% of the variance? How does this compare to the original data?
```python
###################################################################
## Here we:
## Add noise to the data
## Plot noise-corrupted data
## Perform PCA on the noisy data
## Calculate and plot the variance explained
###################################################################
X_noisy = add_noise(X,.2)
score_noisy, evectors_noisy, evals_noisy = pca(X_noisy)
variance_explained_noisy = get_variance_explained(evals_noisy)
with plt.xkcd():
plot_MNIST_sample(X_noisy)
plot_variance_explained(variance_explained_noisy)
```
To denoise the data, we can simply project it onto the basis found with the original dataset (`evectors`, not `evectors_noisy`). Then, by taking the top K components of this projection, we have a guess for where the sample should lie in the K-dimensional latent space. We can then reconstruct the data as normal, using the top 50 components. You should play around with the amount of noise and K to build intuition.
```python
###################################################################
## Here we:
## Subtract the mean of the noise-corrupted data
## Project onto the original basis vectors evectors
## Reconstruct the data using the top 50 components
## Plot the result
###################################################################
X_noisy_mean = np.mean(X_noisy,0)
projX_noisy = np.matmul(X_noisy-X_noisy_mean,evectors)
X_reconstructed = reconstruct_data(projX_noisy,evectors,X_noisy_mean,50)
with plt.xkcd():
plot_MNIST_reconstruction(X_noisy,X_reconstructed)
```
| 9ebfc4e971ae1525060ba467a63777ea8e4c62ee | 382,961 | ipynb | Jupyter Notebook | tutorials/W1D5_DimensionalityReduction/student/W1D5_Tutorial3.ipynb | hyosubkim/course-content | 30370131c42fd3bf4f84c50e9c4eaf19f3193165 | ["CC-BY-4.0"] | null | null | null | tutorials/W1D5_DimensionalityReduction/student/W1D5_Tutorial3.ipynb | hyosubkim/course-content | 30370131c42fd3bf4f84c50e9c4eaf19f3193165 | ["CC-BY-4.0"] | null | null | null | tutorials/W1D5_DimensionalityReduction/student/W1D5_Tutorial3.ipynb | hyosubkim/course-content | 30370131c42fd3bf4f84c50e9c4eaf19f3193165 | ["CC-BY-4.0"] | null | null | null | 263.747245 | 90,056 | 0.914093 | true | 5,284 | Qwen/Qwen-72B | 1. YES 2. YES | 0.817574 | 0.763484 | 0.624205 | __label__eng_Latn | 0.933473 | 0.288568
# Content
* Libraries
* Introduction to Problem
* Loading Dataset
* Visualizing Raw Dataset
* Preprocessing
* Visualizing Proprocessed Dataset
* Logistic Regression with numpy
* Forward Propagation
* Backward Propagation
* Complete Propagation
* Combining All Together
* Training
* Prediction
* Evaluation
* Logistic Regression with tensorflow
* Just Forward Propagation
* Training and Evaluation
* Logistic Regression with keras
* Just Layer Description
* Training
* Evaluation
# Libraries
```python
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import seaborn as sb
import pandas as pd
from sklearn import datasets
from sklearn.model_selection import train_test_split
from utils import plot_prediction
```
# Introduction to Problem
Given a set of inputs X (the flower features), we want to assign each observation to one of two possible categories, 0 or 1, indicating which type of flower it is.
To solve this problem we use logistic regression, because it models the probability that each input belongs to a particular category.
## Loading Dataset
How many features does this dataset have? <br>
What is the label for this problem?
```python
iris = pd.read_csv('./data/Iris.csv')
iris.sample(5)
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Id</th>
<th>SepalLengthCm</th>
<th>SepalWidthCm</th>
<th>PetalLengthCm</th>
<th>PetalWidthCm</th>
<th>Species</th>
</tr>
</thead>
<tbody>
<tr>
<th>122</th>
<td>123</td>
<td>7.7</td>
<td>2.8</td>
<td>6.7</td>
<td>2.0</td>
<td>Iris-virginica</td>
</tr>
<tr>
<th>23</th>
<td>24</td>
<td>5.1</td>
<td>3.3</td>
<td>1.7</td>
<td>0.5</td>
<td>Iris-setosa</td>
</tr>
<tr>
<th>77</th>
<td>78</td>
<td>6.7</td>
<td>3.0</td>
<td>5.0</td>
<td>1.7</td>
<td>Iris-versicolor</td>
</tr>
<tr>
<th>31</th>
<td>32</td>
<td>5.4</td>
<td>3.4</td>
<td>1.5</td>
<td>0.4</td>
<td>Iris-setosa</td>
</tr>
<tr>
<th>89</th>
<td>90</td>
<td>5.5</td>
<td>2.5</td>
<td>4.0</td>
<td>1.3</td>
<td>Iris-versicolor</td>
</tr>
</tbody>
</table>
</div>
```python
print("Number of Flowers: {}".format(iris.shape[0]))
```
Number of Flowers: 150
## Visualizing Dataset
```python
sepalPlt = sb.FacetGrid(iris, hue="Species", size=6).map(plt.scatter, "SepalLengthCm", "SepalWidthCm")
plt.legend(loc='upper left')
```
## Preprocessing
In this step we will simplify the problem into a binary classification problem with just 2 features.
```python
X = iris.iloc[:, 1:3]
Y = (iris['Species'] != 'Iris-setosa') * 1
```
We divide the data into train and test sets:
```python
X_train, X_test, y_train, y_test = train_test_split(X, Y)
```
```python
plt.figure(figsize=(10, 6))
plt.scatter(X_train[y_train == 0].iloc[:, 0], X_train[y_train == 0].iloc[:, 1], color='b', label='Iris-setosa train')
plt.scatter(X_train[y_train == 1].iloc[:, 0], X_train[y_train == 1].iloc[:, 1], color='r', label='Others train')
plt.scatter(X_test[y_test == 0].iloc[:, 0], X_test[y_test == 0].iloc[:, 1], color='b', label='Iris-setosa test', marker='+', s=150)
plt.scatter(X_test[y_test == 1].iloc[:, 0], X_test[y_test == 1].iloc[:, 1], color='r', label='Others test', marker='+', s=150)
plt.xlabel('SepalLengthCm')
plt.ylabel('SepalWidthCm')
plt.legend()
```
<h1 style="text-align:center">Questions</h1>
## Logistic Regression
<a href="https://towardsdatascience.com/building-a-logistic-regression-in-python-step-by-step-becd4d56c9c8">Logistic Regression</a> is a Machine Learning classification algorithm that is used to predict the probability of a categorical dependent variable. In logistic regression, the dependent variable is a binary variable that contains data coded as 1 (yes, success, etc.) or 0 (no, failure, etc.). In other words, the logistic regression model predicts P(Y=1) as a function of X.
\begin{align}
Z = w_1x_1+w_2x_2+\dots+w_nx_n + b
\end{align}
## Forward Propagation
First we implement the linear combination:
$$W=\begin{bmatrix}w_1, & w_2, & \dots & w_n \end{bmatrix}$$
$$X=\begin{bmatrix}x_1, & x_2, & \dots & x_n \end{bmatrix}$$
\begin{align}
Z = W^TX + b
\end{align}
```python
def linear_mult(X, w, b):
return np.dot(w.T, X) + b
```
Now we implement a function that initializes W and b for us:
```python
def initialize_with_zeros(dim):
"""
This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0.
Argument:
dim -- size of the w vector we want (or number of parameters in this case)
Returns:
w -- initialized vector of shape (dim, 1)
b -- initialized scalar (corresponds to the bias)
"""
w = np.zeros((dim,1))
b = 0
assert(w.shape == (dim, 1))
assert(isinstance(b, float) or isinstance(b, int))
return w, b
```
Next we implement the sigmoid function to map the calculated value to a probability:
\begin{align}
A = \sigma(Z) = \frac{1}{1 + e^{-Z}}
\end{align}
```python
def sigmoid(z):
return 1 / (1 + np.exp(-z))
```
Now we implement the cost function, which represents the difference between our predictions and the actual labels (y is the actual label and a is our predicted label):
\begin{align}
J = -\frac{1}{m}\sum_{i=1}^{m}y^{(i)}\log(A^{(i)})+(1-y^{(i)})\log(1-A^{(i)})
\end{align}
```python
def cost_function(y, a):
return -np.mean(y*np.log(a) + (1-y)*np.log(1-a))
```
Now we implement the whole forward propagation, which calculates the cost and the predicted value for each data point:
```python
def forward_propagate(w, b, X, Y):
m = X.shape[1]
Z = linear_mult(X, w, b)
A = sigmoid(Z)
cost = cost_function(Y, A)
cost = np.squeeze(cost)
assert(cost.shape == ())
back_require = {
'A': A
}
return back_require, cost
```
## Backward Propagation
Now we calculate the derivatives with respect to W and b as follows:
$$ \frac{\partial J}{\partial w} = \frac{1}{m}X(A-Y)^T\tag{7}$$
$$ \frac{\partial J}{\partial b} = \frac{1}{m} \sum_{i=1}^m (a^{(i)}-y^{(i)})\tag{8}$$
```python
def backward_propagate(w, b, X, Y, back_require):
m = X.shape[1]
A = back_require['A']
dw = (1/m) * np.dot(X,(A-Y).T)
db = (1/m) * np.sum(A - Y)
assert(dw.shape == w.shape)
assert(db.dtype == float)
grads = {"dw": dw,
"db": db}
return grads
```
# Complete Propagation
```python
def propagate(w, b, X, Y):
"""
Implement the cost function and its gradient for the propagation explained above
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples)
Return:
cost -- negative log-likelihood cost for logistic regression
dw -- gradient of the loss with respect to w, thus same shape as w
db -- gradient of the loss with respect to b, thus same shape as b
Tips:
- Write your code step by step for the propagation. np.log(), np.dot()
"""
# FORWARD PROPAGATION
back_require, cost = forward_propagate(w, b, X, Y)
# BACKWARD PROPAGATION
grads = backward_propagate(w, b, X, Y, back_require)
return grads, cost
```
# Combining All Together
Now we combine all of our implemented functions to create an optimizer, which finds a linear decision boundary dividing the zero-labeled data points from the one-labeled data points by optimizing W and b as follows:
$$W=W−\alpha{dw}$$
$$b=b−\alpha{db}$$
$\alpha$ is the learning rate
```python
# GRADED FUNCTION: optimize
def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False):
"""
This function optimizes w and b by running a gradient descent algorithm
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of shape (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples)
num_iterations -- number of iterations of the optimization loop
learning_rate -- learning rate of the gradient descent update rule
print_cost -- True to print the loss every 100 steps
Returns:
params -- dictionary containing the weights w and bias b
grads -- dictionary containing the gradients of the weights and bias with respect to the cost function
costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve.
Tips:
You basically need to write down two steps and iterate through them:
1) Calculate the cost and the gradient for the current parameters. Use propagate().
2) Update the parameters using gradient descent rule for w and b.
"""
costs = []
for i in range(num_iterations):
grads, cost = propagate(w,b,X,Y)
dw = grads["dw"]
db = grads["db"]
w -= learning_rate*dw
b -= learning_rate*db
# Record the costs
costs.append(cost)
# Print the cost every 100 training iterations
if print_cost and i % 100 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
params = {"w": w,
"b": b}
grads = {"dw": dw,
"db": db}
return params, grads, costs
```
## Training
```python
%%time
X_train_t, y_train_t = np.array(X_train.T), np.array(y_train.T)
w, b = initialize_with_zeros(2)
params, grads, costs = optimize(w, b, X_train_t, y_train_t, num_iterations= 800, learning_rate = 0.1, print_cost = False)
```
Wall time: 50.9 ms
```python
plt.plot(range(len(costs)),costs)
plt.xlabel('Iterations')
plt.ylabel('Cost(loss) value')
```
# Prediction
```python
def predict(w, b, X):
'''
Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b)
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Returns:
Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X
'''
m = X.shape[1]
Y_prediction = np.zeros((1,m))
Z = linear_mult(X, w, b)
A = sigmoid(Z)
for i in range(m):
Y_prediction[0][i] = 1 if A[0][i] > .5 else 0
assert(Y_prediction.shape == (1, m))
return Y_prediction
```
# Evaluation
```python
preds = predict(params['w'], params['b'], X_train_t)
print('Accuracy on training set: %{}'.format((preds[0] == y_train).mean()*100))
```
Accuracy on training set: %99.10714285714286
```python
preds = predict(params['w'], params['b'], X_train_t)
print('Accuracy on training set: %{}'.format((preds[0] == y_train).mean()*100))
preds = predict(params['w'], params['b'], np.array(X_test.T))
print('Accuracy on test set: %{}'.format((preds[0] == y_test).mean()*100))
```
Accuracy on training set: %99.10714285714286
Accuracy on test set: %100.0
```python
plot_prediction(X_train, y_train, X_test, y_test, predict, params)
```
# Logistic Regression with tensorflow
## Just Forward Propagation
```python
y_test = np.array(y_test).astype(np.float32).reshape(-1,1)
y_train = np.array(y_train).astype(np.float32).reshape(-1,1)
graph = tf.Graph()
with graph.as_default():
with tf.device("/cpu:0"):
# Input data.
# Load the training, validation and test data into constants that are
# attached to the graph.
X_data = tf.placeholder(tf.float32, shape=(None, 2))
y_data = tf.placeholder(tf.float32 , shape=(None, 1))
# Variables.
# These are the parameters that we are going to be training. The weight
# matrix will be initialized using random values following a (truncated)
# normal distribution. The biases get initialized to zero.
weights = tf.Variable(tf.truncated_normal([2, 1]))
biases = tf.Variable(tf.zeros([1]))
# Training computation.
# We multiply the inputs with the weight matrix, and add biases. We compute
# the softmax and cross-entropy (it's one operation in TensorFlow, because
# it's very common, and it can be optimized). We take the average of this
# cross-entropy across all training examples: that's our loss.
logits = tf.matmul(X_data, weights) + biases
loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=y_data, logits=logits))
# Optimizer.
# We are going to find the minimum of this loss using gradient descent.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
# These are not part of training, but merely here so that we can report
# accuracy figures as we train.
prediction = tf.nn.sigmoid(logits)
```
## Training and Evaluation
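The cells below (and the Keras evaluation later) call an `accuracy` helper that is not defined anywhere in this notebook and is not imported from `utils`; a minimal stand-in consistent with how it is called — an assumption about the intended behaviour — would be:
```python
# Minimal stand-in for the missing `accuracy` helper (assumed behaviour):
# threshold the predicted probabilities at 0.5 and return the percentage of matches.
def accuracy(predictions, labels):
    preds = (np.asarray(predictions) > 0.5).astype(np.float32).reshape(-1, 1)
    labels = np.asarray(labels).astype(np.float32).reshape(-1, 1)
    return 100.0 * np.mean(preds == labels)
```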
```python
%%time
num_steps = 800
with tf.Session(graph=graph) as session:
# This is a one-time operation which ensures the parameters get initialized as
# we described in the graph: random weights for the matrix, zeros for the
# biases.
tf.global_variables_initializer().run()
print('Initialized')
for step in range(num_steps):
# Run the computations. We tell .run() that we want to run the optimizer,
# and get the loss value and the training predictions returned as numpy
# arrays.
d, l, predictions = session.run([optimizer, loss, prediction],
feed_dict={X_data: X_train.astype(np.float32), y_data: y_train})
if (step % 100 == 0):
print('Loss at step %d: %f' % (step, l))
print('Training accuracy: %.1f%%' % accuracy(predictions, y_train))
# Calling .eval() on valid_prediction is basically like calling run(), but
# just to get that one numpy array. Note that it recomputes all its graph
# dependencies.
print('Test accuracy: %.1f%%' % accuracy(
prediction.eval(feed_dict={X_data: X_test.astype(np.float32)}), y_test))
plot_prediction(X_train, y_train, X_test, y_test, prediction, params, fm_type='tensorflow') #TODO
```
# Keras
## Just Layer Description
```python
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(1, activation=tf.nn.sigmoid, input_shape=(2,))
])
model.compile(optimizer='sgd',
loss='binary_crossentropy',
metrics=['accuracy'])
```
## Training
```python
%%time
history = model.fit(X_train, y_train, epochs=800, verbose=0)
```
Wall time: 7.25 s
```python
plt.plot(range(len(history.history['acc'])),history.history['acc'])
plt.xlabel('Iterations')
plt.ylabel('Accuracy value')
```
## Evaluation
```python
print('Train accuracy: %.1f%%' % accuracy(
model.predict(X_train.astype(np.float32)), y_train))
print('Test accuracy: %.1f%%' % accuracy(
model.predict(X_test.astype(np.float32)), y_test))
```
Train accuracy: 99.1%
Test accuracy: 100.0%
```python
plot_prediction(X_train, y_train, X_test, y_test, model.predict, params, fm_type='keras')
```
# Further Reading
* Tensorflow Playground: https://playground.tensorflow.org/
* First Week of Neural Networks Course: https://www.coursera.org/learn/neural-networks-deep-learning
| f1d43d4ecd64f7bccaaf866ad242283b528d4ed7 | 176,450 | ipynb | Jupyter Notebook | Second Meetup/Binary Classification.ipynb | school-of-ai-rasht-chapter/Meetup-Materials | dd5bfb4b163ab07b83beb7038f062f18ca530f45 | ["MIT"] | 9 | 2019-04-03T13:01:17.000Z | 2019-09-09T09:01:00.000Z | Second Meetup/Binary Classification.ipynb | rasht-school-of-ai/Meetup-Materials | ec4074f24f837111cebb604bb7f27c4ec6045784 | ["MIT"] | null | null | null | Second Meetup/Binary Classification.ipynb | rasht-school-of-ai/Meetup-Materials | ec4074f24f837111cebb604bb7f27c4ec6045784 | ["MIT"] | 2 | 2019-07-25T08:36:57.000Z | 2019-09-19T07:24:55.000Z | 151.070205 | 25,848 | 0.88331 | true | 4,430 | Qwen/Qwen-72B | 1. YES 2. YES | 0.861538 | 0.824462 | 0.710305 | __label__eng_Latn | 0.890863 | 0.488609
```python
import numpy as np
import sympy as sm
import scipy as sp
from scipy import optimize
from scipy import interpolate
import matplotlib.pyplot as plt
import ipywidgets as widgets
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline
```
# 1. Human capital accumulation
Consider a worker living in **two periods**, $t \in \{1,2\}$.
In each period she decides whether to **work ($l_t = 1$) or not ($l_t = 0$)**.
She can *not* borrow or save and thus **consumes all of her income** in each period.
If she **works** her **consumption** becomes:
$$c_t = w h_t l_t\,\,\text{if}\,\,l_t=1$$
where $w$ is **the wage rate** and $h_t$ is her **human capital**.
If she does **not work** her consumption becomes:
$$c_t = b\,\,\text{if}\,\,l_t=0$$
where $b$ is the **unemployment benefits**.
Her **utility of consumption** is:
$$ \frac{c_t^{1-\rho}}{1-\rho} $$
Her **disutility of working** is:
$$ \gamma l_t $$
From period 1 to period 2, she **accumulates human capital** according to:
$$ h_2 = h_1 + l_1 +
\begin{cases}
0 & \text{with prob. }0.5 \\
\Delta & \text{with prob. }0.5
\end{cases} \\
$$
where $\Delta$ is a **stochastic experience gain**.
In the **second period** the worker thus solves:
$$
\begin{eqnarray*}
v_{2}(h_{2}) & = &\max_{l_{2}} \frac{c_2^{1-\rho}}{1-\rho} - \gamma l_2
\\ & \text{s.t.} & \\
c_{2}& = & \begin{cases}
w h_2 &
\text{if }l_2 = 1 \\
b & \text{if }l_2 = 0
\end{cases} \\
l_{2}& \in &\{0,1\}
\end{eqnarray*}
$$
In the **first period** the worker thus solves:
$$
\begin{eqnarray*}
v_{1}(h_{1}) &=& \max_{l_{1}} \frac{c_1^{1-\rho}}{1-\rho} - \gamma l_1 + \beta\mathbb{E}_{1}\left[v_2(h_2)\right]
\\ & \text{s.t.} & \\
c_{1}& = & \begin{cases}
w h_1 &
\text{if }l_1 = 1 \\
b & \text{if }l_1 = 0
\end{cases} \\
h_2 &=& h_1 + l_1 + \begin{cases}
0 & \text{with prob. }0.5\\
\Delta & \text{with prob. }0.5
\end{cases}\\
l_{1} &\in& \{0,1\}\\
\end{eqnarray*}
$$
where $\beta$ is the **discount factor** and $\mathbb{E}_{1}\left[v_2(h_2)\right]$ is the **expected value of living in period two**.
The **parameters** of the model are:
```python
rho = 2
beta = 0.96
gamma = 0.1
w = 2
b = 1
Delta = 0.1
```
The **relevant levels of human capital** are:
```python
h_vec = np.linspace(0.1, 1.5, 100)
```
**Question 1:** Solve the model in period 2 and illustrate the solution (including labor supply as a function of human capital).
**Answer to Question 1:**
```python
# The basic functions are:
# Utility
def utility(c, rho):
return c**(1-rho)/(1-rho)
# Consumption
def c(w, h, l, b):
return w*h*l + b*(1-l)
# Disutility
def disutility(l, gamma):
return gamma*l
# Total utility in period 2
def v2(w, h2, l2, b, rho, gamma):
return utility(c(w, h2, l2, b), rho) - disutility(l2, gamma)
# Total utility in period 1
def v1(h1, l1, v2_interp, Delta, w, b, rho, gamma, beta):
# From period 1 to period 2 the consumer can accumulate human capital; the expected continuation value is built from the two cases below:
# a. If the stochastic experience gain is not realized, utility in period 2 becomes
N_h2 = h1 + l1 + 0
N_v2 = v2_interp([N_h2])[0]
# b. If the stochastic experience gain Delta is realized, utility in period 2 becomes
Y_h2 = h1 + l1 + Delta
Y_v2 = v2_interp([Y_h2])[0]
# c. Expected period-2 value, given the 0.5/0.5 probabilities of the experience gain
v2 = 0.5*N_v2 + 0.5*Y_v2
# d. total net utility
return utility(c(w, h1, l1, b), rho) - disutility(l1, gamma) + beta*v2
```
**Note:** in period 2 the consumer decides whether to work by comparing the two utilities. If she is better off not working, she chooses not to work $(l_{2}=0)$; otherwise she chooses to work $(l_{2}=1)$.
```python
# The solution function for period 2 is:
def solve_period_2(rho, gamma, Delta):
# Create the vectors for period 2
l2_vec = np.empty(100)
v2_vec = np.empty(100)
# Solve for each h2 in grid
for i, h2 in enumerate(h_vec):
# Choose either l2 = 0 or l2 = 1, by comparing the utility values from these two options
if v2(w, h2, 1, b, rho, gamma) <= v2(w, h2, 0, b, rho, gamma):
l2_vec[i] = 0
else:
l2_vec[i] = 1
# Save the estimated values of v2, based on the choice of working or not
v2_vec[i] = v2(w, h2, l2_vec[i], b, rho, gamma)
return l2_vec, v2_vec
```
```python
# Solve for period 2
l2_vec, v2_vec = solve_period_2(rho, gamma, Delta)
# Figure
fig = plt.figure(figsize = (8,10))
ax = fig.add_subplot(2,1,1)
ax1 = fig.add_subplot(2,1,2)
ax.plot(h_vec, l2_vec)
ax1.plot(h_vec, v2_vec)
# Labels
ax.set_xlabel('Human Capital')
ax.set_ylabel('Labor Supply')
ax.set_title('Labor Supply and Human Capital- Period 2')
ax1.set_xlabel('Human Capital')
ax1.set_ylabel('Utility')
ax1.set_title('Utility and Human Capital- Period 2')
plt.tight_layout()
```
**Conclusion:** from the above figure, we can see that the individual will choose not to work in period 2 whenever the accumulated human capital ($h_2$) is approximately below 0.6. If the accumulated human capital is above that level, she gets more utility by working in period 2 than by not working.
**Question 2:** Solve the model in period 1 and illustrate the solution (including labor supply as a function of human capital).
**Answer to Question 2**
```python
# The solution function for period 1 is:
def solve_period_1(rho, gamma, beta, Delta, v1, v2_interp):
# Vectors
l1_vec = np.empty(100)
v1_vec = np.empty(100)
# Solve for each h1
for i, h1 in enumerate(h_vec):
# The individual decides whether to work by comparing her utilities: if she is better off not working, we set
# l1 = 0, otherwise l1 = 1
if v1(h1, 1, v2_interp, Delta, w, b, rho, gamma, beta) <= v1(h1, 0, v2_interp, Delta, w, b, rho, gamma, beta):
l1_vec[i] = 0
else:
l1_vec[i] = 1
v1_vec[i] = v1(h1, l1_vec[i], v2_interp, Delta, w, b, rho, gamma, beta)
return l1_vec, v1_vec
# Construct interpolator
v2_interp = interpolate.RegularGridInterpolator((h_vec,), v2_vec, bounds_error=False, fill_value=None)
```
```python
# Solve for period 1
l1_vec, v1_vec = solve_period_1(rho, gamma, beta, Delta, v1, v2_interp)
# Figure
fig = plt.figure(figsize = (8,10))
ax = fig.add_subplot(2,1,1)
ax1 = fig.add_subplot(2,1,2)
ax.plot(h_vec,l1_vec)
ax1.plot(h_vec, v1_vec)
# Labels
ax.set_xlabel('Human Capital')
ax.set_ylabel('Labor Supply')
ax.set_title('Labor Supply and Human Capital - Period 1')
ax1.set_xlabel('Human Capital')
ax1.set_ylabel('Utility')
ax1.set_title('Utility and Human Capital- Period 1')
plt.tight_layout()
```
**Conclusion:** from this figure we can conclude that the individual will choose not to work whenever her human capital is approximately below 0.4. If the accumulated human capital is larger than that, the individual is better off working. The human capital level that makes the individual work is lower in period 1 than in period 2, because in period 1 the individual recognizes the potential experience gain that can be carried over to period 2.
**Question 3:** Will the worker never work if her potential wage income is lower than the unemployment benefits she can get? Explain and illustrate why or why not.
**Answer to Question 3:**
The straightforward answer to this question is no. Even if the wage income is lower than the unemployment benefits, there are still high values of human capital for which the individual is better off working. To verify this, we can set the level of unemployment benefits "b" above the wage level "w" and re-run the code above.
For example, setting the unemployment benefits equal to 2.2 (with w = 2), we can see that above a certain level of human capital the individual still chooses to work in both periods. However, these levels of $h$ are much higher than in the situation where b < w.
```python
b = 2.2
```
```python
# Solve for period 2 and period 1
# We did not create new vectors, just use the old ones
l2_vec, v2_vec = solve_period_2(rho, gamma, Delta)
l1_vec, v1_vec = solve_period_1(rho, gamma, beta, Delta, v1, v2_interp)
# Figure
fig = plt.figure(figsize = (12,8))
ax = fig.add_subplot(2,2,1)
ax1 = fig.add_subplot(2,2,2)
ax2 = fig.add_subplot(2,2,3)
ax3 = fig.add_subplot(2,2,4)
# Plots
ax.plot(h_vec,l2_vec)
ax1.plot(h_vec, l1_vec)
ax2.plot(h_vec, v2_vec)
ax3.plot(h_vec, v1_vec)
# Labels
ax.set_xlabel('Human Capital')
ax.set_ylabel('Labor Supply')
ax.set_title('Labor Supply and Human Capital - Period 2')
ax1.set_xlabel('Human Capital')
ax1.set_ylabel('Labor Supply')
ax1.set_title('Labor Supply and Human Capital - Period 1')
ax2.set_xlabel('Human Capital')
ax2.set_ylabel('Utility')
ax2.set_title('Utility and Human Capital- Period 2')
ax3.set_xlabel('Human Capital')
ax3.set_ylabel('Utility')
ax3.set_title('Utility and Human Capital- Period 1')
plt.tight_layout()
```
# 2. AS-AD model
Consider the following **AS-AD model**. The **goods market equilibrium** is given by
$$ y_{t} = -\alpha r_{t} + v_{t} $$
where $y_{t}$ is the **output gap**, $r_{t}$ is the **ex ante real interest** and $v_{t}$ is a **demand disturbance**.
The central bank's **Taylor rule** is
$$ i_{t} = \pi_{t+1}^{e} + h \pi_{t} + b y_{t}$$
where $i_{t}$ is the **nominal interest rate**, $\pi_{t}$ is the **inflation gap**, and $\pi_{t+1}^{e}$ is the **expected inflation gap**.
The **ex ante real interest rate** is given by
$$ r_{t} = i_{t} - \pi_{t+1}^{e} $$
Together, the above implies that the **AD-curve** is
$$ \pi_{t} = \frac{1}{h\alpha}\left[v_{t} - (1+b\alpha)y_{t}\right]$$
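As a quick check of this claim (a sketch; the symbols are defined ad hoc and independently of the ones used in the answers below), substituting the Taylor rule into the definition of the ex ante real rate gives $r_t = h\pi_t + by_t$, and solving the goods-market equilibrium for $\pi_t$ reproduces the AD-curve:
```python
import sympy as sm
y, v, pi, alpha, h, b = sm.symbols('y_t v_t pi_t alpha h b')
r = h*pi + b*y                                 # r_t = i_t - pi^e_{t+1} under the Taylor rule
AD_check = sm.solve(sm.Eq(y, -alpha*r + v), pi)[0]
print(sm.simplify(AD_check - (v - (1 + b*alpha)*y)/(h*alpha)))   # -> 0
```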
Further, assume that the **short-run supply curve (SRAS)** is given by
$$ \pi_{t} = \pi_{t}^{e} + \gamma y_{t} + s_{t}$$
where $s_t$ is a **supply disturbance**.
**Inflation expectations are adaptive** and given by
$$ \pi_{t}^{e} = \phi\pi_{t-1}^{e} + (1-\phi)\pi_{t-1}$$
Together, this implies that the **SRAS-curve** can also be written as
$$ \pi_{t} = \pi_{t-1} + \gamma y_{t} - \phi\gamma y_{t-1} + s_{t} - \phi s_{t-1} $$
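The same kind of check works for this expression (again a sketch with ad hoc symbols): lagging the SRAS gives $\pi^e_{t-1} = \pi_{t-1} - \gamma y_{t-1} - s_{t-1}$, which substituted into the adaptive-expectations rule and then into the SRAS yields the stated equation:
```python
import sympy as sm
pi1, y, y1, s, s1, gamma, phi = sm.symbols('pi_{t-1} y_t y_{t-1} s_t s_{t-1} gamma phi')
pie_t = phi*(pi1 - gamma*y1 - s1) + (1 - phi)*pi1      # adaptive expectations with pi^e_{t-1} substituted out
SRAS_check = pie_t + gamma*y + s                        # pi_t = pi^e_t + gamma*y_t + s_t
target = pi1 + gamma*y - phi*gamma*y1 + s - phi*s1
print(sm.simplify(SRAS_check - target))                 # -> 0
```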
The **parameters** of the model are:
```python
par = {}
par['alpha'] = 5.76
par['h'] = 0.5
par['b'] = 0.5
par['phi'] = 0
par['gamma'] = 0.075
```
**Question 1:** Use the ``sympy`` module to solve for the equilibrium values of output, $y_t$, and inflation, $\pi_t$, (where AD = SRAS) given the parameters ($\alpha$, $h$, $b$, $\alpha$, $\gamma$) and $y_{t-1}$ , $\pi_{t-1}$, $v_t$, $s_t$, and $s_{t-1}$.
**Answer to question 1:**
```python
# Activating pretty printing
sm.init_printing(use_unicode=True)
```
```python
# Define all the needed variables for the symbolic solution
v = sm.symbols('v_t')
y_t = sm.symbols('y_t')
pi_t = sm.symbols('pi_t')
h = sm.symbols('h')
alpha = sm.symbols('alpha')
b = sm.symbols('b')
pi_t_1 = sm.symbols('\pi_{t-1}')
y_t_1 = sm.symbols('y_{t-1}')
s_t = sm.symbols('s_t')
s_t_1 = sm.symbols('s_{t-1}')
gamma = sm.symbols('gamma')
phi = sm.symbols('\phi')
# Check if they are printed properly
h, alpha, v, b, y_t, pi_t, pi_t_1, gamma, phi, y_t_1, s_t, s_t_1
```
```python
AD = sm.Eq(pi_t, (1/(h*alpha))*(v - (1+b*alpha)*y_t))
SRAS = sm.Eq(pi_t, (pi_t_1 + gamma*y_t - phi*gamma*y_t_1 + s_t - phi*s_t_1))
AD, SRAS
```
**Solving for the equilibrium value of output ($y_{t}$) in three steps:**
1. **Step 1:** Solve AD wrt $\pi_{t}$
2. **Step 2:** Substitute into the SRAS equation
3. **Step 3:** Solve wrt $y_{t}$
```python
AD_1 = sm.solve(AD, pi_t)
AD_1
```
```python
AD_2 = SRAS.subs(pi_t, AD_1[0])
AD_2
```
```python
Output = sm.solve(AD_2, y_t)
Output
```
**Solving for the equilibrium value of inflation ($\pi_{t}$) in two steps**
1. **Step 1:** Substitute $y_{t}$ into SRAS
2. **Step 2:** Solve wrt $\pi_{t}$
```python
AS_1 = SRAS.subs(y_t, Output[0])
AS_1
```
```python
Inflation = sm.solve(AS_1, pi_t)
Inflation
```
```python
# Deactivating pretty printing
sm.init_printing(use_unicode=False)
```
**Question 2:** Find and illustrate the equilibrium when $y_{t-1} = \pi_{t-1} = v_t = s_t = s_{t-1} = 0$. Illustrate how the equilibrium changes when instead $v_t = 0.1$.
**Answer to Question 2:**
```python
# The two calculated equations are lambdified to be used
sol_output = sm.lambdify((y_t_1, s_t_1, pi_t_1, s_t, v, phi, alpha, gamma, h, b), Output[0])
sol_inflation = sm.lambdify((y_t_1, s_t_1, pi_t_1, s_t, v, phi, alpha, gamma, h, b), Inflation[0])
# Set the parameters equal to their values
def _sol_output(y_t_1, s_t_1, pi_t_1, s_t, v, phi=par['phi'], alpha=par['alpha'], gamma=par['gamma'], h=par['h'], b=par['b']):
return sol_output(y_t_1, s_t_1, pi_t_1, s_t, v, phi, alpha, gamma, h, b)
def _sol_inflation(y_t_1, s_t_1, pi_t_1, s_t, v, phi=par['phi'], alpha=par['alpha'], gamma=par['gamma'], h=par['h'], b=par['b']):
return sol_inflation(y_t_1, s_t_1, pi_t_1, s_t, v, phi, alpha, gamma, h, b)
# The variables' values are inserted into the functions
A = _sol_output(y_t_1=0, s_t_1=0, pi_t_1=0, s_t=0, v=0)
B = _sol_output(y_t_1=0, s_t_1=0, pi_t_1=0, s_t=0, v=0.1)
C = _sol_inflation(y_t_1=0, s_t_1=0, pi_t_1=0, s_t=0, v=0)
D = _sol_inflation(y_t_1=0, s_t_1=0, pi_t_1=0, s_t=0, v=0.1)
print('The values of y_t and pi_t when all the variables are equal to zero:')
print(f'y_t = {A}')
print(f'pi_t = {C}')
print('The values of y_t and pi_t when all the variables are equal to zero, except v = 0.1')
print(f'y_t = {B}')
print(f'pi_t = {D}')
```
The values of y_t and pi_t when all the variables are equal to zero:
y_t = 0.0
pi_t = 0.0
The values of y_t and pi_t when all the variables are equal to zero, except v = 0.1
y_t = 0.0244140625
pi_t = 0.0018310546875
**Conclusion:** when all of these variables are equal to zero ($y_{t-1} = \pi_{t-1} = v_t = s_t = s_{t-1} = 0$), the equilibrium is $(y_t,\pi_t) = (0,0)$. On the other hand, when the economy experiences a positive demand disturbance ($v_t = 0.1$ with the rest of the variables remaining equal to zero), the equilibrium becomes $(y_t,\pi_t) = (0.0244140625, 0.0018310546875)$.
**Note:** to plot the AD-SRAS we need the equations to express the relationship between $\pi_t$ and $y_t$; the lambdified equations don't express directly that relationship, so we will define the original AD and SRAS functions and then plot them.
```python
# Define the AD and SRAS functions to plot them
def AD(h, alpha, v, b, y_t):
return (1/(h*alpha))*(v - (1+b*alpha)*y_t)
def SRAS(pi_t_1, gamma, y_t, phi, y_t_1, s_t, s_t_1):
return (pi_t_1 + gamma*y_t - phi*gamma*y_t_1 + s_t - phi*s_t_1)
```
```python
fig = plt.figure(figsize = (12,8))
ax = fig.add_subplot(1,1,1)
y_lin = np.linspace(-0.3, 0.3, 100)
AD_0 = AD(h=par['h'], alpha=par['alpha'], v=0, b=par['b'], y_t=y_lin)
AD_1 = AD(h=par['h'], alpha=par['alpha'], v=0.1, b=par['b'],y_t=y_lin)
SRAS_total = SRAS(pi_t_1=0, gamma=par['gamma'], y_t=y_lin, phi=par['phi'], y_t_1=0, s_t=0, s_t_1=0)
plt.plot(y_lin, AD_0, label='AD 0 (with no disturbance)')
plt.plot(y_lin, AD_1, label='AD 1 (with demand disturbance, v=0.1)')
plt.plot(y_lin, SRAS_total, label='SRAS')
plt.plot(A, C, marker='.', color='black', label='Equilibrium before the demand disturbance')
plt.plot(B, D, marker='o', color='black', label='Equilibrium after the demand disturbance')
plt.grid(True)
plt.title('AD - SRAS')
plt.xlabel('$y_t$')
plt.ylabel('$\pi_t$')
plt.legend(loc='upper right')
plt.show()
```
**Persistent disturbances:** Now, additionally, assume that both the demand and the supply disturbances are AR(1) processes
$$ v_{t} = \delta v_{t-1} + x_{t} $$
$$ s_{t} = \omega s_{t-1} + c_{t} $$
where $x_{t}$ is a **demand shock**, and $c_t$ is a **supply shock**. The **autoregressive parameters** are:
```python
par['delta'] = 0.80
par['omega'] = 0.15
```
**Question 3:** Starting from $y_{-1} = \pi_{-1} = s_{-1} = 0$, how does the economy evolve for $x_0 = 0.1$, $x_t = 0, \forall t > 0$ and $c_t = 0, \forall t \geq 0$?
**Answer to Question 3:**
First we define the functions for the two disturbance processes, which will be used to fill the vectors of $v_{t}$ and $s_{t}$, and then simulate the model to investigate the evolution of the two main variables of the model, $(\pi_{t}, y_{t})$.
**Note:** don't forget to run the above cell with the parameter values
```python
def v_t(v_t_1, x_t, delta=par['delta']):
return delta*v_t_1 + x_t
def s_t(s_t_1, c_t, omega=par['omega']):
return omega*s_t_1 + c_t
```
```python
# Create a random seed
seed = 2019
np.random.seed(seed)
# Define four periods to check which one will present the evolution of the economy better
#T1 = 25
#T2 = 50
#T3 = 100
T4 = 125
# Creating the vectors that will be needed for the simulation
y_vec = [0]
pi_vec = [0]
v_vec = [0]
x_vec = np.zeros(T4) # all the demand shocks are set equal to 0
x_vec[1] = 0.1 # the second element of the demand shocks list is set to 0.1, because demand disturbance function uses v_{t-1}
s_vec = [0]
c_vec = np.zeros(T4) # all the supply shocks are 0
```
```python
# Creating a for loop to fill in the vectors
for t in range (1, T4):
v_vec.append(v_t (v_vec [t-1], x_vec[t]))
s_vec.append(s_t (s_vec [t-1], c_vec[t]))
y_vec.append(sol_output (y_vec[t-1], s_vec[t-1], pi_vec[t-1], s_vec[t], v_vec[t], par['phi'], par['alpha'], par['gamma'], par['h'], par['b']))
pi_vec.append(sol_inflation (y_vec[t-1], s_vec[t-1], pi_vec[t-1], s_vec[t], v_vec[t], par['phi'], par['alpha'], par['gamma'], par['h'], par['b']))
# Checking the created vectors (uncomment next row to check the values from any of the vectors)
#y_vec, pi_vec, v_vec, s_vec
```
```python
periods = np.linspace(0, T4, T4)
fig = plt.figure(figsize = (10,8))
ax = fig.add_subplot(1,1,1)
ax.plot(periods, pi_vec, label='$\pi_t$')
ax.plot(periods, y_vec, label='$y_t$')
plt.grid(True)
plt.ylim(-0.005, 0.025)
ax.set_title('Evolution of output and inflation')
plt.xlabel('$Periods$')
plt.ylabel('$y_t$ and $\pi_t$')
plt.legend(loc='upper right')
```
**Conclusion:** both variables of interest ($y_t$, $\pi_t$) are affected by the demand shock ($x_t=0.1$) over the interval from period zero up to the point of convergence. We first tried 25, 50 and 100 periods to see how long the economy needs to converge back to its equilibrium ($y_t=\pi_t=0$); the graph shows that at least 100 periods are required, so we chose 125 periods. The plot also shows that the demand shock has its largest effect on both variables during the first periods (roughly the first 20), even though the economy needs more than 100 periods to converge back to its equilibrium.
**Stochastic shocks:** Now, additionally, assume that $x_t$ and $c_t$ are stochastic and normally distributed
$$ x_{t}\sim\mathcal{N}(0,\sigma_{x}^{2}) $$
$$ c_{t}\sim\mathcal{N}(0,\sigma_{c}^{2}) $$
The **standard deviations of the shocks** are:
```python
par['sigma_x'] = 3.492
par['sigma_c'] = 0.2
```
**Question 4:** Simulate the AS-AD model for 1,000 periods. Calculate the following five statistics:
1. Variance of $y_t$, $var(y_t)$
2. Variance of $\pi_t$, $var(\pi_t)$
3. Correlation between $y_t$ and $\pi_t$, $corr(y_t,\pi_t)$
4. Auto-correlation between $y_t$ and $y_{t-1}$, $corr(y_t,y_{t-1})$
5. Auto-correlation between $\pi_t$ and $\pi_{t-1}$, $corr(\pi_t,\pi_{t-1})$
**Answer to Question 4:**
**Note:** now we know that the demand and supply shocks are normally distributed (and that's how they are going to be defined), so we are going to follow the same methodology as before to simulate the AS-AD model for 1000 periods.
```python
# Create a random seed
seed = 2020
np.random.seed(seed)
# Define the periods of the model simulation
T5 = 1000
# Creating the vectors that will be needed for the simulation
y_vec_1 = [0]
pi_vec_1 = [0]
v_vec_1 = [0]
x_vec_1 = np.random.normal(loc=0, scale=par['sigma_x'], size=T5) # all the demand shocks are normally distributed
s_vec_1 = [0]
c_vec_1 = np.random.normal(loc=0, scale=par['sigma_c'], size=T5) # all the supply shocks are normally distributed
```
```python
# Creating a for loop to fill in the vectors
for t in range (1, T5):
v_vec_1.append(v_t (v_vec_1[t-1], x_vec_1[t]))
s_vec_1.append(s_t (s_vec_1[t-1], c_vec_1[t]))
y_vec_1.append(sol_output (y_vec_1[t-1], s_vec_1[t-1], pi_vec_1[t-1], s_vec_1[t], v_vec_1[t], par['phi'], par['alpha'], par['gamma'], par['h'], par['b']))
pi_vec_1.append(sol_inflation (y_vec_1[t-1], s_vec_1[t-1], pi_vec_1[t-1], s_vec_1[t], v_vec_1[t], par['phi'], par['alpha'], par['gamma'], par['h'], par['b']))
```
```python
periods = np.linspace(0, T5, T5)
fig = plt.figure(figsize = (15,6))
fig.suptitle('Graphical simulation of the model', fontsize=20)
ax1 = fig.add_subplot(2,1,1)
ax1.plot(periods, pi_vec_1, label='$\pi_t$')
ax1.set_title('Evolution of Inflation')
plt.ylabel('$\pi_t$')
plt.grid(True)
plt.legend(loc='upper right')
ax2 = fig.add_subplot(2,1,2)
ax2.plot(periods, y_vec_1, color='orange', label='$y_t$')
ax2.set_title('Evolution of Output')
plt.ylabel('$y_t$')
plt.xlabel('$Periods$')
plt.grid(True)
plt.legend(loc='upper right')
```
```python
# Calculate the requested statistics:
var_y_t = np.var(y_vec_1)
var_pi_t = np.var(pi_vec_1)
corr_y_t_pi_t = np.corrcoef(y_vec_1, pi_vec_1)
corr_y_t_y_t_1 = np.corrcoef(y_vec_1[1:], y_vec_1[:-1])
corr_pi_t_pi_t_1 = np.corrcoef(pi_vec_1[1:], pi_vec_1[:-1])
# Print the statistics
print(f'The variance of the output is: var(y_t) = {var_y_t}')
print(f'The variance of the inflation is: var(pi_t) = {var_pi_t}')
print(f'The correlation between the inflation and the output is: corr(pi_t, y_t) = {corr_y_t_pi_t[0,1]}')
print(f'The auto-correlation between the output (of the current period) and the output (of the previous period) is: corr(y_t,y_t_1) = {corr_y_t_y_t_1[0,1]}')
print(f'The auto-correlation between the inflation (of the current period) and the inflation (of the previous period) is: corr(pi_t, pi_t_1) = {corr_pi_t_pi_t_1[0,1]}')
```
The variance of the output is: var(y_t) = 1.824925348354874
The variance of the inflation is: var(pi_t) = 1.1449104934286185
The correlation between the inflation and the output is: corr(pi_t, y_t) = -0.1314060958224771
The auto-correlation between the output (of the current period) and the output (of the previous period) is: corr(y_t,y_t_1) = 0.7789670127564
The auto-correlation between the inflation (of the current period) and the inflation (of the previous period) is: corr(pi_t, pi_t_1) = 0.9785490722526474
**Question 5:** Plot how the correlation between $y_t$ and $\pi_t$ changes with $\phi$. Use a numerical optimizer or root finder to choose $\phi\in(0,1)$ such that the simulated correlation between $y_t$ and $\pi_t$ comes close to 0.31.
**Answer to Question 5:**
```python
# Create a random seed
seed = 2021
np.random.seed(seed)
# Define the periods of the model simulation
T6 = 1000
# Define the demand and supply shocks as normal distributions
x_vec_2 = np.random.normal(loc=0, scale=par['sigma_x'], size=T6)
c_vec_2 = np.random.normal(loc=0, scale=par['sigma_c'], size=T6)
```
```python
# Create a function that will fill in the vectors and the only function parameter will be phi
def phi_function(phi):
# Create vectors
y_vec_2 = [0]
pi_vec_2 = [0]
v_vec_2 = [0]
s_vec_2 = [0]
corr_y_t_pi_t_vec = [0]
# In (y_vec_2.append...) and (pi_vec_2.append...) all the parameters are substituted with their values apart from phi,
# which will take the value that we will give when calling the function.
for t in range (1, T6):
v_vec_2.append(v_t (v_vec_2[t-1], x_vec_2[t]))
s_vec_2.append(s_t (s_vec_2[t-1], c_vec_2[t]))
y_vec_2.append(sol_output (y_vec_2[t-1], s_vec_2[t-1], pi_vec_2[t-1], s_vec_2[t], v_vec_2[t], phi, par['alpha'], par['gamma'], par['h'], par['b']))
pi_vec_2.append(sol_inflation (y_vec_2[t-1], s_vec_2[t-1], pi_vec_2[t-1], s_vec_2[t], v_vec_2[t], phi, par['alpha'], par['gamma'], par['h'], par['b']))
corr_y_t_pi_t_vec = np.corrcoef(y_vec_2, pi_vec_2)[1,0]
return y_vec_2, pi_vec_2, corr_y_t_pi_t_vec
# Unpack solution
y_vec_2, pi_vec_2, corr_y_t_pi_t_vec = phi_function(par['phi'])
```
```python
# Simulate the model and fill in the correlation vector
phi_vec = np.linspace(1e-8, 9.9999999e-1, T6) # define different values for phi between 0 and 1.
# At the beginning we used 0.00000001 and 0.99999999 as the bounds for the phi values, then printed phi_vec
# and finally used the first and last printed value as bounds
correlations = []
for i in phi_vec:
y_vec_2, pi_vec_2, corr_y_t_pi_t_vec = phi_function(i)
correlations.append(corr_y_t_pi_t_vec)
```
```python
# Plot the correlations and the respective phi values
plt.figure(figsize = (12,8))
plt.plot(phi_vec, correlations, label='Correlation ($y_{t},\pi_{t}$)')
plt.axhline(y=0.31, linestyle='--', color='black', xmin=0, label='Correlation ($y_{t},\pi_{t}$) = 0.31')
plt.title('Correlations of ($y_{t},\pi_{t}$) for different values of $\phi$')
plt.ylabel('Correlations ($y_{t},\pi_{t}$)')
plt.xlabel('$\phi$')
plt.ylim(-0.2, 0.5)
plt.grid(True)
plt.legend(loc='upper left')
```
**Conclusion:** the plot shows that as $\phi$ increases, the correlation of ($y_{t},\pi_{t}$) increases as well. Moreover, for $\phi \in (0, 0.5]$ (the upper bound is approximate) the correlation is negative, while for $\phi \in [0.5, 1)$ it is positive. Finally, when $\phi$ is slightly greater than 0.9, corr($y_{t},\pi_{t}$) = 0.31.
In the next cell we use two different root finders to estimate the value of $\phi$ for which $corr(y_{t},\pi_{t}) \approx 0.31$.
```python
# Define the function that will be used in the optimizer
obj = lambda phi: (np.corrcoef(phi_function(phi)[0], phi_function(phi)[1])[1,0] - 0.31)
# Use two optimizing methods to estimate and cross-check the result
brentq_result = optimize.brentq(obj, a=0, b=1, full_output=False)
print(f'The value of phi estimated with the brentq numerical optimizer is: brentq_result = {brentq_result}')
bisect_result = optimize.bisect(obj, a=0, b=1, full_output=False)
print(f'The value of phi estimated with the bisect numerical optimizer is: bisect_result = {bisect_result}')
```
The value of phi estimated with the brentq numerical optimizer is: brentq_result = 0.9277138617387728
The value of phi estimated with the bisect numerical optimizer is: bisect_result = 0.9277138617380842
**Conclusion:** both root finders (the brentq and bisect methods) returned almost the same result, with the first 12 of 16 decimals identical. So, for $\phi \approx 0.92771$, the correlation of $(y_{t}, \pi_{t})$ comes close to 0.31.
**Question 6:** Use a numerical optimizer to choose $\sigma_x>0$, $\sigma_c>0$ and $\phi\in(0,1)$ to make the simulated statistics as close as possible to US business cycle data where:
1. $var(y_t) = 1.64$
2. $var(\pi_t) = 0.21$
3. $corr(y_t,\pi_t) = 0.31$
4. $corr(y_t,y_{t-1}) = 0.84$
5. $corr(\pi_t,\pi_{t-1}) = 0.48$
**Answer to Question 6:**
We did not manage to solve Question 6 completely, but the next cells contain the latest version of our attempt.
```python
# Create a random seed
#seed = 2022
#np.random.seed(seed)
# Define the periods of the model simulation
#T7 = 10 # The number of periods is quite small, because we wanted to reduce the complexity of the "for loop" (that exists in the next lines)
# during our tests. We would set it equal to 1000 for the simulation when we would fix the code
# Create a function that will fill in the vectors and the function's parameters will be phi, sigma_x, sigma_c
#def US_function(phi, sigma_x, sigma_c):
# Create vectors
# y_vec_3 = [0]
# pi_vec_3 = [0]
# v_vec_3 = [0]
# s_vec_3 = [0]
# x_vec_3 = np.random.normal(loc=0, scale=sigma_x, size=T7)
# c_vec_3 = np.random.normal(loc=0, scale=sigma_c, size=T7)
# Create vectors for the five statistics
# var_y_t_vec = [0]
# var_pi_t_vec = [0]
# corr_y_t_pi_t_vec_1 = [0]
# corr_y_t_y_t_1_vec = [0]
# corr_pi_t_pi_t_1_vec = [0]
# for t in range (1, T7):
# v_vec_3.append(v_t (v_vec_3[t-1], x_vec_3[t]))
# s_vec_3.append(s_t (s_vec_3[t-1], c_vec_3[t]))
# y_vec_3.append(sol_output (y_vec_3[t-1], s_vec_3[t-1], pi_vec_3[t-1], s_vec_3[t], v_vec_3[t], phi, par['alpha'], par['gamma'], par['h'], par['b']))
# pi_vec_3.append(sol_inflation (y_vec_3[t-1], s_vec_3[t-1], pi_vec_3[t-1], s_vec_3[t], v_vec_3[t], phi, par['alpha'], par['gamma'], par['h'], par['b']))
# var_y_t_vec = np.var(y_vec_3)
# var_pi_t_vec = np.var(pi_vec_3)
# corr_y_t_pi_t_vec_1 = np.corrcoef(y_vec_3, pi_vec_3)[1,0]
# corr_y_t_y_t_1_vec = np.corrcoef(y_vec_3[1:], y_vec_3[:-1])[1,0]
# corr_pi_t_pi_t_1_vec = np.corrcoef(pi_vec_3[1:], pi_vec_3[:-1])[1,0]
# return y_vec_3, pi_vec_3, var_y_t_vec, var_pi_t_vec, corr_y_t_pi_t_vec_1, corr_y_t_y_t_1_vec, corr_pi_t_pi_t_1_vec
# Unpack solution
#y_vec_3, pi_vec_3, var_y_t_vec, var_pi_t_vec, corr_y_t_pi_t_vec_1, corr_y_t_y_t_1_vec, corr_pi_t_pi_t_1_vec = US_function(par['phi'], par['sigma_x'], par['sigma_c'])
# Simulate the model and fill in the correlation vector
#phi_vec_1 = np.linspace(1e-8, 9.9999999e-1, T7)
#sigma_x_vec = np.linspace(0, 10, 10)
#sigma_c_vec = np.linspace(0, 10, 10)
#vars_y_t = []
#vars_pi_t = []
#corrs = []
#autocorrs_y_t = []
#autocorrs_pi_t = []
#for i in phi_vec_1:
# for j in sigma_x_vec:
# for k in sigma_c_vec:
# y_vec_3, pi_vec_3, var_y_t_vec, var_pi_t_vec, corr_y_t_pi_t_vec_1, corr_y_t_y_t_1_vec, corr_pi_t_pi_t_1_vec = US_function(par['phi'], par['sigma_x'], par['sigma_c'])
# vars_y_t.append(var_y_t_vec)
# vars_pi_t.append(var_pi_t_vec)
# corrs.append(corr_y_t_pi_t_vec_1)
# autocorrs_y_t.append(corr_y_t_y_t_1_vec)
# autocorrs_pi_t.append(corr_pi_t_pi_t_1_vec)
```
```python
# Define the function that will be used in the optimizer
#obj_1 = lambda phi, sigma_x, sigma_c: ((np.var(US_function(phi, sigma_x, sigma_c)[0], US_function(phi, sigma_x, sigma_c)[1] - 1.64)) &
# (np.var(US_function(phi, sigma_x, sigma_c)[0], US_function(phi, sigma_x, sigma_c)[1] - 0.21)) &
# (np.corrcoef(US_function(phi, sigma_x, sigma_c)[0], US_function(phi, sigma_x, sigma_c)[1])[1,0] - 0.31) &
# (np.corrcoef(US_function(phi, sigma_x, sigma_c)[1:][0], US_function(phi, sigma_x, sigma_c)[:-1][1])[1,0] - 0.84) &
# (np.corrcoef(US_function(phi, sigma_x, sigma_c)[1:][0], US_function(phi, sigma_x, sigma_c)[:-1][1]) - 0.48))
#E = obj_1(phi, sigma_x, sigma_c)
# Use two optimizing methods to cross-check the result
#brentq_result = optimize.brentq(E, a=0, b=1, full_output=False)
#print(f'The value of phi estimated with the brentq numerical optimizer is: brentq_result = {brentq_result}')
#bisect_result = optimize.bisect(obj_1, a=0, b=1, full_output=False)
#print(f'The value of phi estimated with the bisect numerical optimizer is: bisect_result = {bisect_result}')
```
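One possible way to complete this question is sketched below (an added sketch, not a verified solution): simulate the model for candidate values of $(\phi,\sigma_x,\sigma_c)$, compute the five statistics, and minimize the sum of squared deviations from the US targets with `optimize.minimize` (the `optimize` module is already imported for Question 5). The helper name `simulate_moments`, the seed and the starting guess are our own choices; the code reuses `v_t`, `s_t`, `sol_output`, `sol_inflation` and `par` defined earlier.
```python
# Sketch of a moment-matching approach for Question 6 (not a verified solution)
def simulate_moments(phi, sigma_x, sigma_c, T=1000, seed=2022):
    np.random.seed(seed)  # fix the shocks so the objective is deterministic across evaluations
    x_shocks = np.random.normal(0, sigma_x, size=T)
    c_shocks = np.random.normal(0, sigma_c, size=T)
    y, pi, v, s = [0], [0], [0], [0]
    for t in range(1, T):
        v.append(v_t(v[t-1], x_shocks[t]))
        s.append(s_t(s[t-1], c_shocks[t]))
        y.append(sol_output(y[t-1], s[t-1], pi[t-1], s[t], v[t], phi, par['alpha'], par['gamma'], par['h'], par['b']))
        pi.append(sol_inflation(y[t-1], s[t-1], pi[t-1], s[t], v[t], phi, par['alpha'], par['gamma'], par['h'], par['b']))
    return np.array([np.var(y), np.var(pi),
                     np.corrcoef(y, pi)[0, 1],
                     np.corrcoef(y[1:], y[:-1])[0, 1],
                     np.corrcoef(pi[1:], pi[:-1])[0, 1]])

targets = np.array([1.64, 0.21, 0.31, 0.84, 0.48])  # US business cycle statistics

def objective(theta):
    phi, sigma_x, sigma_c = theta
    if not (0 < phi < 1) or sigma_x <= 0 or sigma_c <= 0:
        return 1e8  # penalize values outside the admissible region
    return np.sum((simulate_moments(phi, sigma_x, sigma_c) - targets)**2)

res = optimize.minimize(objective, x0=[0.9, 1.0, 0.2], method='Nelder-Mead')
print(res.x)  # estimated (phi, sigma_x, sigma_c)
```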
# 3. Exchange economy
Consider an **exchange economy** with
1. 3 goods, $(x_1,x_2,x_3)$
2. $N$ consumers indexed by \\( j \in \{1,2,\dots,N\} \\)
3. Preferences are Cobb-Douglas with log-normally distributed coefficients
$$ \begin{eqnarray*}
u^{j}(x_{1},x_{2},x_{3}) &=&
\left(x_{1}^{\beta_{1}^{j}}x_{2}^{\beta_{2}^{j}}x_{3}^{\beta_{3}^{j}}\right)^{\gamma}\\
& & \,\,\,\beta_{i}^{j}=\frac{\alpha_{i}^{j}}{\alpha_{1}^{j}+\alpha_{2}^{j}+\alpha_{3}^{j}} \\
& & \,\,\,\boldsymbol{\alpha}^{j}=(\alpha_{1}^{j},\alpha_{2}^{j},\alpha_{3}^{j}) \\
& & \,\,\,\log(\boldsymbol{\alpha}^j) \sim \mathcal{N}(\mu,\Sigma) \\
\end{eqnarray*} $$
4. Endowments are exponentially distributed,
$$
\begin{eqnarray*}
\boldsymbol{e}^{j} &=& (e_{1}^{j},e_{2}^{j},e_{3}^{j}) \\
& & e_i^j \sim f, f(z;\zeta) = 1/\zeta \exp(-z/\zeta)
\end{eqnarray*}
$$
Let $p_3 = 1$ be the **numeraire**. The implied **demand functions** are:
$$
\begin{eqnarray*}
x_{i}^{\star j}(p_{1},p_{2},\boldsymbol{e}^{j})&=&\beta^{j}_i\frac{I^j}{p_{i}} \\
\end{eqnarray*}
$$
where consumer $j$'s income is
$$I^j = p_1 e_1^j + p_2 e_2^j +p_3 e_3^j$$
The **parameters** and **random preferences and endowments** are given by:
```python
# a. parameters
N = 50000
mu = np.array([3,2,1])
Sigma = np.array([[0.25, 0, 0], [0, 0.25, 0], [0, 0, 0.25]])
gamma = 0.8
zeta = 1
# b. random draws
seed = 1986
np.random.seed(seed)
# preferences
alphas = np.exp(np.random.multivariate_normal(mu, Sigma, size=N))
betas = alphas/np.reshape(np.sum(alphas,axis=1), (N,1))
# endowments
e1 = np.random.exponential(zeta,size=N)
e2 = np.random.exponential(zeta,size=N)
e3 = np.random.exponential(zeta,size=N)
```
**Question 1:** Plot the histograms of the budget shares for each good across agents.
**Answer to Question 1:**
The budget share of good $i$ for consumer $j$ is the expenditure on that good as a fraction of income:
$$
\begin{eqnarray*}
\frac{p_{i}\,x_{i}^{\star j}(p_{1},p_{2},\boldsymbol{e}^{j})}{I^j}&=&\frac{p_{i}\,\beta^{j}_{i}\frac{I^j}{p_{i}}}{I^j}=\beta^{j}_{i} \\
\end{eqnarray*}
$$
So the histogram of the budget shares is obtained by plotting the betas.
```python
# Plot the histogram of the budget shares for each good
fig = plt.figure(dpi=100)
ax = fig.add_subplot(1,1,1)
goods = ['Good 1', 'Good 2', 'Good 3']
ax.hist(betas, bins=50, label=goods)
ax.set_title('The budget shares of three goods')
ax.set_xlabel('betas')
ax.set_ylabel('Consumers')
plt.legend(loc='upper right')
```
**Conclusion:**
As the histogram shows, good 1 tends to have the highest budget share and good 3 the lowest. The distribution for good 3 is also the most concentrated, so its bars contain the most consumers, while the distributions for good 1 and good 2 are fairly similar in shape.
Consider the **excess demand functions:**
$$ z_i(p_1,p_2) = \sum_{j=1}^N \left( x_{i}^{\star j}(p_{1},p_{2},\boldsymbol{e}^{j}) - e_i^j \right)$$
**Question 2:** Plot the excess demand functions.
**Answer to Question 2:**
Firstly, we need to define the demand functions in order to calculate the excess demand.
```python
# Check the values of betas
betas
```
array([[0.53104511, 0.30693723, 0.16201766],
[0.50866997, 0.4162364 , 0.07509363],
[0.83317207, 0.07722226, 0.08960567],
...,
[0.63725404, 0.22027742, 0.14246854],
[0.78205036, 0.11988119, 0.09806844],
[0.65552751, 0.24728801, 0.09718448]])
We know from the construction of betas and alphas that the first column of betas corresponds to good 1, the second column to good 2, and the third column to good 3. So we can define the demand functions as below.
```python
# Demand function:
def demand_good_1_fun(betas, p1, p2, e1, e2, e3):
I = p1*e1 + p2*e2 + e3
return betas[:,0]*I/p1
def demand_good_2_fun(betas, p1, p2, e1, e2, e3):
I = p1*e1 + p2*e2 + e3
return betas[:,1]*I/p2
def demand_good_3_fun(betas, p1, p2, e1, e2, e3):
I = p1*e1 + p2*e2 + e3
return betas[:,2]*I
```
Secondly, the excess demand function is obtained as demand minus supply.
```python
# Excess demand function:
def excess_demand_good_1_func(betas, p1, p2, e1, e2, e3):
demand = np.sum(demand_good_1_fun(betas, p1, p2, e1, e2, e3))
supply = np.sum(e1)
excess_demand = demand - supply
return excess_demand
def excess_demand_good_2_func(betas, p1, p2, e1, e2, e3):
demand = np.sum(demand_good_2_fun(betas, p1, p2, e1, e2, e3))
supply = np.sum(e2)
excess_demand = demand - supply
return excess_demand
```
Thirdly, we plot the excess demand functions.
```python
# Return coordinate matrices from coordinate vectors
p1_vec = np.linspace(1, 10, 100)
p2_vec = np.linspace(1, 10, 100)
p1_grid, p2_grid = np.meshgrid(p1_vec, p2_vec)
```
```python
# Store the function values in two-dimensional arrays
# (row index follows p2 and column index follows p1, to match np.meshgrid's default 'xy' indexing)
excess_1_grid = np.ones((100, 100))
excess_2_grid = np.ones((100, 100))
for i, p1 in enumerate(p1_vec):
    for j, p2 in enumerate(p2_vec):
        excess_1_grid[j,i] = excess_demand_good_1_func(betas, p1, p2, e1, e2, e3)
        excess_2_grid[j,i] = excess_demand_good_2_func(betas, p1, p2, e1, e2, e3)
```
```python
# Plot the excess demand functions for goods 1 and 2
fig = plt.figure(figsize = (10,15))
# Excess demand for good 1
ex1 = fig.add_subplot(2,1,1, projection='3d')
ex1.plot_surface(p1_grid, p2_grid, excess_1_grid)
ex1.invert_xaxis()
ex1.set_xlabel('$p_1$')
ex1.set_ylabel('$p_2$')
ex1.set_zlabel('Excess Demand')
ex1.set_title('Good 1')
# Excess demand for good 2
ex2 = fig.add_subplot(2,1,2, projection='3d')
ex2.plot_surface(p1_grid, p2_grid, excess_2_grid)
ex2.invert_xaxis()
ex2.set_xlabel('$p_1$')
ex2.set_ylabel('$p_2$')
ex2.set_zlabel('Excess Demand')
ex2.set_title('Good 2')
plt.tight_layout()
```
**Question 3:** Find the Walras-equilibrium prices, $(p_1,p_2)$, where both excess demands are (approximately) zero, e.g. by using the following tâtonnement process:
1. Guess on $p_1 > 0$, $p_2 > 0$ and choose tolerance $\epsilon > 0$ and adjustment aggressivity parameter, $\kappa > 0$.
2. Calculate $z_1(p_1,p_2)$ and $z_2(p_1,p_2)$.
3. If $|z_1| < \epsilon$ and $|z_2| < \epsilon$ then stop.
4. Else set $p_1 = p_1 + \kappa \frac{z_1}{N}$ and $p_2 = p_2 + \kappa \frac{z_2}{N}$ and return to step 2.
**Answer to Question 3:**
Firstly, we follow the steps of the tâtonnement process in Question 3 to create a function for finding the equilibrium prices.
```python
# Create a function to find the equilibrium prices
def find_equilibrium(betas, p1, p2, e1, e2, e3, kappa=0.5, eps=1e-5, maxiter=5000):
t = 0
while True:
# a. step 1: excess demand
Z1 = excess_demand_good_1_func(betas, p1, p2, e1, e2, e3)
Z2 = excess_demand_good_2_func(betas, p1, p2, e1, e2, e3)
# b: step 2: stop?
if np.abs(Z1) < eps and np.abs(Z2) < eps or t >= maxiter:
print(f'{t:3d}: p1 = {p1:12.8f} -> excess demand 1 -> {Z1:14.8f}')
print(f'{t:3d}: p2 = {p2:12.8f} -> excess demand 2 -> {Z2:14.8f}')
break
        # c. step 3: update p1 and p2
        p1 = p1 + kappa*Z1/N
        p2 = p2 + kappa*Z2/N
        # d. print progress for the first iterations and every 250th iteration
if t < 5 or t%250 == 0:
print(f'{t:3d}: p1 = {p1:12.8f} -> excess demand 1 -> {Z1:14.8f}')
print(f'{t:3d}: p2 = {p2:12.8f} -> excess demand 2 -> {Z2:14.8f}')
elif t == 5:
print(' ...')
t += 1
return p1, p2
```
Secondly, we initialize the prices and run the tâtonnement process to obtain equilibrium prices.
```python
# Find the equilibrium prices
p1 = 1.5
p2 = 2
kappa = 0.5
eps = 1e-5
p1, p2 = find_equilibrium(betas, p1, p2, e1, e2, e3, kappa=kappa, eps=eps)
```
0: p1 = 1.96210663 -> excess demand 1 -> 46210.66316021
0: p2 = 1.79267895 -> excess demand 2 -> -20732.10501031
1: p1 = 2.23963132 -> excess demand 1 -> 27752.46920159
1: p2 = 1.63720075 -> excess demand 2 -> -15547.82025340
2: p1 = 2.43849634 -> excess demand 1 -> 19886.50124673
2: p2 = 1.52379200 -> excess demand 2 -> -11340.87488112
3: p1 = 2.59174633 -> excess demand 1 -> 15324.99963530
3: p2 = 1.44618319 -> excess demand 2 -> -7760.88055883
4: p1 = 2.71582585 -> excess demand 1 -> 12407.95136982
4: p2 = 1.39783608 -> excess demand 2 -> -4834.71087112
...
250: p1 = 6.26335411 -> excess demand 1 -> 216.49247363
250: p2 = 2.53210665 -> excess demand 2 -> 80.79309673
500: p1 = 6.46787054 -> excess demand 1 -> 20.50967918
500: p2 = 2.60841773 -> excess demand 2 -> 7.65154831
750: p1 = 6.48782160 -> excess demand 1 -> 2.05850651
750: p2 = 2.61586076 -> excess demand 2 -> 0.76794420
1000: p1 = 6.48982965 -> excess demand 1 -> 0.20775870
1000: p2 = 2.61660988 -> excess demand 2 -> 0.07750600
1250: p1 = 6.49003237 -> excess demand 1 -> 0.02098016
1250: p2 = 2.61668551 -> excess demand 2 -> 0.00782681
1500: p1 = 6.49005285 -> excess demand 1 -> 0.00211877
1500: p2 = 2.61669314 -> excess demand 2 -> 0.00079042
1750: p1 = 6.49005491 -> excess demand 1 -> 0.00021397
1750: p2 = 2.61669391 -> excess demand 2 -> 0.00007982
2000: p1 = 6.49005512 -> excess demand 1 -> 0.00002161
2000: p2 = 2.61669399 -> excess demand 2 -> 0.00000806
2085: p1 = 6.49005513 -> excess demand 1 -> 0.00000991
2085: p2 = 2.61669400 -> excess demand 2 -> 0.00000370
Thirdly, we check that the excess demands of both goods are (almost) zero.
```python
# Ensure that excess demand of both goods are (almost) zero
Z1 = excess_demand_good_1_func(betas, p1, p2, e1, e2, e3)
Z2 = excess_demand_good_2_func(betas, p1, p2, e1, e2, e3)
print(Z1, Z2)
assert(np.abs(Z1) < eps)
assert(np.abs(Z2) < eps)
```
9.910378139466047e-06 3.6971468944102526e-06
**Conclusion:** by applying the tâtonnement process, we find that the Walras-equilibrium prices are $p_1 = 6.4900$ and $p_2 = 2.6167$, where both excess demands are close to zero (excess demand for good 1 = 9.910e-06 and for good 2 = 3.697e-06).
**Question 4:** Plot the distribution of utility in the Walras-equilibrium and calculate its mean and variance.
**Answer to Question 4:**
To evaluate the utility function in the Walras-equilibrium, we use the equilibrium prices found above.
```python
# Use the equilibrium prices found above
p1 = 6.4900
p2 = 2.6167
# Calculate the Cobb-Douglas utility at the equilibrium prices
def utility(betas, e1, e2, e3, gamma, p1=p1, p2=p2):
    I = p1*e1 + p2*e2 + e3
    x1 = betas[:,0]*(I/p1)
    x2 = betas[:,1]*(I/p2)
    x3 = betas[:,2]*I   # p3 = 1 is the numeraire
    return (x1**betas[:,0] * x2**betas[:,1] * x3**betas[:,2])**gamma   # Cobb-Douglas: product of powers
# Plot the utility function
U = utility(betas,e1, e2, e3, gamma)
plt.hist(U,100)
plt.xlabel('Utility')
plt.ylabel('Consumers')
plt.title('Utilities Distribution In the Walras-equilibrium ')
```
Calculate the mean value and variance:
```python
mean = np.mean(U)
variance = np.var(U)
print(f'mean = {mean:.3f}, variance = {variance:.3f}')
```
mean = 2.376, variance = 0.208
**Question 5:** Find the Walras-equilibrium prices if instead all endowments were distributed equally. Discuss the implied changes in the distribution of utility. Does the value of $\gamma$ play a role for your conclusions?
**Answer to Question 5:**
Firstly, we make all endowments distributed equally.
```python
# Create equally distributed endowments
e_1 = np.mean(e1) + np.zeros(N)
e_2 = np.mean(e2) + np.zeros(N)
e_3 = np.mean(e3) + np.zeros(N)
```
Secondly, we run the tâtonnement function again to find the equilibrium prices in this case.
```python
# Find the equilibrium prices
p1 = 6
p2 = 2
p1, p2 = find_equilibrium(betas, p1, p2, e_1, e_2, e_3, kappa=kappa, eps=eps)
```
0: p1 = 5.98161569 -> excess demand 1 -> -1838.43058095
0: p2 = 2.08252318 -> excess demand 2 -> 8252.31758307
1: p1 = 5.96812909 -> excess demand 1 -> -1348.65994059
1: p2 = 2.14602277 -> excess demand 2 -> 6349.95919174
2: p1 = 5.95841304 -> excess demand 1 -> -971.60561725
2: p2 = 2.19591981 -> excess demand 2 -> 4989.70444789
3: p1 = 5.95164637 -> excess demand 1 -> -676.66645308
3: p2 = 2.23573186 -> excess demand 2 -> 3981.20485204
4: p1 = 5.94721104 -> excess demand 1 -> -443.53291538
4: p2 = 2.26787693 -> excess demand 2 -> 3214.50700009
...
250: p1 = 6.42958877 -> excess demand 1 -> 52.50116116
250: p2 = 2.59617436 -> excess demand 2 -> 19.61473172
500: p1 = 6.48035834 -> excess demand 1 -> 5.20168564
500: p2 = 2.61514143 -> excess demand 2 -> 1.94323017
750: p1 = 6.48542456 -> excess demand 1 -> 0.52273499
750: p2 = 2.61703404 -> excess demand 2 -> 0.19528030
1000: p1 = 6.48593405 -> excess demand 1 -> 0.05260559
1000: p2 = 2.61722437 -> excess demand 2 -> 0.01965208
1250: p1 = 6.48598532 -> excess demand 1 -> 0.00529473
1250: p2 = 2.61724353 -> excess demand 2 -> 0.00197797
1500: p1 = 6.48599048 -> excess demand 1 -> 0.00053292
1500: p2 = 2.61724546 -> excess demand 2 -> 0.00019909
1750: p1 = 6.48599100 -> excess demand 1 -> 0.00005364
1750: p2 = 2.61724565 -> excess demand 2 -> 0.00002004
1933: p1 = 6.48599105 -> excess demand 1 -> 0.00000999
1933: p2 = 2.61724567 -> excess demand 2 -> 0.00000373
**Conclusion:** the tâtonnement process shows that the equilibrium prices are now $p_1$ = 6.4860 and $p_2$ = 2.6172. The price of good 1 is slightly lower than before (previously $p_1$ = 6.4900), while the price of good 2 is slightly higher than before (previously $p_2$ = 2.6167).
In the following part, we discuss the changes in the distribution of utility.
First, we define the new utility function with the equally distributed endowments.
```python
# New equilibrium prices
p1 = 6.4860
p2 = 2.6172
# Define the new utility function for the equally distributed endowments
def utility_1(betas, e_1, e_2, e_3, gamma, p1=p1, p2=p2):
    I = p1*e_1 + p2*e_2 + e_3
    x1 = betas[:,0]*(I/p1)
    x2 = betas[:,1]*(I/p2)
    x3 = betas[:,2]*(I/1)   # p3 = 1 is the numeraire
    return (x1**betas[:,0] * x2**betas[:,1] * x3**betas[:,2])**gamma   # Cobb-Douglas: product of powers
U_1 = utility_1(betas, e_1, e_2, e_3, gamma)
```
Then we plot the original and the new utility distributions in one graph, so the changes can easily be compared visually.
```python
# Plot the original and new utility function
plt.hist(U, 100, label='Original Utility Distribution')
plt.hist(U_1, 100, label='New Utility Distribution')
plt.xlabel('Utility')
plt.ylabel('Consumers')
plt.title('Utilities Distribution')
plt.legend()
```
In addition, we calculate the mean and variance of the two utility distributions to quantify the changes.
```python
mean_u = np.mean(U)
var_u = np.var(U)
mean_u_1 = np.mean(U_1)
var_u_1 = np.var(U_1)
print(f'original utility: mean = {mean_u:.3f}, variance = {var_u:.3f}')
print(f'new utility     : mean = {mean_u_1:.3f}, variance = {var_u_1:.3f}')
```
original utility: mean = 2.376, variance = 0.208
new utility     : mean = 2.457, variance = 0.004
**Conclusion:**
The new utility distribution has a higher mean and a much lower variance than the original one.
Below, we use an interactive plot to see whether these conclusions change for different values of $\gamma$.
```python
# Plot the original and new utility function in one graph to compare the difference
def interactive(gam):
    uti = utility(betas, e1, e2, e3, gamma=gam)        # original (unequal) endowments
    uti_1 = utility_1(betas, e_1, e_2, e_3, gamma=gam) # equally distributed endowments
plt.hist(uti, 500, label='Original Utility Distribution')
plt.hist(uti_1, 500, label='New Utility Distribution')
plt.xlabel('Utility')
plt.ylabel('Consumers')
plt.title('Utilities Distribution')
plt.legend()
widgets.interact(interactive,
gam = widgets.FloatSlider(description='$\\gamma$', min=-1, max=1, step=0.1, value=0))
```
**Conclusion:** the graph shows that the mean of the utility distribution changes considerably for different values of $\gamma$. Hence, $\gamma$ does play an important role for the conclusions.
```python
%matplotlib inline
from sympy import *
from sympy.utilities.lambdify import implemented_function
from sympy.abc import x, y, z
import numpy as np
import matplotlib.pyplot as plt
init_printing(use_unicode=True)
```
```python
r, u, v, c, r_c, u_c, v_c, E, p, r_p, u_p, v_p, e, a, b, q, b_0, b_1, b_2, b_3, q_0, q_1, q_2, q_3, q_4, q_5 = symbols('r u v c r_c u_c v_c E p r_p u_p v_p e a b q b_0 b_1 b_2 b_3 q_0 q_1 q_2 q_3 q_4 q_5')
```
```python
gamma = symbols('gamma',positive=True)
```
#### $f_{1}(c,p) = \dfrac{1}{2}r_{c}c^{2}+\dfrac{1}{4}u_{c}c^{4}+\dfrac{1}{6}v_{c}c^{6}+\dfrac{1}{2}r_{p}p^{2}-\gamma cp-Ep$
```python
f1 = (1/2)*r_c*c**2+(1/4)*u_c*c**4+(1/6)*v_c*c**6-E*p+(1/2)*r_p*p**2-gamma*c*p
```
### Minimizing $f_{1}$: $\dfrac{\partial f_{1}(c,p)}{\partial c} = 0$ and $\dfrac{\partial f_{1}(c,p)}{\partial p} = 0$
```python
pmin = solve(f1.diff(c),p)[0]
pmin
```
```python
E_cp = solve(f1.diff(p),E)[0]
E_cp
```
```python
expand(E_cp.subs(p,pmin))
```
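As an added illustration, the resulting relation $E(c)$ can be evaluated numerically by lambdifying the symbolic result for some assumed coefficient values; the numbers used below are arbitrary placeholders.
```python
# Sketch: plot E(c) for arbitrary, assumed coefficient values (placeholders only)
E_of_c = expand(E_cp.subs(p, pmin))
E_func = lambdify(c, E_of_c.subs({r_c: -1, u_c: 0, v_c: 1, r_p: 1, gamma: 0.5}), 'numpy')
c_vals = np.linspace(-1.5, 1.5, 300)
plt.plot(c_vals, E_func(c_vals))
plt.xlabel('$c$')
plt.ylabel('$E$')
plt.title('Field vs. order parameter (assumed coefficients)')
plt.show()
```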
```python
```
```python
import math
from sympy import *
init_printing(use_latex='mathjax')
```
# Definitions and Functions
```python
## Define symbols
x, y, z = symbols('mu gamma psi')
### NOTE: THE CODE BELOW IS NOT BEING USED IN THE FINAL EXAMPLE ###
cx, sx = symbols('cos(x) sin(x)')
cy, sy = symbols('cos(y) sin(y)')
cz, sz = symbols('cos(z) sin(z)')
## Elementary rotation matrices:
#C1
Rx = Matrix([
[1, 0, 0],
[0, cx, sx],
[0, -sx, cx]])
#C2
Ry = Matrix([
[cy, 0, -sy],
[0, 1, 0],
[sy, 0, cy]])
#C3
Rz = Matrix([
[cz, sz, 0],
[-sz, cz, 0],
[0, 0, 1]])
```
```python
## Elementary rotation matrices Functions:
def C1(angle):
x = symbols('x')
Rx = Matrix([
[1, 0, 0],
[0, cos(x), sin(x)],
[0, -sin(x), cos(x)]])
return Rx.subs(x, angle)
def C2(angle):
y = symbols('y')
Ry = Matrix([
[cos(y), 0, -sin(y)],
[0, 1, 0],
[sin(y), 0, cos(y)]])
return Ry.subs(y, angle)
def C3(angle):
z = symbols('z')
Rz = Matrix([
[cos(z), sin(z), 0],
[-sin(z), cos(z), 0],
[0, 0, 1]])
return Rz.subs(z, angle)
```
```python
from sympy.physics.vector import ReferenceFrame  # ReferenceFrame is needed here but is only imported further below

class IJKReferenceFrame(ReferenceFrame):
    def __init__(self, name):
        super().__init__(name, latexs=[r'\mathbf{%s}_{%s}' % (
            idx, name) for idx in ("i", "j", "k")])
        self.i = self.x
        self.j = self.y
        self.k = self.z
```
# Examples
We can compute the matrix Rzyx by multiplying the matrices corresponding to each consecutive rotation, e.g.
Rzyx(θx, θy, θz) = Rz(θz)∗Ry(θy)∗Rx(θx)
In this file, we will use SymPy to compute algebraic expressions for Euler angle matrices. Using these expressions, we will be able to derive formulas for converting from matrices to Euler angles.
x = spin angle ($\mu$)
y = nutation angle ($\gamma$)
z = precession angle ($\psi$)
```python
# 3-1-3 Euler angles rotation matrices
# Ctot(x,y,z) = C3(x) * C1(y) * C3(z)
C3_x = C3(x)
C1_y = C1(y)
C3_z = C3(z)
R_zxz = C3_x * C1_y * C3_z
R_zxz
```
$\displaystyle \left[\begin{matrix}- \sin{\left(\mu \right)} \sin{\left(\psi \right)} \cos{\left(\gamma \right)} + \cos{\left(\mu \right)} \cos{\left(\psi \right)} & \sin{\left(\mu \right)} \cos{\left(\gamma \right)} \cos{\left(\psi \right)} + \sin{\left(\psi \right)} \cos{\left(\mu \right)} & \sin{\left(\gamma \right)} \sin{\left(\mu \right)}\\- \sin{\left(\mu \right)} \cos{\left(\psi \right)} - \sin{\left(\psi \right)} \cos{\left(\gamma \right)} \cos{\left(\mu \right)} & - \sin{\left(\mu \right)} \sin{\left(\psi \right)} + \cos{\left(\gamma \right)} \cos{\left(\mu \right)} \cos{\left(\psi \right)} & \sin{\left(\gamma \right)} \cos{\left(\mu \right)}\\\sin{\left(\gamma \right)} \sin{\left(\psi \right)} & - \sin{\left(\gamma \right)} \cos{\left(\psi \right)} & \cos{\left(\gamma \right)}\end{matrix}\right]$
```python
# 3-2-1 Euler angles rotation matrices
# Ctot(x,y,z) = C1(x) * C2(y) * C3(z)
C1_x = C1(x)
C2_y = C2(y)
C3_z = C3(z)
R_zyx = C1_x * C2_y * C3_z
R_zyx
```
$\displaystyle \left[\begin{matrix}\cos{\left(\gamma \right)} \cos{\left(\psi \right)} & \sin{\left(\psi \right)} \cos{\left(\gamma \right)} & - \sin{\left(\gamma \right)}\\\sin{\left(\gamma \right)} \sin{\left(\mu \right)} \cos{\left(\psi \right)} - \sin{\left(\psi \right)} \cos{\left(\mu \right)} & \sin{\left(\gamma \right)} \sin{\left(\mu \right)} \sin{\left(\psi \right)} + \cos{\left(\mu \right)} \cos{\left(\psi \right)} & \sin{\left(\mu \right)} \cos{\left(\gamma \right)}\\\sin{\left(\gamma \right)} \cos{\left(\mu \right)} \cos{\left(\psi \right)} + \sin{\left(\mu \right)} \sin{\left(\psi \right)} & \sin{\left(\gamma \right)} \sin{\left(\psi \right)} \cos{\left(\mu \right)} - \sin{\left(\mu \right)} \cos{\left(\psi \right)} & \cos{\left(\gamma \right)} \cos{\left(\mu \right)}\end{matrix}\right]$
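From the symbolic 3-2-1 matrix above, the conversion formulas from a rotation matrix back to the Euler angles can be read off directly: $\gamma = -\operatorname{asin}(R_{13})$, $\psi = \operatorname{atan2}(R_{12}, R_{11})$ and $\mu = \operatorname{atan2}(R_{23}, R_{33})$, valid in the principal range. The quick numerical check below is an addition to the original notebook; the sample angle values are arbitrary.
```python
# Recover the 3-2-1 Euler angles from numeric matrix entries (sample values are arbitrary)
vals = {x: 0.3, y: -0.5, z: 1.2}            # mu, gamma, psi in radians (principal range)
R_num = R_zyx.subs(vals)
mu_rec = atan2(R_num[1, 2], R_num[2, 2])     # mu    = atan2(R23, R33)
gamma_rec = asin(-R_num[0, 2])               # gamma = -asin(R13)
psi_rec = atan2(R_num[0, 1], R_num[0, 0])    # psi   = atan2(R12, R11)
[float(val) for val in (mu_rec, gamma_rec, psi_rec)]   # ~ [0.3, -0.5, 1.2]
```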
```python
# 3-2-3 Euler angles rotation matrices
# Ctot(x,y,z) = C3(z) * C2(y) * C3(x)
C3_x = C3(x)
C2_y = C2(y)
C3_z = C3(z)
R_zyz = C3_z * C2_y * C3_x
R_zyz
```
$\displaystyle \left[\begin{matrix}- \sin{\left(\mu \right)} \sin{\left(\psi \right)} + \cos{\left(\gamma \right)} \cos{\left(\mu \right)} \cos{\left(\psi \right)} & \sin{\left(\mu \right)} \cos{\left(\gamma \right)} \cos{\left(\psi \right)} + \sin{\left(\psi \right)} \cos{\left(\mu \right)} & - \sin{\left(\gamma \right)} \cos{\left(\psi \right)}\\- \sin{\left(\mu \right)} \cos{\left(\psi \right)} - \sin{\left(\psi \right)} \cos{\left(\gamma \right)} \cos{\left(\mu \right)} & - \sin{\left(\mu \right)} \sin{\left(\psi \right)} \cos{\left(\gamma \right)} + \cos{\left(\mu \right)} \cos{\left(\psi \right)} & \sin{\left(\gamma \right)} \sin{\left(\psi \right)}\\\sin{\left(\gamma \right)} \cos{\left(\mu \right)} & \sin{\left(\gamma \right)} \sin{\left(\mu \right)} & \cos{\left(\gamma \right)}\end{matrix}\right]$
### Using the physics.mechanics library
I will now compare **two methods of obtaining the same rotation matrix**.
One uses the `.orient` method implemented in `sympy.physics.mechanics` and the other uses the functions defined at the beginning of this notebook.
```python
from sympy.physics.mechanics import *
```
```python
# First let's define the dynamic symbols, as these are a function of time
x, y, z = dynamicsymbols('mu gamma psi')
```
#### As done earlier:
```python
# (-2)-3-1 Euler angle rotation matrix
C3_psi = C3(z)          # rotation about the 3-axis by the precession angle psi
C2_neg_gamma = C2(-y)   # rotation about the 2-axis by -gamma
C1_mu = C1(x)           # rotation about the 1-axis by the spin angle mu
R_y_neg_zx = C1_mu*C3_psi*C2_neg_gamma
R_y_neg_zx
```
$\displaystyle \left[\begin{matrix}\cos{\left(\gamma{\left(t \right)} \right)} \cos{\left(\psi{\left(t \right)} \right)} & \sin{\left(\psi{\left(t \right)} \right)} & \sin{\left(\gamma{\left(t \right)} \right)} \cos{\left(\psi{\left(t \right)} \right)}\\- \sin{\left(\gamma{\left(t \right)} \right)} \sin{\left(\mu{\left(t \right)} \right)} - \sin{\left(\psi{\left(t \right)} \right)} \cos{\left(\gamma{\left(t \right)} \right)} \cos{\left(\mu{\left(t \right)} \right)} & \cos{\left(\mu{\left(t \right)} \right)} \cos{\left(\psi{\left(t \right)} \right)} & - \sin{\left(\gamma{\left(t \right)} \right)} \sin{\left(\psi{\left(t \right)} \right)} \cos{\left(\mu{\left(t \right)} \right)} + \sin{\left(\mu{\left(t \right)} \right)} \cos{\left(\gamma{\left(t \right)} \right)}\\- \sin{\left(\gamma{\left(t \right)} \right)} \cos{\left(\mu{\left(t \right)} \right)} + \sin{\left(\mu{\left(t \right)} \right)} \sin{\left(\psi{\left(t \right)} \right)} \cos{\left(\gamma{\left(t \right)} \right)} & - \sin{\left(\mu{\left(t \right)} \right)} \cos{\left(\psi{\left(t \right)} \right)} & \sin{\left(\gamma{\left(t \right)} \right)} \sin{\left(\mu{\left(t \right)} \right)} \sin{\left(\psi{\left(t \right)} \right)} + \cos{\left(\gamma{\left(t \right)} \right)} \cos{\left(\mu{\left(t \right)} \right)}\end{matrix}\right]$
```python
# 3-1 Euler angle rotation matrix
C1_mu*C3_psi
```
$\displaystyle \left[\begin{matrix}\cos{\left(\psi{\left(t \right)} \right)} & \sin{\left(\psi{\left(t \right)} \right)} & 0\\- \sin{\left(\psi{\left(t \right)} \right)} \cos{\left(\mu{\left(t \right)} \right)} & \cos{\left(\mu{\left(t \right)} \right)} \cos{\left(\psi{\left(t \right)} \right)} & \sin{\left(\mu{\left(t \right)} \right)}\\\sin{\left(\mu{\left(t \right)} \right)} \sin{\left(\psi{\left(t \right)} \right)} & - \sin{\left(\mu{\left(t \right)} \right)} \cos{\left(\psi{\left(t \right)} \right)} & \cos{\left(\mu{\left(t \right)} \right)}\end{matrix}\right]$
#### Using built-in functions
```python
A = IJKReferenceFrame("A")
```
```python
A1 = IJKReferenceFrame("A1")
psi = dynamicsymbols('psi')
A1.orient(A, 'Axis', [psi, A.z])
A1.dcm(A) # T_{A1A}
```
$\displaystyle \left[\begin{matrix}\cos{\left(\psi{\left(t \right)} \right)} & \sin{\left(\psi{\left(t \right)} \right)} & 0\\- \sin{\left(\psi{\left(t \right)} \right)} & \cos{\left(\psi{\left(t \right)} \right)} & 0\\0 & 0 & 1\end{matrix}\right]$
```python
A2 = IJKReferenceFrame("A2")
gamma = dynamicsymbols('gamma')
A2.orient(A1, 'Axis', [gamma, -A1.y])
A2.dcm(A1) # T_{A2A1}
```
$\displaystyle \left[\begin{matrix}\cos{\left(\gamma{\left(t \right)} \right)} & 0 & \sin{\left(\gamma{\left(t \right)} \right)}\\0 & 1 & 0\\- \sin{\left(\gamma{\left(t \right)} \right)} & 0 & \cos{\left(\gamma{\left(t \right)} \right)}\end{matrix}\right]$
```python
A3 = IJKReferenceFrame("A3")
#zeta = dynamicsymbols('zeta')
A3.orient(A2, 'Axis', [psi, A2.z])
A3.dcm(A1) # T_{A3A1}
```
$\displaystyle \left[\begin{matrix}\cos{\left(\gamma{\left(t \right)} \right)} \cos{\left(\psi{\left(t \right)} \right)} & \sin{\left(\psi{\left(t \right)} \right)} & \sin{\left(\gamma{\left(t \right)} \right)} \cos{\left(\psi{\left(t \right)} \right)}\\- \sin{\left(\psi{\left(t \right)} \right)} \cos{\left(\gamma{\left(t \right)} \right)} & \cos{\left(\psi{\left(t \right)} \right)} & - \sin{\left(\gamma{\left(t \right)} \right)} \sin{\left(\psi{\left(t \right)} \right)}\\- \sin{\left(\gamma{\left(t \right)} \right)} & 0 & \cos{\left(\gamma{\left(t \right)} \right)}\end{matrix}\right]$
```python
B = IJKReferenceFrame("B")
mu = dynamicsymbols('mu')
B.orient(A3, 'Axis', [mu, A3.x])
B.dcm(A3) # T_{BA3}
```
$\displaystyle \left[\begin{matrix}1 & 0 & 0\\0 & \cos{\left(\mu{\left(t \right)} \right)} & \sin{\left(\mu{\left(t \right)} \right)}\\0 & - \sin{\left(\mu{\left(t \right)} \right)} & \cos{\left(\mu{\left(t \right)} \right)}\end{matrix}\right]$
```python
B.dcm(A2)
```
$\displaystyle \left[\begin{matrix}\cos{\left(\psi{\left(t \right)} \right)} & \sin{\left(\psi{\left(t \right)} \right)} & 0\\- \sin{\left(\psi{\left(t \right)} \right)} \cos{\left(\mu{\left(t \right)} \right)} & \cos{\left(\mu{\left(t \right)} \right)} \cos{\left(\psi{\left(t \right)} \right)} & \sin{\left(\mu{\left(t \right)} \right)}\\\sin{\left(\mu{\left(t \right)} \right)} \sin{\left(\psi{\left(t \right)} \right)} & - \sin{\left(\mu{\left(t \right)} \right)} \cos{\left(\psi{\left(t \right)} \right)} & \cos{\left(\mu{\left(t \right)} \right)}\end{matrix}\right]$
```python
B.dcm(A1)
```
$\displaystyle \left[\begin{matrix}\cos{\left(\gamma{\left(t \right)} \right)} \cos{\left(\psi{\left(t \right)} \right)} & \sin{\left(\psi{\left(t \right)} \right)} & \sin{\left(\gamma{\left(t \right)} \right)} \cos{\left(\psi{\left(t \right)} \right)}\\- \sin{\left(\gamma{\left(t \right)} \right)} \sin{\left(\mu{\left(t \right)} \right)} - \sin{\left(\psi{\left(t \right)} \right)} \cos{\left(\gamma{\left(t \right)} \right)} \cos{\left(\mu{\left(t \right)} \right)} & \cos{\left(\mu{\left(t \right)} \right)} \cos{\left(\psi{\left(t \right)} \right)} & - \sin{\left(\gamma{\left(t \right)} \right)} \sin{\left(\psi{\left(t \right)} \right)} \cos{\left(\mu{\left(t \right)} \right)} + \sin{\left(\mu{\left(t \right)} \right)} \cos{\left(\gamma{\left(t \right)} \right)}\\- \sin{\left(\gamma{\left(t \right)} \right)} \cos{\left(\mu{\left(t \right)} \right)} + \sin{\left(\mu{\left(t \right)} \right)} \sin{\left(\psi{\left(t \right)} \right)} \cos{\left(\gamma{\left(t \right)} \right)} & - \sin{\left(\mu{\left(t \right)} \right)} \cos{\left(\psi{\left(t \right)} \right)} & \sin{\left(\gamma{\left(t \right)} \right)} \sin{\left(\mu{\left(t \right)} \right)} \sin{\left(\psi{\left(t \right)} \right)} + \cos{\left(\gamma{\left(t \right)} \right)} \cos{\left(\mu{\left(t \right)} \right)}\end{matrix}\right]$
# Other interesting methods
### DCM
Examples from: https://docs.sympy.org/latest/modules/physics/vector/api/classes.html
```python
# Define the reference frames
N = ReferenceFrame('N')
q1 = symbols('q1')
# orientnew = Returns a new reference frame oriented with respect to this reference frame.
A = N.orientnew('A', 'Axis', (q1, N.x))
# DCM between A and N reference frames
A.dcm(N), N.dcm(A)
```
$\displaystyle \left( \left[\begin{matrix}1 & 0 & 0\\0 & \cos{\left(q_{1} \right)} & \sin{\left(q_{1} \right)}\\0 & - \sin{\left(q_{1} \right)} & \cos{\left(q_{1} \right)}\end{matrix}\right], \ \left[\begin{matrix}1 & 0 & 0\\0 & \cos{\left(q_{1} \right)} & - \sin{\left(q_{1} \right)}\\0 & \sin{\left(q_{1} \right)} & \cos{\left(q_{1} \right)}\end{matrix}\right]\right)$
```python
q1 = symbols('q1')
N = ReferenceFrame('N')
B = ReferenceFrame('B')
B.orient_axis(N, N.x, q1)
# The orient_axis() method generates a direction cosine matrix and its transpose which
# defines the orientation of B relative to N and vice versa. Once orient is called,
# dcm() outputs the appropriate direction cosine matrix:
B.dcm(N), N.dcm(B)
```
$\displaystyle \left( \left[\begin{matrix}1 & 0 & 0\\0 & \cos{\left(q_{1} \right)} & \sin{\left(q_{1} \right)}\\0 & - \sin{\left(q_{1} \right)} & \cos{\left(q_{1} \right)}\end{matrix}\right], \ \left[\begin{matrix}1 & 0 & 0\\0 & \cos{\left(q_{1} \right)} & - \sin{\left(q_{1} \right)}\\0 & \sin{\left(q_{1} \right)} & \cos{\left(q_{1} \right)}\end{matrix}\right]\right)$
### Kinematic Equations
```python
from sympy.physics.vector import ReferenceFrame, get_motion_params, dynamicsymbols, init_vprinting
```
Returns the three motion parameters - (acceleration, velocity, and position) as vectorial functions of time in the given frame.
If a higher order differential function is provided, the lower order functions are used as boundary conditions. For example, given the acceleration, the velocity and position parameters are taken as boundary conditions.
The values of time at which the boundary conditions are specified are taken from timevalue1(for position boundary condition) and timevalue2(for velocity boundary condition).
If any of the boundary conditions are not provided, they are taken to be zero by default (zero vectors, in case of vectorial inputs). If the boundary conditions are also functions of time, they are converted to constants by substituting the time values in the dynamicsymbols._t time Symbol.
This function can also be used for calculating rotational motion parameters. Have a look at the Parameters and Examples for more clarity.
```python
R = ReferenceFrame('R')
v1, v2, v3 = dynamicsymbols('v1 v2 v3')
v = v1*R.x + v2*R.y + v3*R.z
get_motion_params(R, position=v)
a, b, c = symbols('a b c')
v = a*R.x + b*R.y + c*R.z
get_motion_params(R, velocity=v)
parameters = get_motion_params(R, acceleration=v)
parameters[1], parameters[2]
```
$\displaystyle \left( a t\mathbf{\hat{r}_x} + b t\mathbf{\hat{r}_y} + c t\mathbf{\hat{r}_z}, \ \frac{a t^{2}}{2}\mathbf{\hat{r}_x} + \frac{b t^{2}}{2}\mathbf{\hat{r}_y} + \frac{c t^{2}}{2}\mathbf{\hat{r}_z}\right)$
Gives equations relating the qdot’s to u’s for a rotation type.
Supply rotation type and order as in orient. Speeds are assumed to be body-fixed; if we are defining the orientation of B in A using by rot_type, the angular velocity of B in A is assumed to be in the form: speed[0]*B.x + speed[1]*B.y + speed[2]*B.z
```python
u1, u2, u3 = dynamicsymbols('u1 u2 u3')
q1, q2, q3 = dynamicsymbols('q1 q2 q3')
k_Eq = kinematic_equations([u1, u2, u3], [q1, q2, q3], 'body', '313')
k_Eq
```
$\displaystyle \left[ - \frac{\operatorname{u_{1}}{\left(t \right)} \sin{\left(\operatorname{q_{3}}{\left(t \right)} \right)} + \operatorname{u_{2}}{\left(t \right)} \cos{\left(\operatorname{q_{3}}{\left(t \right)} \right)}}{\sin{\left(\operatorname{q_{2}}{\left(t \right)} \right)}} + \frac{d}{d t} \operatorname{q_{1}}{\left(t \right)}, \ - \operatorname{u_{1}}{\left(t \right)} \cos{\left(\operatorname{q_{3}}{\left(t \right)} \right)} + \operatorname{u_{2}}{\left(t \right)} \sin{\left(\operatorname{q_{3}}{\left(t \right)} \right)} + \frac{d}{d t} \operatorname{q_{2}}{\left(t \right)}, \ \frac{\left(\operatorname{u_{1}}{\left(t \right)} \sin{\left(\operatorname{q_{3}}{\left(t \right)} \right)} + \operatorname{u_{2}}{\left(t \right)} \cos{\left(\operatorname{q_{3}}{\left(t \right)} \right)}\right) \cos{\left(\operatorname{q_{2}}{\left(t \right)} \right)}}{\sin{\left(\operatorname{q_{2}}{\left(t \right)} \right)}} - \operatorname{u_{3}}{\left(t \right)} + \frac{d}{d t} \operatorname{q_{3}}{\left(t \right)}\right]$
```python
u1, u2, u3 = symbols('mu gamma psi')
q1, q2, q3 = symbols(r'\dot{\mu} \dot{\gamma} \dot{\psi}')  # raw string avoids invalid-escape warnings
K_eq = Matrix(kinematic_equations([q1, q2, q3], [u1, u2, u3], 'body', '323'))
K_eq
# Check this site http://man.hubwiz.com/docset/SymPy.docset/Contents/Resources/Documents/_modules/sympy/physics/vector/functions.html
```
$\displaystyle \left[\begin{matrix}- \frac{\dot{\gamma} \sin{\left(\psi \right)} - \dot{\mu} \cos{\left(\psi \right)}}{\sin{\left(\gamma \right)}}\\- \dot{\gamma} \cos{\left(\psi \right)} - \dot{\mu} \sin{\left(\psi \right)}\\- \dot{\psi} - \frac{\left(- \dot{\gamma} \sin{\left(\psi \right)} + \dot{\mu} \cos{\left(\psi \right)}\right) \cos{\left(\gamma \right)}}{\sin{\left(\gamma \right)}}\end{matrix}\right]$
# Chapter 7 Solvers
```python
from sympy import *
x, y, z = symbols('x y z')
init_printing(use_unicode=True)
```
## 7.1 A Note on Equations
In SymPy, an equation is written with the `Eq` function.
```python
Eq(x,y)
```
```python
solveset(Eq(x**2, 1), x) # Eq(lhs, rhs)
```
```python
solveset(Eq(x**2 - 1, 0), x)
```
```python
solveset(x**2 - 1, x) # the first argument is an expression, interpreted as expression = 0
```
---> In SymPy, any expression is assumed to be equal to `0`. Instead of using the `Eq` function, you can simply pass the difference between the two sides.
## 7.2 Solving Equations Algebraically
The basic function for solving algebraic equations is `solveset(equation, variable=None, domain=S.Complexes)`.
There is also the function `solve()`, but in general `solveset()` is the one to use.
```python
solveset(x**2-x, x)
```
```python
solveset(x - x, x, domain=S.Reals)
```
```python
solveset(sin(x) - 1, x, domain=S.Reals)
```
---> SymPy automatically introduces the integer parameter `n` to describe the infinite family of solutions.
#### When there is no solution, the empty set is returned (a `ConditionSet` is returned when SymPy cannot solve the equation symbolically):
```python
solveset(exp(x),x)
```
```python
solveset(cos(x) - x, x)
```
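If only a numerical root is needed, `nsolve` can be used instead (this example is an addition to the original text; the starting guess `1` is arbitrary):
```python
nsolve(cos(x) - x, x, 1)   # numerical root near x ≈ 0.739
```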
#### Systems of linear equations: the `linsolve` function
```python
linsolve([x + y + z - 1, x + y + 2*z -3], (x, y, z))
```
```python
linsolve(Matrix((
[1, 1, 1, 1],
[1, 1, 2, 3])), (x, y, z)) # the Matrix function is covered in Chapter 8
```
```python
M = Matrix(((
1, 1, 1, 1),
(1, 1, 2, 3)
))
```
```python
system = A, b = M[:,:-1], M[:,-1]
```
```python
linsolve(system, x, y, z)
```
However, for higher-degree equations,
```python
solveset(x**3 - 6*x**2 + 9*x, x)
```
the result does not show whether any roots are repeated, or with what multiplicity. In that case use
```python
roots(x**3 - 6*x**2 + 9*x, x)
```
which shows that the root `0` appears once and the root `3` appears twice. Indeed,
```python
factor(x**3 - 6*x**2 + 9*x)
```
**Note**: equations that `solveset` cannot handle include
- nonlinear multivariate systems
- equations solvable via LambertW
In such cases the `solve()` function can be used:
```python
solve([x*y - 1, x -2], x, y)
```
Adding the option `dict=True` returns the result as a list of dictionaries.
```python
solve([x*y - 1, x -2], x, y, dict=True)
```
```python
solve(x*exp(x) - 1, x)
```
## 7.3 Solving Differential Equations
Set up the unknown functions as SymPy objects.
```python
f, g = symbols('f g', cls=Function) # the unknowns to solve for are functions, so we pass the cls=Function option
```
```python
f(x)
```
☆ As in Chapter 6, a derivative can be written with the Derivative() function,
```python
Derivative(f(x),x,1)
```
or by using the `diff` method on the unknown function `f(x)`:
```python
f(x).diff(x)
```
**Comment**
The diff method can be used here because it is applied to the unknown function f(x).
==> Now consider solving the differential equation $$f''(x)-2f'(x)+f(x)=\sin(x).$$ This is written as
```python
diffeq = Eq(f(x).diff(x, 2) - 2*f(x).diff(x) + f(x), sin(x))
```
```python
diffeq # the differential equation
```
Differential equations are solved with the `dsolve` function!
```python
dsolve(diffeq, f(x)) # the second argument specifies the function to solve for
```
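Initial conditions can also be supplied through the `ics` argument of `dsolve` (an added example, assuming a reasonably recent SymPy version; the values 0 and 1 are arbitrary):
```python
dsolve(diffeq, f(x), ics={f(0): 0, f(x).diff(x).subs(x, 0): 1})
```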
```python
dsolve(f(x).diff(x)*(1-sin(f(x))),f(x))
```
That's the last one! On to [Chapter 8 Matrices](https://hiroyuki827.github.io/SymPy_tutorial/Chapter8_Matrices.html)!
## Elements of Machine Learning
We consider ML problems involving data points with real-valued labels $y$, which represent some quantity of interest. We often refer to such ML problems as **regression problems**. In this notebook, we will apply some basic ML methods to solve a simple regression problem. These methods aim at finding or learning a good predictor function $h(\mathbf{x})$, which predicts the label $y$ based on the features $\mathbf{x}$ of a data point. Each method corresponds to a particular combination of choices for the type of predictor function (hypothesis space) and the loss function (the quality measure used to rank predictors). These different combinations offer different tradeoffs between **computational complexity, robustness (against outliers) and accuracy**.
## Learning goals
- know how to formulate a simple regression problem by identifying data points, their features, and labels
- know how to represent features and labels of data points using matrices and vectors
- apply ready-made regression methods to learn a good predictor function from labeled data
- assess and compare the computational complexity and accuracy of different regression methods
## Relevant Sections in [Course Book](https://arxiv.org/abs/1805.05052)
Chapter 2; Section 3.1-3.2; Chapter 4 and 5
## Background Material
[Video Lecture](https://www.youtube.com/watch?v=kHwlB_j7Hkc) on regression by [Prof. Andrew Ng](https://en.wikipedia.org/wiki/Andrew_Ng)
Additional information on the Python libraries used in this exercise can be found here:
- [NumPy](http://cs231n.github.io/python-numpy-tutorial/)
- [matplotlib](https://matplotlib.org/tutorials/index.html#introductory)
- [Pandas](https://pandas.pydata.org/pandas-docs/stable/getting_started/10min.html#min)
- [Slicing numpy arrays](https://www.pythoninformer.com/python-libraries/numpy/index-and-slice/)
## The Problem
Assume you have secured an internship at a real estate broker in downtown Helsinki. Your firm has gathered a dataset obtained from previously sold houses in different areas of the city. Your task is to find out the relation between different characteristics (features) $\mathbf{x}$ and the price (label) $y$ of a sold house.
In order to solve this task you can use ML methods to find a good predictor map $h(\mathbf{x})$ such that $h(\mathbf{x}) \approx y$. The basic principle behind these methods is simple: Out of a given set of predictor functions (e.g. the set of linear functions) pick one that fits well the data points $(\mathbf{x}^{(i)},y^{(i)})$, $i=1,\ldots,m$, for which we know the true label values $y^{(i)}$. For the house price prediction problem, this labeled data is provided by the recordings of previous house sales.
## The Data
Our goal is to learn a good predictor $h(\mathbf{x})$ for the price $y$ of a house. The prediction $h(\mathbf{x})$ is based on several features (characteristics) $\mathbf{x} = \big(x_{1},\ldots,x_{n}\big)^{T}$ such as the average number of rooms per dwelling $x_{1}$ or the nitric oxides concentration $x_{2}$ near the house.
To learn a good predictor $h(\mathbf{x})$, we use historic recordings of house sales. These recordings consist of $m$ data points. Each data point is characterized by the house features $\mathbf{x}^{(i)} \in \mathbb{R}^{n}$ and the selling price $y^{(i)}$. ML methods find or learn a good predictor $h(\mathbf{x})$ by minimizing the average of the prediction errors $h(\mathbf{x}^{(i)}) - y^{(i)}$ for $i=1,\ldots,m$.
<a id='handsondata'></a>
<div class=" alert alert-info">
<p><b>Demo.</b> Loading the Data.</p>
The following code snippet defines a function `X,y= GetFeaturesLabels(m,n)` which reads in data of previous house sales. The input parameters are the number `m` of data points and the number `n` of features to be used for each data point. The function returns a matrix $\mathbf{X}$ and vector $\mathbf{y}$.
The features $\mathbf{x}^{(i)}$ of the sold houses are stored in the rows of the numpy array `X` (of shape (m,n)) and the corresponding selling prices $y^{(i)}$ in the numpy array `y` (shape (m,1)). The two arrays represent the feature matrix $\mathbf{X} = \begin{pmatrix} \mathbf{x}^{(1)} & \ldots & \mathbf{x}^{(m)} \end{pmatrix}^{T}$ and the label vector $\mathbf{y} = \big( y^{(1)}, \ldots, y^{(m)} \big)^{T}$.
</div>
```python
# import "Pandas" library/package (and use shorthand "pd" for the package)
# Pandas provides functions for loading (storing) data from (to) files
import pandas as pd
from matplotlib import pyplot as plt
from IPython.display import display, HTML
import numpy as np
from sklearn.datasets import load_boston
import random
def GetFeaturesLabels(m=10, n=10):
house_dataset = load_boston()
house = pd.DataFrame(house_dataset.data, columns=house_dataset.feature_names)
x1 = house['RM'].values.reshape(-1,1) # vector whose entries are the average room numbers for each sold houses
x2 = house['NOX'].values.reshape(-1,1) # vector whose entries are the nitric oxides concentration for sold houses
x1 = x1[0:m]
x2 = x2[0:m]
np.random.seed(30)
X = np.hstack((x1,x2,np.random.randn(n,m).T))
X = X[:,0:n]
X = X - X.mean(axis = 0)
y = house_dataset.target.reshape(-1,1) # creates a vector whose entries are the labels for each sold house
y = y[0:m]
y = y - y.mean(axis = 0)
return X, y
```
## Visualize Data
Scatter plots visualize data points by representing them as "dots" in the two-dimensional plane. Scatter plots can help to develop an intuition for the relation between features and labels of data points.
<a id='drawplot'></a>
<div class=" alert alert-info">
<p><b>Demo.</b> Scatter Plots.</p>
<p>The code snippet below creates one scatterplot showing the first feature $x^{(i)}_{1}$ and the label $y^{(i)}$ (house price) for each data point. It also creates a second scatterplot with the second feature $x^{(i)}_{2}$ and the label $y^{(i)}$ (house price) for each data point.</p>
</div>
```python
from matplotlib import pyplot as plt
X,y = GetFeaturesLabels(10,10)
fig, axs = plt.subplots(1, 2,figsize=(15,5))
axs[0].scatter(X[:,0], y)
axs[0].set_title('average number of rooms per dwelling vs. price')
axs[0].set_xlabel(r'feature $x_{1}$')
axs[0].set_ylabel('house price $y$')
axs[1].scatter(X[:,1], y)
axs[1].set_xlabel(r'feature $x_{2}$')
axs[1].set_title('nitric oxide level vs. price')
axs[1].set_ylabel('house price $y$')
plt.show()
```
## Linear Regression
Our goal is to predict the price $y$ of a house based on $n$ properties or features $\mathbf{x}=(x_{1},\ldots,x_{n})^{T} \in \mathbb{R}^{n}$ of that house. To this end, we try to find (or learn) a predictor function $h(\mathbf{x})$ such that $y \approx h(\mathbf{x})$.
Within linear regression, we restrict the predictor function to be a linear function. The resulting hypothesis space of all such linear predictor functions is
\begin{equation*}
\mathcal{H} = \{h^{(\mathbf{w})}(\mathbf{x}) = \mathbf{w}^{T} \mathbf{x} \mbox{ for some } \mathbf{w} \in \mathbb{R}^{n}\}.
\label{eq1}
\tag{1}
\end{equation*}
Note that, for two vectors $\mathbf{w}=\big(w_{1},\ldots,w_{n}\big)^{T} \in \mathbb{R}^{n}$ and $\mathbf{x}=\big(x_{1},\ldots,x_{n}\big)^{T}\in \mathbb{R}^{n}$, we denote $\mathbf{w}^{T} \mathbf{x} = \sum_{r=1}^{n} w_{r} x_{r}$.
To measure the quality of a particular predictor $h^{(\mathbf{w})}(\mathbf{x}) = \mathbf{w}^{T} \mathbf{x}$, obtained for some particular choice for the weight vector $\mathbf{w} \in \mathbb{R}^{n}$, we try it on the historic recordings of house sales. For the $i$th house sale record we know the house price $y^{(i)}$ that has been achieved and can compare it to the predicted price $h(\mathbf{x}^{(i)})$.
The prediction $h^{(\mathbf{w})}(\mathbf{x}^{(i)})$ will typically incur a non-zero __prediction error__ $y^{(i)} - h^{(\mathbf{w})}(\mathbf{x}^{(i)})$. To measure the error or **loss** incurred by predicting the true label value $y$ of a data point using the predicted label $\hat{y}=h(\mathbf{x})$ (which is the result of applying the predictor function $h(\cdot)$ to the features $\mathbf{x}$), we need to define a loss function $\mathcal{L}(y,\hat{y})$. In principle, the loss function is a design parameter which can be chosen freely by the ML scientist or engineer (based on the application at hand).
For regression problems with numeric labels $y$, a popular choice for the loss is the [squared error loss](https://scikit-learn.org/stable/modules/model_evaluation.html#mean-squared-error) $\mathcal{L}(y,\hat{y}) = (y-\hat{y})^{2}$. We highlight that to evaluate the loss we need to know the true label $y$ of a data point. However, this is the case for the data points $(\mathbf{x}^{(i)},y^{(i)})$ representing previous house sales. We can then average the predictor loss over all $m$ data points $(\mathbf{x}^{(i)},y^{(i)})$ of previously sold houses:
\begin{equation*}
\mathcal{E} (\mathbf{w}) = (1/m) \sum^{m}_{i=1}(y^{(i)} - \mathbf{w}^{T} \mathbf{x}^{(i)})^2.
\label{eq2}
\tag{2}
\end{equation*}
The optimal weight vector $\mathbf{w}_{\rm opt}$ is any weight vector which achieves the minimum value of $ \mathcal{E} (\mathbf{w})$, i.e., $$\mathcal{E} (\mathbf{w}_{\rm opt})= \min_{\mathbf{w} \in \mathbb{R}^{n}}\mathcal{E} (\mathbf{w}).$$
An optimal predictor is then obtained as $h(\mathbf{x}) = \mathbf{w}_{\rm opt}^{T} \mathbf{x}$. The average loss
$$\mathcal{E} (\mathbf{w}_{\rm opt}) = (1/m) \sum^{m}_{i=1}(y^{(i)} - \mathbf{w}_{\rm opt}^{T} \mathbf{x}^{(i)})^2$$
incurred by the optimal predictor $h(\mathbf{x}) = \mathbf{w}_{\rm opt}^{T} \mathbf{x}$ is also known as the **training error**, since it is the error obtained when fitting or training a linear model to some labeled data points, which serve as the **training data**.
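To make equation (2) concrete: for any candidate weight vector the training error can be evaluated with a couple of NumPy operations. The snippet below is only an illustrative sketch (the zero weight vector is an arbitrary choice), reusing the arrays `X` and `y` loaded above.
```python
# sketch: evaluate the average squared error (2) for a candidate weight vector w
w = np.zeros((X.shape[1], 1))        # arbitrary candidate weights, shape (n, 1)
y_hat = X @ w                        # predictions h(x^(i)) = w^T x^(i), shape (m, 1)
print(np.mean((y - y_hat)**2))       # training error for this particular w
```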
<a id='drawplot'></a>
<div class=" alert alert-info"><p><b>Demo.</b> Learn a Linear Predictor.</p>
The code snippet below shows how to use the `LinearRegression` class from the Python library `scikit-learn` to learn an optimal predictor from the set of linear predictors $h(\mathbf{x}) = \mathbf{w}^{T} \mathbf{x}$ using the squared error loss.
The optimal weight vector $\mathbf{w}_{\rm opt}$ can be found by the function `LinearRegression.fit()`. The resulting optimal weight vector is stored in the variable `LinearRegression.coef_`. Using the optimal weight vector, we compute the training error
\begin{equation}
(1/m) \sum_{i=1}^{m} \big( y^{(i)} - \mathbf{w}_{\rm opt}^{T} \mathbf{x}^{(i)} \big)^{2}.
\end{equation}
and store it in the variable `training_error`.
You can find the documentation of the `LinearRegression` class under [this link](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html).
</div>
```python
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from IPython.display import display, Math
reg = LinearRegression(fit_intercept=False)
reg = reg.fit(X, y)
training_error = mean_squared_error(y, reg.predict(X))
display(Math(r'$\mathbf{w}_{\rm opt} ='))
optimal_weight = reg.coef_
optimal_weight = optimal_weight.reshape(-1,1)
print(optimal_weight)
print("\nThe resuling training error is ",training_error)
```
$\displaystyle \mathbf{w}_{\rm opt} =$
[[ 32.5939049 ]
[-19.90275233]
[ 2.6463634 ]
[ 16.20766644]
[ 10.69852006]
[ 0.38989091]
[-19.20441205]
[ -8.90217068]
[ 14.3588369 ]
[ 13.58536119]]
The resulting training error is 5.198593365406468e-28
<a id='varying_features'></a>
<div class=" alert alert-warning">
<p><b>Student Task</b> Varying Number of Features. </p>
In principle, we can freely choose how many of the available features $x_{1}, x_{2},\ldots,x_{n}$
of a house we want to use to predict the house price $y$. Your task is to explore the effect of using a varying number $r \leq n$ of features on the resulting training error and the computational complexity (runtime) of linear regression. For each $r=1,2,\ldots,10$, fit a linear model (using `LinearRegression(fit_intercept=False)`) to the house sales dataset (using $m=10$ data points), using only the first $r$ features $x_{1},\ldots,x_{r}$ of a house.
<br />
- You can get the first $r$ features and labels for the previously sold houses using `GetFeaturesLabels(m,r)`.<br />
- For each value of $r$, determine the computation time of the fitting method `LinearRegression.fit()` and the resulting training error (using the Python function `mean_squared_error()`) of the fitted linear model. <br />
- The results should be stored in two vectors `linreg_time` (multiply by 1000 to get time in milliseconds) and `linreg_error`. <br />
- You can measure the computation time with Python function `time.time()` as demonstrated in Round 1.
</div>
```python
import time
m = 10 # we use 10 data points of the house sales database
max_r = 10 # maximum number of features used
X,y = GetFeaturesLabels(m,max_r) # read in m data points using max_r features
linreg_time = np.zeros(max_r) # vector for storing the exec. times of LinearRegresion.fit() for each r
linreg_error = np.zeros(max_r) # vector for storing the training error of LinearRegresion.fit() for each r
# linreg_time = ...
# linreg_error = ...
# Hint: loop "r" times.
### BEGIN SOLUTION
for r in range(max_r):
reg = LinearRegression(fit_intercept=False)
start_time = time.time()
reg = reg.fit(X[:,:(r+1)], y)
end_time = (time.time() - start_time)*1000
linreg_time[r] = end_time
pred = reg.predict(X[:,:(r+1)])
linreg_error[r] = mean_squared_error(y, pred)
### END SOLUTION
plot_x = np.linspace(1, max_r, max_r, endpoint=True)
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(8, 4))
axes[0].plot(plot_x, linreg_error, label='MSE', color='red')
axes[1].plot(plot_x, linreg_time, label='time', color='green')
axes[0].set_xlabel('features')
axes[0].set_ylabel('empirical error')
axes[1].set_xlabel('features')
axes[1].set_ylabel('time (ms)')
axes[0].set_title('training error vs number of features')
axes[1].set_title('computation time vs number of features')
axes[0].legend()
axes[1].legend()
plt.tight_layout()
plt.show()
```
<a id='varying_features'></a>
<div class=" alert alert-info">
<p><b>Demo.</b> Varying Number of Data Points.</p>
<p>Besides the number of features used to describe a data point, another important parameter of a data set is the number $m$ of data points it contains. Intuitively, having more labeled data points to find or learn a good predictor should be beneficial. Let us now explore the effect of using an increasing number $m$ of data points on the resulting training error. </p>
- For each choice $m=1,2,\ldots,10$ read in the first $m$ data points of previously sold houses using $n=2$ features. This can be done using `GetFeaturesLabels(m,2)`.
- For each $m$, fit a linear model (using `LinearRegression(fit_intercept=False)`) to the first $m$ data points.
- Store the resulting training errors in the numpy array `train_error` of shape (10,1). The $m$th entry of this array should be the training error obtained when using $m$ data points. Note that the first entry of a numpy array has index $0$.
</div>
```python
import time
max_m = 10 # maximum number of data points
X, y = GetFeaturesLabels(max_m, 2) # read in max_m data points using n=2 features
train_error = np.zeros(max_m) # vector for storing the training error of LinearRegresion.fit() for each r
for r in range(max_m):
reg = LinearRegression(fit_intercept=False)
reg = reg.fit(X[:(r+1),:], y[:(r+1)])
y_pred = reg.predict(X[:(r+1),:])
train_error[r] = mean_squared_error(y[:(r+1)], y_pred)
print(train_error[2])
plot_x = np.linspace(1, max_m, max_m, endpoint=True)
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(8, 4))
axes.plot(plot_x, train_error, label='MSE', color='red')
axes.set_xlabel('number of data points (sample size)')
axes.set_ylabel('training error')
axes.set_title('training error vs. number of data points')
axes.legend()
plt.tight_layout()
plt.show()
```
<a id='drawplot'></a>
<div class=" alert alert-info">
<p><b>Demo.</b> Robustness Against Outliers.</p>
We now consider an important aspect of ML methods, i.e., the robustness to outliers. An **outlier** refers to a data point which is intrinsically different from all other data points, e.g., due to measurement errors. Imagine that for some reason, one of the data points in our house sales database is such an erroneous outlier. We would then prefer ML methods not to be affected too much by an error in a single data point.
<p>The code snippet below considers fitting a linear model for house prices $y$ based on a single feature $x_{1}$. The resulting linear predictor minimizes the average squared error loss on the training data. We then intentionally perturb one data point by setting its label to an unreasonable value. Using this corrupted data set, we then fit a linear model again and compare the so-obtained linear predictor to the linear predictor obtained from the "clean" data set.</p>
</div>
```python
from sklearn import linear_model
X,y = GetFeaturesLabels(10,1) # read in 10 data points with single feature x_1 and label y
### fit a linear model to the clean data
reg = linear_model.LinearRegression(fit_intercept=False)
reg = reg.fit(X, y)
y_pred = reg.predict(X)
# now we intentionally perturb the label of one data point (index 8)
y_perturbed = np.copy(y)
y_perturbed[8] = 15;
### fit a linear model to the perturbed data
reg1 = linear_model.LinearRegression(fit_intercept=False)
reg1 = reg1.fit(X, y_perturbed)
y_pred_perturbed = reg1.predict(X)
fig, axes = plt.subplots(1, 2, figsize=(15, 4))
axes[0].scatter(X, y, label='data')
axes[0].plot(X, y_pred, color='green', label='Fitted model')
#np.savetxt("foo.csv", np.hstack((X,y)), delimiter=",",header="x,y")
df2 = pd.DataFrame(np.hstack((X,y,y_pred)),columns=['x', 'y','yhat'])
df2.to_csv("cleandata.csv",index=False)
df3 = pd.DataFrame(np.hstack((X,y_perturbed,y_pred_perturbed)),columns=['x', 'y','yhat'])
df3.to_csv("corrupteddata.csv",index=False)
# now add individual line for each error point
axes[0].plot((X[0], X[0]), (y[0], y_pred[0]), color='red', label='errors') # add label to legend
for i in range(len(X)-1):
lineXdata = (X[i+1], X[i+1]) # same X
lineYdata = (y[i+1], y_pred[i+1]) # different Y
axes[0].plot(lineXdata, lineYdata, color='red')
axes[0].set_title('fitted model using clean data')
axes[0].set_xlabel('feature x_1')
axes[0].set_ylabel('house price y')
axes[0].legend()
axes[1].scatter(X, y_perturbed, label='data')
axes[1].plot(X, y_pred_perturbed, color='green', label='Fitted model')
# now add individual line for each error point
axes[1].plot((X[0], X[0]), (y_perturbed[0], y_pred_perturbed[0]), color='red', label='errors') # add label to legend
for i in range(len(X)-1):
lineXdata = (X[i+1], X[i+1]) # same X
lineYdata = (y_perturbed[i+1], y_pred_perturbed[i+1]) # different Y
axes[1].plot(lineXdata, lineYdata, color='red')
axes[1].set_title('fitted model using perturbed training data')
axes[1].set_xlabel('feature x_1')
axes[1].set_ylabel('house price y')
axes[1].legend()
plt.show()
plt.close('all') # clean up after using pyplot
print("optimal weight w_opt by fitting to (training on) clean training data : ", reg.coef_)
print("optimal weight w_opt by fitting to (training on) perturbed training data : ", reg1.coef_)
print(y)
```
## Using a Different Loss Function
We observe from the demo above that the resulting linear predictor is heavily affected by corrupting only one single data point. The reason for this sensitivity is rooted in the properties of the squared error loss function used by the class `LinearRegression()`. Indeed by using the loss $(\hat{y} - y)^{2}$ we force the predictor $\hat{y}$ to not be too far away from any data point with very large value $y$ of the true label.
It turns out that using a different loss function to learn a linear predictor can make the learning robust against a few outliers. One such robust loss function is known as the "Huber loss" $\mathcal{L}(y,\hat{y})$. Given a data point with label $y$ and a predicted label $\hat{y}=h(\mathbf{x})$, the Huber loss is defined as
$$\mathcal{L}(y,\hat{y}) = \begin{cases} (1/2) (y-\hat{y})^{2} & \mbox{ for } |y-\hat{y}| \leq c \\
c (|y-\hat{y}| - c/2) & \mbox{ else. }\end{cases}$$
Note that the Huber loss contains a parameter $c$ which has to be adapted to the application at hand. To learn a linear predictor $h(\mathbf{x}) = \mathbf{w}^{T} \mathbf{x}$ which minimizes the average Huber loss over the labeled data points in the training set we can use the [`HuberRegressor()`](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.HuberRegressor.html) class.
The Huber loss contains two important special cases. The first special case is obtained when $c$ is chosen very large (the precise value depending on the value range of the features and labels) such that the condition $|y-\hat{y}| \leq c$ is always satisfied. In this case, the Huber loss becomes the squared error loss $(y-\hat{y})^{2}$ (with an additional factor 1/2). The second special case is obtained for choosing $c$ very small (close to $0$) such that the condition $|y-\hat{y}| \leq c$ is never satisfied. In this case, the Huber loss becomes the absolute loss $|y - \hat{y}|$ scaled by a factor $c$.
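As a quick numerical check of these two limiting cases, the Huber loss can be evaluated directly. The small helper below is our own sketch for illustration, not a scikit-learn routine.
```python
# sketch: Huber loss and its two limiting cases
def huber_loss(residual, c):
    r = np.abs(residual)
    return np.where(r <= c, 0.5 * r**2, c * (r - 0.5 * c))

res = np.linspace(-3, 3, 7)
print(huber_loss(res, c=100.0))      # large c: identical to 0.5 * res**2
print(0.5 * res**2)
print(huber_loss(res, c=1e-6))       # tiny c: approximately c * |res|
print(1e-6 * np.abs(res))
```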
<a id='drawplot'></a>
<div class=" alert alert-info"><b>Demo.</b> Squared Error and Huber Loss.
The code below plots the squared error loss and the Huber loss for different choices of the parameter $c$. Note that the Huber loss reduces to the squared error loss for a sufficiently large value of the parameter $c$.
</div>
```python
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
import numpy as np
from matplotlib import pyplot as plt
#------------------------------------------------------------
# Define the Huber loss
def Phi(t, c):
t = abs(t)
flag = (t > c)
return (~flag) * (0.5 * t ** 2) - (flag) * c * (0.5 * c - t)
#------------------------------------------------------------
# Plot for several values of c
fig = plt.figure(figsize=(10, 3.75))
ax = fig.add_subplot(111)
x = np.linspace(-5, 5, 100)
for c in (1,2,10):
y = Phi(x, c)
ax.plot(x, y, '-k')
if c > 10:
s = r'\infty'
else:
s = str(c)
ax.text(x[6], y[6], '$c=%s$' % s,
ha='center', va='center',
bbox=dict(boxstyle='round', ec='k', fc='w'))
ax.plot(x,np.square(x),label="squared loss")
ax.set_xlabel(r'$y - \hat{y}$')
ax.set_ylabel(r'loss $\mathcal{L}(y,\hat{y})$')
ax.legend()
plt.show()
```
<a id='drawplot'></a>
<div class=" alert alert-info">
<p><b>Demo.</b> Robustness Against Outliers II.</p>
<p>The code snippet below fits a linear model for house prices $y$ based on a single feature $x_{1}$ using the Huber loss. We again intentionally perturb one data point by setting its label to an unreasonable value. Using this corrupted data set, we fit a linear model (under Huber loss) again and compare the so-obtained linear predictor to the linear predictor obtained from the "clean" data set.</p>
</div>
```python
from sklearn import linear_model
from sklearn.linear_model import HuberRegressor
X,y = GetFeaturesLabels(10,1) # read in 10 data points with single feature x_1 and label y
### fit a linear model (using Huber loss) to the clean data
reg = HuberRegressor(fit_intercept=False)
reg = reg.fit(X, y)
y_pred = reg.predict(X).reshape(-1,1)
# now we intentionally perturb the label of one data point (index 8)
y_perturbed = np.copy(y)
y_perturbed[8] = 0;
### fit a linear model (using Huber loss) to the perturbed data
#reg1 = linear_model.LinearRegression(fit_intercept=False)
reg1 = HuberRegressor (fit_intercept=False) ;
reg1 = reg1.fit(X, y_perturbed)
y_pred_perturbed = reg1.predict(X).reshape(-1,1)
fig, axes = plt.subplots(1, 2, figsize=(15, 4))
axes[0].scatter(X, y, label='data')
#np.savetxt("foo.csv", np.hstack((X,y)), delimiter=",",header="x,y")
df2 = pd.DataFrame(np.hstack((X,y,y_pred)),columns=['x', 'y','yhat'])
df2.to_csv("cleandata_huber.csv",index=False)
df3 = pd.DataFrame(np.hstack((X,y_perturbed,y_pred_perturbed)),columns=['x', 'y','yhat'])
df3.to_csv("corrupteddata_huber.csv",index=False)
axes[0].plot(X, y_pred, color='green', label='Fitted model')
# now add individual line for each error point
axes[0].plot((X[0], X[0]), (y[0], y_pred[0]), color='red', label='errors') # add label to legend
for i in range(len(X)-1):
lineXdata = (X[i+1], X[i+1]) # same X
lineYdata = (y[i+1], y_pred[i+1]) # different Y
axes[0].plot(lineXdata, lineYdata, color='red')
axes[0].set_title('fitted model using clean data')
axes[0].set_xlabel('feature x_1')
axes[0].set_ylabel('house price y')
axes[0].legend()
axes[1].scatter(X, y_perturbed, label='data')
axes[1].plot(X, y_pred_perturbed, color='green', label='Fitted model')
# now add individual line for each error point
axes[1].plot((X[0], X[0]), (y_perturbed[0], y_pred_perturbed[0]), color='red', label='errors') # add label to legend
for i in range(len(X)-1):
lineXdata = (X[i+1], X[i+1]) # same X
lineYdata = (y_perturbed[i+1], y_pred_perturbed[i+1]) # different Y
axes[1].plot(lineXdata, lineYdata, color='red')
axes[1].set_title('fitted model using perturbed data')
axes[1].set_xlabel('feature x_1')
axes[1].set_ylabel('house price y')
axes[1].legend()
plt.show()
plt.close('all') # clean up after using pyplot
print("optimal weight w_opt by fitting on clean data : ", reg.coef_)
print("optimal weight w_opt by fitting on perturbed data : ", reg1.coef_)
```
<a id='varying_features'></a>
<div class=" alert alert-info">
<p><b>Demo.</b> Varying Number of Features with Huber Loss.</p>
In principle, we can choose how many of the available features $x_{1}, x_{2},\ldots$
of a house we want to use in order to predict the house price $y$. Let us now explore
the effect of using a varying number $r$ of features on the resulting training error and
computational complexity (runtime). <br />
In particular, for each $r=1,2,\ldots,10$, the code snippet below fits a linear model under
Huber loss to the house sales dataset (using $m=10$ data points) by using only the
first $r$ features $x_{1},...,x_{r}$ of a house.
<br />
- The first $r$ features and labels for the previously sold houses can be obtained using `GetFeaturesLabels(m,r)`.<br />
- For each value of $r$, the computation time of the fitting method `HuberRegressor.fit()` and the resulting training error (using the Python function `mean_squared_error()`) of the fitted linear model are recorded. <br />
- The results are stored in two vectors `linreg_time` (multiply by 1000 to get time in milliseconds) and `linreg_error`.
</div>
```python
import time
m = 10 # we use 10 data points of the house sales database
max_r = 10 # maximum number of features used
X,y = GetFeaturesLabels(m,max_r) # read in m data points using max_r features
linreg_time = np.zeros(max_r) # vector for storing the exec. times of LinearRegresion.fit() for each r
linreg_error = np.zeros(max_r) # vector for storing the training error of LinearRegresion.fit() for each r
for r in range(max_r):
reg_hub = HuberRegressor(fit_intercept=False)
start_time = time.time()
reg_hub = reg_hub.fit(X[:,:(r+1)], y)
end_time = (time.time() - start_time)*1000
linreg_time[r] = end_time
pred = reg_hub.predict(X[:,:(r+1)])
linreg_error[r] = mean_squared_error(y, pred)
plot_x = np.linspace(1, max_r, max_r, endpoint=True)
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(8, 4))
axes[0].plot(plot_x, linreg_error, label='MSE', color='red')
axes[1].plot(plot_x, linreg_time, label='time', color='green')
axes[0].set_xlabel('features')
axes[0].set_ylabel('empirical error')
axes[1].set_xlabel('features')
axes[1].set_ylabel('Time (ms)')
axes[0].set_title('training error vs number of features')
axes[1].set_title('computation time vs number of features')
axes[0].legend()
axes[1].legend()
plt.tight_layout()
plt.show()
```
```python
```
```python
```
```python
```
```python
import numpy as np
import math as mt
import sympy as sym
```
```python
theta = sym.Symbol('theta')
alpha = sym.Symbol('alpha')
costheta = sym.Symbol('costheta')
cosalpha = sym.Symbol('cosalpha')
sintheta = sym.Symbol('sintheta')
sinalpha = sym.Symbol('sinalpha')
l1 = sym.Symbol('l1')
l2 = sym.Symbol('l2')
```
```python
var('theta alpha l1 l2 costheta cosalpha sintheta sinalpha')
```
(theta, alpha, l1, l2, costheta, cosalpha, sintheta, sinalpha)
```python
A_1 = np.array([[costheta, -sintheta, 0, l1*sintheta],[sintheta,costheta, 0, l1*costheta],[0,0,1,0],[0,0,0,1]])
A_2 = np.array([[cosalpha,-sinalpha, 0,l2*sinalpha],[sinalpha, cosalpha, 0, l2*cosalpha],[0,0,1,0],[0,0,0,1]])
```
```python
A_1.dot(A_2)  # matrix product of the two homogeneous transformation matrices
```
```python
a1 = matrix(SR, [[costheta, -sintheta, 0, l1*sintheta],[sintheta,costheta, 0, l1*costheta],[0,0,1,0],[0,0,0,1]])
a2 = matrix(SR,[[cosalpha,-sinalpha, 0,l2*sinalpha],[sinalpha, cosalpha, 0, l2*cosalpha],[0,0,1,0],[0,0,0,1]])
```
```python
a1*a2
```
[ cosalpha*costheta - sinalpha*sintheta -costheta*sinalpha - cosalpha*sintheta 0 costheta*l2*sinalpha - cosalpha*l2*sintheta + l1*sintheta]
[ costheta*sinalpha + cosalpha*sintheta cosalpha*costheta - sinalpha*sintheta 0 cosalpha*costheta*l2 + l2*sinalpha*sintheta + costheta*l1]
[ 0 0 1 0]
[ 0 0 0 1]
```python
```
# scqubits example: transmon qubit
J. Koch and P. Groszkowski
For further documentation of scqubits see https://scqubits.readthedocs.io/en/latest/.
---
Set up Matplotlib for plotting into notebook, and import scqubits and numpy:
```python
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
import numpy as np
import scqubits as scq
```
# Transmon qubit
The transmon qubit is described by the Hamiltonian
\begin{equation}
H=4E_\text{C}(\hat{n}-n_g)^2-\frac{1}{2}E_\text{J}\sum_n(|n\rangle\langle n+1|+\text{h.c.}),
\end{equation}
expressed in terms of the charge operator $\hat{n}$ and its eigenstates $|n\rangle$; $E_C$ is the charging energy, $E_J$ the Josephson energy, and $n_g$ the offset charge.
<br>
**Creation via GUI** (ipywidgets needs to be installed for this to work.)
```python
tmon = scq.Transmon.create()
```
**Programmatic creation**
```python
tmon2 = scq.Transmon(
EJ=30.02,
EC=1.2,
ng=0.0,
ncut=31
)
```
**Displaying and modifying parameters**
```python
print(tmon)
```
```python
tmon.EJ = 16.5
print(tmon)
```
Modifying values in the above GUI works as well.
## Computing and plotting eigenenergies and wavefunctions
**Eigenenergies.** The energy eigenvalues for the transmon are obtained by calling the `eigenvals()` method. The optional parameter `evals_count` specifies the sought number of eigenenergies.
```python
tmon.eigenvals(evals_count=12)
```
**Eigenstates**. `eigensys` is used to obtain both eigenenergies and eigenstates (represented in charge basis).
```python
evals, evecs = tmon.eigensys()
```
**Plot energy levels**.
To plot eigenenergies vs. a qubit parameter (`EJ`, `EC`, or `ng`), we generate an array of values for the desired parameter and call the method `plot_evals_vs_paramvals`:
```python
ng_list = np.linspace(-2, 2, 220)
tmon.EJ = 0.1 # temporarily reduce EJ to see some charge dispersion
fig, axes = tmon.plot_evals_vs_paramvals('ng', ng_list, evals_count=6, subtract_ground=False)
tmon.EJ = 15 # switch back
```
```python
newfig, newaxes = tmon.plot_evals_vs_paramvals('ng', ng_list, evals_count=6, subtract_ground=False, fig_ax=(fig, axes))
```
```python
newfig
```
**Charge-basis wavefunction for eigenstate**
```python
tmon.plot_n_wavefunction(esys=None, which=0, mode='real');
```
**Phase-basis wavefunction for eigenstate**
```python
tmon.plot_wavefunction(esys=None, which=(0,1,2,3,8), mode='real');
```
```python
tmon.plot_phi_wavefunction(which=(0, 1, 4, 10), mode='abs_sqr');
```
**Dispersion of transition energies**
```python
EJvals = np.linspace(0.1, 50, 100)
tmon.plot_dispersion_vs_paramvals('ng', 'EJ', EJvals, ref_param='EC', transitions=(((0,1), (0,2))));
EJvals = np.linspace(0.1, 50, 35)
tmon.plot_dispersion_vs_paramvals('ng', 'EJ', EJvals, ref_param='EC', levels=(0,1,2));
```
## Calculating and visualizing matrix elements
**Compute matrix elements.** Matrix elements can be calculated by referring to the `Transmon` operator methods in string form. For instance, `.n_operator` yields the charge operator:
```python
tmon.matrixelement_table('n_operator', evals_count=3)
```
**Plotting matrix elements**. Calling the `.plot_matrixelements` method yields:
```python
tmon.plot_matrixelements('n_operator', evals_count=10);
```
```python
tmon.plot_matrixelements('cos_phi_operator', evals_count=10, show3d=False, show_numbers=True);
```
**Plot matrix elements vs. parameter value**
```python
tmon.EJ = 7.0
fig, ax = tmon.plot_matelem_vs_paramvals('n_operator', 'ng', ng_list, select_elems=4);
```
# Tunable Transmon
Replacing the above transmon's Josephson junction by a SQUID loop makes the transmon tunable. The resulting Hamiltonian is
\begin{equation}
H_\text{CPB}=4E_\text{C}(\hat{n}-n_g)^2-\frac{1}{2}E_\text{J,eff}(\Phi_\text{ext})\sum_n(|n\rangle\langle n+1|+\text{h.c.}),
\end{equation}
expressed in the charge basis. Here, parameters are as above except for the effective Josephson energy $E_\text{J,eff}(\Phi_\text{ext}) = E_{\text{J,max}} \sqrt{\cos^2(\pi\Phi_\text{ext}/\Phi_0)+ d^2 \sin^2 (\pi\Phi_\text{ext}/\Phi_0)}$, where $E_\text{J,max} = E_\text{J1} + E_\text{J2}$ is the maximum Josephson energy, and $d=(E_\text{J1}-E_\text{J2})/(E_\text{J1}+E_\text{J2})$ is the relative junction asymmetry.
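As a quick illustration (plain NumPy, with assumed parameter values), the effective Josephson energy can be evaluated as a function of the external flux:
```python
# sketch: effective Josephson energy of the SQUID loop vs. external flux
import numpy as np

EJmax, d = 50.0, 0.01                          # assumed example values
flux = np.linspace(0, 1, 5)                    # Phi_ext / Phi_0
EJ_eff = EJmax * np.sqrt(np.cos(np.pi * flux)**2 + d**2 * np.sin(np.pi * flux)**2)
print(EJ_eff)                                  # minimum of about d * EJmax at half-integer flux
```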
<br>
**Create instance.** An instance of a tunable transmon qubit is obtained like this:
```python
tune_tmon = scq.TunableTransmon(
EJmax=50.0,
EC=0.5,
d=0.01,
flux=0.0,
ng=0.0,
ncut=30
)
```
**Create via GUI**
```python
tune_tmon = scq.TunableTransmon.create()
```
```python
flux_list = np.linspace(-1.1, 1.1, 220)
tune_tmon.plot_evals_vs_paramvals('flux', flux_list, subtract_ground=True);
```
```python
```
## Performance Indicator
It is fundamental for any algorithm to measure the performance. In a multi-objective scenario, we can not calculate the distance to the true global optimum but must consider a set of solutions. Moreover, sometimes the optimum is not even known, and other techniques must be used.
First, let us consider a scenario where the Pareto-front is known:
```python
import numpy as np
from pymoo.factory import get_problem
from pymoo.visualization.scatter import Scatter
# The pareto front of a scaled zdt1 problem
pf = get_problem("zdt1").pareto_front()
# The result found by an algorithm
A = pf[::10] * 1.1
# plot the result
Scatter(legend=True).add(pf, label="Pareto-front").add(A, label="Result").show()
```
### Generational Distance (GD)
The GD performance indicator <cite data-cite="gd"></cite> measures the distance from the solution set to the Pareto-front. Let us assume the points found by our algorithm form the objective vector set $A=\{a_1, a_2, \ldots, a_{|A|}\}$ and the reference point set (Pareto-front) is $Z=\{z_1, z_2, \ldots, z_{|Z|}\}$. Then,
\begin{align}
\begin{split}
\text{GD}(A) & = & \; \frac{1}{|A|} \; \bigg( \sum_{i=1}^{|A|} d_i^p \bigg)^{1/p}\\[2mm]
\end{split}
\end{align}
where $d_i$ represents the Euclidean distance (p=2) from $a_i$ to its nearest reference point in $Z$. Basically, this results in the average distance from any point in $A$ to the closest point in the Pareto-front.
```python
from pymoo.factory import get_performance_indicator
gd = get_performance_indicator("gd", pf)
print("GD", gd.calc(A))
```
GD 0.05497689467314528
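As a cross-check, the same quantity can be computed directly with NumPy. The snippet below is a sketch that averages the nearest-point Euclidean distances (assumed to be the convention used by pymoo here); swapping the roles of `A` and `pf` gives the plain IGD.
```python
# sketch: GD "by hand" using the A and pf arrays defined above
dists = np.sqrt(((A[:, None, :] - pf[None, :, :])**2).sum(axis=2))  # pairwise distances
print(dists.min(axis=1).mean())   # mean distance from each a_i to its nearest z in pf
```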
### Generational Distance Plus (GD+)
Ishibuchi et al. proposed GD+ in <cite data-cite="igd_plus"></cite>:
\begin{align}
\begin{split}
\text{GD}^+(A) & = & \; \frac{1}{|A|} \; \bigg( \sum_{i=1}^{|A|} {d_i^{+}}^2 \bigg)^{1/2}\\[2mm]
\end{split}
\end{align}
where for minimization $d_i^{+} = max \{ a_i - z_i, 0\}$ represents the modified distance from $a_i$ to its nearest reference point in $Z$ with the corresponding value $z_i$.
```python
from pymoo.factory import get_performance_indicator
gd_plus = get_performance_indicator("gd+", pf)
print("GD+", gd_plus.calc(A))
```
GD+ 0.05497689467314528
### Inverted Generational Distance (IGD)
The IGD performance indicator <cite data-cite="igd"></cite> inverts the generational distance and measures the distance from any point in $Z$ to the closest point in $A$.
\begin{align}
\begin{split}
\text{IGD}(A) & = & \; \frac{1}{|Z|} \; \bigg( \sum_{i=1}^{|Z|} \hat{d_i}^p \bigg)^{1/p}\\[2mm]
\end{split}
\end{align}
where $\hat{d_i}$ represents the Euclidean distance (p=2) from $z_i$ to its nearest point in $A$.
```python
from pymoo.factory import get_performance_indicator
igd = get_performance_indicator("igd", pf)
print("IGD", igd.calc(A))
```
IGD 0.06690908300327662
### Inverted Generational Distance Plus (IGD+)
In <cite data-cite="igd_plus"></cite>, Ishibuchi et al. proposed IGD+, which is weakly Pareto compliant, whereas the original IGD is not.
\begin{align}
\begin{split}
\text{IGD}^{+}(A) & = & \; \frac{1}{|Z|} \; \bigg( \sum_{i=1}^{|Z|} {d_i^{+}}^2 \bigg)^{1/2}\\[2mm]
\end{split}
\end{align}
where for minimization $d_i^{+} = max \{ a_i - z_i, 0\}$ represents the modified distance from $z_i$ to the closest solution in $A$ with the corresponding value $a_i$.
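To see what the modified distance does, the sketch below evaluates $d^{+}$ for a single, made-up solution/reference pair; only the objectives in which the solution is worse than the reference point contribute.
```python
# sketch: modified distance d+ used by GD+/IGD+ (minimization)
a = np.array([0.4, 0.7])    # hypothetical solution in objective space
z = np.array([0.5, 0.6])    # hypothetical nearest reference point
print(np.linalg.norm(np.maximum(a - z, 0.0)))   # only the second objective contributes
```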
```python
from pymoo.factory import get_performance_indicator
igd_plus = get_performance_indicator("igd+", pf)
print("IGD+", igd_plus.calc(A))
```
IGD+ 0.06466828842775944
### Hypervolume
For all performance indicators shown so far, a target set needs to be known. For Hypervolume, only a reference point needs to be provided. First, we would like to mention that we are using the Hypervolume implementation from [DEAP](https://deap.readthedocs.io/en/master/). It calculates the area/volume that is dominated by the provided set of solutions with respect to a reference point.
<div style="display: block;margin-left: auto;margin-right: auto;width: 40%;">
</div>
This image is taken from <cite data-cite="hv"></cite> and illustrates a two-objective example in which the area dominated by a set of points is shown in grey.
Whereas for the other metrics, the goal was to minimize the distance to the Pareto-front, here, we desire to maximize the performance metric.
```python
from pymoo.factory import get_performance_indicator
hv = get_performance_indicator("hv", ref_point=np.array([1.2, 1.2]))
print("hv", hv.calc(A))
```
hv 0.9631646448182305
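In the two-objective case the hypervolume can also be obtained with a simple sweep over the points sorted by the first objective. The sketch below assumes `A` contains mutually non-dominated points that are all dominated by the reference point, and should agree with the pymoo value above.
```python
# sketch: brute-force 2-D hypervolume for a minimization problem
def hypervolume_2d(points, ref_point):
    P = points[np.argsort(points[:, 0])]    # sort by f1; f2 is then non-increasing
    hv, prev_f2 = 0.0, ref_point[1]
    for f1, f2 in P:                        # add one rectangle per point
        hv += (ref_point[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

print(hypervolume_2d(A, np.array([1.2, 1.2])))
```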
###### Content under Creative Commons Attribution license CC-BY 4.0, code under MIT license (c) 2014 F.J.Gonzales. Portions of the code adopted from the #numericalmooc materials, also under CC-BY.
# The French Connec... eh solution
## Solving the 1-D wave equation
Welcome to this bonus notebook that ties into the second module of ["Practical Numerical Methods with Python"](http://openedx.seas.gwu.edu/courses/GW/MAE6286/2014_fall/about).
In the first notebook of the second module, we were introduced to the numerical solutions of *partial differential equations (PDEs)*. The equation used in that notebook was the 1-D linear convection equation, AKA *the advection equation*.
However, in this notebook we will explore the 1-D wave equation shown below:
\begin{equation}\frac{\partial^2 u}{\partial t^2}= c^2 \frac{\partial^2 u}{\partial x^2}.
\end{equation}
The wave equation is second order in both spatial and temporal dimensions.
If the initial displacement $f(x)$ and initial velocity $g(x)$ are specified for the wave equation, then the solution of the system is:
\begin{equation} u(x,t) = \frac{1}{2}\left[f(x-ct)+f(x+ct)\right]+\frac{1}{2c}\int_{x-ct}^{x+ct} g(\tau)\, d\tau \end{equation}
This solution is known as the d'Alembert solution for the 1-D wave equation.
For this notebook we will focus on the special case where the initial velocity is zero, $g(x)=0$
and the solution is simplified to the equation below:
\begin{equation}\ u(x,t)=\frac{1}{2}[f(x-ct)+f(x+ct)]\end{equation}
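As a quick check, SymPy can verify that this expression satisfies the wave equation for an arbitrary twice-differentiable $f$; the short, self-contained sketch below prints zero.
```
# sketch: u(x,t) = (f(x-ct) + f(x+ct))/2 satisfies u_tt = c**2 * u_xx
import sympy as sp

x, t, c = sp.symbols('x t c')
f = sp.Function('f')
u = (f(x - c*t) + f(x + c*t)) / 2
print(sp.simplify(u.diff(t, 2) - c**2 * u.diff(x, 2)))   # -> 0
```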
## The Guitar pick example
A classical example of the 1-D wave equation is the motion of a guitar string after it is plucked.
###### Image credit: G. Everstine, 2012 (Fig. 14, page 23).
As the figure above shows, the original triangular displacement splits into two waves traveling in opposite directions. One can also see that each of the travelling waves is half the height of the original function, which corresponds to d'Alembert's solution shown previously.
## Wave motion for a bar
Now that we have established the solution for the 1-D wave equation, we will use this to simulate the longitudinal wave motion on a finite length bar.
### Problem Description
Let's say we have a bar of finite length as shown below. The length is $10 \rm{m}$ in the $x$ direction. As can be seen, the system ends with a dashpot, also known as a damper; this dashpot is an important part of this problem because we will see how this will create a non-reflecting boundary condition.
###### Image credit: G. Everstine, 2012 (Fig. 22 in page 30).
In addition to the problem description, the initial displacement $f(x)$ is defined below:
\begin{equation}
u(x,0)=\begin{cases}2-2(x-5)^2 & \text{where } 4\leq x \leq 6,\\
0 & \text{everywhere else in } (0, 10)
\end{cases}
\end{equation}
### Discretizing the equation
Since the 1-D wave equation is second order in time and space, we will discretize the equation using central difference for both dimensions. Below is the discretized equation:
\begin{equation}\frac{u_i^{n+1}-2u_i^n+u_i^{n-1}}{c^2\Delta t^2} = \frac{u_{i+1}^n - 2u_i^n+u_{i-1}^n}{\Delta x^2} \end{equation}
Solving for the unknown in the discretized equation we get the following:
\begin{equation}\ u_i^{n+1}=-u_i^{n-1}+2u_i^n+C^2(u_{i+1}^n-2u_i^{n}+u_{i-1}^n) \end{equation}
In this equation $C= c\frac{\Delta t}{\Delta x}$, which equates to the CFL number.
In addition to discretizing the wave equation, we also want to take a look at our boundary conditions. On the left-hand-side boundary, the bar is fixed, which corresponds to a Dirichlet boundary condition where the displacement is $u=0$.
The right-hand-side boundary is more tricky and that is where the idea of the nonreflecting boundary condition comes into play.
If we look at a simplified version of d'Alembert's solution, $f(x-ct)+f(x+ct),$ we can see that the first term represents the wave that is moving towards the right boundary while the latter term represents the wave that bounces away from the right boundary. In the bar we are analyzing, the dashpot at the right-hand-side means that the second return wave term would go to zero.
Differentiating the first term of D'Alembert's solution in terms of space and time would result in the following PDEs:
\begin{equation}\frac{\partial u}{\partial x}= f^\prime,\frac{\partial u}{\partial t}= -cf^\prime\end{equation}
Rearranging the above equations gives the boundary condition we have to enforce on the right-hand side:
\begin{equation}\frac{\partial u}{\partial x}=-\frac{1}{c}\frac{\partial u}{\partial t}\end{equation}
Discretizing our non-reflecting boundary:
\begin{equation}\frac{u_i^n-u_{i-1}^n}{\Delta x} = \frac{-1}{c}\frac{u_i^{n+1}-u_i^n}{\Delta t}\end{equation}
and isolating for the end boundary:
\begin{equation}u_i^n= -\frac{\Delta x}{c\Delta t}(u_i^{n+1}-u_i^n)+u_{i-1}^n\end{equation}
### Let's rev up some code!
The first thing we always do is to import the necessary libraries for this problem. In this case we will use both NumPy and SymPy. We will also use Matplotlib for plotting our displacement.
```
import numpy as np
import sympy as sp
from sympy.utilities.lambdify import lambdify
import matplotlib.pyplot as plt
%matplotlib inline
from matplotlib import rcParams
rcParams['font.family'] = 'serif'
rcParams['font.size'] = 16
```
We then create the initial condition for the bar.
```
# initial conditions and parameters
nx= 50
x= sp.symbols('x')
func= 2-2*(x-5)**2
init_func= lambdify((x), func)
x_array= np.linspace(0,10,nx+1)
u_initial= np.asarray([init_func(x0) for x0 in x_array])
```
Let's plot the initial function so we can start off on the right foot.
```
plt.plot(x_array, u_initial, color='#003366', ls='--', lw=3)
plt.xlim(0,10)
plt.ylim(0,3)
plt.xlabel('x')
plt.ylabel('Displacement')
```
Now we can define a function to solve for the displacement.
```
def barsolver(u_init, T, C, nx):
''' Returns the displacement of the wave equation for a bar.
Parameters
----------
u_init: array of float
initial displacement of bar
T: integer
final time for calculation
C: integer
CFL number for stability
nx: integer
number of gridsteps
Returns
-------
u: array of float
final displacement of bar wave motion
'''
#initial parameters
c= 2
C2= C**2
dx= 10./nx
dt = C*dx/c
nt= int(round(T/dt))
#create arrays
u= np.zeros(nx+1) #array holding u for u[n+1]
u_1= np.zeros(nx+1) #array holding u for u[n]
u_2= np.zeros(nx+1) #array holding u for u[n-1]
    u_1[int(4/dx):int(6/dx)+1] = u_init[int(4/dx):int(6/dx)+1]  # integer indices for slicing
# Loop for first time step
n = 0
for i in range(1, nx):
u[i] = u_1[i] + 0.5*C2*(u_1[i-1] - 2*u_1[i] + u_1[i+1])
# Enforce boundary conditions
u[0] = 0; u[-1] = -dx/(c*dt)*(u[i]-u_1[i])+u[-2]
# Switch variables before next step
u_2[:], u_1[:] = u_1, u
# Loops for subsequent time steps
for n in range(1, nt):
for i in range(1, nx):
u[i] = - u_2[i] + 2*u_1[i] + C2*(u_1[i-1] - 2*u_1[i] + u_1[i+1])
# Enforce boundary conditions
u[0] = 0; u[-1] = -dx/(c*dt)*(u[i]-u_1[i])+u[-2]
# Switch variables before next step
u_2[:], u_1[:] = u_1, u
return u
```
Let's run the solver for $t=0$ and plot the displacement.
```
u_ans= barsolver(u_initial,0,1,50)
```
```
plt.plot(x_array,u_ans)
plt.xlim(0,10)
plt.ylim(-3,3)
```
As we expected, the displacement is equal to the initial parabola.
```
u_ans= barsolver(u_initial,2,1,50) #solving at t= 2
```
```
plt.plot(x_array,u_ans)
plt.xlim(0,10)
plt.ylim(-2,2)
```
```
u_ans= barsolver(u_initial,4,1,50) #solving at t= 4
```
```
plt.plot(x_array,u_ans)
plt.xlim(0,10)
plt.ylim(-2,2)
```
As we can see from the plots, the right-moving wave has been absorbed by the dashpot at the right end, while the left-moving wave has reflected off the fixed left end, flipped sign, and is now moving in the right direction.
Our code has expressed correctly the non-reflecting boundary!
##### Now your turn
What do you think will happen as more time passes?
You will need to plot the displacement for t= 6, 8, 10 and check if what you expected was correct.
## Reference
* Barba, Lorena A., et al. "MAE 6286 Practical Numerical Methods with Python," GW Open edX, The George Washingtion University, 2014. http://openedx.seas.gwu.edu/courses/GW/MAE6286/2014_fall/about .
* Chapra, Steven C and Canale, Raymond P., "Numerical Methods for Engineers," McGraw-Hill Education, Inc., New York, 2010.
* Everstine, Gordon C. "Analytical Solution of Partial Differential Equations," George Washington University, 2012. [PDF](http://gwu.geverstine.com/pdenum.pdf)
* Fitzpatrick, Richard. "Introduction: Wave Equation," University of Texas, 2006. [Wave Equation](http://farside.ph.utexas.edu/teaching/329/lectures/node89.html).
* Strauss, Walter A., "Partial Differential Equations: An Introduction," Wiley, Inc., New York, 2008.
---
###### The cell below loads the style of the notebook
```
from IPython.core.display import HTML
css_file = '../../styles/numericalmoocstyle.css'
HTML(open(css_file, "r").read())
```
<link href='http://fonts.googleapis.com/css?family=Alegreya+Sans:100,300,400,500,700,800,900,100italic,300italic,400italic,500italic,700italic,800italic,900italic' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Arvo:400,700,400italic' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=PT+Mono' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Shadows+Into+Light' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Nixie+One' rel='stylesheet' type='text/css'>
<style>
@font-face {
font-family: "Computer Modern";
src: url('http://mirrors.ctan.org/fonts/cm-unicode/fonts/otf/cmunss.otf');
}
#notebook_panel { /* main background */
background: rgb(245,245,245);
}
div.cell { /* set cell width */
width: 750px;
}
div #notebook { /* centre the content */
background: #fff; /* white background for content */
width: 1000px;
margin: auto;
padding-left: 0em;
}
#notebook li { /* More space between bullet points */
margin-top:0.8em;
}
/* draw border around running cells */
div.cell.border-box-sizing.code_cell.running {
border: 1px solid #111;
}
/* Put a solid color box around each cell and its output, visually linking them*/
div.cell.code_cell {
background-color: rgb(256,256,256);
border-radius: 0px;
padding: 0.5em;
margin-left:1em;
margin-top: 1em;
}
div.text_cell_render{
font-family: 'Alegreya Sans' sans-serif;
line-height: 140%;
font-size: 125%;
font-weight: 400;
width:600px;
margin-left:auto;
margin-right:auto;
}
/* Formatting for header cells */
.text_cell_render h1 {
font-family: 'Nixie One', serif;
font-style:regular;
font-weight: 400;
font-size: 45pt;
line-height: 100%;
color: rgb(0,51,102);
margin-bottom: 0.5em;
margin-top: 0.5em;
display: block;
}
.text_cell_render h2 {
font-family: 'Nixie One', serif;
font-weight: 400;
font-size: 30pt;
line-height: 100%;
color: rgb(0,51,102);
margin-bottom: 0.1em;
margin-top: 0.3em;
display: block;
}
.text_cell_render h3 {
font-family: 'Nixie One', serif;
margin-top:16px;
font-size: 22pt;
font-weight: 600;
margin-bottom: 3px;
font-style: regular;
color: rgb(102,102,0);
}
.text_cell_render h4 { /*Use this for captions*/
font-family: 'Nixie One', serif;
font-size: 14pt;
text-align: center;
margin-top: 0em;
margin-bottom: 2em;
font-style: regular;
}
.text_cell_render h5 { /*Use this for small titles*/
font-family: 'Nixie One', sans-serif;
font-weight: 400;
font-size: 16pt;
color: rgb(163,0,0);
font-style: italic;
margin-bottom: .1em;
margin-top: 0.8em;
display: block;
}
.text_cell_render h6 { /*use this for copyright note*/
font-family: 'PT Mono', sans-serif;
font-weight: 300;
font-size: 9pt;
line-height: 100%;
color: grey;
margin-bottom: 1px;
margin-top: 1px;
}
.CodeMirror{
font-family: "PT Mono";
font-size: 90%;
}
</style>
# Modeling a Ball Channel Pendulum
Below is a video of a simple cardboard pendulum that has a metal ball in a semi-circular channel mounted above the pendulum's rotational joint. It is an interesting dynamic system that can be constructed and experimented with. This system seems to behave like a single degree of freedom system, i.e. the ball's location appears to be a kinematic function of the pendulum's angle. But this may not actually be the case; it depends on the nature of the motion of the ball with respect to the channel. If the ball rolls without slipping in the channel, it is a single degree of freedom system. If the ball can slip and roll, it is at a minimum a two degree of freedom system. In this notebook we will derive the equations of motion of the system assuming that the ball slips and does not roll in the channel.
```python
from IPython.display import YouTubeVideo
YouTubeVideo('3pJdkssUdfU', width=600, height=480)
```
# Imports and setup
```python
import sympy as sm
import numpy as np
```
```python
sm.init_printing()
```
```python
%matplotlib widget
```
# Free Body Diagram
Assumptions:
- Pendulum pivot is frictionless
- Pendulum is a simple pendulum
- Ball is a point mass that slides without friction in the pendulum channel
# Constants
Create a symbol for each of the system's constant parameters.
- $m_p$: mass of the pendulum
- $m_b$: mass of the ball
- $l$: length of the pendulum
- $r$: radius of the channel
- $g$: acceleration due to gravity
```python
mp, mb, r, l, g = sm.symbols('m_p, m_b, r, l, g', real=True, positive=True)
```
# Generalized Coordinates
Create functions of time for each generalized coordinate.
- $\theta(t)$: angle of the pendulum
- $\phi(t)$: angle of the line from the center of the channel semi-circle to the ball
```python
t = sm.symbols('t')
```
```python
theta = sm.Function('theta')(t)
phi = sm.Function('phi')(t)
```
```python
theta.diff()
```
```python
theta.diff(t)
```
Introduce two new variables for the generalized speeds:
$$
\alpha = \dot{\theta} \\
\beta = \dot{\phi}
$$
```python
alpha = sm.Function('alpha')(t)
beta = sm.Function('beta')(t)
```
# Kinetic Energy
Write the kinetic energy in terms of the generalized coordinates.
Pendulum:
```python
Tp = mp * (l * alpha)**2 / 2
Tp
```
Ball
```python
Tb = mb / 2 * ((-r * alpha * sm.cos(theta) + beta * r * sm.cos(theta + phi))**2 +
(-r * alpha * sm.sin(theta) + beta * r * sm.sin(theta + phi))**2)
Tb
```
```python
T = Tp + Tb
T
```
# Potential Energy
Each particle (pendulum bob and the ball) has a potential energy associated with how high the mass rises.
```python
U = mp * g * (l - l * sm.cos(theta)) + mb * g * (r * sm.cos(theta) - r * sm.cos(theta + phi))
U
```
# Lagrange's equation of the second kind
There are two generalized coordinates with two degrees of freedom and thus two equations of motion.
$$
0 = f_\theta(\theta, \phi, \alpha, \beta, \dot{\alpha}, \dot{\beta}, t) \\
0 = f_\phi(\theta, \phi, \alpha, \beta, \dot{\alpha}, \dot{\beta}, t) \\
$$
```python
L = T - U
L
```
```python
gs_repl = {theta.diff(): alpha, phi.diff(): beta}
```
```python
f_theta = L.diff(alpha).diff(t).subs(gs_repl) - L.diff(theta)
f_theta = sm.trigsimp(f_theta)
f_theta
```
```python
f_phi = L.diff(beta).diff(t).subs(gs_repl) - L.diff(phi)
f_phi = sm.trigsimp(f_phi)
f_phi
```
```python
f = sm.Matrix([f_theta, f_phi])
f
```
The equations are motion are based on Newton's second law and Euler's equations, thus it is guaranteed that terms in $\mathbf{f}$ that include $\dot{\mathbf{u}}$ are linear with respect to $\dot{\mathbf{u}}$. So the equations of motion can be written in this matrix form:
$$
\mathbf{f}(\mathbf{c}, \mathbf{s}, \dot{\mathbf{s}}, t) = \mathbf{I}(\mathbf{c}, t)\dot{\mathbf{s}} + \mathbf{g}(\mathbf{c}, \mathbf{s}, t) = 0
$$
$\mathbf{I}$ is called the "mass matrix" of the nonlinear equations. If the derivatives of $\mathbf{f}$ with respect to $\dot{\mathbf{s}}$ are computed, i.e. the [Jacobian](https://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant) of $\mathbf{f}$ with respect to $\dot{\mathbf{s}}$, then you can obtain the mass matrix.
```python
sbar = sm.Matrix([alpha, beta])
sbar
```
```python
Imat = f.jacobian(sbar.diff())
Imat
```
$$\mathbf{g} = \mathbf{f}|_{\dot{\mathbf{s}}=\mathbf{0}}$$
```python
gbar = f.subs({alpha.diff(t): 0, beta.diff(t): 0})
gbar
```
The explicit first order form has all of the $\dot{\mathbf{s}}$ on the left hand side. This requires solving the linear system of equations:
$$\mathbf{I}\dot{\mathbf{s}}=-\mathbf{g}$$
The mathematical solution is:
$$\dot{\mathbf{s}}=-\mathbf{I}^{-1}\mathbf{g}$$
```python
sdotbar = -Imat.inv() * gbar
sdotbar.simplify()
sdotbar
```
A better way to solve the system of linear equations is to use Gaussian elimination. SymPy has a variety of methods for solving linear systems. The LU decomposition method of Gaussian elimination is a generally good choice for this, and for a large number of degrees of freedom it will provide reasonable computation time. For very large $n$ this should be done numerically instead of symbolically.
```python
sdotbar = -Imat.LUsolve(gbar)
sdotbar.simplify()
sdotbar
```
Note the differences in timing below. For systems with a large number of degrees of freedom, this gap in timing will increase significantly.
```python
%%timeit
-Imat.inv() * gbar
```
874 ms ± 68.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```python
%%timeit
-Imat.LUsolve(gbar)
```
883 µs ± 148 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
# Simulation of the nonlinear system
Resonance has a prepared system that is only missing the equations of motion.
```python
from resonance.nonlinear_systems import BallChannelPendulumSystem
```
```python
sys = BallChannelPendulumSystem()
```
```python
sys.constants
```
{'mp': 0.012, 'mb': 0.0035, 'r': 0.1, 'l': 0.2, 'g': 9.81}
```python
sys.coordinates
```
_CoordinatesDict([('theta', 0.17453292519943295), ('phi', -0.17453292519943295)])
```python
sys.speeds
```
_CoordinatesDict([('alpha', 0.0), ('beta', 0.0)])
The full first order ordinary differential equations are:
$$
\dot{\theta} = \alpha \\
\dot{\phi} = \beta \\
\dot{\alpha} = f_{\alpha}(\theta, \phi, \alpha, \beta, t) \\
\dot{\beta} = f_{\beta}(\theta, \phi, \alpha, \beta, t)
$$
where:
$$
\mathbf{c}=\begin{bmatrix}
\theta \\
\phi
\end{bmatrix} \\
\dot{\mathbf{s}}=-\mathbf{I}^{-1}\mathbf{g} =
\begin{bmatrix}
f_{\alpha}(\theta, \phi, \alpha, \beta, t) \\
f_{\beta}(\theta, \phi, \alpha, \beta, t)
\end{bmatrix}
$$
Introducing:
$$\mathbf{x} =
\begin{bmatrix}
\mathbf{c} \\
\mathbf{s}
\end{bmatrix}=
\begin{bmatrix}
\theta \\
\phi \\
\alpha \\
\beta
\end{bmatrix}
$$
we have equations for:
$$\dot{\mathbf{x}} = \begin{bmatrix}
\dot{\theta} \\
\dot{\phi} \\
\dot{\alpha} \\
\dot{\beta}
\end{bmatrix}
=
\begin{bmatrix}
\alpha \\
\beta \\
f_{\alpha}(\theta, \phi, \alpha, \beta, t) \\
f_{\beta}(\theta, \phi, \alpha, \beta, t)
\end{bmatrix}
$$
To find $\mathbf{x}$ we must integrate $\dot{\mathbf{x}}$ with respect to time:
$$
\mathbf{x} = \int_{t_0}^{t_f} \dot{\mathbf{x}} dt
$$
Resonance uses numerical integration behind the scenes to compute this integral. Numerical integration routines typically require that you write a function that computes the right hand side of the first order form of the differential equations. This function takes the current state and time and computes the derivative of the states.
SymPy's `lambdify` function can convert symbolic expressions into NumPy-aware functions, i.e. Python functions that can accept NumPy arrays.
```python
eval_alphadot = sm.lambdify((phi, theta, alpha, beta, mp, mb, l, r, g), sdotbar[0])
```
```python
eval_alphadot(1, 2, 3, 4, 5, 6, 7, 8, 9)
```
```python
eval_betadot = sm.lambdify((phi, theta, alpha, beta, mp, mb, l, r, g), sdotbar[1])
```
```python
eval_betadot(1, 2, 3, 4, 5, 6, 7, 8, 9)
```
Now the right hand side (of the explicit ODEs) function can be written:
```python
def rhs(phi, theta, alpha, beta, mp, mb, l, r, g):
theta_dot = alpha
phi_dot = beta
alpha_dot = eval_alphadot(phi, theta, alpha, beta, mp, mb, l, r, g)
beta_dot = eval_betadot(phi, theta, alpha, beta, mp, mb, l, r, g)
return theta_dot, phi_dot, alpha_dot, beta_dot
```
```python
rhs(1, 2, 3, 4, 5, 6, 7, 8, 9)
```
This function also works with numpy arrays:
```python
rhs(np.array([1, 2]), np.array([3, 4]), np.array([5, 6]), np.array([7, 8]), 9, 10, 11, 12, 13)
```
(array([5, 6]),
array([7, 8]),
array([-27.2770872 , -36.73982421]),
array([-13.91800374, 15.59186174]))
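As a point of reference, the sketch below shows roughly what this numerical integration looks like if you drive it yourself with SciPy's `solve_ivp` instead of letting Resonance do it behind the scenes. It reuses the `rhs` function defined above; the wrapper name `rhs_ivp`, the constant variable names, and the step setting are just illustrative choices, with the constant values taken from `sys.constants` shown earlier.
```python
import numpy as np
from scipy.integrate import solve_ivp

# constant values as shown in sys.constants above
mp_v, mb_v, r_v, l_v, g_v = 0.012, 0.0035, 0.1, 0.2, 9.81

def rhs_ivp(t, x):
    # x holds [theta, phi, alpha, beta]; rhs() returns their time derivatives
    theta, phi, alpha, beta = x
    return rhs(phi, theta, alpha, beta, mp_v, mb_v, l_v, r_v, g_v)

x0 = [np.deg2rad(10.0), -np.deg2rad(10.0), 0.0, 0.0]  # initial coordinates and speeds
sol = solve_ivp(rhs_ivp, (0.0, 20.0), x0, max_step=1/500)
theta_traj, phi_traj = sol.y[0], sol.y[1]
```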
Add this function as the differential equation function of the system.
```python
sys.diff_eq_func = rhs
```
Now the `free_response` function can be called to simulate the nonlinear system.
```python
traj = sys.free_response(20, sample_rate=500)
```
```python
traj[['theta', 'phi']].plot(subplots=True);
```
```python
sys.animate_configuration(fps=30, repeat=False)
```
<matplotlib.animation.FuncAnimation at 0x7f3885c36908>
# Equilibrium
This system has four equilibrium points.
1. $\theta = \phi = \alpha = \beta = 0$
2. $\theta = \pi, \phi=\alpha=\beta=0$
3. $\theta = \pi, \phi = -\pi, \alpha=\beta=0$
4. $\theta = 0, \phi = \pi, \alpha=\beta=0$
If you set the velocities and accelerations equal to zero in the equations of motion, you can then solve for the coordinates that make these equations equal to zero. These are the static force balance equations.
```python
static_repl = {alpha.diff(): 0, beta.diff(): 0, alpha: 0, beta: 0}
static_repl
```
```python
f_static = f.subs(static_repl)
f_static
```
```python
sm.solve(f_static, theta, phi)
```
Let's look at the simulation with the initial condition very close to an equilibrium point:
```python
sys.coordinates['theta'] = np.deg2rad(180.0000001)
sys.coordinates['phi'] = -np.deg2rad(180.00000001)
```
```python
traj = sys.free_response(20, sample_rate=500)
```
```python
sys.animate_configuration(fps=30, repeat=False)
```
<matplotlib.animation.FuncAnimation at 0x7f3885a377f0>
```python
traj[['theta', 'phi']].plot(subplots=True);
```
This equilibrium point is an *unstable equilibrium*.
# Linearizing the system
The equations of motion can be linearized about one of the equilibrium points. This can be done by computing the linear terms of the multivariate Taylor Series expansion. This expansion can be expressed as:
$$
\mathbf{f}_{linear} = \mathbf{f}(\mathbf{v}_{eq}) + \mathbf{J}_{f,v}(\mathbf{v}_{eq}) (\mathbf{v} - \mathbf{v}_{eq})
$$
where $\mathbf{J}_{f,v}$ is the Jacobian of $\mathbf{f}$ with respect to $\mathbf{v}$ and:
$$
\mathbf{v} = \begin{bmatrix}
\theta\\
\phi \\
\alpha \\
\beta \\
\dot{\alpha} \\
\dot{\beta}
\end{bmatrix}
$$
In our case let's linearize about the static position where $\theta=\phi=0$.
```python
f
```
```python
v = sm.Matrix([theta, phi, alpha, beta, alpha.diff(), beta.diff()])
v
```
```python
veq = sm.zeros(len(v), 1)
veq
```
```python
v_eq_sub = dict(zip(v, veq))
v_eq_sub
```
The linear equations are then:
```python
f_lin = f.subs(v_eq_sub) + f.jacobian(v).subs(v_eq_sub) * (v - veq)
f_lin
```
Note that all of the terms that involve the coordinates, speeds, and their derivatives are now linear, i.e. they appear with simple constant coefficients. These linear equations can be put into this canonical form:
$$\mathbf{M}\dot{\mathbf{s}} + \mathbf{C}\mathbf{s} + \mathbf{K} \mathbf{c} = \mathbf{F}$$
with:
- $\mathbf{M}$ as the mass matrix
- $\mathbf{C}$ as the damping matrix
- $\mathbf{K}$ as the stiffness matrix
- $\mathbf{F}$ as the forcing vector
The Jacobian can again be utilized to extract the linear coefficients.
```python
cbar = sm.Matrix([theta, phi])
cbar
```
```python
sbar
```
```python
M = f_lin.jacobian(sbar.diff())
M
```
```python
C = f_lin.jacobian(sbar)
C
```
```python
K = f_lin.jacobian(cbar)
K
```
```python
F = -f_lin.subs(v_eq_sub)
F
```
# Simulate the linear system
```python
from resonance.linear_systems import BallChannelPendulumSystem
```
```python
lin_sys = BallChannelPendulumSystem()
```
For linear systems, a function that calculates the canonical coefficient matrices should be created. Each canonical matrix should be returned as a 2 x 2 NumPy array.
```python
def canon_coeff_matrices(mp, mb, l, g, r):
M = np.array([[mp * l**2 + mb * r**2, -mb * r**2],
[-mb * r**2, mb * r**2]])
C = np.zeros((2, 2))
K = np.array([[g * l * mp, g * mb * r],
[g * mb * r, g * mb * r]])
return M, C, K
```
```python
lin_sys.canonical_coeffs_func = canon_coeff_matrices
```
```python
M_num, C_num, K_num = lin_sys.canonical_coefficients()
```
```python
M_num
```
array([[ 5.15e-04, -3.50e-05],
[-3.50e-05, 3.50e-05]])
```python
C_num
```
array([[0., 0.],
[0., 0.]])
```python
K_num
```
array([[0.023544 , 0.0034335],
[0.0034335, 0.0034335]])
```python
lin_traj = lin_sys.free_response(20, sample_rate=500)
```
```python
lin_traj[['theta', 'phi']].plot(subplots=True);
```
# Compare the nonlinear and linear simulations
```python
sys.coordinates['theta'] = np.deg2rad(10)
sys.coordinates['phi'] = np.deg2rad(-10)
```
```python
lin_sys.coordinates['theta'] = np.deg2rad(10)
lin_sys.coordinates['phi'] = np.deg2rad(-10)
```
```python
traj = sys.free_response(10.0)
```
```python
lin_traj = lin_sys.free_response(10.0)
```
```python
axes = traj[['theta', 'phi']].plot(subplots=True, color='red')
axes = lin_traj[['theta', 'phi']].plot(subplots=True, color='blue', ax=axes)
axes[0].legend([r'nonlin $\theta$', r'lin $\theta$'])
axes[1].legend([r'nonlin $\phi$', r'lin $\phi$']);
```
| 1ee0d822d6b178493ce0de45b4e96033dec83e51 | 312,147 | ipynb | Jupyter Notebook | notebooks/09-2020/modeling_a_ball_channel_pendulum.ipynb | gbrault/resonance | bf66993a98fbbb857511f83bc072449b98f0b4c2 | ["MIT"] | 31 | 2017-11-10T16:44:04.000Z | 2022-01-13T12:22:02.000Z | notebooks/09-2020/modeling_a_ball_channel_pendulum.ipynb | gbrault/resonance | bf66993a98fbbb857511f83bc072449b98f0b4c2 | ["MIT"] | 178 | 2017-07-19T20:16:13.000Z | 2020-03-10T04:13:46.000Z | notebooks/09-2020/modeling_a_ball_channel_pendulum.ipynb | gbrault/resonance | bf66993a98fbbb857511f83bc072449b98f0b4c2 | ["MIT"] | 12 | 2018-04-05T22:58:43.000Z | 2021-01-14T04:06:26.000Z | 154.451757 | 31,473 | 0.858781 | true | 4,479 | Qwen/Qwen-72B | 1. YES 2. YES | 0.935347 | 0.839734 | 0.785442 | __label__eng_Latn | 0.882258 | 0.663178 |
```python
# Required to load webpages
from IPython.display import IFrame
```
[Table of contents](../toc.ipynb)
# SymPy
* SymPy is a symbolic mathematics library for Python.
* It is a very powerful computer algebra system, which is easy to include in your Python scripts.
* Please find the documentation and a tutorial here [https://www.sympy.org/en/index.html](https://www.sympy.org/en/index.html)
## SymPy live
There is a very nice SymPy live shell in [https://live.sympy.org/](https://live.sympy.org/), where you can try SymPy without installation.
```python
IFrame(src='https://live.sympy.org/', width=1000, height=600)
```
## SymPy import and basics
```python
import sympy as sp
%matplotlib inline
```
Symbols can be defined with `sympy.symbols` like:
```python
x, y, t = sp.symbols('x, y, t')
```
These symbols and later equations are rendered with LaTeX, which makes pretty prints.
```python
display(x)
```
$\displaystyle x$
Expressions can be easily defined, and equations with a left and right hand side are defined with the `sympy.Eq` function.
```python
expr = x**2
expr
```
$\displaystyle x^{2}$
```python
eq = sp.Eq(3*x, -10)
eq
```
$\displaystyle 3 x = -10$
Plots are done with `sympy.plot` and the value range can be adjusted.
```python
sp.plot(expr, (x, -5, 5))
```
## Why should you consider symbolic math at all?
The power of symbolic computation is its precision. Just compare these two results.
```python
import math
math.sqrt(8)
```
2.8284271247461903
```python
sp.sqrt(8)
```
$\displaystyle 2 \sqrt{2}$
You can simplify expressions and equations and also expand them.
```python
eq = sp.sin(x)**2 + sp.cos(x)**2
eq
```
$\displaystyle \sin^{2}{\left(x \right)} + \cos^{2}{\left(x \right)}$
```python
sp.simplify(eq)
```
$\displaystyle 1$
```python
eq = x*(x + y)
eq
```
$\displaystyle x \left(x + y\right)$
```python
sp.expand(eq)
```
$\displaystyle x^{2} + x y$
```python
sp.factor(eq)
```
$\displaystyle x \left(x + y\right)$
Differentiation and integration are built in of course.
```python
eq = sp.sin(x) * sp.exp(x)
eq
```
$\displaystyle e^{x} \sin{\left(x \right)}$
```python
sp.diff(eq, x)
```
$\displaystyle e^{x} \sin{\left(x \right)} + e^{x} \cos{\left(x \right)}$
```python
sp.integrate(eq, x)
```
$\displaystyle \frac{e^{x} \sin{\left(x \right)}}{2} - \frac{e^{x} \cos{\left(x \right)}}{2}$
Or define an explicit interval for the integration.
```python
sp.integrate(eq, (x, -10, 10))
```
$\displaystyle \frac{e^{10} \sin{\left(10 \right)}}{2} + \frac{\cos{\left(10 \right)}}{2 e^{10}} + \frac{\sin{\left(10 \right)}}{2 e^{10}} - \frac{e^{10} \cos{\left(10 \right)}}{2}$
We can also easily substitute one variable of an expression.
```python
eq.subs(x, 2)
```
$\displaystyle e^{2} \sin{\left(2 \right)}$
Solve one equation. $x^2 + 3 x = 10$.
```python
sp.solve(x**2 + 3*x - 10, x)
```
[-5, 2]
## More advanced topics
Here, we will solve a linear system of equations.
```python
e1 = sp.Eq(3*x + 4*y, -20)
e2 = sp.Eq(4*y, -3)
system_of_eq = [e1, e2]
from sympy.solvers.solveset import linsolve
linsolve(system_of_eq, (x, y))
```
$\displaystyle \left\{\left( - \frac{17}{3}, \ - \frac{3}{4}\right)\right\}$
Differential equations can also be solved. Let us solve $y'' - y = e^t$ for instance.
```python
y = sp.Function('y')
sp.dsolve(sp.Eq(y(t).diff(t, t) - y(t), sp.exp(t)), y(t))
```
$\displaystyle y{\left(t \right)} = C_{2} e^{- t} + \left(C_{1} + \frac{t}{2}\right) e^{t}$
Finally, we will have a short look at matrices.
```python
A = sp.Matrix([[0, 1],
[1, 0]])
A
```
$\displaystyle \left[\begin{matrix}0 & 1\\1 & 0\end{matrix}\right]$
```python
A = sp.eye(3)
A
```
$\displaystyle \left[\begin{matrix}1 & 0 & 0\\0 & 1 & 0\\0 & 0 & 1\end{matrix}\right]$
```python
A = sp.zeros(2, 3)
A
```
$\displaystyle \left[\begin{matrix}0 & 0 & 0\\0 & 0 & 0\end{matrix}\right]$
Inversion of a matrix is done with `**-1` or, more readably, with `.inv()`.
```python
A = sp.eye(2) * 4
A.inv()
```
$\displaystyle \left[\begin{matrix}\frac{1}{4} & 0\\0 & \frac{1}{4}\end{matrix}\right]$
```python
A[-2] = x
A
```
$\displaystyle \left[\begin{matrix}4 & 0\\x & 4\end{matrix}\right]$
## To sum up
* SymPy is a very powerful computer algebra package!
* It is light, small, and easy to install through pip and conda.
* Simple to integrate in your Python project.
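As a small closing sketch that combines the pieces shown above (symbols, differentiation, solving, and substitution), one can find and evaluate the stationary points of a simple expression; the expression itself is just an arbitrary example:
```python
import sympy as sp

x = sp.symbols('x')
expr = x**3 - 3*x                         # arbitrary example expression
crit = sp.solve(sp.diff(expr, x), x)      # stationary points: [-1, 1]
values = [expr.subs(x, c) for c in crit]  # expression values there: [2, -2]
crit, values
```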
| 90f20aee25b5ac16277b055a8fb6481697c71f61 | 35,385 | ipynb | Jupyter Notebook | 02_tools-and-packages/04_sympy.ipynb | rico-mix/py-algorithms-4-automotive-engineering | 1da36207aa27f1dbfbd8e829c28de356f2456163 | ["MIT"] | null | null | null | 02_tools-and-packages/04_sympy.ipynb | rico-mix/py-algorithms-4-automotive-engineering | 1da36207aa27f1dbfbd8e829c28de356f2456163 | ["MIT"] | null | null | null | 02_tools-and-packages/04_sympy.ipynb | rico-mix/py-algorithms-4-automotive-engineering | 1da36207aa27f1dbfbd8e829c28de356f2456163 | ["MIT"] | null | null | null | 35.279163 | 16,616 | 0.694249 | true | 1,466 | Qwen/Qwen-72B | 1. YES 2. YES | 0.931463 | 0.857768 | 0.798979 | __label__eng_Latn | 0.831061 | 0.694628 |
# Tutorial: optimal piecewise binning with continuous target
## Basic
To get us started, let's load a well-known dataset from the UCI repository and transform the data into a ``pandas.DataFrame``.
```python
import pandas as pd
from sklearn.datasets import load_boston
```
```python
data = load_boston()
df = pd.DataFrame(data.data, columns=data.feature_names)
```
We choose a variable to discretize and the continuous target.
```python
variable = "LSTAT"
x = df[variable].values
y = data.target
```
Import and instantiate a ``ContinuousOptimalPWBinning`` object, passing the variable name. The ``ContinuousOptimalPWBinning`` can **ONLY** handle numerical variables. This differs from the ``ContinuousOptimalBinning`` object class.
```python
from optbinning import ContinuousOptimalPWBinning
```
```python
optb = ContinuousOptimalPWBinning(name=variable)
```
We fit the optimal binning object with arrays ``x`` and ``y``.
```python
optb.fit(x, y)
```
ContinuousOptimalPWBinning(name='LSTAT')
You can check if an optimal solution has been found via the ``status`` attribute:
```python
optb.status
```
'OPTIMAL'
You can also retrieve the optimal split points via the ``splits`` attribute:
```python
optb.splits
```
array([ 4.6500001 , 5.49499989, 6.86500001, 9.7249999 , 11.67499971,
13.0999999 , 16.08500004, 19.89999962, 23.31500053])
#### The binning table
The optimal binning algorithms return a binning table; a binning table displays the binned data and several metrics for each bin. Class ``ContinuousOptimalPWBinning`` returns an object ``PWContinuousBinningTable`` via the ``binning_table`` attribute.
```python
binning_table = optb.binning_table
```
```python
type(binning_table)
```
optbinning.binning.piecewise.binning_statistics.PWContinuousBinningTable
The `binning_table` is instantiated, but not built. Therefore, the first step is to call the method `build`, which returns a ``pandas.DataFrame``.
```python
binning_table.build()
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Bin</th>
<th>Count</th>
<th>Count (%)</th>
<th>Sum</th>
<th>Std</th>
<th>Min</th>
<th>Max</th>
<th>Zeros count</th>
<th>c0</th>
<th>c1</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>(-inf, 4.65)</td>
<td>50</td>
<td>0.098814</td>
<td>1985.9</td>
<td>8.198651</td>
<td>22.8</td>
<td>50.0</td>
<td>0</td>
<td>56.750078</td>
<td>-4.823690</td>
</tr>
<tr>
<th>1</th>
<td>[4.65, 5.49)</td>
<td>28</td>
<td>0.055336</td>
<td>853.2</td>
<td>6.123541</td>
<td>21.9</td>
<td>50.0</td>
<td>0</td>
<td>74.546279</td>
<td>-8.650830</td>
</tr>
<tr>
<th>2</th>
<td>[5.49, 6.87)</td>
<td>45</td>
<td>0.088933</td>
<td>1188.6</td>
<td>5.136259</td>
<td>20.6</td>
<td>48.8</td>
<td>0</td>
<td>27.338265</td>
<td>-0.059744</td>
</tr>
<tr>
<th>3</th>
<td>[6.87, 9.72)</td>
<td>89</td>
<td>0.175889</td>
<td>2274.9</td>
<td>6.845250</td>
<td>11.9</td>
<td>50.0</td>
<td>0</td>
<td>35.670910</td>
<td>-1.273531</td>
</tr>
<tr>
<th>4</th>
<td>[9.72, 11.67)</td>
<td>49</td>
<td>0.096838</td>
<td>1057.9</td>
<td>2.994842</td>
<td>15.0</td>
<td>31.0</td>
<td>0</td>
<td>36.084157</td>
<td>-1.316024</td>
</tr>
<tr>
<th>5</th>
<td>[11.67, 13.10)</td>
<td>35</td>
<td>0.069170</td>
<td>697.5</td>
<td>2.592139</td>
<td>14.5</td>
<td>27.9</td>
<td>0</td>
<td>20.719580</td>
<td>-0.000000</td>
</tr>
<tr>
<th>6</th>
<td>[13.10, 16.09)</td>
<td>66</td>
<td>0.130435</td>
<td>1289.9</td>
<td>3.541705</td>
<td>10.2</td>
<td>30.7</td>
<td>0</td>
<td>36.325829</td>
<td>-1.191317</td>
</tr>
<tr>
<th>7</th>
<td>[16.09, 19.90)</td>
<td>69</td>
<td>0.136364</td>
<td>1129.3</td>
<td>3.607949</td>
<td>8.3</td>
<td>27.5</td>
<td>0</td>
<td>25.801457</td>
<td>-0.537020</td>
</tr>
<tr>
<th>8</th>
<td>[19.90, 23.32)</td>
<td>28</td>
<td>0.055336</td>
<td>368.4</td>
<td>3.912839</td>
<td>5.0</td>
<td>21.7</td>
<td>0</td>
<td>32.669942</td>
<td>-0.882170</td>
</tr>
<tr>
<th>9</th>
<td>[23.32, inf)</td>
<td>47</td>
<td>0.092885</td>
<td>556.0</td>
<td>4.006586</td>
<td>5.0</td>
<td>23.7</td>
<td>0</td>
<td>13.271245</td>
<td>-0.050143</td>
</tr>
<tr>
<th>10</th>
<td>Special</td>
<td>0</td>
<td>0.000000</td>
<td>0.0</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>0</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>11</th>
<td>Missing</td>
<td>0</td>
<td>0.000000</td>
<td>0.0</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>0</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>Totals</th>
<td></td>
<td>506</td>
<td>1.000000</td>
<td>11401.6</td>
<td></td>
<td>5.0</td>
<td>50.0</td>
<td>0</td>
<td>-</td>
<td>-</td>
</tr>
</tbody>
</table>
</div>
Let's describe the columns of this binning table:
- Bin: the intervals delimited by the optimal split points.
- Count: the number of records for each bin.
- Count (%): the percentage of records for each bin.
- Sum: the target sum for each bin.
- Std: the target std for each bin.
- Min: the target min value for each bin.
- Max: the target max value for each bin.
- Zeros count: the number of zeros for each bin.
- $c_0$: the first coefficient of the prediction polynomial.
- $c_1$: the second coefficient of the prediction polynomial.
The prediction for bin $i$ is defined as $P_i = c_0 + c_1 x_i$, where $x_i \in \text{Bin}_{i}$. In general,
\begin{equation}
P_i = \sum_{j=0}^d c_j x_i^j,
\end{equation}
where $d$ is the degree of the prediction polynomial.
The last row shows the totals: the overall number of records and target sum.
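For example, using the coefficients in the table above, a record with `LSTAT` $= 10$ falls into the bin $[9.72, 11.67)$, so its predicted target mean is
$$
P = c_0 + c_1 x = 36.084157 - 1.316024 \cdot 10 \approx 22.92.
$$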
You can use the method ``plot`` to visualize the histogram and mean curve.
```python
binning_table.plot()
```
##### Mean transformation
Now that we have checked the binned data, we can transform our original data into mean values.
```python
x_transform_mean = optb.transform(x)
```
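A quick sanity check of the transformation against the table coefficients is to evaluate a single value; assuming the fitted `optb` object from above, something like the following should reproduce the hand computation shown earlier (up to rounding):
```python
# LSTAT = 10 lies in the bin [9.72, 11.67), whose coefficients in the
# binning table above are c0 = 36.084157 and c1 = -1.316024.
optb.transform([10.0])  # expected to be close to 36.084157 - 1.316024 * 10 ≈ 22.92
```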
## Advanced
Many of the advanced options have been covered in the previous tutorials with a binary target. **Check them out!** In this section, we focus on the mean monotonicity trend and the mean difference between bins.
#### Binning table statistical analysis
The ``analysis`` method performs a statistical analysis of the binning table, computing several regression performance metrics and the Herfindahl-Hirschman Index (HHI). Additionally, several statistical significance tests between consecutive bins of the binning table are performed using the Student's t-test.
```python
binning_table.analysis()
```
-------------------------------------------------
OptimalBinning: Continuous Binning Table Analysis
-------------------------------------------------
General metrics
Mean absolute error 3.68295105
Mean squared error 26.03331040
Median absolute error 2.72954866
Explained variance 0.69161991
R^2 0.69161991
MPE -0.05213932
MAPE 0.17801859
SMAPE 0.08405929
MdAPE 0.13009259
SMdAPE 0.06660004
HHI 0.11313253
HHI (normalized) 0.03250821
Quality score 0.21464367
Significance tests
Bin A Bin B t-statistic p-value
0 1 5.644492 3.313748e-07
1 2 2.924528 5.175586e-03
2 3 0.808313 4.206096e-01
3 4 4.714124 6.127108e-06
4 5 2.712699 8.189719e-03
5 6 0.622294 5.353397e-01
6 7 5.162973 8.663417e-07
7 8 3.742511 4.988278e-04
8 9 1.408305 1.643801e-01
#### Mean monotonicity
The monotonic_trend option permits forcing a monotonic trend on the mean curve. The default setting “auto” should be the preferred option; however, some business constraints might require imposing different trends. The default setting “auto” chooses the monotonic trend most likely to minimize the L1-norm from the options “ascending”, “descending”, “peak” and “valley” using a machine-learning-based classifier.
```python
variable = "INDUS"
x = df[variable].values
y = data.target
```
```python
optb = ContinuousOptimalPWBinning(name=variable, monotonic_trend="auto")
optb.fit(x, y)
```
ContinuousOptimalPWBinning(name='INDUS')
```python
binning_table = optb.binning_table
binning_table.build()
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Bin</th>
<th>Count</th>
<th>Count (%)</th>
<th>Sum</th>
<th>Std</th>
<th>Min</th>
<th>Max</th>
<th>Zeros count</th>
<th>c0</th>
<th>c1</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>(-inf, 3.35)</td>
<td>63</td>
<td>0.124506</td>
<td>1994.0</td>
<td>8.569841</td>
<td>16.5</td>
<td>50.0</td>
<td>0</td>
<td>31.428209</td>
<td>-0.000000</td>
</tr>
<tr>
<th>1</th>
<td>[3.35, 5.04)</td>
<td>57</td>
<td>0.112648</td>
<td>1615.2</td>
<td>8.072710</td>
<td>17.2</td>
<td>50.0</td>
<td>0</td>
<td>42.928218</td>
<td>-3.432839</td>
</tr>
<tr>
<th>2</th>
<td>[5.04, 6.66)</td>
<td>66</td>
<td>0.130435</td>
<td>1723.7</td>
<td>7.879078</td>
<td>16.0</td>
<td>50.0</td>
<td>0</td>
<td>25.626711</td>
<td>0.000000</td>
</tr>
<tr>
<th>3</th>
<td>[6.66, 8.01)</td>
<td>31</td>
<td>0.061265</td>
<td>692.0</td>
<td>4.604886</td>
<td>14.4</td>
<td>35.2</td>
<td>0</td>
<td>52.694010</td>
<td>-4.064159</td>
</tr>
<tr>
<th>4</th>
<td>[8.01, 16.57)</td>
<td>100</td>
<td>0.197628</td>
<td>2045.5</td>
<td>3.547348</td>
<td>11.9</td>
<td>28.7</td>
<td>0</td>
<td>21.858675</td>
<td>-0.212150</td>
</tr>
<tr>
<th>5</th>
<td>[16.57, 18.84)</td>
<td>132</td>
<td>0.260870</td>
<td>2165.3</td>
<td>8.507336</td>
<td>5.0</td>
<td>50.0</td>
<td>0</td>
<td>21.603058</td>
<td>-0.196723</td>
</tr>
<tr>
<th>6</th>
<td>[18.84, inf)</td>
<td>57</td>
<td>0.112648</td>
<td>1165.9</td>
<td>9.519086</td>
<td>7.0</td>
<td>50.0</td>
<td>0</td>
<td>17.896791</td>
<td>0.000000</td>
</tr>
<tr>
<th>7</th>
<td>Special</td>
<td>0</td>
<td>0.000000</td>
<td>0.0</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>0</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>8</th>
<td>Missing</td>
<td>0</td>
<td>0.000000</td>
<td>0.0</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>0</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>Totals</th>
<td></td>
<td>506</td>
<td>1.000000</td>
<td>11401.6</td>
<td></td>
<td>5.0</td>
<td>50.0</td>
<td>0</td>
<td>-</td>
<td>-</td>
</tr>
</tbody>
</table>
</div>
```python
binning_table.plot()
```
```python
binning_table.analysis()
```
-------------------------------------------------
OptimalBinning: Continuous Binning Table Analysis
-------------------------------------------------
General metrics
Mean absolute error 5.38628982
Mean squared error 59.01564719
Median absolute error 4.02296554
Explained variance 0.30092446
R^2 0.30092446
MPE -0.12086396
MAPE 0.27859705
SMAPE 0.12156690
MdAPE 0.18811572
SMdAPE 0.09658000
HHI 0.16875752
HHI (normalized) 0.06485221
Quality score 0.74831364
Significance tests
Bin A Bin B t-statistic p-value
0 1 2.180865 0.031181
1 2 1.537968 0.126745
2 3 2.976660 0.003739
3 4 2.075258 0.044180
4 5 4.934158 0.000002
5 6 -2.770230 0.006721
A smoother curve, keeping the valley monotonicity, can be achieved by using ``monotonic_trend="convex"``.
```python
optb = ContinuousOptimalPWBinning(name=variable, monotonic_trend="convex")
optb.fit(x, y)
```
ContinuousOptimalPWBinning(monotonic_trend='convex', name='INDUS')
```python
binning_table = optb.binning_table
binning_table.build()
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Bin</th>
<th>Count</th>
<th>Count (%)</th>
<th>Sum</th>
<th>Std</th>
<th>Min</th>
<th>Max</th>
<th>Zeros count</th>
<th>c0</th>
<th>c1</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>(-inf, 3.35)</td>
<td>63</td>
<td>0.124506</td>
<td>1994.0</td>
<td>8.569841</td>
<td>16.5</td>
<td>50.0</td>
<td>0</td>
<td>35.478615</td>
<td>-1.760813</td>
</tr>
<tr>
<th>1</th>
<td>[3.35, 5.04)</td>
<td>57</td>
<td>0.112648</td>
<td>1615.2</td>
<td>8.072710</td>
<td>17.2</td>
<td>50.0</td>
<td>0</td>
<td>35.478615</td>
<td>-1.760813</td>
</tr>
<tr>
<th>2</th>
<td>[5.04, 6.66)</td>
<td>66</td>
<td>0.130435</td>
<td>1723.7</td>
<td>7.879078</td>
<td>16.0</td>
<td>50.0</td>
<td>0</td>
<td>35.478615</td>
<td>-1.760813</td>
</tr>
<tr>
<th>3</th>
<td>[6.66, 8.01)</td>
<td>31</td>
<td>0.061265</td>
<td>692.0</td>
<td>4.604886</td>
<td>14.4</td>
<td>35.2</td>
<td>0</td>
<td>35.478615</td>
<td>-1.760813</td>
</tr>
<tr>
<th>4</th>
<td>[8.01, 16.57)</td>
<td>100</td>
<td>0.197628</td>
<td>2045.5</td>
<td>3.547348</td>
<td>11.9</td>
<td>28.7</td>
<td>0</td>
<td>24.492313</td>
<td>-0.388383</td>
</tr>
<tr>
<th>5</th>
<td>[16.57, 18.84)</td>
<td>132</td>
<td>0.260870</td>
<td>2165.3</td>
<td>8.507336</td>
<td>5.0</td>
<td>50.0</td>
<td>0</td>
<td>18.898426</td>
<td>-0.050792</td>
</tr>
<tr>
<th>6</th>
<td>[18.84, inf)</td>
<td>57</td>
<td>0.112648</td>
<td>1165.9</td>
<td>9.519086</td>
<td>7.0</td>
<td>50.0</td>
<td>0</td>
<td>18.898426</td>
<td>-0.050792</td>
</tr>
<tr>
<th>7</th>
<td>Special</td>
<td>0</td>
<td>0.000000</td>
<td>0.0</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>0</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>8</th>
<td>Missing</td>
<td>0</td>
<td>0.000000</td>
<td>0.0</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>0</td>
<td>0.000000</td>
<td>0.000000</td>
</tr>
<tr>
<th>Totals</th>
<td></td>
<td>506</td>
<td>1.000000</td>
<td>11401.6</td>
<td></td>
<td>5.0</td>
<td>50.0</td>
<td>0</td>
<td>-</td>
<td>-</td>
</tr>
</tbody>
</table>
</div>
```python
binning_table.plot()
```
```python
binning_table.analysis()
```
-------------------------------------------------
OptimalBinning: Continuous Binning Table Analysis
-------------------------------------------------
General metrics
Mean absolute error 5.46871836
Mean squared error 60.47066184
Median absolute error 4.02086478
Explained variance 0.28368894
R^2 0.28368894
MPE -0.12278289
MAPE 0.28138163
SMAPE 0.12314489
MdAPE 0.19095659
SMdAPE 0.09398799
HHI 0.16875752
HHI (normalized) 0.06485221
Quality score 0.74831364
Significance tests
Bin A Bin B t-statistic p-value
0 1 2.180865 0.031181
1 2 1.537968 0.126745
2 3 2.976660 0.003739
3 4 2.075258 0.044180
4 5 4.934158 0.000002
5 6 -2.770230 0.006721
| 9dd9ea6976328fb2d665db7f8d0b616e6a37ea3d | 107,141 | ipynb | Jupyter Notebook | doc/source/tutorials/tutorial_piecewise_continuous.ipynb | jensgk/optbinning | 5ccd892fa4ee0a745ab539cee10a2069b35de6da | [
"Apache-2.0"
] | 207 | 2020-01-23T21:32:59.000Z | 2022-03-30T06:33:21.000Z | doc/source/tutorials/tutorial_piecewise_continuous.ipynb | jensgk/optbinning | 5ccd892fa4ee0a745ab539cee10a2069b35de6da | [
"Apache-2.0"
] | 133 | 2020-01-23T22:14:35.000Z | 2022-03-29T14:05:04.000Z | doc/source/tutorials/tutorial_piecewise_continuous.ipynb | jensgk/optbinning | 5ccd892fa4ee0a745ab539cee10a2069b35de6da | [
"Apache-2.0"
] | 50 | 2020-01-27T15:37:08.000Z | 2022-03-30T06:33:25.000Z | 83.834898 | 23,888 | 0.753633 | true | 6,700 | Qwen/Qwen-72B | 1. YES
2. YES | 0.870597 | 0.831143 | 0.723591 | __label__eng_Latn | 0.255774 | 0.519476 |