# Time Series Forecasting In this tutorial, we will demonstrate how to build a model for time series forecasting in NumPyro. Specifically, we will replicate the **Seasonal, Global Trend (SGT)** model from the [Rlgt: Bayesian Exponential Smoothing Models with Trend Modifications](https://cran.r-project.org/web/packages/Rlgt/index.html) package. The time series data that we will use for this tutorial is the **lynx** dataset, which contains annual numbers of lynx trappings from 1821 to 1934 in Canada. ```python import os from IPython.display import set_matplotlib_formats import matplotlib.pyplot as plt import pandas as pd import jax.numpy as np from jax import lax, random, vmap from jax.nn import softmax import numpyro; numpyro.set_host_device_count(4) import numpyro.distributions as dist from numpyro.diagnostics import autocorrelation, hpdi from numpyro import handlers from numpyro.infer import MCMC, NUTS if "NUMPYRO_SPHINXBUILD" in os.environ: set_matplotlib_formats('svg') assert numpyro.__version__.startswith('0.2.4') ``` ## Data First, lets import and take a look at the dataset. ```python URL = "https://raw.githubusercontent.com/vincentarelbundock/Rdatasets/master/csv/datasets/lynx.csv" lynx = pd.read_csv(URL, index_col=0) data = lynx["value"].values print("Length of time series:", data.shape[0]) plt.figure(figsize=(8, 4)) plt.plot(lynx["time"], data) plt.show() ``` The time series has a length of 114 (a data point for each year), and by looking at the plot, we can observe [seasonality](https://en.wikipedia.org/wiki/Seasonality) in this dataset, which is the recurrence of similar patterns at specific time periods. e.g. in this dataset, we observe a cyclical pattern every 10 years, but there is also a less obvious but clear spike in the number of trappings every 40 years. Let us see if we can model this effect in NumPyro. In this tutorial, we will use the first 80 values for training and the last 34 values for testing. ```python y_train, y_test = np.array(data[:80], dtype=np.float32), data[80:] ``` ## Model The model we are going to use is called **Seasonal, Global Trend**, which when tested on 3003 time series of the [M-3 competition](https://forecasters.org/resources/time-series-data/m3-competition/), has been known to outperform other models originally participating in the competition: $$ \begin{align} \text{exp_val}_{t} &= \text{level}_{t-1} + \text{coef_trend} \times \text{level}_{t-1}^{\text{pow_trend}} + \text{s}_t \times \text{level}_{t-1}^{\text{pow_season}}, \\ \sigma_{t} &= \sigma \times \text{exp_val}_{t}^{\text{powx}} + \text{offset}, \\ y_{t} &\sim \text{StudentT}(\nu, \text{exp_val}_{t}, \sigma_{t}) \end{align} $$ , where `level` and `s` follows the following recursion rules: $$ \begin{align} \text{level_p} &= \begin{cases} y_t - \text{s}_t \times \text{level}_{t-1}^{\text{pow_season}} & \text{if } t \le \text{seasonality}, \\ \text{Average} \left[y(t - \text{seasonality} + 1), \ldots, y(t)\right] & \text{otherwise}, \end{cases} \\ \text{level}_{t} &= \text{level_sm} \times \text{level_p} + (1 - \text{level_sm}) \times \text{level}_{t-1}, \\ \text{s}_{t + \text{seasonality}} &= \text{s_sm} \times \frac{y_{t} - \text{level}_{t}}{\text{level}_{t-1}^{\text{pow_trend}}} + (1 - \text{s_sm}) \times \text{s}_{t}. \end{align} $$ A more detailed explanation for SGT model can be found in [this vignette](https://cran.r-project.org/web/packages/Rlgt/vignettes/GT_models.html) from the authors of the Rlgt package. 
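To make the recursion concrete, here is a minimal plain-NumPy sketch of a single SGT update step, written as a direct transcription of the equations above (the function and variable names are mine, not from the Rlgt source; the tutorial's actual JAX implementation appears below):

```python
import numpy as onp  # plain NumPy, kept separate from the tutorial's jax.numpy alias

def sgt_step(y_t, y_window, level_prev, s, t, seasonality,
             level_sm, s_sm, coef_trend, pow_trend, pow_season):
    """One step of the SGT recursion; y_window holds the last `seasonality` observations."""
    season = s[0] * level_prev ** pow_season
    exp_val = level_prev + coef_trend * level_prev ** pow_trend + season
    # level update: switch to the seasonal moving average once a full season has been seen
    level_p = (y_t - season) if t <= seasonality else onp.mean(y_window)
    level = level_sm * level_p + (1 - level_sm) * level_prev
    # seasonality update for time t + seasonality
    s_new = s_sm * (y_t - level) / level_prev ** pow_trend + (1 - s_sm) * s[0]
    s = onp.concatenate([s[1:], [s_new]])
    return exp_val, level, s
```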
Here we summarize the core ideas of this model: + [Student's t-distribution](https://en.wikipedia.org/wiki/Student%27s_t-distribution), which has heavier tails than normal distribution, is used for the likelihood. + The expected value `exp_val` consists of a trending component and a seasonal component: - The trend is governed by the map $x \mapsto x + ax^b$, where $x$ is `level`, $a$ is `coef_trend`, and $b$ is `pow_trend`. Note that when $b \sim 0$, the trend is linear with $a$ is the slope, and when $b \sim 1$, the trend is exponential with $a$ is the rate. So that function can cover a large family of trend. - When time changes, `level` and `s` are updated to new values. Coefficients `level_sm` and `s_sm` are used to make the transition smoothly. + When `powx` is near $0$, the error $\sigma_t$ will be nearly constant while when `powx` is near $1$, the error will be propotional to the expected value. + There are several varieties of SGT. In this tutorial, we use generalized seasonality and seasonal average method. Note that `level` and `s` are updated recursively while we collect the expected value at each time step. NumPyro uses [JAX](https://github.com/google/jax) in the backend to JIT compile many critical parts of the NUTS algorithm, including the verlet integrator and the tree building process. However, doing so using Python's `for` loop in the model will result in a long compilation time for the model, so we use `jax.lax.scan` instead. A detailed explanation for using this utility can be found in [lax.scan documentation](https://jax.readthedocs.io/en/latest/_autosummary/jax.lax.scan.html#jax.lax.scan). Here we use it to collect expected values while the pair `(level, s)` plays the role of carrying state. ```python def scan_exp_val(y, init_s, level_sm, s_sm, coef_trend, pow_trend, pow_season): seasonality = init_s.shape[0] def scan_fn(carry, t): level, s, moving_sum = carry season = s[0] * level ** pow_season exp_val = level + coef_trend * level ** pow_trend + season exp_val = np.clip(exp_val, a_min=0) moving_sum = moving_sum + y[t] - np.where(t >= seasonality, y[t - seasonality], 0.) level_p = np.where(t >= seasonality, moving_sum / seasonality, y[t] - season) level = level_sm * level_p + (1 - level_sm) * level level = np.clip(level, a_min=0) new_s = (s_sm * (y[t] - level) / season + (1 - s_sm)) * s[0] s = np.concatenate([s[1:], new_s[None]], axis=0) return (level, s, moving_sum), exp_val level_init = y[0] s_init = np.concatenate([init_s[1:], init_s[:1]], axis=0) moving_sum = level_init (last_level, last_s, moving_sum), exp_vals = lax.scan( scan_fn, (level_init, s_init, moving_sum), np.arange(1, y.shape[0])) return exp_vals, last_level, last_s ``` With our utility function defined above, we are ready to specify the model using *NumPyro* primitives. In NumPyro, we use the primitive `sample(name, prior)` to declare a latent random variable with a corresponding `prior`. These primitives can have custom interpretations depending on the effect handlers that are used by NumPyro inference algorithms in the backend. e.g. we can condition on specific values using the `substitute` handler, or record values at these sample sites in the execution trace using the `trace` handler. Note that these details are not important for specifying the model, or running inference, but curious readers are encouraged to read the [tutorial on effect handlers](http://pyro.ai/examples/effect_handlers.html) in Pyro. 
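Before looking at the full model, here is a minimal toy sketch (my own example, not part of the tutorial) of how these handlers change what `numpyro.sample` does:

```python
from jax import random
import numpyro
import numpyro.distributions as dist
from numpyro import handlers

def toy_model():
    # a single latent site with a standard normal prior
    return numpyro.sample("x", dist.Normal(0., 1.))

# `seed` supplies a PRNG key so the sample statement can actually draw a value
seeded = handlers.seed(toy_model, random.PRNGKey(0))
print(seeded())

# `substitute` pins the site to a given value instead of sampling it
pinned = handlers.substitute(toy_model, {"x": 1.5})
print(pinned())  # returns 1.5

# `trace` records every sample site visited during execution
tr = handlers.trace(seeded).get_trace()
print(tr["x"]["value"])
```

We will use `substitute`, `seed`, and `trace` in exactly this way later, when forecasting from posterior samples.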
```python def sgt(y, seasonality): # heuristically, standard derivation of Cauchy prior depends on the max value of data cauchy_sd = np.max(y) / 150 nu = numpyro.sample("nu", dist.Uniform(2, 20)) powx = numpyro.sample("powx", dist.Uniform(0, 1)) sigma = numpyro.sample("sigma", dist.HalfCauchy(cauchy_sd)) offset_sigma = numpyro.sample("offset_sigma", dist.TruncatedCauchy(low=1e-10, loc=1e-10, scale=cauchy_sd)) coef_trend = numpyro.sample("coef_trend", dist.Cauchy(0, cauchy_sd)) pow_trend_beta = numpyro.sample("pow_trend_beta", dist.Beta(1, 1)) # pow_trend takes values from -0.5 to 1 pow_trend = 1.5 * pow_trend_beta - 0.5 pow_season = numpyro.sample("pow_season", dist.Beta(1, 1)) level_sm = numpyro.sample("level_sm", dist.Beta(1, 2)) s_sm = numpyro.sample("s_sm", dist.Uniform(0, 1)) init_s = numpyro.sample("init_s", dist.Cauchy(0, y[:seasonality] * 0.3)) exp_val, last_level, last_s = scan_exp_val( y, init_s, level_sm, s_sm, coef_trend, pow_trend, pow_season) omega = sigma * exp_val ** powx + offset_sigma numpyro.sample("y", dist.StudentT(nu, exp_val, omega), obs=y[1:]) # we return last `level` and last `s` for forecasting return last_level, last_s ``` Note that all prior parameters are retrieved from [this file](https://github.com/cbergmeir/Rlgt/blob/master/Rlgt/R/rlgtcontrol.R) in the original source. ## Inference First, we want to choose a good value for `seasonality`. Following [the demo in Rlgt](https://github.com/cbergmeir/Rlgt/blob/master/Rlgt/demo/lynx.R), we will set `seasonality=38`. Indeed, this value can be guessed by looking at the plot of the training data, where the second order seasonality effect has a periodicity around $40$ years. Note that $38$ is also one of the highest-autocorrelation lags. ```python print("Lag values sorted according to their autocorrelation values:\n") print(np.argsort(autocorrelation(y_train))[::-1]) ``` Lag values sorted according to their autocorrelation values: [ 0 67 57 38 68 1 29 58 37 56 28 10 19 39 66 78 47 77 9 79 48 76 30 18 20 11 46 59 69 27 55 36 2 8 40 49 17 21 75 12 65 45 31 26 7 54 35 41 50 3 22 60 70 16 44 13 6 25 74 53 42 32 23 43 51 4 15 14 34 24 5 52 73 64 33 71 72 61 63 62] Now, let us run $4$ MCMC chains (using the No-U-Turn Sampler algorithm) with $5000$ warmup steps and $5000$ sampling steps per each chain. The returned value will be a collection of $20000$ samples. 
```python kernel = NUTS(sgt) mcmc = MCMC(kernel, num_warmup=5000, num_samples=5000, num_chains=4) mcmc.run(random.PRNGKey(2), y_train, seasonality=38) mcmc.print_summary() samples = mcmc.get_samples() ``` mean std median 5.0% 95.0% n_eff r_hat coef_trend 37.41 146.47 14.58 -94.66 158.11 899.45 1.00 init_s[0] 86.47 112.10 64.44 -71.79 238.22 3103.90 1.00 init_s[1] -20.09 74.78 -24.72 -133.16 89.86 3727.80 1.00 init_s[2] 30.61 96.50 18.96 -114.20 172.59 6539.20 1.00 init_s[3] 122.40 125.50 104.49 -65.33 307.99 5502.14 1.00 init_s[4] 447.19 253.84 404.01 72.97 817.14 4448.29 1.00 init_s[5] 1168.04 469.86 1089.44 480.24 1859.17 3045.90 1.00 init_s[6] 1990.85 676.57 1888.51 939.77 2999.54 2269.65 1.00 init_s[7] 3670.58 1116.12 3529.56 1936.24 5377.14 2239.09 1.00 init_s[8] 2600.92 830.22 2497.08 1275.30 3830.12 2382.89 1.00 init_s[9] 950.62 441.38 873.10 305.99 1605.88 3591.29 1.00 init_s[10] 48.80 109.00 31.85 -107.24 204.84 5029.92 1.00 init_s[11] -1.05 50.78 -2.22 -84.10 68.44 6261.04 1.00 init_s[12] -10.22 64.41 -12.06 -119.67 84.49 5551.32 1.00 init_s[13] 67.32 102.99 47.42 -74.20 228.30 6769.72 1.00 init_s[14] 330.62 248.06 284.87 -20.52 672.49 6404.36 1.00 init_s[15] 967.45 380.59 904.08 379.27 1522.60 3433.92 1.00 init_s[16] 1257.06 484.78 1176.15 547.50 1994.05 2617.35 1.00 init_s[17] 1381.97 569.38 1287.81 533.50 2198.29 2556.74 1.00 init_s[18] 613.77 313.95 561.28 161.09 1064.75 3710.26 1.00 init_s[19] 17.24 91.46 5.80 -121.23 146.82 5663.56 1.00 init_s[20] -31.33 66.56 -26.82 -141.77 65.53 6836.88 1.00 init_s[21] -15.05 47.50 -5.05 -97.72 43.51 2110.57 1.00 init_s[22] -2.25 45.41 -2.05 -69.50 60.48 4455.37 1.00 init_s[23] 39.60 88.02 24.74 -83.23 169.93 5845.50 1.00 init_s[24] 525.67 342.66 462.01 18.68 998.37 4501.69 1.00 init_s[25] 926.82 453.26 846.58 269.68 1575.50 4160.67 1.00 init_s[26] 1788.62 692.88 1683.25 742.71 2830.06 2661.40 1.00 init_s[27] 1270.05 477.56 1189.62 548.85 1974.94 2725.48 1.00 init_s[28] 213.62 172.18 183.75 -29.46 467.98 6231.66 1.00 init_s[29] -9.53 86.49 -17.25 -142.24 114.82 6054.94 1.00 init_s[30] -5.89 84.86 -13.67 -143.76 118.33 7664.38 1.00 init_s[31] -37.11 74.85 -37.10 -154.99 79.57 5162.96 1.00 init_s[32] -9.05 86.82 -16.76 -142.67 117.25 4841.36 1.00 init_s[33] 112.13 139.36 89.61 -83.02 305.20 5633.16 1.00 init_s[34] 510.17 296.97 459.01 77.70 915.20 4574.63 1.00 init_s[35] 1069.24 451.47 992.84 398.07 1733.07 3594.75 1.00 init_s[36] 1846.93 651.34 1747.20 868.30 2841.41 2858.36 1.00 init_s[37] 1446.64 549.45 1365.04 607.13 2254.92 3286.41 1.00 level_sm 0.00 0.00 0.00 0.00 0.00 10572.39 1.00 nu 12.18 4.75 12.40 5.42 20.00 8974.57 1.00 offset_sigma 32.51 30.53 23.66 0.00 71.60 9205.88 1.00 pow_season 0.09 0.04 0.09 0.01 0.15 1406.05 1.00 pow_trend_beta 0.26 0.17 0.23 0.00 0.50 1427.71 1.00 powx 0.62 0.13 0.61 0.40 0.83 5128.73 1.00 s_sm 0.08 0.09 0.05 0.00 0.19 4123.16 1.00 sigma 9.81 9.70 6.96 0.42 20.98 6775.15 1.00 Number of divergences: 3594 ## Forecasting Given `samples` from `mcmc`, we want to do forecasting for the testing dataset `y_test`. First, we will make some utilities to do forecasting given a sample. Note that to retrieve the last `level` and last `s` value, we run the model forward by constraining the latent sites to a sample from the posterior using the `substitute` handler: ```python ... 
level, s = substitute(sgt, sample)(y, seasonality) ``` ```python # Ref: https://github.com/cbergmeir/Rlgt/blob/master/Rlgt/R/forecast.rlgtfit.R def sgt_forecast(future, sample, y, level, s): seasonality = s.shape[0] moving_sum = np.sum(y[-seasonality:]) pow_trend = 1.5 * sample["pow_trend_beta"] - 0.5 yfs = [0] * (seasonality + future) for t in range(future): season = s[0] * level ** sample["pow_season"] exp_val = level + sample["coef_trend"] * level ** pow_trend + season exp_val = np.clip(exp_val, a_min=0) omega = sample["sigma"] * exp_val ** sample["powx"] + sample["offset_sigma"] yf = numpyro.sample("yf[{}]".format(t), dist.StudentT(sample["nu"], exp_val, omega)) yf = np.clip(yf, a_min=1e-30) yfs[t] = yf moving_sum = moving_sum + yf - np.where(t >= seasonality, yfs[t - seasonality], y[-seasonality + t]) level_p = moving_sum / seasonality level_tmp = sample["level_sm"] * level_p + (1 - sample["level_sm"]) * level level = np.where(level_tmp > 1e-30, level_tmp, level) # s is repeated instead of being updated s = np.concatenate([s[1:], s[:1]], axis=0) def forecast(future, rng_key, sample, y, seasonality): level, s = handlers.substitute(sgt, sample)(y, seasonality) forecast_model = handlers.seed(sgt_forecast, rng_key) forecast_trace = handlers.trace(forecast_model).get_trace(future, sample, y, level, s) results = [np.clip(forecast_trace["yf[{}]".format(t)]["value"], a_min=1e-30) for t in range(future)] return np.stack(results, axis=0) ``` Then, we can use [jax.vmap](https://jax.readthedocs.io/en/latest/jax.html#jax.vmap) to get prediction given a collection of samples. This allows us to vectorize the computation across the test dataset which can be dramatically faster as compared to using for-loop to collect predictions per test data point. ```python rng_keys = random.split(random.PRNGKey(3), samples["nu"].shape[0]) forecast_marginal = vmap(lambda rng_key, sample: forecast( len(y_test), rng_key, sample, y_train, seasonality=38))(rng_keys, samples) ``` Finally, let's get sMAPE, root mean square error of the prediction, and visualize the result with the mean prediction and the 90% highest posterior density interval (HPDI). ```python y_pred = np.mean(forecast_marginal, axis=0) sMAPE = np.mean(np.abs(y_pred - y_test) / (y_pred + y_test)) * 200 msqrt = np.sqrt(np.mean((y_pred - y_test) ** 2)) print("sMAPE: {:.2f}, rmse: {:.2f}".format(sMAPE, msqrt)) ``` sMAPE: 62.63, rmse: 1242.24 ```python plt.figure(figsize=(8, 4)) plt.plot(lynx["time"], data) t_future = lynx["time"][80:] hpd_low, hpd_high = hpdi(forecast_marginal) plt.plot(t_future, y_pred, lw=2) plt.fill_between(t_future, hpd_low, hpd_high, alpha=0.3) plt.title("Forecasting lynx dataset with SGT model (90% HPDI)") plt.show() ``` As we can observe, the model has been able to learn both the first and second order seasonality effects, i.e. a cyclical pattern with a periodicity of around 10, as well as spikes that can be seen once every 40 or so years. Moreover, we not only have point estimates for the forecast but can also use the uncertainty estimates from the model to bound our forecasts. ## Acknowledgements We would like to thank Slawek Smyl for many helpful resources and suggestions. Fast inference would not have been possible without the support of JAX and the XLA teams, so we would like to thank them for providing such a great open-source platform for us to build on, and for their responsiveness in dealing with our feature requests and bug reports. 
## References [1] `Rlgt: Bayesian Exponential Smoothing Models with Trend Modifications`,<br>&nbsp;&nbsp;&nbsp;&nbsp; Slawek Smyl, Christoph Bergmeir, Erwin Wibowo, To Wang Ng, Trustees of Columbia University
# DATA 442/642 - Advanced Machine Learning Midterm ## [Yunting Chiu](https://www.linkedin.com/in/yuntingchiu/) # Exercise 1 (10 points) Show that the $\ell 1$ norm is a convex function (as all norms), yet it is not strictly convex. In contrast, show that the squared Euclidean norm is a strictly convex function. ## Answer A function $f$ is **convex** if $$ f(\lambda x + (1 − \lambda )y) \leq \lambda f(x) + (1 − \lambda )f(y) $$ for all x, y $\in$ dom f and all $\lambda \in$ [0, 1]. If the inequality holds strictly (i.e. $<$ rather than $\leq$) for all $\lambda \in$ (0, 1) and x not equal to y, then we say that $f$ is **strictly convex**.\ The norm of a vector is a generalization of the concept of length. The $l_1$ norm of a vector $x \in R^n$ is defined as: $$ ||x||_1 := \sum_{i=1}^{n}|x_i| $$ According to Minkowski's Inequality, we have the following inequality for $l_1$ norm: $$ \sum_{i}^{l}(|\lambda x_i + (1- \lambda)y_i|) \leq \lambda \sum_{i}^{l}|X_i| + (1 - \lambda) \sum_{i}^{l}|y_i| $$ which is not strictly convex. All $x_i$ and all $y_i$ are non-negative then there is no reason for strict inequality. By constract, the squared Euclidean norm is a strictly convex function because of the follwing inequality $$ \sum_{i}(|\lambda x_i + (1-\lambda)y_i|)^2 < \lambda \sum_{i}|X_i|^2+ (1 - \lambda) \sum_{i}|y_i|^2 $$ which gives $$ \lambda ^2 \sum_{i}x^2_i + (1-\lambda)^2\sum_{i}y^2_i + 2\lambda (1-\lambda)x_iy_i < \lambda \sum_{i}x^2_i + (1-\lambda)^2\sum_{i}y^2_i $$ then $$ 2 \lambda(1- \lambda) x_iy_i < \lambda(1-\lambda)(\sum_{i}x^2_i + \sum_{i}y^2_i) $$ So we have $$ \sum_{i}(x_i-y_i) > 0, for x \neq y $$ Which is strictly convex. # Exercise 2 (10 points) Let the observations resulting from an experiment be $x_n, n = 1, 2,..., N$. Assume that they are independent and that they originate from a Gaussian PDF with mean $\mu$ and standard deviation $\sigma$. Both, the mean and the variance, are unknown. Prove that the maximum likelihood (ML) estimates of these quantities are given $$ \hat{\mu}ML = \frac{1}{N}\sum_{n=1}^{N}X_n,\space\hat{\sigma}^2ML = \frac{1}{N}\sum_{n=1}^{N}X_n(X_n - \hat{\mu}ML)^2 $$ ## Answer In statistics, maximum likelihood estimation (MLE) is a method of estimating the parameters of an assumed probability distribution. The log-likelihood function is given by $$ L(\mu, \sigma^2) = - \frac{N}{2}ln(2 \pi) - \frac{N}{2}ln(\sigma^2) - \frac{1}{2\sigma^2}\sum_{n=1}^{N}(x_n - \mu)^2 $$ Why we start with log-likehood? Because we can simply the notation. Taking the derivative with respect to $\mu, \sigma^2$, and equating it to zero we can get the following equations. We have two unknowns, so we need to derivate two equations. $$ \frac{\partial L}{\partial \mu} = \frac{1}{\sigma^2}\sum_{n=1}^{N}(x_n - \mu) = \frac{1}{\sigma^2}(\sum_{n=1}^{N}x_n \sum_{n=1}^{N} \mu)=0 $$ $$ \frac{\partial L}{\partial \sigma^2} = -\frac{N}{2}\frac{1}{\sigma^2} + \frac{1}{2\sigma^4}\cdot \sum_{n=1}^{N}(x_n - \mu)^2 = 0 $$ We know the $\sigma^4$ is $\sigma^2 \cdot \sigma^2$, so we can slove it in the above equation. # Exercise 3 (15 points) For the regression model where the noise vector $\eta= [ \, \eta_1,..., \eta_N ] \,^T$ comprises samples from zero mean Gaussian random variable, with covariance matrix $\Sigma_n$, show that the Fisher information matrix is given by $$ I(\theta) = X^T\Sigma^{-1}_{n}X, $$ where $X$ is the input matrix. 
## Answer A Gaussian distribution is defined as $$ P(y, \theta) = \frac{1}{2 \pi^\frac{N}{2}\cdot \sqrt{det(\Sigma_{n}})} \cdot{exp(-\frac{1}{2}(y - x \cdot \theta)^T \cdot \Sigma_{n}(y - x \cdot \theta)} $$ We take natural log of the equation and get $$ ln (p(y, \theta)) = -\frac{1}{2}(y - x \cdot \theta)^T \cdot \Sigma_{n}^{-1}(y - x \cdot \theta) + C $$ where $C$ is a constant. We take the first derivative from the above equation: $$ \frac{\partial ln (p(y, \theta))}{\partial \theta} = -\frac{1}{\theta^2}(x - \mu) $$ Then, we take the second derivative $$ \frac{\partial^2 ln (p(y, \theta))}{\partial \theta^2} = - x^T \cdot \Sigma_{n}^{-1} \cdot x $$ Therefore, the Fisher information matrix is defined as follows: $$ I(\theta) = - E(\frac{\sigma^2\ell_n P (y, \theta)}{\sigma \theta^2}) = x^T \cdot \Sigma_{n}^{-1} \cdot x $$ # Exercise 4 (20 points) Consider the regression problem described in one of our labs. Read the same audio file, then add white Gaussian noise at a 15 dB level and randomly “hit” 10% of the data samples with outliers (set the outlier values to 80% of the maximum value of the data samples). Recall from Lab 6 of the kernal ridge regression ```python from sklearn.svm import SVR from datetime import datetime import numpy as np import math import soundfile as sf import matplotlib.pyplot as plt import sys import os path = "/content/drive/MyDrive/American_University/2021_Fall/DATA-642-001_Advanced Machine Learning/GitHub/Labs/06/BladeRunner.wav" ``` ```python # This is a helping funvtion that contains different types of kernels def kappa(x, y, kernel_type, kernel_params): value = None if kernel_type == 'gaus': sigma = kernel_params[0] N = len(x) norm = sum((x-y)**2) value = np.exp(-norm/(sigma**2)) elif kernel_type == 'gaus_c': sigma = kernel_params[0] N = len(x) exponent = sum( (x-y.conj())**2 ) # value = 2*(math.exp( -np.real(exponent)/(sigma**2) )) value = 2*np.real(np.exp(-exponent/(sigma**2))) elif kernel_type == 'linear': value = 0 N = len(x) for i in range(0, N): value = value + x[i]*y[i].conj() elif kernel_type == 'poly': d = kernel_params[0] value = (1 + x*y.transpose())**d # value = ( (1 + x*y.transpose())/( math.sqrt(np.real(x*x.transpose())* np.real(y*y.transpose()) ) ) )**d; elif kernel_type == 'poly_c': d = kernel_params[0] value = 2*np.real((1 + x*y.transpose())**d) # value = 2*np.real( ( (1 + x*y.transpose())/( math.sqrt(np.real(x*x.transpose()) * np.real(y*y.transpose()) ) ) )**d ) return value ``` ```python def kernel_regression_l2_l2_unbiased(X, y, params, kernel_type, kernel_params): lmbda = params[0] # build kernel matrix [d, N] = X.shape # for n=1:N # for m=1:N # K(n,m) = kappa(X(:,n), X(:,m), kernel_type, kernel_params); # end; # end; if kernel_type == 'gaus': par = kernel_params norms = np.zeros(shape=(N, N)) for i in range(0, N): T = X - X[:, i] # bsxfun(@minus,X,X(:,i)) norms[i, :] = np.sum(T**2, axis=0) K = np.exp(-norms/(par**2)) elif kernel_type == 'gaus_c': par = kernel_params norms = np.zeros(shape=(N, N)) for i in range(0, N): T = X - X[:, i].conj() # bsxfun(@minus,X,conj(X(:,i))) norms[i, :] = np.sum(T**2, axis=0) K = 2*np.real(np.exp(-norms/(par**2))) else: K = np.zeros(shape=(N, N)) for i in range(0, N): for j in range(0, N): K[i, j] = kappa(X[:, i], X[:, j], kernel_type, kernel_params) I = np.eye(N) A = lmbda*I+K c = y # Solve A*x=c sol = np.linalg.solve(A, c) return sol ``` ```python # ----------------------------------------------------------------- # Kernel Ridge Regression # on an audio sequence corrupted by # Gaussian noise and 
outliers # You need a .wav file to run the experiment... # Python3 required packages: numpy, soundfile, matplotlib # ----------------------------------------------------------------- def kernelridge_11_19(): np.random.seed(0) # -------------------------------------------------------------------- # Reading wav file. x corresponds to time instances (is., x_i in [0,1]) # fs is the sampling frequency # Replace the name "BladeRunner.wav" with the name of the file # you intend to use. # -------------------------------------------------------------------- N = 100 samples = 1000 indices = range(0, samples,int(samples/N)) start = 100000 [data, fs] = sf.read(path) sound = np.array(data[start:(start+samples+1), :], dtype=np.float32) y = np.reshape(sound[indices, 0], newshape=(len(indices), 1)) Ts = 1/fs # sampling period x = np.array(range(0, samples)).conj().transpose()*Ts # time instances of sampling x = x[indices] x = np.reshape(x, newshape=(x.shape[0], 1)) #print(x) # ------------------------------------------------------- # Add white Gaussian noise snr = 15 # dB y = py_awgn(y, snr) # add outliers O = 0.8*np.max(np.abs(y)) percent = 0.1 M = int(math.floor(percent*N)) out_ind = np.random.choice(N, M, replace=False) outs = np.sign(np.random.randn(M, 1))*O y[out_ind] = y[out_ind] + outs M = len(y) # ----------Code for unbiased L2 Kernel Ridge Regression (KRR-L2)----------- C = 0.0001 kernel_type = 'gaus' kernel_params = 0.004 sol = kernel_regression_l2_l2_unbiased(x.conj().transpose(), y, [C], kernel_type, kernel_params) a0 = sol[0:N] # Generate regressor t = np.array([range(0, samples)]).conj().transpose()*Ts # Here we generate all 1000 points that will be used for prediction #t = np.array([range(0, samples+1)]).conj().transpose()*Ts M2 = len(t) print(M2) z0 = np.zeros(shape=(M2, 1)) for k in range(0, M2): z0[k] = 0 for l in range(0, N): z0[k] = z0[k] + a0[l]*kappa(x[l], t[k], kernel_type,[kernel_params]) # ------------------end unbiased KRR-L2------------------------------------- # For unbiased KRR-L2 plt.figure(1) # plot(x,y); plt.xlabel('time in sec') plt.ylabel('amplitude') plt.plot(t, z0, 'r', lw=1) plt.plot(x, y, '.', markeredgecolor=[0.3, 0.3, 0.3], markersize=5) title = 'unbiased KRR-L2 C= %s' % str(C) plt.title(title) plt.show() ``` ```python def py_awgn(input_signal, snr_dB, rate=1.0): """ Addditive White Gaussian Noise (AWGN) Channel. Parameters __________ input_signal : 1D ndarray of floats Input signal to the channel. snr_dB : float Output SNR required in dB. rate : float Rate of the a FEC code used if any, otherwise 1. Returns _______ output_signal : 1D ndarray of floats Output signal from the channel with the specified SNR. 
""" avg_energy = np.sum(np.dot(input_signal.conj().T, input_signal)) / input_signal.shape[0] snr_linear = 10 ** (snr_dB / 10.0) noise_variance = avg_energy / (2 * rate * snr_linear) if input_signal.dtype is np.complex: noise = np.array([np.sqrt(noise_variance) * np.random.randn(input_signal.shape[0]) * (1 + 1j)], ndmin=2) else: noise = np.array([np.sqrt(2 * noise_variance) * np.random.randn(input_signal.shape[0])], ndmin=2) output_signal = input_signal + noise.conj().T return output_signal if __name__ == '__main__': kernelridge_11_19() ``` Construct the support vector regression model ```python N = 100 samples = 1000 indices = range(0, samples,int(samples/N)) start = 100000 [data, fs] = sf.read(path) sound = np.array(data[start:(start+samples+1), :], dtype=np.float32) y = np.reshape(sound[indices, 0], newshape=(len(indices), 1)) Ts = 1/fs # sampling period x = np.array(range(0, samples)).conj().transpose()*Ts # time instances of sampling x = x[indices] x = np.reshape(x, newshape=(x.shape[0], 1)) #print(x.shape) #------------------------------------------------------- # Add white Gaussian noise snr = 15 # dB y = py_awgn(y, snr) # add outliers O = 0.8*np.max(np.abs(y)) # set the outlier values to 80% percent = 0.1 M = int(math.floor(percent*N)) out_ind = np.random.choice(N, M, replace=False) outs = np.sign(np.random.randn(M, 1))*O y[out_ind] = y[out_ind] + outs y = y.ravel() M = len(y) # print(M) #print(x.shape) #print(y.shape) #------------------------------------------------------- # support vector regression #C=1 #epsilon=0.003 #kernel_type='gaus' #kernel_params=0.004 #------------------------------------------------------- def svm_regression(sigma, epsilon, C): # fit regression model # C - float, default=1.0: Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive. The penalty is a squared l2 penalty. # epsilon - float, default=0.1 Epsilon in the epsilon-SVR model. It specifies the epsilon-tube within which no penalty is associated in the training loss function with points predicted within a distance epsilon from the actual value. svr_rbf = SVR(kernel='rbf', C=C, gamma = 1/(2*(sigma**2)), epsilon = epsilon) svr_lin = SVR(kernel='linear', C=C, gamma = 1/(2*(sigma**2)), epsilon = epsilon) svr_poly = SVR(kernel='poly', C=C, gamma = 1/(2*(sigma**2)), epsilon = epsilon, degree=2) y_rbf = svr_rbf.fit(x, y).predict(x) y_lin = svr_lin.fit(x, y).predict(x) y_poly = svr_poly.fit(x, y).predict(x) ############################################################################### # look at the results plt.scatter(x, y, c='k', label='data') #plt.hold('on') plt.plot(x, y_rbf, c='g', label='RBF kernel') plt.plot(x, y_lin, c='r', label='Linear kernel') plt.plot(x, y_poly, c='b', label='Polynomial kernel') plt.xlabel('time in sec') plt.ylabel('amplitude') plt.title('Support Vector Regression') plt.legend() plt.show() plt.tight_layout() ``` (a) Find the reconstructed data samples obtained by the support vector regression. Employ the Gaussian kernel with $\sigma$ = 0.004 and set $\epsilon$ = 0.003 and $C$ = 1. Plot the fitted curve of the reconstructed samples together with the data used for training. ```python print(svm_regression(sigma = 0.004, epsilon = 0.003, C = 1)) ``` (b) Repeat step (a) using $C$ = 0.05, 0.1, 0.5, 5, 10, 100. 
```python print(svm_regression(sigma = 0.004, epsilon = 0.003, C = 0.05)) ``` ```python print(svm_regression(sigma = 0.004, epsilon = 0.003, C = 0.1)) ``` ```python print(svm_regression(sigma = 0.004, epsilon = 0.003, C = 0.5)) ``` ```python print(svm_regression(sigma = 0.004, epsilon = 0.003, C = 5)) ``` ```python print(svm_regression(sigma = 0.004, epsilon = 0.003, C = 10)) ``` ```python print(svm_regression(sigma = 0.004, epsilon = 0.003, C = 100)) ``` (c) Repeat step (a) using $\epsilon$ = 0.0005, 0.001, 0.01, 0.05, 0.1. ```python print(svm_regression(sigma = 0.004, epsilon = 0.0005, C = 1)) ``` ```python print(svm_regression(sigma = 0.004, epsilon = 0.001, C = 1)) ``` ```python print(svm_regression(sigma = 0.004, epsilon = 0.01, C = 1)) ``` ```python print(svm_regression(sigma = 0.004, epsilon = 0.05, C = 1)) ``` ```python print(svm_regression(sigma = 0.004, epsilon = 0.1, C = 1)) ``` (d) Repeat step (a) using $\sigma$ = 0.001, 0.002, 0.01, 0.05, 0.1. ```python print(svm_regression(sigma = 0.001, epsilon = 0.003, C = 1)) ``` ```python print(svm_regression(sigma = 0.002, epsilon = 0.003, C = 1)) ``` ```python print(svm_regression(sigma = 0.01, epsilon = 0.003, C = 1)) ``` ```python print(svm_regression(sigma = 0.05, epsilon = 0.003, C = 1)) ``` ```python print(svm_regression(sigma = 0.1, epsilon = 0.003, C = 1)) ``` (e) Comment on the results. We experiment with various Gaussian kernal, epsilon, and regularization parameters. After that, we conclude that the polynomial RBF kernel fits the dataset better. When we change the value of C, the model does not change vastly, and the model doesn't perform well in the linear and polynomial kernel. When we increase the value of espilon above 0.01, the predicted lines of SVR models almost touch the 0 in the y-axis, meaning that using a greater epsilon in SVR models with this dataset is not a good idea. Also, if the value of Gaussian kernal above 0.05, the fitted curve would not performs well. # Exercise 5 (15 points) Show, using Lagrange multipliers, that the $\ell_2$ minimizer in equation (9.18) from the textbook accepts the closed form solution $$ \hat{\theta} = X^T(XX^T)^{−1}y $$ Now, show that for the system $y = X\theta$ with $X \in R^{n×l}$ and $n > l$ the least squares solution is given by: $$ \hat{\theta} = (X^TX)^{−1}X^Ty $$ ## Answer - $\ell>n$ In regression, we want to select those components of $\theta$ that have the most say in the formation of the output variable. The solution here is given by $\theta$. The $\ell_2$ minimizer tries to minimize $$ min ||\theta||^{2}_{2} $$ Such that $$ x^{T}_{n}\theta = y_n, n = 1, 2, ..., N \to y = x \cdot \theta $$ where $N<\ell$ (features < observations) is the assumption. Because it penalizes very large coeffients. The corresponding Lagrangian can be calculated as follows: $$ L(\theta) = \theta^{T}\theta + \sum_{n=1}^{N}\lambda_n(y_n-X^{T}_n\theta) $$ Where $\lambda$ is the parameter that controls the weight of the constant. We take the first order of the above equation and set that to zero. This will give us the minimum: $$ L'(\theta) = 0 \to \theta = X^T\lambda $$ where$\lambda = [ \, \lambda_1, \lambda_2, ..., \lambda_N ] \,^T$, and $X$ being the input matrix $$ X = \begin{bmatrix} x_1^T \\ \vdots \\ x_N^T \end{bmatrix} $$ We plug the solution into the set of constraints, which can be written as $ y = X\theta$, now we get: $$ X\theta = y = XX^T\lambda \to \lambda = (XX^T)^{-1} y $$ So in ridge regression we have a closed form solution. 
Therefore, the final result is $$ \theta = X^T(XX^T)^{-1}y $$ ## Answer - $n>\ell$ We know that $$ y = X \cdot \theta $$ based on the above, we can get, $$ ||y - x \cdot \theta ||^2_2 = (y - x \cdot \theta)^T \cdot (y - x \cdot \theta)^T $$ then, $$ = y^T\cdot y-y^T\cdot x\cdot\theta-\theta^T\cdot x^T\cdot y+\theta^T\cdot x^T\cdot x\cdot\theta $$ then, $$ (-y^T\cdot x)^T - (x^T\cdot y) + 2 \cdot x^T \cdot x \cdot \theta = 0 $$ So we can get, $$ \theta = X^T(XX^T)^{-1}y $$ # Exercise 6 (10 points) Show that the null space of a full rank $N × l$ matrix $X$ is a subspace of imensionality $l − N$ , for $N < l$. ## Answer Because $N \cdot \ell$ is a full rank matrix where $N < \ell$. We also know `rank(x) = dim(rank(x)) = N`. So we have `dim(null(x)) + dim(rank(x)) = l`. Therefore, `dim(null(x)) = l-n`, where `dim` is a dimension of a matrix `x`. # Exercise 7 Generate in Python a sparse vector $\theta \in R^l, l = 100$, with its first five components taking random values drawn from a normal distribution with mean zero and variance one and the rest being equal to zero. Build, also, a sensing matrix $X$ with $N = 30$ rows having samples normally distributed, with mean zero and variance $1/N$, in order to get 30 observations based on the linear regression model $y = X\theta$. Then perform the following tasks. ```python import numpy as np import matplotlib.pyplot as plt import math import random np.random.seed(1234) N = 30 # 30 rows k = 5 # first five components taking random values drawn from a normal distribution with mean zero and variance one and the rest being equal to zero l = 100 # 100 columns theta = np.zeros((l, 1)) theta[0:k] = np.random.standard_normal(size = (k, 1)) # draw samples from a standard Normal distribution (mean=0, stdev=1) # print(theta) X = np.dot(np.random.standard_normal(size = (N, l)),(1/math.sqrt(N))) y = np.dot(X, theta) print("X size is {}".format(X.shape)) print("y size is {}".format(y.shape)) #print(theta) ``` X size is (30, 100) y size is (30, 1) (a) Use a LASSO implementation to reconstruct $\theta$ from $y$ and $X$. Lasso regression performs L1 regularization, which adds a penalty equal to the absolute value of the magnitude of coefficients. This type of regularization can result in sparse models with few coefficients; Some coefficients can become zero and eliminated from the model. Larger penalties result in coefficient values closer to zero, which is the ideal for producing simpler models. L1 tends to shink coefficients to zero. As we can see below, the cofficients are almost zero. ```python # https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Lasso.html from sklearn.linear_model import Lasso from sklearn.linear_model import LinearRegression lasso_reconst = Lasso(fit_intercept=True, normalize=True, max_iter=1e6) lasso_reconst.fit(X, y) error_lasso = np.linalg.norm(np.reshape(np.array(lasso_reconst.coef_), newshape=theta.shape)-theta) print(lasso_reconst.coef_, lasso_reconst.intercept_) print(error_lasso) ``` [ 0. -0. 0. -0. -0. -0. -0. 0. 0. -0. -0. -0. 0. 0. 0. 0. -0. -0. -0. -0. -0. 0. 0. 0. -0. -0. 0. 0. -0. 0. 0. 0. -0. -0. -0. 0. 0. 0. 0. 0. -0. 0. 0. 0. 0. -0. 0. 0. -0. 0. 0. -0. 0. -0. 0. 0. -0. -0. 0. -0. 0. 0. -0. -0. -0. 0. 0. 0. -0. -0. -0. 0. -0. 0. 0. 0. -0. 0. -0. 0. 0. 0. 0. -0. -0. 0. 0. 0. -0. -0. 0. 0. 0. -0. -0. -0. -0. 0. -0. -0.] [0.03136091] 2.0761316931119906 When we compare the coefficients in the linear model, they are non-zero. 
```python linear_reconst = LinearRegression() linear_reconst.fit(X, y) #error_lin = np.linalg.norm(theta - linear_reconst.coef_) print(linear_reconst.coef_) #print(error_lin) ``` [[ 0.1504216 -0.24852887 0.2851724 -0.08915576 -0.10327576 -0.12618162 -0.04602372 0.05473597 0.07021556 -0.05015795 -0.0767972 -0.00251141 0.08548682 0.19020808 0.06857647 0.03895134 -0.09698147 -0.10210427 -0.22613006 -0.04752304 -0.0878581 0.07235256 0.04932862 -0.00278751 -0.11543169 -0.00846009 0.12466218 -0.01432902 -0.04694153 0.09194229 0.02842385 0.05432163 -0.02859287 -0.10724279 -0.06568882 0.12864594 0.01044759 0.16077251 0.10993205 0.12337903 -0.04621973 0.08477134 0.04870411 0.11392832 0.11209879 -0.02476117 -0.02252337 -0.00092738 -0.11909358 0.07232582 -0.01891333 -0.14706331 -0.03034476 -0.08839947 0.0574367 0.08681011 -0.02308903 -0.02174072 0.13930187 -0.0745217 0.02068142 0.0311157 -0.03109188 -0.16113841 -0.01404451 0.07568557 0.04640705 0.06853807 -0.09811168 -0.19712532 -0.00386257 0.07060244 -0.11120965 -0.01141535 -0.02502231 0.12010114 -0.05180121 0.2208544 -0.10120215 0.02234681 0.04724423 -0.07606814 0.07811075 -0.00621995 -0.01792998 -0.00034544 0.06737222 0.01680416 -0.09043721 -0.03464077 0.02525259 -0.00389582 0.01578966 -0.03366412 -0.01596172 -0.105041 -0.19233346 0.01089446 0.05992144 0.02080798]] (b) Repeat the experiment 500 times, with different realizations of $X$, in order to compute the probability of correct reconstruction (assume the reconstruction is exact when $||y =X\theta|| < 10^{−8}$). The probability is 0 if we computer $||y = X\theta|| < 10^{-8}$ with the experiment 500 times. ```python rep = 500 # repeat the experiment 500 times error = np.zeros((rep, 1)) print(error.shape) for i in range(0, rep): X = np.random.randn(N,l)*(1/math.sqrt(N)) #X = np.dot(np.random.standard_normal(size = (N, l)),(1/math.sqrt(N))) # random values y = np.dot(X, theta) model = Lasso(alpha=0.0001, fit_intercept=False, normalize=False, max_iter=1e6).fit(X, y) values = model.coef_ errorX = np.linalg.norm(np.reshape(np.array(values), newshape = theta.shape)-theta) #print(errorX) if errorX < 10**(-8): error[i] = 1 else: error[i] = 0 probRandn = np.sum(error)/rep print('Random Sensing Mtx: '+ str(probRandn)) ``` (500, 1) Random Sensing Mtx: 0.0 (c) Repeat the same experiment (500 times) with matrices of the form $$ \begin{equation} \nonumber X(i, j) = \left\{ \begin{array}{l l} + \sqrt{\frac{\sqrt{p}}{N}}, & \quad\text{with probability} \frac{1}{2\sqrt{p}}\\ 0, & \quad\text{with probability} 1- \frac{1}{\sqrt{p}}\\ - \sqrt{\frac{\sqrt{p}}{N}}, &\quad\text{with probability} \frac{1}{2\sqrt{p}} \end{array} \right. \end{equation} $$ for p equal to 1, 9, 25, 36, 64 (make sure that each row and each column of $X$ has at least a nonzero component). Give an explanation why the probability of reconstruction falls as $p$ increases (observe that both the sensing matrix and the unknown vector are sparse). ## Explanation The range of error terms are going to wider if p increased. With the larger error value, the estimated results are more likely to be bias, so the probability of reconstruction falls as p increases. 
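As a side note, the three-valued entries defined above can be drawn directly with `np.random.choice`; the following is a small illustrative helper of my own (it is not part of the graded solution, and unlike the full code below it does not enforce at least one nonzero entry per row and column):

```python
import numpy as np

def sparse_sensing_matrix(N, l, p, rng=np.random):
    # entries are +/- sqrt(sqrt(p)/N), each with probability 1/(2*sqrt(p)),
    # and 0 with probability 1 - 1/sqrt(p), as specified in the exercise
    val = np.sqrt(np.sqrt(p) / N)
    q = 1.0 / (2.0 * np.sqrt(p))
    return rng.choice([val, 0.0, -val], size=(N, l), p=[q, 1.0 - 2.0 * q, q])

X_sparse = sparse_sensing_matrix(30, 100, p=9)
print((X_sparse == 0).mean())  # close to 1 - 1/sqrt(p) = 2/3 for p = 9
```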
```python p = [1, 9, 25, 36, 64] Error = np.zeros((rep, 1)) eachMinErr = 9999 eachMaxErr = 0 # construct sparse sensing matrices for pval in p: for i in range(rep): OK = False while not OK: kk = np.zeros((N*l, 1)) P = np.random.permutation(N*l) #numofzeros = round(N*l*(1-1/math.sqrt(pval))) numofzeros = int(np.around(N*l*(1-1/math.sqrt(pval)))) P[(N*l-numofzeros+1):] = 0 #kk[P] = np.multiply(np.matmul(math.sqrt((math.sqrt(pval)/N)), np.ones((len(P),1))), np.sign(np.random.randn(len(P),1))) kk[P] = np.sqrt((math.sqrt(pval)/N))*np.multiply(np.ones(shape=(len(P), 1)), np.sign(np.random.randn(len(P), 1))) X = np.zeros((N, l)) X[:(l*N)] = np.reshape(kk, newshape = (N, l)) #OK = np.linalg.matrix_rank(X) == N # Return matrix rank of array using SVD method, also check if there is a full rank # Check if it is full rank if np.linalg.matrix_rank(X) == N: OK = True else: OK = False y = np.dot(X, theta) L1_model = Lasso(alpha=0.0001, fit_intercept=False, normalize=False, max_iter=1e6).fit(X, y) values = L1_model.coef_ #errorX = np.linalg.norm(L1_model.coef_-theta) errorX = np.linalg.norm(np.reshape(np.array(values),newshape=theta.shape) - theta) #print(errorX) if errorX < 10 ** (-8): Error[i] = 1 else: Error[i] = 0 eachMinErr = min(eachMinErr, errorX) eachMaxErr = max(eachMaxErr, errorX) print("min err: {}, max err:{}".format(eachMinErr, eachMaxErr)) probSparse = np.sum(Error)/rep #print(pval) print('Sparse Sensing Mtx, p = '+str(pval)+', the probability is '+str(probSparse)) ``` min err: 0.0039041880894823524, max err:1.0101686575393911 Sparse Sensing Mtx, p = 1, the probability is 0.0 min err: 0.0039041880894823524, max err:1.5537180405710938 Sparse Sensing Mtx, p = 9, the probability is 0.0 min err: 0.003875605385666494, max err:1.5537180405710938 Sparse Sensing Mtx, p = 25, the probability is 0.0 min err: 0.0038363421335191514, max err:1.6001763493778818 Sparse Sensing Mtx, p = 36, the probability is 0.0 min err: 0.0034991983600240503, max err:1.7485271633545891 Sparse Sensing Mtx, p = 64, the probability is 0.0 # Testing Zone ```python """ kk = np.zeros((N*l, 1)) X = np.zeros((N, l)) print(kk.shape) print(X.shape) print(X[:l*N].shape) a = np.multiply(np.dot(math.sqrt((math.sqrt(9)/N)), np.ones((len(P),1))) , np.sign(np.random.rand(len(P),1))) print(a.shape) """ ``` '\nkk = np.zeros((N*l, 1))\nX = np.zeros((N, l))\nprint(kk.shape)\nprint(X.shape)\nprint(X[:l*N].shape)\na = np.multiply(np.dot(math.sqrt((math.sqrt(9)/N)), np.ones((len(P),1))) , np.sign(np.random.rand(len(P),1)))\nprint(a.shape)\n' ```python """ P = np.random.permutation(N*l) print(P.shape) """ ``` '\nP = np.random.permutation(N*l)\nprint(P.shape)\n' # Output ```python # should access the Google Drive files before running the chunk %%capture !sudo apt-get install texlive-xetex texlive-fonts-recommended texlive-plain-generic !jupyter nbconvert --to html "/content/drive/MyDrive/American_University/2021_Fall/DATA-642-001_Advanced Machine Learning/GitHub/Midterm/submit/midterm_Yunting.ipynb" ``` # References: - https://www.statisticshowto.com/lasso-regression/ - https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVR.html - https://ai.stanford.edu/~gwthomas/notes/convexity.pdf - https://en.wikipedia.org/wiki/Convex_function - https://math.stackexchange.com/questions/1356842/is-l-2-norm-a-strictly-convex-function - https://cims.nyu.edu/~cfgranda/pages/OBDA_spring16/material/convex_optimization.pdf - https://math.stackexchange.com/questions/80139/why-is-the-l-p-norm-strictly-convex-for-1p-infty - 
https://ai.stanford.edu/~gwthomas/notes/convexity.pdf - https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVR.html
```python import numpy as np np.random.seed(2017) import torch torch.manual_seed(2017) from scipy.misc import logsumexp # Use it for reference checking implementation ``` ```python seq_length, num_states=4, 2 emissions = np.random.randint(20, size=(seq_length,num_states))*1. transitions = np.random.randint(10, size=(num_states, num_states))*1. print("Emissions:", emissions, sep="\n") print("Transitions:", transitions, sep="\n") ``` Emissions: [[ 9. 6.] [ 13. 10.] [ 8. 18.] [ 3. 15.]] Transitions: [[ 7. 8.] [ 0. 8.]] ```python def viterbi_decoding(emissions, transitions): # Use help from: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/crf/python/ops/crf.py scores = np.zeros_like(emissions) back_pointers = np.zeros_like(emissions, dtype="int") scores = emissions[0] # Generate most likely scores and paths for each step in sequence for i in range(1, emissions.shape[0]): score_with_transition = np.expand_dims(scores, 1) + transitions scores = emissions[i] + score_with_transition.max(axis=0) back_pointers[i] = np.argmax(score_with_transition, 0) # Generate the most likely path viterbi = [np.argmax(scores)] for bp in reversed(back_pointers[1:]): viterbi.append(bp[viterbi[-1]]) viterbi.reverse() viterbi_score = np.max(scores) return viterbi_score, viterbi ``` ```python viterbi_decoding(emissions, transitions) ``` (78.0, [0, 0, 1, 1]) ```python def viterbi_decoding_torch(emissions, transitions): scores = torch.zeros(emissions.size(1)) back_pointers = torch.zeros(emissions.size()).int() scores = scores + emissions[0] # Generate most likely scores and paths for each step in sequence for i in range(1, emissions.size(0)): scores_with_transitions = scores.unsqueeze(1).expand_as(transitions) + transitions max_scores, back_pointers[i] = torch.max(scores_with_transitions, 0) scores = emissions[i] + max_scores # Generate the most likely path viterbi = [scores.numpy().argmax()] back_pointers = back_pointers.numpy() for bp in reversed(back_pointers[1:]): viterbi.append(bp[viterbi[-1]]) viterbi.reverse() viterbi_score = scores.numpy().max() return viterbi_score, viterbi ``` ```python viterbi_decoding_torch(torch.Tensor(emissions), torch.Tensor(transitions)) ``` (78.0, [0, 0, 1, 1]) ```python viterbi_decoding(emissions, transitions) ``` (78.0, [0, 0, 1, 1]) ```python def log_sum_exp(vecs, axis=None, keepdims=False): ## Use help from: https://github.com/scipy/scipy/blob/v0.18.1/scipy/misc/common.py#L20-L140 max_val = vecs.max(axis=axis, keepdims=True) vecs = vecs - max_val if not keepdims: max_val = max_val.squeeze(axis=axis) out_val = np.log(np.exp(vecs).sum(axis=axis, keepdims=keepdims)) return max_val + out_val ``` ```python def score_sequence(emissions, transitions, tags): # Use help from: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/crf/python/ops/crf.py score = emissions[0][tags[0]] for i, emission in enumerate(emissions[1:]): score = score + transitions[tags[i], tags[i+1]] + emission[tags[i+1]] return score ``` ```python score_sequence(emissions, transitions, [1,1,0,0]) ``` 42.0 ```python correct_seq = [0, 0, 1, 1] [transitions[correct_seq[i],correct_seq[i+1]] for i in range(len(correct_seq) -1)] ``` [7.0, 8.0, 8.0] ```python sum([transitions[correct_seq[i], correct_seq[i+1]] for i in range(len(correct_seq) -1)]) ``` 23.0 ```python viterbi_decoding(emissions, transitions) ``` (78.0, [0, 0, 1, 1]) ```python score_sequence(emissions, transitions, [0, 0, 1, 1]) ``` 78.0 ```python def score_sequence_torch(emissions, transitions, tags): score = 
emissions[0][tags[0]] for i, emission in enumerate(emissions[1:]): score = score + transitions[tags[i], tags[i+1]] + emission[tags[i+1]] return score ``` ```python score_sequence_torch(torch.Tensor(emissions), torch.Tensor(transitions), [0, 0, 1, 1]) ``` 78.0 ```python def get_all_tags(seq_length, num_labels): if seq_length == 0: yield [] return for sequence in get_all_tags(seq_length-1, num_labels): #print(sequence, seq_length) for label in range(num_labels): yield [label] + sequence list(get_all_tags(4,2)) ``` [[0, 0, 0, 0], [1, 0, 0, 0], [0, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 0], [1, 0, 1, 0], [0, 1, 1, 0], [1, 1, 1, 0], [0, 0, 0, 1], [1, 0, 0, 1], [0, 1, 0, 1], [1, 1, 0, 1], [0, 0, 1, 1], [1, 0, 1, 1], [0, 1, 1, 1], [1, 1, 1, 1]] ```python def get_all_tags_dp(seq_length, num_labels): prior_tags = [[]] for i in range(1, seq_length+1): new_tags = [] for label in range(num_labels): for tags in prior_tags: new_tags.append([label] + tags) prior_tags = new_tags return new_tags list(get_all_tags_dp(2,2)) ``` [[0, 0], [0, 1], [1, 0], [1, 1]] ```python def brute_force_score(emissions, transitions): # This is for ensuring the correctness of the dynamic programming method. # DO NOT run with very high values of number of labels or sequence lengths for tags in get_all_tags_dp(*emissions.shape): yield score_sequence(emissions, transitions, tags) brute_force_sequence_scores = list(brute_force_score(emissions, transitions)) print(brute_force_sequence_scores) ``` [54.0, 67.0, 58.0, 78.0, 45.0, 58.0, 56.0, 76.0, 44.0, 57.0, 48.0, 68.0, 42.0, 55.0, 53.0, 73.0] ```python max(brute_force_sequence_scores) # Best score calcuated using brute force ``` 78.0 ```python log_sum_exp(np.array(brute_force_sequence_scores)) # Partition function ``` 78.132899613126483 ```python def forward_algorithm_naive(emissions, transitions): scores = emissions[0] # Get the log sum exp score for i in range(1,emissions.shape[0]): print(scores) alphas_t = np.zeros_like(scores) # Forward vars at timestep t for j in range(emissions.shape[1]): emit_score = emissions[i,j] trans_score = transitions.T[j] next_tag_var = scores + trans_score alphas_t[j] = log_sum_exp(next_tag_var) + emit_score scores = alphas_t return log_sum_exp(scores) ``` ```python forward_algorithm_naive(emissions, transitions) ``` [ 9. 6.] [ 29.0000454 27.04858735] [ 44.00017494 55.13288499] 78.132899613126483 ```python def forward_algorithm_vec_check(emissions, transitions): # This is for checking the correctedness of log_sum_exp function compared to scipy scores = emissions[0] scores_naive = emissions[0] # Get the log sum exp score for i in range(1, emissions.shape[0]): print(scores, scores_naive) scores = emissions[i] + logsumexp( scores_naive + transitions.T, axis=1) scores_naive = emissions[i] + np.array([log_sum_exp( scores_naive + transitions.T[j]) for j in range(emissions.shape[1])]) print(scores, scores_naive) return logsumexp(scores), log_sum_exp(scores_naive) ``` ```python forward_algorithm_vec_check(emissions, transitions) ``` [ 9. 6.] [ 9. 6.] 
[ 29.0000454 27.04858735] [ 29.0000454 27.04858735] [ 44.00017494 55.13288499] [ 44.00017494 55.13288499] [ 58.14879707 78.13289961] [ 58.14879707 78.13289961] (78.132899613126483, 78.132899613126483) ```python def forward_algorithm(emissions, transitions): scores = emissions[0] # Get the log sum exp score for i in range(1, emissions.shape[0]): scores = emissions[i] + log_sum_exp( scores + transitions.T, axis=1) return log_sum_exp(scores) ``` ```python forward_algorithm(emissions, transitions) ``` 78.132899613126483 ```python tt = torch.Tensor(emissions) tt_max, _ = tt.max(1) ``` ```python tt_max.expand_as(tt) ``` 9 9 13 13 18 18 15 15 [torch.FloatTensor of size 4x2] ```python tt.sum(0) ``` 33 49 [torch.FloatTensor of size 1x2] ```python tt.squeeze(0) ``` 9 6 13 10 8 18 3 15 [torch.FloatTensor of size 4x2] ```python tt.transpose(-1,-2) ``` 9 13 8 3 6 10 18 15 [torch.FloatTensor of size 2x4] ```python tt.ndimension() ``` 2 ```python def log_sum_exp_torch(vecs, axis=None): ## Use help from: http://pytorch.org/tutorials/beginner/nlp/advanced_tutorial.html#sphx-glr-beginner-nlp-advanced-tutorial-py if axis < 0: axis = vecs.ndimension()+axis max_val, _ = vecs.max(axis) vecs = vecs - max_val.expand_as(vecs) out_val = torch.log(torch.exp(vecs).sum(axis)) #print(max_val, out_val) return max_val + out_val ``` ```python def forward_algorithm_torch(emissions, transitions): scores = emissions[0] # Get the log sum exp score transitions = transitions.transpose(-1,-2) for i in range(1, emissions.size(0)): scores = emissions[i] + log_sum_exp_torch( scores.expand_as(transitions) + transitions, axis=1) return log_sum_exp_torch(scores, axis=-1) ``` ```python forward_algorithm_torch(torch.Tensor(emissions), torch.Tensor(transitions)) ``` 78.1329 [torch.FloatTensor of size 1] The core idea is to find the sequence of states $y = \{y_0, y_1, ..., y_N\}$ which have the highest probability given the input $X = \{X_0, X_1, ..., X_N\}$ as follows: $$ \begin{equation} p(y\mid X) = \prod_{i=0}^{N}{p(y_i\mid X_i)p(y_i \mid y_{i-1})}\\ \log{p(y\mid X)} = \sum_{i=0}^{N}{\log{p(y_i\mid X_i)} + \log{p(y_i \mid y_{i-1})}}\\ \end{equation} $$ Now $\log{p(y_i\mid X_i)}$ and $\log{p(y_i \mid y_{i-1})}$ can be parameterized as follows: $$ \begin{equation} \log{p(y_i\mid X_i)} = \sum_{l=0}^{L}{\sum_{k=0}^{K}{w_{k}^{l}*\phi_{k}^{l}(X_i, y_i)}}\\ \log{p(y_i\mid y_{y-1})} = \sum_{l=0}^{L}{\sum_{l'=0}^{L}{w_{l'}^{l}*\psi_{l'}^{l}(y_i, y_{i-1})}}\\ \implies \log{p(y\mid X)} = \sum_{i=0}^{N}{(\sum_{l=0}^{L}{\sum_{k=0}^{K}{w_{k}^{l}*\phi_{k}^{l}(X_i, y_i)}} + \sum_{l=0}^{L}{\sum_{l'=0}^{L}{w_{l'}^{l}*\psi_{l'}^{l}(y_i, y_{i-1})}})}\\ \implies \log{p(y\mid X)} = \sum_{i=0}^{N}{(\Phi(X_i)W_{emission} + \log{p(y_{i-1} \mid X_{i-1})}W_{transition})} \end{equation} $$ Where, * $N$ is the sequence length * $K$ is number of feature functions, * $L$ is number of states * $W_{emission}$ is $K*L$ matrix * $W_{transition}$ is $L*L$ matrix * $\Phi(X_i)$ is a feature vector of shape $1*K$ * $(\Phi(X_i)W_{emission} + \log{p(y_{i-1} \mid X_{i-1})}W_{transition})$ gives the score for each label ```python ```
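The parameterization above also gives the training objective for a linear-chain CRF: the negative log-likelihood of a labeled sequence is the log partition function (computed by the forward algorithm) minus the score of the gold tag sequence. Here is a minimal sketch using the NumPy helpers defined earlier in this notebook (the loss function name is mine):

```python
def crf_neg_log_likelihood(emissions, transitions, tags):
    # log partition function over all tag sequences (forward algorithm)
    log_Z = forward_algorithm(emissions, transitions)
    # unnormalized score of the gold tag sequence
    gold_score = score_sequence(emissions, transitions, tags)
    return log_Z - gold_score

# the Viterbi-optimal path [0, 0, 1, 1] gets a much smaller loss than [1, 1, 0, 0]
print(crf_neg_log_likelihood(emissions, transitions, [0, 0, 1, 1]))
print(crf_neg_log_likelihood(emissions, transitions, [1, 1, 0, 0]))
```

Minimizing this quantity with respect to the emission and transition parameters is how the CRF weights $W_{emission}$ and $W_{transition}$ would be learned.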
# Anomaly Detection with Adaptive Fourier Features on Quantum Computer Simulator

```python
!pip install qiskit
!pip install pylatexenc
```

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
albumentations 0.1.12 requires imgaug<0.2.7,>=0.2.5, but you have imgaug 0.2.9 which is incompatible.
Successfully installed cryptography-37.0.2 ntlm-auth-1.5.0 pbr-5.9.0 ply-3.11 python-constraint-1.4.0 qiskit-0.36.2 qiskit-aer-0.10.4 qiskit-ibmq-provider-0.19.1 qiskit-ignis-0.7.1 qiskit-terra-0.20.2 requests-ntlm-1.1.0 retworkx-0.11.0 scipy-1.7.3 stevedore-3.5.0 symengine-0.9.2 tweedledum-1.1.1 websocket-client-1.3.2 websockets-10.3
Successfully installed pylatexenc-2.10

gamma = 1, percentile = 9.54, test val train

## Mount Google Drive

```python
from google.colab import drive
drive.mount('/content/drive')
```

Mounted at /content/drive

Load from Drive

Load from drive .mat file

```python
!pip install --upgrade --no-cache-dir gdown
```

Requirement already satisfied: gdown in /usr/local/lib/python3.7/dist-packages (4.4.0)

```python
#Loading .mat Cardiotocography dataset file
!gdown --id 1j4qIus2Bl44Om0UiOu4o4f__wVwUeDfP
```

/usr/local/lib/python3.7/dist-packages/gdown/cli.py:131: FutureWarning: Option `--id` was deprecated in version 4.3.1 and will be removed in 5.0. You don't need to pass it anymore to use a file ID.
category=FutureWarning,
Downloading...
From: https://drive.google.com/uc?id=1j4qIus2Bl44Om0UiOu4o4f__wVwUeDfP To: /content/cardio.mat 100% 68.3k/68.3k [00:00<00:00, 64.3MB/s] ```python import numpy as np from time import time from sklearn.kernel_approximation import RBFSampler from sklearn.metrics import roc_auc_score import matplotlib.pyplot as plt from scipy import io cardio = io.loadmat("cardio.mat") cardio["X"].shape, cardio["y"].shape ``` ((1831, 21), (1831, 1)) ```python from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit from qiskit import Aer, execute ``` Preprocessing np.load object --> X, y (scaled) normal: '1' anomalies: '0' ```python from sklearn.preprocessing import MinMaxScaler from scipy.stats import zscore def preprocessing_cardio(data): features, labels = cardio["X"], cardio["y"] labels = 1 - labels return features, labels cardio_X, cardio_y = preprocessing_cardio(cardio) cardio_X.shape, cardio_y.shape ``` ((1831, 21), (1831, 1)) Random fourier features parameters: gamma, dimensions, random_state X --> rff(X) ```python from sklearn.kernel_approximation import RBFSampler """ Code from https://arxiv.org/abs/2004.01227 """ class QFeatureMap: def get_dim(self, num_features): pass def batch2wf(self, X): pass def batch2dm(self, X): psi = self.batch2wf(X) rho = np.einsum('...i,...j', psi, np.conj(psi)) return rho class QFeatureMap_rff(QFeatureMap): def __init__(self, rbf_sampler): self.rbf_sampler = rbf_sampler self.weights = np.array(rbf_sampler.random_weights_) self.offset = np.array(rbf_sampler.random_offset_) self.dim = rbf_sampler.get_params()['n_components'] def get_dim(self, num_features): return self.dim def batch2wf(self, X): vals = np.dot(X, self.weights) + self.offset vals = np.cos(vals) vals *= np.sqrt(2.) / np.sqrt(self.dim) norms = np.linalg.norm(vals, axis=1) psi = vals / norms[:, np.newaxis] return psi ``` ```python # Create the RandomFourierFeature map def rff(X, dim, gamma): feature_map_fourier = RBFSampler(gamma=gamma, n_components=dim, random_state=None) X_feat_train = feature_map_fourier.fit(cardio_X) rffmap = QFeatureMap_rff(rbf_sampler=feature_map_fourier) Crff = rffmap.batch2wf(cardio_X) return Crff ``` Train test split ```python from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(cardio_X, cardio_y, test_size=0.2, stratify=cardio_y, random_state=42) X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25, stratify=y_train, random_state=42) print(f"shape of X_train: {X_train.shape} X_test: {X_test.shape} X_val {X_val.shape}") n_classes = np.bincount(y_test.ravel().astype(np.int64)) print(f"classes: 0: {n_classes[0]} 1: {n_classes[1]} %-anomalies: {n_classes[0] / (n_classes[0] + n_classes[1])}") #print(f"classes: 0: {n_classes[0]} 1: {n_classes[1]} %-anomalies: {n_classes[1] / (n_classes[0] + n_classes[1])}") ``` shape of X_train: (1098, 21) X_test: (367, 21) X_val (366, 21) classes: 0: 35 1: 332 %-anomalies: 0.09536784741144415 ## Quantum Prediction Density Matrix Build Pure State: x_train --> U (matrix) Mixed State: X_train --> lambda (vec) , U (matrix) ```python def pure_state(Ctrain): phi_train = np.sum(Ctrain, axis=0) phi_train = phi_train / np.linalg.norm(phi_train) size_U = len(phi_train) U_train = np.zeros((size_U, size_U)) x_1 = phi_train U_train[:, 0] = x_1 for i in range(1, size_U): x_i = np.random.randn(size_U) for j in range(0, i): x_i -= x_i.dot(U_train[:, j]) * U_train[:, j] x_i = x_i / np.linalg.norm(x_i) U_train[:, i] = x_i return U_train ``` ```python import copy def 
mixed_state(Ctrain): Z_train = np.outer(Ctrain[0], Ctrain[0]) for i in range(1, len(Ctrain)): Z_train += np.outer(Ctrain[i], Ctrain[i]) Z_train *= 1/len(Ctrain) lambda_P1_temp, U_train = np.linalg.eigh(Z_train) return lambda_P1_temp, U_train ``` Running the Quantum Circuits parameters: eigvals, U, n_shots X_test --> preds ```python from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit from qiskit import Aer, execute ``` ```python backend2 = Aer.get_backend('qasm_simulator') # np.random.seed(1234) ``` ```python def quantum_circuit_mixed_four(Ctest, eigvals, U_train, n_shots=10000): Cpred = [] for i in range(len(Ctest)): qc = QuantumCircuit(4, 2) qc.initialize(Ctest[i], [0, 1]) qc.initialize(np.sqrt(eigvals), [2, 3]) qc.isometry(U_train.T, [], [0, 1]) qc.cnot(3, 1) qc.cnot(2, 0) qc.measure(0, 0) qc.measure(1, 1) counts = execute(qc, backend2, shots=n_shots).result().get_counts() try: Cpred.append(counts['00'] / n_shots) except: Cpred.append(0) return Cpred ``` ```python def quantum_circuit_pure_four(Ctest, U_train, n_shots=10000): Cpred = [] for i in range(len(Ctest)): qc = QuantumCircuit(2, 2) qc.initialize(Ctest[i], [0, 1]) qc.isometry(U_train.T, [], [0, 1]) # ArbRot as a isometry qc.measure(0, 0) qc.measure(1, 1) counts = execute(qc, backend2, shots=n_shots).result().get_counts() try: Cpred.append(np.sqrt(counts['00'] / n_shots)) except: Cpred.append(0) return Cpred ``` Classification The threshold is calculated using weighted f1_score X_test --> classification_report, threshold ```python from sklearn.metrics import roc_curve, f1_score from sklearn.metrics import classification_report def classification(preds_val, preds_test, y_test): thredhold = np.percentile(preds_val, q = 9.54) y_pred = preds_test > thredhold return classification_report(y_test, y_pred, digits=4) ``` ```python gammas = [0.0078125] dim = 4 num_exps = 5 exp_time = time() for gamma in gammas: for i in range(num_exps): print("4x4 Pure, experiment", i) print("Gamma:", gamma) feature_map_fourier = RBFSampler(gamma=gamma, n_components=dim) # original gamma 2 X_feat_train = feature_map_fourier.fit(X_train) rffmap = QFeatureMap_rff(rbf_sampler=feature_map_fourier) X_feat_train = rffmap.batch2wf(X_train) X_feat_val = rffmap.batch2wf(X_val) X_feat_test = rffmap.batch2wf(X_test) U = pure_state(X_feat_train) preds_val = quantum_circuit_pure_four(X_feat_val, U) preds_test = quantum_circuit_pure_four(X_feat_test, U) print(classification(preds_val, preds_test, y_test)) print(f"AUC = {round(roc_auc_score(y_test, preds_test), 4)}") print(time() - exp_time) exp_time = time() print("4x4 Mixed, experiment", i) print("Gamma:", gamma) eigvals, U = mixed_state(X_feat_train) preds_val = quantum_circuit_mixed_four(X_feat_val, eigvals, U) preds_test = quantum_circuit_mixed_four(X_feat_test, eigvals, U) print(classification(preds_val, preds_test, y_test)) print(f"AUC = {round(roc_auc_score(y_test, preds_test), 4)}") print(time() - exp_time) exp_time = time() ``` 4x4 Pure, experiment 0 Gamma: 0.0282 precision recall f1-score support 0.0 0.2222 0.1143 0.1509 35 1.0 0.9112 0.9578 0.9339 332 accuracy 0.8774 367 macro avg 0.5667 0.5361 0.5424 367 weighted avg 0.8455 0.8774 0.8592 367 AUC = 0.6635 67.05944108963013 4x4 Mixed, experiment 0 Gamma: 0.0282 precision recall f1-score support 0.0 0.1538 0.1143 0.1311 35 1.0 0.9091 0.9337 0.9212 332 accuracy 0.8556 367 macro avg 0.5315 0.5240 0.5262 367 weighted avg 0.8371 0.8556 0.8459 367 AUC = 0.7065 345.49100279808044 4x4 Pure, experiment 1 Gamma: 0.0282 precision recall f1-score support 
0.0 0.3429 0.3429 0.3429 35 1.0 0.9307 0.9307 0.9307 332 accuracy 0.8747 367 macro avg 0.6368 0.6368 0.6368 367 weighted avg 0.8747 0.8747 0.8747 367 AUC = 0.719 66.59816336631775 4x4 Mixed, experiment 1 Gamma: 0.0282 precision recall f1-score support 0.0 0.3438 0.3143 0.3284 35 1.0 0.9284 0.9367 0.9325 332 accuracy 0.8774 367 macro avg 0.6361 0.6255 0.6304 367 weighted avg 0.8726 0.8774 0.8749 367 AUC = 0.7412 342.6536514759064 4x4 Pure, experiment 2 Gamma: 0.0282 precision recall f1-score support 0.0 0.2051 0.2286 0.2162 35 1.0 0.9177 0.9066 0.9121 332 accuracy 0.8420 367 macro avg 0.5614 0.5676 0.5642 367 weighted avg 0.8497 0.8420 0.8458 367 AUC = 0.7031 66.87918972969055 4x4 Mixed, experiment 2 Gamma: 0.0282 precision recall f1-score support 0.0 0.2368 0.2571 0.2466 35 1.0 0.9210 0.9127 0.9168 332 accuracy 0.8501 367 macro avg 0.5789 0.5849 0.5817 367 weighted avg 0.8557 0.8501 0.8529 367 AUC = 0.701 338.1283895969391 4x4 Pure, experiment 3 Gamma: 0.0282 precision recall f1-score support 0.0 0.4194 0.3714 0.3939 35 1.0 0.9345 0.9458 0.9401 332 accuracy 0.8910 367 macro avg 0.6769 0.6586 0.6670 367 weighted avg 0.8854 0.8910 0.8880 367 AUC = 0.792 63.4641752243042 4x4 Mixed, experiment 3 Gamma: 0.0282 precision recall f1-score support 0.0 0.4194 0.3714 0.3939 35 1.0 0.9345 0.9458 0.9401 332 accuracy 0.8910 367 macro avg 0.6769 0.6586 0.6670 367 weighted avg 0.8854 0.8910 0.8880 367 AUC = 0.7773 335.48702812194824 4x4 Pure, experiment 4 Gamma: 0.0282 precision recall f1-score support 0.0 0.1765 0.1714 0.1739 35 1.0 0.9129 0.9157 0.9143 332 accuracy 0.8447 367 macro avg 0.5447 0.5435 0.5441 367 weighted avg 0.8427 0.8447 0.8437 367 AUC = 0.6912 64.79746794700623 4x4 Mixed, experiment 4 Gamma: 0.0282 precision recall f1-score support 0.0 0.2308 0.1714 0.1967 35 1.0 0.9150 0.9398 0.9272 332 accuracy 0.8665 367 macro avg 0.5729 0.5556 0.5620 367 weighted avg 0.8497 0.8665 0.8575 367 AUC = 0.7059 332.08904814720154 # Quantum Prediction with Adaptive RFF ## Clone the QMC from GitHUB ```python !pip install git+https://github.com/fagonzalezo/qmc.git ``` Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/ Collecting git+https://github.com/fagonzalezo/qmc.git Cloning https://github.com/fagonzalezo/qmc.git to /tmp/pip-req-build-9ojels8a Running command git clone -q https://github.com/fagonzalezo/qmc.git /tmp/pip-req-build-9ojels8a Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from qmc==0.0.1) (1.7.3) Requirement already satisfied: numpy>=1.19.2 in /usr/local/lib/python3.7/dist-packages (from qmc==0.0.1) (1.21.6) Requirement already satisfied: scikit-learn in /usr/local/lib/python3.7/dist-packages (from qmc==0.0.1) (1.0.2) Requirement already satisfied: tensorflow>=2.2.0 in /usr/local/lib/python3.7/dist-packages (from qmc==0.0.1) (2.8.2+zzzcolab20220527125636) Requirement already satisfied: typeguard in /usr/local/lib/python3.7/dist-packages (from qmc==0.0.1) (2.7.1) Requirement already satisfied: protobuf<3.20,>=3.9.2 in /usr/local/lib/python3.7/dist-packages (from tensorflow>=2.2.0->qmc==0.0.1) (3.17.3) Requirement already satisfied: keras<2.9,>=2.8.0rc0 in /usr/local/lib/python3.7/dist-packages (from tensorflow>=2.2.0->qmc==0.0.1) (2.8.0) Requirement already satisfied: opt-einsum>=2.3.2 in /usr/local/lib/python3.7/dist-packages (from tensorflow>=2.2.0->qmc==0.0.1) (3.3.0) Requirement already satisfied: six>=1.12.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow>=2.2.0->qmc==0.0.1) (1.15.0) Requirement 
already satisfied: tensorboard<2.9,>=2.8 in /usr/local/lib/python3.7/dist-packages (from tensorflow>=2.2.0->qmc==0.0.1) (2.8.0) Requirement already satisfied: grpcio<2.0,>=1.24.3 in /usr/local/lib/python3.7/dist-packages (from tensorflow>=2.2.0->qmc==0.0.1) (1.46.3) Requirement already satisfied: gast>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow>=2.2.0->qmc==0.0.1) (0.5.3) Requirement already satisfied: tensorflow-estimator<2.9,>=2.8 in /usr/local/lib/python3.7/dist-packages (from tensorflow>=2.2.0->qmc==0.0.1) (2.8.0) Requirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow>=2.2.0->qmc==0.0.1) (1.1.0) Requirement already satisfied: tensorflow-io-gcs-filesystem>=0.23.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow>=2.2.0->qmc==0.0.1) (0.26.0) Requirement already satisfied: astunparse>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow>=2.2.0->qmc==0.0.1) (1.6.3) Requirement already satisfied: google-pasta>=0.1.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow>=2.2.0->qmc==0.0.1) (0.2.0) Requirement already satisfied: typing-extensions>=3.6.6 in /usr/local/lib/python3.7/dist-packages (from tensorflow>=2.2.0->qmc==0.0.1) (4.2.0) Requirement already satisfied: libclang>=9.0.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow>=2.2.0->qmc==0.0.1) (14.0.1) Requirement already satisfied: wrapt>=1.11.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow>=2.2.0->qmc==0.0.1) (1.14.1) Requirement already satisfied: h5py>=2.9.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow>=2.2.0->qmc==0.0.1) (3.1.0) Requirement already satisfied: flatbuffers>=1.12 in /usr/local/lib/python3.7/dist-packages (from tensorflow>=2.2.0->qmc==0.0.1) (2.0) Requirement already satisfied: absl-py>=0.4.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow>=2.2.0->qmc==0.0.1) (1.0.0) Requirement already satisfied: keras-preprocessing>=1.1.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow>=2.2.0->qmc==0.0.1) (1.1.2) Requirement already satisfied: setuptools in /usr/local/lib/python3.7/dist-packages (from tensorflow>=2.2.0->qmc==0.0.1) (57.4.0) Requirement already satisfied: wheel<1.0,>=0.23.0 in /usr/local/lib/python3.7/dist-packages (from astunparse>=1.6.0->tensorflow>=2.2.0->qmc==0.0.1) (0.37.1) Requirement already satisfied: cached-property in /usr/local/lib/python3.7/dist-packages (from h5py>=2.9.0->tensorflow>=2.2.0->qmc==0.0.1) (1.5.2) Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.7/dist-packages (from tensorboard<2.9,>=2.8->tensorflow>=2.2.0->qmc==0.0.1) (0.4.6) Requirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard<2.9,>=2.8->tensorflow>=2.2.0->qmc==0.0.1) (0.6.1) Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.7/dist-packages (from tensorboard<2.9,>=2.8->tensorflow>=2.2.0->qmc==0.0.1) (1.0.1) Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard<2.9,>=2.8->tensorflow>=2.2.0->qmc==0.0.1) (1.8.1) Requirement already satisfied: requests<3,>=2.21.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard<2.9,>=2.8->tensorflow>=2.2.0->qmc==0.0.1) (2.23.0) Requirement already satisfied: google-auth<3,>=1.6.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard<2.9,>=2.8->tensorflow>=2.2.0->qmc==0.0.1) (1.35.0) Requirement already satisfied: markdown>=2.6.8 
in /usr/local/lib/python3.7/dist-packages (from tensorboard<2.9,>=2.8->tensorflow>=2.2.0->qmc==0.0.1) (3.3.7) Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from google-auth<3,>=1.6.3->tensorboard<2.9,>=2.8->tensorflow>=2.2.0->qmc==0.0.1) (0.2.8) Requirement already satisfied: rsa<5,>=3.1.4 in /usr/local/lib/python3.7/dist-packages (from google-auth<3,>=1.6.3->tensorboard<2.9,>=2.8->tensorflow>=2.2.0->qmc==0.0.1) (4.8) Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from google-auth<3,>=1.6.3->tensorboard<2.9,>=2.8->tensorflow>=2.2.0->qmc==0.0.1) (4.2.4) Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.7/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.9,>=2.8->tensorflow>=2.2.0->qmc==0.0.1) (1.3.1) Requirement already satisfied: importlib-metadata>=4.4 in /usr/local/lib/python3.7/dist-packages (from markdown>=2.6.8->tensorboard<2.9,>=2.8->tensorflow>=2.2.0->qmc==0.0.1) (4.11.4) Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata>=4.4->markdown>=2.6.8->tensorboard<2.9,>=2.8->tensorflow>=2.2.0->qmc==0.0.1) (3.8.0) Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.7/dist-packages (from pyasn1-modules>=0.2.1->google-auth<3,>=1.6.3->tensorboard<2.9,>=2.8->tensorflow>=2.2.0->qmc==0.0.1) (0.4.8) Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard<2.9,>=2.8->tensorflow>=2.2.0->qmc==0.0.1) (2.10) Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard<2.9,>=2.8->tensorflow>=2.2.0->qmc==0.0.1) (3.0.4) Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard<2.9,>=2.8->tensorflow>=2.2.0->qmc==0.0.1) (1.24.3) Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard<2.9,>=2.8->tensorflow>=2.2.0->qmc==0.0.1) (2022.5.18.1) Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.9,>=2.8->tensorflow>=2.2.0->qmc==0.0.1) (3.2.0) Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/dist-packages (from scikit-learn->qmc==0.0.1) (1.1.0) Requirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from scikit-learn->qmc==0.0.1) (3.1.0) Building wheels for collected packages: qmc Building wheel for qmc (setup.py) ... 
[?25l[?25hdone Created wheel for qmc: filename=qmc-0.0.1-py3-none-any.whl size=12757 sha256=97575d9b21b37edc1e810b591f74f15ebf1cacda8041d9c863ab509b369bec65 Stored in directory: /tmp/pip-ephem-wheel-cache-y7xaqn18/wheels/b2/d2/8d/5870208920445c46dfe694f549251e5f63d7afbee56c01f720 Successfully built qmc Installing collected packages: qmc Successfully installed qmc-0.0.1 ```python import tensorflow as tf import numpy as np import qmc.tf.layers as layers import qmc.tf.models as models ``` ## One dimensional approximation ```python import tensorflow as tf class QFeatureMapAdaptRFF(layers.QFeatureMapRFF): def __init__( self, gamma_trainable=True, weights_trainable=True, **kwargs ): self.g_trainable = gamma_trainable self.w_trainable = weights_trainable super().__init__(**kwargs) def build(self, input_shape): rbf_sampler = RBFSampler( gamma=0.5, n_components=self.dim, random_state=self.random_state) x = np.zeros(shape=(1, self.input_dim)) rbf_sampler.fit(x) self.gamma_val = tf.Variable( initial_value=self.gamma, dtype=tf.float32, trainable=self.g_trainable, name="rff_gamma") self.rff_weights = tf.Variable( initial_value=rbf_sampler.random_weights_, dtype=tf.float32, trainable=self.w_trainable, name="rff_weights") self.offset = tf.Variable( initial_value=rbf_sampler.random_offset_, dtype=tf.float32, trainable=self.w_trainable, name="offset") self.built = True def call(self, inputs): vals = tf.sqrt(2 * self.gamma_val) * tf.matmul(inputs, self.rff_weights) + self.offset vals = tf.cos(vals) vals = vals * tf.sqrt(2. / self.dim) norms = tf.linalg.norm(vals, axis=-1) psi = vals / tf.expand_dims(norms, axis=-1) return psi class DMRFF(tf.keras.Model): def __init__(self, dim_x, num_rff, gamma=1, random_state=None): super().__init__() self.rff_layer = QFeatureMapAdaptRFF(input_dim=dim_x, dim=num_rff, gamma=gamma, random_state=random_state, gamma_trainable=False) def call(self, inputs): x1 = inputs[:, 0] x2 = inputs[:, 1] phi1 = self.rff_layer(x1) phi2 = self.rff_layer(x2) dot = tf.einsum('...i,...i->...', phi1, phi2) ** 2 return dot def calc_rbf(dmrff, x1, x2): return dmrff.predict(np.concatenate([x1[:, np.newaxis, ...], x2[:, np.newaxis, ...]], axis=1), batch_size=256) # dmrff = DMRFF(dim_x=22, num_rff=n_rffs, gamma=gamma / 2, random_state=0) # dm_rff_pdf = calc_rbf(dmrff, np.broadcast_to(mean, x.shape), x) # pl.plot(x, gauss_pdf, 'r-', alpha=0.6, label='Gaussian kernel') # pl.plot(x, dm_rff_pdf, 'b-', alpha=0.6, label='dmrff kernel') # pl.title("$dim = "+str(n_rffs)+"$") # pl.legend() ``` # QAD Adaptive RFF ```python import pylab as pl ``` ```python X_train.shape X_test.shape ``` (367, 21) ```python num_samples = 100000 rnd_idx1 = np.random.randint(X_train.shape[0],size=(num_samples, )) rnd_idx2 = np.random.randint(X_train.shape[0],size=(num_samples, )) #x_train_rff = [X_train[rnd_idx1], X_train[rnd_idx2]] x_train_rff = np.concatenate([X_train[rnd_idx1][:, np.newaxis, ...], X_train[rnd_idx2][:, np.newaxis, ...]], axis=1) dists = np.linalg.norm(x_train_rff[:, 0, ...] 
- x_train_rff[:, 1, ...], axis=1) print(dists.shape) pl.hist(dists) print(np.quantile(dists, 0.001)) rnd_idx1 = np.random.randint(X_test.shape[0],size=(num_samples, )) rnd_idx2 = np.random.randint(X_test.shape[0],size=(num_samples, )) #x_test_rff = [X_test[rnd_idx1], X_test[rnd_idx2]] x_test_rff = np.concatenate([X_test[rnd_idx1][:, np.newaxis, ...], X_test[rnd_idx2][:, np.newaxis, ...]], axis=1) ``` ```python def gauss_kernel_arr(x, y, gamma): return np.exp(-gamma * np.linalg.norm(x - y, axis=1) ** 2) ``` ```python sigma = np.quantile(dists, 0.01) gamma = 1/(2 * sigma ** 2) gamma_index = 6 # index 7 corresponds to gamma = 2**(-7) gammas = 1/(2**(np.arange(11))) print(gammas) n_rffs = 4 print(f'Gamma: {gammas[gamma_index ]}') y_train_rff = gauss_kernel_arr(x_train_rff[:, 0, ...], x_train_rff[:, 1, ...], gamma=gammas[gamma_index]) # gamma without square y_test_rff = gauss_kernel_arr(x_test_rff[:, 0, ...], x_test_rff[:, 1, ...], gamma=gammas[gamma_index]) # gamma without square dmrff = DMRFF(dim_x=21, num_rff=n_rffs, gamma=gammas[gamma_index ]/2, random_state=np.random.randint(10000)) # original rs = 0 dm_rbf = calc_rbf(dmrff, x_test_rff[:, 0, ...], x_test_rff[:, 1, ...]) pl.plot(y_test_rff, dm_rbf, '.') dmrff.compile(optimizer="adam", loss='mse') dmrff.evaluate(x_test_rff, y_test_rff, batch_size=16) ``` ```python print(f'Mean: {np.mean(dmrff.rff_layer.rff_weights)}') print(f'Std: {np.std(dmrff.rff_layer.rff_weights)}') print(f'Gamma: {dmrff.rff_layer.gamma_val.numpy()}') pl.hist(dmrff.rff_layer.rff_weights.numpy().flatten(), bins=30); ``` ```python dmrff.fit(x_train_rff, y_train_rff, validation_split=0.1, epochs=40, batch_size=128) ``` Epoch 1/40 704/704 [==============================] - 5s 5ms/step - loss: 0.0406 - val_loss: 0.0330 Epoch 2/40 704/704 [==============================] - 2s 3ms/step - loss: 0.0297 - val_loss: 0.0263 Epoch 3/40 704/704 [==============================] - 3s 4ms/step - loss: 0.0233 - val_loss: 0.0203 Epoch 4/40 704/704 [==============================] - 2s 3ms/step - loss: 0.0183 - val_loss: 0.0164 Epoch 5/40 704/704 [==============================] - 2s 3ms/step - loss: 0.0157 - val_loss: 0.0148 Epoch 6/40 704/704 [==============================] - 1s 2ms/step - loss: 0.0146 - val_loss: 0.0141 Epoch 7/40 704/704 [==============================] - 1s 2ms/step - loss: 0.0140 - val_loss: 0.0137 Epoch 8/40 704/704 [==============================] - 1s 2ms/step - loss: 0.0137 - val_loss: 0.0134 Epoch 9/40 704/704 [==============================] - 1s 2ms/step - loss: 0.0134 - val_loss: 0.0132 Epoch 10/40 704/704 [==============================] - 1s 2ms/step - loss: 0.0133 - val_loss: 0.0130 Epoch 11/40 704/704 [==============================] - 1s 2ms/step - loss: 0.0132 - val_loss: 0.0129 Epoch 12/40 704/704 [==============================] - 2s 2ms/step - loss: 0.0131 - val_loss: 0.0128 Epoch 13/40 704/704 [==============================] - 1s 2ms/step - loss: 0.0130 - val_loss: 0.0128 Epoch 14/40 704/704 [==============================] - 1s 2ms/step - loss: 0.0129 - val_loss: 0.0127 Epoch 15/40 704/704 [==============================] - 1s 2ms/step - loss: 0.0129 - val_loss: 0.0127 Epoch 16/40 704/704 [==============================] - 1s 2ms/step - loss: 0.0128 - val_loss: 0.0126 Epoch 17/40 704/704 [==============================] - 1s 2ms/step - loss: 0.0128 - val_loss: 0.0126 Epoch 18/40 704/704 [==============================] - 2s 2ms/step - loss: 0.0128 - val_loss: 0.0126 Epoch 19/40 704/704 [==============================] - 1s 2ms/step - loss: 0.0127 
- val_loss: 0.0126 Epoch 20/40 704/704 [==============================] - 1s 2ms/step - loss: 0.0127 - val_loss: 0.0126 Epoch 21/40 704/704 [==============================] - 1s 2ms/step - loss: 0.0127 - val_loss: 0.0125 Epoch 22/40 704/704 [==============================] - 1s 2ms/step - loss: 0.0127 - val_loss: 0.0126 Epoch 23/40 704/704 [==============================] - 2s 2ms/step - loss: 0.0127 - val_loss: 0.0125 Epoch 24/40 704/704 [==============================] - 1s 2ms/step - loss: 0.0126 - val_loss: 0.0125 Epoch 25/40 704/704 [==============================] - 1s 2ms/step - loss: 0.0126 - val_loss: 0.0125 Epoch 26/40 704/704 [==============================] - 1s 2ms/step - loss: 0.0126 - val_loss: 0.0125 Epoch 27/40 704/704 [==============================] - 1s 2ms/step - loss: 0.0126 - val_loss: 0.0125 Epoch 28/40 704/704 [==============================] - 1s 2ms/step - loss: 0.0126 - val_loss: 0.0125 Epoch 29/40 704/704 [==============================] - 1s 2ms/step - loss: 0.0126 - val_loss: 0.0125 Epoch 30/40 704/704 [==============================] - 1s 2ms/step - loss: 0.0126 - val_loss: 0.0125 Epoch 31/40 704/704 [==============================] - 2s 2ms/step - loss: 0.0126 - val_loss: 0.0125 Epoch 32/40 704/704 [==============================] - 2s 2ms/step - loss: 0.0126 - val_loss: 0.0125 Epoch 33/40 704/704 [==============================] - 1s 2ms/step - loss: 0.0126 - val_loss: 0.0125 Epoch 34/40 704/704 [==============================] - 1s 2ms/step - loss: 0.0126 - val_loss: 0.0125 Epoch 35/40 704/704 [==============================] - 1s 2ms/step - loss: 0.0126 - val_loss: 0.0125 Epoch 36/40 704/704 [==============================] - 1s 2ms/step - loss: 0.0126 - val_loss: 0.0125 Epoch 37/40 704/704 [==============================] - 2s 2ms/step - loss: 0.0126 - val_loss: 0.0125 Epoch 38/40 704/704 [==============================] - 1s 2ms/step - loss: 0.0126 - val_loss: 0.0125 Epoch 39/40 704/704 [==============================] - 1s 2ms/step - loss: 0.0126 - val_loss: 0.0125 Epoch 40/40 704/704 [==============================] - 1s 2ms/step - loss: 0.0126 - val_loss: 0.0125 <keras.callbacks.History at 0x7f46623fee10> ```python dm_rbf = calc_rbf(dmrff, x_test_rff[:, 0, ...], x_test_rff[:, 1, ...]) pl.plot(y_test_rff, dm_rbf, '.') dmrff.evaluate(x_test_rff, y_test_rff, batch_size=128) ``` ```python print(f'Mean: {np.mean(dmrff.rff_layer.rff_weights)}') print(f'Std: {np.std(dmrff.rff_layer.rff_weights)}') print(f'Gamma: {dmrff.rff_layer.gamma_val.numpy()}') pl.hist(dmrff.rff_layer.rff_weights.numpy().flatten(), bins=30); ``` ```python X_feat_train = dmrff.rff_layer.call(tf.cast(X_train, tf.float32)) X_feat_test = dmrff.rff_layer.call(tf.cast(X_test, tf.float32)) X_feat_val = dmrff.rff_layer.call(tf.cast(X_val, tf.float32)) X_feat_train = np.float64((X_feat_train).numpy()) X_feat_test = np.float64((X_feat_test).numpy()) X_feat_val = np.float64((X_feat_val).numpy()) X_feat_train = X_feat_train / np.linalg.norm(X_feat_train, axis = 1).reshape(-1, 1) X_feat_test = X_feat_test / np.linalg.norm(X_feat_test, axis = 1).reshape(-1, 1) X_feat_val = X_feat_val / np.linalg.norm(X_feat_val, axis = 1).reshape(-1, 1) X_feat_train.shape, X_feat_test.shape, X_feat_val.shape ``` ((1098, 4), (367, 4), (366, 4)) ## Classical Pred AdpRFF ```python from sklearn.metrics import roc_curve, f1_score from sklearn.metrics import classification_report def classification(preds_val, preds_test, y_test): thredhold = np.percentile(preds_val, q = 9.54) y_pred = preds_test > thredhold return 
classification_report(y_test, y_pred, digits=4) ``` ```python gamma = dmrff.rff_layer.gamma_val.numpy() dim = n_rffs print(f"{dim}x{dim} Pure, experiment AdaptiveRFF") print("Gamma:", gamma) ## Training pure state and create the Unitary matrix to initialize such state psi_train = X_feat_train.sum(axis = 0) psi_train = psi_train / np.linalg.norm(psi_train) preds_val_expected = np.sqrt((X_feat_val @ psi_train)**2) preds_test_expected = np.sqrt((X_feat_test @ psi_train)**2) print(classification(preds_val_expected, preds_test_expected, y_test)) print(f"AUC = {round(roc_auc_score(y_test, preds_test_expected), 4)}") ``` 4x4 Pure, experiment AdaptiveRFF Gamma: 0.0078125 precision recall f1-score support 0.0 0.7027 0.7429 0.7222 35 1.0 0.9727 0.9669 0.9698 332 accuracy 0.9455 367 macro avg 0.8377 0.8549 0.8460 367 weighted avg 0.9470 0.9455 0.9462 367 AUC = 0.9571 ```python gamma = dmrff.rff_layer.gamma_val.numpy() dim = n_rffs print(f"{dim}x{dim} mixed, experiment AdaptiveRFF") print("Gamma:", gamma) ## Training mixed state and create the Unitary matrix to initialize such state rho_train = np.zeros((dim, dim)) #for i in range(1000): for i in range(len(X_feat_train)): rho_train += np.outer(X_feat_train[i], X_feat_train[i]) rho_train = rho_train / len(X_feat_train) # Classical prediction preds_val_mixed = np.zeros(len(X_feat_val)) for i in range(len(X_feat_val)): preds_val_mixed[i] = X_feat_val[i].T @ rho_train @ X_feat_val[i] preds_test_mixed = np.zeros(len(X_feat_test)) for i in range(len(X_feat_test)): preds_test_mixed[i] = X_feat_test[i].T @ rho_train @ X_feat_test[i] print(classification(preds_val_mixed, preds_test_mixed, y_test)) print(f"AUC = {round(roc_auc_score(y_test, preds_test_mixed), 4)}") ``` 4x4 mixed, experiment AdaptiveRFF Gamma: 0.0078125 precision recall f1-score support 0.0 0.6944 0.7143 0.7042 35 1.0 0.9698 0.9669 0.9683 332 accuracy 0.9428 367 macro avg 0.8321 0.8406 0.8363 367 weighted avg 0.9435 0.9428 0.9431 367 AUC = 0.956 ## Quantum Simulator Pred AdpRFF ```python def quantum_circuit_mixed(Ctest, eigvals, U_train, n_shots=10000, n_rffs = 16): num_qubits = int(np.log2(n_rffs)) # qubits to initialize qb_ls = [l for l in range(num_qubits)] # (0, 1, .., n-1) qb_ls2 = [l + num_qubits for l in range(num_qubits)] # (n, n+1, ..., 2*n-1) print(num_qubits) Cpred = [] for i in range(len(Ctest)): qc = QuantumCircuit(2*num_qubits, num_qubits) qc.initialize(Ctest[i], qb_ls) qc.initialize(np.sqrt(eigvals), qb_ls2) qc.isometry(U_train.T, [], qb_ls) # ArbRot as a isometry for j in range(num_qubits): qc.cnot(j+num_qubits, j) for k in range(num_qubits): qc.measure(k, k) counts = execute(qc, backend2, shots=n_shots).result().get_counts() try: Cpred.append(counts['0'*num_qubits] / n_shots) except: Cpred.append(0) return Cpred def quantum_circuit_pure(Ctest, U_train, n_shots=10000, n_rffs = 16): num_qubits = int(np.log2(n_rffs)) # num qubits to initialize print(num_qubits) qb_ls = [l for l in range(num_qubits)] # (0, 1, .., n-1) Cpred = [] for i in range(len(Ctest)): qc = QuantumCircuit(num_qubits, num_qubits) qc.initialize(Ctest[i], qb_ls ) qc.isometry(U_train.T, [], qb_ls ) for k in range(num_qubits): qc.measure(k, k) counts = execute(qc, backend2, shots=n_shots).result().get_counts() try: Cpred.append(np.sqrt(counts['0'*num_qubits] / n_shots)) except: Cpred.append(0) return Cpred ``` ```python backend2 = Aer.get_backend('qasm_simulator') exp_time = time() n_rff = n_rffs print(f"{n_rff}x{n_rff} Pure, experiment") print("Gamma:", dmrff.rff_layer.gamma_val.numpy()) U = pure_state(X_feat_train) 
preds_val = quantum_circuit_pure(X_feat_val, U, 10000, n_rff) preds_test = quantum_circuit_pure(X_feat_test, U, 10000, n_rff) print(classification(preds_val, preds_test, y_test)) print(f"AUC = {round(roc_auc_score(y_test, preds_test), 4)}") print(time() - exp_time) exp_time = time() ``` 4x4 Pure, experiment Gamma: 0.0078125 2 2 precision recall f1-score support 0.0 0.7027 0.7429 0.7222 35 1.0 0.9727 0.9669 0.9698 332 accuracy 0.9455 367 macro avg 0.8377 0.8549 0.8460 367 weighted avg 0.9470 0.9455 0.9462 367 AUC = 0.9569 79.42946267127991 ```python backend2 = Aer.get_backend('qasm_simulator') exp_time = time() n_rff = n_rffs print(f"{n_rff}x{n_rff} Mixed, experiment") print("Gamma:", dmrff.rff_layer.gamma_val.numpy()) eigvals, U = mixed_state(X_feat_train) preds_val = quantum_circuit_mixed(X_feat_val, eigvals, U, 10000, n_rff) preds_test = quantum_circuit_mixed(X_feat_test, eigvals, U, 10000, n_rff) print(classification(preds_val, preds_test, y_test)) print(f"AUC = {round(roc_auc_score(y_test, preds_test), 4)}") print(time() - exp_time) exp_time = time() ``` 4x4 Mixed, experiment Gamma: 0.0078125 2 2 precision recall f1-score support 0.0 0.6842 0.7429 0.7123 35 1.0 0.9726 0.9639 0.9682 332 accuracy 0.9428 367 macro avg 0.8284 0.8534 0.8403 367 weighted avg 0.9451 0.9428 0.9438 367 AUC = 0.9568 408.50582551956177
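The adaptive map gives a clear jump in AUC compared with the five plain-RFF runs earlier in the notebook. As a small supplementary summary (a sketch, not part of the original runs), the cell below collects the AUC values printed above and reports their spread.

```python
import numpy as np

# Test AUCs printed above: five runs with plain (non-adaptive) random Fourier features
auc_pure_rff = np.array([0.6635, 0.7190, 0.7031, 0.7920, 0.6912])
auc_mixed_rff = np.array([0.7065, 0.7412, 0.7010, 0.7773, 0.7059])

# Single adaptive-RFF run: classical predictions and qasm-simulator circuits
auc_adaptive = {"pure (classical)": 0.9571, "mixed (classical)": 0.9560,
                "pure (qasm)": 0.9569, "mixed (qasm)": 0.9568}

for name, aucs in [("pure RFF", auc_pure_rff), ("mixed RFF", auc_mixed_rff)]:
    print(f"{name:9s}: mean AUC = {aucs.mean():.4f} +/- {aucs.std(ddof=1):.4f}")
for name, auc in auc_adaptive.items():
    print(f"adaptive {name}: AUC = {auc:.4f}")
```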
24b0633568613d4269f6e187fa320a50c14cb575
158,585
ipynb
Jupyter Notebook
Paper Experiments/AnomalyDetection_AdaptiveFF_QuantumComputerSimulator.ipynb
diegour1/QuantumAnomalyDetection
412f88887edc60f4e060cc598a8536d8cbba19b9
[ "Apache-2.0" ]
2
2022-02-16T09:11:48.000Z
2022-03-25T12:54:08.000Z
Paper Experiments/AnomalyDetection_AdaptiveFF_QuantumComputerSimulator.ipynb
diegour1/QuantumAnomalyDetection
412f88887edc60f4e060cc598a8536d8cbba19b9
[ "Apache-2.0" ]
null
null
null
Paper Experiments/AnomalyDetection_AdaptiveFF_QuantumComputerSimulator.ipynb
diegour1/QuantumAnomalyDetection
412f88887edc60f4e060cc598a8536d8cbba19b9
[ "Apache-2.0" ]
2
2022-01-04T12:54:42.000Z
2022-02-16T09:11:52.000Z
85.814394
35,733
0.73005
true
16,530
Qwen/Qwen-72B
1. YES 2. YES
0.835484
0.782662
0.653902
__label__eng_Latn
0.263091
0.357563
```python from sympy import * init_printing() x=symbols("x") a=Integral(cos(x)*exp(x),x) a ``` ```python x,y=symbols("x y") expr=x+2*y expr ``` ```python type(x) ``` sympy.core.symbol.Symbol ```python expr+1 ``` ```python expr-x ``` ```python x*expr ``` ```python expanded_expr=expand(_) expanded_expr ``` ```python factor_expr=factor(_) factor_expr ``` ```python x,t,z,nu=symbols("x t z nu") ``` ```python diff(cos(x**2)/x,x) ``` ```python print(_) ``` -2*sin(x**2) - cos(x**2)/x**2 ```python Derivative(cos(x**2)/x,x) ``` ```python integrate(exp(-x**2),(x,-oo,+oo)) ``` ```python limit((cos(x+t)-cos(x))/t,t,0) ``` ```python abs(sin(x))/sin(x) ``` ```python limit(_,x,0) ``` ```python limit(abs(sin(x))/sin(x),x,0,"+") ``` ```python limit(abs(sin(x))/sin(x),x,0,"-") ``` ```python Integral(cos(z)*sin(nu),z,nu,t) ``` ```python r,phi,theta,R=symbols("r phi theta R") ``` ```python Integral(r**2*sin(theta),(theta,0,pi),(phi,0,2*pi),(r,0,R)) ``` ```python Limit(exp(cos(t**2)),t,5,"+") ``` ```python solve(x**2-2,x) ``` ```python Eq(x**2-2,0) ``` ```python solve(_,x) ``` ### Ecuacion diferencial!!! $y''-y=e^t$ ```python y=Function("y") y(t) n=symbols("n") ``` ```python Derivative(y(t),t) ``` ```python Derivative(y(t),(t,n)) ``` ```python Eq(Derivative(y(t),(t,2))-y(t),exp(t)) ``` ```python dsolve(_,y(t)) ``` ```python Matrix([[0,-I],[I,0]]) ``` $\displaystyle \left[\begin{matrix}0 & - i\\i & 0\end{matrix}\right]$ ```python _.eigenvals() ``` ```python Eq(4*t*y(t).diff(t,t)+y(t).diff(t)-y(t),0) ``` ```python dsolve(_) ``` ```python print(latex(_)) ``` y{\left(t \right)} = t^{\frac{3}{8}} \left(C_{1} J_{\frac{3}{4}}\left(i \sqrt{t}\right) + C_{2} Y_{\frac{3}{4}}\left(i \sqrt{t}\right)\right)
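Since `dsolve` returns symbolic solutions containing free constants, it can be reassuring to verify them. The cell below is a supplementary sketch (not part of the original notebook) that uses `checkodesol` to confirm the solution of the equation $y'' - y = e^t$ solved above; it returns `(True, 0)` when the solution satisfies the ODE identically.

```python
from sympy import Function, Eq, exp, symbols, dsolve
from sympy.solvers.ode import checkodesol

t = symbols("t")
y = Function("y")

# The ODE solved above: y'' - y = exp(t)
ode = Eq(y(t).diff(t, 2) - y(t), exp(t))
sol = dsolve(ode, y(t))

# checkodesol substitutes the solution back into the ODE;
# (True, 0) means the equation is satisfied identically
checkodesol(ode, sol)
```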
b98365ebf86f9e6a9fb89c948d29c040935d2144
65,458
ipynb
Jupyter Notebook
IntroToSciPy/SymPy/SymPyTutorial/chibolo meppo.ipynb
migueloayza/LearningPython
00fe5e0072d16cb5caa10f546d2708b1beb8c30b
[ "MIT" ]
null
null
null
IntroToSciPy/SymPy/SymPyTutorial/chibolo meppo.ipynb
migueloayza/LearningPython
00fe5e0072d16cb5caa10f546d2708b1beb8c30b
[ "MIT" ]
null
null
null
IntroToSciPy/SymPy/SymPyTutorial/chibolo meppo.ipynb
migueloayza/LearningPython
00fe5e0072d16cb5caa10f546d2708b1beb8c30b
[ "MIT" ]
null
null
null
75.152698
4,452
0.820343
true
678
Qwen/Qwen-72B
1. YES 2. YES
0.944995
0.907312
0.857405
__label__yue_Hant
0.110344
0.830373
```python
# HIDDEN
from datascience import *
from prob140 import *
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
%matplotlib inline
import math
from scipy import stats
from sympy import *
init_printing()
```

## Marginal and Conditional Densities ##

Let random variables $X$ and $Y$ have the joint density defined by

$$
f(x, y) ~ = ~
\begin{cases}
30(y-x)^4, ~~~ 0 < x < y < 1 \\
0 ~~~~~~~~ \text{otherwise}
\end{cases}
$$

```python
def jt_dens(x,y):
    if y < x:
        return 0
    else:
        return 30 * (y-x)**4

Plot_3d(x_limits=(0,1), y_limits=(0,1), f=jt_dens, cstride=4, rstride=4)
```

Then the possible values of $(X, Y)$ are in the upper left hand triangle of the unit square.

```python
# NO CODE
plt.figure(figsize=(3,3))
plt.axes().set_aspect('equal')
plt.plot([0, 0], [0, 1], color='k', lw=2)
plt.plot([0, 1], [0, 1], color='k', lw=2)
plt.plot([0, 1], [1, 1], color='k', lw=2)
plt.xlim(-0.05, 1)
plt.ylim(0, 1.05)
plt.xticks(np.arange(0, 1.1, 0.25))
plt.yticks(np.arange(0, 1.1, 0.25))
plt.xlabel('$x$')
plt.ylabel('$y$', rotation=0)
plt.title('Possible Values of $(X, Y)$');
```

Here is a quick check by `SymPy` to see that the function $f$ is indeed a joint density.

```python
x = Symbol('x', positive=True)
y = Symbol('y', positive=True)
joint_density = 30*(y-x)**4
```

```python
Integral(joint_density, (y, x, 1), (x, 0, 1)).doit()
```

### Marginal Density of $X$ ###
We can use the joint density $f$ to find the density of $X$. Call this density $f_X$. We know that

$$
\begin{align*}
f_X(x)dx &\sim P(X \in dx) \\
&= \int_y P(X \in dx, Y \in dy) \\
&= \int_y f(x, y)dxdy \\
&= \big{(} \int_y f(x, y)dy \big{)}dx
\end{align*}
$$

You can see the reasoning behind this calculation in the graph below. The blue strip shows the event $\{ X \in dx \}$ for a value of $x$ very near 0.25. To find the volume $P(X \in dx)$, we hold $x$ fixed and add over all $y$.

```python
# NO CODE
plt.figure(figsize=(3,3))
plt.axes().set_aspect('equal')
plt.plot([0, 0], [0, 1], color='k', lw=2)
plt.plot([0, 1], [0, 1], color='k', lw=2)
plt.plot([0, 1], [1, 1], color='k', lw=2)
plt.plot([0.25, 0.25], [0.25, 1], color='blue', lw=3, alpha=0.3)
plt.xlim(-0.05, 1)
plt.ylim(0, 1.05)
plt.xticks(np.arange(0, 1.1, 0.25))
plt.yticks(np.arange(0, 1.1, 0.25))
plt.xlabel('$x$')
plt.ylabel('$y$', rotation=0)
plt.title('$X \in dx$');
```

So the density of $X$ is given by

$$
f_X(x) ~ = ~ \int_y f(x, y)dy ~~~~~ \text{for all } x
$$

By analogy with the discrete case, $f_X$ is sometimes called the *marginal density* of $X$.

In our example, the possible values of $(X, Y)$ are the upper left hand triangle as shown above. So for each fixed $x$, the possible values of $Y$ go from $x$ to 1. Therefore for $0 < x < 1$, the density of $X$ is given by

$$
\begin{align*}
f_X(x) &= \int_x^1 30(y-x)^4 dy \\
&= 30 \cdot \frac{1}{5} (y-x)^5 \Big{\rvert}_x^1 \\
&= 6(1-x)^5
\end{align*}
$$

Here is the joint density surface again. You can see that $X$ is much more likely to be near 0 than near 1.

```python
Plot_3d(x_limits=(0,1), y_limits=(0,1), f=jt_dens, cstride=4, rstride=4)
```

That can be seen in the shape of the density of $X$.
```python
# NO CODE
x_vals = np.arange(0, 1.01, 0.01)
f_X = 6*(1-x_vals)**5
plt.plot(x_vals, f_X, color='darkblue', lw=2)
plt.xlabel('$x$')
plt.ylabel('$f_X(x)$', rotation=0)
plt.title('$f_X$: Density of $X$');
```

### Density of $Y$ ###
Correspondingly, the density of $Y$ can be found by fixing $y$ and integrating over $x$ as follows:

$$
f_Y(y) = \int_x f(x, y)dx ~~~~ \text{for all } y
$$

In our example, the joint density surface indicates that $Y$ is more likely to be near 1 than near 0, which is confirmed by calculation. Remember that $y > x$ and therefore for each fixed $y$, the possible values of $x$ are 0 through $y$. For $0 < y < 1$,

$$
f_Y(y) ~ = ~ \int_0^y 30(y-x)^4dx ~ = ~ 6y^5
$$

```python
# NO CODE
y_vals = np.arange(0, 1.01, 0.01)
f_Y = 6*y_vals**5
plt.plot(y_vals, f_Y, color='darkblue', lw=2)
plt.xlabel('$y$')
plt.ylabel('$f_Y(y)$', rotation=0)
plt.title('$f_Y$: Density of $Y$');
```

### Conditional Densities ###
Consider the conditional probability $P(Y \in dy \mid X \in dx)$. By the division rule,

$$
P(Y \in dy \mid X \in dx) ~ = ~ \frac{P(X \in dx, Y \in dy)}{P(X \in dx)} ~ = ~ \frac{f(x, y)dxdy}{f_X(x)dx} ~ = ~ \frac{f(x, y)}{f_X(x)} dy
$$

This gives us a division rule for densities. For a fixed value $x$, the *conditional density of $Y$ given $X=x$* is defined by

$$
f_{Y\mid X=x} (y) ~ = ~ \frac{f(x, y)}{f_X(x)} ~~~~ \text{for all } y
$$

Since $X$ has a density, we know that $P(X = x) = 0$ for all $x$. But the ratio above is of densities, not probabilities. It might help your intuition to think of "given $X=x$" to mean "given that $X$ is just around $x$".

Visually, the shape of this conditional density is the vertical cross section at $x$ of the joint density graph above. The numerator determines the shape, and the denominator is part of the constant that makes the density integrate to 1. Note that $x$ is constant in this formula; it is the given value of $X$. So the denominator $f_X(x)$ is the same for all the possible values of $y$.

To see that the conditional density does integrate to 1, let's do the integral.

$$
\int_y f_{Y\mid X=x} (y)dy ~ = ~ \int_y \frac{f(x, y)}{f_X(x)} dy ~ = ~ \frac{1}{f_X(x)} \int_y f(x, y)dy ~ = ~ \frac{1}{f_X(x)} f_X(x) ~ = ~ 1
$$

In our example, let $x = 0.4$ and consider finding the conditional density of $Y$ given $X = 0.4$. Under that condition, the possible values of $Y$ are in the range 0.4 to 1, and therefore

$$
f_{Y \mid X=0.4} (y) ~ = ~ \frac{30(y - 0.4)^4}{6(1 - 0.4)^5} ~ = ~ \frac{5}{0.6^5} (y - 0.4)^4 ~~~~ y \in (0.4, 1)
$$

This is a density on $(0.4, 1)$:

```python
y = Symbol('y', positive=True)
conditional_density_Y_given_X_is_04 = (5/(0.6**5)) * (y - 0.4)**4
Integral(conditional_density_Y_given_X_is_04, (y, 0.4, 1)).doit()
```

The figure below shows the overlaid graphs of the density of $Y$ and the conditional density of $Y$ given $X = 0.4$. You can see that the conditional density is more concentrated on large values of $Y$, because under the condition $X = 0.4$ you know that $Y$ can't be small.

```python
# NO CODE
plt.plot(y_vals, f_Y, color='darkblue', lw=2, label='Density of $Y$')
new_y = np.arange(0.4, 1.01, 0.01)
dens = (5/(0.6**5)) * (new_y - 0.4)**4
plt.plot(new_y, dens, color='gold', lw=2, label='Density of $Y$ given $X=0.4$')
plt.legend()
plt.xlim(0, 1)
plt.xlabel('$y$');
```

### Using a Conditional Density ###
We can use conditional densities to find probabilities and expectations, just as we would use an ordinary density. Here are some examples of calculations.
In each case we will set up the integrals and then use `SymPy`. $$ P(Y > 0.9 \mid X = 0.4) = \int_{0.9}^1 \frac{5}{0.6^5} (y - 0.4)^4 dy $$ The answer is about 60%. ```python Integral(conditional_density_Y_given_X_is_04, (y, 0.9, 1)).doit() ``` Now we will use the conditional density to find a conditional expectation. Remember that in our example, given that $X = 0.4$ the possible values of $Y$ go from $0.4$ to 1. $$ E(Y \mid X = 0.4) ~ = ~ \int_{0.4}^1 y \frac{5}{0.6^5} (y - 0.4)^4 dy ~ = ~ 0.9 $$ ```python Integral(y*conditional_density_Y_given_X_is_04, (y, 0.4, 1)).doit() ``` You can condition $X$ on $Y$ in the same way. By analogous arguments, for any fixed value of $y$ the conditional density of $X$ given $Y = y$ is $$ f_{X \mid Y=y} (x) ~ = ~ \frac{f(x, y)}{f_Y(y)} ~~~~~ \text{for all } x $$ All the examples in this section and the previous one have started with a joint density function that apparently emerged out of nowhere. In the next section, we will study a context in which they arise. ```python ```
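The same division rule works in the other direction, conditioning $X$ on $Y$. As a supplementary check (a sketch, not part of the original text), `SymPy` can confirm that the conditional density of $X$ given $Y = 0.6$ integrates to 1 and can compute the corresponding conditional expectation; the value 0.6 is just an illustrative choice.

```python
from sympy import Symbol, Integral

# Supplementary sketch: conditional density of X given Y = 0.6,
# built from f(x, y) = 30(y-x)^4 and f_Y(y) = 6y^5 derived above
x = Symbol('x', positive=True)
conditional_density_X_given_Y_is_06 = 30*(0.6 - x)**4 / (6 * 0.6**5)

# It should integrate to 1 over the possible values 0 < x < 0.6
Integral(conditional_density_X_given_Y_is_06, (x, 0, 0.6)).doit()
```

```python
# Conditional expectation E(X | Y = 0.6)
Integral(x*conditional_density_X_given_Y_is_06, (x, 0, 0.6)).doit()
```

For this joint density the expectation works out to $E(X \mid Y = 0.6) = 0.1$, consistent with the general formula $E(X \mid Y = y) = y/6$.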
a4789e924e3eb454ef95c76489c30ed13cf13ec3
285,739
ipynb
Jupyter Notebook
content/Chapter_17/03_Marginal_and_Conditional_Densities.ipynb
dcroce/jupyter-book
9ac4b502af8e8c5c3b96f5ec138602a0d3d8a624
[ "MIT" ]
null
null
null
content/Chapter_17/03_Marginal_and_Conditional_Densities.ipynb
dcroce/jupyter-book
9ac4b502af8e8c5c3b96f5ec138602a0d3d8a624
[ "MIT" ]
null
null
null
content/Chapter_17/03_Marginal_and_Conditional_Densities.ipynb
dcroce/jupyter-book
9ac4b502af8e8c5c3b96f5ec138602a0d3d8a624
[ "MIT" ]
null
null
null
500.418564
103,940
0.937814
true
2,779
Qwen/Qwen-72B
1. YES 2. YES
0.907312
0.740174
0.671569
__label__eng_Latn
0.977611
0.398611
<!-- dom:TITLE: Data Analysis and Machine Learning: Logistic Regression -->
# Data Analysis and Machine Learning: Logistic Regression
<!-- dom:AUTHOR: Morten Hjorth-Jensen at Department of Physics, University of Oslo & Department of Physics and Astronomy and National Superconducting Cyclotron Laboratory, Michigan State University -->
<!-- Author: -->
**Morten Hjorth-Jensen**, Department of Physics, University of Oslo and Department of Physics and Astronomy and National Superconducting Cyclotron Laboratory, Michigan State University

Date: **Oct 18, 2018**

Copyright 1999-2018, Morten Hjorth-Jensen. Released under CC Attribution-NonCommercial 4.0 license

<!-- !split -->
## Logistic Regression

In linear regression our main interest was centered on learning the coefficients of a functional fit (say a polynomial) in order to be able to predict the response of a continuous variable on some unseen data. The fit to the continuous variable $y_i$ is based on some independent variables $\hat{x}_i$. Linear regression resulted in analytical expressions (in terms of matrices to invert) for several quantities, ranging from the variance and thereby the confidence intervals of the parameters $\hat{\beta}$ to the mean squared error. If we can invert the product of the design matrices, linear regression then gives a simple recipe for fitting our data.

Classification problems, however, are concerned with outcomes taking the form of discrete variables (i.e. categories). We may for example, on the basis of DNA sequencing for a number of patients, like to find out which mutations are important for a certain disease; or based on scans of various patients' brains, figure out if there is a tumor or not; or given a specific physical system, we'd like to identify its state, say whether it is an ordered or disordered system (a typical situation in solid state physics); or classify the status of a patient, whether she/he has a stroke or not, and many other similar situations.

The most common situation we encounter when we apply logistic regression is that of two possible outcomes, normally denoted as a binary outcome: true or false, positive or negative, success or failure, etc.

## Optimization and Deep learning

Logistic regression will also serve as our stepping stone towards neural network algorithms and supervised deep learning. For logistic regression, the minimization of the cost function leads to a non-linear equation in the parameters $\hat{\beta}$. The optimization of the problem therefore calls for minimization algorithms. This forms the bottleneck of all machine learning algorithms, namely how to find reliable minima of a multi-variable function. This leads us to the family of gradient descent methods. The latter are the workhorses of basically all modern machine learning algorithms.

We note also that many of the topics discussed here for logistic regression are also commonly used in modern supervised Deep Learning models, as we will see later.

<!-- !split -->
## Basics

We consider the case where the dependent variables, also called the responses or the outcomes, $y_i$ are discrete and only take values from $k=0,\dots,K-1$ (i.e. $K$ classes). The goal is to predict the output classes from the design matrix $\hat{X}\in\mathbb{R}^{n\times p}$ made of $n$ samples, each of which carries $p$ features or predictors. The primary goal is to identify the classes to which new unseen samples belong.

Let us specialize to the case of two classes only, with outputs $y_i=0$ and $y_i=1$.
Our outcomes could represent the status of a credit card user who could default or not on her/his credit card debt. That is

$$
y_i = \begin{bmatrix} 0 & \mathrm{no}\\ 1 & \mathrm{yes} \end{bmatrix}.
$$

## Linear classifier

Before moving to the logistic model, let us try to use our linear regression model to classify these two outcomes. We could for example fit a linear model to the default case if $y_i > 0.5$ and the no-default case if $y_i \leq 0.5$. We would then have our weighted linear combination, namely

<!-- Equation labels as ordinary links -->
<div id="_auto1"></div>

$$
\begin{equation}
\hat{y} = \hat{X}^T\hat{\beta} + \hat{\epsilon},
\label{_auto1} \tag{1}
\end{equation}
$$

where $\hat{y}$ is a vector representing the possible outcomes, $\hat{X}$ is our $n\times p$ design matrix and $\hat{\beta}$ represents our estimators/predictors.

## Some selected properties

The main problem with our function is that it takes values on the entire real axis. In the case of logistic regression, however, the labels $y_i$ are discrete variables. One simple way to get a discrete output is to have sign functions that map the output of a linear regressor to values $\{0,1\}$, $f(s_i)=sign(s_i)=1$ if $s_i\ge 0$ and 0 otherwise. We will encounter this model in our first demonstration of neural networks. Historically it is called the "perceptron" model in the machine learning literature. This model is extremely simple, and is nothing but the standard [Heaviside step function](https://en.wikipedia.org/wiki/Heaviside_step_function). However, in many cases it is more favorable to use a ``soft" classifier that outputs the probability of a given category. This leads us to the logistic function.

## The logistic function

The perceptron is an example of a ``hard classification" model. We will encounter this model when we discuss neural networks as well. Each datapoint is deterministically assigned to a category (i.e. $y_i=0$ or $y_i=1$). In many cases, it is favorable to have a "soft" classifier that outputs the probability of a given category rather than a single value. For example, given $x_i$, the classifier outputs the probability of being in a category $k$. Logistic regression is the most common example of a so-called soft classifier. In logistic regression, the probability that a data point $x_i$ belongs to a category $y_i=\{0,1\}$ is given by the so-called logit function (or Sigmoid), which is meant to represent the likelihood for a given event,

$$
p(t) = \frac{1}{1+\exp{(-t)}}=\frac{\exp{(t)}}{1+\exp{(t)}}.
$$

Note that $1-p(t)= p(-t)$.

## Two parameters

We assume now that we have two classes with $y_i$ either $0$ or $1$. Furthermore we assume also that we have only two parameters $\beta$ in our fitting of the Sigmoid function, that is we define probabilities

$$
\begin{align*}
p(y_i=1|x_i,\hat{\beta}) &= \frac{\exp{(\beta_0+\beta_1x_i)}}{1+\exp{(\beta_0+\beta_1x_i)}},\nonumber\\
p(y_i=0|x_i,\hat{\beta}) &= 1 - p(y_i=1|x_i,\hat{\beta}),
\end{align*}
$$

where $\hat{\beta}$ are the weights we wish to extract from data, in our case $\beta_0$ and $\beta_1$. Note that we used

$$
p(y_i=0\vert x_i, \hat{\beta}) = 1-p(y_i=1\vert x_i, \hat{\beta}).
$$ <!-- !split --> ## Maximum likelihood In order to define the total likelihood for all possible outcomes from a dataset $\mathcal{D}=\{(y_i,x_i)\}$, with the binary labels $y_i\in\{0,1\}$ and where the data points are drawn independently, we use the so-called [Maximum Likelihood Estimation](https://en.wikipedia.org/wiki/Maximum_likelihood_estimation) (MLE) principle. We aim thus at maximizing the probability of seeing the observed data. We can then approximate the likelihood in terms of the product of the individual probabilities of a specific outcome $y_i$, that is $$ \begin{align*} P(\mathcal{D}|\hat{\beta})& = \prod_{i=1}^n \left[p(y_i=1|x_i,\hat{\beta})\right]^{y_i}\left[1-p(y_i=1|x_i,\hat{\beta}))\right]^{1-y_i}\nonumber \\ \end{align*} $$ from which we obtain the log-likelihood and our **cost/loss** function $$ \mathcal{C}(\hat{\beta}) = \sum_{i=1}^n \left( y_i\log{p(y_i=1|x_i,\hat{\beta})} + (1-y_i)\log\left[1-p(y_i=1|x_i,\hat{\beta}))\right]\right). $$ ## The cost function rewritten Reordering the logarithms, we can rewrite the **cost/loss** function as $$ \mathcal{C}(\hat{\beta}) = \sum_{i=1}^n \left(y_i(\beta_0+\beta_1x_i) -\log{(1+\exp{(\beta_0+\beta_1x_i)})}\right). $$ The maximum likelihood estimator is defined as the set of parameters that maximize the log-likelihood where we maximize with respect to $\beta$. Since the cost (error) function is just the negative log-likelihood, for logistic regression we have that $$ \mathcal{C}(\hat{\beta})=-\sum_{i=1}^n \left(y_i(\beta_0+\beta_1x_i) -\log{(1+\exp{(\beta_0+\beta_1x_i)})}\right). $$ This equation is known in statistics as the **cross entropy**. Finally, we note that just as in linear regression, in practice we often supplement the cross-entropy with additional regularization terms, usually $L_1$ and $L_2$ regularization as we did for Ridge and Lasso regression. ## Minimizing the cross entropy The cross entropy is a convex function of the weights $\hat{\beta}$ and, therefore, any local minimizer is a global minimizer. Minimizing this cost function with respect to the two parameters $\beta_0$ and $\beta_1$ we obtain $$ \frac{\partial \mathcal{C}(\hat{\beta})}{\partial \beta_0} = -\sum_{i=1}^n \left(y_i -\frac{\exp{(\beta_0+\beta_1x_i)}}{1+\exp{(\beta_0+\beta_1x_i)}}\right), $$ and $$ \frac{\partial \mathcal{C}(\hat{\beta})}{\partial \beta_1} = -\sum_{i=1}^n \left(y_ix_i -x_i\frac{\exp{(\beta_0+\beta_1x_i)}}{1+\exp{(\beta_0+\beta_1x_i)}}\right). $$ ## A more compact expression Let us now define a vector $\hat{y}$ with $n$ elements $y_i$, an $n\times p$ matrix $\hat{X}$ which contains the $x_i$ values and a vector $\hat{p}$ of fitted probabilities $p(y_i\vert x_i,\hat{\beta})$. We can rewrite in a more compact form the first derivative of cost function as $$ \frac{\partial \mathcal{C}(\hat{\beta})}{\partial \hat{\beta}} = -\hat{X}^T\left(\hat{y}-\hat{p}\right). $$ If we in addition define a diagonal matrix $\hat{W}$ with elements $p(y_i\vert x_i,\hat{\beta})(1-p(y_i\vert x_i,\hat{\beta})$, we can obtain a compact expression of the second derivative as $$ \frac{\partial^2 \mathcal{C}(\hat{\beta})}{\partial \hat{\beta}\partial \hat{\beta}^T} = \hat{X}^T\hat{W}\hat{X}. $$ ## Extending to more predictors Within a binary classification problem, we can easily expand our model to include multiple predictors. Our ratio between likelihoods is then with $p$ predictors $$ \log{ \frac{p(\hat{\beta}\hat{x})}{1-p(\hat{\beta}\hat{x})}} = \beta_0+\beta_1x_1+\beta_2x_2+\dots+\beta_px_p. 
$$

Here we defined $\hat{x}=[1,x_1,x_2,\dots,x_p]$ and $\hat{\beta}=[\beta_0, \beta_1, \dots, \beta_p]$ leading to

$$
p(\hat{\beta}\hat{x})=\frac{ \exp{(\beta_0+\beta_1x_1+\beta_2x_2+\dots+\beta_px_p)}}{1+\exp{(\beta_0+\beta_1x_1+\beta_2x_2+\dots+\beta_px_p)}}.
$$

## Including more classes

Till now we have mainly focused on two classes, the so-called binary system. Suppose we wish to extend to $K$ classes. Let us for the sake of simplicity assume we have only two predictors. We then have the following model

$$
\log{\frac{p(C=1\vert x)}{p(K\vert x)}} = \beta_{10}+\beta_{11}x_1,
$$

$$
\log{\frac{p(C=2\vert x)}{p(K\vert x)}} = \beta_{20}+\beta_{21}x_1,
$$

and so on up to the class $C=K-1$,

$$
\log{\frac{p(C=K-1\vert x)}{p(K\vert x)}} = \beta_{(K-1)0}+\beta_{(K-1)1}x_1,
$$

and the model is specified in terms of $K-1$ so-called log-odds or **logit** transformations.

## The Softmax function

In our discussion of neural networks we will encounter the above again in terms of the so-called **Softmax** function.

The softmax function is used in various multiclass classification methods, such as multinomial logistic regression (also known as softmax regression), multiclass linear discriminant analysis, naive Bayes classifiers, and artificial neural networks. Specifically, in multinomial logistic regression and linear discriminant analysis, the input to the function is the result of $K$ distinct linear functions, and the predicted probability for the $k$-th class given a sample vector $\hat{x}$ and a weighting vector $\hat{\beta}$ is (with two predictors):

$$
p(C=k\vert \mathbf {x} )=\frac{\exp{(\beta_{k0}+\beta_{k1}x_1)}}{1+\sum_{l=1}^{K-1}\exp{(\beta_{l0}+\beta_{l1}x_1)}}.
$$

It is easy to extend to more predictors. The final class is

$$
p(C=K\vert \mathbf {x} )=\frac{1}{1+\sum_{l=1}^{K-1}\exp{(\beta_{l0}+\beta_{l1}x_1)}},
$$

and they sum to one. Our earlier discussions were all specialized to the case with two classes only. It is easy to see from the above that what we derived earlier is compatible with these equations.

To find the optimal parameters we would typically use a gradient descent method. Newton's method and gradient descent methods are discussed in the material on [optimization methods](https://compphysics.github.io/MachineLearning/doc/pub/Splines/html/Splines-bs.html).
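## Gradient descent for logistic regression, a minimal sketch

Before moving to the **scikit-learn** examples below, it may be useful to see how the derivative $\partial \mathcal{C}/\partial \hat{\beta} = -\hat{X}^T(\hat{y}-\hat{p})$ derived above translates into code. The following is an illustrative sketch only (the helper names and the toy data are invented for this example and are not part of the original notes): plain gradient descent on the cross entropy for a model with an intercept and one predictor.

```
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def fit_logreg_gd(X, y, eta=0.5, n_iter=5000):
    # Minimize the cross entropy with plain gradient descent.
    # X is the n x p design matrix (first column of ones for the intercept),
    # y is the vector of 0/1 labels.
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = sigmoid(X @ beta)
        grad = -X.T @ (y - p) / len(y)   # the first derivative from above, scaled by n
        beta -= eta * grad
    return beta

# toy data generated from a known beta = (1, 2)
np.random.seed(2018)
x = np.random.randn(200)
y = (np.random.rand(200) < sigmoid(1.0 + 2.0 * x)).astype(float)
X = np.column_stack([np.ones_like(x), x])
print(fit_logreg_gd(X, y))   # should land reasonably close to [1, 2]
```

A Newton step would instead use the second derivative $\hat{X}^T\hat{W}\hat{X}$, i.e. `beta += np.linalg.solve(X.T @ (W * X), X.T @ (y - p))` with `W = (p * (1 - p))[:, None]`, and typically converges in far fewer iterations.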
## A **scikit-learn** example ``` %matplotlib inline import numpy as np import matplotlib.pyplot as plt from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) ['data', 'target_names', 'feature_names', 'target', 'DESCR'] X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression() log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", label="Not Iris-Virginica") plt.show() ``` ## A simple classification problem ``` import numpy as np from sklearn import datasets, linear_model import matplotlib.pyplot as plt def generate_data(): np.random.seed(0) X, y = datasets.make_moons(200, noise=0.20) return X, y def visualize(X, y, clf): # plt.scatter(X[:, 0], X[:, 1], s=40, c=y, cmap=plt.cm.Spectral) # plt.show() plot_decision_boundary(lambda x: clf.predict(x), X, y) plt.title("Logistic Regression") def plot_decision_boundary(pred_func, X, y): # Set min and max values and give it some padding x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5 y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5 h = 0.01 # Generate a grid of points with distance h between them xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) # Predict the function value for the whole gid Z = pred_func(np.c_[xx.ravel(), yy.ravel()]) Z = Z.reshape(xx.shape) # Plot the contour and training examples plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral) plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Spectral) plt.show() def classify(X, y): clf = linear_model.LogisticRegressionCV() clf.fit(X, y) return clf def main(): X, y = generate_data() # visualize(X, y) clf = classify(X, y) visualize(X, y, clf) if __name__ == "__main__": main() ``` <!-- !split --> ## The two-dimensional Ising model, Predicting phase transition of the two-dimensional Ising model The Hamiltonian of the two-dimensional Ising model without an external field for a constant coupling constant $J$ is given by <!-- Equation labels as ordinary links --> <div id="_auto2"></div> $$ \begin{equation} H = -J \sum_{\langle ij\rangle} S_i S_j, \label{_auto2} \tag{2} \end{equation} $$ where $S_i \in \{-1, 1\}$ and $\langle ij \rangle$ signifies that we only iterate over the nearest neighbors in the lattice. We will be looking at a system of $L = 40$ spins in each dimension, i.e., $L^2 = 1600$ spins in total. Opposed to the one-dimensional Ising model we will get a phase transition from an **ordered** phase to a **disordered** phase at the critical temperature <!-- Equation labels as ordinary links --> <div id="_auto3"></div> $$ \begin{equation} \frac{T_c}{J} = \frac{2}{\log\left(1 + \sqrt{2}\right)} \approx 2.26, \label{_auto3} \tag{3} \end{equation} $$ as shown by Lars Onsager. Here we use **logistic regression** to predict when a phase transition occurs. The data we will look at is a set of spin configurations, i.e., individual lattices with spins, labeled **ordered** `1` or **disordered** `0`. Our job is to build a model which will take in a spin configuration and predict whether or not the spin configuration constitutes an ordered or a disordered phase. To achieve this we will represent the lattices as flattened arrays with $1600$ elements instead of a matrix of $40 \times 40$ elements. 
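As a toy illustration of this representation (this is not the dataset used below, just two hand-made configurations), a perfectly ordered and a completely random $40 \times 40$ lattice can be generated and flattened like this:

```
import numpy as np

L = 40
rng = np.random.RandomState(12)

ordered = np.ones((L, L), dtype=int)            # all spins aligned
disordered = rng.choice([-1, 1], size=(L, L))   # spins drawn at random

# flatten each lattice into a single row of L*L = 1600 features
X_toy = np.vstack([ordered.ravel(), disordered.ravel()])
print(X_toy.shape)   # (2, 1600)
```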
As an extra test of the performance of the algorithms we will divide the dataset into three pieces. We will do a conventional train-test-split on a combination of totally ordered and totally disordered phases. The remaining "critical-like" states will be used as test data which we hope the model will be able to make good extrapolated predictions on. ``` import pickle import os import glob import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import sklearn.model_selection as skms import sklearn.linear_model as skl import sklearn.metrics as skm import tqdm import copy import time from IPython.display import display %matplotlib inline sns.set(color_codes=True) ``` ## Reading in the data Using the data from [Mehta et al.](https://physics.bu.edu/~pankajm/ML-Review-Datasets/isingMC/) (specifically the two datasets named `Ising2DFM_reSample_L40_T=All.pkl` and `Ising2DFM_reSample_L40_T=All_labels.pkl`) we have to unpack the data into numpy arrays. ``` filenames = glob.glob(os.path.join("..", "dat", "*")) label_filename = list(filter(lambda x: "label" in x, filenames))[0] dat_filename = list(filter(lambda x: "label" not in x, filenames))[0] # Read in the labels with open(label_filename, "rb") as f: labels = pickle.load(f) # Read in the corresponding configurations with open(dat_filename, "rb") as f: data = np.unpackbits(pickle.load(f)).reshape(-1, 1600).astype("int") # Set spin-down to -1 data[data == 0] = -1 ``` This dataset consists of $10000$ samples, i.e., $10000$ spin configurations with $40 \times 40$ spins each, for $16$ temperatures between $0.25$ to $4.0$. Next we create a train/test-split and keep the data in the critical phase as a separate dataset for extrapolation-testing. ``` # Set up slices of the dataset ordered = slice(0, 70000) critical = slice(70000, 100000) disordered = slice(100000, 160000) X_train, X_test, y_train, y_test = skms.train_test_split( np.concatenate((data[ordered], data[disordered])), np.concatenate((labels[ordered], labels[disordered])), test_size=0.95 ) ``` ## Logistic regression Logistic regression is a linear model for classification. Recalling the cost function for ordinary least squares with both L2 (ridge) and L1 (LASSO) penalties we will see that the logistic cost function is very similar. In OLS we wish to predict a continuous variable $\hat{y}$ using <!-- Equation labels as ordinary links --> <div id="_auto4"></div> $$ \begin{equation} \hat{y} = X\omega, \label{_auto4} \tag{4} \end{equation} $$ where $X \in \mathbb{R}^{n \times p}$ is the input data and $\omega^{p \times d}$ are the weights of the regression. In a classification setting (binary classification in our situation) we are interested in a positive or negative answer. We can thus define either answer to be above or below some threshold. But, in order to limit the size of the answer and also to get a probability interpretation on how sure we are for either answer we can compute the sigmoid function of OLS. That is, <!-- Equation labels as ordinary links --> <div id="_auto5"></div> $$ \begin{equation} f(X\omega) = \frac{1}{1 + \exp(-X\omega)}. 
\label{_auto5} \tag{5}
\end{equation}
$$

We are thus interested in minimizing the following cost function

<!-- Equation labels as ordinary links -->
<div id="_auto6"></div>

$$
\begin{equation}
C(X, \omega) = \sum_{i = 1}^n \left\{ - y_i\log\left( f(x_i^T\omega) \right) - (1 - y_i)\log\left[1 - f(x_i^T\omega)\right] \right\},
\label{_auto6} \tag{6}
\end{equation}
$$

where we will restrict ourselves to $f(z)$ being the sigmoid described above. We can also tack on an L2 (Ridge) or L1 (LASSO) penalization to this cost function in the same manner we did for linear regression.

## Exploring the logistic regression

The penalization factor $\lambda$ is inverted in the case of the logistic regression model we use. We will explore several values of $\lambda$ using both L1 and L2 penalization. We do this using a grid search over different parameters and run a 3-fold cross validation for each configuration. In other words, we fit a model 3 times for each configuration of the hyperparameters.

```
lambdas = np.logspace(-7, -1, 7)

param_grid = {
    "C": list(1.0/lambdas),
    "penalty": ["l1", "l2"]
}
clf = skms.GridSearchCV(
    skl.LogisticRegression(),
    param_grid=param_grid,
    n_jobs=-1,
    return_train_score=True
)
t0 = time.time()
clf.fit(X_train, y_train)
t1 = time.time()
print (
    "Time spent fitting GridSearchCV(LogisticRegression): {0:.3f} sec".format(
        t1 - t0
    )
)
```

We can see that logistic regression is quite slow and using the grid search and cross validation results in quite a heavy computation. Below we show the results of the different configurations.

```
logreg_df = pd.DataFrame(clf.cv_results_)

display(logreg_df)
```

## Accuracy of a classification model

To determine how well a classification model is performing we count the number of correctly labeled classes and divide by the number of classes in total. The accuracy is thus given by

<!-- Equation labels as ordinary links -->
<div id="_auto7"></div>

$$
\begin{equation}
a(y, \hat{y}) = \frac{1}{n}\sum_{i = 1}^{n} I(y_i = \hat{y}_i),
\label{_auto7} \tag{7}
\end{equation}
$$

where $I(y_i = \hat{y}_i)$ is the indicator function given by

<!-- Equation labels as ordinary links -->
<div id="_auto8"></div>

$$
\begin{equation}
I(x = y) = \begin{cases} 1 & x = y, \\ 0 & x \neq y. \end{cases}
\label{_auto8} \tag{8}
\end{equation}
$$

This is the accuracy provided by Scikit-learn when using **sklearn.metrics.accuracy_score**.

Below we compute the accuracy of the best fit model on the training data (which should give a good accuracy), the test data (which has not been shown to the model) and the critical data (completely new data that needs to be extrapolated).

```
train_accuracy = skm.accuracy_score(y_train, clf.predict(X_train))
test_accuracy = skm.accuracy_score(y_test, clf.predict(X_test))
critical_accuracy = skm.accuracy_score(labels[critical], clf.predict(data[critical]))

print ("Accuracy on train data: {0}".format(train_accuracy))
print ("Accuracy on test data: {0}".format(test_accuracy))
print ("Accuracy on critical data: {0}".format(critical_accuracy))
```

We can see that we get quite good accuracy on the training data, but gradually worsening accuracy on the test and critical data.

## Analyzing the results

Below we show a different metric for determining the quality of our model, namely the **receiver operating characteristic** (ROC). The ROC curve tells us how well the model correctly classifies the different labels.
We plot the **true positive rate** (the rate of predicted positive classes that are actually positive) versus the **false positive rate** (the rate of predicted positive classes that are actually negative). The ROC curve is built by computing the true positive rate and the false positive rate for varying **thresholds**, i.e., the probability above which we assign a sample to a given class.

By computing the **area under the curve** (AUC) of the ROC curve we get an estimate of how well our model is performing. Pure guessing will get an AUC of $0.5$. A perfect score will get an AUC of $1.0$.

```
fig = plt.figure(figsize=(20, 14))

for (_X, _y), label in zip(
    [
        (X_train, y_train),
        (X_test, y_test),
        (data[critical], labels[critical])
    ],
    ["Train", "Test", "Critical"]
):
    proba = clf.predict_proba(_X)
    fpr, tpr, _ = skm.roc_curve(_y, proba[:, 1])
    roc_auc = skm.auc(fpr, tpr)

    print ("LogisticRegression AUC ({0}): {1}".format(label, roc_auc))

    plt.plot(fpr, tpr, label="{0} (AUC = {1})".format(label, roc_auc), linewidth=4.0)

plt.plot([0, 1], [0, 1], "--", label="Guessing (AUC = 0.5)", linewidth=4.0)

plt.title(r"The ROC curve for LogisticRegression", fontsize=18)
plt.xlabel(r"False positive rate", fontsize=18)
plt.ylabel(r"True positive rate", fontsize=18)
plt.axis([-0.01, 1.01, -0.01, 1.01])
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
plt.legend(loc="best", fontsize=18)

plt.show()
```

We can see that this plot of the ROC looks very strange. This tells us that logistic regression is quite inept at predicting the Ising model transition, which is a highly non-linear problem. The ROC curve for the training data looks quite good, but since the performance on the test data is so much worse, we see that we are dealing with an overfit model.
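As a final aside, both the accuracy and the ROC curve are simple enough to compute by hand, which can be a useful sanity check on the scikit-learn metrics used above. The sketch below is illustrative only (the helper names and the toy data are made up for this example):

```
import numpy as np

def accuracy(y, y_pred):
    # the indicator-function average from Eq. (7)
    return np.mean(y == y_pred)

def roc_curve_manual(y, prob, thresholds=np.linspace(0, 1, 101)):
    # true and false positive rates for a sweep of probability thresholds
    tpr, fpr = [], []
    for thr in thresholds:
        pred = (prob >= thr).astype(int)
        tp = np.sum((pred == 1) & (y == 1))
        fp = np.sum((pred == 1) & (y == 0))
        tpr.append(tp / max(np.sum(y == 1), 1))
        fpr.append(fp / max(np.sum(y == 0), 1))
    return np.array(fpr), np.array(tpr)

# toy data: noisy probabilities that carry some signal about the labels
rng = np.random.RandomState(0)
y = rng.randint(0, 2, size=1000)
prob = np.clip(0.3 * y + 0.7 * rng.rand(1000), 0, 1)

print("accuracy at threshold 0.5:", accuracy(y, (prob >= 0.5).astype(int)))
fpr, tpr = roc_curve_manual(y, prob)
order = np.argsort(fpr)
print("rough AUC (trapezoidal):", np.trapz(tpr[order], fpr[order]))
```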
# "Intro to Applied Mathematics and Python Refresher"
> "19 April 2021 - HA-AAUBS"

- toc: true
- branch: master
- badges: true
- comments: true
- author: Roman Jurowetzki
- categories: [intro, forelæsning]

# Intro to Applied Mathematics and Python Refresher

- Mathematics is used in finance, management accounting, data science, tech and much more - but certainly also later on if you continue with a Master's degree.
- Analytical skills are very much [in demand on the job market](https://youtu.be/u2oupkbxddc )

> [A new DI analysis shows](https://www.danskindustri.dk/tech-der-taller/analysearkiv/analyser/2020/10/kompetencer-til-et-digitalt-arbejdsliv/) that the digital transformation in companies cannot be driven by IT specialists alone. There is a rapidly growing need for social science profiles to have good digital competences as well.

### What happens from today until 21 June?

- overview of linear algebra and calculus (not much more than Danish B level)
- Feel free to use e.g. https://www.webmatematik.dk/
- $\LaTeX$ [cheat-sheet](http://tug.ctan.org/info/undergradmath/undergradmath.pdf)
- [Markdown cheatsheet](https://www.markdownguide.org/cheat-sheet/)
- Learn to **use** mathematics - not to be a mathematician
- learn from a data/computer science perspective, where the point is mostly to be able to implement mathematics directly and use it, e.g. to build a search engine or a recommender system, to visualize, or to automate BI - a "computational approach"
- Python as the tool
- Danglish

### Penguin Motivation and Intuition - From Data and Statistics to Linear Algebra

Penguin data: https://github.com/allisonhorst/palmerpenguins

We are building a search engine for penguins 🤔

Assumption:
- Penguins most like to be with those that resemble them the most

```python
import pandas as pd
import numpy as np
np.set_printoptions(suppress=True)

import seaborn as sns
sns.set(color_codes=True, rc={'figure.figsize':(10,8)})
```

```python
pinguins = pd.read_csv("https://github.com/allisonhorst/palmerpenguins/raw/5b5891f01b52ae26ad8cb9755ec93672f49328a8/data/penguins_size.csv")
```

```python
pinguins.head()
```

|   | species_short | island    | culmen_length_mm | culmen_depth_mm | flipper_length_mm | body_mass_g | sex    |
|---|---------------|-----------|------------------|-----------------|-------------------|-------------|--------|
| 0 | Adelie        | Torgersen | 39.1             | 18.7            | 181.0             | 3750.0      | MALE   |
| 1 | Adelie        | Torgersen | 39.5             | 17.4            | 186.0             | 3800.0      | FEMALE |
| 2 | Adelie        | Torgersen | 40.3              | 18.0            | 195.0             | 3250.0      | FEMALE |
| 3 | Adelie        | Torgersen | NaN              | NaN             | NaN               | NaN         | NaN    |
| 4 | Adelie        | Torgersen | 36.7             | 19.3            | 193.0             | 3450.0      | FEMALE |

```python
pinguins = pinguins.dropna()
pinguins.species_short.value_counts()
```

    Adelie       146
    Gentoo       120
    Chinstrap     68
    Name: species_short, dtype: int64

```python
pinguins.index = range(len(pinguins))
```

```python
# What does our data look like?
sns.pairplot(pinguins, hue='species_short', kind="reg", corner=True, markers=["o", "s", "D"], plot_kws={'line_kws':{'color':'white'}}) ``` Vi danner alle variable om til Z-scores (så de er på samme skala) $Z = \frac{x-\mu}{\sigma} $ x = værdi, $\mu$ = gennemsnit, $\sigma$ = stadnardafvigelse ```python # scaling - vi tager kun de 4 nummeriske variable from sklearn.preprocessing import StandardScaler scaled_pinguins = StandardScaler().fit_transform(pinguins.loc[:,'culmen_length_mm':'body_mass_g']) ``` ```python # plot af alle skalerede variable, som nu har gennemsnit ~ 0 og std ~ 1 for i in range(4): sns.kdeplot(scaled_pinguins[:,i]) ``` ```python print(scaled_pinguins.shape) scaled_pinguins ``` (334, 4) array([[-0.89765322, 0.78348666, -1.42952144, -0.57122888], [-0.82429023, 0.12189602, -1.07240838, -0.50901123], [-0.67756427, 0.42724555, -0.42960487, -1.19340546], ..., [ 1.17485108, -0.74326098, 1.49880565, 1.91747742], [ 0.22113229, -1.20128527, 0.78457953, 1.23308319], [ 1.08314735, -0.53969463, 0.85600214, 1.48195382]]) ```python # pinguin 1 kan representeres som en 4D række-vektor scaled_pinguins[0,:] ``` array([-0.89765322, 0.78348666, -1.42952144, -0.57122888]) Nu bruger vi noget, som vi måske kommer til at se på helt til sidst i Liniær Algebra, næmlig Principal Component Analysis eller PCA. - læs mere om PCA og hvordan man [bygger det fra bunden](https://towardsdatascience.com/principal-component-analysis-pca-from-scratch-in-python-7f3e2a540c51)) - Hvis du er meget interesseret - [læs her](https://jakevdp.github.io/PythonDataScienceHandbook/05.09-principal-component-analysis.html) Vi bruger 2 components (dvs. vores 4D vektorer bliver skrumpet til 2D hvor PCA forsøger at beholde så meget information som muligt ```python # import PCA from sklearn.decomposition import PCA pca = PCA(n_components=2) ``` ```python # Transform penguin matrix med PCA pca_pinguins = pca.fit_transform(scaled_pinguins) ``` ```python print(pca_pinguins.shape) pca_pinguins ``` (334, 2) array([[-1.85848815, 0.03167633], [-1.32072197, -0.44347275], [-1.3816875 , -0.16108641], [-1.89089671, -0.01455592], [-1.92583912, 0.81617939], [-1.77762564, -0.36627297], [-0.82367924, 0.49925942], [-1.80358549, -0.24403619], [-1.96223043, 0.99612874], [-1.57564793, 0.57324551], [-1.75275951, -0.61168037], [-1.58122862, 0.0856511 ], [-0.81030574, 1.29279471], [-2.35538369, -0.64766742], [-1.01078942, 1.97342517], [-2.41302575, -0.30928293], [-2.11868773, -0.13743642], [-1.8624334 , -0.11174636], [-1.50972042, -0.2900524 ], [-1.58542467, -0.60385974], [-1.93370793, -0.3000802 ], [-1.76786874, 0.1386897 ], [-1.70849198, -0.18710269], [-2.71911792, -0.20123026], [-1.68714221, 0.28568225], [-1.88418751, -0.78150116], [-1.91542962, -0.40774928], [-1.66169617, -0.3281584 ], [-1.52330869, 0.3265888 ], [-1.45129345, -0.98913219], [-1.44661497, 1.05681027], [-1.64019046, 0.54659799], [-1.73809895, 0.27380468], [-2.41302455, 0.06626194], [-1.14237972, 0.35686806], [-2.30156091, -0.59486072], [-0.97680879, 0.11731425], [-2.31421828, -0.45107126], [-0.58355348, 1.05513585], [-2.01535054, -0.99768614], [-0.88503963, 0.21090366], [-1.93433513, 0.34353064], [-1.78801577, -0.65926027], [-1.41519347, 1.43771814], [-1.57862681, -0.33941082], [-1.1514131 , 0.27749756], [-1.87115481, -0.76967488], [-0.79212056, 0.7107703 ], [-2.45321183, -0.79702106], [-1.26918564, 0.24411376], [-1.55364072, -0.48183981], [-1.22556619, 0.24686034], [-2.26331137, -1.18987656], [-1.5289907 , 0.03261369], [-2.02082165, -1.12705855], [-1.14220218, 1.31192619], 
[-1.57536169, -0.83431984], [-0.93217697, 0.08260562], [-2.24970519, -0.99756479], [-0.91834758, 0.04746611], [-1.34635177, -1.40975392], [-1.24575322, 0.44992581], [-1.80562444, -1.23394502], [-0.59718006, 0.68528227], [-2.11691838, -0.47616688], [-1.27423813, -0.00597043], [-1.03080507, -0.53328743], [-0.4093727 , 0.89598575], [-1.57723661, -0.85266145], [-0.59144135, 0.41165003], [-0.94475845, -0.53921533], [-1.9325889 , 0.12031081], [-1.46057066, -1.35695193], [-0.94265366, 0.5534561 ], [-1.97428973, -1.12097589], [-0.05110489, 0.10149606], [-1.79698012, -0.18595097], [-1.53120411, -0.07993517], [-1.68681301, -0.56525116], [-1.60218691, 0.90767348], [-1.84898196, 0.05413884], [-1.86251474, -0.27237218], [-1.56029005, 0.16741527], [-1.62730125, 0.03916956], [-1.27054087, -0.63810036], [-0.20528913, 0.0705198 ], [-2.03180825, -1.21024999], [-1.01009833, -0.08788429], [-1.87558453, -0.89514837], [-0.26931022, 0.36210228], [-1.58473976, -0.12062016], [-0.68968725, 0.14542363], [-2.53370098, -1.76391725], [-0.78438121, 0.44055212], [-1.60064657, -0.74297136], [-0.39159612, 0.86745855], [-1.80524059, -1.27810498], [-1.51811021, 0.46451655], [-2.00793156, -0.21479991], [-1.86249805, 0.16103591], [-0.85354542, -0.62371609], [-1.7242771 , 0.47562703], [-1.98930061, -0.82071409], [-0.21832718, 0.70838641], [-0.74267334, -0.9553643 ], [-0.65000735, 1.48052518], [-1.48716525, -0.35414245], [-0.74516412, 0.75300592], [-1.70901921, 0.91373168], [-0.63760493, 0.3035442 ], [-1.84748182, -0.78891878], [-1.61551852, 0.56999598], [-1.73956083, -1.06625566], [-1.63304201, 0.17484148], [-1.95786103, -0.94976616], [-1.66924381, 0.30466555], [-1.83256732, -0.56469998], [-0.67599318, 0.2242132 ], [-1.88627082, -1.59585662], [-0.88224796, 0.34884588], [-1.57274297, -0.48761496], [-0.62465259, 0.19193082], [-1.60836388, -0.68903797], [ 0.06521825, 0.33480786], [-1.6656762 , -0.39502924], [-1.13948936, 0.65783143], [-1.68571439, -0.32240195], [-0.7133827 , -0.1508346 ], [-1.69311177, -0.55220151], [-0.97484512, -0.21545089], [-1.88686119, -0.8908373 ], [-1.11507456, 0.74764799], [-1.66039222, -1.12171344], [-0.80956404, -0.17407396], [-1.18678397, -0.52275364], [-1.36972834, -0.43324764], [-1.9802787 , -2.09921529], [-1.02614623, -0.47840046], [-1.68170873, -1.0024529 ], [-1.77042737, 0.01265338], [-1.11705145, 0.05278004], [-2.06986815, -0.39058871], [-1.56170623, -0.69788286], [-1.35022667, -0.35018122], [-1.57792032, -0.96035616], [-0.62336652, 0.24669442], [-0.79846205, 0.50554672], [-0.39439897, 1.57845766], [-0.51992067, 1.57593487], [-1.2002597 , 0.70859854], [-0.30943898, 1.98176877], [-0.33120905, 0.36625062], [-1.64046708, 0.5540034 ], [-0.08340163, 1.18233005], [-0.47513545, 0.91733722], [-0.42207312, 1.86564417], [-0.52340043, 0.50483586], [-0.5836264 , 2.07723524], [-0.78644771, 0.33437546], [ 0.36512925, 1.24896512], [-0.716766 , 0.12204011], [-0.06457657, 1.69011138], [-0.84036631, 1.7575375 ], [-0.1383651 , 1.74972927], [-1.06570177, 0.77221009], [ 0.10425761, 1.01111645], [-1.40207485, -0.18443901], [-0.66045307, 0.55476098], [-1.4249049 , -0.44318217], [-0.51617466, 1.59371365], [-0.79506126, 0.50983804], [ 0.08575621, 1.62100751], [-0.30641584, 1.1429426 ], [-0.23792806, 1.31301282], [-0.69082522, 0.4725771 ], [ 0.55212389, 2.15455465], [-1.41090541, -0.66880259], [ 0.17005616, 2.60767092], [-1.19560645, -0.43752143], [ 0.25625192, 1.42716264], [-0.4827569 , 1.15257349], [ 0.07033431, 0.21133563], [-0.42533337, 0.82296016], [ 0.72027306, 2.37482104], [-1.04812691, -0.05170828], [ 0.59645115, 
2.18667147], [ 0.13339909, 1.4778854 ], [-0.8454379 , 0.32295545], [-0.47766374, 1.48297061], [-0.53382163, 0.03253412], [-0.1481579 , 1.00936156], [ 0.45765551, 1.31604999], [-0.64995848, 0.89286346], [ 0.43487899, 1.55212341], [-0.9224245 , 1.35578984], [-0.03522139, 0.64592968], [-0.19178663, 0.06147432], [ 0.06384104, 1.53733287], [-0.63337267, 0.18431789], [ 0.01410864, 1.75337696], [-1.31760535, -0.19481417], [-0.33579468, 1.4960773 ], [-0.8544232 , -0.1878347 ], [-0.14295232, 1.67969894], [-0.05667388, 1.31013207], [-1.07879228, 1.01603429], [ 0.2097212 , 1.79662485], [-0.50983242, -0.01457644], [-0.45577945, 0.0684179 ], [ 0.54836251, 2.35368336], [-0.74499913, 0.24941819], [-0.37241751, 0.99552342], [ 0.48732476, 1.48854221], [-0.21822344, 1.26585686], [ 1.59071497, -1.33843338], [ 2.88821701, 0.46645254], [ 1.54847963, -0.69238243], [ 2.61753833, 0.01710052], [ 2.23153155, -0.56048335], [ 1.55597907, -1.16855744], [ 1.45307282, -0.82107349], [ 2.02200947, -0.35367268], [ 1.16646236, -1.57686769], [ 1.81100589, -0.3083651 ], [ 1.28292167, -1.69506952], [ 2.16640073, 0.25584878], [ 1.66536127, -1.18716234], [ 2.50314801, -0.38993216], [ 1.03469459, -0.8339397 ], [ 2.51908961, 0.15586721], [ 0.90822766, -1.70320071], [ 3.0850515 , -0.01346766], [ 1.45752665, -0.77392447], [ 2.45514933, -0.19848214], [ 2.81716449, -0.32487746], [ 1.75005823, -0.87429741], [ 1.37362376, -0.77711474], [ 1.61969734, -0.21121195], [ 1.85181644, -1.68352649], [ 1.7796048 , -0.51173115], [ 2.31750969, -0.3125334 ], [ 1.56885346, -0.65252073], [ 2.57698609, 0.0441166 ], [ 2.22968194, -0.28160455], [ 1.16744693, -1.28013214], [ 1.45469922, -0.87250101], [ 3.78344227, 1.84264657], [ 2.33020098, -0.29529015], [ 2.13802247, 0.25779876], [ 1.58828206, -1.47939725], [ 1.45832988, 0.20612014], [ 1.1085949 , -1.42376297], [ 1.75641654, 0.03910498], [ 0.70655528, -1.56462829], [ 2.70998445, 0.29910444], [ 1.24452985, -1.24376351], [ 1.89269738, -0.19842088], [ 2.57894726, 0.34276262], [ 1.76133269, -1.29133952], [ 1.15208493, -1.15022256], [ 2.59992345, 0.32969321], [ 1.9634441 , -1.37308192], [ 1.69926047, -0.30905704], [ 1.62718021, -0.84602434], [ 2.52525925, -0.63053081], [ 1.15397642, -0.97324357], [ 2.47604047, -0.11631108], [ 1.90077017, -0.76888111], [ 1.79892558, -0.51472715], [ 0.99676443, -1.32921445], [ 1.88762448, -0.62604689], [ 0.92749174, -1.13858936], [ 2.77502282, 0.08974534], [ 1.07339971, -1.21348814], [ 2.21256953, -0.56050899], [ 1.4703067 , -1.10778796], [ 3.37470586, 0.6941716 ], [ 1.82881683, -0.94523108], [ 2.76992345, 0.64662567], [ 2.89419179, 0.37987269], [ 1.67879231, -1.19881895], [ 2.8197687 , 0.00115036], [ 1.73473319, -0.40820863], [ 1.881646 , -0.28360075], [ 2.09970271, -0.07541008], [ 2.02465685, -0.5787431 ], [ 1.59244922, -0.55646981], [ 2.90122823, 0.19996284], [ 1.489794 , -0.77062789], [ 2.77293646, 0.61258452], [ 1.72968811, -1.17002539], [ 2.35146558, -0.00203546], [ 1.70250446, -0.46915262], [ 2.69662309, 0.43242808], [ 1.60924136, -0.60696185], [ 2.48295544, 0.26939799], [ 1.58116057, -1.20424695], [ 2.60059975, 0.94912049], [ 1.47901512, -1.13913382], [ 2.65532487, -0.28165015], [ 1.84216381, -0.82446107], [ 2.8178673 , 0.9673151 ], [ 1.9373088 , -0.41067132], [ 2.62084786, 1.00390974], [ 1.48871194, -0.85483971], [ 2.6059389 , 0.32342627], [ 1.51570358, -0.87417606], [ 2.56991388, 0.26339212], [ 1.83311562, 0.11963357], [ 2.08223426, -0.64434532], [ 1.29388278, -0.59018557], [ 2.42519833, 0.62448265], [ 1.99323669, -0.30933538], [ 3.08560832, 1.39088134], [ 1.70421711, 
-0.24027573], [ 2.85851941, -0.17840657], [ 1.90809635, 0.00784873], [ 1.01552351, -1.19896643], [ 2.68259459, 0.61669439], [ 1.12297944, -1.31799847], [ 1.97233694, -0.25531874], [ 2.09787229, 0.00546566], [ 3.08274259, 0.30581002], [ 1.15289065, -0.80158911], [ 2.87611271, 0.61318567], [ 1.57769415, -0.97294216], [ 3.47583522, 0.92237997], [ 1.45305952, -0.46620568], [ 2.68444037, 0.31891116], [ 1.99481532, -0.97348544], [ 1.82945126, -0.78166158], [ 2.74811252, 0.26970751], [ 1.71059971, -0.72411071], [ 2.01503039, 0.33995817]]) Nu bruger vi denne 2D matrix og plotter, hvor 1.kollonne = x; 2. kolonne = y; vi bruger farver fra pingvin-arter i vores start-data ```python sns.scatterplot(x = pca_pinguins[:,0], y = pca_pinguins[:,1], hue = pinguins['species_short'] ) ``` Hvordan finder vi så en buddy for en given pingvin? - det er den, der er tættest på 🤖 **Eucledian Distance** **Vi kan også gå fra 2D til n-D** $d(\vec{u}, \vec{v}) = \| \vec{u} - \vec{v} \| = \sqrt{(u_1 - v_1)^2 + (u_2 - v_2)^2 ... (u_n - v_n)^2}$ fx Vi kan regne ED mellem $\vec{u} = (2, 3, 4, 2)$ og $\vec{v} = (1, -2, 1, 3)$ $\begin{align} d(\vec{u}, \vec{v}) = \| \vec{u} - \vec{v} \| = \sqrt{(2-1)^2 + (3+2)^2 + (4-1)^2 + (2-3)^2} \\ d(\vec{u}, \vec{v}) = \| \vec{u} - \vec{v} \| = \sqrt{1 + 25 + 9 + 1} \\ d(\vec{u}, \vec{v}) = \| \vec{u} - \vec{v} \| = \sqrt{36} \\ d(\vec{u}, \vec{v}) = \| \vec{u} - \vec{v} \| = 6 \end{align}$ ```python # hvor tæt er de første 2 print(scaled_pinguins[0,:]) print(scaled_pinguins[1,:]) ``` [-0.89765322 0.78348666 -1.42952144 -0.57122888] [-0.82429023 0.12189602 -1.07240838 -0.50901123] ```python # kvardarod er ikke standard og skal importeres from math import sqrt ``` ```python # manuelt sqrt((-0.89765322--0.82429023)**2 + (0.78348666-0.12189602)**2 + (-1.42952144--1.07240838)**2 + (-0.57122888--0.50901123)**2) ``` 0.7579479380745329 ```python # med numpy np.linalg.norm(scaled_pinguins[0,:] - scaled_pinguins[1,:]) ``` 0.757947942517268 ```python np.linalg.norm(scaled_pinguins[0,:] - scaled_pinguins[2,:]) ``` 1.249913482211539 ```python pinguins.iloc[:5,:] ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>species_short</th> <th>island</th> <th>culmen_length_mm</th> <th>culmen_depth_mm</th> <th>flipper_length_mm</th> <th>body_mass_g</th> <th>sex</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>Adelie</td> <td>Torgersen</td> <td>39.1</td> <td>18.7</td> <td>181.0</td> <td>3750.0</td> <td>MALE</td> </tr> <tr> <th>1</th> <td>Adelie</td> <td>Torgersen</td> <td>39.5</td> <td>17.4</td> <td>186.0</td> <td>3800.0</td> <td>FEMALE</td> </tr> <tr> <th>2</th> <td>Adelie</td> <td>Torgersen</td> <td>40.3</td> <td>18.0</td> <td>195.0</td> <td>3250.0</td> <td>FEMALE</td> </tr> <tr> <th>3</th> <td>Adelie</td> <td>Torgersen</td> <td>36.7</td> <td>19.3</td> <td>193.0</td> <td>3450.0</td> <td>FEMALE</td> </tr> <tr> <th>4</th> <td>Adelie</td> <td>Torgersen</td> <td>39.3</td> <td>20.6</td> <td>190.0</td> <td>3650.0</td> <td>MALE</td> </tr> </tbody> </table> </div> ```python pinguins.iloc[-5:,:] ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>species_short</th> 
<th>island</th> <th>culmen_length_mm</th> <th>culmen_depth_mm</th> <th>flipper_length_mm</th> <th>body_mass_g</th> <th>sex</th> </tr> </thead> <tbody> <tr> <th>329</th> <td>Gentoo</td> <td>Biscoe</td> <td>47.2</td> <td>13.7</td> <td>214.0</td> <td>4925.0</td> <td>FEMALE</td> </tr> <tr> <th>330</th> <td>Gentoo</td> <td>Biscoe</td> <td>46.8</td> <td>14.3</td> <td>215.0</td> <td>4850.0</td> <td>FEMALE</td> </tr> <tr> <th>331</th> <td>Gentoo</td> <td>Biscoe</td> <td>50.4</td> <td>15.7</td> <td>222.0</td> <td>5750.0</td> <td>MALE</td> </tr> <tr> <th>332</th> <td>Gentoo</td> <td>Biscoe</td> <td>45.2</td> <td>14.8</td> <td>212.0</td> <td>5200.0</td> <td>FEMALE</td> </tr> <tr> <th>333</th> <td>Gentoo</td> <td>Biscoe</td> <td>49.9</td> <td>16.1</td> <td>213.0</td> <td>5400.0</td> <td>MALE</td> </tr> </tbody> </table> </div> ```python np.linalg.norm(scaled_pinguins[0,:] - scaled_pinguins[333,:]) ``` 3.887615834331366 ```python np.linalg.norm(scaled_pinguins[0,:] - scaled_pinguins[331,:]) ``` 4.6254719817752035 ```python import matplotlib.pyplot as plt ``` ```python # This code draws the x and y axis as lines. points = [0,1,2,333,331] fig, ax = plt.subplots() ax.scatter(pca_pinguins[[points],0], pca_pinguins[[points],1]) plt.axhline(0, c='black', lw=0.5) plt.axvline(0, c='black', lw=0.5) plt.xlim(-2,3) plt.ylim(-1,1) plt.quiver(0, 0, pca_pinguins[0,0], pca_pinguins[0,1], angles='xy', scale_units='xy', scale=1, color='blue') plt.quiver(0, 0, pca_pinguins[1,0], pca_pinguins[1,1], angles='xy', scale_units='xy', scale=1, color='green') plt.quiver(0, 0, pca_pinguins[2,0], pca_pinguins[2,1], angles='xy', scale_units='xy', scale=1, color='yellow') plt.quiver(0, 0, pca_pinguins[333,0], pca_pinguins[333,1], angles='xy', scale_units='xy', scale=1, color='violet') plt.quiver(0, 0, pca_pinguins[331,0], pca_pinguins[331,1], angles='xy', scale_units='xy', scale=1, color='black') for i in points: ax.annotate(str(i), (pca_pinguins[i,0], pca_pinguins[i,1])) ``` Man kunne nu enten skrive noget, som gentager denne beregning for alle kombinationer...eller ```python from sklearn.metrics.pairwise import euclidean_distances ``` ```python euclidean_matrix = euclidean_distances(scaled_pinguins) ``` ```python print(euclidean_matrix.shape) euclidean_matrix ``` (334, 334) array([[0. , 0.75794794, 1.24991348, ..., 4.62547198, 3.65359902, 3.88761583], [0.75794794, 0. , 0.99817767, ..., 4.15259574, 3.05401605, 3.42910036], [1.24991348, 0.99817767, 0. , ..., 4.26589822, 3.28965941, 3.58404958], ..., [4.62547198, 4.15259574, 4.26589822, ..., 0. , 1.44840609, 0.80791459], [3.65359902, 3.05401605, 3.28965941, ..., 1.44840609, 0. , 1.11705413], [3.88761583, 3.42910036, 3.58404958, ..., 0.80791459, 1.11705413, 0. ]]) ```python np.argmin(euclidean_matrix[0,:]) ``` 0 ```python np.argsort(euclidean_matrix[0,:])[:3] ``` array([ 0, 139, 16]) ```python scaled_pinguins[[0,139,16],:] ``` array([[-0.89765322, 0.78348666, -1.42952144, -0.57122888], [-0.91599396, 0.78348666, -1.14383099, -0.6956642 ], [-1.15442366, 0.78348666, -1.50094405, -0.75788186]]) ```python euclidean_distances(scaled_pinguins[[0,139,16],:]) ``` array([[0. , 0.31215311, 0.32537914], [0.31215311, 0. , 0.43387728], [0.32537914, 0.43387728, 0. 
]]) ### Python fresh-up - Simple datatyper - Grundlæggende matematiske operationer - Lister - Funktioner - Control Flow #### Simple datatyper - Integers - hele tal **6** - Floating-Point Numbers - decimaltal **3.2** - Boolean - digital data type / bit **True / False** - String - text **Roman* ```python i = 6 print(i, type(i)) ``` 6 <class 'int'> ```python x = 3.2 print(x, type(x)) ``` 3.2 <class 'float'> ```python t = i == 6 print(t, type(t)) ``` True <class 'bool'> ```python s = 'Hello' print(s, type(s)) ``` Hello <class 'str'> #### Grundlæggende matematiske operationer ```python a = 2.0 b = 3.0 print(a+b, a*b, a-b, a/b, a**2, a+b**2, (a+b)**2) ``` 5.0 6.0 -1.0 0.6666666666666666 4.0 11.0 25.0 ```python c = a + b print(c) ``` 5.0 ```python a + b == c ``` True ```python a + b < c ``` False #### Lister man kan pakke alt i en liste :-) ```python l = ['Eskil', 1.0, sqrt] type(l) ``` list ```python l[2] ``` <function math.sqrt> ```python l[0] ``` 'Eskil' ```python l.append('Roman') ``` ```python l ``` ['Eskil', 1.0, <function math.sqrt>, 'Roman'] ```python l.extend(['Marie',37]) ``` ```python l ``` ['Eskil', 1.0, <function math.sqrt>, 'Roman', 'Marie', 37] ```python l.pop(2) ``` <function math.sqrt> ```python l ``` ['Eskil', 1.0, 'Roman', 'Marie', 37] #### Funktioner Funktioner har (normalt) in og outputs. $a$ og $b$ er vores input her og funktionen producerer $\sqrt{a^2 + b^2}$ som output. Vi prøver lige ... $\begin{align} a^2 + b^2 = c^2 \rightarrow c = \sqrt{a^2 + b^2} \end{align}$ ```python def pythagoras(a, b): return sqrt(a**2 + b**2) ``` ```python pythagoras(1,2) ``` 2.23606797749979 ```python # Hvis man gør det rigtigt, så er det en god ide at kommentere hvad der sker. # Her er det en no-brainer men funktioner kan blive indviklede og # det er good-practice at skrive "docstrings" til en anden eller en selv (i) def pythagoras(a, b): """ Computes the length of the hypotenuse of a right triangle Arguments a, b: the two lengths of the right triangle """ return sqrt(a**2 + b**2) ``` ##### Mini-assignment * Lav en funktion, som tager to punkter $(x_1, y_1), (x_2, y_2)$ på en linje og beregner hældning $a$ $$ y = ax + b$$ $$ a = \frac{y_2- y_1}{x_2 - x_1}$$ ```python plt.plot((1,2), (2,3), 'ro-') plt.plot((1,2), (2,2), 'bo-') plt.plot((2,2), (2,3), 'bo-') ``` ```python ``` ```python # slope(1,2,2,3) ``` #### Control flow ```python def isNegative(n): if n < 0: return True else: return False ``` ##### Mini-assignment * Lav en funktion `KtoC` som regner Kelvin om til Celcius $$ C = K - 273.15 \quad \text{ved} \quad C\geq - 273.15$$ Funktionen udgiver `None` hvis $C < -273.15$ ```python list(range(10)) ``` [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] ```python # for-loop even = [] # tom liste for i in range(10): even.append(i*2) print(even) ``` [0, 2, 4, 6, 8, 10, 12, 14, 16, 18] ```python # list-comprehension even = [2*i for i in range(10)] print(even) ``` [0, 2, 4, 6, 8, 10, 12, 14, 16, 18] ##### Mini-assignment 1. Beregn summen af integers 1 ... 100 ved at bruge `sum`, list-comprehension, for-loop 2. Beregn summen af integers 1 ... 100 ved at bruge partial-sum formula $$ \sum_{k=1}^n k = 1 + 2 + \cdots + (n-1) + n = \frac{n(n+1)}{2}$$ ### Matematik fresh-up alle øvelser taget fra https://tutorial.math.lamar.edu/Problems/Alg/Preliminaries.aspx Erfaringen viser, at det er en god idé at få sig en god routine med at løse matematiske problemer. 
- Integer Exponents - Rational Exponents - Radicals - Polynomials Vi arbejder old-school med papir men bruger også `SymPy` for at tjekke vores løsninger #### Integer Exponents $- {6^2} + 4 \cdot {3^2}$ ${\left( {2{w^4}{v^{ - 5}}} \right)^{ - 2}}$ (løsning med kun positive eksponenter!) ```python from sympy import * ``` ```python simplify(-6**2+4*3**2) ``` $\displaystyle 0$ ```python w, v = symbols('w v') simplify((2*w**4*v**-5)**-2) ``` $\displaystyle \frac{v^{10}}{4 w^{8}}$ #### Rational Exponents ${\left( { - 125} \right)^{\frac{1}{3}}}$ ${\left( {{a^3}\,{b^{ - \,\,\frac{1}{4}}}} \right)^{\frac{2}{3}}}$ ```python simplify(-125**(1/3), rational=True) ``` $\displaystyle -5.0$ ```python a, b = symbols('a b') simplify((a**3*b**(-1/4))**(2/3), rational=True) ``` $\displaystyle \left(\frac{a^{3}}{\sqrt[4]{b}}\right)^{\frac{2}{3}}$ #### Radicals $$\begin{array}{c} \sqrt[7]{y}\\ \sqrt[3]{{{x^2}}} \\ \sqrt[3]{{ - 512}} \\ \sqrt x \left( {4 - 3\sqrt x } \right)\end{array}$$ ```python x, y, z = symbols('x, y , z') ``` ```python simplify((x**2)**(1/3), rational=True) ``` $\displaystyle \sqrt[3]{x^{2}}$ ```python simplify(-512**(1/3), rational=True) ``` $\displaystyle -8.0$ ```python simplify(sqrt(x)*(4 - 3*sqrt(x)), rational = True) ``` $\displaystyle 4 \sqrt{x} - 3 x$ #### Polynomials $$(4{x^3} - 2{x^2} + 1) + (7{x^2} + 12x)$$ ```python simplify((4*x**3-2*x**2+1)+(7*x**2+12*x)) ``` $\displaystyle 4 x^{3} + 5 x^{2} + 12 x + 1$
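The same workflow extends to products of polynomials. As a small extra example (not part of the original exercise set), we can let SymPy expand a product and then factor it back to check the result:

```python
from sympy import symbols, expand, factor

x = symbols('x')

p = expand((x + 2)*(4*x**2 - 3*x + 1))
print(p)          # 4*x**3 + 5*x**2 - 5*x + 2
print(factor(p))  # (x + 2)*(4*x**2 - 3*x + 1)
```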
# Inference with GPs

The dataset needed for this worksheet [can be downloaded](https://northwestern.box.com/s/el0s1imhdxq5qwvzb4hgap90mxpjcfdq). Once you have downloaded [s9_gp_dat.tar.gz](https://northwestern.box.com/s/el0s1imhdxq5qwvzb4hgap90mxpjcfdq), and moved it to this folder, execute the following cell:

```python
!tar -zxvf s9_gp_dat.tar.gz
!mv *.txt data/
```

    x ./._sample_data.txt
    x ./sample_data.txt
    x ./._sample_data_line.txt
    x ./sample_data_line.txt
    x ./._sample_data_line_truths.txt
    x ./sample_data_line_truths.txt

Here are the functions we wrote in the previous tutorial to compute and draw from a GP:

```python
import numpy as np
from scipy.linalg import cho_factor

def ExpSquaredKernel(t1, t2=None, A=1.0, l=1.0):
    """
    Return the ``N x M`` exponential squared
    covariance matrix between time vectors `t1`
    and `t2`. The kernel has amplitude `A` and
    lengthscale `l`.
    """
    if t2 is None:
        t2 = t1
    T2, T1 = np.meshgrid(t2, t1)
    return A ** 2 * np.exp(-0.5 * (T1 - T2) ** 2 / l ** 2)

def draw_from_gaussian(mu, S, ndraws=1, eps=1e-12):
    """
    Generate samples from a multivariate gaussian
    specified by covariance ``S`` and mean ``mu``.
    (We derived these equations in Day 1, Notebook 01, Exercise 7.)
    """
    npts = S.shape[0]
    L, _ = cho_factor(S + eps * np.eye(npts), lower=True)
    L = np.tril(L)
    u = np.random.randn(npts, ndraws)
    x = np.dot(L, u) + mu[:, None]
    return x.T

def compute_gp(t_train, y_train, t_test, sigma=0, A=1.0, l=1.0):
    """
    Compute the mean vector and covariance matrix of a GP
    at times `t_test` given training points `y_train(t_train)`.
    The training points have uncertainty `sigma` and the
    kernel is assumed to be an Exponential Squared Kernel
    with amplitude `A` and lengthscale `l`.
    """
    # Compute the required matrices, passing the amplitude and
    # lengthscale arguments through to the kernel
    kernel = ExpSquaredKernel
    Stt = kernel(t_train, A=A, l=l)
    Stt += sigma ** 2 * np.eye(Stt.shape[0])
    Spp = kernel(t_test, A=A, l=l)
    Spt = kernel(t_test, t_train, A=A, l=l)

    # Compute the mean and covariance of the GP
    mu = np.dot(Spt, np.linalg.solve(Stt, y_train))
    S = Spp - np.dot(Spt, np.linalg.solve(Stt, Spt.T))

    return mu, S
```

## The Marginal Likelihood

In the previous notebook, we learned how to construct and sample from a simple GP. This is useful for making predictions, i.e., interpolating or extrapolating based on the data you measured. But the true power of GPs comes from their application to *regression* and *inference*: given a dataset $D$ and a model $M(\theta)$, what are the values of the model parameters $\theta$ that are consistent with $D$? The parameters $\theta$ can be the hyperparameters of the GP (the amplitude and time scale), the parameters of some parametric model, or all of the above.

A very common use of GPs is to model things you don't have an explicit physical model for, so quite often they are used to model "nuisances" in the dataset. But just because you don't care about these nuisances doesn't mean they don't affect your inference: in fact, unmodelled correlated noise can often lead to strong biases in the parameter values you infer. In this notebook, we'll learn how to compute likelihoods of Gaussian Processes so that we can *marginalize* over the nuisance parameters (given suitable priors) and obtain unbiased estimates for the physical parameters we care about.
Given a set of measurements $y$ distributed according to

$$
\begin{align}
    y \sim \mathcal{N}(\mathbf{\mu}(\theta), \mathbf{\Sigma}(\alpha))
\end{align}
$$

where $\theta$ are the parameters of the mean model $\mu$ and $\alpha$ are the hyperparameters of the covariance model $\mathbf{\Sigma}$, the *marginal likelihood* of $y$ is

$$
\begin{align}
    \ln P(y | \theta, \alpha) = -\frac{1}{2}(y-\mu)^\top \mathbf{\Sigma}^{-1} (y-\mu) - \frac{1}{2}\ln |\mathbf{\Sigma}| - \frac{N}{2} \ln 2\pi
\end{align}
$$

where $|\cdot|$ denotes the determinant and $N$ is the number of measurements.

The term *marginal* refers to the fact that this expression implicitly integrates over all possible values of the Gaussian Process; this is not the likelihood of the data given one particular draw from the GP, but given the ensemble of all possible draws from $\mathbf{\Sigma}$.

<div style="background-color: #D6EAF8; border-left: 15px solid #2E86C1;">
<h1 style="line-height:2.5em; margin-left:1em;">Exercise 1</h1>
</div>

Define a function ``ln_gp_likelihood(t, y, sigma, A=1, l=1)`` that returns the log-likelihood defined above for a vector of measurements ``y`` at a set of times ``t`` with uncertainty ``sigma``. As before, ``A`` and ``l`` should get passed directly to the kernel function. Note that you're going to want to use [np.linalg.slogdet](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.linalg.slogdet.html) to compute the log-determinant of the covariance instead of ``np.log(np.linalg.det)``. (Why?)

```python
def ln_gp_likelihood(t, y, sigma=0, A=1.0, l=1.0):
    """
    Return the marginal log-likelihood of measurements `y` at times `t`
    with uncertainty `sigma`, for an Exponential Squared Kernel with
    amplitude `A` and lengthscale `l`.
    """
    C = ExpSquaredKernel(t, A=A, l=l) + np.eye(len(y)) * sigma**2

    term1 = -0.5 * np.dot(y.T, np.linalg.solve(C, y))
    term2 = -0.5 * np.linalg.slogdet(C)[1]
    term3 = -1.0 * len(y) / 2 * np.log(2 * np.pi)

    loglike = term1 + term2 + term3

    return loglike
```

<div style="background-color: #D6EAF8; border-left: 15px solid #2E86C1;">
<h1 style="line-height:2.5em; margin-left:1em;">Exercise 2</h1>
</div>

The following dataset was generated from a zero-mean Gaussian Process with a Squared Exponential Kernel of unity amplitude and unknown timescale. Compute the marginal log likelihood of the data over a range of reasonable values of $l$ and find the maximum. Plot the **likelihood** (not log likelihood) versus $l$; it should be pretty Gaussian. How well are you able to constrain the timescale of the GP?

```python
import matplotlib.pyplot as plt
```

```python
t, y, sigma = np.loadtxt("data/sample_data.txt", unpack=True)
plt.plot(t, y, "k.", alpha=0.5, ms=3)
plt.xlabel("time")
plt.ylabel("data");
```

```python
l = np.linspace(0.05, 15, 1000)
log_likelihoods = np.array([ln_gp_likelihood(t, y, sigma=sigma, A=1.0, l=l_val) for l_val in l])
likelihoods = np.exp(log_likelihoods - np.max(log_likelihoods))
```

```python
plt.plot(l, likelihoods)
plt.xlim(0,1)
plt.show()

print(l[np.argmax(likelihoods)])
```

<div style="background-color: #D6EAF8; border-left: 15px solid #2E86C1;">
<h1 style="line-height:2.5em; margin-left:1em;">Exercise 3a</h1>
</div>

The timeseries below was generated by a linear function of time, $y(t)= mt + b$. In addition to observational uncertainty $\sigma$ (white noise), there is a fair bit of correlated (red) noise, which we will assume is well described by the squared exponential covariance with a certain (unknown) amplitude $A$ and timescale $l$.

Your task is to estimate the values of $m$ and $b$, the slope and intercept of the line, respectively.
In this part of the exercise, **assume there is no correlated noise.** Your model for the $n^\mathrm{th}$ datapoint is thus $$ \begin{align} y_n \sim \mathcal{N}(m t_n + b, \sigma_n\mathbf{I}) \end{align} $$ and the probability of the data given the model can be computed by calling your GP likelihood function: ```python def lnprob(params): m, b = params model = m * t + b return ln_gp_likelihood(t, y - model, sigma, A=0, l=1) ``` Note, importantly, that we are passing the **residual vector**, $y - (mt + b)$, to the GP, since above we coded up a zero-mean Gaussian process. We are therefore using the GP to model the **residuals** of the data after applying our physical model (the equation of the line). To estimate the values of $m$ and $b$ we could generate a fine grid in those two parameters and compute the likelihood at every point. But since we'll soon be fitting for four parameters (in the next part), we might as well upgrade our inference scheme and use the ``emcee`` package to do Markov Chain Monte Carlo (MCMC). If you haven't used ``emcee`` before, check out the first few tutorials on the [documentation page](https://emcee.readthedocs.io/en/latest/). The basic setup for the problem is this: ```python import emcee sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob) initial = [4.0, 15.0] p0 = initial + 1e-3 * np.random.randn(nwalkers, ndim) print("Running burn-in...") p0, _, _ = sampler.run_mcmc(p0, nburn) # nburn = 500 should do sampler.reset() print("Running production...") sampler.run_mcmc(p0, nsteps); # nsteps = 1000 should do ``` where ``nwalkers`` is the number of walkers (something like 20 or 30 is fine), ``ndim`` is the number of dimensions (2 in this case), and ``lnprob`` is the log-probability function for the data given the model. Finally, ``p0`` is a list of starting positions for each of the walkers. Above we picked some fiducial/eyeballed value for $m$ and $b$, then added a small random number to each to generate different initial positions for each walker. This will initialize all walkers in a ball centered on some point, and as the chain progresses they'll diffuse out and begin to explore the posterior. Once you have sampled the posterior, plot several draws from it on top of the data. You can access a random draw from the posterior by doing ```python m, b = sampler.flatchain[np.random.randint(len(sampler.flatchain))] ``` Also plot the **true** line that generated the dataset (given by the variables ``m_true`` and ``b_true`` below). Do they agree, or is there bias in your inferred values? Use the ``corner`` package to plot the joint posterior. How many standard deviations away from the truth are your inferred values? 
```python t, y, sigma = np.loadtxt("data/sample_data_line.txt", unpack=True) m_true, b_true, A_true, l_true = np.loadtxt("data/sample_data_line_truths.txt", unpack=True) plt.errorbar(t, y, yerr=sigma, fmt="k.", label="observed") plt.plot(t, m_true * t + b_true, color="C0", label="truth") plt.legend(fontsize=12) plt.xlabel("time") plt.ylabel("data") plt.show() ``` ```python def lnprob(params): m, b = params model = m * t + b return ln_gp_likelihood(t, y - model, sigma, A=0, l=1) ``` ```python #!pip install emcee import emcee ``` ```python nwalkers = 24 ndim = 2 sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob) ``` ```python nburn = 500 nsteps = 1000 initial = [4.0, 15.0] p0 = initial + 1e-3 * np.random.randn(nwalkers, ndim) print("Running burn-in...") p0, _, _ = sampler.run_mcmc(p0, nburn) # nburn = 500 should do sampler.reset() print("Running production...") sampler.run_mcmc(p0, nsteps); # nsteps = 1000 should do ``` Running burn-in... Running production... ```python draws = [] for i in range(10): m, b = sampler.flatchain[np.random.randint(len(sampler.flatchain))] draws.append(m * t + b) ``` ```python plt.errorbar(t, y, yerr=sigma, fmt="k.", label="observed") plt.plot(t, m_true * t + b_true, color="C0", label="truth", lw=3, ls='--') for i in range(9): plt.plot(t, draws[i], label=None) plt.plot(t, draws[9], label='draws') plt.legend(fontsize=12) plt.xlabel("time") plt.ylabel("data") plt.show() ``` ```python from corner import corner ``` ```python fig = corner(sampler.flatchain, truths=(m_true, b_true), range=((0,6), (10, 20))); ``` <div style="background-color: #D6EAF8; border-left: 15px solid #2E86C1;"> <h1 style="line-height:2.5em; margin-left:1em;">Exercise 3b</h1> </div> This time, let's actually model the correlated noise. Re-define your ``lnprob`` function to accept four parameters (slope, intercept, amplitude, and timescale). If you didn't before, it's a good idea to enforce some priors to keep the parameters within reasonable (and physical) ranges. If any parameter falls outside this range, have ``lnprob`` return negative infinity (i.e., zero probability). You'll probably want to run your chains for a bit longer this time, too. As before, plot some posterior samples for the line, as well as the corner plot. How did you do this time? Is there any bias in your inferred values? How does the variance compare to the previous estimate? ```python def lnprob(params): m, b, a, l = params if m > 6 or m < 2 or b < 15 or b > 20 or a <=0.0 or l <= 0.0: return -np.inf model = m * t + b return ln_gp_likelihood(t, y - model, sigma, A=a, l=l) ``` ```python nwalkers = 50 ndim = 4 sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob) ``` ```python nburn = 1000 nsteps = 500 initial = [4.0, 15.0, 1.0, 1.0] p0 = initial + 1e-3 * np.random.randn(nwalkers, ndim) print("Running burn-in...") p0, _, _ = sampler.run_mcmc(p0, nburn) # nburn = 500 should do sampler.reset() print("Running production...") sampler.run_mcmc(p0, nsteps); # nsteps = 1000 should do ``` Running burn-in... /Users/rmorgan/anaconda3/lib/python3.7/site-packages/emcee/ensemble.py:335: RuntimeWarning: invalid value encountered in subtract lnpdiff = (self.dim - 1.) * np.log(zz) + newlnprob - lnprob0 /Users/rmorgan/anaconda3/lib/python3.7/site-packages/emcee/ensemble.py:336: RuntimeWarning: invalid value encountered in greater accept = (lnpdiff > np.log(self._random.rand(len(lnpdiff)))) Running production... 
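Before drawing posterior samples it can be worth a quick look at the sampler itself. A small optional diagnostic (a sketch, assuming the ``acceptance_fraction`` and ``chain`` attributes of the ``emcee`` ``EnsembleSampler``; attribute names may differ slightly between ``emcee`` versions):

```python
# Mean acceptance fraction across walkers; very low or very high values
# usually indicate a poorly tuned or poorly initialized sampler
print("Mean acceptance fraction: {0:.3f}".format(np.mean(sampler.acceptance_fraction)))

# Trace plots help spot walkers that have not converged
labels = ["m", "b", "A", "l"]
fig, axes = plt.subplots(ndim, 1, figsize=(8, 8), sharex=True)
for i in range(ndim):
    axes[i].plot(sampler.chain[:, :, i].T, color="k", alpha=0.3)
    axes[i].set_ylabel(labels[i])
axes[-1].set_xlabel("step")
plt.show()
```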
```python draws = [] for i in range(10): m, b, a, l = sampler.flatchain[np.random.randint(len(sampler.flatchain))] draws.append(m * t + b) ``` ```python plt.errorbar(t, y, yerr=sigma, fmt="k.", label="observed") plt.plot(t, m_true * t + b_true, color="C0", label="truth", lw=3, ls='--') for i in range(9): plt.plot(t, draws[i], label=None) plt.plot(t, draws[9], label='draws') plt.legend(fontsize=12) plt.xlabel("time") plt.ylabel("data") plt.show() ``` ```python fig = corner(sampler.flatchain, truths=(m_true, b_true, 1.0, 1.0)); ``` ```python ``` <div style="background-color: #D6EAF8; border-left: 15px solid #2E86C1;"> <h1 style="line-height:2.5em; margin-left:1em;">Exercise 3c</h1> </div> If you didn't do this already, re-plot the posterior samples on top of the data, but this time draw them from the GP, *conditioned on the data*. How good is the fit?
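One way to approach Exercise 3c, as a minimal sketch (this cell is not part of the original notebook): condition the zero-mean GP on the residuals and draw samples from the conditional distribution. The helper ``exp_squared_cov`` below is a hypothetical stand-in for whatever kernel your ``ln_gp_likelihood`` uses, so adapt it to your own implementation; the conditional mean and covariance are the standard GP regression formulas.

```python
import numpy as np
import matplotlib.pyplot as plt

def exp_squared_cov(t1, t2, A=1.0, l=1.0):
    # hypothetical squared-exponential kernel; replace with the kernel used by your GP likelihood
    return A * np.exp(-(t1[:, None] - t2[None, :]) ** 2 / (2 * l ** 2))

# use the posterior median of the four parameters from the Exercise 3b chain
m_fit, b_fit, a_fit, l_fit = np.median(sampler.flatchain, axis=0)
resid = y - (m_fit * t + b_fit)

K = exp_squared_cov(t, t, A=a_fit, l=l_fit) + np.diag(sigma ** 2)  # data covariance (GP + white noise)
K_s = exp_squared_cov(t, t, A=a_fit, l=l_fit)                      # covariance between prediction and data times

mu_cond = (m_fit * t + b_fit) + K_s.dot(np.linalg.solve(K, resid))           # conditional mean
cov_cond = K_s - K_s.dot(np.linalg.solve(K, K_s)) + 1e-10 * np.eye(len(t))   # conditional covariance (+ jitter)

plt.errorbar(t, y, yerr=sigma, fmt="k.", label="observed")
for draw in np.random.multivariate_normal(mu_cond, cov_cond, size=10):
    plt.plot(t, draw, color="C1", lw=1, alpha=0.5)
plt.plot(t, m_true * t + b_true, color="C0", lw=3, ls="--", label="truth")
plt.legend(fontsize=12); plt.xlabel("time"); plt.ylabel("data")
plt.show()
```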
6e7b3b1a0235721afdb7fed5832eabe88f4bcb81
333,092
ipynb
Jupyter Notebook
Session9/Day1/gps/02-Inference.ipynb
rmorgan10/LSSTC-DSFP-Sessions
1d0b3c28fe7f6f93e00e332e74873e6d1ec29d0b
[ "MIT" ]
null
null
null
Session9/Day1/gps/02-Inference.ipynb
rmorgan10/LSSTC-DSFP-Sessions
1d0b3c28fe7f6f93e00e332e74873e6d1ec29d0b
[ "MIT" ]
null
null
null
Session9/Day1/gps/02-Inference.ipynb
rmorgan10/LSSTC-DSFP-Sessions
1d0b3c28fe7f6f93e00e332e74873e6d1ec29d0b
[ "MIT" ]
null
null
null
486.265693
177,768
0.939041
true
4,151
Qwen/Qwen-72B
1. YES 2. YES
0.847968
0.865224
0.733682
__label__eng_Latn
0.963308
0.542921
# One-Electron Atoms Just as the touchstone of chemistry is the periodic table, the touchstone of quantum chemistry is the atomic wavefunction. As we shall see, our intuition about many-electron atoms is built up from our knowledge of 1-electron atoms, mainly because many-electron atoms are mathematically intractable, while 1-electron atoms are not appreciably more complicated than an electron confined to a spherical ball. A detailed mathematical exposition on 1-electron atoms--which are often called hydrogenic atoms--is provided as a [pdf](https://github.com/QC-Edu/IntroQM2022/blob/master/documents/Hatom.pdf). This is only a brief summary. ## Schr&ouml;dinger Equation for One-Electron Atoms Denoting the mass of the electron as $m_e$, the charge of the electron as $-e$, and the [permittivity of free space](https://en.wikipedia.org/wiki/Vacuum_permittivity) as $\epsilon_0$, the Hamiltonian for the attraction of an electron to an atomic nucleus with atomic number $Z$ (and charge $+Ze$) at the origin, $(x,y,z) = (0,0,0)$, is: $$ \hat{H}_{\text{1 el. atom}} = -\frac{\hbar^2}{2m_e} \nabla^2 - \frac{Z e^2}{4 \pi \epsilon_0 r} $$ where $r = \sqrt{x^2+y^2+z^2}$ is the distance of the electron from the nucleus. In [atomic units](https://en.wikipedia.org/wiki/Hartree_atomic_units), the Hamiltonian is: $$ \hat{H}_{\text{1 el. atom}} = -\tfrac{1}{2} \nabla^2 - Zr^{-1} $$ Just as we did for the electron confined to a spherical ball, we rewrite the Schr&ouml;dinger equation in spherical coordinates, $$ \left(-\frac{1}{2} \left( \frac{d^2}{dr^2} + \frac{2}{r} \frac{d}{dr}\right) + \frac{\hat{L}^2}{2r^2} - \frac{Z}{r} \right) \psi_{n,l,m_l}(r,\theta,\phi) = E_{n,l,m_l}\psi_{n,l,m_l}(r,\theta,\phi) $$ and use the fact that the eigenfunctions of the squared magnitude of the angular momentum, $\hat{L}^2$, are the [spherical harmonics](https://en.wikipedia.org/wiki/Spherical_harmonics) $$ \hat{L}^2 Y_l^{m} (\theta, \phi) = l(l+1)Y_l^{m} (\theta, \phi) \qquad l=0,1,2,\ldots \quad m=0, \pm 1, \ldots, \pm l $$ and the technique of separation of variables to deduce that the wavefunctions of one-electron atoms have the form $$ \psi_{n,l,m}(r,\theta,\phi) = R_{n,l}(r) Y_l^{m}(\theta,\phi) $$ where the radial wavefunction, $R_{n,l}(r)$, is obtained by solving the radial Schr&ouml;dinger equation $$ \left(-\frac{1}{2} \left( \frac{d^2}{dr^2} + \frac{2}{r} \frac{d}{dr}\right) + \frac{l(l+1)}{2r^2} - \frac{Z}{r} \right) R_{n,l}(r) = E_{n,l}R_{n,l}(r) $$ ## The Radial Equation for One-Electron Atoms To solve the radial Schr&ouml;dinger equation, we rewrite it as a [homogeneous linear differential equation](https://en.wikipedia.org/wiki/Homogeneous_differential_equation) $$ \left( \frac{d^2}{dr^2} + \frac{2}{r} \frac{d}{dr} - \frac{l(l+1)}{r^2} + \frac{2Z}{r} + 2E_{n,l}\right) R_{n,l}(r) = 0 $$ It is (quite a bit) more involved than the previous cases we have considered, but the same basic technique reveals that the eigenenergies are $$ E_n = -\frac{Z^2}{2n^2} $$ and the radial wavefunctions are the product of an [associated Laguerre polynomial](https://en.wikipedia.org/wiki/Laguerre_polynomials#Generalized_Laguerre_polynomials) and an exponential, $$ R_{n,l}(r) \propto \left(\frac{2Zr}{n}\right)^l L_{n-1-l}^{2l+1}\left(\frac{2Zr}{n}\right) e^{-\frac{Zr}{n}} $$ with $$ n=1,2,3,\ldots \\ l=0,1,\ldots,n-1 \\ m = 0,\pm 1, \pm2, \ldots, \pm l $$ ## Eigenenergies and Wavefunctions for One-Electron Atoms The eigenenergies of the Hydrogenic wavefunctions do not depend on $m$ or $l$.
So there are $n^2$ degenerate eigenfunctions, with energies $$ E_n = -\frac{Z^2}{2n^2} $$ The energy eigenfunctions are: $$ \psi_{nlm}(r,\theta,\phi) \propto \left(\frac{2Zr}{n}\right)^l L_{n-1-l}^{2l+1}\left(\frac{2Zr}{n}\right) e^{-\frac{Zr}{n}} Y_l^{m} (\theta, \phi) $$ These eigenfunctions are complex-valued, because the spherical harmonics are complex-valued. Like all other one-electron wavefunctions, these eigenfunctions are referred to as *orbitals*. For historical reasons, orbitals are labelled by their principal quantum number $n$ (which specifies their energy), their total angular momentum quantum number $l$, and the quantum number that specifies their angular momentum around the $z$ axis, $m$, $$ \hat{L}_z Y_l^{m} (\theta, \phi) = \hbar m Y_l^{m} (\theta, \phi) $$ The $l$ quantum number is denoted by a letter code that dates back to the pre-history of quantum mechanics, where certain spectral lines were labelled as **s**harp ($l=0$ indicated no spatial degeneracy that could be broken by an external field), **p**rincipal ($l=1$ lines were still relatively sharp), **d**iffuse ($l=2$ lines were quite diffuse due to the 5-fold degeneracy of d orbitals), and **f**undamental ($l=3$). Note that the orbital images that appear above do not look that much like the usual orbital pictures, with the exception of the $m=0$ orbitals. This is because of the complex-valuedness. We often instead use the *real* spherical harmonics, which are defined simply as: $$ \begin{align} S_l^{m>0}(\theta,\phi) &= \frac{1}{\sqrt{2}} \left(Y_l^{-m} (\theta, \phi) + (-1)^{m} Y_l^{m} (\theta, \phi) \right) \\ S_l^{m=0}(\theta,\phi) &= Y_l^{m=0} (\theta, \phi) \\ S_l^{m<0}(\theta,\phi) &= \frac{i}{\sqrt{2}} \left(Y_l^{-m} (\theta, \phi) - (-1)^{m} Y_l^{m} (\theta, \phi) \right) \end{align} $$ The following animations show how one can take linear combinations of the (complex) spherical harmonics to form the $p_x$, $p_y$, etc. orbitals one generally uses in chemistry. Using the [orbitron](https://winter.group.shef.ac.uk/orbitron/atomic_orbitals/7i/index.html), you can visualize the (real, Cartesian) spherical harmonics and the [radial wavefunctions](https://winter.group.shef.ac.uk/orbitron/atomic_orbitals/7f/7f_wave_function.html) for hydrogenic orbitals. Most orbitals have very complicated formulas, but a few have simple equations, including: $$ \psi_{\text{1s}}(r) = \psi_{100}(r) = \sqrt{\frac{Z^3}{\pi}}e^{-Zr} \\ \psi_{n,n-1,m} \propto r^{n-1} e^{\tfrac{-Zr}{n}} Y_l^{m_l} (\theta,\phi) $$ ### &#x1f4dd; What is the expectation value of $r^k$ for the $l=n-1$ orbital of a hydrogenic atom? ### &#x1f4dd; Show that the energy of the Hydrogen atom in arbitrary units (not necessarily atomic units) can be written as: $$ E_n = -\frac{Z^2}{2n^2} \cdot \left(\frac{e^2}{4 \pi \epsilon_0}\right)^2 \cdot \left( \frac{m_e}{\hbar^2}\right) $$ ## &#x1fa9e; Self-Reflection - Using the conversion from atomic units to traditional chemical units of kJ/mol, what is the energy of the Hydrogen atom? How accurately, in atomic units, must one determine the energy of a one-electron atom in order to attain "chemical accuracy" of ~1 kJ/mol? - Write a small Python script to evaluate the expectation value of the radius, $r$, for a one-electron atom (a minimal sketch is given just after this list). - Test to confirm that the Heisenberg uncertainty principle for position and momentum holds for the ground state of a Hydrogenic atom. - To what extent is the shape of the spherical harmonic intuitive, especially the doughnut shapes associated with an electron's angular momentum around the z axis?
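As a starting point for the second self-reflection item, here is a minimal sketch (not part of the original notes; it assumes atomic units and the ground 1s state) that evaluates the expectation value of $r$ by direct integration of the radial probability density, recovering the textbook result $\langle r \rangle = \tfrac{3}{2Z}$.

```python
import sympy as sp

r, Z = sp.symbols('r Z', positive=True)
R_1s = 2 * Z**sp.Rational(3, 2) * sp.exp(-Z * r)  # normalized 1s radial wavefunction (atomic units)

norm = sp.integrate(R_1s**2 * r**2, (r, 0, sp.oo))           # radial normalization, should be 1
r_expectation = sp.integrate(R_1s**2 * r**3, (r, 0, sp.oo))  # <r> = integral of R^2 * r * r^2 dr

print(sp.simplify(norm))           # 1
print(sp.simplify(r_expectation))  # 3/(2*Z)
```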
## &#x1f914; Thought-Provoking Questions - In one-electron atoms, the eigenenergies depend only on the principal quantum number, $n$, and not the angular momentum quantum number, $l$. Why are $s$ orbitals lower in energy than $p$ orbitals in real multielectron atoms, but not one-electron atoms? It turns out this is *not* an accidental degeneracy, but a hidden symmetry of the Hydrogen atom. - Suppose electrons did not repel each other. Can you write the wavefunction for a many-electron atom in that case? - Why do you think solving the Schr&ouml;dinger equation for the one-electron molecule is more complicated than solving the Schr&ouml;dinger equation for the one-electron atom? - For what $Z$ is the energy of a one-electron atom comparable to the rest-mass energy of an electron, $mc^2$? For atomic numbers close to this value, relativistic effects become extremely important. - The Kratzer-Fues potential, $V_{\text{Kratzer-Fues}}(r) = \frac{a}{r^2} - \frac{b}{r}$ (here $a>0$ and $b>0$), is a reasonable model for a diatomic molecule rotating and vibrating in 3 dimensions, or even an ion-pair complex (e.g., a single ion pair from an ionic solvent in the gas phase). What are the solutions to the Schr&ouml;dinger equation for the Kratzer-Fues potential? [Solution](problems/KratzerFues.md) $$ \left(-\frac{\hbar^2}{2m}\nabla^2 +\frac{a}{r^2} - \frac{b}{r} \right) \psi(\mathbf{r}) = E\psi(\mathbf{r}) $$ ## &#x1f501; Recapitulation - What are the eigenfunctions for the $n=l+1$ state of a one-electron atom? - What are the energy eigenfunctions and eigenvalues for a one-electron atom? - How does the energy increase as the atomic number increases? ## &#x1f52e; Next Up... - Multielectron systems - Approximate methods. ## &#x1f4da; References My favorite sources for this material are: - [Randy's book](https://github.com/PaulWAyers/IntroQChem/blob/main/documents/DumontBook.pdf?raw=true) - D. A. MacQuarrie, Quantum Chemistry (University Science Books, Mill Valley California, 1983) - [One-electron atoms](https://github.com/PaulWAyers/IntroQChem/blob/main/documents/Hatom.pdf?raw=true) (my notes). - [Davit Potoyan's Jupyterbook course](https://dpotoyan.github.io/Chem324/Lec5-0.html). There are also some excellent wikipedia articles: - [Hydrogen-like atoms](https://en.wikipedia.org/wiki/Hydrogen-like_atom) - [Atomic orbitals](https://en.wikipedia.org/wiki/Atomic_orbital) ```python ```
01f5e3ca38f52f4f26cd0c1b7e704550082b85b3
13,636
ipynb
Jupyter Notebook
book/OneElectronAtoms.ipynb
RichRick1/IntroQM2022
91a37b630b9b83c76c972ee2e958a13640b1a37f
[ "CC0-1.0" ]
5
2022-02-08T18:42:37.000Z
2022-02-21T19:33:46.000Z
book/OneElectronAtoms.ipynb
RichRick1/IntroQM2022
91a37b630b9b83c76c972ee2e958a13640b1a37f
[ "CC0-1.0" ]
2
2022-01-26T18:45:29.000Z
2022-03-04T20:32:52.000Z
book/OneElectronAtoms.ipynb
RichRick1/IntroQM2022
91a37b630b9b83c76c972ee2e958a13640b1a37f
[ "CC0-1.0" ]
2
2022-02-08T17:56:55.000Z
2022-03-03T08:30:53.000Z
52.245211
633
0.622983
true
2,971
Qwen/Qwen-72B
1. YES 2. YES
0.851953
0.79053
0.673495
__label__eng_Latn
0.962139
0.403085
# Calculate events in Circular Restricted Three Body Problem ```python !pip install fastrk orbipy ``` Requirement already satisfied: fastrk in c:\users\stasb\pycharmprojects\fastrk (0.0.2) Requirement already satisfied: orbipy in c:\users\stasb\anaconda3\envs\p38\lib\site-packages (0.2.5) Collecting numpy~=1.19.2 Downloading numpy-1.19.5-cp38-cp38-win_amd64.whl (13.3 MB) Collecting numba~=0.51.2 Using cached numba-0.51.2-cp38-cp38-win_amd64.whl (2.2 MB) Requirement already satisfied: sympy~=1.6.2 in c:\users\stasb\appdata\roaming\python\python38\site-packages\sympy-1.6.2-py3.8.egg (from fastrk) (1.6.2) Requirement already satisfied: matplotlib>=3.0.0 in c:\users\stasb\appdata\roaming\python\python38\site-packages (from orbipy) (3.4.1) Requirement already satisfied: scipy>=1.1.0 in c:\users\stasb\anaconda3\envs\p38\lib\site-packages (from orbipy) (1.6.3) Requirement already satisfied: pandas>=0.23.4 in c:\users\stasb\appdata\roaming\python\python38\site-packages (from orbipy) (1.1.4) Collecting llvmlite<0.35,>=0.34.0.dev0 Using cached llvmlite-0.34.0-cp38-cp38-win_amd64.whl (15.9 MB) Requirement already satisfied: setuptools in c:\users\stasb\anaconda3\envs\p38\lib\site-packages (from numba~=0.51.2->fastrk) (49.6.0.post20200814) Requirement already satisfied: mpmath>=0.19 in c:\users\stasb\appdata\roaming\python\python38\site-packages\mpmath-1.1.0-py3.8.egg (from sympy~=1.6.2->fastrk) (1.1.0) Requirement already satisfied: python-dateutil>=2.7 in c:\users\stasb\appdata\roaming\python\python38\site-packages\python_dateutil-2.8.1-py3.8.egg (from matplotlib>=3.0.0->orbipy) (2.8.1) Requirement already satisfied: pyparsing>=2.2.1 in c:\users\stasb\appdata\roaming\python\python38\site-packages\pyparsing-3.0.0b1-py3.8.egg (from matplotlib>=3.0.0->orbipy) (3.0.0b1) Requirement already satisfied: pillow>=6.2.0 in c:\users\stasb\appdata\roaming\python\python38\site-packages\pillow-8.0.1-py3.8-win-amd64.egg (from matplotlib>=3.0.0->orbipy) (8.0.1) Requirement already satisfied: cycler>=0.10 in c:\users\stasb\appdata\roaming\python\python38\site-packages\cycler-0.10.0-py3.8.egg (from matplotlib>=3.0.0->orbipy) (0.10.0) Requirement already satisfied: kiwisolver>=1.0.1 in c:\users\stasb\appdata\roaming\python\python38\site-packages\kiwisolver-1.3.1-py3.8-win-amd64.egg (from matplotlib>=3.0.0->orbipy) (1.3.1) Requirement already satisfied: pytz>=2017.2 in c:\users\stasb\anaconda3\envs\p38\lib\site-packages (from pandas>=0.23.4->orbipy) (2021.1) Requirement already satisfied: six>=1.5 in c:\users\stasb\appdata\roaming\python\python38\site-packages\six-1.15.0-py3.8.egg (from python-dateutil>=2.7->matplotlib>=3.0.0->orbipy) (1.15.0) Installing collected packages: numpy, llvmlite, numba Attempting uninstall: numpy Found existing installation: numpy 1.20.2 Uninstalling numpy-1.20.2: Successfully uninstalled numpy-1.20.2 WARNING: Keyring is skipped due to an exception: entry_points() got an unexpected keyword argument 'group' ERROR: Could not install packages due to an EnvironmentError: [WinError 5] Access is denied: 'C:\\Users\\stasb\\AppData\\Roaming\\Python\\Python38\\site-packages\\~umpy\\.libs\\libopenblas.GK7GX5KEQ4F6UYO3P26ULGBQYHGQO7J4.gfortran-win_amd64.dll' Consider using the `--user` option or check the permissions. 
```python import numpy as np import pandas as pd from scipy.integrate import solve_ivp from fastrk import BT8713M, RKCodeGen, EventsCodeGen from model_crtbp import crtbp from timeit import timeit from numba import njit import orbipy as op ``` C:\Users\stasb\Anaconda3\envs\p38\lib\site-packages\numpy\_distributor_init.py:30: UserWarning: loaded more than 1 DLL from .libs: C:\Users\stasb\Anaconda3\envs\p38\lib\site-packages\numpy\.libs\libopenblas.NOIJJG62EMASZI6NYURL6JBKM4EVBGM7.gfortran-win_amd64.dll C:\Users\stasb\Anaconda3\envs\p38\lib\site-packages\numpy\.libs\libopenblas.WCDJNK7YVMPZQ2ME2ZZHJJRJ3JIKNDB7.gfortran-win_amd64.dll warnings.warn("loaded more than 1 DLL from .libs:" ```python # Orbipy uses scipy.integrate.ode model = op.crtbp3_model() # Events ev_y = op.eventY(0., terminal=False) ev_z = op.eventZ(0., terminal=False) ev_vx = op.eventVX(0., terminal=False) ev_vy = op.eventVY(0., terminal=False) ev_vz = op.eventVZ(0., terminal=False) events = [ev_y, ev_z, ev_vx, ev_vy, ev_vz] ``` ```python # generate code and import rk_prop function # new module rk_bt8713m.py will be created (at first call) or loaded # CRTBP is autonomous system i.e. equations doesn't have explicit time dependency rk_module = RKCodeGen(BT8713M, autonomous=True).save_and_import() rk_prop = rk_module.rk_prop rk_prop_ev = rk_module.rk_prop_ev # after first run code above can be replaced by # from rk_8713M import rk_prop, rk_prop_ev ``` ```python # generate code and import call_event function; # new module __evcache__/ev_<hash>.py will be created (at first call) or loaded; # call_event(t, s, values, idx) function calls specific event function by its index in events list call_event = EventsCodeGen(events).save_and_import() ``` ```python # Let's define Cauchy's problem: # initial time and state for halo orbit t0 = 0. 
t1 = 3*np.pi s0 = np.zeros(6) s0[[0, 2, 4]] = 9.949942666080747733e-01, 4.732924802139452415e-03, -1.973768492871211949e-02 mc = np.array([3.001348389698916e-06]) ``` ```python # integration parameters, same as Orbipy default params = {'max_step': np.inf, 'rtol': 1e-12, 'atol': 1e-12, } ``` ```python # event detector object dtr = op.event_detector(model, events) df, evdf = dtr.prop(s0, t0, t1) ``` ```python values = np.array([e.value for e in events]) terminals = np.array([e.terminal for e in events]) directions = np.array([e.direction for e in events]) counts = np.array([e.count for e in events]) accurates = np.array([e.accurate for e in events]) trj, evarr = rk_prop_ev(crtbp, s0, 0, t1, *params.values(), values, terminals, directions, counts, accurates, call_event, 1e-12, 1e-12, 100, mc) ``` ```python evdf1 = pd.DataFrame(evarr, columns=['e', 'cnt', 't', 'x', 'y', 'z', 'vx', 'vy', 'vz']) ``` ```python # check event times evdf.t - evdf1.t ``` 0 -4.922285e-12 1 4.164447e-12 2 7.951861e-12 3 3.029155e-11 4 4.951595e-13 5 5.851453e-11 6 1.755840e-11 7 1.680007e-10 8 -6.343224e-10 9 1.618141e-10 10 4.295067e-10 11 1.630771e-10 12 -7.911112e-10 13 6.959928e-09 14 2.087988e-09 15 1.687998e-08 16 8.396579e-09 17 5.646641e-08 18 -2.277712e-07 19 5.820032e-08 20 1.568987e-07 21 5.598841e-08 22 -2.966316e-07 23 2.560417e-06 24 7.688840e-07 25 6.216301e-06 26 3.096731e-06 27 2.080954e-05 28 -8.392425e-05 29 2.144755e-05 Name: t, dtype: float64 ```python evdf[evdf.e == 0].y - evdf1[evdf1.e == 0].y ``` 2 2.089539e-17 7 -1.923244e-18 12 -3.282024e-16 17 -1.551711e-13 22 1.787718e-16 27 -2.841581e-13 Name: y, dtype: float64 ```python pltr = op.plotter.from_model(model) ax = pltr.plot_proj(evdf, color='b', ls='', marker='+', ms=10) pltr.plot_proj(evdf1, color='r', ls='', marker='.', ax=ax); ``` ```python loops = 1000 str0 = "dtr.prop(s0, t0, t1)" str1 = """rk_prop_ev(crtbp, s0, 0, t1, *params.values(), values, terminals, directions, counts, accurates, call_event, 1e-12, 1e-12, 100, mc)""" r0 = timeit(str0, number=loops, globals=globals()) r1 = timeit(str1, number=loops, globals=globals()) print(f"orbipy DOP853 time {r0:.2f}") print(f"fastrk DOP8713M time {r1:.2f}") print(f"speedup x {r0/r1:.2f}") ``` orbipy DOP853 time 12.63 fastrk DOP8713M time 2.11 speedup x 6.00 ```python @njit def event_0(t, s, *args): return s[1] @njit def event_1(t, s, *args): return s[2] @njit def event_2(t, s, *args): return s[3] @njit def event_3(t, s, *args): return s[4] @njit def event_4(t, s, *args): return s[5] # use solve_ivp to calculate events sol = solve_ivp(crtbp, (t0, t1), s0, method='DOP853', **params, args=(mc,), events=[event_0, event_1, event_2, event_3, event_4]) ``` ```python loops = 100 str0 = """solve_ivp(crtbp, (t0, t1), s0, method='DOP853', **params, args=(mc,), events=[event_0, event_1, event_2, event_3, event_4])""" str1 = """rk_prop_ev(crtbp, s0, 0, t1, *params.values(), values, terminals, directions, counts, accurates, call_event, 1e-12, 1e-12, 100, mc)""" r0 = timeit(str0, number=loops, globals=globals()) r1 = timeit(str1, number=loops, globals=globals()) print(f"solve_ivp DOP853 time {r0:.2f}") print(f"fastrk DOP8713M time {r1:.2f}") print(f"speedup x {r0/r1:.2f}") ``` solve_ivp DOP853 time 3.80 fastrk DOP8713M time 0.22 speedup x 17.65 ```python dfs = [] for i, evarr in enumerate(sol.y_events): df = pd.DataFrame(evarr, columns=['x','y','z','vx','vy','vz']) df['e'] = i df['t'] = sol.t_events[i] dfs.append(df) evdf2 = pd.concat(dfs, axis=0) ``` ```python # solve ivp calculated additional events at 
initial time # let's drop it and sort by time evdf3 = evdf2[evdf2.t > 1e-19].sort_values(by='t').reset_index(drop=True) # check event times evdf1.t - evdf3.t ``` 0 -7.969181e-13 1 -4.184764e-12 2 5.084377e-12 3 5.291101e-12 4 -4.860801e-11 5 -6.075274e-11 6 -7.313705e-12 7 1.243031e-09 8 -4.274163e-10 9 -1.897766e-10 10 -4.538290e-10 11 -1.785923e-10 12 8.427596e-10 13 -7.195746e-10 14 -8.946062e-09 15 -1.796951e-08 16 -8.984693e-09 17 3.321368e-07 18 -1.491594e-07 19 -6.215807e-08 20 -1.675911e-07 21 -5.982199e-08 22 3.168344e-07 23 -2.611800e-07 24 -3.294949e-06 25 -6.639791e-06 26 -3.307739e-06 27 1.223962e-04 28 -5.498337e-05 29 -2.290884e-05 Name: t, dtype: float64 ```python evdf[evdf.e == 0].y.to_numpy() - evdf3[evdf3.e == 0].y.to_numpy() ``` array([-8.60585474e-19, 2.32595247e-18, 9.75781955e-19, -1.55185442e-13, 8.34835673e-18, -2.84128339e-13]) ```python ax = pltr.plot_proj(evdf, color='b', ls='', marker='+', ms=10) pltr.plot_proj(evdf3, color='r', ls='', marker='.', ax=ax); ```
789c5d9df49e7b8013018c3071ee2c8aad8241b9
49,992
ipynb
Jupyter Notebook
examples/ex1_calculate_events.ipynb
BoberSA/fastrk
72b313537428bd4c15bdca29c9c4ef0c01039184
[ "MIT" ]
1
2021-07-12T18:36:27.000Z
2021-07-12T18:36:27.000Z
examples/ex1_calculate_events.ipynb
BoberSA/fastrk
72b313537428bd4c15bdca29c9c4ef0c01039184
[ "MIT" ]
null
null
null
examples/ex1_calculate_events.ipynb
BoberSA/fastrk
72b313537428bd4c15bdca29c9c4ef0c01039184
[ "MIT" ]
null
null
null
88.014085
16,215
0.823372
true
4,087
Qwen/Qwen-72B
1. YES 2. YES
0.810479
0.692642
0.561372
__label__eng_Latn
0.282794
0.142584
## Data Analytics ### Fitting Parametric Distributions in Python #### Michael Pyrcz, Associate Professor, The University of Texas at Austin ##### [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1) ### Data Analytics: Fitting Parametric Distributions Here's a demonstration of fitting parametric distributions in Python. This demonstration is part of the resources that I include for my courses in Spatial / Subsurface Data Analytics at the Cockrell School of Engineering at the University of Texas at Austin. #### Parametric Distributions We will cover fitting the following distribution: * Gaussian For more information about working with parametric distributions in Python see this [demonstration](https://github.com/GeostatsGuy/PythonNumericalDemos/blob/master/PythonDataBasics_ParametricDistributions.ipynb). I have a lecture on these parametric distributions available on [YouTube](https://www.youtube.com/watch?v=U7fGsqCLPHU&t=1687s). #### Fitting Parametric Distributions With distribution fitting we are maximizing the likelihood function, $L(\theta)$: \begin{equation} L(\theta) = \prod^{n}_{\alpha=1} f\left(x_{\alpha} | \theta \right) \end{equation} the product of the probabilities of the data, $f\left(x_{\alpha}\right)$, over all the data $\alpha = 1,\ldots,n$, given the set of distribution parameters, $\theta$. #### Getting Started Here are the steps to get set up in Python with the GeostatsPy package: 1. Install Anaconda 3 on your machine (https://www.anaconda.com/download/). 2. From Anaconda Navigator (within Anaconda3 group), go to the environment tab, click on base (root) green arrow and open a terminal. 3. In the terminal type: pip install geostatspy. 4. Open Jupyter and in the top block get started by copying and pasting the code block below from this Jupyter Notebook to start using the geostatspy functionality. You will need to copy the data file to your working directory. It is available here: * Tabular data - unconv_MV_v4.csv at https://git.io/fhHLT. #### Importing Packages We will need some standard packages. These should have been installed with Anaconda 3. ```python import numpy as np # ndarrays for gridded data import pandas as pd # DataFrames for tabular data import os # set working directory, run executables import matplotlib.pyplot as plt # for plotting from scipy import stats # summary statistics import math # trigonometry etc. import random # for random numbers ``` #### Set the Working Directory I always like to do this so I don't lose files and to simplify subsequent reads and writes (avoid including the full address each time). ```python os.chdir("c:/PGE383") # set the working directory ``` ### Gaussian Distribution Let's now use the Gaussian parametric distribution. * we will need the parameters mean and the variance We will apply the forward and reverse operations and calculate the summary statistics.
```python from scipy.stats import norm as my_dist # import traingular dist as my_dist dist_type = 'Gaussian' # give the name of the distribution for labels mean = 0.15; stdev = 0.03 # given the distribution parameters x_values = np.linspace(0.0,0.3,100) # get an array of x values p_values = my_dist.pdf(x_values, loc = mean, scale = stdev) # calculate density for each x value P_values = my_dist.cdf(x_values, loc = mean, scale = stdev) # calculate cumulative probablity for each x value plt.subplot(1,3,1) # plot the resulting PDF plt.plot(x_values, p_values,'r-', lw=5, alpha=0.3); plt.title('Sampling ' + str(dist_type) + ' PDF'); plt.xlabel('Values'); plt.ylabel('Density') plt.subplot(1,3,2) # plot the resulting CDF plt.plot(x_values, P_values,'r-', lw=5, alpha=0.3); plt.title('Sampling ' + str(dist_type) + ' CDF'); plt.xlabel('Values'); plt.ylabel('Cumulative Probability') p_values = np.linspace(0.00001,0.99999,100) # get an array of p-values x_values = my_dist.ppf(p_values, loc = mean, scale = stdev) # apply inverse to get x values from p-values plt.subplot(1,3,3) plt.plot(x_values, p_values,'r-', lw=5, alpha=0.3, label='uniform pdf') plt.subplots_adjust(left=0.0, bottom=0.0, right=2.8, top=0.8, wspace=0.2, hspace=0.3); plt.title('Sampling Inverse ' + str(dist_type) + ' CDF'); plt.xlabel('Values'); plt.ylabel('Cumulative Probability') print('The mean is ' + str(round(my_dist.mean(loc = mean, scale = stdev),4)) + '.') # calculate stats and symmetric interval print('The variance is ' + str(round(my_dist.var(loc = mean, scale = stdev),4)) + '.') ``` #### Fitting the Distribution Let's make a random dataset and then fit it with a Gaussian distribution. ```python n = 100 samples = my_dist.rvs(size=int(n), loc = mean, scale = stdev, random_state = 73073).tolist() # make a synthetic dataset plt.subplot(1,2,1) # plot the binned PDF plt.hist(samples,bins = np.linspace(mean-3*stdev,mean+3*stdev,30), density = True, color = 'red', edgecolor = 'black', alpha = 0.2) plt.xlim(0.0,0.3); plt.ylim(0,25); plt.xlabel('Values'); plt.ylabel('Density'); plt.title('Synthetic Sample Data') plt.subplot(1,2,2) # plot the CDF plt.hist(samples,bins = np.linspace(mean-3*stdev,mean+3*stdev,30), density = True, color = 'red', edgecolor = 'black', cumulative = True, histtype = 'stepfilled',alpha = 0.2) plt.xlim(0.0,0.3); plt.ylim(0,1); plt.xlabel('Values'); plt.ylabel('Cumulative Probability'); plt.title('Synthetic Sample Data') plt.subplots_adjust(left=0.0, bottom=0.0, right=2.2, top=1.0, wspace=0.2, hspace=0.3) ``` ```python plt.subplot(1,2,1) # plot the binned PDF plt.hist(samples,bins = np.linspace(mean-3*stdev,mean+3*stdev,30), density = True, color = 'red', edgecolor = 'black', alpha = 0.2) x = np.linspace(my_dist.ppf(0.0001, loc = mean, scale = stdev),my_dist.ppf(0.9999, loc = mean, scale = stdev), 100) fit_mean, fit_stdev = my_dist.fit(samples,loc = mean, scale = stdev) # fit MLE of the distribution parameters print('MLE mean = ' + str(round(fit_mean,3)) + ', MLE standard deviation = ' + str(round(fit_stdev,3))) plt.plot(x, my_dist.pdf(x, loc = fit_mean, scale = fit_stdev), 'r-', lw=5, alpha=0.6, label='norm pdf') plt.xlim(0.0,0.3); plt.ylim(0,25); plt.xlabel('Values'); plt.ylabel('Density'); plt.title('Synthetic Sample Data and Parametric Distribution Fit') plt.subplot(1,2,2) # plot the binned PDF with the fit parametric distribution plt.hist(samples,bins = np.linspace(fit_mean-3*fit_stdev,fit_mean+3*fit_stdev,30), density = True, color = 'red', edgecolor = 'black', cumulative = True, histtype = 
'stepfilled',alpha = 0.2) plt.xlim(0.0,0.3); plt.ylim(0,1); plt.xlabel('Values'); plt.ylabel('Cumulative Probability'); plt.title('Synthetic Sample Data and Parametric Distribution Fit') # plot the CDF with the fit parametric distribution plt.plot(x, my_dist.cdf(x, loc = fit_mean, scale = fit_stdev), 'r-', lw=5, alpha=0.6, label='norm pdf') plt.subplots_adjust(left=0.0, bottom=0.0, right=2.2, top=1.0, wspace=0.2, hspace=0.3) ``` There are many other parametric distributions that we could have included, I just wanted to provide a concise demonstration of distribution fitting. #### Comments This was a basic demonstration of fitting parametric distributions. I have other demonstrations on the basics of working with DataFrames, ndarrays, univariate statistics, plotting data, declustering, data transformations, trend modeling and many other workflows available at [Python Demos](https://github.com/GeostatsGuy/PythonNumericalDemos) and a Python package for data analytics and geostatistics at [GeostatsPy](https://github.com/GeostatsGuy/GeostatsPy). I hope this was helpful, *Michael* #### The Author: ### Michael Pyrcz, Associate Professor, University of Texas at Austin *Novel Data Analytics, Geostatistics and Machine Learning Subsurface Solutions* With over 17 years of experience in subsurface consulting, research and development, Michael has returned to academia driven by his passion for teaching and enthusiasm for enhancing engineers' and geoscientists' impact in subsurface resource development. For more about Michael check out these links: #### [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1) #### Want to Work Together? I hope this content is helpful to those that want to learn more about subsurface modeling, data analytics and machine learning. Students and working professionals are welcome to participate. * Want to invite me to visit your company for training, mentoring, project review, workflow design and / or consulting? I'd be happy to drop by and work with you! * Interested in partnering, supporting my graduate student research or my Subsurface Data Analytics and Machine Learning consortium (co-PIs including Profs. Foster, Torres-Verdin and van Oort)? My research combines data analytics, stochastic modeling and machine learning theory with practice to develop novel methods and workflows to add value. We are solving challenging subsurface problems! * I can be reached at mpyrcz@austin.utexas.edu. I'm always happy to discuss, *Michael* Michael Pyrcz, Ph.D., P.Eng. 
Associate Professor The Hildebrand Department of Petroleum and Geosystems Engineering, Bureau of Economic Geology, The Jackson School of Geosciences, The University of Texas at Austin #### More Resources Available at: [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1) ```python ```
1774e8696c0378a15ca47abd320c490e896a8b96
96,305
ipynb
Jupyter Notebook
PythonDataBasics_ParametricDistributions_Fitting.ipynb
AndrewAnnex/PythonNumericalDemos
11a7dec09bf08527b358fa95119811b6f73023b5
[ "MIT" ]
403
2017-10-15T02:07:38.000Z
2022-03-30T15:27:14.000Z
PythonDataBasics_ParametricDistributions_Fitting.ipynb
AndrewAnnex/PythonNumericalDemos
11a7dec09bf08527b358fa95119811b6f73023b5
[ "MIT" ]
4
2019-08-21T10:35:09.000Z
2021-02-04T04:57:13.000Z
PythonDataBasics_ParametricDistributions_Fitting.ipynb
AndrewAnnex/PythonNumericalDemos
11a7dec09bf08527b358fa95119811b6f73023b5
[ "MIT" ]
276
2018-06-27T11:20:30.000Z
2022-03-25T16:04:24.000Z
290.951662
33,096
0.90608
true
2,911
Qwen/Qwen-72B
1. YES 2. YES
0.879147
0.785309
0.690401
__label__eng_Latn
0.818117
0.442365
# Perceptron A perceptron is a neural network unit (an artificial neuron). It is the most basic unit of a neural network. The figure below depicts a model of a perceptron. At first glance, this may look complicated but we'll dive deeper into this, step by step. ## General A general explanation is the following: The inputs $x$ are represented in green. In blue, you see one weight that is attributed to each input. In orange, you have the sum of the input/weight combination with a new term, $b$, which represents the bias. All of this is fed into an activation function, in red, which outputs an output $y$. If we abstract the perceptron away as a function, say $P$, we can say that the output $y$ is a function of the inputs $x$. Mathematically, we can formulate this as $$ y = P(\mathbf{x}) $$ ![Complete perceptron [Image]](./assets/Perceptron_ALL.png) ## Sum In the previous cell, we said that a *weight is attributed to each input*. Actually, each input $x_i$ is multiplied by its corresponding weight $w_i$. The sum operator $\Sigma$ sums up every input-weight multiplication. This can be written as $$\sum^{m}_{i=1}x_iw_i$$ The bias $b$ is just a simple constant that we add to this whole sum. We will see later what it does. At the end of the summation operation, we end up with the following expression, which we'll call $z$ to make it easier. $$z = \sum^{m}_{i=1}x_iw_i + b$$ ![Sum part of perceptron [Image]](./assets/Perceptron_SUM.png) ## Activation function The next (and last) operation is the activation function, $f$. Just like a normal function in a programming context, $f$ takes as argument $z$ and returns the output value $y$. You can see one example of $f$ on the graph on the right. Here, the activation function is called the *Heaviside step function*. This function returns 0 for every $z$ that is smaller than $0$, and for every $z$ greater than $0$, it returns 1. Mathematically, that can be formulated as $$y=\begin{cases} 0, & \text{if } z < 0, \\ 1, & \text{if } z > 0, \end{cases}$$ If we expand, let's say, the case when $z > 0$, we have: $$\begin{align} z &> 0 \\ \Leftrightarrow \sum^{m}_{i=1}x_iw_i + b &> 0 && \text{(We replace z by }\sum^{m}_{i=1}x_iw_i + b) \\ \Leftrightarrow \sum^{m}_{i=1}x_iw_i &> -b && \text{(We subtract b on each side of the equation)} \\ \end{align}$$ If we do the same for the case $z<0$, we have $$y=\begin{cases} 0, & \text{if } \sum^{m}_{i=1}x_iw_i < -b \\ 1, & \text{if } \sum^{m}_{i=1}x_iw_i > -b \end{cases}$$ What does this last equation tell us? It tells us that if the sum of the inputs times the weights is greater than some number $-b$, then the output $y$ is 1, else it is 0. This will be easier to understand with an example. Let's say the bias $b = -1$. $$y=\begin{cases} 0, & \text{if } \sum^{m}_{i=1}x_iw_i < 1 \\ 1, & \text{if } \sum^{m}_{i=1}x_iw_i > 1 \end{cases}$$ This is like moving the boundary of the activation function, the one in red on the graph below, over by one. To have the boundary at 0, we just have to set the bias to 0. If we want the boundary to be at 5, we set the bias to -5. What this lets us do is control at which **threshold** the inputs activate the neuron. You can see this binary output, 1 or 0, as **on** or **off**. Thus (using this particular activation function) a neuron can either be activated or not activated.
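To make the threshold behaviour concrete, here is a minimal sketch of the computation described above (this code is not part of the original lesson; the input and weight values are arbitrary illustration numbers): the weighted sum plus the bias is passed through the Heaviside step activation.

```python
import numpy as np

def heaviside(z):
    """Step activation: 1 if z > 0, else 0."""
    return np.where(z > 0, 1, 0)

def perceptron(x, w, b):
    """Single perceptron: z = sum_i x_i * w_i + b, then the step activation."""
    z = np.dot(x, w) + b
    return heaviside(z)

x = np.array([0.5, -1.0, 2.0])   # three inputs
w = np.array([1.0, 0.5, 0.25])   # one weight per input
print(perceptron(x, w, b=0.0))   # sum of x_i * w_i = 0.5 > 0, so the neuron fires: 1
print(perceptron(x, w, b=-1.0))  # threshold moved to 1, and 0.5 < 1, so the output is 0
```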
![Activation part of perceptron [Image]](./assets/Perceptron_activation.png) If you don't want the neuron to be only **on** or **off**, but also somewhere in between, you can use other activation functions, some of which are shown below. The best part of all this is that you don't have to specify the weights and the biases, **they will be modified automatically.** That is what is meant when a neural network **learns**. The error from the output gets propagated back to the input to modify the weights. We will learn more about this later. ## Multi-layer perceptron (MLP) Now that we've seen how a single neuron unit works, we can put multiple ones in a stack to create a layer of neurons, as shown in the GIF below. We have 784 inputs on the left, and every input is connected to every one of the 16 neurons on the right. The weights are represented by the fine lines making the connections between the inputs and the neurons. Thus, we have the following sum **for each of the 16 neurons you see below**. $$z = \sum^{784}_{i=1}x_iw_i + b$$ Now that we know how to create a layer, we can chain layers one after another, to create multiple layers. This is where the name *multi-layer perceptron* (MLP for short) comes from. This means that the **output of a neuron from the first layer is the input of a neuron from the second layer.** ## Supplementary reading material I highly encourage you to watch the following video (from which the gifs were extracted). This video is related to this chapter; the rest of the videos in the playlist are more advanced, and we will not cover them in that much detail. - [3blue1brown But what is a neural network?](https://www.youtube.com/watch?v=aircAruvnKk)
b83a72c0ac1ec2b3fbb5c9a38abe719b2a919bd1
7,708
ipynb
Jupyter Notebook
Content/3.deep_learning_intro/1.perceptron.ipynb
becodeorg/ai_fundamentals_1.0
2b8f90511e1e0190e01916f411726013269e210c
[ "MIT" ]
null
null
null
Content/3.deep_learning_intro/1.perceptron.ipynb
becodeorg/ai_fundamentals_1.0
2b8f90511e1e0190e01916f411726013269e210c
[ "MIT" ]
null
null
null
Content/3.deep_learning_intro/1.perceptron.ipynb
becodeorg/ai_fundamentals_1.0
2b8f90511e1e0190e01916f411726013269e210c
[ "MIT" ]
null
null
null
36.358491
238
0.594318
true
1,425
Qwen/Qwen-72B
1. YES 2. YES
0.92944
0.865224
0.804174
__label__eng_Latn
0.999696
0.706699
# Molecular dynamics ## With OpenMM ```python import numpy as np import openmm as mm from openmm import unit from openmm.app import Simulation from uibcdf_systems import DoubleWell molecular_system = DoubleWell(n_particles = 1, mass = 64 * unit.amu, Eo=3.0 * unit.kilocalories_per_mole, a=5.0 * unit.angstroms, b=0.0 * unit.kilocalories_per_mole, k=1.0 * unit.kilocalories_per_mole/unit.angstrom**2) integrator = mm.LangevinIntegrator(300.0*unit.kelvin, 1.0/unit.picoseconds, 0.1*unit.picoseconds) platform = mm.Platform.getPlatformByName('CUDA') simulation = Simulation(molecular_system.topology, molecular_system.system, integrator, platform) coordinates = np.zeros([1, 3], np.float32) * unit.nanometers simulation.context.setPositions(coordinates) velocities = np.zeros([1, 3], np.float32) * unit.nanometers/unit.picoseconds simulation.context.setVelocities(velocities) simulation.step(1000) ``` ## With this library ```python import numpy as np import sympy as sy import matplotlib.pyplot as plt from openmm import unit from uibcdf_systems import DoubleWell from uibcdf_systems.tools import langevin molecular_system = DoubleWell(n_particles = 1, mass = 64 * unit.amu, Eo=3.0 * unit.kilocalories_per_mole, a=5.0 * unit.angstroms, b=0.0 * unit.kilocalories_per_mole, k=1.0 * unit.kilocalories_per_mole/unit.angstrom**2) ``` ### Newtonian dynamics ```python initial_positions = np.zeros([1, 3], np.float32) * unit.nanometers initial_positions[0,0] = 0.55 * unit.nanometers initial_velocities = np.zeros([1, 3], np.float32) * unit.nanometers/unit.picoseconds molecular_system.set_coordinates(initial_positions) molecular_system.set_velocities(initial_velocities) traj_dict = langevin(molecular_system, friction=0.0/unit.picoseconds, temperature=0.0*unit.kelvin, time=20.0*unit.picoseconds, saving_timestep=0.1*unit.picoseconds, integration_timestep=0.02*unit.picoseconds) ``` 0%| | 0/1000 [00:00<?, ?it/s] We can now plot the trajectory of the x coordinate: ```python plt.plot(traj_dict['time'], traj_dict['coordinates'][:,0,0]) plt.xlabel('time ({})'.format(traj_dict['time'].unit)) plt.ylabel('X ({})'.format(traj_dict['coordinates'].unit)) plt.show() ``` Now we can wonder: is the period of the oscillations in agreement with the value calculated before? A shorter run with more points could be enough to say, by inspection, that the calculation is correct. ```python mass = 64 * unit.amu Eo=3.0 * unit.kilocalories_per_mole a=5.0 * unit.angstroms T = 2*np.pi*np.sqrt((mass*a**2)/(8.0*Eo)) print('The period of the small oscillations around the minimum is',T) ``` The period of the small oscillations around the minimum is 2.5080627665032216 ps As we can also check with the method: ```python molecular_system.get_small_oscillations_time_periods_around_minima() ``` ([Quantity(value=array([-0.5, 0. , 0. ]), unit=nanometer), Quantity(value=array([0.5, 0. , 0. ]), unit=nanometer)], [Quantity(value=array([2.50806277, 2.45738961, 2.45738961]), unit=picosecond), Quantity(value=array([2.50806277, 2.45738961, 2.45738961]), unit=picosecond)]) ```python plt.plot(traj_dict['time'], traj_dict['coordinates'][:,0,0]) plt.axvline(T._value, color='gray', linestyle='--') # Period of the harmonic oscillations approximation plt.xlabel('time ({})'.format(traj_dict['time'].unit)) plt.ylabel('X ({})'.format(traj_dict['coordinates'].unit)) plt.show() ``` It seems to be a good approximation to what is observed graphically in the integrated trajectory. Keep in mind that this is an approximation valid when the particle moves very close to the bottom of the basins.
In a real harmonic oscillator this period does not change with the amplitude, but in this case it does. If the amplitude is large enough, the oscillations are far from being well approximated by the quadratic term of the Taylor expansion. Try playing with larger and smaller initial distances to the minimum to see this effect: ```python amplitudes = 0.5*unit.nanometers + np.array([0.02, 0.05, 0.07, 0.1, 0.15]) * unit.nanometers traj_dicts = [] for amplitude in amplitudes: initial_positions[0,0] = amplitude molecular_system.set_coordinates(initial_positions) traj_dict = langevin(molecular_system, friction=0.0/unit.picoseconds, temperature=0.0*unit.kelvin, time = 5*unit.picoseconds, integration_timestep = 0.02 * unit.picoseconds, saving_timestep = 0.1 * unit.picoseconds, tqdm=False) traj_dicts.append(traj_dict) for traj_dict in traj_dicts: plt.plot(traj_dict['time'], traj_dict['coordinates'][:,0,0]) plt.axvline(T._value, color='gray', linestyle='--') # Period of the harmonic oscillations approximation plt.xlabel('time ({})'.format(traj_dict['time'].unit)) plt.ylabel('X ({})'.format(traj_dict['coordinates'].unit)) plt.show() ``` The Newtonian dynamics can also include damping. This way we can simulate damped oscillations around the minimum. ```python traj_dict = langevin(molecular_system, friction=0.5/unit.picoseconds, temperature=0.0*unit.kelvin, time=20.0*unit.picoseconds, saving_timestep=0.1*unit.picoseconds, integration_timestep=0.02*unit.picoseconds) ``` 0%| | 0/1000 [00:00<?, ?it/s] ```python plt.plot(traj_dict['time'], traj_dict['coordinates'][:,0,0]) plt.xlabel('time ({})'.format(traj_dict['time'].unit)) plt.ylabel('X ({})'.format(traj_dict['coordinates'].unit)) plt.show() ``` What would be the friction value needed to enter the overdamped regime? ```python traj_dict = langevin(molecular_system, friction=5.0/unit.picoseconds, temperature=0.0*unit.kelvin, time=20.0*unit.picoseconds, saving_timestep=0.1*unit.picoseconds, integration_timestep=0.02*unit.picoseconds) ``` 0%| | 0/1000 [00:00<?, ?it/s] ```python plt.plot(traj_dict['time'], traj_dict['coordinates'][:,0,0]) plt.xlabel('time ({})'.format(traj_dict['time'].unit)) plt.ylabel('X ({})'.format(traj_dict['coordinates'].unit)) plt.show() ``` And the same can be checked for the asymmetric double well potential. In the case of the period of small oscillations around a minimum, keep in mind that there the minimum is no longer at $a$, and that $T$ was calculated here with the value of the second derivative (which is the same, no matter the value of $b$) at $a$. But it is still a good approximation.
### Stochastic Dynamics Thanks to the `tools.langevin` method in this library, a simple Langevin (stochastic) dynamics simulation at finite temperature can be run with little effort: ```python initial_positions = np.zeros([1, 3], np.float32) * unit.nanometers initial_positions[0,0] = 0.5 * unit.nanometers initial_velocities = np.zeros([1, 3], np.float32) * unit.nanometers/unit.picoseconds molecular_system.set_coordinates(initial_positions) molecular_system.set_velocities(initial_velocities) traj_dict = langevin(molecular_system, friction=1.0/unit.picoseconds, temperature=300.0*unit.kelvin, time=10.0*unit.nanoseconds, saving_timestep=1.0*unit.picoseconds, integration_timestep=0.1*unit.picoseconds) ``` 0%| | 0/100000 [00:00<?, ?it/s] Let us see the time evolution of the coordinate $x$ of our single particle: ```python plt.plot(traj_dict['time'], traj_dict['coordinates'][:,0,0]) plt.xlabel('time ({})'.format(traj_dict['time'].unit)) plt.ylabel('X ({})'.format(traj_dict['coordinates'].unit)) plt.show() ``` The trajectory is long enough to observe the system hopping out of the initial basin and back.
d5a5308da310102f17d8270af81e5c4207b7b41d
185,361
ipynb
Jupyter Notebook
docs/contents/molecular_systems/double_well/molecular_dynamics.ipynb
uibcdf/Molecular-Systems
74c4313ae25584ad24bea65f961280f187eda9cb
[ "MIT" ]
null
null
null
docs/contents/molecular_systems/double_well/molecular_dynamics.ipynb
uibcdf/Molecular-Systems
74c4313ae25584ad24bea65f961280f187eda9cb
[ "MIT" ]
null
null
null
docs/contents/molecular_systems/double_well/molecular_dynamics.ipynb
uibcdf/Molecular-Systems
74c4313ae25584ad24bea65f961280f187eda9cb
[ "MIT" ]
null
null
null
359.924272
46,680
0.935051
true
2,076
Qwen/Qwen-72B
1. YES 2. YES
0.843895
0.839734
0.708647
__label__eng_Latn
0.81837
0.484757
# Lecture 1: Matrices & vectors ## Syllabus **Week 2:** Matrices, vectors, norms, ranks ## What is a matrix A matrix is a two-dimensional table. Here is an example of a $3 \times 3$ matrix \begin{equation} A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} \end{equation} A vector is an $n \times 1$ matrix (there are **row** and **column** vectors). ## What we can do with matrices and vectors 1. Add them: $a = b + c$ 2. Multiply by numbers ## Matrix as a linear operator A matrix is typically used to encode a **linear operator**: $$ y = A x,$$ called the matrix-by-vector product, in the index form $$y_i = \sum_{j=1}^m A_{ij} x_j, \quad i = 1, \ldots, n.$$ ## Linear dependencies Many physical models are formulated as **linear equations**: - Newton's law $F = ma$ - Hooke's law $F = kx$ which is based on the fact that if the change is small, everything can be approximated by a linear function: $$ f(x + \delta x) \approx f(x) + f'(x) \delta x. $$ Of course, nonlinearities may come into play, but even in this case the numerical methods **linearize** the problem around the current approximation. Matrices and linear dependence also play an important role in data analysis. ## Linear dependencies in real life A matrix encodes linear dependence. Linear dependence is the simplest and often very efficient model for the data. We will give two illustrations: - Principal component analysis - Independent component analysis ## Demo: principal component analysis One of the basic factor models is factor analysis: $y_1, \ldots, y_P$ are vectors (data points) that are observed. We think of them as a **linear mixture**. We generate random points on a plane and then transform them with a certain mixing matrix. We generate a sequence of random points in two dimensions, and set up a mixing matrix $A$. ``` %matplotlib inline import matplotlib.pyplot as plt import prettyplotlib as ppl from sklearn.decomposition import PCA import numpy as np P = 1000 points = np.random.randn(P,2) A = [[2, 1], [0, 1]] A = np.array(A) ppl.plot(points[:, 0], points[:, 1], ls='', marker='o') ``` We can also plot the transformed points: look at how they are "skewed". ``` R = np.dot(points, A) ppl.plot(R[:, 0], R[:, 1], ls='', marker='o') ``` We can also rotate them back by finding **principal components**, which is equivalent to the singular value decomposition (SVD). We will talk about SVD later, now we just call it from the **numpy.linalg** module. As we can see, the points are back to a symmetric distribution, so we have recovered the matrix. Principal component analysis is used for many applications, in particular, for the **classification problem**: after the rotation, the points belonging to different classes can be visually separated. ``` %matplotlib inline u, s, v = np.linalg.svd(R, full_matrices=False) unrotated = R.dot(v.T) ppl.plot(unrotated[:, 0], unrotated[:, 1], ls='', marker='o') ``` ## Demo: Cocktail party problem The linear models and the factors may have a real physical meaning. One of the most interesting illustrations is the **cocktail party problem**, which is defined as follows. We have a set of sources $x(t)$ (people talking) and a set of microphones. At each microphone we record a **linear mixture**: $$ y = A x(t) + \eta(t), $$ where $\eta(t)$ is some noise. We do not know $A$ and want to recover it. [Demo](bss.ipynb) ## Matrix-by-matrix product Consider the composition of two linear operators: 1. $y = Bx$ 2.
$z = Ay$ Then, $z = Ay = A B x = C x$, where $C$ is the **matrix-by-matrix product**. A product of an $n \times k$ matrix $A$ and a $k \times m$ matrix $B$ is an $n \times m$ matrix $C$ with the elements $$ c_{ij} = \sum_{s=1}^k a_{is} b_{sj}, \quad i = 1, \ldots, n, \quad j = 1, \ldots, m $$ The complexity of the naive algorithm for MM is $\mathcal{O}(n^3)$. The matrix-by-matrix product is the **core** of almost all efficient algorithms in linear algebra. Basically, all the NLA algorithms are reduced to a sequence of matrix-by-matrix products, so an efficient implementation of MM speeds up numerical algorithms by the same factor. However, implementing MM is not easy at all! ## Efficient implementation for MM Is it easy to multiply a matrix by a matrix? The answer is: **no**, if you want it as fast as possible, using the computers that are at hand. ## Demo Let us do a short demo and compare a `np.dot()` procedure which in my case uses MKL with a hand-written matrix-by-matrix routine in Python and also its Cython version (and also gives a very short introduction to Cython). ``` import numpy as np def matmul(a, b): n = a.shape[0] k = a.shape[1] m = b.shape[1] c = np.zeros((n, m)) for i in xrange(n): for j in xrange(m): for s in xrange(k): c[i, j] += a[i, s] * b[s, j] return c ``` `%load_ext cythonmagic` makes it possible to use `%%cython` magic for an automatic compilation of the [Cython code](http://cython.org/). Cython is a Python compiler with additional directives that allow you to specify the types of some variables. In many cases it may lead to a significant speedup. For other possibilities you may look at the [following comparison](http://nbviewer.ipython.org/url/jakevdp.github.io/downloads/notebooks/NumbaCython.ipynb) ``` %load_ext cythonmagic ``` ```cython %%cython import numpy as np def cython_matmul(double [:, :] a, double[:, :] b): cdef int n = a.shape[0] cdef int k = a.shape[1] cdef int m = b.shape[1] cdef int i cdef int j cdef int s c = np.zeros((n, m)) cdef double[:, :] cview = c for i in xrange(n): for j in xrange(m): for s in xrange(k): c[i, j] += a[i, s] * b[s, j] return c ``` Then we just time the three different routines. Guess the answer. ``` n = 100 a = np.random.randn(n, n) b = np.random.randn(n, n) %timeit c = matmul(a, b) %timeit cf = cython_matmul(a, b) %timeit c = np.dot(a, b) ``` 1 loops, best of 3: 1.57 s per loop 1 loops, best of 3: 567 ms per loop 10000 loops, best of 3: 110 µs per loop Why is it so? There are two important issues: - Computers are more and more parallel (multicore, graphics processing units) - The memory pyramid: there is a whole hierarchy of levels ## Memory architecture Fast memory is small, bigger memory is slow. - Data fits into the fast memory: load all data, do computations - Data does not fit into the fast memory: load data by chunks, do computations, load again We need to reduce the number of read/write operations! This is typically achieved in efficient implementations of the BLAS libraries, one of which (Intel MKL) we now use. ## BLAS Basic linear algebra operations (**BLAS**) have three levels: 1. BLAS-1, operations like $c = a + b$ 2. BLAS-2, operations like matrix-by-vector product 3. BLAS-3, matrix-by-matrix product What is the principal difference between them? The main difference is the number of operations vs. the number of data! 1. BLAS-1: $\mathcal{O}(n)$ data, $\mathcal{O}(n)$ operations 2. BLAS-2: $\mathcal{O}(n^2)$ data, $\mathcal{O}(n^2)$ operations 3.
BLAS-3: $\mathcal{O}(n^2)$ data, $\mathcal{O}(n^3)$ operations **Remark**: the quest for an $\mathcal{O}(n^2)$ matrix-by-matrix multiplication algorithm is not over yet. Strassen gives $\mathcal{O}(n^{2.807...})$ The world record is $\mathcal{O}(n^{2.37})$ [Reference](http://arxiv.org/pdf/1401.7714v1.pdf) The constant is unfortunately too big to make it practical! ## Memory hierarchy How can we use the memory hierarchy? Break the matrix into blocks! ($2 \times 2$ is an **illustration**) $$ A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}, \quad B = \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix} $$ Then, $$ A B = \begin{bmatrix} A_{11} B_{11} + A_{12} B_{21} & A_{11} B_{12} + A_{12} B_{22} \\ A_{21} B_{11} + A_{22} B_{21} & A_{21} B_{12} + A_{22} B_{22} \end{bmatrix} $$ If $A_{11}, B_{11}$ and their product fit into the cache memory (which is 1024 Kb for the [Haswell Intel Chip](http://en.wikipedia.org/wiki/List_of_Intel_Core_i7_microprocessors#.22Haswell-H.22_.28MCP.2C_quad-core.2C_22_nm.29)), then we load them only once into the memory. ### Key point The number of reads/writes is reduced by a factor $\sqrt{M}$, where $M$ is the cache size. - Have to do linear algebra in terms of blocks! (a short NumPy sketch of blocked multiplication is given at the end of this lecture) - So, you cannot even do Gaussian elimination as usual (or just suffer a 10x performance loss) ## Parallelization Blocking also has a deep connection with parallel computations. Consider adding two vectors: $$ c = a + b$$ and we have two processors. How fast can we go? Of course, not faster than twice. ``` ## This demo requires Anaconda distribution to be installed import mkl import numpy as np n = 1000 a = np.random.randn(n) mkl.set_num_threads(1) %timeit a + a mkl.set_num_threads(2) %timeit a + a ``` 100000 loops, best of 3: 2.21 µs per loop 100000 loops, best of 3: 2.13 µs per loop ``` ## This demo requires Anaconda distribution to be installed import mkl n = 500 a = np.random.randn(n, n) mkl.set_num_threads(1) %timeit a.dot(a) mkl.set_num_threads(2) %timeit a.dot(a) ``` 100 loops, best of 3: 12.7 ms per loop 100 loops, best of 3: 10.8 ms per loop Typically, two cases are distinguished: 1. Shared memory (i.e., multicore on every desktop/smartphone) 2. Distributed memory (i.e. each processor has its own memory, can send information through a network) In both cases, the efficiency is governed by the memory **bandwidth**: i.e., for BLAS-1 routines (like the sum of two vectors) reads/writes take all the time. For BLAS-3 routines, a more noticeable speedup can be obtained. For large-scale clusters (>100 000 cores, see the [Top500 list](http://www.top500.org/lists/)) there is still scaling. ##### Questions?
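Here is the short NumPy sketch of blocked multiplication mentioned above (it is an illustration added here, not part of the original lecture, and the block size is an arbitrary example value): the matrices are multiplied block by block, so each pair of blocks can be reused while it sits in fast memory.

```
import numpy as np

def blocked_matmul(a, b, block=64):
    # multiply an (n x k) matrix by a (k x m) matrix block by block;
    # NumPy slicing automatically handles the smaller blocks at the edges
    n, k = a.shape
    k2, m = b.shape
    assert k == k2
    c = np.zeros((n, m))
    for i in range(0, n, block):
        for j in range(0, m, block):
            for s in range(0, k, block):
                # each small block of A and B is loaded once and reused for a whole block of C
                c[i:i+block, j:j+block] += np.dot(a[i:i+block, s:s+block], b[s:s+block, j:j+block])
    return c

a = np.random.randn(256, 300)
b = np.random.randn(300, 200)
print(np.allclose(blocked_matmul(a, b), np.dot(a, b)))  # True
```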
``` from IPython.core.display import HTML def css_styling(): styles = open("./styles/custom.css", "r").read() return HTML(styles) css_styling() ``` <link href='http://fonts.googleapis.com/css?family=Fenix' rel='stylesheet' type='text/css'> <link href='http://fonts.googleapis.com/css?family=Alegreya+Sans:100,300,400,500,700,800,900,100italic,300italic,400italic,500italic,700italic,800italic,900italic' rel='stylesheet' type='text/css'> <link href='http://fonts.googleapis.com/css?family=Source+Code+Pro:300,400' rel='stylesheet' type='text/css'> <style> @font-face { font-family: "Computer Modern"; src: url('http://mirrors.ctan.org/fonts/cm-unicode/fonts/otf/cmunss.otf'); } div.cell{ width:80%; margin-left:auto !important; margin-right:auto; } h1 { font-family: 'Alegreya Sans', sans-serif; } h2 { font-family: 'Fenix', serif; } h3{ font-family: 'Fenix', serif; margin-top:12px; margin-bottom: 3px; } h4{ font-family: 'Fenix', serif; } h5 { font-family: 'Alegreya Sans', sans-serif; } div.text_cell_render{ font-family: 'Alegreya Sans',Computer Modern, "Helvetica Neue", Arial, Helvetica, Geneva, sans-serif; line-height: 1.2; font-size: 160%; width:70%; margin-left:auto; margin-right:auto; } .CodeMirror{ font-family: "Source Code Pro"; font-size: 90%; } /* .prompt{ display: None; }*/ .text_cell_render h1 { font-weight: 200; font-size: 50pt; line-height: 100%; color:#CD2305; margin-bottom: 0.5em; margin-top: 0.5em; display: block; } .text_cell_render h5 { font-weight: 300; font-size: 16pt; color: #CD2305; font-style: italic; margin-bottom: .5em; margin-top: 0.5em; display: block; } .warning{ color: rgb( 240, 20, 20 ) } </style> ``` ```
58d9548db0701b92cd5d22666938b95eb4daee75
144,869
ipynb
Jupyter Notebook
lecture-1.ipynb
oseledets/NLA
d16d47bc8e20df478d98b724a591d33d734ec74b
[ "MIT" ]
14
2015-01-20T13:24:38.000Z
2022-02-03T05:54:09.000Z
lecture-1.ipynb
oseledets/NLA
d16d47bc8e20df478d98b724a591d33d734ec74b
[ "MIT" ]
null
null
null
lecture-1.ipynb
oseledets/NLA
d16d47bc8e20df478d98b724a591d33d734ec74b
[ "MIT" ]
4
2015-09-10T09:14:10.000Z
2019-10-09T04:36:07.000Z
163.878959
42,865
0.871643
true
3,463
Qwen/Qwen-72B
1. YES 2. YES
0.867036
0.888759
0.770586
__label__eng_Latn
0.973332
0.628661
###### Content under Creative Commons Attribution license CC-BY 4.0, code under MIT license © 2015 L.A. Barba, C.D. Cooper, G.F. Forsyth. Based on [CFD Python](https://github.com/barbagroup/CFDPython), © 2013 L.A. Barba, also under CC-BY license. # Relax and hold steady This is **Module 5** of the open course [**"Practical Numerical Methods with Python"**](https://openedx.seas.gwu.edu/courses/course-v1:MAE+MAE6286+2017/about), titled *"Relax and hold steady: elliptic problems"*. If you've come this far in the [#numericalmooc](https://twitter.com/hashtag/numericalmooc) ride, it's time to stop worrying about **time** and relax. So far, you've learned to solve problems dominated by convection—where solutions have a directional bias and can form shocks—in [Module 3](https://nbviewer.jupyter.org/github/numerical-mooc/numerical-mooc/tree/master/lessons/03_wave/): *"Riding the Wave."* In [Module 4](https://nbviewer.jupyter.org/github/numerical-mooc/numerical-mooc/tree/master/lessons/04_spreadout/) (*"Spreading Out"*), we explored diffusion-dominated problems—where solutions spread in all directions. But what about situations where solutions are steady? Many problems in physics have no time dependence, yet are rich with physical meaning: the gravitational field produced by a massive object, the electrostatic potential of a charge distribution, the displacement of a stretched membrane and the steady flow of fluid through a porous medium ... all these can be modeled by **Poisson's equation**: $$ \begin{equation} \nabla^2 u = f \end{equation} $$ where the unknown $u$ and the known $f$ are functions of space, in a domain $\Omega$. To find the solution, we require boundary conditions. These could be Dirichlet boundary conditions, specifying the value of the solution on the boundary, $$ \begin{equation} u = b_1 \text{ on } \partial\Omega, \end{equation} $$ or Neumann boundary conditions, specifying the normal derivative of the solution on the boundary, $$ \begin{equation} \frac{\partial u}{\partial n} = b_2 \text{ on } \partial\Omega. \end{equation} $$ A boundary-value problem consists of finding $u$, given the above information. Numerically, we can do this using *relaxation methods*, which start with an initial guess for $u$ and then iterate towards the solution. Let's find out how! ## Laplace's equation The particular case of $f=0$ (homogeneous case) results in Laplace's equation: $$ \begin{equation} \nabla^2 u = 0 \end{equation} $$ For example, the equation for steady, two-dimensional heat conduction is: $$ \begin{equation} \frac{\partial ^2 T}{\partial x^2} + \frac{\partial ^2 T}{\partial y^2} = 0 \end{equation} $$ This is similar to the model we studied in [lesson 3](https://nbviewer.jupyter.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/04_spreadout/04_03_Heat_Equation_2D_Explicit.ipynb) of **Module 4**, but without the time derivative: i.e., for a temperature $T$ that has reached steady state. The Laplace equation models the equilibrium state of a system under the supplied boundary conditions. The study of solutions to Laplace's equation is called *potential theory*, and the solutions themselves are often potential fields. 
Let's use $p$ from now on to represent our generic dependent variable, and write Laplace's equation again (in two dimensions):

$$
\begin{equation}
\frac{\partial ^2 p}{\partial x^2} + \frac{\partial ^2 p}{\partial y^2} = 0
\end{equation}
$$

Like in the diffusion equation of the previous course module, we discretize the second-order derivatives with *central differences*. You should be able to write down a second-order central-difference formula by heart now! On a two-dimensional Cartesian grid, it gives:

$$
\begin{equation}
\frac{p_{i+1, j} - 2p_{i,j} + p_{i-1,j} }{\Delta x^2} + \frac{p_{i,j+1} - 2p_{i,j} + p_{i, j-1} }{\Delta y^2} = 0
\end{equation}
$$

When $\Delta x = \Delta y$, we end up with the following equation:

$$
\begin{equation}
p_{i+1, j} + p_{i-1,j} + p_{i,j+1} + p_{i, j-1} - 4 p_{i,j} = 0
\end{equation}
$$

This tells us that the Laplacian differential operator at grid point $(i,j)$ can be evaluated discretely using the value of $p$ at that point (with a factor $-4$) and the four neighboring points to the left and right, above and below grid point $(i,j)$.

The stencil of the discrete Laplacian operator is shown in Figure 1. It is typically called the *five-point stencil*, for obvious reasons.

#### Figure 1: Laplace five-point stencil.

The discrete equation above is valid for every interior point in the domain. If we write the equations for *all* interior points, we have a linear system of algebraic equations. We *could* solve the linear system directly (e.g., with Gaussian elimination), but we can be more clever than that!

Notice that the coefficient matrix of such a linear system has mostly zeroes. For a uniform spatial grid, the matrix is *block tridiagonal*: the diagonal blocks are tridiagonal, with $-4$ on the main diagonal and $1$ on the two off-center diagonals, and the off-diagonal blocks contribute two more diagonals of $1$. All of the other elements are zero. Iterative methods are particularly suited for a system with this structure, and save us from storing all those zeroes.

We will start with an initial guess for the solution, $p_{i,j}^{0}$, and use the discrete Laplacian to get an update, $p_{i,j}^{1}$, then continue on computing $p_{i,j}^{k}$ until we're happy. Note that $k$ is _not_ a time index here, but an index corresponding to the number of iterations we perform in the *relaxation scheme*.

At each iteration, we compute updated values $p_{i,j}^{k+1}$ in a (hopefully) clever way so that they converge to a set of values satisfying Laplace's equation. The system will reach equilibrium only as the number of iterations tends to $\infty$, but we can approximate the equilibrium state by iterating until the change between one iteration and the next is *very* small.

The most intuitive method of iterative solution is known as the [**Jacobi method**](https://en.wikipedia.org/wiki/Jacobi_method), in which the value at each grid point is replaced by the average of its four neighbors:

$$
\begin{equation}
p^{k+1}_{i,j} = \frac{1}{4} \left(p^{k}_{i,j-1} + p^k_{i,j+1} + p^{k}_{i-1,j} + p^k_{i+1,j} \right)
\end{equation}
$$

This method does indeed converge to the solution of Laplace's equation. Thank you Professor Jacobi!

##### Challenge task

Grab a piece of paper and write out the coefficient matrix for a discretization with 7 grid points in the $x$ direction (5 interior points) and 5 points in the $y$ direction (3 interior). The system should have 15 unknowns, and the coefficient matrix should have three diagonal blocks. Assume prescribed Dirichlet boundary conditions on all sides (not necessarily zero). (A small code sketch to check your answer follows below.)
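If you want to check your pencil-and-paper answer to the challenge task, here is a small sketch (not part of the original lesson) that assembles the same coefficient matrix with Kronecker products. It assumes $\Delta x = \Delta y$ and that the 15 interior unknowns are ordered row by row ($x$ varying fastest), matching the `p[j, i]` storage convention used later in this lesson.

```python
import numpy
import scipy.sparse as sparse

nx_int, ny_int = 5, 3   # interior points of the 7x5-point grid in the challenge task

# 1D second-difference operators (without the 1/dx^2 factor).
Dx = sparse.diags([1, -2, 1], [-1, 0, 1], shape=(nx_int, nx_int))
Dy = sparse.diags([1, -2, 1], [-1, 0, 1], shape=(ny_int, ny_int))

# 2D Laplacian on the interior: diagonal blocks from Dx, coupling in y from Dy.
A = (sparse.kron(sparse.identity(ny_int), Dx) +
     sparse.kron(Dy, sparse.identity(nx_int)))

print(A.toarray())   # 15x15 matrix: -4 on the diagonal, 1 on the neighbor couplings
```

The boundary values only affect the right-hand side of the linear system, which is why they do not appear in this matrix.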
### Boundary conditions and relaxation

Suppose we want to model steady-state heat transfer on (say) a computer chip with one side insulated (zero Neumann BC), two sides held at a fixed temperature (Dirichlet condition) and one side touching a component that has a sinusoidal distribution of temperature. We would need to solve Laplace's equation with boundary conditions like

$$
\begin{equation}
  \begin{gathered}
p=0 \text{ at } x=0\\
\frac{\partial p}{\partial x} = 0 \text{ at } x = L_x\\
p = 0 \text{ at }y = 0 \\
p = \sin \left(  \frac{\frac{3}{2}\pi x}{L_x} \right) \text{ at } y = L_y
  \end{gathered}
\end{equation}
$$

We'll take $L_x=1$ and $L_y=1$ for the sizes of the domain in the $x$ and $y$ directions.

One of the defining features of elliptic PDEs is that they are "driven" by the boundary conditions. In the iterative solution of Laplace's equation, boundary conditions are set and **the solution relaxes** from an initial guess to join the boundaries together smoothly, given those conditions. Our initial guess will be $p=0$ everywhere. Now, let's relax!

First, we import our usual smattering of libraries (plus a few new ones!)

```python
import numpy
from matplotlib import pyplot
%matplotlib inline
```

```python
# Set the font family and size to use for Matplotlib figures.
pyplot.rcParams['font.family'] = 'serif'
pyplot.rcParams['font.size'] = 16
```

To visualize 2D data, we can use [`pyplot.imshow()`](http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.imshow), like we've done before, but a 3D plot can sometimes show a more intuitive view of the solution. Or it's just prettier!

Be sure to enjoy the many examples of 3D plots in the `mplot3d` section of the [Matplotlib Gallery](http://matplotlib.org/gallery.html#mplot3d).

We'll import the `mplot3d` module to create 3D plots and also grab the `cm` package, which provides different colormaps for visualizing plots.

```python
from mpl_toolkits import mplot3d
from matplotlib import cm
```

Let's define a function for setting up our plotting environment, to avoid repeating this set-up over and over again. It will save us some typing.

```python
def plot_3d(x, y, p, label='$z$', elev=30.0, azim=45.0):
    """
    Creates a Matplotlib figure with a 3D surface plot
    of the scalar field p.

    Parameters
    ----------
    x : numpy.ndarray
        Gridline locations in the x direction as a 1D array of floats.
    y : numpy.ndarray
        Gridline locations in the y direction as a 1D array of floats.
    p : numpy.ndarray
        Scalar field to plot as a 2D array of floats.
    label : string, optional
        Axis label to use in the third direction;
        default: 'z'.
    elev : float, optional
        Elevation angle in the z plane;
        default: 30.0.
    azim : float, optional
        Azimuth angle in the x,y plane;
        default: 45.0.
    """
    fig = pyplot.figure(figsize=(8.0, 6.0))
    ax = mplot3d.Axes3D(fig)
    ax.set_xlabel('$x$')
    ax.set_ylabel('$y$')
    ax.set_zlabel(label)
    X, Y = numpy.meshgrid(x, y)
    ax.plot_surface(X, Y, p, cmap=cm.viridis)
    ax.set_xlim(x[0], x[-1])
    ax.set_ylim(y[0], y[-1])
    ax.view_init(elev=elev, azim=azim)
```

##### Note

This plotting function uses *Viridis*, a new (and _awesome_) colormap available in Matplotlib versions 1.5 and greater. If you see an error when you try to plot using `cm.viridis`, just update Matplotlib using `conda` or `pip`.
### Analytical solution

The Laplace equation with the boundary conditions listed above has an analytical solution, given by

$$
\begin{equation}
p(x,y) = \frac{\sinh \left( \frac{\frac{3}{2} \pi y}{L_y}\right)}{\sinh \left( \frac{\frac{3}{2} \pi L_y}{L_x}\right)} \sin \left( \frac{\frac{3}{2} \pi x}{L_x} \right)
\end{equation}
$$

where $L_x$ and $L_y$ are the length of the domain in the $x$ and $y$ directions, respectively.

We previously used `numpy.meshgrid` to plot our 2D solutions to the heat equation in Module 4. Here, we'll use it again as a plotting aid. Always useful, `linspace` creates 1D arrays of equally spaced numbers: it helps for defining $x$ and $y$ axes in line plots, but now we want the analytical solution evaluated at every point in our domain. To do this, we'll plug the 2D arrays generated by `numpy.meshgrid` into the analytical solution.

```python
def laplace_solution(x, y, Lx, Ly):
    """
    Computes and returns the analytical solution of the Laplace equation
    on a given two-dimensional Cartesian grid.

    Parameters
    ----------
    x : numpy.ndarray
        The gridline locations in the x direction
        as a 1D array of floats.
    y : numpy.ndarray
        The gridline locations in the y direction
        as a 1D array of floats.
    Lx : float
        Length of the domain in the x direction.
    Ly : float
        Length of the domain in the y direction.

    Returns
    -------
    p : numpy.ndarray
        The analytical solution as a 2D array of floats.
    """
    X, Y = numpy.meshgrid(x, y)
    p = (numpy.sinh(1.5 * numpy.pi * Y / Ly) /
         numpy.sinh(1.5 * numpy.pi * Ly / Lx) *
         numpy.sin(1.5 * numpy.pi * X / Lx))
    return p
```

Ok, let's try out the analytical solution and use it to test the `plot_3d` function we wrote above.

```python
# Set parameters.
Lx = 1.0  # domain length in the x direction
Ly = 1.0  # domain length in the y direction
nx = 41  # number of points in the x direction
ny = 41  # number of points in the y direction

# Create the gridline locations.
x = numpy.linspace(0.0, Lx, num=nx)
y = numpy.linspace(0.0, Ly, num=ny)

# Compute the analytical solution.
p_exact = laplace_solution(x, y, Lx, Ly)

# Plot the analytical solution.
plot_3d(x, y, p_exact)
```

It worked! This is what the solution *should* look like when we're 'done' relaxing. (And isn't viridis a cool colormap?)

### How long do we iterate?

We noted above that there is no time dependence in the Laplace equation. So it doesn't make a lot of sense to use a `for` loop with `nt` iterations, like we've done before.

Instead, we can use a `while` loop that continues to iteratively apply the relaxation scheme until the difference between two successive iterations is small enough.

But how small is small enough? That's a good question. We'll try to work that out as we go along.

To compare two successive potential fields ($\mathbf{p}^k$ and $\mathbf{p}^{k+1}$), a good option is to use the [L2 norm](http://en.wikipedia.org/wiki/Norm_%28mathematics%29#Euclidean_norm) of the difference. It's defined as

$$
\begin{equation}
\parallel \mathbf{p}^{k+1} - \mathbf{p}^k \parallel_{L_2} = \sqrt{\sum_{i, j} \left| p_{i, j}^{k+1} - p_{i, j}^k \right|^2}
\end{equation}
$$

But there's one flaw with this formula. We are summing the difference between successive iterations at each point on the grid. So what happens when the grid grows? (For example, if we're refining the grid, for whatever reason.) There will be more grid points to compare and so more contributions to the sum. The norm will be a larger number just because of the grid size! That doesn't seem right.
We'll fix it by normalizing the norm, dividing the above formula by the norm of the potential field at iteration $k$.

For two successive iterations, the relative L2 norm is then calculated as

$$
\begin{equation}
\frac{\parallel \mathbf{p}^{k+1} - \mathbf{p}^k \parallel_{L_2}}{\parallel \mathbf{p}^k \parallel_{L_2}} = \frac{\sqrt{\sum_{i, j} \left| p_{i, j}^{k+1} - p_{i, j}^k \right|^2}}{\sqrt{\sum_{i, j} \left| p_{i, j}^k \right|^2}}
\end{equation}
$$

For this purpose, we define the `l2_norm` function:

```python
def l2_norm(p, p_ref):
    """
    Computes and returns the relative L2-norm of the difference
    between a solution p and a reference solution p_ref.

    Parameters
    ----------
    p : numpy.ndarray
        The solution as an array of floats.
    p_ref : numpy.ndarray
        The reference solution as an array of floats.

    Returns
    -------
    diff : float
        The relative L2-norm of the difference.
    """
    l2_diff = (numpy.sqrt(numpy.sum((p - p_ref)**2)) /
               numpy.sqrt(numpy.sum(p_ref**2)))
    return l2_diff
```

Now, let's define a function that will apply Jacobi's method for Laplace's equation. Three of the boundaries are Dirichlet boundaries and so we can simply leave them alone. Only the Neumann boundary needs to be explicitly calculated at each iteration.

```python
def laplace_2d_jacobi(p0, maxiter=20000, rtol=1e-6):
    """
    Solves the 2D Laplace equation using Jacobi relaxation method.

    The function assumes Dirichlet conditions at all boundaries
    except the right boundary, where it applies a zero-gradient
    Neumann condition; the Dirichlet values stored in the initial
    array are simply left untouched.
    The exit criterion of the solver is based on the relative L2-norm
    of the solution difference between two consecutive iterations.

    Parameters
    ----------
    p0 : numpy.ndarray
        The initial solution as a 2D array of floats.
    maxiter : integer, optional
        Maximum number of iterations to perform;
        default: 20000.
    rtol : float, optional
        Relative tolerance for convergence;
        default: 1e-6.

    Returns
    -------
    p : numpy.ndarray
        The solution after relaxation as a 2D array of floats.
    ite : integer
        The number of iterations performed.
    diff : float
        The final relative L2-norm of the difference.
    """
    p = p0.copy()
    diff = rtol + 1.0  # initial difference
    ite = 0  # iteration index
    while diff > rtol and ite < maxiter:
        pn = p.copy()
        # Update the solution at interior points.
        p[1:-1, 1:-1] = 0.25 * (p[1:-1, :-2] + p[1:-1, 2:] +
                                p[:-2, 1:-1] + p[2:, 1:-1])
        # Apply Neumann condition (zero-gradient)
        # at the right boundary.
        p[1:-1, -1] = p[1:-1, -2]
        # Compute the relative L2-norm of the difference
        # between two successive iterates.
        diff = l2_norm(p, pn)
        ite += 1
    return p, ite, diff
```

##### Rows and columns, and index order

Recall that in the [2D explicit heat equation](http://nbviewer.jupyter.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/04_spreadout/04_03_Heat_Equation_2D_Explicit.ipynb) we stored data with the $y$ coordinates corresponding to the rows of the array and $x$ coordinates on the columns (this is just a code design decision!). We did that so that a plot of the 2D-array values would have the natural ordering, corresponding to the physical domain ($y$ coordinate in the vertical).

We'll follow the same convention here (even though we'll be plotting in 3D, so there's no real reason), just to be consistent. Thus, $p_{i,j}$ will be stored in array format as `p[j,i]`. Don't be confused by this.

### Let's relax!
The initial values of the potential field are zero everywhere (initial guess), except at the top boundary:

$$
p = \sin \left(  \frac{\frac{3}{2}\pi x}{L_x} \right) \text{ at } y=L_y
$$

To initialize the domain, `numpy.zeros` will handle everything except that one Dirichlet condition. Let's do it!

```python
# Set the initial conditions.
p0 = numpy.zeros((ny, nx))
p0[-1, :] = numpy.sin(1.5 * numpy.pi * x / Lx)
```

Now let's visualize the initial conditions using the `plot_3d` function, just to check we've got it right.

```python
# Plot the initial conditions.
plot_3d(x, y, p0)
```

The `p` array is equal to zero everywhere, except along the boundary $y = 1$. Hopefully you can see how the relaxed solution and this initial condition are related.

Now, run the iterative solver with a target L2-norm difference between successive iterations of $10^{-8}$.

```python
# Compute the solution using Jacobi relaxation method.
p, ites, diff = laplace_2d_jacobi(p0, rtol=1e-8)
print('Jacobi relaxation: {} iterations '.format(ites) +
      'to reach a relative difference of {}'.format(diff))
```

    Jacobi relaxation: 4473 iterations to reach a relative difference of 9.989253685041417e-09

Let's make a gorgeous plot of the final field using the newly minted `plot_3d` function.

```python
# Plot the numerical solution.
plot_3d(x, y, p)
```

Awesome! That looks pretty good. But we'll need more than a simple visual check. The "eyeball metric" is very forgiving!

## Convergence analysis

### Convergence, Take 1

We want to make sure that our Jacobi function is working properly. Since we have an analytical solution, what better way than to do a grid-convergence analysis? We will run our solver for several grid sizes and look at how fast the L2 norm of the error, measured against the analytical solution, decreases as we refine the grid.

Now run Jacobi's method on the Laplace equation using four different grids, with the same exit criterion of $10^{-8}$ each time. Then, we look at the error versus the grid spacing in a log-log plot. What do we get?

```python
# List of the grid sizes to investigate.
nx_values = [11, 21, 41, 81]

# Create an empty list to record the error on each grid.
errors = []

# Compute the solution and error for each grid size.
for nx in nx_values:
    ny = nx  # same number of points in all directions.
    # Create the gridline locations.
    x = numpy.linspace(0.0, Lx, num=nx)
    y = numpy.linspace(0.0, Ly, num=ny)
    # Set the initial conditions.
    p0 = numpy.zeros((ny, nx))
    p0[-1, :] = numpy.sin(1.5 * numpy.pi * x / Lx)
    # Relax the solution.
    # We do not return the number of iterations or
    # the final relative L2-norm of the difference.
    p, _, _ = laplace_2d_jacobi(p0, rtol=1e-8)
    # Compute the analytical solution.
    p_exact = laplace_solution(x, y, Lx, Ly)
    # Compute and record the relative L2-norm of the error.
    errors.append(l2_norm(p, p_exact))
```

```python
# Plot the error versus the grid-spacing size.
pyplot.figure(figsize=(6.0, 6.0))
pyplot.xlabel(r'$\Delta x$')
pyplot.ylabel('Relative $L_2$-norm\nof the error')
pyplot.grid()
dx_values = Lx / (numpy.array(nx_values) - 1)
pyplot.loglog(dx_values, errors,
              color='black', linestyle='--', linewidth=2, marker='o')
pyplot.axis('equal');
```

Hmm. That doesn't look like 2nd-order convergence, but we're using second-order finite differences. *What's going on?*

The culprit is the boundary conditions. Dirichlet conditions are order-agnostic (a set value is a set value), but the scheme we used for the Neumann boundary condition is 1st-order.

Remember when we said that the boundaries drive the problem?
One boundary that's 1st-order completely tanked our spatial convergence. Let's fix it!

### 2nd-order Neumann BCs

Up to this point, we have used the first-order approximation of a derivative to satisfy Neumann B.C.'s. For a boundary located at $x=0$ this reads,

$$
\begin{equation}
\frac{p^{k+1}_{1,j} - p^{k+1}_{0,j}}{\Delta x} = 0
\end{equation}
$$

which, solving for $p^{k+1}_{0,j}$, gives us

$$
\begin{equation}
p^{k+1}_{0,j} = p^{k+1}_{1,j}
\end{equation}
$$

Using that Neumann condition will limit us to 1st-order convergence. Instead, we can start with a 2nd-order approximation (the central-difference approximation):

$$
\begin{equation}
\frac{p^{k+1}_{1,j} - p^{k+1}_{-1,j}}{2 \Delta x} = 0
\end{equation}
$$

That seems problematic, since there is no grid point $p^{k}_{-1,j}$. But no matter … let's carry on. According to the 2nd-order approximation,

$$
\begin{equation}
p^{k+1}_{-1,j} = p^{k+1}_{1,j}
\end{equation}
$$

Recall the finite-difference Jacobi equation with $i=0$:

$$
\begin{equation}
p^{k+1}_{0,j} = \frac{1}{4} \left(p^{k}_{0,j-1} + p^k_{0,j+1} + p^{k}_{-1,j} + p^k_{1,j} \right)
\end{equation}
$$

Notice that the equation relies on the troublesome (nonexistent) point $p^k_{-1,j}$, but according to the equality just above, we have a value we can substitute, namely $p^k_{1,j}$. Ah! We've completed the 2nd-order Neumann condition:

$$
\begin{equation}
p^{k+1}_{0,j} = \frac{1}{4} \left(p^{k}_{0,j-1} + p^k_{0,j+1} + 2p^{k}_{1,j} \right)
\end{equation}
$$

That's a bit more complicated than the first-order version, but it's relatively straightforward to code.

##### Note

Do not confuse $p^{k+1}_{-1,j}$ with `p[-1]`: `p[-1]` is a piece of Python code used to refer to the last element of a list or array named `p`. $p^{k+1}_{-1,j}$ is a 'ghost' point that describes a position that lies outside the actual domain.

### Convergence, Take 2

We can copy the previous Jacobi function and replace only the line implementing the Neumann boundary condition.

##### Careful!

Remember that our problem has the Neumann boundary located at $x = L_x$ and not $x = 0$ as we assumed in the derivation above.

```python
def laplace_2d_jacobi_neumann(p0, maxiter=20000, rtol=1e-6):
    """
    Solves the 2D Laplace equation using Jacobi relaxation method.

    The function assumes Dirichlet conditions at all boundaries
    except the right boundary, where it applies a zero-gradient
    second-order Neumann condition; the Dirichlet values stored
    in the initial array are simply left untouched.
    The exit criterion of the solver is based on the relative L2-norm
    of the solution difference between two consecutive iterations.

    Parameters
    ----------
    p0 : numpy.ndarray
        The initial solution as a 2D array of floats.
    maxiter : integer, optional
        Maximum number of iterations to perform;
        default: 20000.
    rtol : float, optional
        Relative tolerance for convergence;
        default: 1e-6.

    Returns
    -------
    p : numpy.ndarray
        The solution after relaxation as a 2D array of floats.
    ite : integer
        The number of iterations performed.
    diff : float
        The final relative L2-norm of the difference.
    """
    p = p0.copy()
    diff = rtol + 1.0  # initial difference
    ite = 0  # iteration index
    while diff > rtol and ite < maxiter:
        pn = p.copy()
        # Update the solution at interior points.
        p[1:-1, 1:-1] = 0.25 * (p[1:-1, :-2] + p[1:-1, 2:] +
                                p[:-2, 1:-1] + p[2:, 1:-1])
        # Apply 2nd-order Neumann condition (zero-gradient)
        # at the right boundary.
        p[1:-1, -1] = 0.25 * (2.0 * pn[1:-1, -2] +
                              pn[2:, -1] + pn[:-2, -1])
        # Compute the relative L2-norm of the difference
        # between two successive iterates.
        diff = l2_norm(p, pn)
        ite += 1
    return p, ite, diff
```

Again, this is essentially the same analysis code as before, but now we're running the Jacobi solver with a 2nd-order Neumann boundary condition. Let's do a grid-refinement analysis, and plot the error versus the grid spacing.

```python
# List of the grid sizes to investigate.
nx_values = [11, 21, 41, 81]

# Create an empty list to record the error on each grid.
errors = []

# Compute the solution and error for each grid size.
for nx in nx_values:
    ny = nx  # same number of points in all directions.
    # Create the gridline locations.
    x = numpy.linspace(0.0, Lx, num=nx)
    y = numpy.linspace(0.0, Ly, num=ny)
    # Set the initial conditions.
    p0 = numpy.zeros((ny, nx))
    p0[-1, :] = numpy.sin(1.5 * numpy.pi * x / Lx)
    # Relax the solution.
    # We do not return the number of iterations or
    # the final relative L2-norm of the difference.
    p, _, _ = laplace_2d_jacobi_neumann(p0, rtol=1e-8)
    # Compute the analytical solution.
    p_exact = laplace_solution(x, y, Lx, Ly)
    # Compute and record the relative L2-norm of the error.
    errors.append(l2_norm(p, p_exact))
```

```python
# Plot the error versus the grid-spacing size.
pyplot.figure(figsize=(6.0, 6.0))
pyplot.xlabel(r'$\Delta x$')
pyplot.ylabel('Relative $L_2$-norm\nof the error')
pyplot.grid()
dx_values = Lx / (numpy.array(nx_values) - 1)
pyplot.loglog(dx_values, errors,
              color='black', linestyle='--', linewidth=2, marker='o')
pyplot.axis('equal');
```

Nice! That's much better. It might not be *exactly* 2nd-order, but it's awfully close. (What is ["close enough"](http://ianhawke.github.io/blog/close-enough.html) in regards to observed convergence rates is a thorny question. A short snippet to estimate the observed order is given at the end of this lesson.)

Now, notice from this plot that the error on the finest grid is around $0.0002$. Given this, perhaps we didn't need to continue iterating until a target difference between two solutions of $10^{-8}$. The spatial accuracy of the finite difference approximation is much worse than that! But we didn't know it ahead of time, did we? That's the "catch 22" of iterative solution of systems arising from discretization of PDEs.

## Final word

The Jacobi method is the simplest relaxation scheme to explain and to apply. It is also the *worst* iterative solver! In practice, it is seldom used on its own as a solver, although it is useful as a smoother with multi-grid methods. As we will see in the [third lesson](https://nbviewer.jupyter.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/05_relax/05_03_Iterate.This.ipynb) of this module, there are much better iterative methods! But first, let's play with [Poisson's equation](https://nbviewer.jupyter.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/05_relax/05_02_2D.Poisson.Equation.ipynb).
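To put a number on "awfully close", here is a small sketch (my addition, not in the original lesson). It reuses the `errors` and `dx_values` computed in the cells above: the slope of a straight-line fit of $\log(\text{error})$ versus $\log(\Delta x)$ approximates the observed order of convergence.

```python
# Estimate the observed order of convergence as the slope of
# log(error) versus log(dx); for a 2nd-order scheme it should be close to 2.
order, _ = numpy.polyfit(numpy.log(dx_values), numpy.log(errors), 1)
print('Observed order of convergence: {:.2f}'.format(order))
```

An alternative is to compare pairs of successive grids, using order $\approx \log(e_\text{coarse}/e_\text{fine}) / \log(\Delta x_\text{coarse}/\Delta x_\text{fine})$.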
---

###### The cell below loads the style of the notebook

```python
from IPython.core.display import HTML
css_file = '../../styles/numericalmoocstyle.css'
HTML(open(css_file, 'r').read())
```
768c595210b1d6bba9f9e4c335b265f1d4cbd551
612,126
ipynb
Jupyter Notebook
lessons/05_relax/05_01_2D.Laplace.Equation.ipynb
mcarpe/numerical-mooc
62b3c14c2c56d85d65c6075f2d7eb44266b49c17
[ "CC-BY-3.0" ]
748
2015-01-04T22:50:56.000Z
2022-03-30T20:42:16.000Z
lessons/05_relax/05_01_2D.Laplace.Equation.ipynb
mcarpe/numerical-mooc
62b3c14c2c56d85d65c6075f2d7eb44266b49c17
[ "CC-BY-3.0" ]
62
2015-02-02T01:06:07.000Z
2020-11-09T12:27:41.000Z
lessons/05_relax/05_01_2D.Laplace.Equation.ipynb
mcarpe/numerical-mooc
62b3c14c2c56d85d65c6075f2d7eb44266b49c17
[ "CC-BY-3.0" ]
1,270
2015-01-02T19:19:52.000Z
2022-02-27T01:02:44.000Z
502.566502
182,632
0.935989
true
8,800
Qwen/Qwen-72B
1. YES 2. YES
0.855851
0.888759
0.760645
__label__eng_Latn
0.984726
0.605566
# MATH 497: Final Project

Remark: Please upload your solutions for this project to Canvas with a file named "Final_Project_yourname.ipynb".

=================================================================================================================

## Problem 1 [20%]:

Consider the following linear system

\begin{equation}\label{matrix}
A\ast u =f,
\end{equation}

or equivalently $u=\arg\min \frac{1}{2} (A* v,v)_F-(f,v)_F$, where $(f,v)_F =\sum\limits_{i,j=1}^{n}f_{i,j}v_{i,j}$ is the Frobenius inner product. Here $\ast$ represents a convolution with one channel, stride one and zero padding one. The convolution kernel $A$ is given by

$$
A=\begin{bmatrix}
0 & -1 & 0 \\
-1 & 4 & -1 \\
0 & -1 & 0
\end{bmatrix},
$$

the solution $ u \in \mathbb{R}^{n\times n} $, and the RHS $ f\in \mathbb{R}^{n\times n}$ is given by $f_{i,j}=\dfrac{1}{(n+1)^2}.$

### Tasks:

Set $J=4$, $n=2^J-1$ and the number of iterations $M=100$. Use the gradient descent method and the multigrid method to solve the above problem with a random initial guess $u^0$. Let $u_{GD}$ and $u_{MG}$ denote the solutions obtained by gradient descent and multigrid respectively.

* [5%] Plot the surface of the solutions $u_{GD}$ and $u_{MG}$.
* [10%] Define the error $e_{GD}^m = \|A * u^{m}_{GD}- f\|_F=\sqrt{\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{n} \left|(A * u^{m}_{GD}- f)_{i,j}\right|^2}$ for $m=0,1,2,3,...,M$. Similarly, we define the multigrid error $e_{MG}^m$. Plot the errors $e_{GD}^m$ and $e_{MG}^m$ as a function of the iteration $m$ (your x-axis is $m$ and your y-axis is the error). Put both plots together in the same figure.
* [5%] Find the minimal $m_1$ for which $e^{m_1}_{GD} <10^{-5}$ and the minimal $m_2$ for which $e^{m_2}_{MG} <10^{-5}$, and report the computational time for each method. Note that $m_1$ or $m_2$ may be greater than $M=100$; in this case you will have to run more iterations.

### Remark: Below are examples of running the gradient descent and multigrid iterations M times

* #### For the gradient descent method with $\eta=\frac{1}{8}$, you need to write a code: (a small NumPy sketch of this iteration is given at the end of this notebook)

Given initial guess $u^0$

$$
\begin{align}
&\text{for } m = 1,2,...,M\\
&~~~~\text{for } i,j = 1: n\\
&~~~~~~~~u_{i,j}^{m} = u_{i,j}^{m-1}+\eta\left(f_{i,j}-(A\ast u^{m-1})_{i,j}\right)\\
&~~~~\text{endfor}\\
&\text{endfor}
\end{align}
$$

* #### For the multigrid method, we have provided the framework code in F02_MultigridandMgNet.ipynb:

Given initial guess $u^0$

$$
\begin{align}
&\text{for } m = 1,2,...,M\\
&~~~~u^{m} = MG1(u^{m-1},f, J, \nu)\\
&\text{endfor}
\end{align}
$$

=================================================================================================================

## Problem 2 [50%]:

Use SGD with momentum and weight decay to train MgNet on the Cifar10 dataset. Use 120 epochs, set the initial learning rate to 0.1, momentum to 0.9, weight decay to 0.0005, and divide the learning rate by 10 every 30 epochs. (The code to do this has been provided.) Let $b_i$ denote the test accuracy of the model after $i$ epochs, and let $b^* = \max_i(b_i)$ be the best test accuracy attained during training.

### Tasks:

* [30%] Train MgNet with the following three sets of hyper-parameters (As a reminder, the hyper-parameters of MgNet are $\nu$, the number of iterations of each layer, $c_u$, the number of channels for $u$, and $c_f$, the number of channels for $f$.):

 (1) $\nu=$[1,1,1,1], $c_u=c_f=64$.

 (2) $\nu=$[2,2,2,2], $c_u=c_f=64$.
(3) $\nu=$[2,2,2,2], $c_u=c_f=64$, try to improve the test accuracy by implementing MgNet with $S^{l,i}$, which means different iterations in the same layer do not share the same $S^{l}$. * For each numerical experiment above, print the results with the following format: "Epoch: i, Learning rate: lr$_i$, Training accuracy: $a_i$, Test accuracy: $b_i$" where $i=1,2,3,...$ means the $i$-th epoch, $a_i$ and $b_i$ are the training accuracy and test accuracy computed at the end of $i$-th epoch, and lr$_i$ is the learning rate of $i$-th epoch. * [10%] For each numerical experiment above, plot the test accuracy against the epoch count, i.e. the x-axis is the number of epochs $i$ and y-axis is the test accuracy $b_i$. An example plot is shown in the next cell. * [10%] Calculate the number of parameters that each of the above models has. Discuss why the number of parameters is different (or the same) for each of the models. ```python from IPython.display import Image Image(filename='plot_sample_code.png') ``` ```python # You can calculate the number of parameters of my_model by: model_size = sum(param.numel() for param in my_model.parameters()) ``` ================================================================================================================= ## Problem 3 [25 %]: Try to improve the MgNet Accuracy by increasing the number of channels. (We use the same notation as in the previous problem.) Double the number of channels to $c_u=c_f=128$ and try different $\nu$ to maximize the test accuracy. ### Tasks: * [20%] Report $b^{*}$, $\nu$ and the number of parameters of your model for each of the experiments you run. * [5%] For the best experiment, plot the test accuracy against the epoch count, i.e. the x-axis is the number of epochs $i$ and y-axis is the test accuracy $b_i$. (Same as for the previous problem.) ```python # You can calculate the number of parameters of my_model by: model_size = sum(param.numel() for param in my_model.parameters()) ``` ================================================================================================================= ## Problem 4 [5%]: Continue testing larger MgNet models (i.e. increase the number of channels) to maximize the test accuracy. (Again, we use the same notation as in problem 2.) ### Tasks: + [5%] Try different training strategies and MgNet architectures with the goal of achieving $b^*>$ 95%. Hint: you can tune the number of epochs, the learning rate schedule, $c_u$, $c_f$, $\nu$, try different $S^{l,i}$ in the same layer $l$, etc... =================================================================================================================
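For reference, here is a minimal NumPy sketch (my addition, not part of the provided course code) of the gradient-descent iteration written in the Problem 1 pseudocode. It assumes `scipy.signal.convolve2d` with `mode='same'` (zero fill) to realize the one-channel, stride-one, zero-padding-one convolution; the kernel is symmetric, so convolution and cross-correlation coincide here.

```python
import numpy as np
from scipy.signal import convolve2d

J = 4
n = 2**J - 1
eta = 1.0 / 8.0
M = 100

# Kernel A and right-hand side f from Problem 1.
A = np.array([[0., -1., 0.],
              [-1., 4., -1.],
              [0., -1., 0.]])
f = np.full((n, n), 1.0 / (n + 1)**2)

u = np.random.randn(n, n)   # random initial guess u^0
errors = []
for m in range(M):
    residual = f - convolve2d(u, A, mode='same')   # f - A*u with zero padding
    u = u + eta * residual                         # gradient-descent update
    # Frobenius-norm error ||A*u - f||_F after this iteration.
    errors.append(np.linalg.norm(convolve2d(u, A, mode='same') - f))

print('Error after {} iterations: {:.3e}'.format(M, errors[-1]))
```

The multigrid iteration follows the same outer loop, with the update replaced by a call to the provided `MG1` routine.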
e1238afed5b692b6f07ce0e31655db18b0a455c6
99,241
ipynb
Jupyter Notebook
docs/_sources/Module6/Final_Project.ipynb
liuzhengqi1996/math452_Spring2022
b01d1d9bee4778b3069e314c775a54f16dd44053
[ "MIT" ]
null
null
null
docs/_sources/Module6/Final_Project.ipynb
liuzhengqi1996/math452_Spring2022
b01d1d9bee4778b3069e314c775a54f16dd44053
[ "MIT" ]
null
null
null
docs/_sources/Module6/Final_Project.ipynb
liuzhengqi1996/math452_Spring2022
b01d1d9bee4778b3069e314c775a54f16dd44053
[ "MIT" ]
null
null
null
349.440141
89,320
0.925021
true
1,788
Qwen/Qwen-72B
1. YES 2. YES
0.896251
0.909907
0.815505
__label__eng_Latn
0.985058
0.733025
# "모분산 불편 추정량과 모 표준편차" > "모분산의 불편 추정량과 모 표준편차의 관계를 예제를 통하여 계산해 본다." - toc: false - badges: true - author: 단호진 - categories: [statistics] 수리통계학 연습 문제 4.2.10을 살피다 지금까지 별 의심 없이 써왔던 모분산의 불편 추청량 $S^2$을 다시 생각해보게 되었다. 모 표준편차의 분편 추정량을 $\sqrt{S^2}$으로 볼 수 있냐는 문제이다. 모분산의 불편 추정량은 다음과 같다. $S^2 = \frac{\sum_{i=1}^{n} (X_i - \bar X_n)^2}{n - 1}$ $X_i \sim N(\mu, \sigma^2)$이고, $\bar X_n = \sum_{i=1}^{n} X_i$이다. 불편 추청량이라 함은 $E(S^2) = \sigma^2$ 식이 성립한다는 것이다. 그렇다면, $E(S) = \sigma$이라고 볼 수 있나? 그렇지 않다. 더 자세한 내용은 모 표준 편차의 불편 추청에 관한 위키피디아 문서를 참고하기 바란다[2]. 문제의 힌트를 사용해서 $E(S)$를 계산해 보겠다. $E(S) = \frac{\sigma}{\sqrt{n-1}} E\left[\sqrt{\frac{(n-1)S^2}{\sigma^2}}\right]$ 제곱근 안의 항은 $\frac{(n-1)S^2}{\sigma^2} \sim \chi^2(n-1)$이므로 감마분포의 확률밀도함수(pdf) $f(x)$를 이용하여 다음과 같이 쓸 수 있다. $E(S) = \frac{\sigma}{\sqrt{n-1}} \int_0^{\infty} x^{1/2} f(x) dx$ 단, 확률밀도함수 f(x)는 자유도 $r$과 정의 구간 $0<x<\infty$에 대하여 다음과 같다. $f(x) = \frac{1}{\Gamma(r/2) 2^{r/2}} x^{r/2 - 1} e^{-x/2}$ 연습문제에서 $n=9$이고, $r=n-1=8$이다. 자유도를 적분식에 넣고 감마 함수로 정리하면 어렵지 않게 $E(s)$ 값을 계산할 수도 있지만, 여기에서는 sympy 패키지를 이용하여 적분을 풀어보겠다. 참고 문헌 1. 호그, 매킨, 크레이그, 박태영 옮김, 수리통계학 개론, 7판, Pearson/경문사, 2018 1. 위키피디아, Unbiased estimation of standard deviation, 최종 편집 2020년 5월 7일, https://en.wikipedia.org/wiki/Unbiased_estimation_of_standard_deviation ```python from sympy import * init_printing() x, sigma = symbols('x, sigma') n = 9 r = Rational(n - 1, 1) f = 1 / gamma(r / 2) / 2**(r / 2) * x**(r / 2 - 1) * exp(-x / 2) f ``` ```python E_S = sigma / sqrt(r) * integrate(sqrt(x) * f, [x, 0, oo]) # print(E_S) E_S ``` ```python # print(E_S.evalf()) E_S.evalf() ``` 불편 분산의 제곱근 기댓값은 $E(S) = 0.969 \sigma$로 **불편 분산의 제곱근은 모 표준 편차의 불편량으로 사용할 수 없다**는 점을 확인하였다. 4.2.10 (b)에서 신뢰구간이 확률변수 $t(8) = \sqrt{9} (\bar X - \mu) / S$에 근거를 두므로 95% 신뢰 구간의 길이는 $2t_{\alpha/2, n-1} S / \sqrt{n}$이다. 마지막으로 $S$와 $\sigma$ 관계를 삽입하면 신뢰 구간의 길이를 얻을 수 있다. ```python from scipy.stats import t import numpy as np rv = t(n - 1) 2 * rv.ppf(0.975) / np.sqrt(9) * 0.96931 ``` 최종적으로 계산된 신뢰 구간의 길이는 1.49 $\sigma$이다.
d2e34e07ffce28474db1451df1b204ff3f114d0e
13,825
ipynb
Jupyter Notebook
_notebooks/2020-12-27-모분산-모표준편차-불편-추정량.ipynb
danhojin/jupyter-blog
a765d0169a666fdbafaa84ff9efba9d9ca48c41c
[ "Apache-2.0" ]
null
null
null
_notebooks/2020-12-27-모분산-모표준편차-불편-추정량.ipynb
danhojin/jupyter-blog
a765d0169a666fdbafaa84ff9efba9d9ca48c41c
[ "Apache-2.0" ]
5
2020-12-26T23:43:58.000Z
2021-05-01T03:32:46.000Z
_notebooks/2020-12-27-모분산-모표준편차-불편-추정량.ipynb
danhojin/jupyter-blog
a765d0169a666fdbafaa84ff9efba9d9ca48c41c
[ "Apache-2.0" ]
null
null
null
68.440594
2,624
0.783146
true
1,339
Qwen/Qwen-72B
1. YES 2. YES
0.679179
0.76908
0.522343
__label__kor_Hang
0.999997
0.051907
```python import sympy as sm s1 = sm.FiniteSet(1,2) s11 = sm.FiniteSet(1,2,3) s2 = sm.FiniteSet(1,2,3,4,5) s3 = sm.FiniteSet(*range(5,9)) # element 1 in s # subset s1.is_subset(s2) s2.is_superset(s1) s1.is_proper_subset(s11) # pwoerset s1.powerset() #union s3.union(s1) # intersect s3.intersect(s2) for i in s1**2: print(i) ``` (1, 1) (2, 1) (1, 2) (2, 2) ```python import matplotlib.pyplot as plt import numpy as np fig = plt.figure() ax = fig.add_subplot() x = np.linspace(-2,2,1000) y = np.exp(-x**2) ax.plot(x,y) ax.plot(x,np.exp(-x)) ``` ```python x = sm.symbols('x') a = sm.symbols('a',real=True) C = sm.symbols('C') p = C* sm.exp(-x**2/2/a**2) sm.Integral(p,(x,-sm.oo,sm.oo)).doit() ``` $\displaystyle \begin{cases} \sqrt{2} \sqrt{\pi} C a & \text{for}\: 2 \left|{\arg{\left(a \right)}}\right| \leq \frac{\pi}{2} \\\int\limits_{-\infty}^{\infty} C e^{- \frac{x^{2}}{2 a^{2}}}\, dx & \text{otherwise} \end{cases}$ ```python import sympy.functions sm.functions.combinatorial.numbers.nP(5,2) sm.factorial(5)/sm.factorial(3) sm.functions.combinatorial.numbers.nC(5,2) sm.factorial(5)/sm.factorial(3)/sm.factorial(2) ``` $\displaystyle 20$ ```python ``` $\displaystyle \operatorname{atan}{\left(\sqrt{2} \right)}$ ```python ``` $\displaystyle \frac{\sqrt{2}}{2}$ ```python ```
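A natural follow-up to the Gaussian integral above (my addition, not in the original notebook): solving for the constant $C$ that makes the density integrate to one recovers the usual normalization $C = 1/(\sqrt{2\pi}\,a)$. Declaring $a$ as positive removes the piecewise convergence condition.

```python
import sympy as sm

x = sm.symbols('x')
a = sm.symbols('a', positive=True)   # positive=True drops the Piecewise condition
C = sm.symbols('C')

p = C * sm.exp(-x**2 / 2 / a**2)
total = sm.integrate(p, (x, -sm.oo, sm.oo))      # sqrt(2)*sqrt(pi)*C*a
C_norm = sm.solve(sm.Eq(total, 1), C)[0]
print(C_norm)   # equivalent to 1/(sqrt(2*pi)*a)
```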
91ac6decf20b40cc17e1b3863720741219b524d9
17,801
ipynb
Jupyter Notebook
python/Vectors/Set.ipynb
karng87/nasm_game
a97fdb09459efffc561d2122058c348c93f1dc87
[ "MIT" ]
null
null
null
python/Vectors/Set.ipynb
karng87/nasm_game
a97fdb09459efffc561d2122058c348c93f1dc87
[ "MIT" ]
null
null
null
python/Vectors/Set.ipynb
karng87/nasm_game
a97fdb09459efffc561d2122058c348c93f1dc87
[ "MIT" ]
null
null
null
83.57277
13,152
0.840065
true
523
Qwen/Qwen-72B
1. YES 2. YES
0.927363
0.79053
0.733109
__label__yue_Hant
0.179272
0.541589
```python
%matplotlib inline
from __future__ import print_function, division
import numpy as np
import matplotlib as mpl
from matplotlib import pyplot as pl
import sys
from scipy.special import logsumexp  # logsumexp lives in scipy.special (it was removed from scipy.misc)
```

```python
def logtrapz(lnf, dx):
    # log of the trapezoidal-rule integral of exp(lnf) on a uniform grid with spacing dx
    return np.log(dx/2.) + logsumexp([logsumexp(lnf[:-1]), logsumexp(lnf[1:])])
```

```python
from scipy.special import erf

ndp = 10
#hs = [0.001, 0.01, 0.1, 1., 10., 20., 50.]
h = 10.
sigmas = [0.1, 1., 5., 10.]
C = 100.
hvals = np.linspace(0., C, 5000)
dh = hvals[1] - hvals[0]

kldivs = []
#for h in hs:
for sigma in sigmas:
    #sigma = 1.
    d = h + sigma*np.random.randn(ndp)
    loglike = np.zeros((len(hvals),))
    for dval in d:
        loglike = loglike - (0.5*(dval-hvals)**2/sigma**2) - 0.5*np.log(2.*np.pi*sigma**2)
    logprior = -np.log(C)
    logevd = logtrapz(loglike+logprior, dh)
    logpost = loglike+logprior-logevd
    #ev = 0.5*(erf(0.5*np.sqrt(2.)*d/sigma) + erf(0.5*np.sqrt(2.)*(C-d)/sigma))
    # Approximate KL = int p(h) ln(p(h)/prior) dh, so include the grid spacing dh in the sum.
    kldivergence = np.sum(np.exp(logpost)*(logpost-logprior))*dh
    kldivs.append(kldivergence)
    #print(logevd, file=sys.stdout)
    #print(kldivergence, file=sys.stdout)
```

```python
#pl.semilogx(hs, kldivs)
pl.plot(sigmas, kldivs)
```

```python
from sympy import *
```

```python
x, y, z = symbols('x y z')
integrate(exp(-(x-y)**2/(2*z**2))/sqrt(2*pi*z**2), (y, 0, 100))
```

    z*erf(sqrt(2)*x/(2*z))/(2*sqrt(z**2)) + z*erf(sqrt(2)*(-x + 100)/(2*z))/(2*sqrt(z**2))

```python

```
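As a rough analytic cross-check (my addition, not in the original notebook): with informative data the posterior is close to a Gaussian with standard deviation $\sigma/\sqrt{n}$, and for a Gaussian posterior that sits well inside the uniform prior range $[0, C]$ the KL divergence from the prior is approximately $\ln C - \tfrac{1}{2}\ln(2\pi e\,\sigma^2/n)$ nats. The snippet below, which reuses `C` and `sigmas` from above, gives the values the numerical loop should roughly reproduce.

```python
# Analytic approximation: KL(posterior || uniform prior on [0, C])
# = ln C - differential entropy of a Gaussian with variance sigma^2 / n_data,
# valid while the posterior mass lies essentially inside [0, C].
n_data = 10
for sigma in sigmas:
    kl_approx = np.log(C) - 0.5*np.log(2.*np.pi*np.e*sigma**2/n_data)
    print(sigma, kl_approx)
```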
e18eb6a6486513a1fc619596f4820a144666bdd9
13,679
ipynb
Jupyter Notebook
TestKLdivergence.ipynb
mattpitkin/random_scripts
8fcfc1d25d8ca7ef66778b7b30be564962e3add3
[ "MIT" ]
null
null
null
TestKLdivergence.ipynb
mattpitkin/random_scripts
8fcfc1d25d8ca7ef66778b7b30be564962e3add3
[ "MIT" ]
null
null
null
TestKLdivergence.ipynb
mattpitkin/random_scripts
8fcfc1d25d8ca7ef66778b7b30be564962e3add3
[ "MIT" ]
null
null
null
77.721591
9,934
0.824037
true
550
Qwen/Qwen-72B
1. YES 2. YES
0.855851
0.757794
0.648559
__label__kor_Hang
0.191289
0.345151
# One Degree-of-Freedom (DoF) Hamiltonian Bifurcation of Equilibria

We will now consider two examples of bifurcation of equilibria in two-dimensional Hamiltonian systems; in particular, the Hamiltonian saddle-node and Hamiltonian pitchfork bifurcations.

## Hamiltonian saddle-node bifurcation

We consider the Hamiltonian:

\begin{equation}
H (q, p) = \frac{p^2}{2} - \lambda q + \frac{q^3}{3}, \quad (q, p) \in \mathbb{R}^2,
\label{eq:hamApp13}
\end{equation}

where $\lambda$ is considered to be a parameter that can be varied. From this Hamiltonian, we derive Hamilton's equations:

\begin{eqnarray}
\dot{q} & = & \frac{\partial H}{\partial p} = p, \nonumber \\
\dot{p} & = & -\frac{\partial H}{\partial q} =\lambda - q^2.
\label{eq:hamApp14}
\end{eqnarray}

### Revealing the Phase Space Structures and their implications for Reaction Dynamics

The fixed points for \eqref{eq:hamApp14} are:

\begin{equation}
(q, p) = (\pm\sqrt{\lambda}, 0),
\end{equation}

from which it follows that there are no fixed points for $\lambda <0$, one fixed point for $\lambda =0$, and two fixed points for $\lambda >0$. This is the scenario for a saddle-node bifurcation.

Next we examine the stability of the fixed points. The Jacobian of \eqref{eq:hamApp14} is given by:

\begin{equation}
J =\left(
\begin{array}{cc}
0 & 1\\
-2 q & 0
\end{array}
\right).
\label{eq:hamApp15}
\end{equation}

The eigenvalues of this matrix are:

\begin{equation}
\Lambda_{1, 2} = \pm \sqrt{-2q}.
\end{equation}

Hence $(q, p) = (-\sqrt{\lambda}, 0)$ is a saddle, $(q, p) = (\sqrt{\lambda}, 0)$ is a center, and $(q, p) = (0, 0)$ has two zero eigenvalues. The phase portraits are shown in Fig. [fig:1](#fig:appC_fig3).

<a id="fig:appC_fig3"></a>

<figcaption style="text-align:center;font-size:14px"><b>fig:1 </b><em> The phase portraits for the Hamiltonian saddle-node bifurcation.</em></figcaption><hr>

## Hamiltonian pitchfork bifurcation

We consider the Hamiltonian:

\begin{equation}
H (q, p) = \frac{p^2}{2} - \lambda \frac{q^2}{2} + \frac{q^4}{4},
\label{eq:hamApp16}
\end{equation}

where $\lambda$ is considered to be a parameter that can be varied. From this Hamiltonian, we derive Hamilton's equations:

\begin{eqnarray}
\dot{q} & = & \frac{\partial H}{\partial p} = p, \nonumber \\
\dot{p} & = & -\frac{\partial H}{\partial q} =\lambda q - q^3.
\label{eq:hamApp17}
\end{eqnarray}

### Revealing the Phase Space Structures and their implications for Reaction Dynamics

The fixed points for \eqref{eq:hamApp17} are:

\begin{equation}
(q, p) = (0, 0), \, (\pm\sqrt{\lambda}, 0),
\end{equation}

from which it follows that there is one fixed point for $\lambda \leq 0$, and three fixed points for $\lambda >0$. This is the scenario for a pitchfork bifurcation.

Next we examine the stability of the fixed points. The Jacobian of \eqref{eq:hamApp17} is given by:

\begin{equation}
J = \left(
\begin{array}{cc}
0 & 1\\
\lambda-3q^2 & 0
\end{array}
\right).
\label{eq:hamApp18}
\end{equation}

The eigenvalues of this matrix are:

\begin{equation}
\Lambda_{1, 2} = \pm \sqrt{\lambda - 3q^2 }.
\end{equation}

Hence $(q, p) = (0, 0)$ is a center for $\lambda <0$, a saddle for $\lambda >0$, and has two zero eigenvalues for $\lambda =0$. The fixed points $(q, p) = (\pm\sqrt{\lambda}, 0)$ are centers for $\lambda >0$. The phase portraits are shown in Fig. [fig:2](#fig:appC_fig4).
<a id="fig:appC_fig4"></a> <figcaption style="text-align:center;font-size:14px"><b>fig:2 </b><em> The phase portraits for the Hamiltonian pitchfork bifurcation.</em></figcaption><hr>
e7b83a3d5c707a8dfe5353f910354b802e027fb0
6,945
ipynb
Jupyter Notebook
content/act1/hamiltonian_bifurcation/ham_bif-jekyll.ipynb
champsproject/chem_react_dyn
53ee9b30fbcfa4316eb08fd3ca69cba82cf7b598
[ "CC-BY-4.0" ]
11
2019-12-09T11:23:13.000Z
2020-12-16T09:49:55.000Z
content/act1/hamiltonian_bifurcation/ham_bif-jekyll.ipynb
champsproject/chem_react_dyn
53ee9b30fbcfa4316eb08fd3ca69cba82cf7b598
[ "CC-BY-4.0" ]
40
2019-12-09T14:52:38.000Z
2022-02-26T06:10:08.000Z
content/act1/hamiltonian_bifurcation/ham_bif-jekyll.ipynb
champsproject/chem_react_dyn
53ee9b30fbcfa4316eb08fd3ca69cba82cf7b598
[ "CC-BY-4.0" ]
3
2020-05-12T06:27:20.000Z
2022-02-08T05:29:56.000Z
29.679487
280
0.551188
true
1,181
Qwen/Qwen-72B
1. YES 2. YES
0.909907
0.936285
0.851932
__label__eng_Latn
0.938216
0.817657
# Homework 1

*This notebook includes both coding and written questions. Please hand in this notebook file with all the outputs and your answers to the written questions.*

This assignment covers linear filters, convolution and correlation.

```python
# Setup
from __future__ import print_function

import numpy as np
import matplotlib.pyplot as plt
from time import time
from skimage import io

%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

# for auto-reloading external modules
%load_ext autoreload
%autoreload 2
```

## Part 1: Convolutions

### 1.1 Commutative Property (5 points)

Recall that the convolution of an image $f:\mathbb{R}^2\rightarrow \mathbb{R}$ and a kernel $h:\mathbb{R}^2\rightarrow\mathbb{R}$ is defined as follows:

$$(f*h)[m,n]=\sum_{i=-\infty}^\infty\sum_{j=-\infty}^\infty f[i,j]\cdot h[m-i,n-j]$$

Or equivalently,

\begin{align}
(f*h)[m,n] &= \sum_{i=-\infty}^\infty\sum_{j=-\infty}^\infty h[i,j]\cdot f[m-i,n-j]\\
&= (h*f)[m,n]
\end{align}

Show that this is true (i.e. prove that the convolution operator is commutative: $f*h = h*f$).

**Your Answer:** *Write your solution in this markdown cell. Please write your equations in [LaTex equations](http://jupyter-notebook.readthedocs.io/en/latest/examples/Notebook/Typesetting%20Equations.html).*

### 1.2 Shift Invariance (5 points)

Let $f$ be a function $\mathbb{R}^2\rightarrow\mathbb{R}$. Consider a system $f\xrightarrow{S}g$, where $g=(f*h)$ with some kernel $h:\mathbb{R}^2\rightarrow\mathbb{R}$. Also consider functions $f'(m,n) = f(m-m_0, n-n_0)$ and $g'(m,n) = g(m-m_0, n-n_0)$.

Show that $S$ defined by any kernel $h$ is a Linear Shift Invariant (LSI) system by showing that $g' = (f'*h)$.

**Your Answer:** *Write your solution in this markdown cell. Please write your equations in [LaTex equations](http://jupyter-notebook.readthedocs.io/en/latest/examples/Notebook/Typesetting%20Equations.html).*

### 1.3 Linear Invariance (10 points)

Recall that a system $S$ is considered a linear system if and only if it satisfies the superposition property. In mathematical terms, $S$ is linear iff it satisfies:

$$S[\alpha f_i[m,n] + \beta f_j[m,n]] = \alpha S[f_i[m,n]] + \beta S[f_j[m,n]]$$

Let $f_i$ and $f_j$ be functions $\mathbb{R}^2\rightarrow\mathbb{R}$. Consider a system $f\xrightarrow{S}g$, where $g=(f*h)$ with some kernel $h:\mathbb{R}^2\rightarrow\mathbb{R}$.

Show that $S$ defined by any kernel $h$ is a linear system by showing that the superposition property holds for $S$.

**Your Answer:** *Write your solution in this markdown cell. Please write your equations in [LaTex equations](http://jupyter-notebook.readthedocs.io/en/latest/examples/Notebook/Typesetting%20Equations.html).*

### 1.4 Implementation (30 points)

In this section, you will implement two versions of convolution:
- `conv_nested`
- `conv_fast`

First, run the code cell below to load the image to work with.

```python
# Open image as grayscale
img = io.imread('dog.jpg', as_gray=True)

# Show image
plt.imshow(img)
plt.axis('off')
plt.title("Isn't he cute?")
plt.show()
```

Now, implement the function **`conv_nested`** in **`filters.py`**. This is a naive implementation of convolution which uses 4 nested for-loops. It takes an image $f$ and a kernel $h$ as inputs and outputs the convolved image $(f*h)$ that has the same shape as the input image. This implementation should take a few seconds to run.
*- Hint: It may be easier to implement $(h*f)$* We'll first test your `conv_nested` function on a simple input. ```python from filters import conv_nested # Simple convolution kernel. kernel = np.array( [ [1,0,1], [0,0,0], [1,0,0] ]) # Create a test image: a white square in the middle test_img = np.zeros((9, 9)) test_img[3:6, 3:6] = 1 # Run your conv_nested function on the test image test_output = conv_nested(test_img, kernel) # Build the expected output expected_output = np.zeros((9, 9)) expected_output[2:7, 2:7] = 1 expected_output[5:, 5:] = 0 expected_output[4, 2:5] = 2 expected_output[2:5, 4] = 2 expected_output[4, 4] = 3 # Plot the test image plt.subplot(1,3,1) plt.imshow(test_img) plt.title('Test image') plt.axis('off') # Plot your convolved image plt.subplot(1,3,2) plt.imshow(test_output) plt.title('Convolution') plt.axis('off') # Plot the exepected output plt.subplot(1,3,3) plt.imshow(expected_output) plt.title('Exepected output') plt.axis('off') plt.show() # Test if the output matches expected output assert np.max(test_output - expected_output) < 1e-10, "Your solution is not correct." ``` Now let's test your `conv_nested` function on a real image. ```python from filters import conv_nested # Simple convolution kernel. # Feel free to change the kernel to see different outputs. kernel = np.array( [ [1,0,-1], [2,0,-2], [1,0,-1] ]) out = conv_nested(img, kernel) # Plot original image plt.subplot(2,2,1) plt.imshow(img) plt.title('Original') plt.axis('off') # Plot your convolved image plt.subplot(2,2,3) plt.imshow(out) plt.title('Convolution') plt.axis('off') # Plot what you should get solution_img = io.imread('convoluted_dog.jpg', as_gray=True) plt.subplot(2,2,4) plt.imshow(solution_img) plt.title('What you should get') plt.axis('off') plt.show() ``` Let us implement a more efficient version of convolution using array operations in numpy. As shown in the lecture, a convolution can be considered as a sliding window that computes sum of the pixel values weighted by the flipped kernel. The faster version will i) zero-pad an image, ii) flip the kernel horizontally and vertically, and iii) compute weighted sum of the neighborhood at each pixel. First, implement the function **`zero_pad`** in **`filters.py`**. ```python from filters import zero_pad pad_width = 20 # width of the padding on the left and right pad_height = 40 # height of the padding on the top and bottom padded_img = zero_pad(img, pad_height, pad_width) # Plot your padded dog plt.subplot(1,2,1) plt.imshow(padded_img) plt.title('Padded dog') plt.axis('off') # Plot what you should get solution_img = io.imread('padded_dog.jpg', as_gray=True) plt.subplot(1,2,2) plt.imshow(solution_img) plt.title('What you should get') plt.axis('off') plt.show() ``` Next, complete the function **`conv_fast`** in **`filters.py`** using `zero_pad`. Run the code below to compare the outputs by the two implementations. `conv_fast` should run significantly faster than `conv_nested`. Depending on your implementation and computer, `conv_nested` should take a few seconds and `conv_fast` should be around 5 times faster. ```python from filters import conv_fast t0 = time() out_fast = conv_fast(img, kernel) t1 = time() out_nested = conv_nested(img, kernel) t2 = time() # Compare the running time of the two implementations print("conv_nested: took %f seconds." % (t2 - t1)) print("conv_fast: took %f seconds." 
      % (t1 - t0))

# Plot conv_nested output
plt.subplot(1,2,1)
plt.imshow(out_nested)
plt.title('conv_nested')
plt.axis('off')

# Plot conv_fast output
plt.subplot(1,2,2)
plt.imshow(out_fast)
plt.title('conv_fast')
plt.axis('off')

# Make sure that the two outputs are the same
if not (np.max(out_fast - out_nested) < 1e-10):
    print("Different outputs! Check your implementation.")
```

### Extra Credit 1 (10 points)

Devise a faster version of convolution and implement **`conv_faster`** in **`filters.py`**. You will earn extra credit only if the `conv_faster` runs faster (by a fair margin) than `conv_fast` **and** outputs the same result.

```python
from filters import conv_faster

t0 = time()
out_fast = conv_fast(img, kernel)
t1 = time()
out_faster = conv_faster(img, kernel)
t2 = time()

# Compare the running time of the two implementations
print("conv_fast: took %f seconds." % (t1 - t0))
print("conv_faster: took %f seconds." % (t2 - t1))

# Plot conv_fast output
plt.subplot(1,2,1)
plt.imshow(out_fast)
plt.title('conv_fast')
plt.axis('off')

# Plot conv_faster output
plt.subplot(1,2,2)
plt.imshow(out_faster)
plt.title('conv_faster')
plt.axis('off')

# Make sure that the two outputs are the same
if not (np.max(out_fast - out_faster) < 1e-10):
    print("Different outputs! Check your implementation.")
```

---

## Part 2: Cross-correlation

Cross-correlation of two 2D signals $f$ and $g$ is defined as follows:

$$(f\star{g})[m,n]=\sum_{i=-\infty}^\infty\sum_{j=-\infty}^\infty f[i,j]\cdot g[i-m,j-n]$$

### 2.1 Template Matching with Cross-correlation (12 points)

Suppose that you are a clerk at a grocery store. One of your responsibilities is to check the shelves periodically and stock them up whenever there are sold-out items. You got tired of this laborious task and decided to build a computer vision system that keeps track of the items on the shelf.

Luckily, you have learned in CS131 that cross-correlation can be used for template matching: a template $g$ is multiplied with regions of a larger image $f$ to measure how similar each region is to the template.

The template of a product (`template.jpg`) and the image of the shelf (`shelf.jpg`) are provided. We will use cross-correlation to find the product in the shelf.

Implement the **`cross_correlation`** function in **`filters.py`** and run the code below.

*- Hint: you may use the `conv_fast` function you implemented in the previous question.*

```python
from filters import cross_correlation

# Load template and image in grayscale
img = io.imread('shelf.jpg')
img_grey = io.imread('shelf.jpg', as_gray=True)
temp = io.imread('template.jpg')
temp_grey = io.imread('template.jpg', as_gray=True)

# Perform cross-correlation between the image and the template
out = cross_correlation(img_grey, temp_grey)

# Find the location with maximum similarity
y,x = (np.unravel_index(out.argmax(), out.shape))

# Display product template
plt.figure(figsize=(25,20))
plt.subplot(3, 1, 1)
plt.imshow(temp)
plt.title('Template')
plt.axis('off')

# Display cross-correlation output
plt.subplot(3, 1, 2)
plt.imshow(out)
plt.title('Cross-correlation (white means more correlated)')
plt.axis('off')

# Display image
plt.subplot(3, 1, 3)
plt.imshow(img)
plt.title('Result (blue marker on the detected location)')
plt.axis('off')

# Draw marker at detected location
plt.plot(x, y, 'bx', ms=40, mew=10)
plt.show()
```

#### Interpretation

How does the output of the cross-correlation filter look? Was it able to detect the product correctly? Explain what problems there might be with using a raw template as a filter.
**Your Answer:** *Write your solution in this markdown cell.*

---

### 2.2 Zero-mean cross-correlation (6 points)

A solution to this problem is to subtract the mean value of the template so that it has zero mean.

Implement the **`zero_mean_cross_correlation`** function in **`filters.py`** and run the code below.

**If your implementation is correct, you should see the blue cross centered over the correct cereal box.**

```python
from filters import zero_mean_cross_correlation

# Perform cross-correlation between the image and the template
out = zero_mean_cross_correlation(img_grey, temp_grey)

# Find the location with maximum similarity
y,x = (np.unravel_index(out.argmax(), out.shape))

# Display product template
plt.figure(figsize=(30,20))
plt.subplot(3, 1, 1)
plt.imshow(temp)
plt.title('Template')
plt.axis('off')

# Display cross-correlation output
plt.subplot(3, 1, 2)
plt.imshow(out)
plt.title('Cross-correlation (white means more correlated)')
plt.axis('off')

# Display image
plt.subplot(3, 1, 3)
plt.imshow(img)
plt.title('Result (blue marker on the detected location)')
plt.axis('off')

# Draw marker at detected location
plt.plot(x, y, 'bx', ms=40, mew=10)
plt.show()
```

You can also determine whether the product is present with appropriate scaling and thresholding.

```python
def check_product_on_shelf(shelf, product):
    out = zero_mean_cross_correlation(shelf, product)

    # Scale output by the size of the template
    out = out / float(product.shape[0]*product.shape[1])

    # Threshold output (this is arbitrary, you would need to tune the threshold for a real application)
    out = out > 0.025

    if np.sum(out) > 0:
        print('The product is on the shelf')
    else:
        print('The product is not on the shelf')

# Load image of the shelf without the product
img2 = io.imread('shelf_soldout.jpg')
img2_grey = io.imread('shelf_soldout.jpg', as_gray=True)

plt.imshow(img)
plt.axis('off')
plt.show()
check_product_on_shelf(img_grey, temp_grey)

plt.imshow(img2)
plt.axis('off')
plt.show()
check_product_on_shelf(img2_grey, temp_grey)
```

---

### 2.3 Normalized Cross-correlation (12 points)

One day the light near the shelf goes out and the product tracker starts to malfunction. The `zero_mean_cross_correlation` is not robust to changes in lighting conditions. The code below demonstrates this.

```python
from filters import normalized_cross_correlation

# Load image
img = io.imread('shelf_dark.jpg')
img_grey = io.imread('shelf_dark.jpg', as_gray=True)

# Perform cross-correlation between the image and the template
out = zero_mean_cross_correlation(img_grey, temp_grey)

# Find the location with maximum similarity
y,x = (np.unravel_index(out.argmax(), out.shape))

# Display image
plt.imshow(img)
plt.title('Result (red marker on the detected location)')
plt.axis('off')

# Draw marker at detected location
plt.plot(x, y, 'rx', ms=25, mew=5)
plt.show()
```

A solution is to normalize the pixels of the image and template at every step before comparing them. This is called **normalized cross-correlation**.
The mathematical definition for normalized cross-correlation of $f$ and template $g$ is: $$(f\star{g})[m,n]=\sum_{i,j} \frac{f[i,j]-\overline{f_{m,n}}}{\sigma_{f_{m,n}}} \cdot \frac{g[i-m,j-n]-\overline{g}}{\sigma_g}$$ where: - $f_{m,n}$ is the patch image at position $(m,n)$ - $\overline{f_{m,n}}$ is the mean of the patch image $f_{m,n}$ - $\sigma_{f_{m,n}}$ is the standard deviation of the patch image $f_{m,n}$ - $\overline{g}$ is the mean of the template $g$ - $\sigma_g$ is the standard deviation of the template $g$ Implement **`normalized_cross_correlation`** function in **`filters.py`** and run the code below. ```python from filters import normalized_cross_correlation # Perform normalized cross-correlation between the image and the template out = normalized_cross_correlation(img_grey, temp_grey) # Find the location with maximum similarity y,x = (np.unravel_index(out.argmax(), out.shape)) # Display image plt.imshow(img) plt.title('Result (red marker on the detected location)') plt.axis('off') # Draw marker at detcted location plt.plot(x, y, 'rx', ms=25, mew=5) plt.show() ``` ## Part 3: Separable Filters ### 3.1 Theory (10 points) Consider an $M_1\times{N_1}$ image $I$ and an $M_2\times{N_2}$ filter $F$. A filter $F$ is **separable** if it can be written as a product of two 1D filters: $F=F_1F_2$. For example, $$F= \begin{bmatrix} 1 & -1 \\ 1 & -1 \end{bmatrix} $$ can be written as a matrix product of $$F_1= \begin{bmatrix} 1 \\ 1 \end{bmatrix}, F_2= \begin{bmatrix} 1 & -1 \end{bmatrix} $$ Therefore $F$ is a separable filter. Prove that for any separable filter $F=F_1F_2$, $$I*F=(I*F_1)*F_2$$ **Your Answer:** *Write your solution in this markdown cell. Please write your equations in [LaTex equations](http://jupyter-notebook.readthedocs.io/en/latest/examples/Notebook/Typesetting%20Equations.html).* ### 3.2 Complexity comparison (10 points) Consider an $M_1\times{N_1}$ image $I$ and an $M_2\times{N_2}$ filter $F$ that is separable (i.e. $F=F_1F_2$). (i) How many multiplication operations do you need to do a direct 2D convolution (i.e. $I*F$)?<br> (ii) How many multiplication operations do you need to do 1D convolutions on rows and columns (i.e. $(I*F_1)*F_2$)?<br> (iii) Use Big-O notation to argue which one is more efficient in general: direct 2D convolution or two successive 1D convolutions? **Your Answer:** *Write your solution in this markdown cell. Please write your equations in [LaTex equations](http://jupyter-notebook.readthedocs.io/en/latest/examples/Notebook/Typesetting%20Equations.html).* Now, we will empirically compare the running time of a separable 2D convolution and its equivalent two 1D convolutions. The Gaussian kernel, widely used for blurring images, is one example of a separable filter. Run the code below to see its effect. ```python # Load image img = io.imread('dog.jpg', as_grey=True) # 5x5 Gaussian blur kernel = np.array( [ [1,4,6,4,1], [4,16,24,16,4], [6,24,36,24,6], [4,16,24,16,4], [1,4,6,4,1] ]) t0 = time() out = conv_nested(img, kernel) t1 = time() t_normal = t1 - t0 # Plot original image plt.subplot(1,2,1) plt.imshow(img) plt.title('Original') plt.axis('off') # Plot convolved image plt.subplot(1,2,2) plt.imshow(out) plt.title('Blurred') plt.axis('off') plt.show() ``` In the below code cell, define the two 1D arrays (`k1` and `k2`) whose product is equal to the Gaussian kernel. 
```python # The kernel can be written as outer product of two 1D filters k1 = None # shape (5, 1) k2 = None # shape (1, 5) ### YOUR CODE HERE pass ### END YOUR CODE # Check if kernel is product of k1 and k2 if not np.all(k1 * k2 == kernel): print('k1 * k2 is not equal to kernel') assert k1.shape == (5, 1), "k1 should have shape (5, 1)" assert k2.shape == (1, 5), "k2 should have shape (1, 5)" ``` We now apply the two versions of convolution to the same image, and compare their running time. Note that the outputs of the two convolutions must be the same. ```python # Perform two convolutions using k1 and k2 t0 = time() out_separable = conv_nested(img, k1) out_separable = conv_nested(out_separable, k2) t1 = time() t_separable = t1 - t0 # Plot normal convolution image plt.subplot(1,2,1) plt.imshow(out) plt.title('Normal convolution') plt.axis('off') # Plot separable convolution image plt.subplot(1,2,2) plt.imshow(out_separable) plt.title('Separable convolution') plt.axis('off') plt.show() print("Normal convolution: took %f seconds." % (t_normal)) print("Separable convolution: took %f seconds." % (t_separable)) ``` ```python # Check if the two outputs are equal assert np.max(out_separable - out) < 1e-10 ```
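As a closing aside (separate from the graded cells), separability is easy to sanity-check numerically: a 2D filter is separable exactly when it equals the outer product of a column filter and a row filter. A minimal sketch using the $2\times2$ example from Section 3.1, assuming only `numpy`:

```python
import numpy as np

# Column and row factors of the 2x2 example filter from Section 3.1
F1 = np.array([[1], [1]])    # shape (2, 1)
F2 = np.array([[1, -1]])     # shape (1, 2)

# Their matrix product (equivalently np.outer) rebuilds the 2D filter
F = F1 @ F2
print(F)                     # [[ 1 -1]
                             #  [ 1 -1]]
assert np.array_equal(F, np.array([[1, -1], [1, -1]]))
```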
086dbf21df14267d9470e474f20c09d874fb5d26
27,291
ipynb
Jupyter Notebook
fall_2019/hw1_release/hw1.ipynb
atagulmert/CS131_release
e830eb12970e41d4350be526d631e1fdd51f5274
[ "MIT" ]
371
2017-10-04T01:27:05.000Z
2022-03-24T21:38:03.000Z
fall_2019/hw1_release/hw1.ipynb
atagulmert/CS131_release
e830eb12970e41d4350be526d631e1fdd51f5274
[ "MIT" ]
26
2018-04-10T10:27:10.000Z
2021-10-12T20:08:37.000Z
fall_2019/hw1_release/hw1.ipynb
atagulmert/CS131_release
e830eb12970e41d4350be526d631e1fdd51f5274
[ "MIT" ]
340
2017-10-03T22:27:11.000Z
2022-03-15T02:45:43.000Z
31.994138
405
0.574585
true
5,136
Qwen/Qwen-72B
1. YES 2. YES
0.795658
0.891811
0.709577
__label__eng_Latn
0.974418
0.486916
### Authors’ Note: This is the 1st of 3 notebooks prepared on [2020 Winter School on Synchronization](https://complex-systems-turkey.github.io/). If you have any questions or comments please contact us at [GitHub repository](https://github.com/complex-systems-turkey/winter-2020-synchronization). * [Oğuz Kaan Yüksel](https://github.com/okyksl) * [Enis Simsar](https://github.com/enisimsar) * [Galip Ümit Yolcu](https://github.com/gumityolcu) * [Suzan Üsküdarlı](https://github.com/uskudarli) # Introduction to Dynamical Systems Dynamical systems are rules that determine how a particular state moves to another state in time. To understand quantative behaviour of these systems over time, first we need to define numerical integration and approximation methods. ```python import numpy as np import matplotlib.pyplot as plt %config InlineBackend.figure_format = 'retina' ``` ```python def integrate(estimator, f, x, t=0, dt=0.01, n=100): """Numerical approximation of a trajectory in a vector field Parameters ---------- estimator : function A function that estimates the change of value in trajectory from a given point. f : function A function that returns a vector field (or a gradient) on a given point. x : array_like Initial value point of the trajectory. t : float | tuple | array_like Initial time point | start and end time values | whole time points of the integration. dt : float Time step will be used in the integration. Will only be used if `t` is not given as array_like. n : int Number of steps will be used in the integration. Will only be used together with `dt` when `t` denotes the initial time value. Returns ------- xs : array_like a list of value points calculated with integration that start from given `x`. ts : array a list of time points that corresponds to given value points. """ # calculate time points that will be used in the integration if isinstance(t, float): # if given as (start) time point ts = np.arange(t, t + dt * n, dt) # start integrating from t using `dt` and `n` elif isinstance(t, tuple): # if given (start, end) time points ts = np.arange(t[0], t[1]+dt, dt) # utilize dt to find intermediate points else: # if given as an array of time points ts = t xs = [ x ] for i in range(1, len(ts)): dt = ts[i] - ts[i-1] # calculate time diff x = x + estimator(f, x, ts[i], dt) # calculate next point xs.append(x) return np.stack(xs), ts ``` ### [Euler Method](https://en.wikipedia.org/wiki/Euler_method) (Tangent Line Method) The tangent line approximation of $y$ at $t$ can be written as: \begin{align} y(t_1) \approx y(t_0) + f(y(t_0), t_0)(t_1-t_0) \end{align} where $f(y) = \frac{dy}{dt}$. When the distance $\|t_1-t_0\|$ is very small, this is a good approximation. To define a discrete approximation method, we can fix a very small time interval $d$ and iteratively approximate the values of $y$ around some neighborhood of $t_0$. \begin{align} y_0 &= y(t_0) \\ y_i &= y_{i-1} + f(y_{i-1}) \cdot d \end{align} where $y_i \approx y(t_0 + i \cdot d)$. ```python def euler(f, x, t=0, dt=0.01): """Estimates a change in trajectory using Euler's Method. Parameters ---------- f : function A function that returns a vector field (or a gradient) on a given point. x : array_like Value point of the estimation. t : float | tuple | array Time point of the estimation. dt : float Time step will be used in estimation. Returns ------- dx : array_like Change estimated by Euler's Method. 
""" return f(x, t) * dt ``` ### [Heun's Method](https://en.wikipedia.org/wiki/Heun%27s_method) (Runge-Kutta $2^{nd}$ order Method) Heun method is improved version of Euler's, a predictor-corrector method. The method uses dynamics to predict the slope at next step, then corrects Euler's method by averaging two consecutive tangent lines. \begin{align} y(t+h) \approx y(t) + \frac{f(y(t), t)+ f(\tilde{y}(t,h), t+h)}{2} \cdot h \end{align} where $\tilde{y}(t,h) = y(t) + f(y(t), t) \cdot h$. This method is a $2^{nd}$ order accurate method which means local error is on the order of $\mathcal{0}(h^3)$. \begin{align} y_0 &= y(t_0) \\ t_{i+1} &= t_0 + i \cdot d \\ y_{i+1} &= y_i + \frac{f(y_i, t_i)+ f(\tilde{y}_{i+1}, t_{i+1})}{2} \cdot d \end{align} where $\tilde{y}_{i+1}$ is calculated with Euler's Method. ```python def heun(f, x, t=0, dt=0.01): """Estimates a change in trajectory using Heun's Method. Parameters ---------- f : function A function that returns a vector field (or a gradient) on a given point. x : array_like Value point of the estimation. t : float | tuple | array Time point of the estimation. dt : float Time step will be used in estimation. Returns ------- dx : array_like Change estimated by Heun's Method. """ d_euler = euler(f, x, t, dt) # calculate change with Euler's x_euler = x + d_euler # calculate point estimated with Euler's return (d_euler + f(x_euler, t + dt) * dt) * 0.5 # apply Heun's correction ``` ### [Runge-Kutta $4^{th}$ order Method](https://en.wikipedia.org/wiki/Runge–Kutta_methods) RK4 method, the most famous member of the Runge-Kutta family, is a $4^{th}$ order accurate method which means local error is on the order of $\mathcal{0}(h^5)$. Here is the numerical integration procedure defined recursively on time points $t_i$s and associated value points $y_{i}s$. \begin{align} h &= t_i-t_{i-1}\\ y_{i} &= y_{i-1} + \frac{1}{6}(k_1+2k_2+2k_3+k_4)\\ k_1 &= h f(t_{i-1},y_{t_{i-1}})\\ k_2 &= h f(t_{i-1}+\frac{h}{2}, y_{i-1}+\frac{k_1}{2})\\ k_3 &= h f(t_{i-1}+\frac{h}{2}, y_{i-1}+\frac{k_2}{2})\\ k_4 &= h f(t_{i-1}+\frac{h}{2}, y_{i-1}+k_3)\\ \end{align} ```python def rk4(f, x, t, dt): """Estimates a change in trajectory using Runge-Kutta 4th order Method. Parameters ---------- f : function A function that returns a vector field (or a gradient) on a given point. x : array_like Value point of the estimation. t : float | tuple | array Time point of the estimation. dt : float Time step will be used in estimation. Returns ------- dx : array_like Change estimated by RK4 Method. """ k1 = f(x, t) * dt k2 = f(x + k1 * 0.5, t + dt * 0.5) * dt k3 = f(x + k2 * 0.5, t + dt * 0.5) * dt k4 = f(x + k3, t + dt * 0.5) * dt return (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0 ``` ### 1-d Differential Equation We define a $1^{st}$ order differential equation with the general form $\dot{y}=f(y,t)$ where $\dot{y}$ is the derivative of the function $y(t)$ with respect to time $t$.<br> For a simple example, let us choose the equation $\dot{y}=2y$ with the initial condition that $y(0)=1$. 
```python
# define the derivative function
def f(x, t):
    return 2 * x

# define initial conditions and parameters of the integration
x0 = 1
t = (0,5)
dt = 0.1

# calculate trajectory of the initial point with numerical integration
x_euler, ts = integrate(euler, f=f, x=x0, t=t, dt=dt)
x_heun, _ = integrate(heun, f=f, x=x0, t=ts, dt=dt)
x_rk4, _ = integrate(rk4, f=f, x=x0, t=ts, dt=dt)

# calculate analytical solution of the problem
x_crt = np.exp(2 * ts)

# plot trajectories together
plt.figure(figsize=(12,6))
plt.plot(ts, x_crt, linewidth=2, linestyle='--')
plt.plot(ts, x_euler, linewidth=2, alpha=0.75)
plt.plot(ts, x_heun, linewidth=2, alpha=0.75)
plt.plot(ts, x_rk4, linewidth=2, alpha=0.75)
plt.ylabel('x(t)', fontsize=24)
plt.xlabel('t', fontsize=24)
plt.legend(['Analytical', 'Euler\'s', 'Heun\'s', 'Runge-Kutta'], fontsize=24);
```

### 2-d Differential Equation

Now, we are going to take a look at an application of a $2^{nd}$ order differential equation, in particular the mass-spring problem.

\begin{align}
m \frac{d^{2}y}{dt^2} &= -k y
\end{align}

Assume for simplicity $\frac{k}{m} = 1$ where $k$ is the spring constant and $m$ is the mass. Our problem turns into:

\begin{align}
\frac{d^{2}y}{dt^2} &= -y
\end{align}

Assume the initial conditions are $y(0)=1$ and $\dot{y}(0)=0$.

```python
# define the derivative function
def f(x, t):
    return np.asarray([ x[1], -x[0] ])

# define initial conditions and parameters of the integration
x0 = np.asarray([ 1, 0 ])
t = (0,100)
dt = 0.1

# calculate trajectory of the initial point with numerical integration
x_euler, ts = integrate(euler, f=f, x=x0, t=t, dt=dt)
x_heun, _ = integrate(heun, f=f, x=x0, t=t, dt=dt)
x_rk4, _ = integrate(rk4, f=f, x=x0, t=t, dt=dt)

# calculate analytical solution of the problem
x_crt = np.transpose(np.asarray([np.cos(ts), -np.sin(ts)]))

# plot trajectories together
fig, axs = plt.subplots(nrows=1, ncols=3, figsize=(22,6))

axs[0].plot(ts, x_crt[:, 0], linewidth=2, linestyle='--')
axs[0].plot(ts, x_euler[:, 0], linewidth=2, alpha=0.8)
axs[0].plot(ts, x_heun[:, 0], linewidth=2, alpha=0.8)
axs[0].plot(ts, x_rk4[:, 0], linewidth=2, alpha=0.8)
axs[0].set_ylabel('x(t)', fontsize=24)
axs[0].set_xlabel('t', fontsize=24)
axs[0].set_title('Position', fontsize=24)

axs[1].plot(ts, x_crt[:, 1], linewidth=2, linestyle='--')
axs[1].plot(ts, x_euler[:, 1], linewidth=2, alpha=0.8)
axs[1].plot(ts, x_heun[:, 1], linewidth=2, alpha=0.8)
axs[1].plot(ts, x_rk4[:, 1], linewidth=2, alpha=0.8)
axs[1].set_ylabel('$\dot{x}$(t)', fontsize=24)
axs[1].set_xlabel('t', fontsize=24)
axs[1].set_title('Velocity', fontsize=24)

axs[2].plot(x_crt[:, 1], x_crt[:, 0], linewidth=2, linestyle='--')
axs[2].plot(x_euler[:, 1], x_euler[:, 0], linewidth=2, alpha=0.8)
axs[2].plot(x_heun[:, 1], x_heun[:, 0], linewidth=2, alpha=0.8)
axs[2].plot(x_rk4[:, 1], x_rk4[:, 0], linewidth=2, alpha=0.8)
axs[2].set_ylabel('$\dot{x}$(t)', fontsize=24)
axs[2].set_xlabel('x(t)', fontsize=24)
axs[2].set_title('Phase Space', fontsize=24)

fig.legend(['Analytical', 'Euler\'s', 'Heun\'s', 'Runge-Kutta'], fontsize=24);
```

# Synchronization of Linear Systems

### Synchronization of two linearly coupled linear systems

Analytical investigation predicts that these two systems synchronize at the critical coupling

\begin{align}
\alpha_c = \frac{a}{2}
\end{align}

So, as long as

\begin{align}
\alpha \geq \alpha_c = \frac{a}{2}
\end{align}

synchronization should occur.

Let our systems be defined by the equations:

\begin{align}
\dot{y_1} &= ay_1 + \alpha (y_2 - y_1)\\
\dot{y_2} &= ay_2 + \alpha (y_1 - y_2)
\end{align}

```python
"""
1. TODO: The task given here is this "Tune the alpha parameter to see the synchronization case".
   An interactive plot that you can play with a and alpha would be great here.
"""

# define a function for two coupled linear systems
def f(x, t, a=-0.4, alpha=0.5):
    return np.asarray([ a * x[0] + alpha * (x[1] - x[0]),
                        a * x[1] + alpha * (x[0] - x[1]) ])

# define initial conditions and parameters of the integration
x0 = np.asarray([1000, 5])
t = (0, 12)
dt = 0.01

# calculate trajectory of the initial point with numerical integration
x_rk4, _ = integrate(rk4, f=f, x=x0, t=t, dt=dt)

# plot trajectories of both systems
fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(18,6))

axs[0].plot(x_rk4[:, 0], linewidth=2, alpha=0.8)
axs[0].plot(x_rk4[:, 1], linewidth=2, alpha=0.8)
axs[0].set_xlabel('t', fontsize=24)
axs[0].set_title('Time Evolution', fontsize=24)
axs[0].legend(['$x_1(t)$', '$x_2(t)$'], fontsize=24)

axs[1].plot(x_rk4[:, 0], x_rk4[:, 1], linewidth=2, alpha=0.8)
axs[1].set_xlabel('$x_1(t)$', fontsize=24)
axs[1].set_ylabel('$x_2(t)$', fontsize=24)
axs[1].plot(x0[0], x0[1], marker='o', markersize=10, color='r')
axs[1].plot(x_rk4[-1, 0], x_rk4[-1, 1], marker='o', markersize=10, color='g')
axs[1].set_title('Phase Space', fontsize=24)
axs[1].legend(['Trajectory', 'Initial Point', 'Final Point'], fontsize=24);
```

The synchronization error can be defined by:

\begin{align}
E = \frac{1}{N(N-1)}\sum_{i \neq j} |x_i - x_j|
\end{align}

But as $|x_i - x_j| = |x_j - x_i|$, this sum can be written as:

\begin{align}
E = \frac{2}{N(N-1)}\sum_{i = 1}^{N} \sum_{j=1}^{i} |x_i - x_j|
\end{align}

```python
# define a function for synchronization error
def error(x):
    n = x.shape[1]  # assume x is shape (t x n)
    x_row = np.repeat(x[..., np.newaxis], n, axis=2)  # construct t times (n x n) matrices with same values in rows
    x_col = np.transpose(x_row, (0, 2, 1))  # construct t times (n x n) matrices with same values in cols
    x_err = np.abs(x_col - x_row)  # calculate error between all (i, j) pairs over matrices
    x_err = np.sum(x_err, axis=(1,2))  # sum errors in matrices and collapse to (t)
    x_err = x_err / (n * (n-1))  # only have n * (n-1) in denominator because matrices count every (i,j) twice
    return x_err

# calculate trajectory of the initial point with numerical integration and error
x_rk4, ts = integrate(rk4, f=f, x=x0, t=t, dt=dt)
err = error(x_rk4)

# plot synchronization error
plt.figure(figsize=(12,6))
plt.plot(ts, err, linewidth=2, alpha=0.8)
plt.ylabel('Error', fontsize=24)
plt.xlabel('t', fontsize=24);
```

### Synchronization of N linearly coupled linear systems

Analytical investigation predicts that these N systems synchronize at:

\begin{align}
\alpha_c = \frac{a}{N}
\end{align}

where $N$ is the number of coupled systems. So, as long as

\begin{align}
\alpha \geq \alpha_c = \frac{a}{N}
\end{align}

the systems are expected to synchronize.

Things to note:
* As $\alpha$ is increased, synchronization happens much faster. Is that expected?
* At $\alpha = \frac{a}{N}$, are the systems neutrally stable?
* What happens as N tends to infinity?

The coupled systems are given by

\begin{align}
\dot{y_i} = ay_i + \alpha \sum_{j=1}^{N} (y_j - y_i)
\end{align}

```python
"""
1. TODO: The task given here is this "Tune the alpha parameter around its critical value and see the behaviour".
   An interactive plot that you can play with a and alpha would be great here.
""" # define a function for N linearly coupled linear systems def f(x, t, a=0.4, alpha=0.1): n = x.shape[0] x_row = np.repeat(x[..., np.newaxis], n, axis=1) # duplicate over rows x_col = np.transpose(x_row) # duplicate over cols dx_coup = alpha * (x_col - x_row) # calculate coupling matrix dx_diag = a * np.diag(x) # calculate internal dynamics (or diagonal of laplacian) return np.sum(dx_diag + dx_coup, axis=1) # sum over columns to get derivatives for each system # define initial conditions and parameters of the integration x0 = np.asarray([30, 15, -5, 5]) t = (0, 4) dt = 0.01 # calculate trajectory of the initial point with numerical integration and error x_rk4, _ = integrate(rk4, f=f, x=x0, t=t, dt=dt) err = error(x_rk4) fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(20,6)) axs[0].plot(x_rk4, linewidth=2, alpha=0.8) axs[0].set_ylabel('$x_i(t)$', fontsize=24) axs[0].set_xlabel('t', fontsize=24) axs[0].legend([('$x_%d(t)$' % i) for i in range(1, x_rk4.shape[1]+1)], fontsize=16) axs[1].plot(err) axs[1].set_ylabel('Error', fontsize=24) axs[1].set_xlabel('t', fontsize=24); ```
ac9d6d920bb94888a84487130b92a9fc29cf3e4b
704,075
ipynb
Jupyter Notebook
Day #1 - Numerical Integration and Linear Synchronization.ipynb
complex-systems-turkey/winter-2020-synchronization
c6d05ba8fd66ac9755c4526da0382221e0b096de
[ "MIT" ]
1
2020-09-11T14:11:10.000Z
2020-09-11T14:11:10.000Z
Day #1 - Numerical Integration and Linear Synchronization.ipynb
complex-systems-turkey/winter-2020-synchronization
c6d05ba8fd66ac9755c4526da0382221e0b096de
[ "MIT" ]
null
null
null
Day #1 - Numerical Integration and Linear Synchronization.ipynb
complex-systems-turkey/winter-2020-synchronization
c6d05ba8fd66ac9755c4526da0382221e0b096de
[ "MIT" ]
null
null
null
1,066.780303
300,108
0.951
true
4,943
Qwen/Qwen-72B
1. YES 2. YES
0.903294
0.90599
0.818375
__label__eng_Latn
0.891417
0.739693
<a href="https://colab.research.google.com/github/luisarai/NMA2021/blob/main/tutorials/W1D3_ModelFitting/student/W1D3_Tutorial4.ipynb" target="_parent"></a> # Tutorial 4: Multiple linear regression and polynomial regression **Week 1, Day 3: Model Fitting** **By Neuromatch Academy** **Content creators**: Pierre-Étienne Fiquet, Anqi Wu, Alex Hyafil with help from Byron Galbraith, Ella Batty **Content reviewers**: Lina Teichmann, Saeed Salehi, Patrick Mineault, Michael Waskom **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** <p align='center'></p> --- # Tutorial Objectives *Estimated timing of tutorial: 35 minutes* This is Tutorial 4 of a series on fitting models to data. We start with simple linear regression, using least squares optimization (Tutorial 1) and Maximum Likelihood Estimation (Tutorial 2). We will use bootstrapping to build confidence intervals around the inferred linear model parameters (Tutorial 3). We'll finish our exploration of regression models by generalizing to multiple linear regression and polynomial regression (Tutorial 4). We end by learning how to choose between these various models. We discuss the bias-variance trade-off (Tutorial 5) and Cross Validation for model selection (Tutorial 6). In this tutorial, we will generalize the regression model to incorporate multiple features. - Learn how to structure inputs for regression using the 'Design Matrix' - Generalize the MSE for multiple features using the ordinary least squares estimator - Visualize data and model fit in multiple dimensions - Fit polynomial regression models of different complexity - Plot and evaluate the polynomial regression fits ```python # @title Tutorial slides # @markdown These are the slides for the videos in all tutorials today from IPython.display import IFrame IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/2mkq4/?direct%26mode=render%26action=download%26mode=render", width=854, height=480) ``` ```python # @title Video 1: Multiple Linear Regression and Polynomial Regression from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id="BV11Z4y1u7cf", width=854, height=480, fs=1) print('Video available at https://www.bilibili.com/video/{0}'.format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id="d4nfTki6Ejc", width=854, height=480, fs=1, rel=0) print('Video available at https://youtube.com/watch?v=' + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ``` Tab(children=(Output(), Output()), _titles={'0': 'Youtube', '1': 'Bilibili'}) --- # Setup ```python # Imports import numpy as np import matplotlib.pyplot as plt ``` ```python #@title Figure Settings %config InlineBackend.figure_format = 'retina' plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle") ``` ```python # @title Plotting Functions def evaluate_fits(order_list, mse_list): """ Compare the quality of multiple polynomial fits by plotting their MSE values. 
Args: order_list (list): list of the order of polynomials to be compared mse_list (list): list of the MSE values for the corresponding polynomial fit """ fig, ax = plt.subplots() ax.bar(order_list, mse_list) ax.set(title='Comparing Polynomial Fits', xlabel='Polynomial order', ylabel='MSE') def plot_fitted_polynomials(x, y, theta_hat): """ Plot polynomials of different orders Args: x (ndarray): input vector of shape (n_samples) y (ndarray): vector of measurements of shape (n_samples) theta_hat (dict): polynomial regression weights for different orders """ x_grid = np.linspace(x.min() - .5, x.max() + .5) plt.figure() for order in range(0, max_order + 1): X_design = make_design_matrix(x_grid, order) plt.plot(x_grid, X_design @ theta_hat[order]); plt.ylabel('y') plt.xlabel('x') plt.plot(x, y, 'C0.'); plt.legend([f'order {o}' for o in range(max_order + 1)], loc=1) plt.title('polynomial fits') plt.show() ``` --- # Section 1: Multiple Linear Regression *Estimated timing to here from start of tutorial: 8 min* This video covers linear regression with multiple inputs (more than 1D) and polynomial regression. <details> <summary> <font color='blue'>Click here for text recap of video </font></summary> Now that we have considered the univariate case and how to produce confidence intervals for our estimator, we turn to the general linear regression case, where we can have more than one regressor, or feature, in our input. Recall that our original univariate linear model was given as \begin{align} y = \theta x + \epsilon \end{align} where $\theta$ is the slope and $\epsilon$ some noise. We can easily extend this to the multivariate scenario by adding another parameter for each additional feature \begin{align} y = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + ... +\theta_d x_d + \epsilon \end{align} where $\theta_0$ is the intercept and $d$ is the number of features (it is also the dimensionality of our input). We can condense this succinctly using vector notation for a single data point \begin{align} y_i = \boldsymbol{\theta}^{\top}\mathbf{x}_i + \epsilon \end{align} and fully in matrix form \begin{align} \mathbf{y} = \mathbf{X}\boldsymbol{\theta} + \mathbf{\epsilon} \end{align} where $\mathbf{y}$ is a vector of measurements, $\mathbf{X}$ is a matrix containing the feature values (columns) for each input sample (rows), and $\boldsymbol{\theta}$ is our parameter vector. This matrix $\mathbf{X}$ is often referred to as the "[design matrix](https://en.wikipedia.org/wiki/Design_matrix)". We want to find an optimal vector of paramters $\boldsymbol{\hat\theta}$. Recall our analytic solution to minimizing MSE for a single regressor: \begin{align} \hat\theta = \frac{\sum_{i=1}^N x_i y_i}{\sum_{i=1}^N x_i^2}. \end{align} The same holds true for the multiple regressor case, only now expressed in matrix form \begin{align} \boldsymbol{\hat\theta} = (\mathbf{X}^\top\mathbf{X})^{-1}\mathbf{X}^\top\mathbf{y}. \end{align} This is called the [ordinary least squares](https://en.wikipedia.org/wiki/Ordinary_least_squares) (OLS) estimator. For this tutorial we will focus on the two-dimensional case ($d=2$), which allows us to easily visualize our results. As an example, think of a situation where a scientist records the spiking response of a retinal ganglion cell to patterns of light signals that vary in contrast and in orientation. Then contrast and orientation values can be used as features / regressors to predict the cells response. 
In this case our model can be writen for a single data point as: \begin{align} y = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \epsilon \end{align} or for multiple data points in matrix form where \begin{align} \mathbf{X} = \begin{bmatrix} 1 & x_{1,1} & x_{1,2} \\ 1 & x_{2,1} & x_{2,2} \\ \vdots & \vdots & \vdots \\ 1 & x_{n,1} & x_{n,2} \end{bmatrix}, \boldsymbol{\theta} = \begin{bmatrix} \theta_0 \\ \theta_1 \\ \theta_2 \\ \end{bmatrix} \end{align} When we refer to $x_{i, j}$, we mean that it is the i-th data point and the j-th feature of that data point. For our actual exploration dataset we shall set $\boldsymbol{\theta}=[0, -2, -3]$ and draw $N=40$ noisy samples from $x \in [-2,2)$. Note that setting the value of $\theta_0 = 0$ effectively ignores the offset term. ```python # @markdown Execute this cell to simulate some data # Set random seed for reproducibility np.random.seed(1234) # Set parameters theta = [0, -2, -3] n_samples = 40 # Draw x and calculate y n_regressors = len(theta) x0 = np.ones((n_samples, 1)) x1 = np.random.uniform(-2, 2, (n_samples, 1)) x2 = np.random.uniform(-2, 2, (n_samples, 1)) X = np.hstack((x0, x1, x2)) noise = np.random.randn(n_samples) y = X @ theta + noise ax = plt.subplot(projection='3d') ax.plot(X[:,1], X[:,2], y, '.') ax.set( xlabel='$x_1$', ylabel='$x_2$', zlabel='y' ) plt.tight_layout() ``` ## Coding Exercise 1: Ordinary Least Squares Estimator In this exercise you will implement the OLS approach to estimating $\boldsymbol{\hat\theta}$ from the design matrix $\mathbf{X}$ and measurement vector $\mathbf{y}$. You can use the `@` symbol for matrix multiplication, `.T` for transpose, and `np.linalg.inv` for matrix inversion. ```python def ordinary_least_squares(X, y): """Ordinary least squares estimator for linear regression. Args: x (ndarray): design matrix of shape (n_samples, n_regressors) y (ndarray): vector of measurements of shape (n_samples) Returns: ndarray: estimated parameter values of shape (n_regressors) """ ###################################################################### ## TODO for students: solve for the optimal parameter vector using OLS # Fill out function and remove #raise NotImplementedError("Student exercise: solve for theta_hat vector using OLS") ###################################################################### # Compute theta_hat using OLS theta_hat = np.linalg.inv(X.T @ X)@ X.T @y return theta_hat theta_hat = ordinary_least_squares(X, y) print(theta_hat) ``` [ 0.13861386 -2.09395731 -3.16370742] [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D3_ModelFitting/solutions/W1D3_Tutorial4_Solution_25849be9.py) After filling in this function, you should see that $\boldsymbol{\hat\theta}$ = [ 0.13861386, -2.09395731, -3.16370742] Now that we have our $\boldsymbol{\hat\theta}$, we can obtain $\hat{\mathbf{y}}$ and thus our mean squared error. ```python # Compute predicted data theta_hat = ordinary_least_squares(X, y) y_hat = X @ theta_hat # Compute MSE print(f"MSE = {np.mean((y - y_hat)**2):.2f}") ``` MSE = 0.91 Finally, the following code will plot a geometric visualization of the data points (blue) and fitted plane. 
```python # @markdown Execute this cell to visualize data and predicted plane theta_hat = ordinary_least_squares(X, y) xx, yy = np.mgrid[-2:2:50j, -2:2:50j] y_hat_grid = np.array([xx.flatten(), yy.flatten()]).T @ theta_hat[1:] y_hat_grid = y_hat_grid.reshape((50, 50)) ax = plt.subplot(projection='3d') ax.plot(X[:, 1], X[:, 2], y, '.') ax.plot_surface(xx, yy, y_hat_grid, linewidth=0, alpha=0.5, color='C1', cmap=plt.get_cmap('coolwarm')) for i in range(len(X)): ax.plot((X[i, 1], X[i, 1]), (X[i, 2], X[i, 2]), (y[i], y_hat[i]), 'g-', alpha=.5) ax.set( xlabel='$x_1$', ylabel='$x_2$', zlabel='y' ) plt.tight_layout() ``` --- # Section 2: Polynomial Regression So far today, you learned how to predict outputs from inputs by fitting a linear regression model. We can now model all sort of relationships, including in neuroscience! One potential problem with this approach is the simplicity of the model. Linear regression, as the name implies, can only capture a linear relationship between the inputs and outputs. Put another way, the predicted outputs are only a weighted sum of the inputs. What if there are more complicated computations happening? Luckily, many more complex models exist (and you will encounter many more over the next 3 weeks). One model that is still very simple to fit and understand, but captures more complex relationships, is **polynomial regression**, an extension of linear regression. <details> <summary> <font color='blue'>Click here for text recap of relevant part of video </font></summary> Since polynomial regression is an extension of linear regression, everything you learned so far will come in handy now! The goal is the same: we want to predict the dependent variable $y$ given the input values $x$. The key change is the type of relationship between inputs and outputs that the model can capture. Linear regression models predict the outputs as a weighted sum of the inputs: \begin{align} y = \theta_0 + \theta x + \epsilon \end{align} With polynomial regression, we model the outputs as a polynomial equation based on the inputs. For example, we can model the outputs as: \begin{align} y & = \theta_0 + \theta_1 x + \theta_2 x^2 + \theta_3 x^3 + \epsilon \end{align} We can change how complex a polynomial is fit by changing the order of the polynomial. The order of a polynomial refers to the highest power in the polynomial. The equation above is a third order polynomial because the highest value x is raised to is 3. We could add another term ($+ \theta_4 x^4$) to model an order 4 polynomial and so on. First, we will simulate some data to practice fitting polynomial regression models. We will generate random inputs $x$ and then compute y according to $y = x^2 - x - 2 $, with some extra noise both in the input and the output to make the model fitting exercise closer to a real life situation. 
```python # @markdown Execute this cell to simulate some data # setting a fixed seed to our random number generator ensures we will always # get the same psuedorandom number sequence np.random.seed(121) n_samples = 30 x = np.random.uniform(-2, 2.5, n_samples) # inputs uniformly sampled from [-2, 2.5) y = x**2 - x - 2 # computing the outputs output_noise = 1/8 * np.random.randn(n_samples) y += output_noise # adding some output noise input_noise = 1/2 * np.random.randn(n_samples) x += input_noise # adding some input noise fig, ax = plt.subplots() ax.scatter(x, y) # produces a scatter plot ax.set(xlabel='x', ylabel='y'); ``` ## Section 2.1: Design matrix for polynomial regression *Estimated timing to here from start of tutorial: 16 min* Now we have the basic idea of polynomial regression and some noisy data, let's begin! The key difference between fitting a linear regression model and a polynomial regression model lies in how we structure the input variables. Let's go back to one feature for each data point. For linear regression, we used $\mathbf{X} = \mathbf{x}$ as the input data, where $\mathbf{x}$ is a vector where each element is the input for a single data point. To add a constant bias (a y-intercept in a 2-D plot), we use $\mathbf{X} = \big[ \boldsymbol 1, \mathbf{x} \big]$, where $\boldsymbol 1$ is a column of ones. When fitting, we learn a weight for each column of this matrix. So we learn a weight that multiples with column 1 - in this case that column is all ones so we gain the bias parameter ($+ \theta_0$). This matrix $\mathbf{X}$ that we use for our inputs is known as a **design matrix**. We want to create our design matrix so we learn weights for $\mathbf{x}^2, \mathbf{x}^3,$ etc. Thus, we want to build our design matrix $X$ for polynomial regression of order $k$ as: \begin{align} \mathbf{X} = \big[ \boldsymbol 1 , \mathbf{x}^1, \mathbf{x}^2 , \ldots , \mathbf{x}^k \big], \end{align} where $\boldsymbol{1}$ is the vector the same length as $\mathbf{x}$ consisting of of all ones, and $\mathbf{x}^p$ is the vector $\mathbf{x}$ with all elements raised to the power $p$. Note that $\boldsymbol{1} = \mathbf{x}^0$ and $\mathbf{x}^1 = \mathbf{x}$. If we have inputs with more than one feature, we can use a similar design matrix but include all features raised to each power. Imagine that we have two features per data point: $\mathbf{x}_m$ is a vector of one feature per data point and $\mathbf{x}_n$ is another. Our design matrix for a polynomial regression would be: \begin{align} \mathbf{X} = \big[ \boldsymbol 1 , \mathbf{x}_m^1, \mathbf{x}_n^1, \mathbf{x}_m^2 , \mathbf{x}_n^2\ldots , \mathbf{x}_m^k , \mathbf{x}_n^k \big], \end{align} ### Coding Exercise 2.1: Structure design matrix Create a function (`make_design_matrix`) that structures the design matrix given the input data and the order of the polynomial you wish to fit. We will print part of this design matrix for our data and order 5. 
```python def make_design_matrix(x, order): """Create the design matrix of inputs for use in polynomial regression Args: x (ndarray): input vector of shape (n_samples) order (scalar): polynomial regression order Returns: ndarray: design matrix for polynomial regression of shape (samples, order+1) """ ######################################################################## ## TODO for students: create the design matrix ## # Fill out function and remove # raise NotImplementedError("Student exercise: create the design matrix") ######################################################################## # Broadcast to shape (n x 1) so dimensions work if x.ndim == 1: x = x[:, None] #if x has more than one feature, we don't want multiple columns of ones so we assign # x^0 here design_matrix = np.ones((x.shape[0], 1)) # Loop through rest of degrees and stack columns (hint: np.hstack) for degree in range(1, order + 1): design_matrix = np.hstack((design_matrix, x**degree)) return design_matrix order = 5 X_design = make_design_matrix(x, order) print(X_design[0:2, 0:2]) ``` [[ 1. -1.51194917] [ 1. -0.35259945]] [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D3_ModelFitting/solutions/W1D3_Tutorial4_Solution_5e30078a.py) You should see that the printed section of this design matrix is `[[ 1. -1.51194917] [ 1. -0.35259945]]` ## Section 2.2: Fitting polynomial regression models *Estimated timing to here from start of tutorial: 24 min* Now that we have the inputs structured correctly in our design matrix, fitting a polynomial regression is the same as fitting a linear regression model! All of the polynomial structure we need to learn is contained in how the inputs are structured in the design matrix. We can use the same least squares solution we computed in previous exercises. ### Coding Exercise 2.2: Fitting polynomial regression models with different orders Here, we will fit polynomial regression models to find the regression coefficients ($\theta_0, \theta_1, \theta_2,$ ...) by solving the least squares problem. Create a function `solve_poly_reg` that loops over different order polynomials (up to `max_order`), fits that model, and saves out the weights for each. You may invoke the `ordinary_least_squares` function. We will then qualitatively inspect the quality of our fits for each order by plotting the fitted polynomials on top of the data. In order to see smooth curves, we evaluate the fitted polynomials on a grid of $x$ values (ranging between the largest and smallest of the inputs present in the dataset). ```python def solve_poly_reg(x, y, max_order): """Fit a polynomial regression model for each order 0 through max_order. 
Args: x (ndarray): input vector of shape (n_samples) y (ndarray): vector of measurements of shape (n_samples) max_order (scalar): max order for polynomial fits Returns: dict: fitted weights for each polynomial model (dict key is order) """ # Create a dictionary with polynomial order as keys, # and np array of theta_hat (weights) as the values theta_hats = {} # Loop over polynomial orders from 0 through max_order for order in range(max_order + 1): ################################################################################## ## TODO for students: Create design matrix and fit polynomial model for this order # Fill out function and remove #raise NotImplementedError("Student exercise: fit a polynomial model") ################################################################################## # Create design matrix X_design = make_design_matrix(x, order) # Fit polynomial model this_theta = ordinary_least_squares(X_design, y) theta_hats[order] = this_theta return theta_hats max_order = 5 theta_hats = solve_poly_reg(x, y, max_order) # Visualize plot_fitted_polynomials(x, y, theta_hats) ``` [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D3_ModelFitting/solutions/W1D3_Tutorial4_Solution_f5217dbd.py) *Example output:* ## Section 2.3: Evaluating fit quality *Estimated timing to here from start of tutorial: 29 min* As with linear regression, we can compute mean squared error (MSE) to get a sense of how well the model fits the data. We compute MSE as: \begin{align} \mathrm{MSE} = \frac 1 N ||\mathbf{y} - \hat{\mathbf{y}}||^2 = \frac 1 N \sum_{i=1}^N (y_i - \hat y_i)^2 \end{align} where the predicted values for each model are given by $ \hat{\mathbf{y}} = \mathbf{X}\boldsymbol{\hat\theta}$. *Which model (i.e. which polynomial order) do you think will have the best MSE?* ### Coding Exercise 2.3: Compute MSE and compare models We will compare the MSE for different polynomial orders with a bar plot. ```python mse_list = [] order_list = list(range(max_order + 1)) for order in order_list: X_design = make_design_matrix(x, order) ######################################################################## ## TODO for students # Fill out function and remove #raise NotImplementedError("Student exercise: compute MSE") ######################################################################## # Get prediction for the polynomial regression model of this order y_hat = X_design @ theta_hats[order] # Compute the residuals residuals = y - y_hat # Compute the MSE mse = np.mean(residuals ** 2) mse_list.append(mse) # Visualize MSE of fits evaluate_fits(order_list, mse_list) ``` [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D3_ModelFitting/solutions/W1D3_Tutorial4_Solution_89324713.py) *Example output:* --- # Summary *Estimated timing of tutorial: 35 minutes* * Linear regression generalizes naturally to multiple dimensions * Linear algebra affords us the mathematical tools to reason and solve such problems beyond the two dimensional case * To change from a linear regression model to a polynomial regression model, we only have to change how the input data is structured * We can choose the complexity of the model by changing the order of the polynomial model fit * Higher order polynomial models tend to have lower MSE on the data they're fit with **Note**: In practice, multidimensional least squares problems can be solved very efficiently (thanks to numerical routines such as LAPACK). 
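To illustrate that last note (this is an extra sketch, not one of the exercises), the explicit inverse used in `ordinary_least_squares` can be swapped for NumPy's LAPACK-backed least-squares solver. The snippet assumes `make_design_matrix`, `ordinary_least_squares`, and the data `x`, `y` from the cells above are still in scope.

```python
# Solve the same polynomial regression with np.linalg.lstsq, which calls a
# LAPACK least-squares routine instead of forming the inverse of X^T X.
X_design = make_design_matrix(x, 3)
theta_lstsq, _, _, _ = np.linalg.lstsq(X_design, y, rcond=None)

# Should closely agree with the normal-equation solution used above
print(theta_lstsq)
print(ordinary_least_squares(X_design, y))
```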
--- # Notation \begin{align} x &\quad \text{input, independent variable}\\ y &\quad \text{response measurement, dependent variable}\\ \epsilon &\quad \text{measurement error, noise contribution}\\ \theta &\quad \text{slope parameter}\\ \hat{\theta} &\quad \text{estimated slope parameter}\\ \mathbf{x} &\quad \text{vector of inputs where each element is a different data point}\\ \mathbf{X} &\quad \text{design matrix}\\ \mathbf{y} &\quad \text{vector of measurements}\\ \mathbf{\hat y} &\quad \text{vector of estimated measurements}\\ \boldsymbol{\theta} &\quad \text{vector of parameters}\\ \boldsymbol{\hat\theta} &\quad \text{vector of estimated parameters}\\ d &\quad \text{dimensionality of input}\\ N &\quad \text{number of samples}\\ \end{align} **Suggested readings** [Introduction to Applied Linear Algebra – Vectors, Matrices, and Least Squares](http://vmls-book.stanford.edu/) Stephen Boyd and Lieven Vandenberghe
5e73b62377c0b577d4ada121a83dd61e5fe01ff4
825,853
ipynb
Jupyter Notebook
tutorials/W1D3_ModelFitting/student/W1D3_Tutorial4.ipynb
luisarai/NMA2021
d6cd66bf32d929f3030d0d66c2c92de55bd2d886
[ "MIT" ]
null
null
null
tutorials/W1D3_ModelFitting/student/W1D3_Tutorial4.ipynb
luisarai/NMA2021
d6cd66bf32d929f3030d0d66c2c92de55bd2d886
[ "MIT" ]
null
null
null
tutorials/W1D3_ModelFitting/student/W1D3_Tutorial4.ipynb
luisarai/NMA2021
d6cd66bf32d929f3030d0d66c2c92de55bd2d886
[ "MIT" ]
null
null
null
561.422842
341,442
0.936738
true
6,151
Qwen/Qwen-72B
1. YES 2. YES
0.752013
0.731059
0.549765
__label__eng_Latn
0.967407
0.115619
# Logistic Regression Notebook version: 2.0 (Nov 21, 2017) 2.1 (Oct 19, 2018) 2.2 (Oct 09, 2019) 2.3 (Oct 27, 2020) Author: Jesús Cid Sueiro (jcid@tsc.uc3m.es) Jerónimo Arenas García (jarenas@tsc.uc3m.es) Changes: v.1.0 - First version v.1.1 - Typo correction. Prepared for slide presentation v.2.0 - Prepared for Python 3.0 (backcompmatible with 2.7) Assumptions for regression model modified v.2.1 - Minor changes regarding notation and assumptions v.2.2 - Updated notation v.2.3 - Improved slides format. Backward compatibility removed ```python # To visualize plots in the notebook %matplotlib inline # Imported libraries import csv import random import matplotlib import matplotlib.pyplot as plt import pylab import numpy as np from mpl_toolkits.mplot3d import Axes3D from sklearn.preprocessing import PolynomialFeatures from sklearn import linear_model ``` ## 1. Introduction ### 1.1. Binary classification The **goal** of a classification problem is to assign a *class* or *category* to every *instance* or *observation* of a data collection. Here, we will assume that * every instance ${\bf x}$ is an $N$-dimensional vector in $\mathbb{R}^N$, and * the class $y$ of sample ${\bf x}$ is an element of a binary set ${\mathcal Y} = \{0, 1\}$. The goal of a classifier is to predict the true value of $y$ after observing ${\bf x}$. We will denote as $\hat{y}$ the classifier output or **decision**. If $y=\hat{y}$, the decision is a **hit**, otherwise $y\neq \hat{y}$ and the decision is an **error**. ### 1.2. Decision theory: the MAP criterion **Decision theory** provides a solution to the classification problem in situations where the relation between instance ${\bf x}$ and its class $y$ is given by a known probabilistic model. Assume that every tuple $({\bf x}, y)$ is an outcome of a random vector $({\bf X}, Y)$ with joint distribution $p_{{\bf X},Y}({\bf x}, y)$. A natural criteria for classification is to select predictor $\hat{Y}=f({\bf x})$ in such a way that the **probability or error**, $P\{\hat{Y} \neq Y\}$ is minimum. Noting that $$ P\{\hat{Y} \neq Y\} = \int P\{\hat{Y} \neq Y | {\bf x}\} p_{\bf X}({\bf x}) d{\bf x} $$ the optimal decision maker should take, for every sample ${\bf x}$, the decision minimizing the **conditional error probability**: \begin{align} \hat{y}^* &= \arg\min_{\hat{y}} P\{Y \neq \hat{y} |{\bf x}\} \\ &= \arg\max_{\hat{y}} P\{Y = \hat{y} |{\bf x}\} \\ \end{align} Thus, the **optimal decision rule** can be expressed as $$ P_{Y|{\bf X}}(1|{\bf x}) \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0}\quad P_{Y|{\bf X}}(0|{\bf x}) $$ or, equivalently $$ P_{Y|{\bf X}}(1|{\bf x}) \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0}\quad \frac{1}{2} $$ The classifier implementing this decision rule is usually referred to as the MAP (*Maximum A Posteriori*) classifier. As we have seen, the MAP classifier minimizes the error probability for binary classification, but the result can also be generalized to multiclass classification problems. ### 1.3. Learning **Classical decision theory** is grounded on the assumption that the probabilistic model relating the observed sample ${\bf X}$ and the true hypothesis $Y$ is known. Unfortunately, this is unrealistic in many applications, where the only available information to construct the classifier is a **dataset** $\mathcal D = \{{\bf x}_k, y_k\}_{k=0}^{K-1}$ of instances and their respective class labels. 
A more realistic formulation of the classification problem is the following: given a dataset $\mathcal D = \{({\bf x}_k, y_k) \in {\mathbb{R}}^N \times {\mathcal Y}, \, k=0,\ldots,{K-1}\}$ of independent and identically distributed (i.i.d.) samples from an ***unknown*** distribution $p_{{\bf X},Y}({\bf x}, y)$, predict the class $y$ of a new sample ${\bf x}$ with the minimum probability of error. ### 1.4. Parametric classifiers Since the probabilistic model generating the data is unknown, the MAP decision rule cannot be applied. However, we can use the dataset to **estimate the a posterior class probability model**, and apply it to approximate the MAP decision maker. **Parametric classifiers** based on this idea assume, additionally, that the posterior class probabilty satisfies some parametric formula: $$ P_{Y|X}(1|{\bf x},{\bf w}) = f_{\bf w}({\bf x}) $$ where ${\bf w}$ is a vector of parameters. Given the expression of the MAP decision maker, classification consists in comparing the value of $f_{\bf w}({\bf x})$ with the threshold $\frac{1}{2}$, and each parameter vector would be associated to a different decision maker. In practice, the dataset ${\mathcal D}$ is used to select a particular parameter vector $\hat{\bf w}$ according to certain criterion. Accordingly, the decision rule becomes $$ f_{\hat{\bf w}}({\bf x}) \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0}\quad \frac{1}{2} $$ In this notebook, we explore one of the most popular model-based parametric classification methods: **logistic regression**. ## 2. Logistic regression. ### 2.1. The logistic function The **logistic regression model** assumes that the binary class label $Y \in \{0,1\}$ of observation $X\in \mathbb{R}^N$ satisfies the expression. $$P_{Y|{\bf X}}(1|{\bf x}, {\bf w}) = g({\bf w}^\intercal{\bf x})$$ $$P_{Y|{\bf,X}}(0|{\bf x}, {\bf w}) = 1-g({\bf w}^\intercal{\bf x})$$ where ${\bf w}$ is a parameter vector and $g(·)$ is the **logistic** function, which is defined by $$g(t) = \frac{1}{1+\exp(-t)}$$ The code below defines and plots the logistic function: ```python # Define the logistic function def logistic(t): #<SOL> #</SOL> # Plot the logistic function t = np.arange(-6, 6, 0.1) z = logistic(t) plt.plot(t, z) plt.xlabel('$t$', fontsize=14) plt.ylabel('$g(t)$', fontsize=14) plt.title('The logistic function') plt.grid() ``` It is straightforward to see that the logistic function has the following properties: - **P1**: Probabilistic output: $\quad 0 \le g(t) \le 1$ - **P2**: Symmetry: $\quad g(-t) = 1-g(t)$ - **P3**: Monotonicity: $\quad g'(t) = g(t)\cdot [1-g(t)] \ge 0$ **Exercise 1**: Verify properties P2 and P3. **Exercise 2**: Implement a function to compute the logistic function, and use it to plot such function in the inverval $[-6,6]$. ### 2.2. Classifiers based on the logistic model. The MAP classifier under a logistic model will have the form $$P_{Y|{\bf X}}(1|{\bf x}, {\bf w}) = g({\bf w}^\intercal{\bf x}) \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0} \quad \frac{1}{2} $$ Therefore $$ 2 \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0} \quad 1 + \exp(-{\bf w}^\intercal{\bf x}) $$ which is equivalent to $${\bf w}^\intercal{\bf x} \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0}\quad 0 $$ Thus, the classifiers based on the logistic model are given by **linear decision boundaries** passing through the origin, ${\bf x} = {\bf 0}$. ```python # Weight vector: w = [4, 8] # Try different weights # Create a rectangular grid. 
x_min = -1
x_max = 1
h = (x_max - x_min) / 200
xgrid = np.arange(x_min, x_max, h)
xx0, xx1 = np.meshgrid(xgrid, xgrid)

# Compute the logistic map for the given weights, and plot
Z = logistic(w[0]*xx0 + w[1]*xx1)
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot_surface(xx0, xx1, Z, cmap=plt.cm.copper)
ax.contour(xx0, xx1, Z, levels=[0.5], colors='b', linewidths=(3,))
plt.xlabel('$x_0$')
plt.ylabel('$x_1$')
ax.set_zlabel('P(1|x,w)')
plt.show()
```

The next code fragment represents the output of the same classifier in the $x_0$-$x_1$ plane, encoding the value of the logistic function in the color map.

```python
CS = plt.contourf(xx0, xx1, Z)
CS2 = plt.contour(CS, levels=[0.5], colors='m', linewidths=(3,))
plt.xlabel('$x_0$')
plt.ylabel('$x_1$')
plt.colorbar(CS, ticks=[0, 0.5, 1])
plt.show()
```

### 2.3. Nonlinear classifiers.

The logistic model can be extended to construct non-linear classifiers by using **non-linear data transformations**. A general form for a nonlinear logistic regression model is

$$P_{Y|{\bf X}}(1|{\bf x}, {\bf w}) = g[{\bf w}^\intercal{\bf z}({\bf x})] $$

where ${\bf z}({\bf x})$ is an arbitrary nonlinear transformation of the original variables. The decision boundary in that case is given by the equation

$$ {\bf w}^\intercal{\bf z} = 0 $$

**Exercise 3**: Modify the code above to generate a 3D surface plot of the polynomial logistic regression model given by

$$ P_{Y|{\bf X}}(1|{\bf x}, {\bf w}) = g(1 + 10 x_0 + 10 x_1 - 20 x_0^2 + 5 x_0 x_1 + x_1^2) $$

```python
# Weight vector:
w = [1, 10, 10, -20, 5, 1] # Try different weights

# Create a rectangular grid.
x_min = -1
x_max = 1
h = (x_max - x_min) / 200
xgrid = np.arange(x_min, x_max, h)
xx0, xx1 = np.meshgrid(xgrid, xgrid)

# Compute the logistic map for the given weights
# Z = <FILL IN>

# Plot the logistic map
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot_surface(xx0, xx1, Z, cmap=plt.cm.copper)
plt.xlabel('$x_0$')
plt.ylabel('$x_1$')
ax.set_zlabel('P(1|x,w)')
plt.show()
```

```python
CS = plt.contourf(xx0, xx1, Z)
CS2 = plt.contour(CS, levels=[0.5], colors='m', linewidths=(3,))
plt.xlabel('$x_0$')
plt.ylabel('$x_1$')
plt.colorbar(CS, ticks=[0, 0.5, 1])
plt.show()
```

## 3. Inference

Remember that the idea of parametric classification is to use the training data set $\mathcal D = \{({\bf x}_k, y_k) \in {\mathbb{R}}^N \times \{0,1\}, k=0,\ldots,{K-1}\}$ to estimate ${\bf w}$. The estimate, $\hat{\bf w}$, can be used to compute the label prediction for any new observation as

$$\hat{y} = \arg\max_y P_{Y|{\bf X}}(y|{\bf x},\hat{\bf w}).$$

In this notebook, we will discuss two different approaches to the estimation of ${\bf w}$:

* **Maximum Likelihood** (ML): $\hat{\bf w}_{\text{ML}} = \arg\max_{\bf w} P_{{\mathcal D}|{\bf W}}({\mathcal D}|{\bf w})$
* **Maximum A Posteriori** (MAP): $\hat{\bf w}_{\text{MAP}} = \arg\max_{\bf w} p_{{\bf W}|{\mathcal D}}({\bf w}|{\mathcal D})$

For the mathematical derivation of the logistic regression algorithm, the following representation of the logistic model will be useful: using the **symmetry** property of the logistic function, we can write

$$P_{Y|{\bf X}}(0|{\bf x}, {\bf w}) = 1-g\left({\bf w}^\intercal{\bf z}({\bf x})\right) = g\left(-{\bf w}^\intercal{\bf z}({\bf x})\right)$$

thus

$$P_{Y|{\bf X}}(y|{\bf x}, {\bf w}) = g\left(\overline{y}{\bf w}^\intercal{\bf z}({\bf x})\right)$$

where $\overline{y} = 2y-1$ is a **symmetrized label** ($\overline{y}\in\{-1, 1\}$).

### 3.1.
Model assumptions In the following, we will make the following assumptions: - **A1**. (Logistic Regression): We assume a logistic model for the *a posteriori* probability of ${Y}$ given ${\bf X}$, i.e., $$P_{Y|{\bf X}}(y|{\bf x}, {\bf w}) = g\left({\bar y}\cdot {\bf w}^\intercal{\bf z}({\bf x})\right).$$ - **A2**. All samples in ${\mathcal D}$ have been generated from the same distribution, $p_{{\bf X}, Y| {\bf W}}({\bf x}, y| {\bf w})$. - **A3**. Input variables $\bf x$ do not depend on $\bf w$. This implies that $p({\bf x}|{\bf w}) = p({\bf x})$ - **A4**. Targets $y_0, \cdots, y_{K-1}$ are statistically independent given $\bf w$ and the inputs ${\bf x}_0, \cdots, {\bf x}_{K-1}$, that is: $$P(y_0, \cdots, y_{K-1} | {\bf x}_0, \cdots, {\bf x}_{K-1}, {\bf w}) = \prod_{k=0}^{K-1} P(y_k | {\bf x}_k, {\bf w})$$ ### 3.2. ML estimation. The ML estimate is defined as $$\hat{\bf w}_{\text{ML}} = \arg\max_{\bf w} P_{{\mathcal D}|{\bf W}}({\mathcal D}|{\bf w})$$ Ussing assumptions A2 and A3 above, we have that \begin{align} P_{{\mathcal D}|{\bf W}}({\mathcal D}|{\bf w}) & = p(y_0, \cdots, y_{K-1},{\bf x}_0, \cdots, {\bf x}_{K-1}| {\bf w}) \\ & = P(y_0, \cdots, y_{K-1}|{\bf x}_0, \cdots, {\bf x}_{K-1}, {\bf w}) \; p({\bf x}_0, \cdots, {\bf x}_{K-1}| {\bf w}) \\ & = P(y_0, \cdots, y_{K-1}|{\bf x}_0, \cdots, {\bf x}_{K-1}, {\bf w}) \; p({\bf x}_0, \cdots, {\bf x}_{K-1})\end{align} Finally, using assumption A4, we can formulate the ML estimation of $\bf w$ as the resolution of the following **optimization problem** \begin{align} \hat {\bf w}_\text{ML} & = \arg \max_{\bf w} P(y_0, \cdots, y_{K-1}|{\bf x}_0, \cdots, {\bf x}_{K-1}, {\bf w}) \\ & = \arg \max_{\bf w} \prod_{k=0}^{K-1} P(y_k|{\bf x}_k, {\bf w}) \\ & = \arg \max_{\bf w} \sum_{k=0}^{K-1} \log P(y_k|{\bf x}_k, {\bf w}) \\ & = \arg \min_{\bf w} \sum_{k=0}^{K-1} - \log P(y_k|{\bf x}_k, {\bf w}) \end{align} where the arguments of the maximization or minimization problems of the last three lines are usually referred to as the **likelihood**, **log-likelihood** $\left[L(\bf w)\right]$, and **negative log-likelihood** $\left[\text{NLL}(\bf w)\right]$, respectively. Now, using A1 (the logistic model) \begin{align} \text{NLL}({\bf w}) &= - \sum_{k=0}^{K-1}\log\left[g\left(\overline{y}_k{\bf w}^\intercal {\bf z}_k\right)\right] \\ &= \sum_{k=0}^{K-1}\log\left[1+\exp\left(-\overline{y}_k{\bf w}^\intercal {\bf z}_k\right)\right] \end{align} where ${\bf z}_k={\bf z}({\bf x}_k)$. It can be shown that $\text{NLL}({\bf w})$ is a **convex** and **differentiable** function of ${\bf w}$. Therefore, its minimum is a point with zero gradient. \begin{align} \nabla_{\bf w} \text{NLL}(\hat{\bf w}_{\text{ML}}) &= - \sum_{k=0}^{K-1} \frac{\exp\left(-\overline{y}_k\hat{\bf w}_{\text{ML}}^\intercal {\bf z}_k\right) \overline{y}_k {\bf z}_k} {1+\exp\left(-\overline{y}_k\hat{\bf w}_{\text{ML}}^\intercal {\bf z}_k \right)} = \\ &= - \sum_{k=0}^{K-1} \left[y_k-g(\hat{\bf w}_{\text{ML}}^T {\bf z}_k)\right] {\bf z}_k = 0 \end{align} Unfortunately, $\hat{\bf w}_{\text{ML}}$ cannot be taken out from the above equation, and some iterative optimization algorithm must be used to search for the minimum. ### 3.3. Gradient descent. A simple iterative optimization algorithm is <a href = https://en.wikipedia.org/wiki/Gradient_descent> gradient descent</a>. \begin{align} {\bf w}_{n+1} = {\bf w}_n - \rho_n \nabla_{\bf w} \text{NLL}({\bf w}_n) \end{align} where $\rho_n >0$ is the *learning step*. 
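Before specializing the update rule to the logistic model (done right below), the following short sketch illustrates the two quantities derived above — the negative log-likelihood and one gradient-descent step — with plain NumPy. The arrays `Z`, `y`, the weights `w` and the step `rho` are placeholders for illustration only; the actual fitting code for the Iris data is developed later in this notebook.

```python
import numpy as np

def nll(w, Z, y):
    """Negative log-likelihood of the logistic model (labels y in {0,1})."""
    ybar = 2*y - 1                        # symmetrized labels in {-1, 1}
    return np.sum(np.log(1 + np.exp(-ybar * (Z @ w))))

def nll_gradient(w, Z, y):
    """Gradient of the NLL: -sum_k [y_k - g(w' z_k)] z_k."""
    p = 1 / (1 + np.exp(-(Z @ w)))        # g(w' z_k) for every sample
    return -Z.T @ (y - p)

def gradient_descent_step(w, Z, y, rho):
    """One iteration of w_{n+1} = w_n - rho * grad NLL(w_n)."""
    return w - rho * nll_gradient(w, Z, y)
```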
Applying the gradient descent rule to logistic regression, we get the following algorithm: \begin{align} {\bf w}_{n+1} &= {\bf w}_n + \rho_n \sum_{k=0}^{K-1} \left[y_k-g({\bf w}_n^\intercal {\bf z}_k)\right] {\bf z}_k \end{align} #### Gradient descent in matrix form Defining vectors \begin{align} {\bf y} &= [y_0,\ldots,y_{K-1}]^\top \\ \hat{\bf p}_n &= [g({\bf w}_n^\top {\bf z}_0), \ldots, g({\bf w}_n^\top {\bf z}_{K-1})]^\top \end{align} and matrix \begin{align} {\bf Z} = \left[{\bf z}_0,\ldots,{\bf z}_{K-1}\right]^\top \end{align} we can write \begin{align} {\bf w}_{n+1} &= {\bf w}_n + \rho_n {\bf Z}^\top \left({\bf y}-\hat{\bf p}_n\right) \end{align} In the following, we will explore the behavior of the gradient descend method using the Iris Dataset. ```python # Adapted from a notebook by Jason Brownlee def loadDataset(filename, split): xTrain, cTrain, xTest, cTest = [], [], [], [] with open(filename, 'r') as csvfile: lines = csv.reader(csvfile) dataset = list(lines) for i in range(len(dataset)-1): for y in range(4): dataset[i][y] = float(dataset[i][y]) item = dataset[i] if random.random() < split: xTrain.append(item[0:4]) cTrain.append(item[4]) else: xTest.append(item[0:4]) cTest.append(item[4]) return xTrain, cTrain, xTest, cTest xTrain_all, cTrain_all, xTest_all, cTest_all = loadDataset('iris.data', 0.66) nTrain_all = len(xTrain_all) nTest_all = len(xTest_all) print('Train:', nTrain_all) print('Test:', nTest_all) ``` Now, we select two classes and two attributes. ```python # Select attributes i = 0 # Try 0,1,2,3 j = 1 # Try 0,1,2,3 with j!=i # Select two classes c0 = 'Iris-versicolor' c1 = 'Iris-virginica' # Select two coordinates ind = [i, j] # Take training test X_tr = np.array([[xTrain_all[n][i] for i in ind] for n in range(nTrain_all) if cTrain_all[n]==c0 or cTrain_all[n]==c1]) C_tr = [cTrain_all[n] for n in range(nTrain_all) if cTrain_all[n]==c0 or cTrain_all[n]==c1] Y_tr = np.array([int(c==c1) for c in C_tr]) n_tr = len(X_tr) # Take test set X_tst = np.array([[xTest_all[n][i] for i in ind] for n in range(nTest_all) if cTest_all[n]==c0 or cTest_all[n]==c1]) C_tst = [cTest_all[n] for n in range(nTest_all) if cTest_all[n]==c0 or cTest_all[n]==c1] Y_tst = np.array([int(c==c1) for c in C_tst]) n_tst = len(X_tst) ``` #### 3.2.2. Data normalization Normalization of data is a common pre-processing step in many machine learning algorithms. Its goal is to get a dataset where all input coordinates have a similar scale. Learning algorithms usually show less instabilities and convergence problems when data are normalized. We will define a normalization function that returns a training data matrix with zero sample mean and unit sample variance. ```python def normalize(X, mx=None, sx=None): # Compute means and standard deviations if mx is None: mx = np.mean(X, axis=0) if sx is None: sx = np.std(X, axis=0) # Normalize X0 = (X-mx)/sx return X0, mx, sx ``` Now, we can normalize training and test data. Observe in the code that **the same transformation should be applied to training and test data**. This is the reason why normalization with the test data is done using the means and the variances computed with the training set. ```python # Normalize data Xn_tr, mx, sx = normalize(X_tr) Xn_tst, mx, sx = normalize(X_tst, mx, sx) ``` The following figure generates a plot of the normalized training data. 
```python # Separate components of x into different arrays (just for the plots) x0c0 = [Xn_tr[n][0] for n in range(n_tr) if Y_tr[n]==0] x1c0 = [Xn_tr[n][1] for n in range(n_tr) if Y_tr[n]==0] x0c1 = [Xn_tr[n][0] for n in range(n_tr) if Y_tr[n]==1] x1c1 = [Xn_tr[n][1] for n in range(n_tr) if Y_tr[n]==1] # Scatterplot. labels = {'Iris-setosa': 'Setosa', 'Iris-versicolor': 'Versicolor', 'Iris-virginica': 'Virginica'} plt.plot(x0c0, x1c0,'r.', label=labels[c0]) plt.plot(x0c1, x1c1,'g+', label=labels[c1]) plt.xlabel('$x_' + str(ind[0]) + '$') plt.ylabel('$x_' + str(ind[1]) + '$') plt.legend(loc='best') plt.axis('equal') plt.show() ``` In order to apply the gradient descent rule, we need to define two methods: - A `fit` method, that receives the training data and returns the model weights and the value of the negative log-likelihood during all iterations. - A `predict` method, that receives the model weight and a set of inputs, and returns the posterior class probabilities for that input, as well as their corresponding class predictions. ```python def logregFit(Z_tr, Y_tr, rho, n_it): # Data dimension n_dim = Z_tr.shape[1] # Initialize variables nll_tr = np.zeros(n_it) pe_tr = np.zeros(n_it) Y_tr2 = 2*Y_tr - 1 # Transform labels into binary symmetric. w = np.random.randn(n_dim,1) # Running the gradient descent algorithm for n in range(n_it): # Compute posterior probabilities for weight w p1_tr = logistic(np.dot(Z_tr, w)) # Compute negative log-likelihood # (note that this is not required for the weight update, only for nll tracking) nll_tr[n] = np.sum(np.log(1 + np.exp(-np.dot(Y_tr2*Z_tr, w)))) # Update weights w += rho*np.dot(Z_tr.T, Y_tr - p1_tr) return w, nll_tr def logregPredict(Z, w): # Compute posterior probability of class 1 for weights w. p = logistic(np.dot(Z, w)).flatten() # Class D = [int(round(pn)) for pn in p] return p, D ``` We can test the behavior of the gradient descent method by fitting a logistic regression model with ${\bf z}({\bf x}) = (1, {\bf x}^\top)^\top$. ```python # Parameters of the algorithms rho = float(1)/50 # Learning step n_it = 200 # Number of iterations # Compute Z's Z_tr = np.c_[np.ones(n_tr), Xn_tr] Z_tst = np.c_[np.ones(n_tst), Xn_tst] n_dim = Z_tr.shape[1] # Convert target arrays to column vectors Y_tr2 = Y_tr[np.newaxis].T Y_tst2 = Y_tst[np.newaxis].T # Running the gradient descent algorithm w, nll_tr = logregFit(Z_tr, Y_tr2, rho, n_it) # Classify training and test data p_tr, D_tr = logregPredict(Z_tr, w) p_tst, D_tst = logregPredict(Z_tst, w) # Compute error rates E_tr = D_tr!=Y_tr E_tst = D_tst!=Y_tst # Error rates pe_tr = float(sum(E_tr)) / n_tr pe_tst = float(sum(E_tst)) / n_tst ``` ```python # NLL plot. plt.plot(range(n_it), nll_tr,'b.:', label='Train') plt.xlabel('Iteration') plt.ylabel('Negative Log-Likelihood') plt.legend() print(f'The optimal weights are: {w}') print('The final error rates are:') print(f'- Training: {pe_tr}') print(f'- Test: {pe_tst}') print(f'The NLL after training is {nll_tr[len(nll_tr)-1]}') ``` #### 3.2.3. Free parameters Under certain conditions, the gradient descent method can be shown to converge asymptotically (i.e. as the number of iterations goes to infinity) to the ML estimate of the logistic model. However, in practice, the final estimate of the weights ${\bf w}$ depend on several factors: - Number of iterations - Initialization - Learning step **Exercise 4**: Visualize the variability of gradient descent caused by initializations. 
To do so, fix the number of iterations to 200 and the learning step, and execute the gradient descent 100 times, storing the training error rate of each execution. Plot the histogram of the error rate values. Note that you can do this exercise with a loop over the 100 executions, including the code in the previous code slide inside the loop, with some proper modifications. To plot a histogram of the values in array `p` with `n`bins, you can use `plt.hist(p, n)` ```python ``` ##### 3.2.3.1. Learning step The learning step, $\rho$, is a free parameter of the algorithm. Its choice is critical for the convergence of the algorithm. Too large values of $\rho$ make the algorithm diverge. For too small values, the convergence gets very slow and more iterations are required for a good convergence. **Exercise 5**: Observe the evolution of the negative log-likelihood with the number of iterations for different values of $\rho$. It is easy to check that, for large enough $\rho$, the gradient descent method does not converge. Can you estimate (through manual observation) an approximate value of $\rho$ stating a boundary between convergence and divergence? ```python ``` **Exercise 6**: In this exercise we explore the influence of the learning step more sistematically. Use the code in the previouse exercises to compute, for every value of $\rho$, the average error rate over 100 executions. Plot the average error rate vs. $\rho$. Note that you should explore the values of $\rho$ in a logarithmic scale. For instance, you can take $\rho = 1, \frac{1}{10}, \frac{1}{100}, \frac{1}{1000}, \ldots$ ```python ``` In practice, the selection of $\rho$ may be a matter of trial an error. Also there is some theoretical evidence that the learning step should decrease along time up to cero, and the sequence $\rho_n$ should satisfy two conditions: - C1: $\sum_{n=0}^{\infty} \rho_n^2 < \infty$ (decrease slowly) - C2: $\sum_{n=0}^{\infty} \rho_n = \infty$ (but not too slowly) For instance, we can take $\rho_n= \frac{1}{n}$. Another common choice is $\rho_n = \frac{\alpha}{1+\beta n}$ where $\alpha$ and $\beta$ are also free parameters that can be selected by trial and error with some heuristic method. #### 3.2.4. Visualizing the posterior map. We can also visualize the posterior probability map estimated by the logistic regression model for the estimated weights. ```python # Create a regtangular grid. x_min, x_max = Xn_tr[:, 0].min(), Xn_tr[:, 0].max() y_min, y_max = Xn_tr[:, 1].min(), Xn_tr[:, 1].max() dx = x_max - x_min dy = y_max - y_min h = dy /400 xx, yy = np.meshgrid(np.arange(x_min - 0.1 * dx, x_max + 0.1 * dx, h), np.arange(y_min - 0.1 * dx, y_max + 0.1 * dy, h)) X_grid = np.array([xx.ravel(), yy.ravel()]).T # Compute Z's Z_grid = np.c_[np.ones(X_grid.shape[0]), X_grid] # Compute the classifier output for all samples in the grid. pp, dd = logregPredict(Z_grid, w) ``` ```python # Paint output maps pylab.rcParams['figure.figsize'] = 6, 6 # Set figure size # Color plot plt.plot(x0c0, x1c0,'r.', label=labels[c0]) plt.plot(x0c1, x1c1,'g+', label=labels[c1]) plt.xlabel('$x_' + str(ind[0]) + '$') plt.ylabel('$x_' + str(ind[1]) + '$') plt.legend(loc='best') plt.axis('equal') pp = pp.reshape(xx.shape) CS = plt.contourf(xx, yy, pp, cmap=plt.cm.copper) plt.contour(xx, yy, pp, levels=[0.5], colors='b', linewidths=(3,)) plt.colorbar(CS, ticks=[0, 0.5, 1]) plt.show() ``` #### 3.2.5. Polynomial Logistic Regression The error rates of the logistic regression model can be potentially reduced by using polynomial transformations. 
To compute the polynomial transformation up to a given degree, we can use the `PolynomialFeatures` method in `sklearn.preprocessing`. ```python # Parameters of the algorithms rho = float(1)/50 # Learning step n_it = 500 # Number of iterations g = 5 # Degree of polynomial # Compute Z_tr poly = PolynomialFeatures(degree=g) Z_tr = poly.fit_transform(Xn_tr) # Normalize columns (this is useful to make algorithms more stable).) Zn, mz, sz = normalize(Z_tr[:,1:]) Z_tr = np.concatenate((np.ones((n_tr,1)), Zn), axis=1) # Compute Z_tst Z_tst = poly.fit_transform(Xn_tst) Zn, mz, sz = normalize(Z_tst[:,1:], mz, sz) Z_tst = np.concatenate((np.ones((n_tst,1)), Zn), axis=1) # Convert target arrays to column vectors Y_tr2 = Y_tr[np.newaxis].T Y_tst2 = Y_tst[np.newaxis].T # Running the gradient descent algorithm w, nll_tr = logregFit(Z_tr, Y_tr2, rho, n_it) # Classify training and test data p_tr, D_tr = logregPredict(Z_tr, w) p_tst, D_tst = logregPredict(Z_tst, w) # Compute error rates E_tr = D_tr!=Y_tr E_tst = D_tst!=Y_tst # Error rates pe_tr = float(sum(E_tr)) / n_tr pe_tst = float(sum(E_tst)) / n_tst ``` ```python # NLL plot. plt.plot(range(n_it), nll_tr,'b.:', label='Train') plt.xlabel('Iteration') plt.ylabel('Negative Log-Likelihood') plt.legend() print(f'The optimal weights are: {w.T}') print('The final error rates are:') print(f'- Training: {pe_tr} \n- Test: {pe_tst}') print('The NLL after training is', nll_tr[len(nll_tr)-1]) ``` Visualizing the posterior map we can se that the polynomial transformation produces nonlinear decision boundaries. ```python # Compute Z_grid Z_grid = poly.fit_transform(X_grid) Zn, mz, sz = normalize(Z_grid[:,1:], mz, sz) Z_grid = np.concatenate((np.ones((Z_grid.shape[0],1)), Zn), axis=1) # Compute the classifier output for all samples in the grid. pp, dd = logregPredict(Z_grid, w) pp = pp.reshape(xx.shape) ``` ```python # Paint output maps pylab.rcParams['figure.figsize'] = 6, 6 # Set figure size plt.plot(x0c0, x1c0,'r.', label=labels[c0]) plt.plot(x0c1, x1c1,'g+', label=labels[c1]) plt.xlabel('$x_' + str(ind[0]) + '$') plt.ylabel('$x_' + str(ind[1]) + '$') plt.axis('equal') plt.legend(loc='best') CS = plt.contourf(xx, yy, pp, cmap=plt.cm.copper) plt.contour(xx, yy, pp, levels=[0.5], colors='b', linewidths=(3,)) plt.colorbar(CS, ticks=[0, 0.5, 1]) plt.show() ``` ## 4. Regularization and MAP estimation. ### 4.1 MAP estimation An alternative to the ML estimation of the weights in logistic regression is Maximum A Posteriori estimation. Modelling the logistic regression weights as a random variable with prior distribution $p_{\bf W}({\bf w})$, the **MAP estimate** is defined as $$ \hat{\bf w}_{\text{MAP}} = \arg\max_{\bf w} p({\bf w}|{\mathcal D}) $$ The posterior density $p({\bf w}|{\mathcal D})$ is related to the likelihood function and the prior density of the weights, $p_{\bf W}({\bf w})$ through the **Bayes rule** $$ p({\bf w}|{\mathcal D}) = \frac{P\left({\mathcal D}|{\bf w}\right) \; p_{\bf W}({\bf w})} {p\left({\mathcal D}\right)} $$ In general, the denominator in this expression cannot be computed analytically. However, it is not required for MAP estimation because it does not depend on ${\bf w}$. 
Therefore, the MAP solution is given by \begin{align} \hat{\bf w}_{\text{MAP}} & = \arg\max_{\bf w} \left\{ P\left({\mathcal D}|{\bf w}\right) \; p_{\bf W}({\bf w}) \right\}\\ & = \arg\max_{\bf w} \left\{ L({\mathbf w}) + \log p_{\bf W}({\bf w})\right\} \\ & = \arg\min_{\bf w} \left\{ \text{NLL}({\mathbf w}) - \log p_{\bf W}({\bf w})\right\} \end{align} In the light of this expression, we can conclude that the MAP solution is affected by two terms: - The likelihood, which takes large values for parameter vectors $\bf w$ that fit well the training data (smaller $\text{NLL}$ values) - The prior distribution of weights $p_{\bf W}({\bf w})$, which expresses our *a priori* preference for some solutions. ### 4.2. Regularization Even though the prior distribution has a natural interpretation as a model of our knowledge about $p({\bf w})$ before observing the data, its choice is frequenty motivated by the need to avoid data **overfitting**. **Data overfitting** is a frequent problem in ML estimation when the dimension of ${\bf w}$ is much higher that the dimension of the input ${\bf x}$: the ML solution can be too adjusted to the training data, while the test error rate is large. In practice **we recur to prior distributions that take large values when $\|{\bf w}\|$ is small** (associated to smooth classification borders). This helps to improve **generalization**. In this way, the MAP criterion adds a **penalty term** to the ML objective, that penalizes parameter vectors for which the prior distribution of weights takes small values. In machine learning, the process of introducing penalty terms to avoid overfitting is usually named **regularization**. ### 4.3 MAP estimation with Gaussian prior If we assume that ${\bf W}$ follows a **zero-mean Gaussian** random variable with variance matrix $v{\bf I}$, $$ p_{\bf W}({\bf w}) = \frac{1}{(2\pi v)^{N/2}} \exp\left(-\frac{1}{2v}\|{\bf w}\|^2\right) $$ the **MAP estimate** becomes \begin{align} \hat{\bf w}_{\text{MAP}} &= \arg\min_{\bf w} \left\{\text{NLL}({\bf w}) + \frac{1}{C}\|{\bf w}\|^2 \right\} \end{align} where $C = 2v$. Note that the **regularization term** associated to the prior penalizes parameter vectors with large components. Parameter $C$ controls the regularizatin, and it is named the **inverse regularization strength**. Noting that $$\nabla_{\bf w}\left\{\text{NLL}({\bf w}) + \frac{1}{C}\|{\bf w}\|^2\right\} = - {\bf Z} \left({\bf y}-\hat{\bf p}_n\right) + \frac{2}{C}{\bf w}, $$ we obtain the following **gradient descent rule** for MAP estimation \begin{align} {\bf w}_{n+1} &= \left(1-\frac{2\rho_n}{C}\right){\bf w}_n + \rho_n {\bf Z} \left({\bf y}-\hat{\bf p}_n\right) \end{align} Note that the regularization term "pushes" the weights towards zero. ### 4.4 MAP estimation with Laplacian prior If we assume that ${\bf W}$ follows a multivariate zero-mean Laplacian distribution given by $$ p_{\bf W}({\bf w}) = \frac{1}{(2 C)^{N}} \exp\left(-\frac{1}{C}\|{\bf w}\|_1\right) $$ (where $\|{\bf w}\|=|w_1|+\ldots+|w_N|$ is the $L_1$ norm of ${\bf w}$), the MAP estimate becomes \begin{align} \hat{\bf w}_{\text{MAP}} &= \arg\min_{\bf w} \left\{\text{NLL}({\bf w}) + \frac{1}{C}\|{\bf w}\|_1 \right\} \end{align} Parameter $C$ is named the *inverse regularization strength*. **Exercise 7**: Derive the gradient descent rules for MAP estimation of the logistic regression weights with Laplacian prior. ```python ``` ## 5. Other optimization algorithms ### 5.1. Stochastic Gradient descent. 
Stochastic gradient descent (SGD) is based on the idea of using a single sample at each iteration of the learning algorithm. The SGD rule for ML logistic regression is \begin{align} {\bf w}_{n+1} &= {\bf w}_n + \rho_n {\bf z}_n \left(y_n-\hat{p}_n\right) \end{align} Once all samples in the training set have been applied, the algorith can continue by applying the training set several times. The computational cost of each iteration of SGD is much smaller than that of gradient descent, though it usually needs many more iterations to converge. **Exercise 8**: Modify logregFit to implement an algorithm that applies the SGD rule. ### 5.2. Newton's method Assume that the function to be minimized, $C({\bf w})$, can be approximated by its **second order Taylor series expansion** around ${\bf w}_0$ $$ C({\bf w}) \approx C({\bf w}_0) + \nabla_{\bf w}^\top C({\bf w}_0)({\bf w}-{\bf w}_0) + \frac{1}{2}({\bf w}-{\bf w}_0)^\top{\bf H}({\bf w}_0)({\bf w}-{\bf w}_0) $$ where ${\bf H}({\bf w})$ is the <a href=https://en.wikipedia.org/wiki/Hessian_matrix> **Hessian matrix**</a> of $C$ at ${\bf w}$. Taking the gradient of $C({\bf w})$, and setting the result to ${\bf 0}$, the minimum of C around ${\bf w}_0$ can be approximated as $$ {\bf w}^* = {\bf w}_0 - {\bf H}({\bf w}_0)^{-1} \nabla_{\bf w}^\top C({\bf w}_0) $$ Since the second order polynomial is only an approximation to $C$, ${\bf w}^*$ is only an approximation to the optimal weight vector, but we can expect ${\bf w}^*$ to be closer to the minimizer of $C$ than ${\bf w}_0$. Thus, we can repeat the process, computing a second order approximation around ${\bf w}^*$ and a new approximation to the minimizer. <a href=https://en.wikipedia.org/wiki/Newton%27s_method_in_optimization> **Newton's method**</a> is based on this idea. At each optimization step, the function to be minimized is approximated by a second order approximation using a Taylor series expansion around the current estimate. As a result, the learning rule becomes $$\hat{\bf w}_{n+1} = \hat{\bf w}_{n} - \rho_n {\bf H}({\bf w}_n)^{-1} \nabla_{{\bf w}}C({\bf w}_n) $$ #### 5.2.1. Example: MAP estimation with Gaussian prior. For instance, for the MAP estimate with Gaussian prior, the *Hessian* matrix becomes $$ {\bf H}({\bf w}) = \frac{2}{C}{\bf I} + \sum_{k=0}^{K-1} g({\bf w}^\top {\bf z}_k) \left[1-g({\bf w}^\top {\bf z}_k)\right]{\bf z}_k {\bf z}_k^\top $$ Defining diagonal matrix $$ {\mathbf S}({\bf w}) = \text{diag}\left[g({\bf w}^\top {\bf z}_k) \left(1-g({\bf w}^\top {\bf z}_k)\right)\right] $$ the Hessian matrix can be written in more compact form as $$ {\bf H}({\bf w}) = \frac{2}{C}{\bf I} + {\bf Z}^\top {\bf S}({\bf w}) {\bf Z} $$ Therefore, the Newton's algorithm for logistic regression becomes \begin{align} {\bf w}_{n+1} = {\bf w}_{n} + \rho_n \left(\frac{2}{C}{\bf I} + {\bf Z}^\top {\bf S}({\bf w}_{n}) {\bf Z} \right)^{-1} {\bf Z}^\top \left({\bf y}-\hat{\bf p}_n\right) \end{align} Some variants of the Newton method are implemented in the <a href="http://scikit-learn.org/stable/"> Scikit-learn </a> package. 
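For instance, a Newton-type solver can be requested through the `solver` argument of scikit-learn's `LogisticRegression`. A minimal sketch (the value of `C` below is an arbitrary choice, and `Z_tr`, `Y_tr` are the arrays already built above):

```python
from sklearn.linear_model import LogisticRegression

# Newton-conjugate-gradient solver; C is the inverse regularization strength.
clf = LogisticRegression(solver='newton-cg', C=1000)
clf.fit(Z_tr, Y_tr)
print('Training error rate:', float(np.mean(clf.predict(Z_tr) != Y_tr)))
```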
```python def logregFit2(Z_tr, Y_tr, rho, n_it, C=1e4): # Compute Z's r = 2.0/C n_dim = Z_tr.shape[1] # Initialize variables nll_tr = np.zeros(n_it) pe_tr = np.zeros(n_it) w = np.random.randn(n_dim,1) # Running the gradient descent algorithm for n in range(n_it): p_tr = logistic(np.dot(Z_tr, w)) sk = np.multiply(p_tr, 1-p_tr) S = np.diag(np.ravel(sk.T)) # Compute negative log-likelihood nll_tr[n] = - np.dot(Y_tr.T, np.log(p_tr)) - np.dot((1-Y_tr).T, np.log(1-p_tr)) # Update weights invH = np.linalg.inv(r*np.identity(n_dim) + np.dot(Z_tr.T, np.dot(S, Z_tr))) w += rho*np.dot(invH, np.dot(Z_tr.T, Y_tr - p_tr)) return w, nll_tr ``` ```python # Parameters of the algorithms rho = float(1)/50 # Learning step n_it = 500 # Number of iterations C = 1000 g = 4 # Compute Z_tr poly = PolynomialFeatures(degree=g) Z_tr = poly.fit_transform(X_tr) # Normalize columns (this is useful to make algorithms more stable).) Zn, mz, sz = normalize(Z_tr[:,1:]) Z_tr = np.concatenate((np.ones((n_tr,1)), Zn), axis=1) # Compute Z_tst Z_tst = poly.fit_transform(X_tst) Zn, mz, sz = normalize(Z_tst[:,1:], mz, sz) Z_tst = np.concatenate((np.ones((n_tst,1)), Zn), axis=1) # Convert target arrays to column vectors Y_tr2 = Y_tr[np.newaxis].T Y_tst2 = Y_tst[np.newaxis].T # Running the gradient descent algorithm w, nll_tr = logregFit2(Z_tr, Y_tr2, rho, n_it, C) # Classify training and test data p_tr, D_tr = logregPredict(Z_tr, w) p_tst, D_tst = logregPredict(Z_tst, w) # Compute error rates E_tr = D_tr!=Y_tr E_tst = D_tst!=Y_tst # Error rates pe_tr = float(sum(E_tr)) / n_tr pe_tst = float(sum(E_tst)) / n_tst ``` ```python # NLL plot. plt.plot(range(n_it), nll_tr,'b.:', label='Train') plt.xlabel('Iteration') plt.ylabel('Negative Log-Likelihood') plt.legend() print('The final error rates are:') print('- Training:', str(pe_tr)) print('- Test:', str(pe_tst)) print('The NLL after training is:', str(nll_tr[len(nll_tr)-1])) ``` ## 6. Logistic regression in Scikit Learn. The <a href="http://scikit-learn.org/stable/"> scikit-learn </a> package includes an efficient implementation of <a href="http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html#sklearn.linear_model.LogisticRegression"> logistic regression</a>. To use it, we must first create a classifier object, specifying the parameters of the logistic regression algorithm. ```python # Create a logistic regression object. LogReg = linear_model.LogisticRegression(C=1.0) # Compute Z_tr poly = PolynomialFeatures(degree=g) Z_tr = poly.fit_transform(Xn_tr) # Normalize columns (this is useful to make algorithms more stable).) Zn, mz, sz = normalize(Z_tr[:,1:]) Z_tr = np.concatenate((np.ones((n_tr,1)), Zn), axis=1) # Compute Z_tst Z_tst = poly.fit_transform(Xn_tst) Zn, mz, sz = normalize(Z_tst[:,1:], mz, sz) Z_tst = np.concatenate((np.ones((n_tst,1)), Zn), axis=1) # Fit model to data. LogReg.fit(Z_tr, Y_tr) # Classify training and test data D_tr = LogReg.predict(Z_tr) D_tst = LogReg.predict(Z_tst) ``` ```python # Compute error rates E_tr = D_tr!=Y_tr E_tst = D_tst!=Y_tst # Error rates pe_tr = float(sum(E_tr)) / n_tr pe_tst = float(sum(E_tst)) / n_tst print('The final error rates are:') print('- Training:', str(pe_tr)) print('- Test:', str(pe_tst)) # Compute Z_grid Z_grid = poly.fit_transform(X_grid) n_grid = Z_grid.shape[0] Zn, mz, sz = normalize(Z_grid[:,1:], mz, sz) Z_grid = np.concatenate((np.ones((n_grid,1)), Zn), axis=1) ``` ```python # Compute the classifier output for all samples in the grid. 
dd = LogReg.predict(Z_grid)
pp = LogReg.predict_proba(Z_grid)[:, 1]   # predict_proba returns one column per class; column 1 is P(y=1|x)
pp = pp.reshape(xx.shape)

# Paint output maps
pylab.rcParams['figure.figsize'] = 6, 6   # Set figure size

plt.plot(x0c0, x1c0, 'r.', label=labels[c0])
plt.plot(x0c1, x1c1, 'g+', label=labels[c1])
plt.xlabel('$x_' + str(ind[0]) + '$')
plt.ylabel('$x_' + str(ind[1]) + '$')
plt.axis('equal')
CS = plt.contourf(xx, yy, pp, cmap=plt.cm.copper)   # keep the handle so the colorbar refers to this map
plt.legend(loc='best')
plt.contour(xx, yy, pp, levels=[0.5], colors='b', linewidths=(3,))
plt.colorbar(CS, ticks=[0, 0.5, 1])
plt.show()
```

```python

```
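Since `C` plays the role of the inverse regularization strength introduced in Section 4, it is instructive to refit the scikit-learn model for several values of `C` and compare the resulting error rates. The sketch below reuses the arrays built above; the particular values of `C` are arbitrary.

```python
# Hypothetical sweep over the inverse regularization strength C
for C_value in (0.01, 1.0, 100.0):
    clf = linear_model.LogisticRegression(C=C_value)
    clf.fit(Z_tr, Y_tr)
    pe_tr_C = float(np.mean(clf.predict(Z_tr) != Y_tr))
    pe_tst_C = float(np.mean(clf.predict(Z_tst) != Y_tst))
    print(f'C = {C_value}: training error = {pe_tr_C:.3f}, test error = {pe_tst_C:.3f}')
```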
eab1dd3a0f322e39deabe54ba2ab6d750eae86ed
62,127
ipynb
Jupyter Notebook
C3.Classification_LogReg/RegresionLogistica_student.ipynb
ML4DS/ML4all
7336489dcb87d2412ad62b5b972d69c98c361752
[ "MIT" ]
27
2016-11-30T17:34:00.000Z
2022-03-23T23:11:48.000Z
C3.Classification_LogReg/RegresionLogistica_student.ipynb
ML4DS/ML4all
7336489dcb87d2412ad62b5b972d69c98c361752
[ "MIT" ]
5
2019-08-12T18:28:49.000Z
2019-11-26T11:01:39.000Z
C3.Classification_LogReg/RegresionLogistica_student.ipynb
ML4DS/ML4all
7336489dcb87d2412ad62b5b972d69c98c361752
[ "MIT" ]
14
2016-11-30T17:34:18.000Z
2021-09-15T09:53:32.000Z
31.974781
420
0.533601
true
12,491
Qwen/Qwen-72B
1. YES 2. YES
0.685949
0.822189
0.56398
__label__eng_Latn
0.919462
0.148645
Loading libraries and setting up the environment ```python import sympy sympy.init_printing() ``` # Newtonian Case Equation of motion ```python m = sympy.Symbol('m', positive=True) # Mass of the bullet v = sympy.Symbol('v', positive=True) # Velocity t = sympy.Symbol('t', positive=True) # Time A = sympy.Symbol('A', positive=True) # Bullet cross section p = sympy.Symbol('p') # Pressure eqn_of_motion = sympy.Eq(v(t).diff(t)*m,A*p) eqn_of_motion ``` Conservation of the Riemann invariant ```python c_0 = sympy.Symbol('c_0', positive=True) # Initial speed of sound c = sympy.Symbol('c', positive=True) # Speed of sound eta = sympy.Symbol('eta', positive=True) # Adiabatic index (I'm not using gamma to avoid confusion with the Lorentz factor) riemann_invariant_conservation = sympy.Eq(v+2*c/(eta-1),2*c_0/(eta-1)) riemann_invariant_conservation ``` Isentropic relation ```python rho = sympy.Symbol('rho') # Density rho_0 = sympy.Symbol('rho_0', positive=True) # Initial density p_0 = sympy.Symbol('p_0', positive=True) # Initial pressure entropy_conservation = sympy.Eq(p/rho**eta,p_0/rho_0**eta) entropy_conservation ``` It will be more useful to relate the pressure to the speed of sound ```python temp = entropy_conservation temp = temp.subs(rho, eta*p/c**2) temp = temp.subs(rho_0, eta*p_0/c_0**2) temp = sympy.expand_power_base(temp,force=True).simplify() entropy_p_vs_c = temp entropy_p_vs_c ``` Pressure as a function of velocity ```python temp = entropy_conservation temp = temp.subs(rho,eta*p/c**2) temp = temp.subs(rho_0,eta*p_0/c_0**2) temp = temp.subs(sympy.solve(riemann_invariant_conservation,c,dict=True)[0]) temp = sympy.expand_power_base(temp,force=True) temp = sympy.solve(temp,p)[0] temp = sympy.expand_power_base(temp, force=True).simplify() p_vs_v = temp sympy.Eq(p,p_vs_v) ``` Terminal velocity ```python temp = riemann_invariant_conservation temp = sympy.solve(temp.subs(c,0),v)[0] terminal_velocity = temp terminal_velocity ``` Solving the equation of motion ```python y = sympy.Symbol('y', positive=True) v_t = sympy.Symbol('v_t', positive=True) temp = eqn_of_motion temp = temp.subs(p, p_vs_v.subs(v,terminal_velocity*(1-y))) temp = temp.subs(v(t), -terminal_velocity*(1-y(t))) temp = temp.doit() temp = sympy.expand_power_base(temp, force=True) temp = temp.simplify() temp = temp.subs(y(t).diff(t),y/t) temp = sympy.solve(temp,y)[0] temp = (temp*terminal_velocity).subs(c_0,v_t*(eta-1)/2) temp = sympy.expand_power_base(temp,force=True).simplify() asymptotic_velocity = temp asymptotic_velocity ``` The thickness of the bullet (size along the direction of motion) increases with time according to ```python w_0 = sympy.Symbol('w_0', positive=True) # Initial thickness temp = w_0*(p/p_0)**(-1/eta) temp = temp.subs(sympy.solve(entropy_p_vs_c,p,dict=True)[0]) temp = temp.subs(c_0, v_t) temp = temp.subs(c, asymptotic_velocity) temp = sympy.expand_power_base(temp).simplify() thickness_history = temp thickness_history ``` Time when the bullet is broken apart by the Rayleigh Taylor instability ```python w = sympy.Symbol('w', positive=True) # Current thickness of the bullet a = sympy.Symbol('a', positive=True) # Acceleration temp = sympy.Eq(t*sympy.sqrt(a/w),1) temp = temp.subs(a,-asymptotic_velocity.diff(t)).simplify() temp = temp.subs(w, thickness_history) sympy.expand_power_base(temp).simplify() ``` We have two timescales in this problem: the sound crossing time of the bullet $w_0/v_t$, and the acceleration time $m v_t/A p_0$. 
If the acceleration time is larger than the sound crossing time, then the bullet disintegrates right at the beginning. If not, then it never will. # Relativistic Case Equation of motion ```python gamma = sympy.Symbol('gamma', positive=True) # Lorentz factor C = sympy.Symbol('C', positive=True) # Speed of light ur_eqn_of_motion = sympy.Eq(C*m*gamma(t).diff(t), A*p) ur_eqn_of_motion ``` Conservation of the relativistic Riemann invariant ```python ur_riemann_invariant_conservation = sympy.Eq(p,p_0*gamma**(-sympy.sqrt(eta-1)/eta)) ur_riemann_invariant_conservation ``` Solving the equation of motion ```python temp = ur_eqn_of_motion.subs(p,ur_riemann_invariant_conservation.rhs) temp = temp.subs(gamma(t).diff(t),gamma/t) lf_history = sympy.solve(temp,gamma)[0] lf_history ``` Pressure history ```python temp = ur_riemann_invariant_conservation.rhs ur_pressure_history = temp.subs(gamma, lf_history).simplify() ur_pressure_history ``` Density history ```python eta2 = sympy.Symbol('eta2', positive=True) temp = rho_0*(p/p_0)**(1/eta) temp = temp.subs(p,ur_pressure_history) temp = temp.subs(eta,eta2+1) ur_density_history = sympy.expand_power_base(temp,force=True).simplify().subs(eta2, eta-1) ur_density_history ``` Now, let us turn our attention to the Riemann problem. On the left (negative) side, there's a photon gas with pressure $p_l$. On the right, there's a cold baryonic matter with mass density $\rho_r$. Both fluids are stationary. In the case where $p_l/\rho_r \ll c^2$, then the first shock is non relativistic, and what we get is basically the non relativistic problem, boosted to a relativistic velocity. If, on the other hand $p_l/\rho_r \gg c^2$, then the first shock is relativistic. We call the second case the genuinely relativistic case. ## Boosted Newtonian Case Thickness of the bullet in the fluid frame ```python temp = w_0*rho_0/rho temp = temp.subs(rho, ur_density_history) ff_bullet_thickness = temp ff_bullet_thickness ``` Breakup time ```python temp = (t/gamma)**2*a/ff_bullet_thickness temp = temp.subs(gamma, lf_history) temp = temp.subs(a, lf_history/t) temp = temp.subs(eta, eta2+1) temp = sympy.expand_power_base(temp, force=True).simplify().subs(eta2, eta-1).simplify() tentative_growth_factor=temp tentative_growth_factor ``` ```python t*sympy.log(tentative_growth_factor).diff(t).simplify() ``` ```python bn_t_breakup = sympy.solve(tentative_growth_factor-1,t)[0] [bn_t_breakup, sympy.expand_power_base(bn_t_breakup, force=True).simplify().subs(eta,sympy.Rational(4,3)).simplify()] ``` ## Genuinely Relativistic Case In this case the initial pressure in the bullet will be different from the initial pressure in the barrel. 
To determine this pressure, we need to find the intersection between the relative Hugoniot (Taub) curve of the bullet and the rarefaction curve of the barrel.

```python
taub_curve = sympy.Eq(p, rho_0*C**2*gamma**2)
taub_curve
```

```python
rel_riemann_problem_intersection = sympy.solve([taub_curve, ur_riemann_invariant_conservation],[p,gamma])[1]
rel_riemann_problem_intersection
```

When calculating the growth factor, one has to take into account the contribution of the pressure to the inertia

```python
temp = (t/gamma)*(a/w)*(rho*C**2/p)
temp = temp.subs(w,w_0*(rho_0/rho))
temp = temp.subs(rho, rho_0*(p/p_0)**(1/eta))
temp = temp.subs(p,rel_riemann_problem_intersection[1]*(gamma/rel_riemann_problem_intersection[0])**(sympy.sqrt(eta-1)/eta))
temp = temp.subs(a, gamma/t)
temp = temp.subs(gamma, lf_history)
temp = temp.subs(eta,eta2+1)
temp = sympy.expand_power_base(temp, force=True)
temp = temp.simplify()
temp = temp.subs(eta2, eta-1).simplify()
gr_growth_factor = temp
gr_growth_factor
```

Breakup time

```python
temp = sympy.solve(gr_growth_factor-1,t)[0]
temp = temp.subs(eta, eta2+1)
gr_breakup_time = sympy.expand_power_base(temp, force=True).simplify().subs(eta2, eta-1)
gr_breakup_time
```

For an equation of state with $\eta=\frac{4}{3}$

```python
gr_breakup_time.subs(eta,sympy.Rational(4,3)).simplify()
```

We note that the relativistic Rayleigh Taylor rate is approximately $\sqrt{k a\frac{\rho_2-\rho_1}{\rho_1+\rho_2+p/c^2}}$. As one might expect, the difference with respect to the Newtonian rate is the inclusion of the pressure term as part of the inertia (denominator). It does not appear in the driving term (numerator) because it is the same on both sides.

```python

```
5c3e8aa3a0e541e7cfa73ebdab0533abd7506ca2
135,108
ipynb
Jupyter Notebook
disintegrating_bullet.ipynb
bolverk/disintegrating_bullet
676bd2f575a70497ee0bebee801405f59df7bc9a
[ "MIT" ]
null
null
null
disintegrating_bullet.ipynb
bolverk/disintegrating_bullet
676bd2f575a70497ee0bebee801405f59df7bc9a
[ "MIT" ]
null
null
null
disintegrating_bullet.ipynb
bolverk/disintegrating_bullet
676bd2f575a70497ee0bebee801405f59df7bc9a
[ "MIT" ]
null
null
null
116.171969
16,682
0.760332
true
2,325
Qwen/Qwen-72B
1. YES 2. YES
0.9659
0.897695
0.867084
__label__eng_Latn
0.725387
0.852859
Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved BSD-3 license. (c) Lorena A. Barba, Gilbert F. Forsyth 2017. Thanks to NSF for support via CAREER award #1149784. [@LorenaABarba](https://twitter.com/LorenaABarba) [@ggruszczynski](https://github.com/ggruszczynski) ```python import timeit import numpy as np import matplotlib.pyplot as plt #and the useful plotting library ``` Diffusion part 1: the fundamental solution ----- *** The one-dimensional diffusion equation is: $$\frac{\partial u}{\partial t}= \nu \frac{\partial^2 u}{\partial x^2}$$ The first thing you should notice is that —unlike the previous two simple equations we have studied— this equation has a second-order derivative. We first need to learn what to do with it! ### Discretizing $\frac{\partial ^2 u}{\partial x^2}$ The second-order derivative can be represented geometrically as the line tangent to the curve given by the first derivative. We will discretize the second-order derivative with a Central Difference scheme: a combination of Forward Difference and Backward Difference of the first derivative. Consider the Taylor expansion of $u_{i+1}$ and $u_{i-1}$ around $u_i$: $u_{i+1} = u_i + \Delta x \frac{\partial u}{\partial x}\bigg|_i + \frac{\Delta x^2}{2} \frac{\partial ^2 u}{\partial x^2}\bigg|_i + \frac{\Delta x^3}{3!} \frac{\partial ^3 u}{\partial x^3}\bigg|_i + O(\Delta x^4)$ $u_{i-1} = u_i - \Delta x \frac{\partial u}{\partial x}\bigg|_i + \frac{\Delta x^2}{2} \frac{\partial ^2 u}{\partial x^2}\bigg|_i - \frac{\Delta x^3}{3!} \frac{\partial ^3 u}{\partial x^3}\bigg|_i + O(\Delta x^4)$ If we add these two expansions, you can see that the odd-numbered derivative terms will cancel each other out. If we neglect any terms of $O(\Delta x^4)$ or higher (and really, those are very small), then we can rearrange the sum of these two expansions to solve for our second-derivative. $u_{i+1} + u_{i-1} = 2u_i+\Delta x^2 \frac{\partial ^2 u}{\partial x^2}\bigg|_i + O(\Delta x^4)$ Then rearrange to solve for $\frac{\partial ^2 u}{\partial x^2}\bigg|_i$ and the result is: $$\frac{\partial ^2 u}{\partial x^2}=\frac{u_{i+1}-2u_{i}+u_{i-1}}{\Delta x^2} + O(\Delta x^4)$$ ### Discretizing both $\frac{\partial u}{\partial t}$ and $\frac{\partial ^2 u}{\partial x^2}$ We can now write the discretized version of the diffusion equation in 1D: $$\frac{u_{i}^{n+1}-u_{i}^{n}}{\Delta t}=\nu\frac{u_{i+1}^{n}-2u_{i}^{n}+u_{i-1}^{n}}{\Delta x^2}$$ As before, we notice that once we have an initial condition, the only unknown is $u_{i}^{n+1}$, so we re-arrange the equation solving for our unknown: $$u_{i}^{n+1}=u_{i}^{n}+\underbrace{\frac{\nu\Delta t}{\Delta x^2}}_{\beta}(u_{i+1}^{n}-2u_{i}^{n}+u_{i-1}^{n})$$ $$u_{i}^{n+1}=\beta u_{i-1}^{n} + u_{i}^{n}(1- 2 \beta) +\beta u_{i+1}^{n}$$ The above discrete equation allows us to write a program to advance a solution in time. But we need an initial condition. Let's continue using our favorite: the hat function. So, at $t=0$, $u=2$ in the interval $0.5\le x\le 1$ and $u=1$ everywhere else. We are ready to number-crunch! ```python nx = 128 domain_length = 64 dx = domain_length / (nx-1) xspace = np.linspace(0, domain_length, nx) nt = 200 # the number of timesteps we want to calculate nu = 5 # the value of viscosity sigma = .2 # sigma is a parameter, we'll learn more about it later dt = sigma * dx**2 / nu # dt is defined using sigma ... more later! 
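# Note: the explicit finite-difference update used below is stable only if
# beta = nu*dt/dx**2 stays at or below 1/2. With dt = sigma*dx**2/nu, beta equals
# sigma, so the choice sigma = 0.2 keeps the scheme comfortably inside the stability limit.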
u_IC = 0*np.ones(nx) # numpy function ones() u_IC[int((nx-1)/4):int(nx/2 + 1)] = 1 # setting u = 2 between 0.5 and 1 as per our I.C.s plt.plot(xspace, u_IC) ``` ```python def calc_diffusion_FD_naive(IC,nx,nt,nu,dt): u = IC.copy() un = IC.copy() #our placeholder array, un, to advance the solution in time beta = nu * dt / dx**2 for n in range(nt): #iterate through time un = u.copy() #copy the existing values of u into un for i in range(0, nx): # this is slow (index operations & branching) if i == nx-1: u[i] = beta*un[i-1]+ (1-2*beta)*un[i] + beta*un[0] # periodic BC else: u[i] = beta*un[i-1]+ (1-2*beta)*un[i] + beta*un[i+1] return u def calc_diffusion_FD(IC,nx,nt,nu,dt): u = IC.copy() un = IC.copy() #our placeholder array, un, to advance the solution in time beta = nu * dt / dx**2 c_ind = np.arange(0, nx) l_ind = np.roll(c_ind, -1) r_ind = np.roll(c_ind, 1) for n in range(nt): #iterate through time un = u.copy() # copy the existing values of u into un lap_u = un[l_ind] - 2 * un[c_ind] + un[r_ind] # periodic BC u = un + beta* lap_u return u starttime = timeit.default_timer() u_FD = calc_diffusion_FD(u_IC,nx,nt,nu,dt) print("The time difference is :", timeit.default_timer() - starttime) plt.plot(xspace, u_FD) ``` # The convolution - part I <https://numpy.org/doc/stable/reference/generated/numpy.convolve.html> <https://en.wikipedia.org/wiki/Convolution> The discrete convolution operation is known as: $$ (a * v)[n]= \sum^{\infty}_{m=-\infty} a[m]v[n-m] $$ Notice, that single step of the explicit algorithm implemented before can be expressed as convolution with a $[\beta,1-2 \beta,\beta]$ filter. To compute more time steps, one have to convolve many times. ### Task Solve the diffusion equation by convolving the initial condition with the filter in each iteration. ```python def calc_diffusion_iterate_convolutions(IC,nx,nt,nu,dt): u = IC.copy() un = IC.copy() #our placeholder array, un, to advance the solution in time beta = nu * dt / dx**2 filtr = np.array([beta,1-2*beta,beta]) for n in range(nt): #iterate through time un = u.copy() ##copy the existing values of u into un u = np.convolve(filtr,un, 'same') return u u_iter_conv = calc_diffusion_iterate_convolutions(u_IC,nx,nt,nu,dt) plt.plot(xspace, u_iter_conv) ``` # The convolution - part II The fundamental solution of the heat equation is the Gaussian function (impulse responce). Consider "diffusion" of a single particle. The probability of finding a particle after T time steps follows the normal (a.k.a Gaussian) distribution. To compute it, one can convolve the initial position with a Gaussian. The result will be equivalent with repeated convolutions with small filter. This means that convolving with a Gaussian tells us the solution to the diffusion equation after a fixed amount of time. This is the same as low pass filtering an image. So smoothing, low pass filtering, diffusion, all mean the same thing. ### Task Solve the diffusion equation by convolving the initial condition with the Gaussian. 
```python def calc_diffusion_single_convolution(IC,x,nt,nu,dt): u = IC.copy() def get_gaussian(x, alfa, t): g = -(x-domain_length/2.)**2 g /=(4*alfa*t) g = np.exp(g) g /= np.sqrt(4*np.pi*alfa*t) g *= domain_length/(nx-1) # normalize --> sum(g)=1 return g time_spot = dt*nt fundamental_solution = get_gaussian(x, nu, time_spot) u = np.convolve(fundamental_solution, u, 'same') # plt.plot(x, fundamental_solution, marker='v', linestyle="", markevery=5) return u, fundamental_solution u_single_conv, fs = calc_diffusion_single_convolution(u_IC,xspace,nt,nu,dt) plt.plot(xspace, u_single_conv) ``` ```python # Now plot the solutions obtained using 3 different approaches on the same plot plt.rcParams.update({'font.size': 16}) figure, axis = plt.subplots(1, 1, figsize=(10, 8)) plt.subplots_adjust(hspace=1) axis.set_title('Diffusion') axis.plot(xspace, u_FD, label=r'$u_{FD}$', linewidth="3") axis.plot(xspace, u_iter_conv, label=r'$u_{multiple \; convolutions}$', marker='o', linestyle="", markevery=1) axis.plot(xspace, u_single_conv, label=r'$u_{convolution}$', marker='x', linestyle="", markevery=1) axis.set_xlabel('x') axis.set_ylabel('Concentration') axis.legend(loc="upper right") ``` # Analytical solution: Advection - Diffusion of a Gaussian Hill In case of an isotropic diffusion, the analytical solution describing evolution of a Gaussian Hill can be expressed as $$ C(\boldsymbol{x}, t)=\frac{\left(2\pi\sigma_{0}^{2}\right)^{D/2} }{\left(2\pi(\sigma_{0}^{2} + 2 k t)\right)^{D/2}} C_0 \exp \left(-\frac{\left(\boldsymbol{x}-\boldsymbol{x}_{0}-\boldsymbol{u} t\right)^{2}}{2\left(\sigma_{0}^{2}+ 2 k t\right)}\right) $$ where: * $C_0$ - initial concentration, * $D$ - number of dimensions, * $t$ - time, * $k$ - conductivity, * $\boldsymbol{u}$ - velocity of advection * $\sigma_{0}$ the initial variance of the distribution. ## Task 1) Implement the `GaussianHillAnal` class. It shall have a method `get_concentration_ND(self, X, t)`, which will return the concentration at given time and space. 2) Benchmark the FD code against analytical solution. ```python from sympy.matrices import Matrix import sympy as sp class GaussianHillAnal: def __init__(self, C0, X0, Sigma2_0, k, U, D): """ :param C0: initial concentration :param X0: initial position of the hill's centre = Matrix([x0, y0]) :param U: velocity = Matrix([ux, uy]) :param Sigma2_0: initial width of the Gaussian Hill :param k: conductivity :param dimenions: number of dimensions """ self.C0 = C0 self.X0 = X0 self.U = U self.Sigma2_0 = Sigma2_0 self.k = k self.dim = D def get_concentration_ND(self, X, t): decay = 2.*self.k*t L = X - self.X0 - self.U*t C = self.C0 C *= pow(2. * np.pi * self.Sigma2_0, self.dim / 2.) C /= pow(2. * np.pi * (self.Sigma2_0 + decay), self.dim / 2.) C *= sp.exp(-(L.dot(L)) / (2.*(self.Sigma2_0 + decay))) return C ``` ```python time_0 = dt*nt/2 # initial contidion for FD time_spot = dt*nt # time to be simulated (by FD and analytically) X0 = Matrix([domain_length/2.]) # center of the hill C0 = 1. 
# concentration variance = 30 # initial variance reference_level = 0 T_0 = np.zeros(nx) T_anal = np.zeros(nx) gha = GaussianHillAnal(C0, X0, variance, nu, Matrix([0]), D=1) for i in range(nx): T_0[i] = reference_level + gha.get_concentration_ND(Matrix([xspace[i]]), time_0) T_anal[i] = reference_level + gha.get_concentration_ND(Matrix([xspace[i]]), time_spot) T_FD = calc_diffusion_FD(T_0,nx,nt,nu,dt) T_single_conv, fs = calc_diffusion_single_convolution(T_0,xspace,nt,nu,dt) plt.rcParams.update({'font.size': 16}) figure, axis = plt.subplots(1, 1, figsize=(8, 6)) plt.subplots_adjust(hspace=1) axis.set_title('Diffusion of a Gaussian Hill') axis.plot(xspace, T_0, label=r'$T_{0}$') axis.plot(xspace, T_anal, label=r'$T_{anal}$') axis.plot(xspace, T_FD, label=r'$T_{FD}$', marker='x', linestyle="", markevery=1) axis.plot(xspace, T_single_conv, label=r'$T_{conv}$', marker='v', linestyle="", markevery=1) axis.set_xlabel('x') axis.set_ylabel('Concentration') axis.legend(loc="upper right") ``` ## Questions: * How do you find the FD solution compared to analytical one? Experiment with different dx, dt. * How would you asses that your mesh is fine enought in a real CFD simulation (without analytical solution)? ## Answers * Mesh convergence study # Learn More Inspiration <http://www.cs.umd.edu/~djacobs/CMSC828seg/Diffusion.pdf> <https://web.math.ucsb.edu/~helena/teaching/math124b/heat.pdf> You should have completed Steps [1](./01_Step_1.ipynb) and [2](./02_Step_2.ipynb) before continuing. This Jupyter notebook continues the presentation of the **12 steps to Navier–Stokes**, the practical module taught in the interactive CFD class of [Prof. Lorena Barba](http://lorenabarba.com). For a careful walk-through of the discretization of the diffusion equation with finite differences (and all steps from 1 to 4), watch **Video Lesson 4** by Prof. Barba on YouTube. ```python from IPython.display import YouTubeVideo YouTubeVideo('y2WaK7_iMRI') ```
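As a small quantitative complement to the Gaussian-hill comparison above — and a first ingredient of the mesh-convergence study mentioned in the answers — one can measure a discrete error norm between the finite-difference and analytical profiles. A minimal sketch, reusing the arrays computed earlier in this notebook:

```python
# Discrete L2 error of the FD solution with respect to the analytical profile
l2_error = np.sqrt(np.sum((T_FD - T_anal)**2) / nx)
print(f"L2 error (FD vs. analytical): {l2_error:.3e}")
```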
94d7919ee7dcd8044c72f8d50a392211be72129e
162,121
ipynb
Jupyter Notebook
lab3_iterative_solvers_diffusion_and_laplace_eq/part1_diffusion_and_convolution_SOLUTION.ipynb
ggruszczynski/CFDPython
1662ede061fb899d6ed3f89c17877e65f65e521c
[ "CC-BY-3.0" ]
null
null
null
lab3_iterative_solvers_diffusion_and_laplace_eq/part1_diffusion_and_convolution_SOLUTION.ipynb
ggruszczynski/CFDPython
1662ede061fb899d6ed3f89c17877e65f65e521c
[ "CC-BY-3.0" ]
null
null
null
lab3_iterative_solvers_diffusion_and_laplace_eq/part1_diffusion_and_convolution_SOLUTION.ipynb
ggruszczynski/CFDPython
1662ede061fb899d6ed3f89c17877e65f65e521c
[ "CC-BY-3.0" ]
1
2021-02-05T08:00:02.000Z
2021-02-05T08:00:02.000Z
249.033794
43,900
0.910406
true
3,674
Qwen/Qwen-72B
1. YES 2. YES
0.851953
0.805632
0.686361
__label__eng_Latn
0.907046
0.432977
# 11 ODE Applications (Projectile with linear air resistance) Let's apply our ODE solvers to some problems involving balls and projectiles. We will start with projectile motion with linear air resistances. The `integrators.py` file from the lesson on [ODE integrators](https://py4phy.github.io/PHY432/modules/ODEs/integrators/) is used here (and named [`ode.py`](https://github.com/Py4Phy/PHY432-resources/blob/main/11_ODE_applications/ode.py)). ```python import numpy as np import ode ``` ```python %matplotlib inline import matplotlib.pyplot as plt plt.matplotlib.style.use('ggplot') ``` ## Theory Linear drag force $$ \mathbf{F}_1 = -b_1 \mathbf{v} $$ Equations of motion with force due to gravity $\mathbf{g} = -g \hat{\mathbf{e}}_y$ \begin{align} \frac{d\mathbf{r}}{dt} &= \mathbf{v}\\ \frac{d\mathbf{v}}{dt} &= - g \hat{\mathbf{e}}_y -\frac{b_1}{m} \mathbf{v} \end{align} Bring into standard ODE form for $$ \frac{d\mathbf{y}}{dt} = \mathbf{f}(t, \mathbf{y}) $$ as $$ \mathbf{y} = \begin{pmatrix} x\\ y\\ v_x\\ v_y \end{pmatrix}, \quad \mathbf{f} = \begin{pmatrix} v_x\\ v_y\\ -\frac{b_1}{m} v_x\\ -g -\frac{b_1}{m} v_y \end{pmatrix} $$ (Based on Wang 2016, Ch 3.3.1) ## Python implementation with ODE solver - Formulate the function `f()` for the standard ODE form (note: velocity dependence) - Set up the integration loop: - only integrate until the particle hits ground, i.e. while $y ≥ 0$. - choose an appropriate ODE solver from `ode.py` such as RK4 (because there's no energy conservation so velocity Verlet is not as useful) ```python def simulate(v0, h=0.01, b1=0.2, g=9.81, m=0.5): def f(t, y): # y = [x, y, vx, vy] return np.array([y[2], y[3], -b1/m * y[2], -g - b1/m * y[3]]) vx, vy = v0 t = 0 positions = [] y = np.array([0, 0, vx, vy], dtype=np.float64) while y[1] >= 0: positions.append([t, y[0], y[1]]) # record t, x and y y[:] = ode.rk4(y, f, t, h) t += h return np.array(positions) def initial_v(v, theta): x = np.deg2rad(theta) return v * np.array([np.cos(x), np.sin(x)]) ``` ### Launch at fixed angle ```python r = simulate(initial_v(200, 30), h=0.01, b1=1) ``` ```python plt.plot(r[:, 1], r[:, 2]) plt.xlabel(r"distance $x$ (m)") plt.ylabel(r"height $y$ (m)"); ``` ### Distance depends on launch angle Plot the trajectory for launch angles from 5º to 45º. ```python for angle in (5, 7.5, 10, 20, 30, 45): r = simulate(initial_v(200, angle), h=0.01, b1=1) plt.plot(r[:, 1], r[:, 2], label=r"$\theta = {}^\circ$".format(angle)) plt.legend(loc="best") plt.xlabel(r"distance $x$ (m)") plt.ylabel(r"height $y$ (m)"); plt.savefig("launch_linear_air_resistance.svg") ``` ```python ```
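A natural follow-up to the angle study above is to extract the horizontal distance reached for each launch angle, which shows directly where the maximum lies once linear drag is included. The sketch below reuses the `simulate` and `initial_v` helpers defined earlier; the angle grid and drag coefficient are arbitrary choices.

```python
angles = np.arange(5, 50, 2.5)
ranges = []
for angle in angles:
    r = simulate(initial_v(200, angle), h=0.01, b1=1)
    ranges.append(r[-1, 1])     # x of the last recorded point before the ball hits the ground

plt.plot(angles, ranges, 'o-')
plt.xlabel(r"launch angle $\theta$ (degrees)")
plt.ylabel(r"distance $x$ at landing (m)")
best_angle = angles[np.argmax(ranges)]
print(f"Maximum range reached near {best_angle} degrees for these drag parameters")
```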
9cc7a83ad5752e2ee16ddda619db2f34c474ce91
57,788
ipynb
Jupyter Notebook
11_ODE_applications/11-ODE-lineardrag.ipynb
Py4Phy/PHY432-resources
c26d95eaf5c28e25da682a61190e12ad6758a938
[ "CC-BY-4.0" ]
null
null
null
11_ODE_applications/11-ODE-lineardrag.ipynb
Py4Phy/PHY432-resources
c26d95eaf5c28e25da682a61190e12ad6758a938
[ "CC-BY-4.0" ]
1
2022-03-03T21:47:56.000Z
2022-03-03T21:47:56.000Z
11_ODE_applications/11-ODE-lineardrag.ipynb
Py4Phy/PHY432-resources
c26d95eaf5c28e25da682a61190e12ad6758a938
[ "CC-BY-4.0" ]
null
null
null
240.783333
36,044
0.916903
true
955
Qwen/Qwen-72B
1. YES 2. YES
0.865224
0.817574
0.707385
__label__eng_Latn
0.606627
0.481824
```python import sympy as sp ``` ```python x = sp.symbols("x") sp.integrate(sp.exp(-x**2), (x, -sp.oo, sp.oo)) ``` $\displaystyle \sqrt{\pi}$ ```python x, L, n = sp.symbols("x, L, n") g = sp.Function("g") sp.integrate(sp.Sum(sp.exp(-sp.I*sp.pi/L*n*x), (n, -sp.oo, sp.oo)).doit(), (x, 0, L)) ``` $\displaystyle \int\limits_{0}^{L} \sum_{n=-\infty}^{\infty} e^{- \frac{i \pi n x}{L}}\, dx$ ```python x, k, kappa, L = sp.symbols(r"x, k, \kappa, L", real=True) sp.integrate(sp.cosh(kappa*x)*sp.sin(k*x)*x, (x, -L/2, L/2)) ``` $\displaystyle \begin{cases} 0 & \text{for}\: \left(\kappa = 0 \wedge k = 0\right) \vee \left(\kappa = 0 \wedge \kappa = - i k \wedge k = 0\right) \vee \left(\kappa = 0 \wedge \kappa = i k \wedge k = 0\right) \vee \left(\kappa = 0 \wedge \kappa = - i k \wedge \kappa = i k \wedge k = 0\right) \\\frac{L \sin^{2}{\left(\frac{L k}{2} \right)}}{4 k} - \frac{L \cos^{2}{\left(\frac{L k}{2} \right)}}{4 k} + \frac{\sin{\left(\frac{L k}{2} \right)} \cos{\left(\frac{L k}{2} \right)}}{2 k^{2}} & \text{for}\: \left(\kappa = 0 \wedge \kappa = - i k\right) \vee \left(\kappa = 0 \wedge \kappa = i k\right) \vee \left(\kappa = - i k \wedge \kappa = i k\right) \vee \left(\kappa = - i k \wedge k = 0\right) \vee \left(\kappa = i k \wedge k = 0\right) \vee \left(\kappa = 0 \wedge \kappa = - i k \wedge \kappa = i k\right) \vee \left(\kappa = - i k \wedge \kappa = i k \wedge k = 0\right) \vee \kappa = - i k \vee \kappa = i k \\\frac{L \kappa^{3} \sin{\left(\frac{L k}{2} \right)} \sinh{\left(\frac{L \kappa}{2} \right)}}{\kappa^{4} + 2 \kappa^{2} k^{2} + k^{4}} - \frac{L \kappa^{2} k \cos{\left(\frac{L k}{2} \right)} \cosh{\left(\frac{L \kappa}{2} \right)}}{\kappa^{4} + 2 \kappa^{2} k^{2} + k^{4}} + \frac{L \kappa k^{2} \sin{\left(\frac{L k}{2} \right)} \sinh{\left(\frac{L \kappa}{2} \right)}}{\kappa^{4} + 2 \kappa^{2} k^{2} + k^{4}} - \frac{L k^{3} \cos{\left(\frac{L k}{2} \right)} \cosh{\left(\frac{L \kappa}{2} \right)}}{\kappa^{4} + 2 \kappa^{2} k^{2} + k^{4}} - \frac{2 \kappa^{2} \sin{\left(\frac{L k}{2} \right)} \cosh{\left(\frac{L \kappa}{2} \right)}}{\kappa^{4} + 2 \kappa^{2} k^{2} + k^{4}} + \frac{4 \kappa k \cos{\left(\frac{L k}{2} \right)} \sinh{\left(\frac{L \kappa}{2} \right)}}{\kappa^{4} + 2 \kappa^{2} k^{2} + k^{4}} + \frac{2 k^{2} \sin{\left(\frac{L k}{2} \right)} \cosh{\left(\frac{L \kappa}{2} \right)}}{\kappa^{4} + 2 \kappa^{2} k^{2} + k^{4}} & \text{otherwise} \end{cases}$ ```python sp.integrate(sp.sinh(kappa*x)*sp.cos(k*x)*x, (x, -L/2, L/2)) ``` $\displaystyle \begin{cases} 0 & \text{for}\: \left(\kappa = 0 \wedge k = 0\right) \vee \left(\kappa = 0 \wedge \kappa = - i k \wedge k = 0\right) \vee \left(\kappa = 0 \wedge \kappa = i k \wedge k = 0\right) \vee \left(\kappa = 0 \wedge \kappa = - i k \wedge \kappa = i k \wedge k = 0\right) \\i \left(- \frac{L \sin^{2}{\left(\frac{L k}{2} \right)}}{8 k} + \frac{L \cos^{2}{\left(\frac{L k}{2} \right)}}{8 k} - \frac{\sin{\left(\frac{L k}{2} \right)} \cos{\left(\frac{L k}{2} \right)}}{4 k^{2}}\right) - i \left(\frac{L \sin^{2}{\left(\frac{L k}{2} \right)}}{8 k} - \frac{L \cos^{2}{\left(\frac{L k}{2} \right)}}{8 k} + \frac{\sin{\left(\frac{L k}{2} \right)} \cos{\left(\frac{L k}{2} \right)}}{4 k^{2}}\right) & \text{for}\: \left(\kappa = 0 \wedge \kappa = - i k\right) \vee \left(\kappa = - i k \wedge \kappa = i k\right) \vee \left(\kappa = - i k \wedge k = 0\right) \vee \left(\kappa = 0 \wedge \kappa = - i k \wedge \kappa = i k\right) \vee \left(\kappa = - i k \wedge \kappa = i k \wedge k = 0\right) \vee \kappa = - i k \\- i \left(- \frac{L 
\sin^{2}{\left(\frac{L k}{2} \right)}}{8 k} + \frac{L \cos^{2}{\left(\frac{L k}{2} \right)}}{8 k} - \frac{\sin{\left(\frac{L k}{2} \right)} \cos{\left(\frac{L k}{2} \right)}}{4 k^{2}}\right) + i \left(\frac{L \sin^{2}{\left(\frac{L k}{2} \right)}}{8 k} - \frac{L \cos^{2}{\left(\frac{L k}{2} \right)}}{8 k} + \frac{\sin{\left(\frac{L k}{2} \right)} \cos{\left(\frac{L k}{2} \right)}}{4 k^{2}}\right) & \text{for}\: \left(\kappa = 0 \wedge \kappa = i k\right) \vee \left(\kappa = i k \wedge k = 0\right) \vee \kappa = i k \\\frac{L \kappa^{3} \cos{\left(\frac{L k}{2} \right)} \cosh{\left(\frac{L \kappa}{2} \right)}}{\kappa^{4} + 2 \kappa^{2} k^{2} + k^{4}} + \frac{L \kappa^{2} k \sin{\left(\frac{L k}{2} \right)} \sinh{\left(\frac{L \kappa}{2} \right)}}{\kappa^{4} + 2 \kappa^{2} k^{2} + k^{4}} + \frac{L \kappa k^{2} \cos{\left(\frac{L k}{2} \right)} \cosh{\left(\frac{L \kappa}{2} \right)}}{\kappa^{4} + 2 \kappa^{2} k^{2} + k^{4}} + \frac{L k^{3} \sin{\left(\frac{L k}{2} \right)} \sinh{\left(\frac{L \kappa}{2} \right)}}{\kappa^{4} + 2 \kappa^{2} k^{2} + k^{4}} - \frac{2 \kappa^{2} \cos{\left(\frac{L k}{2} \right)} \sinh{\left(\frac{L \kappa}{2} \right)}}{\kappa^{4} + 2 \kappa^{2} k^{2} + k^{4}} - \frac{4 \kappa k \sin{\left(\frac{L k}{2} \right)} \cosh{\left(\frac{L \kappa}{2} \right)}}{\kappa^{4} + 2 \kappa^{2} k^{2} + k^{4}} + \frac{2 k^{2} \cos{\left(\frac{L k}{2} \right)} \sinh{\left(\frac{L \kappa}{2} \right)}}{\kappa^{4} + 2 \kappa^{2} k^{2} + k^{4}} & \text{otherwise} \end{cases}$ ```python sp.integrate(sp.cosh(kappa*x)*sp.cos(k*x), (x, -L/2, L/2)) ``` $\displaystyle \begin{cases} L & \text{for}\: \left(\kappa = 0 \wedge k = 0\right) \vee \left(\kappa = 0 \wedge \kappa = - i k \wedge k = 0\right) \vee \left(\kappa = 0 \wedge \kappa = i k \wedge k = 0\right) \vee \left(\kappa = 0 \wedge \kappa = - i k \wedge \kappa = i k \wedge k = 0\right) \\\frac{L \sin^{2}{\left(\frac{L k}{2} \right)}}{2} + \frac{L \cos^{2}{\left(\frac{L k}{2} \right)}}{2} + \frac{\sin{\left(\frac{L k}{2} \right)} \cos{\left(\frac{L k}{2} \right)}}{k} & \text{for}\: \left(\kappa = 0 \wedge \kappa = - i k\right) \vee \left(\kappa = 0 \wedge \kappa = i k\right) \vee \left(\kappa = - i k \wedge \kappa = i k\right) \vee \left(\kappa = - i k \wedge k = 0\right) \vee \left(\kappa = i k \wedge k = 0\right) \vee \left(\kappa = 0 \wedge \kappa = - i k \wedge \kappa = i k\right) \vee \left(\kappa = - i k \wedge \kappa = i k \wedge k = 0\right) \vee \kappa = - i k \vee \kappa = i k \\\frac{2 \kappa \cos{\left(\frac{L k}{2} \right)} \sinh{\left(\frac{L \kappa}{2} \right)}}{\kappa^{2} + k^{2}} + \frac{2 k \sin{\left(\frac{L k}{2} \right)} \cosh{\left(\frac{L \kappa}{2} \right)}}{\kappa^{2} + k^{2}} & \text{otherwise} \end{cases}$ ```python sp.integrate(sp.sinh(kappa*x)*sp.sin(k*x), (x, -L/2, L/2)) ``` $\displaystyle \begin{cases} 0 & \text{for}\: \left(\kappa = 0 \wedge k = 0\right) \vee \left(\kappa = 0 \wedge \kappa = - i k \wedge k = 0\right) \vee \left(\kappa = 0 \wedge \kappa = i k \wedge k = 0\right) \vee \left(\kappa = 0 \wedge \kappa = - i k \wedge \kappa = i k \wedge k = 0\right) \\i \left(- \frac{L \sin^{2}{\left(\frac{L k}{2} \right)}}{4} - \frac{L \cos^{2}{\left(\frac{L k}{2} \right)}}{4} + \frac{\sin{\left(\frac{L k}{2} \right)} \cos{\left(\frac{L k}{2} \right)}}{2 k}\right) - i \left(\frac{L \sin^{2}{\left(\frac{L k}{2} \right)}}{4} + \frac{L \cos^{2}{\left(\frac{L k}{2} \right)}}{4} - \frac{\sin{\left(\frac{L k}{2} \right)} \cos{\left(\frac{L k}{2} \right)}}{2 k}\right) & \text{for}\: \left(\kappa = 0 \wedge \kappa = - i 
k\right) \vee \left(\kappa = - i k \wedge \kappa = i k\right) \vee \left(\kappa = - i k \wedge k = 0\right) \vee \left(\kappa = 0 \wedge \kappa = - i k \wedge \kappa = i k\right) \vee \left(\kappa = - i k \wedge \kappa = i k \wedge k = 0\right) \vee \kappa = - i k \\- i \left(- \frac{L \sin^{2}{\left(\frac{L k}{2} \right)}}{4} - \frac{L \cos^{2}{\left(\frac{L k}{2} \right)}}{4} + \frac{\sin{\left(\frac{L k}{2} \right)} \cos{\left(\frac{L k}{2} \right)}}{2 k}\right) + i \left(\frac{L \sin^{2}{\left(\frac{L k}{2} \right)}}{4} + \frac{L \cos^{2}{\left(\frac{L k}{2} \right)}}{4} - \frac{\sin{\left(\frac{L k}{2} \right)} \cos{\left(\frac{L k}{2} \right)}}{2 k}\right) & \text{for}\: \left(\kappa = 0 \wedge \kappa = i k\right) \vee \left(\kappa = i k \wedge k = 0\right) \vee \kappa = i k \\\frac{2 \kappa \sin{\left(\frac{L k}{2} \right)} \cosh{\left(\frac{L \kappa}{2} \right)}}{\kappa^{2} + k^{2}} - \frac{2 k \cos{\left(\frac{L k}{2} \right)} \sinh{\left(\frac{L \kappa}{2} \right)}}{\kappa^{2} + k^{2}} & \text{otherwise} \end{cases}$ ```python a, b = sp.symbols("a, b") sp.simplify(a*sp.sin(a)*sp.sinh(b) + b*sp.cos(a)*sp.cosh(b)) ``` $\displaystyle a \sin{\left(a \right)} \sinh{\left(b \right)} + b \cos{\left(a \right)} \cosh{\left(b \right)}$
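A quick numerical cross-check of the closed-form results above can be useful. The following is a minimal sketch, assuming SciPy is available; the sample values `L = 2.0`, `kappa = 1.3`, `k = 0.7` are arbitrary choices made for illustration, not values taken from the notebook.

```python
import numpy as np
import sympy as sp
from scipy.integrate import quad

# Same integrand as above, with positive real symbols so the generic branch applies.
x, L, kappa, k = sp.symbols("x L kappa k", positive=True)
closed_form = sp.integrate(sp.cosh(kappa * x) * sp.cos(k * x), (x, -L / 2, L / 2))

# Substitute arbitrary sample values into the symbolic result.
symbolic_value = float(closed_form.subs({L: 2.0, kappa: 1.3, k: 0.7}))

# Numerical quadrature of the same integral for the same values.
numeric_value, _ = quad(lambda t: np.cosh(1.3 * t) * np.cos(0.7 * t), -1.0, 1.0)

print(symbolic_value, numeric_value)  # the two numbers should agree to quadrature accuracy
```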
277d95cd24800b06f54b982079a04e9736857aff
16,589
ipynb
Jupyter Notebook
temp_cas_notebook.ipynb
Basistransformoptimusprime/Particle_in_a_Box
61c8587cc449cb0d0d0b6aaa499a524a9133fbca
[ "MIT" ]
1
2021-05-30T19:39:44.000Z
2021-05-30T19:39:44.000Z
temp_cas_notebook.ipynb
Basistransformoptimusprime/Particle_in_a_Box
61c8587cc449cb0d0d0b6aaa499a524a9133fbca
[ "MIT" ]
null
null
null
temp_cas_notebook.ipynb
Basistransformoptimusprime/Particle_in_a_Box
61c8587cc449cb0d0d0b6aaa499a524a9133fbca
[ "MIT" ]
null
null
null
80.921951
2,825
0.472783
true
3,916
Qwen/Qwen-72B
1. YES 2. YES
0.950411
0.835484
0.794053
__label__yue_Hant
0.540663
0.683183
| |Pierre Proulx, ing, professeur| |:---|:---| |Département de génie chimique et de génie biotechnologique |** GCH200-Phénomènes d'échanges I **| ### Section 6.2, example 6.2-2 ##### In this example the problem is treated slightly differently because the pipe is assumed to be smooth. This explains why the computed flow rate is higher. ```python # # Pierre Proulx # # Set up the display and the symbolic-computation tools # import sympy as sp from IPython.display import * sp.init_printing(use_latex=True) ``` ```python # Parameters, variables and functions rho,L,dP,D,v_z,mu=sp.symbols('rho,L,dP,D,v_z,mu') ``` ```python f=1/4*(D/L)*(dP/(1/2*rho*v_z**2)) # equation defining the friction factor f Re=rho*v_z*D/mu f_L=16/Re # f if Re < 2100 (laminar) f_T=0.0791/Re**0.25 # if Re > 2100 (turbulent) ``` ```python # Dictionary containing the parameter values dico={'rho':1000,'D':7.981*2.54/100,'mu':0.001,'dP':3/14.7*101325,'L':1000*0.3048} # # a priori we do not know whether the flow is laminar or not, # so we first test the laminar assumption # eqL=sp.Eq(f-f_L) v1=sp.solve((eqL,0),v_z) Re1=Re.subs(v1) Re1=Re1.subs(dico).evalf() if Re1 > 2100: print(' Estimated Re =',Re1) print(' Not laminar, recompute with the turbulent correlation') eqT=sp.Eq(f-f_T) v2=sp.solve((eqT,0),v_z,dict=True) # display(v2) v2=v2[0] # there will be several roots display(v2) # the first root Re2=Re.subs(v2) # happens to be real Re2=Re2.subs(dico) print(' Reynolds number computed with Blasius for a smooth pipe') display(Re2.evalf()) display(' Mass flow rate computed in kg/s') V=((v_z.subs(v2)).subs(dico)).evalf() # evalf() because the result is symbolic W=rho*sp.pi*D**2/4*V # and we want it in numerical form, display(W.subs(dico).evalf()) # similar to Matlab. else: print(' Laminar Reynolds number for a smooth pipe') display(Re1) V=((v_z.subs(v1)).subs(dico)).evalf() W=rho*sp.pi*D**2/4*V display(' Mass flow rate computed in kg/s') display(W.subs(dico).evalf()) ``` ```python ```
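As a purely numerical cross-check of the turbulent branch above, the same friction-factor balance can be solved with SciPy instead of SymPy. This is a minimal sketch under the same parameter values as the example; the root bracket passed to `brentq` is an assumption chosen by inspection.

```python
import numpy as np
from scipy.optimize import brentq

# Parameter values of the example, in SI units.
rho, mu = 1000.0, 1.0e-3
D = 7.981 * 2.54 / 100      # pipe diameter [m]
L = 1000 * 0.3048           # pipe length [m]
dP = 3 / 14.7 * 101325      # pressure drop [Pa]

def residual(v):
    """Friction-factor definition minus the Blasius correlation (smooth pipe)."""
    Re = rho * v * D / mu
    f_def = 0.25 * (D / L) * dP / (0.5 * rho * v**2)
    return f_def - 0.0791 / Re**0.25

v = brentq(residual, 0.1, 10.0)       # assumed bracket for the mean velocity [m/s]
Re = rho * v * D / mu
W = rho * np.pi * D**2 / 4 * v        # mass flow rate [kg/s]
print(f"v = {v:.3f} m/s, Re = {Re:,.0f}, W = {W:.1f} kg/s")
```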
85d026b83b69f706d22b2ff9cdd5820a7787aab8
39,418
ipynb
Jupyter Notebook
Chap-6-ex-6-2-2.ipynb
pierreproulx/GCH200
66786aa96ceb2124b96c93ee3d928a295f8e9a03
[ "MIT" ]
1
2018-02-26T16:29:58.000Z
2018-02-26T16:29:58.000Z
Chap-6-ex-6-2-2.ipynb
pierreproulx/GCH200
66786aa96ceb2124b96c93ee3d928a295f8e9a03
[ "MIT" ]
null
null
null
Chap-6-ex-6-2-2.ipynb
pierreproulx/GCH200
66786aa96ceb2124b96c93ee3d928a295f8e9a03
[ "MIT" ]
2
2018-02-27T15:04:33.000Z
2021-06-03T16:38:07.000Z
137.344948
18,320
0.770663
true
753
Qwen/Qwen-72B
1. YES 2. YES
0.880797
0.833325
0.73399
__label__fra_Latn
0.785056
0.543636
```python import numpy as np import qutip as qt import matplotlib.pyplot as plt %matplotlib inline ``` # Dissipative Jaynes-Cummings Model <br> $$ \begin{equation} H = \hbar\omega_{c}a^{\dagger}a + \hbar\omega_{a}\sigma_{-}^{\dagger}\sigma_{-}+\hbar g(a^{\dagger}\sigma_{-}+a\sigma_{-}^{\dagger}) \end{equation} $$ Build up the Hamiltonian in the same way as before: ```python N = 15 # number of cavity fock states wc = 1.0 * 2 * np.pi # cavity frequency wa = 1.0 * 2 * np.pi # atom frequency g = 0.05 * 2 * np.pi # coupling strength # operators a = qt.tensor(qt.destroy(N), qt.qeye(2)) # qeye is short for identity sm = qt.tensor(qt.qeye(N), qt.sigmam()) # Hamiltonian H = wc * a.dag() * a + wa * sm.dag() * sm + g * (a.dag() * sm + a * sm.dag()) ``` Now define the collapse operators: ```python kappa = 0.005 # cavity dissipation rate gamma = 0.05 # atom dissipation rate c_ops = [np.sqrt(kappa)*a, np.sqrt(gamma) * sm] ``` Repeat the initial Jaynes-Cummings simulation, now with dissipation. ```python # initial state psi0 = qt.tensor(qt.basis(N,0), qt.basis(2,0)) # start with an excited atom, cavity ground # Times over which to evaluate system tlist = np.linspace(0,25,101) # Make list of expectation value operators e_ops = [a.dag() * a, sm.dag() * sm] # Perform evolution output = qt.mesolve(H, psi0, tlist, c_ops, e_ops) ``` ```python plt.figure(figsize=(8,6)) plt.plot(tlist,output.expect[0], lw=3) plt.plot(tlist,output.expect[1], lw=3) plt.legend(['$\langle a^{\dagger}a\\rangle$', '$\langle \sigma_{-}^{\dagger}\sigma_{-}\\rangle$'], loc=1,fontsize=16) plt.xlabel('Time', fontsize=16) plt.ylabel('Occupation Number', fontsize=16); ``` ## State of the cavity Output the states from the master equation solver, and look at the Wigner function of the cavity. ```python output = qt.mesolve(H, psi0, tlist, c_ops, []) ``` ```python t_idx = np.where([tlist == t for t in [0.0, 5, 15, 25]])[1] cavity_rho_list = [output.states[kk].ptrace(0) for kk in t_idx] ``` ```python # loop over the list of density matrices xvec = np.linspace(-3,3,200) fig, axes = plt.subplots(1,len(cavity_rho_list), sharex=True, figsize=(4*len(cavity_rho_list),3)) for idx, rho in enumerate(cavity_rho_list): # calculate its wigner function W = qt.wigner(rho, xvec, xvec) # plot its wigner function cf = axes[idx].contourf(xvec, xvec, W, 100, vmin=-0.4, vmax=0.4, cmap='RdBu') axes[idx].set_title(r"$t = %.1f$" % tlist[t_idx][idx], fontsize=16) cbar = fig.colorbar(cf, ax=axes.ravel().tolist()) ``` ## What does the superoperator look like? ```python plt.figure(figsize=(7,7)) plt.spy(qt.liouvillian(H,c_ops).data,ms=2) ``` It is easy to see that the Liouvillian superoperator is non-Hermitian. This is to be expected, as our evolution traces over the environment which, as we have seen, leads to mixed states independent of the purity of the initial state; something that cannot happen in unitary evolution: $$ \mathrm{Tr}\left[\rho(t)^{2}\right] = \mathrm{Tr}\left[U(t)\rho(0)U^{\dagger}(t)U(t)\rho(0)U^{\dagger}(t)\right] = \mathrm{Tr}\left[\rho(0)^{2}\right] $$ ```python ```
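The statement about purity is easy to check numerically: under the dissipative (Lindblad) evolution the purity $\mathrm{Tr}[\rho(t)^2]$ of the full state drops below one, which cannot happen under unitary evolution. The sketch below rebuilds a smaller copy of the same system so it is self-contained; the reduced cutoff `N = 10` is an assumption made only to keep it light.

```python
import numpy as np
import qutip as qt

# Smaller copy of the Jaynes-Cummings system defined above.
N = 10
wc = wa = 1.0 * 2 * np.pi
g = 0.05 * 2 * np.pi
a = qt.tensor(qt.destroy(N), qt.qeye(2))
sm = qt.tensor(qt.qeye(N), qt.sigmam())
H = wc * a.dag() * a + wa * sm.dag() * sm + g * (a.dag() * sm + a * sm.dag())
c_ops = [np.sqrt(0.005) * a, np.sqrt(0.05) * sm]
psi0 = qt.tensor(qt.basis(N, 0), qt.basis(2, 0))
tlist = np.linspace(0, 25, 101)

# Dissipative evolution: keep the full density matrices and compute Tr[rho^2].
states = qt.mesolve(H, psi0, tlist, c_ops, []).states
purity = [np.real((rho * rho).tr()) for rho in states]

print(purity[0], min(purity))  # starts at 1 for the pure initial state, then decays
```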
664b007d3e8fdd1e015a1469fdcab3f2e443a89f
125,429
ipynb
Jupyter Notebook
notebooks/dissipative_jaynes_cummings.ipynb
nonhermitian/dartmouth_2017
d4b4bcb422f85389228494e3996e4435a071945b
[ "BSD-3-Clause" ]
null
null
null
notebooks/dissipative_jaynes_cummings.ipynb
nonhermitian/dartmouth_2017
d4b4bcb422f85389228494e3996e4435a071945b
[ "BSD-3-Clause" ]
null
null
null
notebooks/dissipative_jaynes_cummings.ipynb
nonhermitian/dartmouth_2017
d4b4bcb422f85389228494e3996e4435a071945b
[ "BSD-3-Clause" ]
null
null
null
419.494983
53,416
0.931069
true
1,034
Qwen/Qwen-72B
1. YES 2. YES
0.766294
0.705785
0.540839
__label__eng_Latn
0.759058
0.094879
# Initialization ```python %matplotlib inline %config InlineBackend.figure_format = 'svg' import numpy as np import scqubits as scq ``` # Transmon qubit The transmon qubit and the Cooper pair box are described by the Hamiltonian \begin{equation} H_\text{CPB}=4E_\text{C}(\hat{n}-n_g)^2+\frac{1}{2}E_\text{J}\sum_n(|n\rangle\langle n+1|+\text{h.c.}), \end{equation} expressed in the charge basis. Here, $E_C$ is the charging energy, $E_J$ the Josephson energy, and $n_g$ the offset charge. Internal representation of the Hamiltonian proceeds via the charge basis with charge-number cutoff specified by `ncut`, which must be chosen sufficiently large for convergence. An instance of the transmon qubit is initialized as follows: ```python tmon = scq.Transmon( EJ=30.02, EC=1.2, ng=0.3, ncut=31 ) ``` Or, alternatively, we can use the graphical user interface (if the `ipywidgets` package is installed) ```python tmon = scq.Transmon.create() ``` HBox(children=(VBox(children=(HBox(children=(Label(value='EJ'), FloatText(value=15.0)), layout=Layout(justify_… Output() ## Calculating and plotting energy levels and eigenfunctions The energy eigenvalues of the transmon Hamiltonian for the given set of model parameters are obtained by calling the `eigenvals()` method. The optional parameter `evals_count` specifies the sought number of eigenenergies. ```python tmon.eigenvals(evals_count=12) ``` array([-12.07703386, -6.39445819, -1.05664942, 3.89594699, 8.34157217, 12.54032139, 14.69273601, 20.71739651, 20.85721955, 30.96770401, 30.96906144, 43.86230239]) To plot eigenenergies as a function of one of the qubit parameters (`EJ`, `EC`, or `ng`), we generate an array of values for the desired parameter and call the method `plot_evals_vs_paramvals`: ```python ng_list = np.linspace(-2, 2, 220) tmon.plot_evals_vs_paramvals('ng', ng_list, evals_count=6, subtract_ground=False); ``` HBox(children=(FloatProgress(value=0.0, description='Spectral data', max=220.0, style=ProgressStyle(descriptio… ```python tmon.plot_n_wavefunction(esys=None, which=0, mode='real'); ``` ```python tmon.plot_wavefunction(esys=None, which=0, mode='real'); ``` ```python tmon.plot_phi_wavefunction(which=[0, 1, 4, 5], mode='abs_sqr'); ``` ## Calculating and visualizing matrix elements Matrix elements can be calculated by referring to the `Transmon` operator methods in string form. For instance, `.n_operator` yields the charge operator: ```python tmon.matrixelement_table('n_operator', evals_count=3) ``` array([[ 0.2999574 , -0.90286728, -0.00126536], [-0.90286728, 0.30195231, -1.21191324], [-0.00126536, -1.21191324, 0.26241099]]) Visualizing matrix elements is accomplished by calling the `.plot_matrixelements` method: ```python tmon.plot_matrixelements('n_operator', evals_count=10); ``` ```python tmon.plot_matrixelements('cos_phi_operator', evals_count=10, show3d=False, show_numbers=True); ``` ```python fig, ax = tmon.plot_matelem_vs_paramvals('n_operator', 'ng', ng_list, select_elems=4, filename='./data/test'); ``` HBox(children=(FloatProgress(value=0.0, description='Spectral data', max=220.0, style=ProgressStyle(descriptio… HBox(children=(FloatProgress(value=0.0, max=220.0), HTML(value=''))) ## Tunable Transmon An important modification of the transmon qubit is the replacement of the Josephson junction by a SQUID loop of two Josephson junctions. A flux threaded through this loop can then be used to change the effective Josephson energy of the circuit and thus make the transmon tunable. 
The resulting Hamiltonian is \begin{equation} H_\text{CPB}=4E_\text{C}(\hat{n}-n_g)^2+\frac{1}{2}E_\text{J,eff}(\Phi_\text{ext})\sum_n(|n\rangle\langle n+1|+\text{h.c.}), \end{equation} expressed in the charge basis. Here, parameters are as above except for the effective Josephson energy $E_\text{J,eff}(\Phi_\text{ext}) = E_{\text{J,max}} \sqrt{\cos^2(\pi\Phi_\text{ext}/\Phi_0)+ d^2 \sin^2 (\pi\Phi_\text{ext}/\Phi_0)}$, where $E_\text{J,max} = E_\text{J1} + E_\text{J2}$ is the maximum Josephson energy, and $d=(E_\text{J1}-E_\text{J2})/(E_\text{J1}+E_\text{J2})$ is the relative junction asymmetry. An instance of a tunable transmon qubit is obtained like this: ```python tune_tmon = scq.TunableTransmon( EJmax=50.0, EC=0.5, d=0.01, flux=0.0, ng=0.0, ncut=30 ) ``` Alternatively, the `.create` method can again be used if `ipywidgets` is available: ```python tune_tmon = scq.TunableTransmon.create() ``` HBox(children=(VBox(children=(HBox(children=(Label(value='EJmax'), FloatText(value=30.0)), layout=Layout(justi… Output() ```python flux_list = np.linspace(-1.1, 1.1, 220) tune_tmon.plot_evals_vs_paramvals('flux', flux_list, subtract_ground=True); ``` HBox(children=(FloatProgress(value=0.0, description='Spectral data', max=220.0, style=ProgressStyle(descriptio… ```python ```
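To visualize how the external flux tunes the qubit, the effective Josephson energy above can be evaluated directly. A minimal sketch using only NumPy and Matplotlib; `EJmax = 50` and `d = 0.01` mirror the values used for `tune_tmon`.

```python
import numpy as np
import matplotlib.pyplot as plt

EJmax, d = 50.0, 0.01                  # same values as tune_tmon
flux = np.linspace(-1.1, 1.1, 221)     # external flux in units of Phi_0

# E_J,eff(Phi_ext) for an asymmetric SQUID loop.
EJ_eff = EJmax * np.sqrt(np.cos(np.pi * flux)**2 + d**2 * np.sin(np.pi * flux)**2)

plt.plot(flux, EJ_eff)
plt.xlabel(r"$\Phi_\mathrm{ext}/\Phi_0$")
plt.ylabel(r"$E_\mathrm{J,eff}$")
plt.show()
```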
930062471db22743ec4b4ca96c60016a933f7b37
764,893
ipynb
Jupyter Notebook
examples/demo_transmon.ipynb
IlyaLSMmisis/scqubits-1
c4915c998c2d1ef3348db0e4c423e74c7181da40
[ "BSD-3-Clause" ]
null
null
null
examples/demo_transmon.ipynb
IlyaLSMmisis/scqubits-1
c4915c998c2d1ef3348db0e4c423e74c7181da40
[ "BSD-3-Clause" ]
null
null
null
examples/demo_transmon.ipynb
IlyaLSMmisis/scqubits-1
c4915c998c2d1ef3348db0e4c423e74c7181da40
[ "BSD-3-Clause" ]
null
null
null
45.545612
41,515
0.507651
true
1,559
Qwen/Qwen-72B
1. YES 2. YES
0.859664
0.828939
0.712609
__label__eng_Latn
0.887341
0.49396
```python import numpy as np import pandas as pd import linearsolve as ls import matplotlib.pyplot as plt plt.style.use('classic') %matplotlib inline ``` # Class 14: Prescott's Real Business Cycle Model I In this notebook, we'll consider a centralized version of the model from pages 11-17 in Edward Prescott's article "Theory Ahead of Business Cycle Measurement" in the Fall 1986 issue of the Federal Reserve Bank of Minneapolis' *Quarterly Review* (link to article: https://www.minneapolisfed.org/research/qr/qr1042.pdf). The model is just like the RBC model that we studied in the previous lecture, except that now we include an endogenous labor supply. ## Prescott's RBC Model with Labor The equilibrium conditions for Prescott's RBC model with labor are: \begin{align} \frac{1}{C_t} & = \beta E_t \left[\frac{\alpha A_{t+1}K_{t+1}^{\alpha-1}L_{t+1}^{1-\alpha} +1-\delta }{C_{t+1}}\right]\\ \frac{\varphi}{1-L_t} & = \frac{(1-\alpha)A_tK_t^{\alpha}L_t^{-\alpha}}{C_t} \\ Y_t & = A_t K_t^{\alpha}L_t^{1-\alpha}\\ K_{t+1} & = I_t + (1-\delta) K_t\\ Y_t & = C_t + I_t\\ \log A_{t+1} & = \rho \log A_t + \epsilon_{t+1} \end{align} where $\epsilon_{t+1} \sim \mathcal{N}(0,\sigma^2)$. The objective is to use `linearsolve` to simulate impulse responses to a TFP shock using the following parameter values for the simulation: | $\rho$ | $\sigma$ | $\beta$ | $\varphi$ | $\alpha$ | $\delta $ | |--------|----------|---------|-----------|----------|-----------| | 0.75 | 0.006 | 0.99 | 1.7317 | 0.35 | 0.025 | The value for $\beta$ implies a steady state (annualized) real interest rate of about 4 percent: \begin{align} 4 \cdot \left(\beta^{-1} - 1\right) & \approx 0.04040 \end{align} $\rho = 0.75$ and $\sigma = 0.006$ are consistent with the statistical properties of the cyclical component of TFP in the US. $\alpha$ is set so that, consistent with the long-run average of the US, the labor share of income is about 65 percent of GDP. The depreciation rate of capital is calibrated to be about 10 percent annually. Finally, $\varphi$ was chosen last to ensure that in the steady state households allocate about 33 percent of their available time to labor. ## Model Preparation Before proceeding, let's recast the model in the form required for `linearsolve`. Write the model with all variables moved to the left-hand side of the equations and dropping the expectations operator $E_t$ and the exogenous shock $\epsilon_{t+1}$: \begin{align} 0 & = \beta\left[\frac{\alpha A_{t+1}K_{t+1}^{\alpha-1}L_{t+1}^{1-\alpha} +1-\delta }{C_{t+1}}\right] - \frac{1}{C_t}\\ 0 & = \frac{(1-\alpha)A_tK_t^{\alpha}L_t^{-\alpha}}{C_t} - \frac{\varphi}{1-L_t}\\ 0 & = A_t K_t^{\alpha}L_t^{1-\alpha} - Y_t\\ 0 & = I_t + (1-\delta) K_t - K_{t+1}\\ 0 & = C_t + I_t - Y_t\\ 0 & = \rho \log A_t - \log A_{t+1} \end{align} Remember, capital and TFP are called *state variables* because their $t+1$ values are predetermined. Output, consumption, investment, and labor are called *costate* or *control* variables. Note that the model has six equations in six endogenous variables. ## Initialization, Approximation, and Solution The next several cells initialize the model in `linearsolve` and then approximate and solve it. ```python # Create a variable called 'parameters' that stores the model parameter values in a Pandas Series # Print the model's parameters ``` ```python # Create a variable called 'varNames' that stores the variable names in a list with state variables ordered first # Create a variable called 'shockNames' that stores an exogenous shock name for each state variable.
``` ```python # Define a function that evaluates the equilibrium conditions of the model solved for zero. PROVIDED def equilibrium_equations(variables_forward,variables_current,parameters): # Parameters. PROVIDED p = parameters # Current variables. PROVIDED cur = variables_current # Forward variables. PROVIDED fwd = variables_forward # Euler equation # Labor-leisure choice # Production function # Capital evolution # Market clearing # Exogenous TFP # Stack equilibrium conditions into a numpy array ``` Next, initialize the model using `ls.model`, which takes the following required arguments: * `equations` * `nstates` * `varNames` * `shockNames` * `parameters` ```python # Initialize the model into a variable named 'rbc_model' ``` ```python # Compute the steady state numerically using the .compute_ss() method of rbc_model # Print the computed steady state ``` ```python # Find the log-linear approximation around the non-stochastic steady state and solve using the .approximate_and_solve() method of rbc_model ``` ## Impulse Responses Compute 26-period impulse responses of the model's variables to a 0.01 unit shock to TFP in period 5. ```python # Compute impulse responses # Print the first 10 rows of the computed impulse responses to the TFP shock ``` Construct a $2\times3$ grid of plots of simulated TFP, output, labor, consumption, investment, and capital. Be sure to multiply simulated values by 100 so that vertical axis units are in "percent deviation from steady state." ```python # Create figure. PROVIDED fig = plt.figure(figsize=(18,8)) # Create upper-left axis. PROVIDED ax = fig.add_subplot(2,3,1) # Create upper-center axis. PROVIDED ax = fig.add_subplot(2,3,2) # Create upper-right axis. PROVIDED ax = fig.add_subplot(2,3,3) # Create lower-left axis. PROVIDED ax = fig.add_subplot(2,3,4) # Create lower-center axis. PROVIDED ax = fig.add_subplot(2,3,5) # Create lower-right axis. PROVIDED ax = fig.add_subplot(2,3,6) ```
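For readers working through the blanks above, here is one possible sketch of the first few steps, not an official answer key. It relies on the imports from the first cell, follows the argument names the notebook itself lists for `ls.model` and the `p`/`cur`/`fwd` pattern in the provided skeleton, and assumes attribute-style access to the parameter and variable Series that `linearsolve` passes in.

```python
# Parameter values from the table above, stored in a Pandas Series.
parameters = pd.Series({
    'rho': 0.75, 'sigma': 0.006, 'beta': 0.99,
    'phi': 1.7317, 'alpha': 0.35, 'delta': 0.025
})

# State variables (TFP and capital) first, then the control variables.
varNames = ['a', 'k', 'y', 'c', 'i', 'l']
shockNames = ['e_a', 'e_k']

def equilibrium_equations(variables_forward, variables_current, parameters):
    p = parameters
    fwd = variables_forward
    cur = variables_current

    # Euler equation
    euler = p.beta * (p.alpha * fwd.a * fwd.k**(p.alpha - 1) * fwd.l**(1 - p.alpha)
                      + 1 - p.delta) / fwd.c - 1 / cur.c
    # Labor-leisure choice
    labor = (1 - p.alpha) * cur.a * cur.k**p.alpha * cur.l**(-p.alpha) / cur.c - p.phi / (1 - cur.l)
    # Production function
    production = cur.a * cur.k**p.alpha * cur.l**(1 - p.alpha) - cur.y
    # Capital evolution
    capital = cur.i + (1 - p.delta) * cur.k - fwd.k
    # Market clearing
    market = cur.c + cur.i - cur.y
    # Exogenous TFP
    tfp = p.rho * np.log(cur.a) - np.log(fwd.a)

    # Stack equilibrium conditions into a numpy array
    return np.array([euler, labor, production, capital, market, tfp])

rbc_model = ls.model(equations=equilibrium_equations,
                     nstates=2,
                     varNames=varNames,
                     shockNames=shockNames,
                     parameters=parameters)
```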
0fce6a53592f51ed64299d8c85188f77d4928282
12,640
ipynb
Jupyter Notebook
Lecture Notebooks/Econ126_Class_14_blank.ipynb
pmezap/computational-macroeconomics
b703f46176bb16e712badf752784f8a7b996cdb1
[ "MIT" ]
30
2020-02-29T06:09:03.000Z
2022-03-25T13:14:13.000Z
Lecture Notebooks/Econ126_Class_14_blank.ipynb
letsgoexploring/computational-macroeconomics
b703f46176bb16e712badf752784f8a7b996cdb1
[ "MIT" ]
null
null
null
Lecture Notebooks/Econ126_Class_14_blank.ipynb
letsgoexploring/computational-macroeconomics
b703f46176bb16e712badf752784f8a7b996cdb1
[ "MIT" ]
18
2019-09-24T07:48:49.000Z
2022-03-22T21:36:30.000Z
38.419453
494
0.396677
true
1,575
Qwen/Qwen-72B
1. YES 2. YES
0.79053
0.727975
0.575487
__label__eng_Latn
0.957012
0.175378
# Interest Rates ## Libraries ```python from finrisk import QC_Financial_3 as Qcf from IPython.display import Image from IPython.core.display import HTML ``` ## Rate Conventions Consider the following situation: - On 10-09-2020 we log into the website of a local bank to contract a 30-day time deposit in CLP for 1 MM CLP. - The bank's page offers a rate of 0.11% for that term. - Is it clear to us what deal we are actually being offered? ```python Image(url="img/20200910_dap_online_0.png", width=500, height=400) ``` When the **Continuar** (Continue) button is pressed, the following result is displayed: Let us see how this result is obtained. Our request was: ```python monto_inicial = 1000000 plazo = 30 ``` The initial screen indicates that the rate for this term is: ```python valor_tasa = .0011 ``` Why is the term of the simulated operation 33 days? ```python fecha_inicial = Qcf.QCDate(10, 9, 2020) fecha_final = fecha_inicial.add_days(plazo) print(fecha_final.description(True)) print(fecha_final.week_day()) ``` 10-10-2020 SAT Adding 30 days to 10-09-2020 we reach 10-10-2020, which is a Saturday (`SAT`). To correct this, let us take holidays into account in the calculation. ```python calendario = Qcf.BusinessCalendar(fecha_inicial, 1) fecha_final = calendario.next_busy_day(fecha_final) print(fecha_final.description(True)) print(fecha_final.week_day()) ``` 12-10-2020 MON We see that, moving to the next business day, we reach 12-10-2020. However, the final date of the simulated deposit is 13-10-2020. The calendar object treats Saturdays and Sundays as holidays by default, but Monday 12-10-2020 is also a holiday and has to be added to the calendar manually. ```python calendario.add_holiday(fecha_final) # 12-10-2020 is added to the calendar as a holiday. fecha_final = calendario.next_busy_day(fecha_final) # The final date is recomputed. print(f'Fecha final: {fecha_final.description(True)}.') print(f'Día final: {fecha_final.week_day()}.') plazo_op = fecha_inicial.day_diff(fecha_final) print(f'Plazo: {plazo_op} días.') ``` Fecha final: 13-10-2020. Día final: TUE. Plazo: 33 días. How is the interest computed? The rate convention in Chile for time deposits in CLP is Linear Act/30. ```python yf = Qcf.QCAct30() # Act/30 wf = Qcf.QCLinearWf() # Linear tasa = Qcf.QCInterestRate(valor_tasa, yf, wf) # Rate with that value and convention ``` Thus, the final amount is computed as: ```python monto_final = monto_inicial * tasa.wf(fecha_inicial, fecha_final) # wf: capitalization factor print(f'Monto final: {monto_final:,.0f}') ``` Monto final: 1,001,210 Let us carry out this calculation explicitly: ```python print(f'Monto final: {monto_inicial * (1 + valor_tasa * plazo_op / 30): ,.0f}') ``` Monto final: 1,001,210 The final amount, then, is computed with the following formula: $$ \begin{equation} M_{final}=M_{inicial}\cdot\left(1+r\cdot\frac{fecha_{final}-fecha_{inicial}}{30}\right) \end{equation} $$ where: - $M_{inicial}$ is the initial amount of the deposit. - $M_{final}$ is the final amount of the deposit. - $r$ is the value of the rate offered by the bank. - $fecha_{final}-fecha_{inicial}$ is the actual difference in days between the initial and final dates of the deposit. Why is this formula used? By convention: - in Chile the rates of time deposits in CLP are quoted this way. - The formula has several components which, taken together, define the type of interest rate associated with deposits in CLP. 
Abstracting the structure of formula (1), we can write: $$ \begin{equation} M_{final}=M_{inicial}\cdot wf(valorTasa, yf(fecha_{inicial},fecha_{final})) \end{equation} $$ Where: - $yf$ is the *year fraction* function, which determines the period of time that elapses between the two dates, and - $wf$ is the *capitalization factor* function, which establishes the factor applied to the initial amount of the investment to obtain its final amount. For the year-fraction calculation there are different ways of counting days: - Actual (real days, i.e. the difference of dates we just saw) - 30-day basis (all months have 30 days) and different ways of representing a rate *period*: - 30-day basis (months) - 360-day year - 365-day year There are also different ways of computing the capitalization factor, among them: - Linear: $$ \begin{equation} 1+r_{lineal}\cdot yf(fecha_{inicial},fecha_{final}) \end{equation} $$ - Compound $$ \begin{equation} \left(1+r_{compuesto}\right)^{yf(fecha_{inicial},fecha_{final})} \end{equation} $$ - Exponential $$ \begin{equation} \exp\left(r_{exponencial}\cdot yf\left(fecha_{inicial},fecha_{final}\right)\right) \end{equation} $$ Rate term: - In Chile one usually speaks of the 30-day rate, the 90-day rate, the 180-day rate, etc. As we saw in the previous example, we cannot have a 30-day rate every day (because the maturity can fall on a non-business day). - This makes it difficult to compare rates at the same term over time (in fact, we cannot observe a rate at a given term every day). - In developed markets this is solved through the concept of a standard term, or Tenor. Tenor - For example, instead of the 30-day rate one speaks of the 1-month rate (1M), which means, for example (it is a market convention), that the rate runs to the same day as today, but in the following month. - If that day is a non-business day, a convention is adopted, which may be to move to the next business day. In the same way we also have 3M, 6M, 1Y (one year), etc. ```python un_mes = Qcf.Tenor('1M') print(f'Un mes sin corrección: {fecha_inicial.add_months(un_mes.get_months()).description(True)}') temp = calendario.next_busy_day(fecha_inicial.add_months(un_mes.get_months())) print(f'Un mes con corrección al día hábil siguiente: {temp.description(True)}') ``` Un mes sin corrección: 10-10-2020 Un mes con corrección al día hábil siguiente: 13-10-2020 Consider the following investment. - Initial date: 10-09-2020 - Term of the investment: 1Y (next-business-day correction). - On the final date I receive 10 CLP for every 100 CLP invested. - Given the term of the investment, it is fully defined by its capitalization factor $\frac{110}{100}=1.1$. ## Exercise Using the `QC_Financial_3` library, replicate the values in the following table: ```python Image(url="img/tabla_ejercicio.png", width=550, height=440) ```
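As a small illustration of equations (3)-(5), the three capitalization factors can be computed directly for the deposit above (33 actual days, Act/30, 0.11%). Only the linear one corresponds to the actual CLP deposit convention; the compound and exponential values are shown purely for comparison.

```python
import numpy as np

days = 33                 # actual days between the two dates
yf = days / 30            # Act/30 year fraction, as in the example
rate = 0.0011             # 0.11%
principal = 1_000_000

wf_linear = 1 + rate * yf            # Eq. (3) -- the convention actually used
wf_compound = (1 + rate) ** yf       # Eq. (4)
wf_exponential = np.exp(rate * yf)   # Eq. (5)

print(f"linear:      {principal * wf_linear:,.2f}")   # ~1,001,210, matching the simulation
print(f"compound:    {principal * wf_compound:,.2f}")
print(f"exponential: {principal * wf_exponential:,.2f}")
```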
806e3e5ee9cd265d11e058e39436dac257059025
14,600
ipynb
Jupyter Notebook
01_tasas_de_interes.ipynb
MagicalUndeadToast/mif-2020
862d5beae62edb889ed178a8953b0d30f58fe74b
[ "Unlicense" ]
null
null
null
01_tasas_de_interes.ipynb
MagicalUndeadToast/mif-2020
862d5beae62edb889ed178a8953b0d30f58fe74b
[ "Unlicense" ]
null
null
null
01_tasas_de_interes.ipynb
MagicalUndeadToast/mif-2020
862d5beae62edb889ed178a8953b0d30f58fe74b
[ "Unlicense" ]
null
null
null
25
320
0.560685
true
2,003
Qwen/Qwen-72B
1. YES 2. YES
0.907312
0.800692
0.726478
__label__spa_Latn
0.980624
0.526183
# Hello world for linear regression in PyMC3 This is meant to be a first notebook for exploring how to use PyMC3 to do: * MAP estimation * MCMC sampling using the No-U-Turn (NUTS) extension of Hamiltonian Monte Carlo It is meant to allow side-by-side comparison with Stan. Inspired by: * Bob Carpenter's blog post: https://statmodeling.stat.columbia.edu/2017/05/31/compare-stan-pymc3-edward-hello-world/ * The "hello world" example in the PyMC3 journal article: Salvatier, Wiecki, and Fonnesbeck. "Probabilistic programming in Python using PyMC3". https://peerj.com/articles/cs-55.pdf ```python import numpy as np import pandas as pd ``` ```python import matplotlib.pyplot as plt import seaborn as sns sns.set_style('whitegrid'); sns.set_context("notebook", font_scale=1.5, rc={"lines.linewidth": 2.0}) %matplotlib inline ``` # Import PyMC3 If you need to install it first, using the `spr_2020s_env` conda environment just do: ```console $ conda activate spr_2020s_env $ conda install -c conda-forge pymc3 ``` ```python import pymc3 ``` # Model as MATH Likelihood: \begin{align} p(y_{1:N} | x_{1:N}, \alpha, \beta, \sigma) &= \prod_{n=1}^{N} \text{NormalPDF}(y_n \mid \alpha + \beta_1 x_{n1} + \beta_2 x_{n2}, \sigma^2) \end{align} Prior on the standard deviation $\sigma$ of the likelihood: \begin{align} p(\sigma) &= \text{HalfNormalPDF}(\sigma | 0, 1) = \begin{cases} 2 \cdot \text{NormalPDF}(\sigma | 0, 1) & \sigma > 0 \\ 0.0 & \sigma \leq 0 \end{cases} \end{align} Priors on regression coefficients: \begin{align} p(\alpha) &= \text{NormalPDF}(0, 10) \\ p(\beta_1) &= \text{NormalPDF}(0, 10) \\ p(\beta_2) &= \text{NormalPDF}(0, 10) \\ \end{align} # Simulate data from the model ```python # Initialize random number generator np.random.seed(123) # True regression coefficients alpha = 1.0 beta = np.asarray([1.0, 1.337]) # True regression std dev sigma = 0.3 # Size of dataset N = 100 # Data Features (input variable) x1_N = np.random.randn(N) x2_N = np.random.randn(N) x_N2 = np.hstack([x1_N[:,np.newaxis], x2_N[:,np.newaxis]]) # Simulate outcome variable y_N = alpha + beta[0]*x1_N + beta[1]*x2_N + np.random.randn(N) * sigma true_params = dict( alpha=alpha, beta=beta, sigma=sigma) ``` # Plot of observed data ```python _, axgrid = plt.subplots(nrows=1, ncols=2, figsize=(8,3), sharex=True, sharey=True); axgrid[0].plot(x1_N, y_N, 'k.'); axgrid[0].set_xlabel('x1'); axgrid[0].set_ylabel('y'); axgrid[1].plot(x2_N, y_N, 'b.'); axgrid[1].set_xlabel('x2'); axgrid[1].set_ylabel('y'); ``` # Model specified as a PyMC3 object ```python my_model_N10 = pymc3.Model() with my_model_N10: # Priors for unknown model parameters alpha = pymc3.Normal('alpha', mu=0, sd=10) beta = pymc3.Normal('beta', mu=0, sd=10, shape=2) sigma = pymc3.HalfNormal('sigma', sd=1) # Likelihood (sampling distribution) of observations Y_obs = pymc3.Normal('Y_obs', mu=(alpha + beta[0] * x1_N[:10] + beta[1] * x2_N[:10]), sd=sigma, observed=y_N[:10]) ``` ```python my_model_N100 = pymc3.Model() with my_model_N100: # Priors for unknown model parameters alpha = pymc3.Normal('alpha', mu=0, sd=10) beta = pymc3.Normal('beta', mu=0, sd=10, shape=2) sigma = pymc3.HalfNormal('sigma', sd=1) # Likelihood (sampling distribution) of observations Y_obs = pymc3.Normal('Y_obs', mu=(alpha + beta[0] * x1_N + beta[1] * x2_N), sd=sigma, observed=y_N) ``` ```python print(my_model_N100.basic_RVs) ``` [alpha, beta, sigma_log__, Y_obs] # Setup: Prepare for plotting results ```python def plot_traces(fit): fig, ax_grid = plt.subplots(nrows=2, ncols=2, sharex=True, sharey=True); for ax, var_name, dim in zip( ax_grid.flatten(), ['alpha',
'sigma', 'beta', 'beta'], [None, None, 0, 1]): var_samples_S = fit[var_name] true_val = true_params[var_name] if dim is not None: var_samples_S = var_samples_S[:, dim] true_val = true_val[dim] var_name = var_name + "[%d]" % dim S = var_samples_S.size ax.plot(var_samples_S, 'b.', markersize=4, alpha=0.3, label='samples') ax.plot(true_val * np.ones(S), 'r-', label='true_value') ax.set_ylabel(var_name) ax.legend(bbox_to_anchor=(1.1, 1.0)) ax.set_ylim([0, 2.0]); ``` # Run MAP estimator for N=10 dataset ```python map_N10 = pymc3.find_MAP(model=my_model_N10, maxeval=1000) ``` /Users/mhughes/miniconda3/envs/spr_2020s_env/lib/python3.7/site-packages/pymc3/tuning/starting.py:61: UserWarning: find_MAP should not be used to initialize the NUTS sampler, simply call pymc3.sample() and it will automatically initialize NUTS in a better way. warnings.warn('find_MAP should not be used to initialize the NUTS sampler, simply call pymc3.sample() and it will automatically initialize NUTS in a better way.') logp = -8.0697, ||grad|| = 0.71175: 100%|██████████| 28/28 [00:00<00:00, 2509.63it/s] ```python for key, arr in map_N10.items(): print(key, arr) ``` alpha 1.069344037625922 beta [1.00373515 1.44519906] sigma_log__ -1.607230117606145 sigma 0.2004420467603088 # Run MAP estimator for N=100 dataset ```python map_N100 = pymc3.find_MAP(model=my_model_N100, maxeval=1000) ``` /Users/mhughes/miniconda3/envs/spr_2020s_env/lib/python3.7/site-packages/pymc3/tuning/starting.py:61: UserWarning: find_MAP should not be used to initialize the NUTS sampler, simply call pymc3.sample() and it will automatically initialize NUTS in a better way. warnings.warn('find_MAP should not be used to initialize the NUTS sampler, simply call pymc3.sample() and it will automatically initialize NUTS in a better way.') logp = -28.138, ||grad|| = 0.0017975: 100%|██████████| 22/22 [00:00<00:00, 2838.26it/s] ```python for key, arr in map_N100.items(): print(key, arr) ``` alpha 0.9720043305924299 beta [0.98456717 1.34380165] sigma_log__ -1.237493384514688 sigma 0.290110502779077 ```python for key, arr in true_params.items(): print(key, arr) ``` alpha 1.0 beta [1. 1.337] sigma 0.3 ```python ``` # Run MCMC sampler for N=10 dataset ```python with my_model_N10: start = map_N10 # instantiate sampler mcmc_step = pymc3.NUTS(scaling=start) # draw 1000 posterior samples trace = pymc3.sample( 1000, mcmc_step, start=start, chains=1, tune=500, discard_tuned_samples=True) ``` Sequential sampling (1 chains in 1 job) NUTS: [sigma, beta, alpha] 100%|██████████| 1500/1500 [00:01<00:00, 940.68it/s] Only one chain was sampled, this makes it impossible to run some convergence checks ```python plot_traces(trace) ``` # Run MCMC sampler for N=100 dataset ```python with my_model_N100: start = map_N10 # instantiate sampler mcmc_step = pymc3.NUTS(scaling=start) # draw 1000 posterior samples trace_N100 = pymc3.sample( 1000, mcmc_step, start=start, chains=1, tune=500, discard_tuned_samples=True) ``` Sequential sampling (1 chains in 1 job) NUTS: [sigma, beta, alpha] 100%|██████████| 1500/1500 [00:01<00:00, 1371.11it/s] Only one chain was sampled, this makes it impossible to run some convergence checks ```python plot_traces(trace_N100) ``` ```python ```
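A small follow-up that summarizes the posterior numerically, assuming the `trace_N100` object sampled above is still in scope. The 95% interval below is a plain percentile interval computed with NumPy, not PyMC3's HPD interval.

```python
import numpy as np

# Scalar parameters.
for name in ['alpha', 'sigma']:
    samples = trace_N100[name]
    lo, hi = np.percentile(samples, [2.5, 97.5])
    print(f"{name}: mean = {samples.mean():.3f}, 95% interval = ({lo:.3f}, {hi:.3f})")

# Vector-valued beta, shape (num_samples, 2).
beta_samples = trace_N100['beta']
for d in range(2):
    lo, hi = np.percentile(beta_samples[:, d], [2.5, 97.5])
    print(f"beta[{d}]: mean = {beta_samples[:, d].mean():.3f}, 95% interval = ({lo:.3f}, {hi:.3f})")
```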
916561e231925cd4ec003a3a054474851a2eb384
92,569
ipynb
Jupyter Notebook
notebooks/HelloWorldLinearRegression_PyMC3.ipynb
ErliCai/comp136-21s-assignments
da690dc23a09624a16ba96e1b34ef991f4001a6e
[ "MIT" ]
1
2021-02-17T08:09:17.000Z
2021-02-17T08:09:17.000Z
notebooks/HelloWorldLinearRegression_PyMC3.ipynb
ErliCai/comp136-21s-assignments
da690dc23a09624a16ba96e1b34ef991f4001a6e
[ "MIT" ]
2
2021-02-01T21:19:13.000Z
2021-04-17T23:25:22.000Z
notebooks/HelloWorldLinearRegression_PyMC3.ipynb
ErliCai/comp136-21s-assignments
da690dc23a09624a16ba96e1b34ef991f4001a6e
[ "MIT" ]
9
2021-02-08T21:54:43.000Z
2022-02-15T22:11:23.000Z
167.394213
40,668
0.899426
true
2,345
Qwen/Qwen-72B
1. YES 2. YES
0.853913
0.822189
0.702078
__label__eng_Latn
0.529337
0.469493
<a href="https://colab.research.google.com/github/joaochenriques/MCTE_2022/blob/main/ChannelFlows/CpMaxCurves/CpMaxCurves.ipynb" target="_parent"></a> ```python import sympy as sp import numpy as np from scipy.optimize import fsolve, fmin, minimize_scalar, curve_fit import matplotlib.pyplot as mpl ``` ```python import pathlib if not pathlib.Path("mpl_utils.py").exists(): !curl -O https://raw.githubusercontent.com/joaochenriques/MCTE_2022/main/libs/mpl_utils.py &> /dev/null import mpl_utils as mut mut.config_plots() %config InlineBackend.figure_formats = ['svg'] ``` # **Warning** Before running this notebook locus of optimal values of $C_P$ as function of $C_T$ and $\mathrm{Fr}_1$ in the notebook: https://github.com/joaochenriques/MCTE_2022/blob/main/ChannelFlows/DiskActuator/SensitivityAnalysis_V02.ipynb ```python def compute_C_T_and_C_P( Fr4b, Fr1, B ): # These Eqs. are described in the course Lecture Notes ζ4 = (1/2.)*Fr1**2 - 1/2.*Fr4b**2 + 1.0 Fr4t = (Fr1 - Fr4b*ζ4 + np.sqrt(B**2*Fr4b**2 - 2*B*Fr1**2 + 2*B*Fr1*Fr4b \ + B*ζ4**2 - B + Fr1**2 - 2*Fr1*Fr4b*ζ4 + Fr4b**2*ζ4**2))/B ζ4b = (Fr1 - Fr4t*ζ4)/(Fr4b - Fr4t) ζ4t = -(Fr1 - Fr4b*ζ4)/(Fr4b - Fr4t) Fr2t = Fr4t*ζ4t/B C_T = (Fr4b**2 - Fr4t**2)/Fr1**2 C_P = C_T*Fr2t/Fr1 return C_T, C_P def find_minus_C_P( Fr4b, Fr1, B ): # function created to discard the C_T when calling "compute_C_T_and_C_P" C_T, C_P = compute_C_T_and_C_P( Fr4b, Fr1, B ) return -C_P # Minus C_P to allow minimization ``` ```python # Blockage factor B = 0.1 # define Fr1 interval and number of points Fr1_min = 1E-3 Fr1_max = 0.4 Fr1_num = 30 Fr1_opt_vec = np.linspace( Fr1_min, Fr1_max, Fr1_num ) C_P_opt_vec = np.zeros( Fr1_num ) C_T_opt_vec = np.zeros( Fr1_num ) for i, Fr1 in enumerate( Fr1_opt_vec ): res = minimize_scalar( find_minus_C_P, args=(Fr1, B), bounds=[0,1], method='bounded', options={ 'xatol': 1e-08, 'maxiter': 500, 'disp': 1 } ) Fr4b = res.x # optimal value C_T, C_P = compute_C_T_and_C_P( Fr4b, Fr1, B ) C_T_opt_vec[i] = C_T C_P_opt_vec[i] = C_P fig, (ax1, ax2) = mpl.subplots(1,2, figsize=(12, 4.5) ) fig.subplots_adjust( wspace = 0.19 ) ax1.plot( Fr1_opt_vec, C_P_opt_vec, 'o-' ) ax1.set_title( "B = %.2f" % B ) ax1.set_xlabel("$\mathrm{Fr}_1$") ax1.set_ylabel("$C_P^\mathrm{opt}$") ax1.text(-0.17, 1.05, 'a)', transform=ax1.transAxes, size=16, weight='semibold') ax1.grid() ax2.plot( Fr1_opt_vec, C_T_opt_vec, 'ro-' ) ax2.set_title( "B = %.2f" % B ) ax2.set_xlabel("$\mathrm{Fr}_1$") ax2.set_ylabel("$C_T^\mathrm{opt}$") ax2.text(-0.17, 1.05, 'b)', transform=ax2.transAxes, size=16, weight='semibold'); ax2.grid() mpl.savefig('CP_CT_optimal_B%4.2f.pdf' % B, bbox_inches='tight', pad_inches=0.02); ``` ## **Polynomial fitting** Fit a polynomial of the type $$ a x^6 + b x^4 + c x^2 + d$$ to the optimal $C_T$ and $C_P$. This polynomial has only even monomial to avoid double curvature. 
```python def fitting_func( x, a, b, c, d ): x2 = x*x return ( ( ( a * x2 + b ) * x2 + c ) * x2 ) + d C_P_popt, C_P_pcov = curve_fit( fitting_func, Fr1_opt_vec, C_P_opt_vec ) C_T_popt, C_P_pcov = curve_fit( fitting_func, Fr1_opt_vec, C_T_opt_vec ) ``` ## **Optimal $C_T$ and $C_P$** ```python sFr_1, sC_P, sC_T = sp.symbols( "\mathrm{Fr}_1, C_\mathrm{P}, C_\mathrm{T}" ) eqCP = sp.Eq( sC_P, fitting_func( sFr_1, *C_P_popt ) ) sp.expand( eqCP ) ``` $\displaystyle C_\mathrm{P} = 0.950469005167641 \mathrm{Fr}_1^{6} + 0.235239872607705 \mathrm{Fr}_1^{4} + 0.189205872691749 \mathrm{Fr}_1^{2} + 0.731588254027737$ ```python eqCT = sp.Eq( sC_T, fitting_func( sFr_1, *C_T_popt ) ) sp.expand( eqCT ) ``` $\displaystyle C_\mathrm{T} = 3.47964521886281 \mathrm{Fr}_1^{6} + 0.595218942569153 \mathrm{Fr}_1^{4} + 0.474817875529668 \mathrm{Fr}_1^{2} + 1.20709928871966$ ## **Plot results of the optimization and interpolation of the maxima** ```python fit_C_P = fitting_func( Fr1_opt_vec, *C_P_popt ) fit_C_T = fitting_func( Fr1_opt_vec, *C_T_popt ) ``` ```python mpl.plot( Fr1_opt_vec, fit_C_P, label="$(ax^6+bx^4+cx^2+d)$ fitting" ) mpl.plot( Fr1_opt_vec, C_P_opt_vec, 'ro', label="Optimal values" ) mpl.title( "B = %.2f" % B ) mpl.xlabel("$\mathrm{Fr}_1$") mpl.ylabel("$C_P^\mathrm{opt}$") mpl.legend(loc='upper left') mpl.grid() ``` ```python mpl.plot( Fr1_opt_vec, fit_C_T, label="$(ax^6+bx^4+cx^2+d)$ fitting" ) mpl.plot( Fr1_opt_vec, C_T_opt_vec, 'ro', label="Optimal values" ) mpl.title( "B = %.2f" % B ) mpl.xlabel("$\mathrm{Fr}_1$") mpl.ylabel("$C_T^\mathrm{opt}$") mpl.legend(loc='upper left') mpl.grid() ``` ```python ```
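To use the fitted curves without rerunning the optimization, the coefficients printed above can be plugged straight back into `fitting_func`. The tuples below simply restate those printed coefficients in the `(a, b, c, d)` order of the function signature; the sample value `Fr1 = 0.2` is an arbitrary choice for illustration.

```python
def fitting_func(x, a, b, c, d):
    x2 = x * x
    return ((a * x2 + b) * x2 + c) * x2 + d

# Coefficients as printed above for B = 0.10 (x^6, x^4, x^2, constant terms).
C_P_coeffs = (0.950469005167641, 0.235239872607705, 0.189205872691749, 0.731588254027737)
C_T_coeffs = (3.47964521886281, 0.595218942569153, 0.474817875529668, 1.20709928871966)

Fr1 = 0.2
print(f"Fr1 = {Fr1}: C_P_opt ~ {fitting_func(Fr1, *C_P_coeffs):.4f}, "
      f"C_T_opt ~ {fitting_func(Fr1, *C_T_coeffs):.4f}")
```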
7e03e172c1af3f5484dfc19d8bd5629c13a1f41c
201,732
ipynb
Jupyter Notebook
ChannelFlows/CpMaxCurves/CpMaxCurves.ipynb
joaochenriques/MCTE_2022
b999d60b6c4153be5a314da262a18e467cb41d7e
[ "MIT" ]
1
2022-03-06T18:30:41.000Z
2022-03-06T18:30:41.000Z
ChannelFlows/CpMaxCurves/CpMaxCurves.ipynb
joaochenriques/MCTE_2022
b999d60b6c4153be5a314da262a18e467cb41d7e
[ "MIT" ]
null
null
null
ChannelFlows/CpMaxCurves/CpMaxCurves.ipynb
joaochenriques/MCTE_2022
b999d60b6c4153be5a314da262a18e467cb41d7e
[ "MIT" ]
null
null
null
535.098143
67,438
0.62936
true
1,765
Qwen/Qwen-72B
1. YES 2. YES
0.793106
0.757794
0.601011
__label__eng_Latn
0.244768
0.234681
[Original content created by Cam Davidson-Pilon](https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers) # The Philosophy of Bayesian Inference ## The Bayesian State of Mind - Bayesian inference differs from more traditional statistical inference by preserving **uncertainty**. Isn't statistics all about deriving certainty from randomness? - The Bayesian worldview interprets probability as a measure of **believability** in an event--that is, how confident we are in an event occurring.
- **Frequentists**, who ascribe to the more classical version of statistics, assume that probability is the long-run frequency of events (hence the name). - **Bayesians**, on the other hand, have a more intuitive approach. Bayesians interpret a probability as the measure of belief, or confidence, in an event occuring. Simply, a probability is a summary of an opinion. - **Prior probability** denotes our belief about event $A$ as $P(A)$ - **Posterior probability** denotes our updated belief after seeing the evidence, i.e the probability of $A$ given the evidence $X$ $P(A|X)$ ## Bayesian Inference in Practice If frequentist and Bayesian inference were programming functions, with inputs being statistical problems, then the two would be different in what they return to the user. The frequentist inference function would return a number, representing an estimate (typically a summary statistic like the sample average etc.), whereas the Bayesian function would return probabilities. For example, in our debugging problem above, calling the frequentist function with the argument "My code passed all X tests; is my code bug-free?" would return a YES. On the other hand, asking our Bayesian function "Often my code has bugs. My code passed all X tests; is my code bug-free?" would return something very different: probabilities of YES and NO. The function might return: >YES, with probability 0.8; NO, with probability 0.2 This is very different from the answer the frequentist function returned. Notice that the Bayesian function accepted an additional argument: "Often my code has bugs". This parameter is the prior. By including the prior parameter, we are telling the Bayesian function to include our belief about the situation. Technically this parameter in the Bayesian function is optional, but we will see excluding it has its own consequences. ## Are Frequentist Methods Incorrect? No. Frequentist methods are still useful or state-of-the-art in many areas. Tools such as least squares linear regression, LASSO regression, and expectation- maximization algorithms are all powerful and fast. Bayesian methods complement these techniques by solving problems that these approaches cannot, or by illuminating the underlying system with more flexible modeling. ## Our Bayesian framework We are interested in beliefs, which can be interpreted as probabilities by thinking Bayesian. We have a *prior* belief in event $A$, beliefs formed by previous information, e.g., our prior belief about bugs being in our code before performing tests. Secondly, we observe our evidence. To continue our buggy-code example: if our code passes $X$ tests, we want to update our belief to incorporate this. We call this new belief the *posterior* probability. Updating our belief is done via the following equation, known as Bayes' Theorem, after its discoverer Thomas Bayes: $ \begin{align} P( A | X ) = & \frac{ P(X | A) P(A) } {P(X) } \\\\[5pt] & \propto P(X | A) P(A)\;\; (\propto \text{is proportional to } ) \end{align} $ The above formula is not unique to Bayesian inference: it is a mathematical fact with uses outside Bayesian inference. Bayesian inference merely uses it to connect prior probabilities $P(A)$ with an updated posterior probabilities $P(A | X )$. ##### Example: Mandatory coin-flip example Suppose, naively, that you are unsure about the probability of heads in a coin flip (spoiler alert: it's 50%). You believe there is some true underlying ratio, call it $p$, but have no prior opinion on what $p$ might be. 
We begin to flip a coin, and record the observations: either $H$ or $T$. This is our observed data. An interesting question to ask is how our inference changes as we observe more and more data? More specifically, what do our posterior probabilities look like when we have little data, versus when we have lots of data. ```python %matplotlib inline import json import matplotlib # Style Customization #s = json.load(open("styles/bmh_matplotlibrc.json")) #matplotlib.RcParams.update(s) import matplotlib.pyplot as plt plt.style.use('seaborn') from IPython.core.pylabtools import figsize figsize(11, 9) import scipy.stats as stats import numpy as np rs = np.random.seed(1234) ``` ```python dist = stats.beta dist ``` <scipy.stats._continuous_distns.beta_gen at 0x185ac2292b0> ```python n_trials = [0, 1, 2, 3, 4, 5, 10, 20, 50, 500] data = stats.bernoulli.rvs(0.5, size=n_trials[-1]) x = np.linspace(0, 1, 100) ``` ```python data ``` array([0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1]) ```python x ``` array([0. 
, 0.01010101, 0.02020202, 0.03030303, 0.04040404, 0.05050505, 0.06060606, 0.07070707, 0.08080808, 0.09090909, 0.1010101 , 0.11111111, 0.12121212, 0.13131313, 0.14141414, 0.15151515, 0.16161616, 0.17171717, 0.18181818, 0.19191919, 0.2020202 , 0.21212121, 0.22222222, 0.23232323, 0.24242424, 0.25252525, 0.26262626, 0.27272727, 0.28282828, 0.29292929, 0.3030303 , 0.31313131, 0.32323232, 0.33333333, 0.34343434, 0.35353535, 0.36363636, 0.37373737, 0.38383838, 0.39393939, 0.4040404 , 0.41414141, 0.42424242, 0.43434343, 0.44444444, 0.45454545, 0.46464646, 0.47474747, 0.48484848, 0.49494949, 0.50505051, 0.51515152, 0.52525253, 0.53535354, 0.54545455, 0.55555556, 0.56565657, 0.57575758, 0.58585859, 0.5959596 , 0.60606061, 0.61616162, 0.62626263, 0.63636364, 0.64646465, 0.65656566, 0.66666667, 0.67676768, 0.68686869, 0.6969697 , 0.70707071, 0.71717172, 0.72727273, 0.73737374, 0.74747475, 0.75757576, 0.76767677, 0.77777778, 0.78787879, 0.7979798 , 0.80808081, 0.81818182, 0.82828283, 0.83838384, 0.84848485, 0.85858586, 0.86868687, 0.87878788, 0.88888889, 0.8989899 , 0.90909091, 0.91919192, 0.92929293, 0.93939394, 0.94949495, 0.95959596, 0.96969697, 0.97979798, 0.98989899, 1. ]) ```python figsize(11, 9) for k, N in enumerate(n_trials): sx = plt.subplot(len(n_trials) / 2, 2, k + 1) plt.xlabel("$p$, probability of heads") if k in [0, len(n_trials) - 1 ] else None plt.setp(sx.get_yticklabels(), visible=False) heads = data[:N].sum() # Compute Probability density function at x of the given RV. y = dist.pdf(x, 1 + heads, 1 + N - heads) plt.plot(x, y, label="observe %d tosses,\n %d heads" % (N, heads)) plt.fill_between(x, 0, y, color="#348ABD", alpha=0.4) plt.vlines(0.5, 0, 4, color="k", linestyles="--", lw=1) leg = plt.legend() leg.get_frame().set_alpha(0.4) plt.autoscale(tight=True) plt.suptitle( "Bayesian updating of posterior probabilities", y=1.02, fontsize=14) plt.tight_layout() ``` The posterior probabilities are represented by the curves, and our uncertainty is proportional to the width of the curve. As the plot above shows, as we start to observe data our posterior probabilities start to shift and move around. Eventually, as we observe more and more data (coin-flips), our probabilities will tighten closer and closer around the true value of $p=0.5$ (marked by a dashed line). Notice that the plots are not always *peaked* at 0.5. There is no reason it should be: recall we assumed we did not have a prior opinion of what $p$ is. In fact, if we observe quite extreme data, say 8 flips and only 1 observed heads, our distribution would look very biased *away* from lumping around 0.5 (with no prior opinion, how confident would you feel betting on a fair coin after observing 8 tails and 1 head). As more data accumulates, we would see more and more probability being assigned at $p=0.5$, though never all of it. ##### Example: Bug, or just sweet, unintended feature? Let $A$ denote the event that our code has **no bugs** in it. Let $X$ denote the event that the code passes all debugging tests. For now, we will leave the prior probability of no bugs as a variable, i.e. $P(A) = p$. We are interested in $P(A|X)$, i.e. the probability of no bugs, given our debugging tests $X$ pass. To use the formula above, we need to compute some quantities. What is $P(X | A)$, i.e., the probability that the code passes $X$ tests *given* there are no bugs? Well, it is equal to 1, for code with no bugs will pass all tests. 
$P(X)$ is a little bit trickier: The event $X$ can be divided into two possibilities, event $X$ occurring even though our code *indeed has* bugs (denoted $\sim A\;$, spoken *not $A$*), or event $X$ without bugs ($A$). $P(X)$ can be represented as: \begin{align} P(X ) & = P(X \text{ and } A) + P(X \text{ and } \sim A) \\\\[5pt] & = P(X|A)P(A) + P(X | \sim A)P(\sim A)\\\\[5pt] & = P(X|A)p + P(X | \sim A)(1-p) \end{align} We have already computed $P(X|A)$ above. On the other hand, $P(X | \sim A)$ is subjective: our code can pass tests but still have a bug in it, though the probability there is a bug present is reduced. Note this is dependent on the number of tests performed, the degree of complication in the tests, etc. Let's be conservative and assign $P(X|\sim A) = 0.5$. Then \begin{align} P(A | X) & = \frac{1\cdot p}{ 1\cdot p +0.5 (1-p) } \\\\ & = \frac{ 2 p}{1+p} \end{align} This is the posterior probability. What does it look like as a function of our prior, $p \in [0,1]$? ```python figsize(12.5, 4) p = np.linspace(0, 1, 50) plt.plot(p, 2 * p / (1 + p), color="#348ABD", lw=3) # plt.fill_between(p, 2*p/(1+p), alpha=.5, facecolor=["#A60628"]) plt.scatter(0.4, 2 * (0.4) / 1.4, s=140, c="#348ABD") plt.xlim(0, 1) plt.ylim(0, 1) plt.xlabel("Prior, $P(A) = p$") plt.ylabel("Posterior, $P(A|X)$, with $P(A) = p$") plt.title("Is my code bug-free?") plt.show() ``` We can see the biggest gains if we observe the $X$ tests passed when the prior probability, $p$, is low. Let's settle on a specific value for the prior. Suppose a programmer gives themselves a realistic prior of 0.40; that is, there is a 40% chance that they write code bug-free. To be more realistic, this prior should be a function of how complicated and large the code is, but let's pin it at 0.40. Then the updated belief that their code is bug-free is 0.57. Recall that the prior is a probability: $p$ is the prior probability that there *are no bugs*, so $1-p$ is the prior probability that there *are bugs*. Similarly, our posterior is also a probability, with $P(A | X)$ the probability there is no bug *given we saw all tests pass*, hence $1-P(A|X)$ is the probability there is a bug *given all tests passed*. What does our posterior probability look like? Below is a chart of both the prior and the posterior probabilities. ```python figsize(12.5, 4) colours = ["#348ABD", "#A60628"] prior = [0.40, 0.60] posterior = [2 * 0.4 / 1.4, 1 - 2 * 0.4 / 1.4] # P(A|X) = 2p/(1+p) with p = 0.40 plt.bar([0, .7], prior, alpha=0.70, width=0.25, color=colours[0], label="prior distribution", lw="3", edgecolor=colours[0]) plt.bar([0 + 0.25, .7 + 0.25], posterior, alpha=0.7, width=0.25, color=colours[1], label="posterior distribution", lw="3", edgecolor=colours[1]) plt.ylim(0,1) plt.xticks([0.20, .95], ["Bugs Absent", "Bugs Present"]) plt.title("Prior and Posterior probability of bugs present") plt.ylabel("Probability") plt.legend(loc="upper left"); ``` Notice that after we observed $X$ occur, the probability of bugs being absent increased. By increasing the number of tests, we can approach confidence (probability 1) that there are no bugs present. ## Probability Distributions Let $Z$ be some random variable. Then associated with $Z$ is a probability distribution function that assigns probabilities to the different outcomes $Z$ can take. Graphically, a probability distribution is a curve where the probability of an outcome is proportional to the height of the curve. We can divide random variables into three classifications: - **$Z$ is discrete**: Discrete random variables may only assume values on a specified list.
Things like populations, movie ratings, and number of votes are all discrete random variables. Discrete random variables become more clear when we contrast them with... - **$Z$ is continuous**: Continuous random variable can take on arbitrarily exact values. For example, temperature, speed, time, color are all modeled as continuous variables because you can progressively make the values more and more precise. - **$Z$ is mixed**: Mixed random variables assign probabilities to both discrete and continuous random variables, i.e. it is a combination of the above two categories. ##### Expected Value Expected value (EV) is one of the most important concepts in probability. The EV for a given probability distribution can be described as "the mean value in the long run for many repeated samples from that distribution." To borrow a metaphor from physics, a distribution's EV acts like its "center of mass." Imagine repeating the same experiment many times over, and taking the average over each outcome. The more you repeat the experiment, the closer this average will become to the distributions EV. (side note: as the number of repeated experiments goes to infinity, the difference between the average outcome and the EV becomes arbitrarily small.) ### Discrete Case - if $Z$ is discrete, then its distribution is called a *probability mass function* which measures the probability $Z$ takes on the value $k$, $P(Z=k)$. - The probability mass function completely describes the random variable $Z$, i.e, if we know the mass function, we know how $Z$ should behave. #### Poisson Distribution $$P(Z = k) =\frac{ \lambda^k e^{-\lambda} }{k!}, \; \; k=0,1,2, \dots, \; \; \lambda \in \mathbb{R}_{>0} $$ $\lambda$ is called a parameter of the distribution, and it controls the distribution's shape. For the Poisson distribution, $\lambda$ can be any positive number. By increasing $\lambda$, we add more probability to larger values, and conversely by decreasing $\lambda$ we add more probability to smaller values. One can describe $\lambda$ as the *intensity* of the Poisson distribution. Unlike $\lambda$, which can be any positive number, the value $k$ in the above formula must be a non-negative integer, i.e., $k$ must take on values 0,1,2, and so on. This is very important, because if you wanted to model a population you could not make sense of populations with 4.25 or 5.612 members. If a random variable $Z$ has a Poisson mass distribution, we denote this by writing $$Z \sim \text{Poi}(\lambda) $$ One useful property of the Poisson distribution is that its expected value is equal to its parameter, i.e.: $$E\large[ \;Z\; | \; \lambda \;\large] = \lambda $$ Below, we plot the probability mass distribution for different $\lambda$ values. The first thing to notice is that by increasing $\lambda$, we add more probability of larger values occurring. Second, notice that although the graph ends at 15, the distributions do not. They assign positive probability to every non-negative integer. 
```python figsize(12.5, 4) a = np.arange(16) poi = stats.poisson lambda_ = [1.5, 4.25] colours = ["#348ABD", "#A60628"] plt.bar( a, poi.pmf(a, lambda_[0]), color=colours[0], label="$\lambda = %.1f$" % lambda_[0], alpha=0.60, edgecolor=colours[0], lw="3") plt.bar( a, poi.pmf(a, lambda_[1]), color=colours[1], label="$\lambda = %.1f$" % lambda_[1], alpha=0.60, edgecolor=colours[1], lw="3") plt.xticks(a + 0.4, a) plt.legend() plt.ylabel("probability of $k$") plt.xlabel("$k$") plt.title("Probability mass function of a Poisson random variable; differing \ $\lambda$ values") plt.show() ``` ### Continuous Case Instead of a probability mass function, a continuous random variable has a *probability density function*. This might seem like unnecessary nomenclature, but the density function and the mass function are very different creatures. An example of continuous random variable is a random variable with *exponential density*. The density function for an exponential random variable looks like this: $$f_Z(z | \lambda) = \lambda e^{-\lambda z }, \;\; z\ge 0$$ Like a Poisson random variable, an exponential random variable can take on only non-negative values. But unlike a Poisson variable, the exponential can take on *any* non-negative values, including non-integral values such as 4.25 or 5.612401. This property makes it a poor choice for count data, which must be an integer, but a great choice for time data, temperature data (measured in Kelvins, of course), or any other precise *and positive* variable. The graph below shows two probability density functions with different $\lambda$ values. When a random variable $Z$ has an exponential distribution with parameter $\lambda$, we say *$Z$ is exponential* and write $$Z \sim \text{Exp}(\lambda)$$ Given a specific $\lambda$, the expected value of an exponential random variable is equal to the inverse of $\lambda$, that is: $$E[\; Z \;|\; \lambda \;] = \frac{1}{\lambda}$$ ```python a = np.linspace(0, 4, 100) expo = stats.expon lambda_ = [0.5, 1] for l, c in zip(lambda_, colours): plt.plot( a, expo.pdf(a, scale=1. / l), lw=3, color=c, label="$\lambda = %.1f$" % l) plt.fill_between(a, expo.pdf(a, scale=1. / l), color=c, alpha=.33) plt.legend() plt.ylabel("PDF at $z$") plt.xlabel("$z$") plt.ylim(0, 1.2) plt.title("Probability density function of an Exponential random variable;\ differing $\lambda$") plt.show() ``` ### But what is $\lambda \;$? **This question is what motivates statistics**. In the real world, $\lambda$ is hidden from us. We see only $Z$, and must go backwards to try and determine $\lambda$. The problem is difficult because there is no one-to-one mapping from $Z$ to $\lambda$. Many different methods have been created to solve the problem of estimating $\lambda$, but since $\lambda$ is never actually observed, no one can say for certain which method is best! Bayesian inference is concerned with *beliefs* about what $\lambda$ might be. Rather than try to guess $\lambda$ exactly, we can only talk about what $\lambda$ is likely to be by assigning a probability distribution to $\lambda$. This might seem odd at first. After all, $\lambda$ is fixed; it is not (necessarily) random! How can we assign probabilities to values of a non-random variable? Ah, we have fallen for our old, frequentist way of thinking. Recall that under Bayesian philosophy, we *can* assign probabilities if we interpret them as beliefs. And it is entirely acceptable to have *beliefs* about the parameter $\lambda$. 
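As a quick numerical illustration of the two expected-value identities above (a sketch that is not part of the original text, reusing the `np` and `stats` imports assumed from earlier cells), the sample mean of many simulated draws should land near $\lambda$ for the Poisson and near $1/\lambda$ for the exponential:

```python
# Sanity check of E[Z | lambda] for the two distributions discussed above (illustrative sketch).
lam = 4.25
poisson_draws = stats.poisson.rvs(lam, size=100000)          # sample mean should be ~ lambda
expo_draws = stats.expon.rvs(scale=1. / lam, size=100000)    # sample mean should be ~ 1 / lambda

print("Poisson:     sample mean %.3f vs lambda     %.3f" % (poisson_draws.mean(), lam))
print("Exponential: sample mean %.3f vs 1 / lambda %.3f" % (expo_draws.mean(), 1. / lam))
```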
##### Example: Inferring behaviour from text-message data Let's try to model a more interesting example, one that concerns the rate at which a user sends and receives text messages: > You are given a series of daily text-message counts from a user of your system. The data, plotted over time, appears in the chart below. You are curious to know if the user's text-messaging habits have changed over time, either gradually or suddenly. How can you model this? ```python figsize(12.5, 3.5) count_data = np.loadtxt("data/txtdata.csv") n_count_data = len(count_data) plt.bar(np.arange(n_count_data), count_data, color="#348ABD") plt.xlabel("Time (days)") plt.ylabel("count of text-msgs received") plt.title("Did the user's texting habits change over time?") plt.xlim(0, n_count_data); ``` Before we start modeling, see what you can figure out just by looking at the chart above. Would you say there was a change in behaviour during this time period? How can we start to model this? Well, as we have conveniently already seen, a Poisson random variable is a very appropriate model for this type of *count* data. Denoting day $i$'s text-message count by $C_i$, $$ C_i \sim \text{Poisson}(\lambda) $$ We are not sure what the value of the $\lambda$ parameter really is, however. Looking at the chart above, it appears that the rate might become higher late in the observation period, which is equivalent to saying that $\lambda$ increases at some point during the observations. (Recall that a higher value of $\lambda$ assigns more probability to larger outcomes. That is, there is a higher probability of many text messages having been sent on a given day.) How can we represent this observation mathematically? Let's assume that on some day during the observation period (call it $\tau$), the parameter $\lambda$ suddenly jumps to a higher value. So we really have two $\lambda$ parameters: one for the period before $\tau$, and one for the rest of the observation period. In the literature, a sudden transition like this would be called a *switchpoint*: $$ \lambda = \begin{cases} \lambda_1 & \text{if } t \lt \tau \cr \lambda_2 & \text{if } t \ge \tau \end{cases} $$ If, in reality, no sudden change occurred and indeed $\lambda_1 = \lambda_2$, then the $\lambda$s posterior distributions should look about equal. We are interested in inferring the unknown $\lambda$s. To use Bayesian inference, we need to assign prior probabilities to the different possible values of $\lambda$. What would be good prior probability distributions for $\lambda_1$ and $\lambda_2$? Recall that $\lambda$ can be any positive number. As we saw earlier, the *exponential* distribution provides a continuous density function for positive numbers, so it might be a good choice for modeling $\lambda_i$. But recall that the exponential distribution takes a parameter of its own, so we'll need to include that parameter in our model. Let's call that parameter $\alpha$. \begin{align} &\lambda_1 \sim \text{Exp}( \alpha ) \\\ &\lambda_2 \sim \text{Exp}( \alpha ) \end{align} $\alpha$ is called a *hyper-parameter* or *parent variable*. In literal terms, it is a parameter that influences other parameters. Our initial guess at $\alpha$ does not influence the model too strongly, so we have some flexibility in our choice. A good rule of thumb is to set the exponential parameter equal to the inverse of the average of the count data. 
Since we're modeling $\lambda$ using an exponential distribution, we can use the expected value identity shown earlier to get: $$\frac{1}{N}\sum_{i=0}^N \;C_i \approx E[\; \lambda \; |\; \alpha ] = \frac{1}{\alpha}$$ An alternative, and something I encourage the reader to try, would be to have two priors: one for each $\lambda_i$. Creating two exponential distributions with different $\alpha$ values reflects our prior belief that the rate changed at some point during the observations. What about $\tau$? Because of the noisiness of the data, it's difficult to pick out a priori when $\tau$ might have occurred. Instead, we can assign a *uniform prior belief* to every possible day. This is equivalent to saying \begin{align} & \tau \sim \text{DiscreteUniform(1,70) }\\\\ & \Rightarrow P( \tau = k ) = \frac{1}{70} \end{align} So after all this, what does our overall prior distribution for the unknown variables look like? Frankly, *it doesn't matter*. What we should understand is that it's an ugly, complicated mess involving symbols only a mathematician could love. And things will only get uglier the more complicated our models become. Regardless, all we really care about is the posterior distribution. We next turn to PyMC, a Python library for performing Bayesian analysis that is undaunted by the mathematical monster we have created. ## Introducing our first hammer: PyMC3 PyMC3 code is easy to read. The only novel thing should be the syntax, and I will interrupt the code to explain individual sections. Simply remember that we are representing the model's components ($\tau, \lambda_1, \lambda_2$ ) as variables: ```python import pymc3 as pm import theano.tensor as TT ``` WARNING (theano.tensor.blas): Using NumPy C-API based implementation for BLAS functions. C:\Users\Anderson Banihirwe\AppData\Local\conda\conda\envs\devel\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`. from ._conv import register_converters as _register_converters ```python count_data ``` array([13., 24., 8., 24., 7., 35., 14., 11., 15., 11., 22., 22., 11., 57., 11., 19., 29., 6., 19., 12., 22., 12., 18., 72., 32., 9., 7., 13., 19., 23., 27., 20., 6., 17., 13., 10., 14., 6., 16., 15., 7., 2., 15., 15., 19., 70., 49., 7., 53., 22., 21., 31., 19., 11., 18., 20., 12., 35., 17., 23., 17., 4., 2., 31., 30., 13., 27., 0., 39., 37., 5., 14., 13., 22.]) ```python with pm.Model() as model: alpha = 1.0 / count_data.mean() # Recall count_data is the # variable that holds our txt counts lambda_1 = pm.Exponential("lambda_1", alpha) lambda_2 = pm.Exponential("lambda_2", alpha) tau = pm.DiscreteUniform("tau", lower=0, upper=n_count_data) ``` In the code above, we create the PyMC variables corresponding to $\lambda_1$ and $\lambda_2$. We assign them to PyMC's *stochastic variables*, so-called because they are treated by the back end as random number generators. We can demonstrate this fact by calling their built-in `random()` methods. ```python print("Random output:", tau.random(), tau.random(), tau.random()) ``` Random output: 45 29 71 ```python with model: idx = np.arange(n_count_data) # Index lambda_ = TT.switch(tau >= idx, lambda_1, lambda_2) ``` This code creates a new function `lambda_`, but really we can think of it as a random variable: the random variable $\lambda$ from above. 
The `switch()` function assigns `lambda_1` or `lambda_2` as the value of `lambda_`, depending on what side of `tau` we are on. The values of `lambda_` up until `tau` are `lambda_1` and the values afterwards are `lambda_2`. Note that because `lambda_1`, `lambda_2` and `tau` are random, `lambda_` will be random. We are **not** fixing any variables yet. ```python with model: observation = pm.Poisson("obs", lambda_, observed=count_data) ``` The variable `observation` combines our data, `count_data`, with our proposed data-generation scheme, given by the variable `lambda_`, through the `observed` keyword. The code below will be explained in Chapter 3, but I show it here so you can see where our results come from. One can think of it as a *learning* step. The machinery being employed is called *Markov Chain Monte Carlo* (MCMC), which I also delay explaining until Chapter 3. This technique returns thousands of random variables from the posterior distributions of $\lambda_1, \lambda_2$ and $\tau$. We can plot a histogram of the random variables to see what the posterior distributions look like. Below, we collect the samples (called *traces* in the MCMC literature) into histograms. ```python ### Mysterious code to be explained in Chapter 3. with model: step = pm.Metropolis() trace = pm.sample(10000, tune=5000, step=step) ``` Multiprocess sampling (2 chains in 2 jobs) CompoundStep >Metropolis: [tau] >Metropolis: [lambda_2_log__] >Metropolis: [lambda_1_log__] The number of effective samples is smaller than 25% for some parameters. ```python lambda_1_samples = trace['lambda_1'] lambda_2_samples = trace['lambda_2'] tau_samples = trace['tau'] ``` ```python figsize(12.5, 10) #histogram of the samples: ax = plt.subplot(311) ax.set_autoscaley_on(False) plt.hist( lambda_1_samples, histtype='stepfilled', bins=30, alpha=0.85, label="posterior of $\lambda_1$", color="#A60628", normed=True) plt.legend(loc="upper left") plt.title(r"""Posterior distributions of the variables $\lambda_1,\;\lambda_2,\;\tau$""") plt.xlim([15, 30]) plt.xlabel("$\lambda_1$ value") ax = plt.subplot(312) ax.set_autoscaley_on(False) plt.hist( lambda_2_samples, histtype='stepfilled', bins=30, alpha=0.85, label="posterior of $\lambda_2$", color="#7A68A6", normed=True) plt.legend(loc="upper left") plt.xlim([15, 30]) plt.xlabel("$\lambda_2$ value") plt.subplot(313) w = 1.0 / tau_samples.shape[0] * np.ones_like(tau_samples) plt.hist( tau_samples, bins=n_count_data, alpha=1, label=r"posterior of $\tau$", color="#467821", weights=w, rwidth=2.) plt.xticks(np.arange(n_count_data)) plt.legend(loc="upper left") plt.ylim([0, .75]) plt.xlim([35, len(count_data) - 20]) plt.xlabel(r"$\tau$ (in days)") plt.ylabel("probability"); ``` Recall that Bayesian methodology returns a *distribution*. Hence we now have distributions to describe the unknown $\lambda$s and $\tau$. What have we gained? Immediately, we can see the uncertainty in our estimates: the wider the distribution, the less certain our posterior belief should be. We can also see what the plausible values for the parameters are: $\lambda_1$ is around 18 and $\lambda_2$ is around 23. The posterior distributions of the two $\lambda$s are clearly distinct, indicating that it is indeed likely that there was a change in the user's text-message behaviour. What other observations can you make? If you look at the original data again, do these results seem reasonable? 
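One way to make this inspection concrete (a small sketch, not part of the original text) is to summarise the traces numerically, for example with posterior means and central 95% credible intervals computed directly from the samples collected above:

```python
# Numerical summary of the posterior samples (illustrative sketch).
for name, samples in [("lambda_1", lambda_1_samples),
                      ("lambda_2", lambda_2_samples),
                      ("tau", tau_samples)]:
    lo, hi = np.percentile(samples, [2.5, 97.5])
    print("%-9s posterior mean: %6.2f   95%% interval: [%6.2f, %6.2f]"
          % (name, samples.mean(), lo, hi))
```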
Notice also that the posterior distributions for the $\lambda$s do not look like exponential distributions, even though our priors for these variables were exponential. In fact, the posterior distributions are not really of any form that we recognize from the original model. But that's OK! This is one of the benefits of taking a computational point of view. If we had instead done this analysis using mathematical approaches, we would have been stuck with an analytically intractable (and messy) distribution. Our use of a computational approach makes us indifferent to mathematical tractability. Our analysis also returned a distribution for $\tau$. Its posterior distribution looks a little different from the other two because it is a discrete random variable, so it doesn't assign probabilities to intervals. We can see that near day 45, there was a 50% chance that the user's behaviour changed. Had no change occurred, or had the change been gradual over time, the posterior distribution of $\tau$ would have been more spread out, reflecting that many days were plausible candidates for $\tau$. By contrast, in the actual results we see that only three or four days make any sense as potential transition points. ### Why would I want samples from the posterior, anyways? We will deal with this question for the remainder of the book, and it is an understatement to say that it will lead us to some amazing results. For now, let's end this chapter with one more example. We'll use the posterior samples to answer the following question: what is the expected number of texts at day $t, \; 0 \le t \le 70$ ? Recall that the expected value of a Poisson variable is equal to its parameter $\lambda$. Therefore, the question is equivalent to *what is the expected value of $\lambda$ at time $t$*? In the code below, let $i$ index samples from the posterior distributions. Given a day $t$, we average over all possible $\lambda_i$ for that day $t$, using $\lambda_i = \lambda_{1,i}$ if $t \lt \tau_i$ (that is, if the behaviour change has not yet occurred), else we use $\lambda_i = \lambda_{2,i}$. ```python figsize(12.5, 5) # tau_samples, lambda_1_samples, lambda_2_samples contain # N samples from the corresponding posterior distribution N = tau_samples.shape[0] expected_texts_per_day = np.zeros(n_count_data) for day in range(0, n_count_data): # ix is a bool index of all tau samples corresponding to # the switchpoint occurring prior to value of 'day' ix = day < tau_samples # Each posterior sample corresponds to a value for tau. # for each day, that value of tau indicates whether we're "before" # (in the lambda1 "regime") or # "after" (in the lambda2 "regime") the switchpoint. # by taking the posterior sample of lambda1/2 accordingly, we can average # over all samples to get an expected value for lambda on that day. # As explained, the "message count" random variable is Poisson distributed, # and therefore lambda (the poisson parameter) is the expected value of # "message count". 
expected_texts_per_day[day] = ( lambda_1_samples[ix].sum() + lambda_2_samples[~ix].sum()) / N plt.plot( range(n_count_data), expected_texts_per_day, lw=4, color="#E24A33", label="expected number of text-messages received") plt.xlim(0, n_count_data) plt.xlabel("Day") plt.ylabel("Expected # text-messages") plt.title("Expected number of text-messages received") plt.ylim(0, 60) plt.bar( np.arange(len(count_data)), count_data, color="#348ABD", alpha=0.65, label="observed texts per day") plt.legend(loc="upper left"); ``` Our analysis shows strong support for believing the user's behavior did change ($\lambda_1$ would have been close in value to $\lambda_2$ had this not been true), and that the change was sudden rather than gradual (as demonstrated by $\tau$'s strongly peaked posterior distribution). We can speculate what might have caused this: a cheaper text-message rate, a recent weather-to-text subscription, or perhaps a new relationship. ```python print(expected_texts_per_day) ``` [17.76915874 17.76915874 17.76915874 17.76915874 17.76915874 17.76915874 17.76915874 17.76915874 17.76915874 17.76915874 17.76915874 17.76915874 17.76915874 17.76915874 17.76915874 17.76915874 17.76915874 17.76915874 17.76915874 17.76915874 17.76915874 17.76915874 17.76915874 17.76915874 17.76915874 17.76915874 17.76915874 17.76915874 17.76915874 17.76915874 17.76915874 17.76915874 17.76915874 17.76915874 17.76915874 17.76915874 17.76915874 17.76929695 17.77040006 17.77228256 17.77852684 17.92903374 18.46020732 20.29290296 22.71657445 22.71657445 22.71657445 22.71657445 22.71657445 22.71657445 22.71657445 22.71657445 22.71657445 22.71657445 22.71657445 22.71657445 22.71657445 22.71657445 22.71657445 22.71657445 22.71657445 22.71657445 22.71657445 22.71657445 22.71657445 22.71657445 22.71657445 22.71657445 22.71657445 22.71657445 22.71657445 22.71657445 22.71657445 22.71657445] ##### Exercises 1\. Using `lambda_1_samples` and `lambda_2_samples`, what is the mean of the posterior distributions of $\lambda_1$ and $\lambda_2$? To compute the mean of the posteriors (which is the same as the expected value of the posteriors), we just need the samples and a .mean function. ```python lambda_1_samples.mean() ``` 17.76915873626346 ```python lambda_2_samples.mean() ``` 22.716574445528533 2\. What is the expected percentage increase in text-message rates? `hint:` compute the mean of `lambda_1_samples/lambda_2_samples`. Note that this quantity is very different from `lambda_1_samples.mean()/lambda_2_samples.mean()`. ```python relative_increase_samples = ( lambda_2_samples - lambda_1_samples) / lambda_1_samples ``` ```python print("The expected percentage increase in text-message rates was {}". format(relative_increase_samples*100)) ``` The expected percentage increase in text-message rates was [22.38028504 22.38028504 29.60339271 ... 35.79487537 35.79487537 35.79487537] ```python figsize(12.5, 4) plt.hist( relative_increase_samples, histtype='stepfilled', bins=30, alpha=0.85, color="#7A68A6", normed=True, label='posterior of relative increase') plt.xlabel("Relative increase") plt.ylabel("Density of relative increase") plt.title("Posterior of relative increase") plt.legend(); ``` ```python print(relative_increase_samples.mean() * 100) ``` 28.001507366601903 3\. What is the mean of $\lambda_1$ **given** that we know $\tau$ is less than 45. That is, suppose we have been given new information that the change in behaviour occurred prior to day 45. What is the expected value of $\lambda_1$ now? 
(You do not need to redo the PyMC3 part. Just consider all instances where `tau_samples < 45`.) If we know $\tau < 45$, then all samples need to be conditioned on that: ```python ix = tau_samples < 45 print(lambda_1_samples[ix].mean()) ``` 17.76915873626346 ## Determining Statistically if the Two $\lambda$s Are Indeed Different? In the text-messaging example, we visually inspected the posteriors of $\lambda_1$ and $\lambda_2$ to declare them different. This was fair, as the general locations of the posteriors were very far apart. What if this were not true? What if the distributions partially overlapped? How can we make this decision more formal? One way is to compute $P(\lambda_1 < \lambda_2 \mid \text{data})$; that is, what is the probability that the true value of $\lambda_1$ is smaller than $\lambda_2$, given the data we observed? If this number is close to $50\%$, no better than flipping a coin, then we can’t be certain they are indeed different. If this number is close to $100\%$, then we can be very confident that the two true values are very different. Using samples from the posteriors, this computation is very simple: we compute the fraction of times that a sample from the posterior of $\lambda_1$ is less than one from $\lambda_2$: ```python print(lambda_1_samples < lambda_2_samples) ``` [ True True True ... True True True] ```python # How often does this happen? (lambda_1_samples < lambda_2_samples).sum() ``` 20000 ```python # How many samples are there? lambda_1_samples.shape[0] ``` 20000 ```python # The ratio is the probability. Or, we can just use .mean: (lambda_1_samples < lambda_2_samples).mean() ``` 1.0 So, there is virtually a $100\%$ chance, and we can be very confident the two values are different. We can ask more complicated things, too, like “What is the probability that the values differ by at least 1? 2? 5? 10?” ```python # The vector abs(lambda_1_samples - lambda_2_samples) >= d is a boolean, # True if the values are at least d apart, False otherwise. # How often does this happen? Use .mean() for d in [1, 2, 5, 10]: v = (abs(lambda_1_samples - lambda_2_samples) >= d).mean() print("What is the probability the difference is larger than %d ? %.2f" % (d, v)) ``` What is the probability the difference is larger than 1 ? 1.00 What is the probability the difference is larger than 2 ? 1.00 What is the probability the difference is larger than 5 ? 0.48 What is the probability the difference is larger than 10 ? 0.00 ```python %load_ext version_information %version_information pymc3, matplotlib, theano, numpy, scipy ``` <table><tr><th>Software</th><th>Version</th></tr><tr><td>Python</td><td>3.6.4 64bit [MSC v.1900 64 bit (AMD64)]</td></tr><tr><td>IPython</td><td>6.2.1</td></tr><tr><td>OS</td><td>Windows 10 10.0.16299 SP0</td></tr><tr><td>pymc3</td><td>3.3</td></tr><tr><td>matplotlib</td><td>2.1.2</td></tr><tr><td>theano</td><td>1.0.1</td></tr><tr><td>numpy</td><td>1.14.0</td></tr><tr><td>scipy</td><td>1.0.0</td></tr><tr><td colspan='2'>Fri Feb 16 15:40:53 2018 Central Standard Time</td></tr></table>
ff86215284a2a20657b61ceef5626811881b3232
308,586
ipynb
Jupyter Notebook
01-introduction.ipynb
andersy005/probabilistic-programming-and-bayesian-with-PyMC3
bc02541cb86e8e9c9bb4d1988364d65d09e49cba
[ "MIT" ]
3
2021-03-25T22:37:19.000Z
2022-01-25T01:17:57.000Z
01-introduction.ipynb
andersy005/probabilistic-programming-and-bayesian-with-PyMC3
bc02541cb86e8e9c9bb4d1988364d65d09e49cba
[ "MIT" ]
3
2018-02-16T16:38:11.000Z
2018-02-16T21:46:12.000Z
01-introduction.ipynb
andersy005/probabilistic-programming-and-bayesian-with-PyMC3
bc02541cb86e8e9c9bb4d1988364d65d09e49cba
[ "MIT" ]
null
null
null
208.222672
86,380
0.889217
true
13,324
Qwen/Qwen-72B
1. YES 2. YES
0.746139
0.887205
0.661978
__label__eng_Latn
0.988325
0.376328
# Librerias ```python import pandas as pd import numpy as np import matplotlib.pyplot as plt from sklearn import datasets from sklearn.model_selection import train_test_split from sklearn.linear_model import LinearRegression ``` # Data Set ```python # Loading pre-defined Boston Dataset boston_dataset = datasets.load_boston() print(boston_dataset.DESCR) ``` .. _boston_dataset: Boston house prices dataset --------------------------- **Data Set Characteristics:** :Number of Instances: 506 :Number of Attributes: 13 numeric/categorical predictive. Median Value (attribute 14) is usually the target. :Attribute Information (in order): - CRIM per capita crime rate by town - ZN proportion of residential land zoned for lots over 25,000 sq.ft. - INDUS proportion of non-retail business acres per town - CHAS Charles River dummy variable (= 1 if tract bounds river; 0 otherwise) - NOX nitric oxides concentration (parts per 10 million) - RM average number of rooms per dwelling - AGE proportion of owner-occupied units built prior to 1940 - DIS weighted distances to five Boston employment centres - RAD index of accessibility to radial highways - TAX full-value property-tax rate per $10,000 - PTRATIO pupil-teacher ratio by town - B 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town - LSTAT % lower status of the population - MEDV Median value of owner-occupied homes in $1000's :Missing Attribute Values: None :Creator: Harrison, D. and Rubinfeld, D.L. This is a copy of UCI ML housing dataset. https://archive.ics.uci.edu/ml/machine-learning-databases/housing/ This dataset was taken from the StatLib library which is maintained at Carnegie Mellon University. The Boston house-price data of Harrison, D. and Rubinfeld, D.L. 'Hedonic prices and the demand for clean air', J. Environ. Economics & Management, vol.5, 81-102, 1978. Used in Belsley, Kuh & Welsch, 'Regression diagnostics ...', Wiley, 1980. N.B. Various transformations are used in the table on pages 244-261 of the latter. The Boston house-price data has been used in many machine learning papers that address regression problems. .. topic:: References - Belsley, Kuh & Welsch, 'Regression diagnostics: Identifying Influential Data and Sources of Collinearity', Wiley, 1980. 244-261. - Quinlan,R. (1993). Combining Instance-Based and Model-Based Learning. In Proceedings on the Tenth International Conference of Machine Learning, 236-243, University of Massachusetts, Amherst. Morgan Kaufmann. 
```python df = pd.DataFrame(data = boston_dataset.data, columns=boston_dataset.feature_names) df['House Price'] = boston_dataset.target df.head() ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>CRIM</th> <th>ZN</th> <th>INDUS</th> <th>CHAS</th> <th>NOX</th> <th>RM</th> <th>AGE</th> <th>DIS</th> <th>RAD</th> <th>TAX</th> <th>PTRATIO</th> <th>B</th> <th>LSTAT</th> <th>House Price</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>0.00632</td> <td>18.0</td> <td>2.31</td> <td>0.0</td> <td>0.538</td> <td>6.575</td> <td>65.2</td> <td>4.0900</td> <td>1.0</td> <td>296.0</td> <td>15.3</td> <td>396.90</td> <td>4.98</td> <td>24.0</td> </tr> <tr> <th>1</th> <td>0.02731</td> <td>0.0</td> <td>7.07</td> <td>0.0</td> <td>0.469</td> <td>6.421</td> <td>78.9</td> <td>4.9671</td> <td>2.0</td> <td>242.0</td> <td>17.8</td> <td>396.90</td> <td>9.14</td> <td>21.6</td> </tr> <tr> <th>2</th> <td>0.02729</td> <td>0.0</td> <td>7.07</td> <td>0.0</td> <td>0.469</td> <td>7.185</td> <td>61.1</td> <td>4.9671</td> <td>2.0</td> <td>242.0</td> <td>17.8</td> <td>392.83</td> <td>4.03</td> <td>34.7</td> </tr> <tr> <th>3</th> <td>0.03237</td> <td>0.0</td> <td>2.18</td> <td>0.0</td> <td>0.458</td> <td>6.998</td> <td>45.8</td> <td>6.0622</td> <td>3.0</td> <td>222.0</td> <td>18.7</td> <td>394.63</td> <td>2.94</td> <td>33.4</td> </tr> <tr> <th>4</th> <td>0.06905</td> <td>0.0</td> <td>2.18</td> <td>0.0</td> <td>0.458</td> <td>7.147</td> <td>54.2</td> <td>6.0622</td> <td>3.0</td> <td>222.0</td> <td>18.7</td> <td>396.90</td> <td>5.33</td> <td>36.2</td> </tr> </tbody> </table> </div> ```python df.info() ``` <class 'pandas.core.frame.DataFrame'> RangeIndex: 506 entries, 0 to 505 Data columns (total 14 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 CRIM 506 non-null float64 1 ZN 506 non-null float64 2 INDUS 506 non-null float64 3 CHAS 506 non-null float64 4 NOX 506 non-null float64 5 RM 506 non-null float64 6 AGE 506 non-null float64 7 DIS 506 non-null float64 8 RAD 506 non-null float64 9 TAX 506 non-null float64 10 PTRATIO 506 non-null float64 11 B 506 non-null float64 12 LSTAT 506 non-null float64 13 House Price 506 non-null float64 dtypes: float64(14) memory usage: 55.5 KB ```python # Generate scatter plot of independent vs Dependent variable plt.style.use('ggplot') fig = plt.figure(figsize = (18, 18)) for index, feature_name in enumerate(boston_dataset.feature_names): ax = fig.add_subplot(4, 4, index + 1) ax.scatter(boston_dataset.data[:, index], boston_dataset.target) ax.set_ylabel('House Price', size = 12) ax.set_xlabel(feature_name, size = 12) plt.show() ``` ```python X = df.iloc[:, :-1] y = df.iloc[:, -1] ``` ```python X_train, X_test, y_train, y_test = train_test_split( X, y, test_size = 0.25) print("Train data shape X = % s and y = % s : "%( X_train.shape, y_train.shape)) print("Test data shape X = % s and y = % s : "%( X_test.shape, y_test.shape)) ``` Train data shape X = (379, 13) and y = (379,) : Test data shape X = (127, 13) and y = (127,) : ```python # StandarScale from sklearn.preprocessing import StandardScaler scaler = StandardScaler() X_train_scaled = pd.DataFrame(scaler.fit_transform(X_train), columns = X_train.columns) X_test_scaled = pd.DataFrame(scaler.transform(X_test), columns = X_test.columns) ``` ```python X_train ``` <div> <style scoped> .dataframe tbody tr 
th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>CRIM</th> <th>ZN</th> <th>INDUS</th> <th>CHAS</th> <th>NOX</th> <th>RM</th> <th>AGE</th> <th>DIS</th> <th>RAD</th> <th>TAX</th> <th>PTRATIO</th> <th>B</th> <th>LSTAT</th> </tr> </thead> <tbody> <tr> <th>211</th> <td>0.37578</td> <td>0.0</td> <td>10.59</td> <td>1.0</td> <td>0.489</td> <td>5.404</td> <td>88.6</td> <td>3.6650</td> <td>4.0</td> <td>277.0</td> <td>18.6</td> <td>395.24</td> <td>23.98</td> </tr> <tr> <th>2</th> <td>0.02729</td> <td>0.0</td> <td>7.07</td> <td>0.0</td> <td>0.469</td> <td>7.185</td> <td>61.1</td> <td>4.9671</td> <td>2.0</td> <td>242.0</td> <td>17.8</td> <td>392.83</td> <td>4.03</td> </tr> <tr> <th>195</th> <td>0.01381</td> <td>80.0</td> <td>0.46</td> <td>0.0</td> <td>0.422</td> <td>7.875</td> <td>32.0</td> <td>5.6484</td> <td>4.0</td> <td>255.0</td> <td>14.4</td> <td>394.23</td> <td>2.97</td> </tr> <tr> <th>434</th> <td>13.91340</td> <td>0.0</td> <td>18.10</td> <td>0.0</td> <td>0.713</td> <td>6.208</td> <td>95.0</td> <td>2.2222</td> <td>24.0</td> <td>666.0</td> <td>20.2</td> <td>100.63</td> <td>15.17</td> </tr> <tr> <th>90</th> <td>0.04684</td> <td>0.0</td> <td>3.41</td> <td>0.0</td> <td>0.489</td> <td>6.417</td> <td>66.1</td> <td>3.0923</td> <td>2.0</td> <td>270.0</td> <td>17.8</td> <td>392.18</td> <td>8.81</td> </tr> <tr> <th>...</th> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> </tr> <tr> <th>128</th> <td>0.32543</td> <td>0.0</td> <td>21.89</td> <td>0.0</td> <td>0.624</td> <td>6.431</td> <td>98.8</td> <td>1.8125</td> <td>4.0</td> <td>437.0</td> <td>21.2</td> <td>396.90</td> <td>15.39</td> </tr> <tr> <th>318</th> <td>0.40202</td> <td>0.0</td> <td>9.90</td> <td>0.0</td> <td>0.544</td> <td>6.382</td> <td>67.2</td> <td>3.5325</td> <td>4.0</td> <td>304.0</td> <td>18.4</td> <td>395.21</td> <td>10.36</td> </tr> <tr> <th>372</th> <td>8.26725</td> <td>0.0</td> <td>18.10</td> <td>1.0</td> <td>0.668</td> <td>5.875</td> <td>89.6</td> <td>1.1296</td> <td>24.0</td> <td>666.0</td> <td>20.2</td> <td>347.88</td> <td>8.88</td> </tr> <tr> <th>164</th> <td>2.24236</td> <td>0.0</td> <td>19.58</td> <td>0.0</td> <td>0.605</td> <td>5.854</td> <td>91.8</td> <td>2.4220</td> <td>5.0</td> <td>403.0</td> <td>14.7</td> <td>395.11</td> <td>11.64</td> </tr> <tr> <th>73</th> <td>0.19539</td> <td>0.0</td> <td>10.81</td> <td>0.0</td> <td>0.413</td> <td>6.245</td> <td>6.2</td> <td>5.2873</td> <td>4.0</td> <td>305.0</td> <td>19.2</td> <td>377.17</td> <td>7.54</td> </tr> </tbody> </table> <p>379 rows × 13 columns</p> </div> ```python X_train_scaled ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>CRIM</th> <th>ZN</th> <th>INDUS</th> <th>CHAS</th> <th>NOX</th> <th>RM</th> <th>AGE</th> <th>DIS</th> <th>RAD</th> <th>TAX</th> <th>PTRATIO</th> <th>B</th> <th>LSTAT</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>-0.365345</td> <td>-0.496544</td> <td>-0.032725</td> <td>3.610684</td> <td>-0.554810</td> <td>-1.241634</td> <td>0.723924</td> <td>-0.089781</td> <td>-0.624840</td> <td>-0.752319</td> <td>0.105433</td> <td>0.410116</td> 
<td>1.604117</td> </tr> <tr> <th>1</th> <td>-0.403233</td> <td>-0.496544</td> <td>-0.546098</td> <td>-0.276956</td> <td>-0.724461</td> <td>1.198062</td> <td>-0.238694</td> <td>0.513705</td> <td>-0.857810</td> <td>-0.961052</td> <td>-0.258658</td> <td>0.383643</td> <td>-1.179552</td> </tr> <tr> <th>2</th> <td>-0.404699</td> <td>2.828737</td> <td>-1.510133</td> <td>-0.276956</td> <td>-1.123142</td> <td>2.143256</td> <td>-1.257319</td> <td>0.829468</td> <td>-0.624840</td> <td>-0.883523</td> <td>-1.806046</td> <td>0.399022</td> <td>-1.327457</td> </tr> <tr> <th>3</th> <td>1.106457</td> <td>-0.496544</td> <td>1.062570</td> <td>-0.276956</td> <td>1.345287</td> <td>-0.140277</td> <td>0.947951</td> <td>-0.758477</td> <td>1.704863</td> <td>1.567596</td> <td>0.833615</td> <td>-2.826040</td> <td>0.374838</td> </tr> <tr> <th>4</th> <td>-0.401108</td> <td>-0.496544</td> <td>-1.079890</td> <td>-0.276956</td> <td>-0.554810</td> <td>0.146021</td> <td>-0.063673</td> <td>-0.355211</td> <td>-0.857810</td> <td>-0.794066</td> <td>-0.258658</td> <td>0.376503</td> <td>-0.512588</td> </tr> <tr> <th>...</th> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> </tr> <tr> <th>374</th> <td>-0.370819</td> <td>-0.496544</td> <td>1.615322</td> <td>-0.276956</td> <td>0.590338</td> <td>0.165198</td> <td>1.080967</td> <td>-0.948362</td> <td>-0.624840</td> <td>0.201888</td> <td>1.288729</td> <td>0.428351</td> <td>0.405535</td> </tr> <tr> <th>375</th> <td>-0.362493</td> <td>-0.496544</td> <td>-0.133358</td> <td>-0.276956</td> <td>-0.088268</td> <td>0.098076</td> <td>-0.025168</td> <td>-0.151191</td> <td>-0.624840</td> <td>-0.591297</td> <td>0.014410</td> <td>0.409787</td> <td>-0.296313</td> </tr> <tr> <th>376</th> <td>0.492611</td> <td>-0.496544</td> <td>1.062570</td> <td>3.610684</td> <td>0.963571</td> <td>-0.596436</td> <td>0.758928</td> <td>-1.264866</td> <td>1.704863</td> <td>1.567596</td> <td>0.833615</td> <td>-0.110112</td> <td>-0.502821</td> </tr> <tr> <th>377</th> <td>-0.162412</td> <td>-0.496544</td> <td>1.278420</td> <td>-0.276956</td> <td>0.429169</td> <td>-0.625203</td> <td>0.835937</td> <td>-0.665876</td> <td>-0.508355</td> <td>-0.000881</td> <td>-1.669512</td> <td>0.408688</td> <td>-0.117712</td> </tr> <tr> <th>378</th> <td>-0.384957</td> <td>-0.496544</td> <td>-0.000639</td> <td>-0.276956</td> <td>-1.199485</td> <td>-0.089593</td> <td>-2.160429</td> <td>0.662108</td> <td>-0.624840</td> <td>-0.585333</td> <td>0.378501</td> <td>0.211625</td> <td>-0.689794</td> </tr> </tbody> </table> <p>379 rows × 13 columns</p> </div> # Plot Function ```python def plot_model(model_coeffs): # plotting the coefficient score fig, ax = plt.subplots(figsize =(10, 5)) color =['tab:gray', 'tab:blue', 'tab:orange', 'tab:green', 'tab:red', 'tab:purple', 'tab:brown', 'tab:pink', 'tab:gray', 'tab:olive', 'tab:cyan', 'tab:orange', 'tab:green', 'tab:blue', 'tab:olive'] ax.bar(model_coeffs["Columns"], model_coeffs['Coefficient Estimate'], color = color) ax.spines['bottom'].set_position('zero') plt.style.use('ggplot') plt.show() ``` # Multiple Linear Regression $$y = w_0 + w_1 x_{1} + w_2 x_{2} + ... + w_n x_{n} + \epsilon$$ $$J(w) = \sum_{i=1}^m r_i^2 = \sum_{i=1}^m (y_i - \tilde{y}_i)^2 = \sum_{i=1}^m (y_i - \tilde{w}_0 - \tilde{w}_1 x_1 - ... 
- \tilde{w}_n x_n)^2$$ ```python # multiple Linear Regression Model lreg = LinearRegression() lreg.fit(X_train_scaled, y_train) # Prediccion en test set lreg_y_pred = lreg.predict(X_test_scaled) # MSE train set mse_train = np.mean((lreg.predict(X_train_scaled) - y_train)**2) print("Mean squared Error on train set : ", mse_train) # calculating Mean Squared Error (mse) mean_squared_error = np.mean((lreg_y_pred - y_test)**2) print("Mean squared Error on test set : ", mean_squared_error) # Putting together the coefficient and their corresponding variable names lreg_coefficient = pd.DataFrame() lreg_coefficient["Columns"] = X_train.columns lreg_coefficient['Coefficient Estimate'] = pd.Series(lreg.coef_) print(lreg_coefficient) ``` Mean squared Error on train set : 20.11486173313239 Mean squared Error on test set : 37.173340973712655 Columns Coefficient Estimate 0 CRIM -0.981919 1 ZN 1.103023 2 INDUS 0.706474 3 CHAS 0.181104 4 NOX -2.054656 5 RM 3.302312 6 AGE -0.067952 7 DIS -2.860861 8 RAD 2.200599 9 TAX -2.191210 10 PTRATIO -2.275399 11 B 1.048264 12 LSTAT -3.420124 ```python plot_model(lreg_coefficient) ``` # Ridge Regression \begin{equation} J(w)= \sum_i (w^T x_i - y_i)^2 + \lambda w^2 \end{equation} ```python def ridge(lambd, X_train = X_train_scaled, y_train = y_train, X_test = X_test_scaled, y_test = y_test): from sklearn.linear_model import Ridge # Train the model ridgeR = Ridge(alpha = lambd) ridgeR.fit(X_train, y_train) y_pred = ridgeR.predict(X_test) # mse train set mse_train = np.mean((ridgeR.predict(X_train) - y_train)**2) #print('MSE RIDGE train set :', mse_train) # mean square error mean_squared_error_ridge = np.mean((y_pred - y_test)**2) #ridge coefficient df_ridge_coefficient = pd.DataFrame() df_ridge_coefficient["Columns"]= X_train.columns df_ridge_coefficient['Coefficient Estimate'] = pd.Series(ridgeR.coef_) return mean_squared_error_ridge, df_ridge_coefficient, ridgeR.coef_, mse_train ``` ```python mse_ridge, ridge_coefficient,_,mse_train = ridge(1) print('MSE RIDGE train set :', mse_train) print('MSE RIDGE test set :', mse_ridge) print('-----------------') print(ridge_coefficient) ``` MSE RIDGE train set : 21.84825265392536 MSE RIDGE test set : 27.298676841645808 ----------------- Columns Coefficient Estimate 0 CRIM -1.009143 1 ZN 0.980959 2 INDUS -0.194660 3 CHAS 0.701500 4 NOX -1.807307 5 RM 3.044062 6 AGE -0.198334 7 DIS -3.205758 8 RAD 2.488452 9 TAX -1.959227 10 PTRATIO -2.060459 11 B 0.594750 12 LSTAT -3.844630 ```python plot_model(ridge_coefficient) ``` ### Lambda vs MSE ```python # Veamos como varia el MSE por lambda lambdas = np.linspace(0,1000,1000) ridge_test_mses = [ridge(l)[0] for l in lambdas] ridge_train_mses = [ridge(l)[3] for l in lambdas] fig = plt.figure(figsize = (10, 5)) fig.add_subplot(1,2,1) plt.plot(lambdas, ridge_test_mses) plt.title('Test set') fig.add_subplot(1,2,2) plt.plot(lambdas, ridge_train_mses) plt.title('Train set') ``` ### Lambda vs Weights ```python # Veamos como varia el MSE por lambda lambdas = 10**np.linspace(10,-2,100)*0.5 ridge_coeffs = [ridge(l)[2] for l in lambdas] ax = plt.gca() ax.plot(lambdas, ridge_coeffs) ax.set_xscale('log') plt.axis('tight') plt.xlabel('alpha') plt.ylabel('weights') ``` # Lasso Regression \begin{equation} J(w)= \sum_i (w^T x_i - y_i)^2 + \lambda ||w||_1 \end{equation} ```python def lasso(lambd, X_train = X_train_scaled, y_train = y_train, X_test = X_test_scaled, y_test = y_test): from sklearn.linear_model import Lasso # Train the model lasso = Lasso(alpha = lambd) lasso.fit(X_train, y_train) y_pred1 = 
lasso.predict(X_test) # mse train set mse_train = np.mean((lasso.predict(X_train) - y_train)**2) #print('MSE RIDGE train set :', mse_train) # Calculate Mean Squared Error mean_squared_error = np.mean((y_pred1 - y_test)**2) df_lasso_coeff = pd.DataFrame() df_lasso_coeff["Columns"] = X_train.columns df_lasso_coeff['Coefficient Estimate'] = pd.Series(lasso.coef_) return mean_squared_error, df_lasso_coeff, lasso.coef_, mse_train ``` ```python mse_lasso, lasso_coefficient,_,mse_train = lasso(1) print('MSE LASSO train set :', mse_train) print('MSE LASSO :', mse_lasso) print('-----------------') print(lasso_coefficient) ``` MSE LASSO train set : 28.39264682044312 MSE LASSO : 29.095045432946005 ----------------- Columns Coefficient Estimate 0 CRIM -0.000000 1 ZN 0.000000 2 INDUS -0.000000 3 CHAS 0.000000 4 NOX -0.000000 5 RM 3.063196 6 AGE -0.000000 7 DIS -0.000000 8 RAD -0.000000 9 TAX -0.000000 10 PTRATIO -1.555405 11 B 0.000000 12 LSTAT -3.730375 ```python plot_model(lasso_coefficient) ``` ### Lambda vs MSE ```python # Let's see how the MSE varies with lambda lambdas = np.linspace(0,30,1000) lasso_test_mses = [lasso(l)[0] for l in lambdas] lasso_train_mses = [lasso(l)[3] for l in lambdas] fig = plt.figure(figsize = (10, 5)) fig.add_subplot(1,2,1) plt.plot(lambdas, lasso_test_mses) plt.title('Test set') fig.add_subplot(1,2,2) plt.plot(lambdas, lasso_train_mses) plt.title('Train set') ``` ### Lambda vs Weights ```python # Let's see how the weights vary with lambda lambdas = 10**np.linspace(10,-2,100)*0.5 lasso_coeffs = [lasso(l)[2] for l in lambdas] ax = plt.gca() ax.plot(lambdas, lasso_coeffs) ax.set_xscale('log') plt.axis('tight') plt.xlabel('alpha') plt.ylabel('weights') ``` # Conclusions: ## Ridge: - It includes all (or none) of the model's variables. Therefore, the main advantage of ridge regression is the shrinkage of the coefficients and the reduction of model complexity. - It is typically used to reduce overfitting. - In general, it works well even in the presence of highly correlated variables, since it will include all of them in the model, but the coefficients will be distributed among them according to the correlation. ## Lasso: - Along with shrinking the coefficients, Lasso also performs variable selection. Some of the coefficients become exactly zero, which is equivalent to excluding that particular variable from the model. - It is typically used to reduce the number of variables and obtain a 'sparse solution' (e.g., when there are thousands of variables). - It arbitrarily selects one feature among the highly correlated ones and shrinks the coefficients of the rest to zero. Moreover, the chosen variable changes randomly as the model parameters change. This usually does not work as well as ridge regression. # References: - https://www.cienciadedatos.net/documentos/py14-ridge-lasso-elastic-net-python.html - https://www.analyticsvidhya.com/blog/2016/01/ridge-lasso-regression-python-complete-tutorial/ - https://www.geeksforgeeks.org/implementation-of-lasso-ridge-and-elastic-net/ - https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Lasso.html - https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html
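A possible extension of the lambda-vs-MSE searches above (a sketch, not part of the original notebook): let scikit-learn pick the regularisation strength by cross-validation. Note that scikit-learn calls the penalty weight `alpha` rather than lambda; the variables `X_train_scaled`, `y_train`, `X_test_scaled` and `y_test` are the ones defined earlier.

```python
# Illustrative sketch: choose the regularisation strength by cross-validation.
import numpy as np
from sklearn.linear_model import RidgeCV, LassoCV

alphas = np.logspace(-3, 3, 100)

ridge_cv = RidgeCV(alphas=alphas).fit(X_train_scaled, y_train)
lasso_cv = LassoCV(alphas=alphas, cv=5, max_iter=10000).fit(X_train_scaled, y_train)

print("Best alpha (Ridge):", ridge_cv.alpha_)
print("Best alpha (Lasso):", lasso_cv.alpha_)
print("Ridge test MSE:", np.mean((ridge_cv.predict(X_test_scaled) - y_test)**2))
print("Lasso test MSE:", np.mean((lasso_cv.predict(X_test_scaled) - y_test)**2))
```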
d46a0dba27ab452810c1181d37495ac5510ddafd
363,644
ipynb
Jupyter Notebook
Ayudantia 9/tEEds_Regression.ipynb
diegulio/tEEds
d2b47c0f120947b53d17c04c8ba54258988f93d3
[ "MIT" ]
null
null
null
Ayudantia 9/tEEds_Regression.ipynb
diegulio/tEEds
d2b47c0f120947b53d17c04c8ba54258988f93d3
[ "MIT" ]
null
null
null
Ayudantia 9/tEEds_Regression.ipynb
diegulio/tEEds
d2b47c0f120947b53d17c04c8ba54258988f93d3
[ "MIT" ]
null
null
null
221.060182
180,302
0.871003
true
8,731
Qwen/Qwen-72B
1. YES 2. YES
0.913677
0.867036
0.79219
__label__kor_Hang
0.272459
0.678856
```python from IPython.core.display import HTML, Image css_file = 'style.css' HTML(open(css_file, 'r').read()) ``` <link href='http://fonts.googleapis.com/css?family=Alegreya+Sans:100,300,400,500,700,800,900,100italic,300italic,400italic,500italic,700italic,800italic,900italic' rel='stylesheet' type='text/css'> <link href='http://fonts.googleapis.com/css?family=Arvo:400,700,400italic' rel='stylesheet' type='text/css'> <link href='http://fonts.googleapis.com/css?family=PT+Mono' rel='stylesheet' type='text/css'> <link href='http://fonts.googleapis.com/css?family=Shadows+Into+Light' rel='stylesheet' type='text/css'> <link href='http://fonts.googleapis.com/css?family=Philosopher:400,700,400italic,700italic' rel='stylesheet' type='text/css'> <style> @font-face { font-family: "Computer Modern"; src: url('http://mirrors.ctan.org/fonts/cm-unicode/fonts/otf/cmunss.otf'); } #notebook_panel { /* main background */ background: #ddd; color: #000000; } /* Formatting for header cells */ .text_cell_render h1 { font-family: 'Philosopher', sans-serif; font-weight: 400; font-size: 2.2em; line-height: 100%; color: rgb(0, 80, 120); margin-bottom: 0.1em; margin-top: 0.1em; display: block; } .text_cell_render h2 { font-family: 'Philosopher', serif; font-weight: 400; font-size: 1.9em; line-height: 100%; color: rgb(200,100,0); margin-bottom: 0.1em; margin-top: 0.1em; display: block; } .text_cell_render h3 { font-family: 'Philosopher', serif; margin-top:12px; margin-bottom: 3px; font-style: italic; color: rgb(94,127,192); } .text_cell_render h4 { font-family: 'Philosopher', serif; } .text_cell_render h5 { font-family: 'Alegreya Sans', sans-serif; font-weight: 300; font-size: 16pt; color: grey; font-style: italic; margin-bottom: .1em; margin-top: 0.1em; display: block; } .text_cell_render h6 { font-family: 'PT Mono', sans-serif; font-weight: 300; font-size: 10pt; color: grey; margin-bottom: 1px; margin-top: 1px; } .CodeMirror{ font-family: "PT Mono"; font-size: 100%; } </style> ```python from sympy import init_printing, Matrix, symbols, I, sqrt, Rational from IPython.display import Image from warnings import filterwarnings ``` ```python init_printing(use_latex = 'mathjax') filterwarnings('ignore') ``` # Complex vectors, matrices # Fast Fourier transform ## Complex vectors * Consider the following vector with complex entries (from this point on I will not use the underscore to indicate a vector, so as not to create confusion with the bar, noting complex conjugate, instead, inferring from context) $$ {z} = \begin{bmatrix} {z}_{1} \\ {z}_{2} \\ \vdots \\ {z}_{n} \end{bmatrix} $$ * The length (actually length squared) of this vector is *no good*, since it should be positive $$ {z}^{T}{z} $$ * Instead we consider the following $$ z\bar { z } ={ \left| { z } \right| }^{ 2 }\\ \therefore \quad \bar { z } ^{ T }z\\ \left[ { \bar { z } }_{ 1 },{ \bar { z } }_{ 2 },\dots ,{ \bar { z } }_{ n } \right] \begin{bmatrix} { z }_{ 1 } \\ { z }_{ 2 } \\ \vdots \\ { z }_{ n } \end{bmatrix} $$ ```python z = Matrix([1, I]) # I is the sympy symbol for the imaginary number i z ``` $$\left[\begin{matrix}1\\i\end{matrix}\right]$$ * Let's calculate this manually ```python z.norm() # The length of a vector ``` $$\sqrt{2}$$ ```python z_cc = Matrix([1, -I]) z_cc ``` $$\left[\begin{matrix}1\\- i\end{matrix}\right]$$ ```python sqrt(z_cc.transpose() * z) ``` $$\left(\left[\begin{matrix}2\end{matrix}\right]\right)^{\frac{1}{2}}$$ * Taking the transpose of the complex conjugate is called the Hermitian $$ {z}^{H}{z} $$ * We can use the Hermitian for 
non-complex (or mixed complex) vectors **u** and **v** too $$ \bar{y}^{T}{x} \\ {y}^{H}{x} $$ ```python from sympy.physics.quantum.dagger import Dagger # A fun way to quickly get the Hermitian ``` ```python Dagger(z) ``` $$\left[\begin{matrix}1 & - i\end{matrix}\right]$$ ```python sqrt(Dagger(z) * z) ``` $$\left(\left[\begin{matrix}2\end{matrix}\right]\right)^{\frac{1}{2}}$$ ## Complex symmetric matrices ### The transpose * If the symmetric matrix has complex entries then A<sup>T</sup>=A is *no good* ```python A = Matrix([[2, 3 + I], [3 - I, 5]]) A # A Hermitian matrix ``` $$\left[\begin{matrix}2 & 3 + i\\3 - i & 5\end{matrix}\right]$$ ```python A.transpose() == A ``` False ```python Dagger(A) ``` $$\left[\begin{matrix}2 & 3 + i\\3 - i & 5\end{matrix}\right]$$ ```python Dagger(A) == A ``` True * This will work for real-values symmetric matrices as well ```python A = Matrix([[3, 4], [4, 2]]) A ``` $$\left[\begin{matrix}3 & 4\\4 & 2\end{matrix}\right]$$ ```python A.transpose() == A ``` True ```python Dagger(A) == A ``` True ### The eigenvalues and eigenvectors * Back to the complex matrix A ```python A = Matrix([[2, 3 + I], [3 - I, 5]]) A ``` $$\left[\begin{matrix}2 & 3 + i\\3 - i & 5\end{matrix}\right]$$ ```python A.eigenvals() ``` $$\begin{Bmatrix}0 : 1, & 7 : 1\end{Bmatrix}$$ $$ A=\begin{bmatrix} 2 & 3+i \\ 3-i & 5 \end{bmatrix}\\ A-\lambda I=\underline { 0 } \\ \left| \begin{bmatrix} 2 & 3+i \\ 3-i & 5 \end{bmatrix}-\begin{bmatrix} \lambda & 0 \\ 0 & \lambda \end{bmatrix} \right| =0\\ \begin{vmatrix} 2-\lambda & 3+i \\ 3-i & 5-\lambda \end{vmatrix}=0\\ \left( 2-\lambda \right) \left( 5-\lambda \right) -\left( 3+i \right) \left( 3-i \right) =0\\ 10-7\lambda +{ \lambda }^{ 2 }-\left( 9+1 \right) =0\\ { \lambda }^{ 2 }-7\lambda =0\\ { \lambda }_{ 1 }=0\\ { \lambda }_{ 2 }=7 $$ ```python A.eigenvects() ``` $$\begin{bmatrix}\begin{pmatrix}0, & 1, & \begin{bmatrix}\left[\begin{matrix}- \frac{3}{2} - \frac{i}{2}\\1\end{matrix}\right]\end{bmatrix}\end{pmatrix}, & \begin{pmatrix}7, & 1, & \begin{bmatrix}\left[\begin{matrix}\frac{3}{5} + \frac{i}{5}\\1\end{matrix}\right]\end{bmatrix}\end{pmatrix}\end{bmatrix}$$ ```python S, D = A.diagonalize() ``` ```python S ``` $$\left[\begin{matrix}-3 - i & 3 + i\\2 & 5\end{matrix}\right]$$ ```python D ``` $$\left[\begin{matrix}0 & 0\\0 & 7\end{matrix}\right]$$ * What about S now? 
* We have to use its transpose, but it is complex, so we have to take the Hermitian ```python Dagger(S) ``` $$\left[\begin{matrix}-3 + i & 2\\3 - i & 5\end{matrix}\right]$$ ```python S == Dagger(S) # Don't get confused here, S is not symmetric ``` False * Remember that for a symmetric matrix the column vectors in S (usually called Q, the matrix of eigenvectors) are orthogonal, with Q<sup>T</sup>Q=I * With complex entries we have to consider the Hermitian here, not just the simple transpose * Here we call Q *unitary* ## The fast Fourier transform * Look at this special matrix (where we start counting rows and columns at zero) $$ { F }_{ n }=\begin{bmatrix} W^{ \left( 0 \right) \left( 0 \right) } & { W }^{ \left( 0 \right) \left( 1 \right) } & { W }^{ \left( 0 \right) \left( 2 \right) } & \dots & { W }^{ \left( 0 \right) \left( n-1 \right) } \\ W^{ \left( 1 \right) \left( 0 \right) } & { W }^{ \left( 1 \right) \left( 1 \right) } & { W }^{ \left( 1 \right) \left( 2 \right) } & \dots & { W }^{ \left( 1 \right) \left( n-1 \right) } \\ { W }^{ \left( 2 \right) \left( 0 \right) } & { W }^{ \left( 2 \right) \left( 1 \right) } & { W }^{ \left( 2 \right) \left( 2 \right) } & \dots & { W }^{ \left( 2 \right) \left( n-1 \right) } \\ \vdots & \vdots & \vdots & \dots & \vdots \\ { W }^{ \left( n-1 \right) \left( 0 \right) } & { W }^{ \left( n-1 \right) \left( 1 \right) } & { W }^{ \left( n-1 \right) \left( 2 \right) } & \dots & { W }^{ \left( n-1 \right) \left( n-1 \right) } \end{bmatrix} \\ \left({F}_{n}\right)_{ij}={W}^{ij}; i,j=0,1,2,\dots,n-1 $$ * W is a special number whose *n*<sup>th</sup> power equals 1 $$ {W}^{n}=1 \\ W={ e }^{ \frac { i2\pi }{ n } }=\cos { \frac { 2\pi }{ n } +i\sin { \frac { 2\pi }{ n } } } $$ * It is in the complex plane of course (as written in *sin* and *cos* above) * Remember than *n* here refers to the size the matrix * Here it also refers to the *n*<sup>th</sup> *n* roots (if that makes any sense, else look at the image below) ```python Image(filename = 'W.png') ``` * So for *n*=4 we will have the following $$ { F }_{ 4 }=\begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & \left( { e }^{ \frac { 2\pi i }{ 4 } } \right) ^{ 1 } & { \left( { e }^{ \frac { 2\pi i }{ 4 } } \right) ^{ 2 } } & { \left( { e }^{ \frac { 2\pi i }{ 4 } } \right) ^{ 3 } } \\ 1 & \left( { e }^{ \frac { 2\pi i }{ 4 } } \right) ^{ 2 } & { \left( { e }^{ \frac { 2\pi i }{ 4 } } \right) ^{ 4 } } & { \left( { e }^{ \frac { 2\pi i }{ 4 } } \right) ^{ 6 } } \\ 1 & \left( { e }^{ \frac { 2\pi i }{ 4 } } \right) ^{ 3 } & { \left( { e }^{ \frac { 2\pi i }{ 4 } } \right) ^{ 6 } } & { \left( { e }^{ \frac { 2\pi i }{ 4 } } \right) ^{ 9 } } \end{bmatrix} $$ * We note that a quarter of the way around is *i* $$ {e}^{\frac{2\pi{i}}{4}}={i} $$ * We thus have the following $$ { F }_{ 4 }=\begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & i & { i }^{ 2 } & { i }^{ 3 } \\ 1 & { i }^{ 2 } & { i }^{ 4 } & { i }^{ 6 } \\ 1 & { i }^{ 3 } & { i }^{ 6 } & { i }^{ 9 } \end{bmatrix}\\ { F }_{ 4 }=\begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & i & -1 & -i \\ 1 & -1 & 1 & -1 \\ 1 & -i & -1 & i \end{bmatrix} $$ * Note how the columns are orthogonal ```python F = Matrix([[1, 1, 1, 1], [1, I, -1, -I], [1, -1, 1, -1], [1, -I, -1, I]]) F ``` $$\left[\begin{matrix}1 & 1 & 1 & 1\\1 & i & -1 & - i\\1 & -1 & 1 & -1\\1 & - i & -1 & i\end{matrix}\right]$$ ```python F.col(0) # Calling only the selected column (counting starts at 0) ``` $$\left[\begin{matrix}1\\1\\1\\1\end{matrix}\right]$$ * The columns are supposed to be orthogonal, i.e. 
inner (dot) product should be zero * Clearly below it is not ```python F.col(1).dot(F.col(3)) ``` $$4$$ * Remember, though, that this is a complex matrix and we have to use the Hermitian ```python col1 = F.col(1) col3 = F.col(3) col1, col3 ``` $$\begin{pmatrix}\left[\begin{matrix}1\\i\\-1\\- i\end{matrix}\right], & \left[\begin{matrix}1\\- i\\-1\\i\end{matrix}\right]\end{pmatrix}$$ ```python Dagger(col3), col1 ``` $$\begin{pmatrix}\left[\begin{matrix}1 & i & -1 & - i\end{matrix}\right], & \left[\begin{matrix}1\\i\\-1\\- i\end{matrix}\right]\end{pmatrix}$$ ```python Dagger(col3) * col1 # Another way to do the dot product ``` $$\left[\begin{matrix}0\end{matrix}\right]$$ * So, these columns are all orthogonal, but they are not orthonormal * Note, though that the are all of length 2, so we can normalize each ```python Rational(1, 2) * F ``` $$\left[\begin{matrix}\frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2}\\\frac{1}{2} & \frac{i}{2} & - \frac{1}{2} & - \frac{i}{2}\\\frac{1}{2} & - \frac{1}{2} & \frac{1}{2} & - \frac{1}{2}\\\frac{1}{2} & - \frac{i}{2} & - \frac{1}{2} & \frac{i}{2}\end{matrix}\right]$$ * We also note the following $$ {F}_{n}^{H}{F}_{n}={I} $$ * Just remember to normalize them ```python Dagger(Rational(1, 2) * F) ``` $$\left[\begin{matrix}\frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2}\\\frac{1}{2} & - \frac{i}{2} & - \frac{1}{2} & \frac{i}{2}\\\frac{1}{2} & - \frac{1}{2} & \frac{1}{2} & - \frac{1}{2}\\\frac{1}{2} & \frac{i}{2} & - \frac{1}{2} & - \frac{i}{2}\end{matrix}\right]$$ ```python Dagger(Rational(1, 2) * F) * ((Rational(1, 2) * F)) ``` $$\left[\begin{matrix}1 & 0 & 0 & 0\\0 & 1 & 0 & 0\\0 & 0 & 1 & 0\\0 & 0 & 0 & 1\end{matrix}\right]$$ * Now why do we call it *fast* Fourier transform * Note the following $$ { W }_{ n }={ e }^{ \frac { 2\pi i }{ n } }\\ { \left( { W }_{ n } \right) }^{ p }={ \left( { e }^{ \frac { 2\pi i }{ n } } \right) }^{ p }\\ { \left( { W }_{ 64 } \right) }^{ 2 }={ \left( { e }^{ \frac { 2\pi i }{ 64 } } \right) }^{ 2 };\quad n=64,\quad p=2\\ \therefore \quad { \left( { W }_{ 64 } \right) }^{ 2 }={ W }_{ 32 } $$ * Now we have the following connection between the two $$ \left[ { F }_{ 64 } \right] =\begin{bmatrix} I & D \\ I & -D \end{bmatrix}\begin{bmatrix} { F }_{ 32 } & 0 \\ 0 & { F }_{ 32 } \end{bmatrix}\left[ P \right] \\ D=\begin{bmatrix} 1 & 0 & 0 & \dots & 0 \\ 0 & W & 0 & \dots & 0 \\ 0 & 0 & { W }^{ 2 } & \dots & 0 \\ \vdots & \vdots & \vdots & \dots & \vdots \\ 0 & 0 & 0 & \dots & { W }^{ 31 } \end{bmatrix}$$ * P is a permutation matrix * Going down to 16 will include the following $$ \begin{bmatrix} I & D & 0 & 0 \\ I & -D & 0 & 0 \\ 0 & 0 & I & D \\ 0 & 0 & I & -D \end{bmatrix}\begin{bmatrix} { F }_{ 16 } & 0 & 0 & 0 \\ 0 & { F }_{ 16 } & 0 & 0 \\ 0 & 0 & { F }_{ 16 } & 0 \\ 0 & 0 & 0 & { F }_{ 16 } \end{bmatrix}\left[ P \right] $$ * The recursive work above leads to decreasing the work that is required for working with these problems ```python ```
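As a small numerical check of the factorization above (a sketch, not part of the original lecture notes, written with `numpy` rather than `sympy` for brevity), we can verify for n = 4 that F_4 equals the product of the butterfly matrix [[I, D], [I, -D]], the block-diagonal matrix containing two copies of F_2, and the even-odd permutation P:

```python
# Check F_4 = [[I, D], [I, -D]] @ blockdiag(F_2, F_2) @ P for the convention W = e^(2*pi*i/n).
import numpy as np

n = 4
W = np.exp(2j * np.pi / n)                                   # W_4 = i
F4 = np.array([[W**(j * k) for k in range(n)] for j in range(n)])

F2 = np.array([[1, 1], [1, -1]], dtype=complex)              # F_2 (built with W_2 = -1)
D = np.diag([W**p for p in range(n // 2)])                   # diag(1, i)
I2 = np.eye(n // 2)

butterfly = np.block([[I2, D], [I2, -D]])                    # [[I, D], [I, -D]]
blockdiag = np.block([[F2, np.zeros((2, 2))],
                      [np.zeros((2, 2)), F2]])               # diag(F_2, F_2)

P = np.zeros((n, n))                                         # even-odd permutation:
P[0, 0] = P[1, 2] = P[2, 1] = P[3, 3] = 1                    # (x0, x1, x2, x3) -> (x0, x2, x1, x3)

print(np.allclose(F4, butterfly @ blockdiag @ P))            # True
```

Applying the same split recursively is what reduces the cost of the transform from a full n-by-n matrix-vector product toward the familiar n log n operation count.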
249a4bc02ae73a8a4506f758e51bd5ffafe19e62
61,487
ipynb
Jupyter Notebook
Mathematics/Linear Algebra/0.0 MIT-18.06 - Jupyter/Lecture_26_Complex_matrices_FFT.ipynb
okara83/Becoming-a-Data-Scientist
f09a15f7f239b96b77a2f080c403b2f3e95c9650
[ "MIT" ]
null
null
null
Mathematics/Linear Algebra/0.0 MIT-18.06 - Jupyter/Lecture_26_Complex_matrices_FFT.ipynb
okara83/Becoming-a-Data-Scientist
f09a15f7f239b96b77a2f080c403b2f3e95c9650
[ "MIT" ]
null
null
null
Mathematics/Linear Algebra/0.0 MIT-18.06 - Jupyter/Lecture_26_Complex_matrices_FFT.ipynb
okara83/Becoming-a-Data-Scientist
f09a15f7f239b96b77a2f080c403b2f3e95c9650
[ "MIT" ]
2
2022-02-09T15:41:33.000Z
2022-02-11T07:47:40.000Z
51.539816
31,252
0.690406
true
4,994
Qwen/Qwen-72B
1. YES 2. YES
0.651355
0.808067
0.526339
__label__eng_Latn
0.573238
0.06119
# Euler's Formula [back to overview page](index.ipynb) a proof: http://austinrochford.com/posts/2014-02-05-eulers-formula-sympy.html ```python import sympy as sp sp.init_printing() ``` ```python x = sp.symbols('x', real=True) ``` ```python exp1 = sp.exp(sp.I * x) exp1 ``` ```python exp2 = exp1.expand(complex=True) exp2 ``` ```python exp2.rewrite(sp.exp) ``` Euler's identity: ```python sp.exp(sp.I * sp.pi) + 1 ``` <p xmlns:dct="http://purl.org/dc/terms/"> <a rel="license" href="http://creativecommons.org/publicdomain/zero/1.0/"> </a> <br /> To the extent possible under law, <span rel="dct:publisher" resource="[_:publisher]">the person who associated CC0</span> with this work has waived all copyright and related or neighboring rights to this work. </p>
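A quick numerical cross-check of Euler's formula and Euler's identity with NumPy (an added snippet, not part of the original notebook):

```python
# Added numerical sanity check of exp(i*x) = cos(x) + i*sin(x).
import numpy as np

x = 0.7  # an arbitrary test value
lhs = np.exp(1j * x)
rhs = np.cos(x) + 1j * np.sin(x)
print(np.isclose(lhs, rhs))        # True

print(np.exp(1j * np.pi) + 1)      # ~0, up to floating-point error
```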
3bbea582709c7df157ab0ccc2f75dc54092d1d99
5,921
ipynb
Jupyter Notebook
sympy/euler.ipynb
mgeier/python-audio
70d4d62b148c08c50ec0057f8d4fd9876ce67a13
[ "CC0-1.0" ]
144
2015-04-14T20:13:25.000Z
2022-03-26T20:00:27.000Z
sympy/euler.ipynb
mgeier/python-audio
70d4d62b148c08c50ec0057f8d4fd9876ce67a13
[ "CC0-1.0" ]
1
2017-05-02T13:22:41.000Z
2017-05-03T13:17:15.000Z
sympy/euler.ipynb
mgeier/python-audio
70d4d62b148c08c50ec0057f8d4fd9876ce67a13
[ "CC0-1.0" ]
36
2015-04-19T14:08:37.000Z
2021-04-21T14:24:37.000Z
30.838542
1,148
0.657997
true
250
Qwen/Qwen-72B
1. YES 2. YES
0.935347
0.896251
0.838306
__label__eng_Latn
0.697801
0.785998
# Python: Cluster Robust Double Machine Learning ## Motivation In many empirical applications, errors exhibit a clustered structure such that the usual i.i.d. assumption does not hold anymore. In order to perform valid statistical inference, researchers have to account for clustering. In this notebook, we will shortly emphasize the consequences of clustered data on inference based on the double machine learning (DML) approach as has been considered in [Chiang et al. (2021)](https://doi.org/10.1080/07350015.2021.1895815). We will demonstrate how users of the [DoubleML](https://docs.doubleml.org/stable/index.html) package can account for one- and two-way clustering in their analysis. Clustered errors in terms of one or multiple dimensions might arise in many empirical applications. For example, in a cross-sectional study, errors might be correlated (i) within regions (one-way clustering) or (ii) within regions and industries at the same time (two-way clustering). Another example for two-way clustering, discussed in [Chiang et al. (2021)](https://doi.org/10.1080/07350015.2021.1895815), refers to market share data with market shares being subject to shocks on the market and product level at the same time. We refer to [Cameron et al. (2011)](https://doi.org/10.1198/jbes.2010.07136) for an introduction to multiway clustering and a illustrative list of empirical examples. ## Clustering and double machine learning Clustering creates a challenge to the double machine learning (DML) approach in terms of 1. a necessary adjustment of the formulae used for estimation of the variance covariance matrix, standard errors, p-values etc., and, 2. an adjusted resampling scheme for the cross-fitting algorithm. The first point equally applies to classical statistical models, for example a linear regression model (see, for example [Cameron et al. 2011](https://doi.org/10.1198/jbes.2010.07136)). The second point arises because the clustering implies a correlation of errors from train and test samples if the standard cross-fitting procedure suggested in [Chernozhukov et al. (2018)](https://doi.org/10.1111/ectj.12097) was employed. The DML approach builds on independent sample splits into partitions that are used for training of the machine learning (ML) model learners and generation of predictions that are eventually used for solving the score function. For a motivation of the necessity of sample splitting, we refer to the illustration example in the [user guide]( https://docs.doubleml.org/stable/guide/basics.html#sample-splitting-to-remove-bias-induced-by-overfitting) as well as to the explanation in [Chernozhukov et al. (2018)](https://doi.org/10.1111/ectj.12097) . In order to achieve independent data splits in a setting with one-way or multi-way clustering, [Chiang et al. (2021)](https://doi.org/10.1080/07350015.2021.1895815) develop an updated $K$-fold sample splitting procedure that ensures independent sample splits: The data set is split into disjoint partitions in terms of all clustering dimensions. For example, in a situation with two-way clustering, the data is split into $K^2$ folds. The machine learning models are then trained on a specific fold and used for generation of predictions in hold-out samples. Thereby, the sample splitting procedure ensures that the hold-out samples do not contain observations of the same clusters as used for training. 
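To make the splitting idea concrete, here is a minimal sketch (an added illustration only, not the DoubleML implementation, and with toy cluster counts) of how the cluster identifiers of each dimension can be partitioned into $K$ folds, producing $K^2$ train/score splits in which no cluster appears on both sides:

```python
# Added sketch of the two-way cluster sample splitting described above.
import numpy as np
from sklearn.model_selection import KFold

K = 3
N, M = 6, 6  # toy numbers of clusters in the first and second dimension
folds_i = list(KFold(n_splits=K, shuffle=True, random_state=0).split(np.arange(N)))
folds_j = list(KFold(n_splits=K, shuffle=True, random_state=1).split(np.arange(M)))

smpls = []
for train_i, score_i in folds_i:
    for train_j, score_j in folds_j:
        # nuisance models are trained on clusters in (train_i x train_j);
        # the score function is evaluated on clusters in (score_i x score_j)
        smpls.append(((train_i, train_j), (score_i, score_j)))

print(len(smpls))  # K^2 = 9 folds
```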
```python import numpy as np import pandas as pd import matplotlib.pyplot as plt from matplotlib.colors import ListedColormap import seaborn as sns from sklearn.model_selection import KFold, RepeatedKFold from sklearn.base import clone from sklearn.linear_model import LassoCV from doubleml import DoubleMLClusterData, DoubleMLData, DoubleMLPLIV from doubleml.datasets import make_pliv_multiway_cluster_CKMS2021 ``` ## A Motivating Example: Two-Way Cluster Robust DML In a first part, we show how the two-way cluster robust double machine learning (DML) ([Chiang et al. 2021](https://doi.org/10.1080/07350015.2021.1895815)) can be implemented with the [DoubleML](https://docs.doubleml.org/stable/index.html) package. [Chiang et al. (2021)](https://doi.org/10.1080/07350015.2021.1895815) consider double-indexed data \begin{equation} \lbrace W_{ij}: i \in \lbrace 1, \ldots, N \rbrace, j \in \lbrace 1, \ldots, M \rbrace \rbrace \end{equation} and the partially linear IV regression model (PLIV) $$\begin{aligned} Y_{ij} = D_{ij} \theta_0 + g_0(X_{ij}) + \epsilon_{ij}, & &\mathbb{E}(\epsilon_{ij} | X_{ij}, Z_{ij}) = 0, \\ Z_{ij} = m_0(X_{ij}) + v_{ij}, & &\mathbb{E}(v_{ij} | X_{ij}) = 0. \end{aligned}$$ ### Simulate two-way cluster data We use the PLIV data generating process described in Section 4.1 of [Chiang et al. (2021)](https://doi.org/10.1080/07350015.2021.1895815). The DGP is defined as $$\begin{aligned} Z_{ij} &= X_{ij}' \xi_0 + V_{ij}, \\ D_{ij} &= Z_{ij}' \pi_{10} + X_{ij}' \pi_{20} + v_{ij}, \\ Y_{ij} &= D_{ij} \theta + X_{ij}' \zeta_0 + \varepsilon_{ij}, \end{aligned}$$ with $$\begin{aligned} X_{ij} &= (1 - \omega_1^X - \omega_2^X) \alpha_{ij}^X + \omega_1^X \alpha_{i}^X + \omega_2^X \alpha_{j}^X, \\ \varepsilon_{ij} &= (1 - \omega_1^\varepsilon - \omega_2^\varepsilon) \alpha_{ij}^\varepsilon + \omega_1^\varepsilon \alpha_{i}^\varepsilon + \omega_2^\varepsilon \alpha_{j}^\varepsilon, \\ v_{ij} &= (1 - \omega_1^v - \omega_2^v) \alpha_{ij}^v + \omega_1^v \alpha_{i}^v + \omega_2^v \alpha_{j}^v, \\ V_{ij} &= (1 - \omega_1^V - \omega_2^V) \alpha_{ij}^V + \omega_1^V \alpha_{i}^V + \omega_2^V \alpha_{j}^V, \end{aligned}$$ and $\alpha_{ij}^X, \alpha_{i}^X, \alpha_{j}^X \sim \mathcal{N}(0, \Sigma)$ where $\Sigma$ is a $p_x \times p_x$ matrix with entries $\Sigma_{kj} = s_X^{|j-k|}$. Further $$\begin{aligned} \left(\begin{matrix} \alpha_{ij}^\varepsilon \\ \alpha_{ij}^v \end{matrix}\right), \left(\begin{matrix} \alpha_{i}^\varepsilon \\ \alpha_{i}^v \end{matrix}\right), \left(\begin{matrix} \alpha_{j}^\varepsilon \\ \alpha_{j}^v \end{matrix}\right) \sim \mathcal{N}\left(0, \left(\begin{matrix} 1 & s_{\varepsilon v} \\ s_{\varepsilon v} & 1 \end{matrix} \right) \right) \end{aligned}$$ and $\alpha_{ij}^V, \alpha_{i}^V, \alpha_{j}^V \sim \mathcal{N}(0, 1)$. Data from this DGP can be generated with the [make_pliv_multiway_cluster_CKMS2021()](https://docs.doubleml.org/stable/api/generated/doubleml.datasets.make_pliv_multiway_cluster_CKMS2021.html#doubleml.datasets.make_pliv_multiway_cluster_CKMS2021) function from [DoubleML](https://docs.doubleml.org/stable/index.html). 
```python # Set the simulation parameters N = 25 # number of observations (first dimension) M = 25 # number of observations (second dimension) dim_X = 100 # dimension of X np.random.seed(3141) # set seed obj_dml_data = make_pliv_multiway_cluster_CKMS2021(N, M, dim_X) ``` ### Data-Backend for Cluster Data The implementation of cluster robust double machine learning is based on a special data-backend called [DoubleMLClusterData](https://docs.doubleml.org/stable/api/generated/doubleml.DoubleMLClusterData.html#doubleml.DoubleMLClusterData). As compared to the standard data-backend [DoubleMLData](https://docs.doubleml.org/dev/api/generated/doubleml.DoubleMLData.html), users can specify the clustering variables during instantiation of a [DoubleMLClusterData](https://docs.doubleml.org/stable/api/generated/doubleml.DoubleMLClusterData.html#doubleml.DoubleMLClusterData) object. The estimation framework will subsequently account for the provided clustering options. ```python # The simulated data is of type DoubleMLClusterData print(obj_dml_data) ``` ```python # The cluster variables are part of the DataFrame obj_dml_data.data.head() ``` ### Initialize the objects of class `DoubleMLPLIV` ```python # Set machine learning methods for m, g & r learner = LassoCV() ml_g = clone(learner) ml_m = clone(learner) ml_r = clone(learner) # initialize the DoubleMLPLIV object dml_pliv_obj = DoubleMLPLIV(obj_dml_data, ml_g, ml_m, ml_r, n_folds=3) ``` ```python print(dml_pliv_obj) ``` ### Define Helper Functions for Plotting ```python #discrete color scheme x = sns.color_palette("RdBu_r", 7) cMap = ListedColormap([x[0], x[3], x[6]]) plt.rcParams['figure.figsize'] = 15, 12 sns.set(font_scale=1.3) ``` ```python def plt_smpls(smpls, n_folds_per_cluster): df = pd.DataFrame(np.zeros([N*M, n_folds_per_cluster*n_folds_per_cluster])) for i_split, this_split_ind in enumerate(smpls): df.loc[this_split_ind[0], i_split] = -1. df.loc[this_split_ind[1], i_split] = 1. ax = sns.heatmap(df, cmap=cMap); ax.invert_yaxis(); ax.set_ylim([0, N*M]); ax.set_xlabel('Fold') ax.set_ylabel('Observation') colorbar = ax.collections[0].colorbar colorbar.set_ticks([-0.667, 0, 0.667]) colorbar.set_ticklabels(['Nuisance', '', 'Score']) ``` ```python def plt_smpls_cluster(smpls_cluster, n_folds_per_cluster): for i_split in range(len(smpls_cluster)): plt.subplot(n_folds_per_cluster, n_folds_per_cluster, i_split + 1) df = pd.DataFrame(np.zeros([N*M, 1]), index = pd.MultiIndex.from_product([range(N), range(M)]), columns=['value']) df.loc[pd.MultiIndex.from_product(smpls_cluster[i_split][0]), :] = -1. df.loc[pd.MultiIndex.from_product(smpls_cluster[i_split][1]), :] = 1. df_wide = df.reset_index().pivot(index="level_0", columns="level_1", values="value") df_wide.index.name='' df_wide.columns.name='' ax = sns.heatmap(df_wide, cmap=cMap); ax.invert_yaxis(); ax.set_ylim([0, M]); colorbar = ax.collections[0].colorbar colorbar.set_ticks([-0.667, 0, 0.667]) l = i_split % n_folds_per_cluster + 1 k = np.floor_divide(i_split, n_folds_per_cluster) + 1 title = f'Nuisance: $I_{{{k}}}^C \\times J_{{{l}}}^C$; Score: $I_{{{k}}} \\times J_{{{l}}}$' ax.set_title(title) if l == n_folds_per_cluster: colorbar.set_ticklabels(['Nuisance', '', 'Score']) else: colorbar.set_ticklabels(['', '', '']) if l == 1: ax.set_ylabel('First Cluster Variable $k$') if k == 3: ax.set_xlabel('Second Cluster Variable $\ell$') plt.tight_layout() ``` ### Cluster Robust Cross Fitting A key element of cluster robust DML ([Chiang et al. 
2021](https://doi.org/10.1080/07350015.2021.1895815)) is a special sample splitting used for the cross-fitting. In case of two-way clustering, we assume $N$ clusters in the first dimension and $M$ clusters in the second dimension. For $K$-fold cross-fitting, [Chiang et al. (2021)](https://doi.org/10.1080/07350015.2021.1895815) proposed to randomly partition $[N]:=\{1,\ldots,N\}$ into $K$ subsets $\{I_1, \ldots, I_K\}$ and $[M]:=\{1,\ldots,N\}$ into $K$ subsets $\{J_1, \ldots, J_K\}$. Effectively, one then considers $K^2$ folds. Basically for each $(k, \ell) \in \{1, \ldots, K\} \times \{1, \ldots, K\}$, the nuisance functions are estimated for all double-indexed observations in $([N]\setminus I_K) \times ([M]\setminus J_\ell)$, i.e., $$ \hat{\eta}_{k\ell} = \hat{\eta}\left((W_{ij})_{(i,j)\in ([N]\setminus I_K) \times ([M]\setminus J_\ell)}\right) $$ The causal parameter is then estimated as usual by solving a moment condition with a Neyman orthogonal score function. For two-way cluster robust double machine learning with algorithm [DML2](https://docs.doubleml.org/stable/guide/algorithms.html#algorithm-dml2) this results in solving $$ \frac{1}{K^2} \sum_{k=1}^{K} \sum_{\ell=1}^{K} \frac{1}{|I_k| |J_\ell|} \sum_{(i,j) \in I_K \times J_\ell} \psi(W_{ij}, \tilde{\theta}_0, \hat{\eta}_{k\ell}) = 0 $$ for $\tilde{\theta}_0$. Here $|I_k|$ denotes the cardinality, i.e., the number of clusters in the $k$-th fold for the first cluster variable. We can visualize the sample splitting of the $N \cdot M = 625$ observations into $K \cdot K = 9$ folds. The following heat map illustrates the partitioned data set that is split into $K=9$ folds. The horizontal axis corresponds to the fold indices and the vertical axis to the indices of the observations. A blue field indicates that the observation $i$ is used for fitting the nuisance part, red indicates that the fold is used for prediction generation and white means that an observation is left out from the sample splitting. For example, the first observation as displayed on the very bottom of the figure (using Python indexing starting at `0`) is used for training of the nuisance parts in the first (`0`), the third (`2`), fourth (`3`) and sixth (`5`) fold and used for generation of the predictions in fold eight (`7`). At the same time the observation is left out from the sample splitting procedure in folds two (`1`), five (`4`), seven (`6`) and nine (`8`). ```python plt_smpls(dml_pliv_obj.smpls[0], dml_pliv_obj._n_folds_per_cluster) ``` If we visualize the sample splitting in terms of the cluster variables, the partitioning of the data into $9$ folds $I_k \times J_\ell$ becomes clear. The identifiers for the first cluster variable $[N]:=\{1,\ldots,N\}$ have been randomly partioned into $K=3$ folds denoted by $\{I_1, I_2, I_3\}$ and the identifiers for the second cluster variable $[M]:=\{1,\ldots,M\}$ have also been randomly partioned into $K=3$ folds denoted by $\{J_1, J_2, J_3\}$. By considering every combination $I_k \times J_\ell$ for $1 \leq k, \ell \leq K = 3$ we effectively base the cross-fitting on $9$ folds. We now want to focus on the top-left sub-plot showing the partitioning of the cluster data for the first fold. The $x$-axis corresponds to the first cluster variable and the $y$-axis to the second cluster variable. Observations with cluster variables $(i,j) \in I_K \times J_\ell$ are used for estimation of the target parameter $\tilde{\theta}_0$ by solving a Neyman orthogonal score function. 
For estimation of the nuisance function, we only use observation where neither the first cluster variable is in $I_K$ nor the second cluster variable is in $J_\ell$, i.e., we use observations indexed by $(i,j)\in ([N]\setminus I_K) \times ([M]\setminus J_\ell)$ to estimate the nuisance functions $$ \hat{\eta}_{k\ell} = \hat{\eta}\left((W_{ij})_{(i,j)\in ([N]\setminus I_K) \times ([M]\setminus J_\ell)}\right). $$ This way we guarantee that there are never observations from the same cluster (first and/or second cluster dimension) in the sample for the nuisance function estimation (blue) and at the same time in the sample for solving the score function (red). As a result of this special sample splitting proposed by [Chiang et al. (2021)](https://doi.org/10.1080/07350015.2021.1895815), the observations in the score (red) and nuisance (blue) sample can be considered independent and the standard cross-fitting approach for double machine learning can be applied. ```python plt_smpls_cluster(dml_pliv_obj.smpls_cluster[0], dml_pliv_obj._n_folds_per_cluster) ``` ### Cluster Robust Standard Errors In the abstract base class `DoubleML` the estimation of cluster robust standard errors is implemented for all supported double machine learning models. It is based on the assumption of a linear Neyman orthogonal score function. We use the notation $n \wedge m := \min\{n,m\}$. For the the asymptotic variance of $\sqrt{\underline{C}}(\tilde{\theta_0} - \theta_0)$ with $\underline{C} := N \wedge M$ [Chiang et al. (2021)](https://doi.org/10.1080/07350015.2021.1895815) then propose the following estimator $$ \hat{\sigma}^2 = \hat{J}^{-1} \hat{\Gamma} \hat{J}^{-1} $$ where $$ \begin{aligned} \hat{\Gamma} = \frac{1}{K^2} \sum_{(k, \ell) \in[K]^2} \Bigg[ \frac{|I_k| \wedge |J_\ell|}{(|I_k||J_\ell|)^2} \bigg(&\sum_{i \in I_k} \sum_{j \in J_\ell} \sum_{j' \in J_\ell} \psi(W_{ij}; \tilde{\theta}, \hat{\eta}_{k \ell}) \psi(W_{ij'}; \tilde{\theta}_0, \hat{\eta}_{k \ell}) \\ &+ \sum_{i \in I_k} \sum_{i' \in I_k} \sum_{j \in J_\ell} \psi(W_{ij}; \tilde{\theta}, \hat{\eta}_{k \ell}) \psi(W_{i'j}; \tilde{\theta}_0, \hat{\eta}_{k \ell}) \bigg) \Bigg] \end{aligned}$$ and $$ \begin{aligned} \hat{J} = \frac{1}{K^2} \sum_{(k, \ell) \in[K]^2} \frac{1}{|I_k||J_\ell|} \sum_{i \in I_k} \sum_{j \in J_\ell} \psi_a(W_{ij}; \tilde{\theta}_0, \hat{\eta}_{k \ell}). \end{aligned} $$ A $(1-\alpha)$ confidence interval is then given by ([Chiang et al. 2021](https://doi.org/10.1080/07350015.2021.1895815)) $$\begin{aligned} \left[ \tilde{\theta} \pm \Phi^{-1}(1-\alpha/2) \sqrt{\hat{\sigma}^2 / \underline{C}} \right] \end{aligned} $$ with $\underline{C} = N \wedge M$. ```python # Estimate the PLIV model with cluster robust double machine learning dml_pliv_obj.fit() dml_pliv_obj.summary ``` ## (One-Way) Cluster Robust Double Machine Learing We again use the PLIV data generating process described in Section 4.1 of [Chiang et al. (2021)](https://doi.org/10.1080/07350015.2021.1895815). To obtain one-way clustered data, we set the following weights to zero $$ \omega_2^X = \omega_2^\varepsilon = \omega_2^v = \omega_2^V = 0. $$ Again we can simulate this data with [make_pliv_multiway_cluster_CKMS2021()](https://docs.doubleml.org/stable/api/generated/doubleml.datasets.make_pliv_multiway_cluster_CKMS2021.html#doubleml.datasets.make_pliv_multiway_cluster_CKMS2021). To prepare the data-backend for one-way clustering, we only have to alter the `cluster_cols` to be `'cluster_var_i'`. 
```python obj_dml_data = make_pliv_multiway_cluster_CKMS2021(N, M, dim_X, omega_X=np.array([0.25, 0]), omega_epsilon=np.array([0.25, 0]), omega_v=np.array([0.25, 0]), omega_V=np.array([0.25, 0])) ``` ```python obj_dml_data.cluster_cols = 'cluster_var_i' print(obj_dml_data) ``` ```python # Set machine learning methods for m & g learner = LassoCV() ml_g = clone(learner) ml_m = clone(learner) ml_r = clone(learner) # initialize the DoubleMLPLIV object dml_pliv_obj = DoubleMLPLIV(obj_dml_data, ml_g, ml_m, ml_r, n_folds=3) ``` ```python dml_pliv_obj.fit() dml_pliv_obj.summary ``` ## Real-Data Application As a real-data application we revist the consumer demand example from [Chiang et al. (2021)](https://doi.org/10.1080/07350015.2021.1895815). The U.S. automobile data of [Berry, Levinsohn, and Pakes (1995)](https://doi.org/10.2307/2171802) is obtained from the `R` package [hdm](https://cran.r-project.org/web/packages/hdm/index.html). In this example, we consider different specifications for the cluster dimensions. ### Load and Process Data ```python from sklearn.preprocessing import PolynomialFeatures from rpy2.robjects.packages import importr, data from rpy2.robjects import r, pandas2ri pandas2ri.activate() ``` ```python hdm = importr('hdm') blp_data = data(hdm).fetch('BLP')['BLP'][0] ``` ```python x_cols = ['hpwt', 'air', 'mpd', 'space'] blp_data ``` ```python def construct_iv(blp_data, x_cols=['hpwt', 'air', 'mpd', 'space']): n = blp_data.shape[0] p = len(x_cols) firmid = blp_data['firm.id'].values cdid = blp_data['cdid'].values id_var = blp_data['id'].values X = blp_data[x_cols] sum_other = pd.DataFrame(columns=['sum.other.' + var for var in x_cols], index=blp_data.index) sum_rival = pd.DataFrame(columns=['sum.rival.' + var for var in x_cols], index=blp_data.index) for i in range(n): other_ind = (firmid == firmid[i]) & (cdid == cdid[i]) & (id_var != id_var[i]) rival_ind = (firmid != firmid[i]) & (cdid == cdid[i]) for j in range(p): sum_other.iloc[i, j] = X.iloc[:,j][other_ind].sum() sum_rival.iloc[i, j] = X.iloc[:,j][rival_ind].sum() return pd.concat((sum_other, sum_rival), axis=1) ``` ```python iv_vars = construct_iv(blp_data, x_cols=['hpwt', 'air', 'mpd', 'space']) ``` ```python poly = PolynomialFeatures(degree=3, include_bias=False) data_transf = poly.fit_transform(blp_data[x_cols]) x_cols_poly = poly.get_feature_names(x_cols) data_transf = pd.DataFrame(data_transf, columns=x_cols_poly) data_transf.index = blp_data.index ``` ```python sel_cols_chiang = list(np.setdiff1d(data_transf.columns, ['hpwt air mpd', 'hpwt air space', 'hpwt mpd space', 'air mpd space'])) sel_cols_chiang ``` ```python blp_data['log_p'] = np.log(blp_data['price'] + 11.761) y_col = 'y' d_col = 'log_p' cluster_cols = ['model.id', 'cdid'] all_z_cols = ['sum.other.hpwt', 'sum.other.mpd', 'sum.other.space'] z_col = all_z_cols[0] ``` ```python dml_df = pd.concat((blp_data[[y_col] + [d_col] + cluster_cols], data_transf[sel_cols_chiang], iv_vars[all_z_cols]), axis=1) ``` ```python dml_df.shape ``` ### Initialize `DoubleMLClusterData` object ```python dml_data = DoubleMLClusterData(dml_df, y_col=y_col, d_cols=d_col, z_cols=z_col, cluster_cols=cluster_cols, x_cols=sel_cols_chiang) ``` ```python print(dml_data) ``` ```python learner = LassoCV(max_iter=50000) ``` ```python res_df = pd.DataFrame() n_rep = 10 ``` ### Two-Way Clustering with Respect to Product and Market ```python dml_data.z_cols = z_col dml_data.cluster_cols = ['model.id', 'cdid'] dml_pliv = DoubleMLPLIV(dml_data, clone(learner), clone(learner), clone(learner), 
n_folds=2, n_rep=n_rep) dml_pliv.fit() res = dml_pliv.summary.reset_index(drop=True) res['z_col'] = dml_data.z_cols[0] res['clustering'] = 'two-way' res_df = res_df.append(res) ``` ### One-Way Clustering with Respect to the Product ```python dml_data.z_cols = z_col dml_data.cluster_cols = 'model.id' dml_pliv = DoubleMLPLIV(dml_data, clone(learner), clone(learner), clone(learner), n_folds=4, n_rep=n_rep) dml_pliv.fit() res = dml_pliv.summary.reset_index(drop=True) res['z_col'] = dml_data.z_cols[0] res['clustering'] = 'one-way-product' res_df = res_df.append(res) ``` ### One-Way Clustering with Respect to the Market ```python dml_data.z_cols = z_col dml_data.cluster_cols = 'cdid' dml_pliv = DoubleMLPLIV(dml_data, clone(learner), clone(learner), clone(learner), n_folds=4, n_rep=n_rep) dml_pliv.fit() res = dml_pliv.summary.reset_index(drop=True) res['z_col'] = dml_data.z_cols[0] res['clustering'] = 'one-way-market' res_df = res_df.append(res) ``` ### No Clustering / Zero-Way Clustering ```python dml_data = DoubleMLData(dml_df, y_col=y_col, d_cols=d_col, z_cols=z_col, x_cols=sel_cols_chiang) ``` ```python print(dml_data) ``` ```python dml_data.z_cols = z_col dml_pliv = DoubleMLPLIV(dml_data, clone(learner), clone(learner), clone(learner), n_folds=4, n_rep=n_rep) dml_pliv.fit() res = dml_pliv.summary.reset_index(drop=True) res['z_col'] = dml_data.z_cols[0] res['clustering'] = 'zero-way' res_df = res_df.append(res) ``` ### Application Results ```python res_df ``` ## References Berry, S., Levinsohn, J., and Pakes, A. (1995), Automobile Prices in Market Equilibrium, Econometrica: Journal of the Econometric Society, 63, 841-890, doi: [10.2307/2171802](https://doi.org/10.2307/2171802). Cameron, A. C., Gelbach, J. B. and Miller, D. L. (2011), Robust Inference with Multiway Clustering, Journal of Business & Economic Statistics, 29:2, 238-249, doi: [10.1198/jbes.2010.07136](https://doi.org/10.1198/jbes.2010.07136). Chernozhukov, V., Chetverikov, D., Demirer, M., Duflo, E., Hansen, C., Newey, W. and Robins, J. (2018), Double/debiased machine learning for treatment and structural parameters. The Econometrics Journal, 21: C1-C68, doi: [10.1111/ectj.12097](https://doi.org/10.1111/ectj.12097). Chiang, H. D., Kato K., Ma, Y. and Sasaki, Y. (2021), Multiway Cluster Robust Double/Debiased Machine Learning, Journal of Business & Economic Statistics, doi: [10.1080/07350015.2021.1895815](https://doi.org/10.1080/07350015.2021.1895815), arXiv: [1909.03489](https://arxiv.org/abs/1909.03489).
6acb2af5e4304b9b44b70aa0e3f6dd629119057e
35,793
ipynb
Jupyter Notebook
doc/examples/py_double_ml_multiway_cluster.ipynb
FrederikBornemann/doubleml-docs
193f9625e85d3de4b65e6480b92f7c820a7a23e5
[ "MIT" ]
null
null
null
doc/examples/py_double_ml_multiway_cluster.ipynb
FrederikBornemann/doubleml-docs
193f9625e85d3de4b65e6480b92f7c820a7a23e5
[ "MIT" ]
null
null
null
doc/examples/py_double_ml_multiway_cluster.ipynb
FrederikBornemann/doubleml-docs
193f9625e85d3de4b65e6480b92f7c820a7a23e5
[ "MIT" ]
null
null
null
37.479581
774
0.578465
true
7,236
Qwen/Qwen-72B
1. YES 2. YES
0.812867
0.727975
0.591747
__label__eng_Latn
0.648933
0.213158
# Transformadores lineales

__UNLZ - Facultad de Ingeniería__
__Electrotecnia__

__Student:__ Daniel Antonio Lorenzo

<a href="https://colab.research.google.com/github/daniel-lorenzo/Electrotecnia/blob/master/Transformadores_lineales.ipynb"></a>

<div class="alert-info">A transformer is generally a four-terminal device comprising two (or more) magnetically coupled coils.</div>

## Example 13.4
In the circuit of the figure, calculate the input impedance and the current $I_1$. Take $Z_1 = 60 - j100 \, \Omega$, $Z_2 = 30 + j40 \, \Omega$ and $Z_L = 80 + j60 \, \Omega$.

### Solution

```python
import numpy as np
import cmath
```

```python
# Data:
Z1 = 60 - 100j # Ohm
Z2 = 30 + 40j # Ohm
ZL = 80 + 60j # Ohm
XL1 = 20j # Ohm
XL2 = 40j # Ohm
M = 5j # Ohm
Vs = cmath.rect(50 , np.deg2rad(60) ) # V
w = 1 # rad/s
```

From the equation
$$ Z_{ent} = Z_1 + X_{L1} + \frac{\omega^2 M^2}{Z_2 + X_{L2} + Z_L} $$

```python
Zent = Z1 + XL1 + (w**2 * M**2)/(Z2 + XL2 + ZL)
```

```python
print('Zent = (%.0f < %.2f°) Ohm'%(abs(Zent) , np.rad2deg( cmath.phase(Zent) ) ))
```

Zent = (100 < -53.13°) Ohm

Thus
$$ I_1 = \frac{V}{Z_{ent}} $$

```python
I1 = Vs/Zent
```

```python
print('I1 = (%.2f < %.2f°) A'%(abs(I1) , np.rad2deg( cmath.phase(I1) )))
```

I1 = (0.50 < 113.13°) A

__Simulation in qucs:__

```python
%reset -s -f
```

## Practice problem 13.4
Find the input impedance of the circuit in the figure and the current drawn from the voltage source.

### Solution

```python
import numpy as np
import cmath
```

```python
# Data:
Z1 = 4 # Ohm
XL1 = 8j # Ohm
XL2 = 10j # Ohm
M = 3j # Ohm
Z2 = -6j # Ohm
ZL = 6 + 4j # Ohm
Vs = 20 # V
w = 1 # rad/s
```

From the equation
$$ Z_{ent} = Z_1 + X_{L1} - \frac{\omega^2 M^2}{Z_2 + X_{L2} + Z_L} $$

```python
Zent = Z1 + XL1 - (w**2 * M**2)/(Z2 + XL2 + ZL)
```

```python
print('Zent = (%.2f < %.2f°) Ohm'%(abs(Zent) , np.rad2deg( cmath.phase(Zent) ) ))
```

Zent = (8.58 < 58.05°) Ohm

Thus
$$ I_1 = \frac{V}{Z_{ent}} $$

```python
I1 = Vs/Zent
```

```python
print('I1 = (%.2f < %.2f°) A'%(abs(I1) , np.rad2deg( cmath.phase(I1) )))
```

I1 = (2.33 < -58.05°) A

__Simulation in qucs:__

```python
%reset -s -f
```

## Example 13.5
Determine the equivalent T circuit of the linear transformer in the figure.
(a) linear transformer (b) equivalent T circuit

### Solution
Since $L_1 = 10$, $L_2 = 4$ and $M = 2$, the equivalent network has the following parameters:
$$ L_a = L_1 - M = 10 - 2 = 8 \, \mathrm{H} $$
$$ L_b = L_2 - M = 4 - 2 = 2 \, \mathrm{H} $$
$$ L_c = M = 2 \, \mathrm{H} $$

It has been assumed that the reference directions of the currents and the voltage polarities in the primary and secondary windings match those of the figure. Otherwise, it might be necessary to replace $M$ with $-M$.

## Practice problem 13.5
For the linear transformer of the figure, find the equivalent $\Pi$ network.

### Solution
$$ L_A = \frac{L_1 L_2 - M^2}{L_2 - M} \quad ; \quad L_B = \frac{L_1 L_2 - M^2}{L_1 - M} \quad ; \quad L_C = \frac{L_1 L_2 - M^2}{M} $$

```python
# Data:
L1 = 10 # H
L2 = 4 # H
M = 2 # H
```

```python
LA = (L1*L2 - M**2)/(L2 - M)
LB = (L1*L2 - M**2)/(L1 - M)
LC = (L1*L2 - M**2)/(M)
```

```python
print('LA = %.1f H'%LA)
print('LB = %.1f H'%LB)
print('LC = %.1f H'%LC)
```

LA = 18.0 H
LB = 4.5 H
LC = 18.0 H

## Example 13.6
Determine $I_1$, $I_2$ and $V_0$ in the figure using the equivalent T circuit of the linear transformer.
### Solution
The magnetically coupled coils must be replaced by the equivalent T circuit.

$$\begin{array}{l} L_a = L_1 - (-M) \\ L_b = L_2 - (-M) \\ L_c = -M \end{array}$$

```python
import cmath
import numpy as np
```

```python
# Data:
L1 = 8j # Ohm
L2 = 5j # Ohm
M = 1j # Ohm
R1 = 4 # Ohm
R2 = 10 # Ohm
Vs = cmath.rect(60 , np.deg2rad(90) )
```

```python
La = L1 - (-M)
Lb = L2 - (-M)
Lc = -M
```

```python
print('La = {:.1f} H'.format(La))
print('Lb = {:.1f} H'.format(Lb))
print('Lc = {:.1f} H'.format(Lc))
```

La = 0.0+9.0j H
Lb = 0.0+6.0j H
Lc = -0.0-1.0j H

Thus, the equivalent T circuit of figure (b) replaces the two coils of figure (a).

The equivalent circuit of figure (b) can be solved by applying nodal or mesh analysis. Applying mesh analysis yields

Mesh 1:
$$ j6 = I_1 (4 + j9 - j1) + I_2 (-j1) $$
$$ (4 + 8j) I_1 - (j1)I_2 = j6 \tag{1} $$

Mesh 2:
$$ 0 = I_1 (-j1) + I_2 (10 + j6 - j1) $$
$$ (-j1) I_1 + (10 + j5)I_2 = 0 \tag{2} $$

```python
A = np.array([[4+8j , -1j],[-1j , 10+5j] ])
B = np.array([[6j],[0]])
```

```python
I = np.dot( np.linalg.inv(A) , B )
```

```python
print('I1 = (%.3f < %.2f°) A'%(abs(I[0]) , np.rad2deg( cmath.phase(I[0]) ) ))
print('I2 = (%.3f < %.2f°) A'%(abs(I[1]) , np.rad2deg( cmath.phase(I[1]) ) ))
```

I1 = (0.671 < 27.14°) A
I2 = (0.060 < 90.57°) A

$$ V_0 = -I_2 R_2 $$

```python
I1 = I[0]
I2 = I[1]
Vo = -I2*R2
```

```python
print('Vo = (%.2f < %.2f°) V'%(abs(Vo) , np.rad2deg( cmath.phase(Vo) )))
```

Vo = (0.60 < -89.43°) V

```python
%reset -s -f
```

## Practice problem 13.6
Solve the problem of Example 13.1 using the equivalent T model of the magnetically coupled coils.

### Solution
$$\begin{align} L_a &= L_1 - M \\ L_b &= L_2 - M \\ L_c &= M \end{align}$$

```python
# Data:
L1 = 5j # Ohm
L2 = 6j # Ohm
M = 3j # Ohm
C1 = -4j # Ohm
R1 = 12 # Ohm
Vs = 12 # V
```

```python
La = L1 - M
Lb = L2 - M
Lc = M
```

```python
print('La = {:.2f} Ohm'.format(La))
print('Lb = {:.2f} Ohm'.format(Lb))
print('Lc = {:.2f} Ohm'.format(Lc))
```

La = 0.00+2.00j Ohm
Lb = 0.00+3.00j Ohm
Lc = 0.00+3.00j Ohm

```python
import numpy as np
import cmath
```

```python
A = np.array([[1j,-3j],[-3j,12+6j]])
B = np.array([[12],[0]])
```

```python
I = np.dot( np.linalg.inv(A) , B )
```

```python
print('I1 = (%.2f < %.2f°) A'%(abs(I[0]) , np.rad2deg( cmath.phase(I[0]) ) ))
print('I2 = (%.2f < %.2f°) A'%(abs(I[1]) , np.rad2deg( cmath.phase(I[1]) ) ))
```

I1 = (13.02 < -49.40°) A
I2 = (2.91 < 14.04°) A

__Simulation in qucs:__

<a href="https://colab.research.google.com/github/daniel-lorenzo/Electrotecnia/blob/master/Transformadores_lineales.ipynb"></a>
88b04cba2f5e0ec176d52a3b832a93f20f2c3a61
16,872
ipynb
Jupyter Notebook
Transformadores_lineales.ipynb
daniel-lorenzo/Electrotecnia
c9441dd58c84a635954e85b755bdd2c47b09c589
[ "MIT" ]
1
2021-11-16T16:46:27.000Z
2021-11-16T16:46:27.000Z
Transformadores_lineales.ipynb
daniel-lorenzo/Electrotecnia
c9441dd58c84a635954e85b755bdd2c47b09c589
[ "MIT" ]
null
null
null
Transformadores_lineales.ipynb
daniel-lorenzo/Electrotecnia
c9441dd58c84a635954e85b755bdd2c47b09c589
[ "MIT" ]
1
2022-01-04T00:08:57.000Z
2022-01-04T00:08:57.000Z
19.990521
324
0.463193
true
2,724
Qwen/Qwen-72B
1. YES 2. YES
0.812867
0.640636
0.520752
__label__spa_Latn
0.52459
0.048211
# Step-by-step NMO correction

Devito is equally useful as a framework for other stencil computations in general; for example, computations where all array indices are affine functions of loop variables. The Devito compiler is also capable of generating arbitrarily nested, possibly irregular, loops. This key feature is needed to support many complex algorithms that are used in engineering and scientific practice, including applications from image processing, cellular automata, and machine learning. This tutorial, a step-by-step NMO correction, is an example of such an application.

In reflection seismology, normal moveout (NMO) describes the effect that the distance between a seismic source and a receiver (the offset) has on the arrival time of a reflection, in the form of an increase of time with offset. The relationship between arrival time and offset is hyperbolic.

Based on the field geometry information, each individual trace is assigned to the midpoint between the shot and receiver locations associated with that trace. Those traces with the same midpoint location are grouped together, making up a common midpoint gather (CMP).

Consider a reflection event on a CMP gather. The difference between the two-way time at a given offset and the two-way zero-offset time is called normal moveout (NMO). Reflection traveltimes must be corrected for NMO prior to summing the traces in the CMP gather along the offset axis. The normal moveout depends on the velocity above the reflector, the offset, the two-way zero-offset time associated with the reflection event, the dip of the reflector, the source-receiver azimuth with respect to the true-dip direction, and the degree of complexity of the near-surface and the medium above the reflector.

# Seismic modelling with devito

Before the NMO correction we will describe a setup for seismic modelling with Devito in a simple 2D case. We will create a physical model of our domain and define a single source and a corresponding set of receivers to be used in the forward model. But first, we initialize some basic utilities.

```python
import numpy as np
import sympy as sp

from devito import *
```

We will create a simple velocity model here by hand for demonstration purposes. This model essentially consists of three layers, each with a different velocity: 1.5 km/s in the top layer, 2.5 km/s in the middle layer and 4.5 km/s in the bottom layer.

```python
#NBVAL_IGNORE_OUTPUT
from examples.seismic import Model, plot_velocity

shape = (301, 501)  # Number of grid points (nx, nz)
spacing = (10., 10)  # Grid spacing in m. The domain size is now 3km by 5km
origin = (0., 0)  # What is the location of the top left corner.

# Define a velocity profile. The velocity is in km/s
v = np.empty(shape, dtype=np.float32)
v[:,:100] = 1.5
v[:,100:350] = 2.5
v[:,350:] = 4.5

# With the velocity and model size defined, we can create the seismic model that
# encapsulates these properties. We also define the size of the absorbing layer as 40 grid points
model = Model(vp=v, origin=origin, shape=shape, spacing=spacing,
              space_order=4, nbl=40)

plot_velocity(model)
```

Next we define the positioning and the wave signal of our source, as well as the location of our receivers. To generate the wavelet for our source we require the discretized values of time that we are going to use to model a single "shot", which depends on the grid spacing used in our model. We will use one source and 250 receivers. The source is located at position (400, 20).
The receivers start at the source position and are spread evenly towards the far edge of the model, all at a consistent depth of 20 m.

```python
from examples.seismic import TimeAxis

t0 = 0.     # Simulation starts at t=0
tn = 2400.  # Simulation lasts 2.4 seconds (2400 ms)
dt = model.critical_dt  # Time step from model grid spacing
time_range = TimeAxis(start=t0, stop=tn, step=dt)

nrcv = 250  # Number of Receivers
```

```python
#NBVAL_IGNORE_OUTPUT
from examples.seismic import RickerSource

f0 = 0.010  # Source peak frequency is 10Hz (0.010 kHz)
src = RickerSource(name='src', grid=model.grid, f0=f0, npoint=1, time_range=time_range)

# Define the wavefield with the size of the model and the time dimension
u = TimeFunction(name="u", grid=model.grid, time_order=2, space_order=4)

# We can now write the PDE
pde = model.m * u.dt2 - u.laplace + model.damp * u.dt
stencil = Eq(u.forward, solve(pde, u.forward))

src.coordinates.data[:, 0] = 400   # Source coordinates
src.coordinates.data[:, -1] = 20.  # Depth is 20m
```

```python
#NBVAL_IGNORE_OUTPUT
from examples.seismic import Receiver

rec = Receiver(name='rec', grid=model.grid, npoint=nrcv, time_range=time_range)
rec.coordinates.data[:,0] = np.linspace(src.coordinates.data[0, 0], model.domain_size[0], num=nrcv)
rec.coordinates.data[:,-1] = 20.  # Depth is 20m

# Finally we define the source injection and receiver read function to generate the corresponding code
src_term = src.inject(field=u.forward, expr=src * dt**2 / model.m)

# Create interpolation expression for receivers
rec_term = rec.interpolate(expr=u.forward)

op = Operator([stencil] + src_term + rec_term, subs=model.spacing_map)
op(time=time_range.num-1, dt=model.critical_dt)
```

Operator `Kernel` run in 1.25 s

Since we are modelling horizontal layers, we will group these traces and apply an NMO correction to this set of traces.

```python
offset = []
data = []
for i, coord in enumerate(rec.coordinates.data):
    off = (src.coordinates.data[0, 0] - coord[0])
    offset.append(off)
    data.append(rec.data[:,i])
```

Auxiliary function for plotting traces:

```python
#NBVAL_IGNORE_OUTPUT
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib import cm
from mpl_toolkits.axes_grid1 import make_axes_locatable

mpl.rc('font', size=16)
mpl.rc('figure', figsize=(8, 6))

def plot_traces(rec, xb, xe, t0, tn, colorbar=True):
    scale = np.max(rec)/100
    extent = [xb, xe, 1e-3*tn, t0]
    plot = plt.imshow(rec, cmap=cm.gray, vmin=-scale, vmax=scale, extent=extent)
    plt.xlabel('X position (km)')
    plt.ylabel('Time (s)')

    # Create aligned colorbar on the right
    if colorbar:
        ax = plt.gca()
        divider = make_axes_locatable(ax)
        cax = divider.append_axes("right", size="5%", pad=0.05)
        plt.colorbar(plot, cax=cax)
    plt.show()
```

# Common Midpoint Gather

At this point, we have a dataset composed of the receiver traces. If our model wasn't purely horizontal, we would have to sort these traces by common midpoints prior to NMO correction.

```python
plot_traces(np.transpose(data), rec.coordinates.data[0][0]/1000, rec.coordinates.data[nrcv-1][0]/1000, t0, tn)
```

# NMO Correction

We can correct the measured traveltime of a reflected wave $t$ at a given offset $x$ to obtain the traveltime at normal incidence $t_0$ by applying the following equation:

\begin{equation*} t = \sqrt{t_0^2 + \frac{x^2}{V_{nmo}^2}} \end{equation*}

in which $V_{nmo}$ is the NMO velocity. This equation results from the Pythagorean theorem, and is only valid for horizontal reflectors. There are variants of this equation with different degrees of accuracy, but we'll use this one for simplicity.
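Before the Devito-based implementation below, here is a small plain-NumPy illustration of this traveltime relation (an added sketch; the variable names carry trailing underscores so they do not clash with the notebook's `t0` and `vnmo`):

```python
# Added illustration of the hyperbolic NMO traveltime equation.
import numpy as np

t0_ = 1.0        # two-way zero-offset time (s)
vnmo_ = 1500.0   # NMO velocity (m/s)
offsets = np.array([0., 500., 1000., 2000.])  # offsets (m)

t = np.sqrt(t0_**2 + offsets**2 / vnmo_**2)
print(t - t0_)   # the moveout grows with offset, and is zero at zero offset
```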
For the NMO Correction we use a grid of size samples x traces. ```python ns = time_range.num # Number of samples in each trace grid = Grid(shape=(ns, nrcv)) # Construction of grid with samples X traces dimension ``` In this example we will use a constant velocity guide. The guide will be arranged in a SparseFunction with the number of points equal to number of samples in the traces. ```python vnmo = 1500 vguide = SparseFunction(name='v', grid=grid, npoint=ns) vguide.data[:] = vnmo ``` The computed offset for each trace will be arraged in another SparseFunction with number of points equal to number of traces. ```python off = SparseFunction(name='off', grid=grid, npoint=nrcv) off.data[:] = offset ``` The previous modelled traces will be arranged in a SparseFunction with the same dimensions as the grid. ```python amps = SparseFunction(name='amps', grid=grid, npoint=ns*nrcv, dimensions=grid.dimensions, shape=grid.shape) amps.data[:] = np.transpose(data) ``` Now, we define SparseFunctions with the same dimensions as the grid, describing the NMO traveltime equation. The $t_0$ SparseFunction isn't offset dependent, so the number of points is equal to the number of samples. ```python sample, trace = grid.dimensions t_0 = SparseFunction(name='t0', grid=grid, npoint=ns, dimensions=[sample], shape=[grid.shape[0]]) tt = SparseFunction(name='tt', grid=grid, npoint=ns*nrcv, dimensions=grid.dimensions, shape=grid.shape) snmo = SparseFunction(name='snmo', grid=grid, npoint=ns*nrcv, dimensions=grid.dimensions, shape=grid.shape) s = SparseFunction(name='s', grid=grid, dtype=np.intc, npoint=ns*nrcv, dimensions=grid.dimensions, shape=grid.shape) ``` The Equation relates traveltimes: the one we can measure ($t_0$) and the one we want to know (t). But the data in our CMP gather are actually a matrix of amplitudes measured as a function of time ($t_0$) and offset. Our NMO-corrected gather will also be a matrix of amplitudes as a function of time (t) and offset. So what we really have to do is transform one matrix of amplitudes into the other. With Equations we describe the NMO traveltime equation, and use the Operator to compute the traveltime and the samples for each trace. ```python #NBVAL_IGNORE_OUTPUT dtms = model.critical_dt/1000 # Time discretization in ms E1 = Eq(t_0, sample*dtms) E2 = Eq(tt, sp.sqrt(t_0**2 + (off[trace]**2)/(vguide[sample]**2) )) E3 = Eq(s, sp.floor(tt/dtms)) op1 = Operator([E1, E2, E3]) op1() ``` Operator `Kernel` run in 0.01 s With the computed samples, we remove all that are out of the samples range, and shift the amplitude for the correct sample. ```python #NBVAL_IGNORE_OUTPUT s.data[s.data >= time_range.num] = 0 E4 = Eq(snmo, amps[s[sample, trace], trace]) op2 = Operator([E4]) op2() stack = snmo.data.sum(axis=1) # We can stack traces and create a ZO section!!! plot_traces(snmo.data, rec.coordinates.data[0][0]/1000, rec.coordinates.data[nrcv-1][0]/1000, t0, tn) ``` # References: https://library.seg.org/doi/full/10.1190/tle36020179.1 https://wiki.seg.org/wiki/Normal_moveout https://en.wikipedia.org/wiki/Normal_moveout
de54e47c83696bb1ed1cea319fbcde39d8ab3a86
159,452
ipynb
Jupyter Notebook
examples/seismic/tutorials/10_nmo_correction.ipynb
rhodrin/devito
cd1ae745272eb0315aa1c36038a3174f1817e0d0
[ "MIT" ]
1
2020-06-08T20:44:35.000Z
2020-06-08T20:44:35.000Z
examples/seismic/tutorials/10_nmo_correction.ipynb
rhodrin/devito
cd1ae745272eb0315aa1c36038a3174f1817e0d0
[ "MIT" ]
null
null
null
examples/seismic/tutorials/10_nmo_correction.ipynb
rhodrin/devito
cd1ae745272eb0315aa1c36038a3174f1817e0d0
[ "MIT" ]
1
2021-01-05T07:27:35.000Z
2021-01-05T07:27:35.000Z
314.500986
70,524
0.930644
true
2,677
Qwen/Qwen-72B
1. YES 2. YES
0.868827
0.79053
0.686834
__label__eng_Latn
0.986832
0.434077
# Lecture 03 Multiplication and Inverse Matrices

Today's lecture includes:
1. Matrix Multiplication (4 ways)
2. Inverse of $A, AB, A^{T}$
3. Gauss-Jordan / Find $A^{-1}$

## Matrix Multiplication

### 1.1 First method

We first take a look at matrices of these shapes:

\begin{align} \begin{matrix}-&-&-\\-&-&-\end{matrix} \times \begin{matrix}-&-\\-&-\\-&-\end{matrix} = \begin{matrix}-&-\\-&-\end{matrix} \end{align}

* Row-times-column (inner product) rule: given an $m\times n$ matrix $A$ and an $n\times p$ matrix $B$ (the number of columns of $A$ must equal the number of rows of $B$), the product $AB=C$ is an $m\times p$ matrix. The entry in row $i$, column $j$ of $C$ is:
$$c_{ij}=row_i\cdot column_j=\sum_{k=1}^na_{ik}b_{kj}$$
where $a_{ik}$ is the entry in row $i$, column $k$ of $A$, and $b_{kj}$ is the entry in row $k$, column $j$ of $B$.
In other words, $c_{ij}$ is the dot product of row $i$ of $A$ with column $j$ of $B$.

$\begin{bmatrix}&\vdots&\\&row_i&\\&\vdots&\end{bmatrix}\begin{bmatrix}&&\\\cdots&column_j&\cdots\\&&\end{bmatrix}=\begin{bmatrix}&\vdots&\\\cdots&c_{ij}&\cdots\\&\vdots&\end{bmatrix}$

```python

```
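To make the rule concrete, here is a small NumPy example (an added illustration, not part of the original lecture notes):

```python
# Added example: one entry of C = AB is a row of A dotted with a column of B.
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])        # 2 x 3
B = np.array([[7, 8],
              [9, 10],
              [11, 12]])         # 3 x 2

C = A @ B                        # 2 x 2 product
c_01 = A[0, :] @ B[:, 1]         # dot product of row 0 of A with column 1 of B
print(C)
print(c_01 == C[0, 1])           # True
```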
10bb9ffb467f26b725c5fc09e70327c53f400fe6
1,873
ipynb
Jupyter Notebook
Lecture 03 Multiplication and Inverse Matrices.ipynb
XingxinHE/Linear_Algebra
7d6b78699f8653ece60e07765fd485dd36b26194
[ "MIT" ]
3
2021-04-24T17:23:50.000Z
2021-11-27T11:00:04.000Z
Lecture 03 Multiplication and Inverse Matrices.ipynb
XingxinHE/Linear_Algebra
7d6b78699f8653ece60e07765fd485dd36b26194
[ "MIT" ]
null
null
null
Lecture 03 Multiplication and Inverse Matrices.ipynb
XingxinHE/Linear_Algebra
7d6b78699f8653ece60e07765fd485dd36b26194
[ "MIT" ]
null
null
null
26.757143
258
0.514682
true
468
Qwen/Qwen-72B
1. YES 2. YES
0.924142
0.849971
0.785494
__label__eng_Latn
0.100408
0.663298
<a href="https://colab.research.google.com/github/colbrydi/Lithophane/blob/master/Lithophane_Tutorial.ipynb" target="_parent"></a> # Lithophane Library written by Dirk Colbry [Link to slides](https://docs.google.com/presentation/d/1s_8gcGfFDEHnqS7U-TkC4xp9T49fblb2_EWRpsd-v_I/edit#slide=id.g7d81a7112a_0_68) In this notebook we will use some code that makes lithophanes out of images. Lithophans work by changing the thickness of a material to correspond with how bright or dark a pixel is in an image. Dark pixels are thicker and light pixels are thiner. This notebook describes steps to build a lithophane using python: * [Step 1: Installing numpy-stl](#Installing_numpy-stl) * [Step 2: Reading an image into python](#Reading_an_image_into_python) * [Step 3: Flat Lithophane](#Flat_Lithophane) * [Step 4: Cylinder Lithophane](#Cylinder_Lithophane) --- <a name="Installing_numpy-stl"></a> # Step 1: Installing numpy-stl First thing we need to do is install some functions that I have written to convert an image into an stl file. I have stored these functions in a file named ```lithophane.py``` which should be in the same directory as this notebokk. These functions also use a module called [numpy-stl](https://pypi.org/project/numpy-stl/) which can be installed using pip. &#9989; **<font color=red>DO THIS:</font>** Uncomment the following command (Delete the #) and run the cell using "shift-enter". your own image and change the following code to display your image. Show the instructor when you get it working. ```python !pip install numpy-stl ``` ```python import urllib imageurl = 'https://insideindiana.images.worldnow.com/images/9914370_G.jpg?auto=webp&disable=upscale&height=560&fit=bounds' urllib.request.urlretrieve('https://raw.githubusercontent.com/colbrydi/Lithophane/master/lithophane.py', 'lithophane.py') urllib.request.urlretrieve(imageurl, 'image.jpg') ``` ```python import lithophane as li ``` **Note** The above import may give a warning. This can be ignored for now. --- <a name="Reading_an_image_into_python"></a> # Step 2: Reading image data into python There are many python libraries that can read an image. In this example we will use a very common one called ```matplotlib```. Run the following code by clicking on the cell and hitting "Shift and Enter" at the same time. &#9989; **<font color=red>DO THIS:</font>** Upload your own image and change the following code to display your image. UShow the instructor when you get it working. ```python %matplotlib inline import matplotlib.pylab as plt import matplotlib.image as img imagefile = './image.jpg' im = img.imread(imagefile) plt.imshow(im); ``` ```python from IPython.display import YouTubeVideo YouTubeVideo("15aqFQQVBWU",width=640,height=360) ``` ```python %matplotlib inline import matplotlib.pylab as plt from ipywidgets import interact, fixed import numpy as np def showcolor(red,green,blue): plt.scatter(1,1, color=(red/255,green/255,blue/255), s=10000 ); plt.axis('off'); plt.show(); interact(showcolor, red=(0,255), green=(0,255), blue=(0,255)); ``` We use the following fomula to convert the colors to a grayscale value. An average would work but these look more "realistic": $$gray = 0.2989r + 0.5870g + 0.1140b$$ ```python gray = li.rgb2gray(im) plt.imshow(gray, cmap='gray'); plt. colorbar() ``` --- <a name="Flat_Lithophane"></a> # Step 3: Generate Flat Lithophane First we will start by creating a "point cloud" of three matrixes. 
The following function takes in a python image object (the one we created above is called ```im```) and returns the point cloud scaled to a width in millimeters. The aspect ratio of the image will be maintained.

$$z = h\left(1 - \frac{p}{255}\right) + d$$

$$
\begin{align}
z &- \text{depth value for each pixel} \\
h &- \text{height of the lithophane (thickness)} \\
p &- \text{pixel value (0-255)} \\
d &- \text{default depth}
\end{align}
$$

```python
# Generate x,y and z values for each pixel
width = 102  # Width in mm

x,y,z = li.jpg2stl(gray, width=width, h=3, d=0.6, show=False)
```

```python
plt.figure(figsize=(25,5))

plt.subplot(1,3,1)
plt.imshow(x)
plt.axis('off')
plt.title('x distances (mm)')
plt.colorbar()

plt.subplot(1,3,2)
plt.imshow(y)
plt.axis('off')
plt.title('y distances (mm)')
plt.colorbar()

plt.subplot(1,3,3)
plt.imshow(z)
plt.axis('off');
plt.title('z distances (mm)');
plt.colorbar()
```

```python
# plot a cross section to try and visualize the data
plt.plot(x[550,0:100],z[550,0:100])
plt.axis('equal');
plt.ylabel('Lithophane Depth (z direction mm)')
plt.title('Lithophane cross-section');
```

The following takes our 3D points, creates a mesh model and saves the model as an STL file. A model is just a list of points (x,y,z) and a list of triangles, which are just lists of points.

```python
model = li.makemesh(x,y,z);
filename=imagefile[:-4] + '_Flat.stl'
model.save(filename)
```

We can use the following function to visualize the stl file (note the z axis is _**NOT**_ to scale):

```python
# note z axis is not same scale as x and y axes.
li.showstl(x,y,z)
```

---
<a name="Cylinder_Lithophane"></a>
# Step 4: Cylinder Lithophane

Since we understand the math, there is nothing that requires us to make lithophanes flat. Consider the following example:

```python
from IPython.display import YouTubeVideo
YouTubeVideo("4I3ItcZOAjM",height=320, cc_load_policy=True)
```

To make lithophanes cylindrical we just need to modify the x, y and z values. The following code wraps the x and z axes around the y axis and maintains the pixel depth described above.

```python
cx,cy,cz = li.makeCylinder(x,y,z)
li.showstl(cx,cy,cz)
```

We can save the new file using the same ```makemesh``` function from above.

```python
model = li.makemesh(cx,cy,cz);
filename=imagefile[:-4] + '_Cylinder.stl'
model.save(filename)
```

Here is a time-lapse of a printed cylindrical stl file.

```python
from IPython.display import YouTubeVideo
YouTubeVideo("-h8pF6psdp4",height=320, cc_load_policy=True)
```

&#169; Copyright 2019, Dirk Colbry
9494eb2d518ebe1ffe94ea0b4ef0ac72bb350e28
30,772
ipynb
Jupyter Notebook
Lithophane_Tutorial.ipynb
jaejamespark/Lithophane
628e8918483f402ecfb42fc18747fad6bb2e69ed
[ "MIT" ]
12
2020-02-28T09:42:36.000Z
2021-11-11T21:26:34.000Z
Lithophane_Tutorial.ipynb
jaejamespark/Lithophane
628e8918483f402ecfb42fc18747fad6bb2e69ed
[ "MIT" ]
1
2021-02-08T16:01:16.000Z
2021-02-08T16:01:16.000Z
Lithophane_Tutorial.ipynb
jaejamespark/Lithophane
628e8918483f402ecfb42fc18747fad6bb2e69ed
[ "MIT" ]
8
2020-08-08T06:03:17.000Z
2022-02-01T10:04:10.000Z
31.658436
4,130
0.625699
true
1,715
Qwen/Qwen-72B
1. YES 2. YES
0.907312
0.812867
0.737524
__label__eng_Latn
0.952921
0.551848
# Generating C Code to implement Method of Lines Timestepping for Explicit Runge Kutta Methods ## Authors: Zach Etienne & Brandon Clark ## This tutorial notebook generates three blocks of C Code in order to perform Method of Lines timestepping. **Notebook Status:** <font color='green'><b> Validated </b></font> **Validation Notes:** This tutorial notebook has been confirmed to be self-consistent with its corresponding NRPy+ module, as documented [below](#code_validation). All Runge-Kutta (RK) Butcher tables were validated using truncated Taylor series in [a separate module](Tutorial-RK_Butcher_Table_Validation.ipynb). Finally, C-code implementation of RK4 was validated against a trusted version. C-code implementations of other RK methods seem to work as expected in the context of solving the scalar wave equation in Cartesian coordinates. ### NRPy+ Source Code for this module: * [MoLtimestepping/C_Code_Generation.py](../edit/MoLtimestepping/C_Code_Generation.py) * [MoLtimestepping/RK_Butcher_Table_Dictionary.py](../edit/MoLtimestepping/RK_Butcher_Table_Dictionary.py) ([**Tutorial**](Tutorial-RK_Butcher_Table_Dictionary.ipynb)) Stores the Butcher tables for the explicit Runge Kutta methods ## Introduction: When numerically solving a partial differential equation initial-value problem, subject to suitable boundary conditions, we implement Method of Lines to "integrate" the solution forward in time. ### The Method of Lines: Once we have the initial data for a PDE, we "evolve it forward in time", using the [Method of Lines](https://reference.wolfram.com/language/tutorial/NDSolveMethodOfLines.html). In short, the Method of Lines enables us to handle 1. the **spatial derivatives** of an initial value problem PDE using **standard finite difference approaches**, and 2. the **temporal derivatives** of an initial value problem PDE using **standard strategies for solving ordinary differential equations (ODEs), like Runge Kutta methods** so long as the initial value problem PDE can be written in the first-order-in-time form $$\partial_t \vec{f} = \mathbf{M}\ \vec{f},$$ where $\mathbf{M}$ is an $N\times N$ matrix containing only *spatial* differential operators that act on the $N$-element column vector $\vec{f}$. $\mathbf{M}$ may not contain $t$ or time derivatives explicitly; only *spatial* partial derivatives are allowed to appear inside $\mathbf{M}$. You may find the next module [Tutorial-ScalarWave](Tutorial-ScalarWave.ipynb) extremely helpful as an example for implementing the Method of Lines for solving the Scalar Wave equation in Cartesian coordinates. ### Generating the C code: This module describes how core C functions are generated to implement Method of Lines timestepping for a specified RK method. There are three core functions: 1. Allocate memory for gridfunctions. 1. Step forward the solution one full timestep. 1. Free memory for gridfunctions. The first function is called first, then the second function is repeated within a loop to a fixed "final" time (such that the end state of each iteration is the initial state for the next iteration), and the third function is called at the end of the calculation. The generated codes are essential for a number of Start-to-Finish example tutorial notebooks that demonstrate how to numerically solve hyperbolic PDEs. <a id='toc'></a> # Table of Contents $$\label{toc}$$ This notebook is organized as follows 1. [Step 1](#initializenrpy): Initialize needed Python/NRPy+ modules 1. [Step 2](#diagonal): Checking if Butcher Table is Diagonal 1. 
[Step 3](#ccode): Generating the C Code 1. [Step 3.a](#generategfnames): `generate_gridfunction_names()`: Uniquely and descriptively assign names to sets of gridfunctions 1. [Step 3.b](#alloc): Memory allocation: `MoL_malloc_y_n_gfs()` and `MoL_malloc_non_y_n_gfs()` 1. [Step 3.c](#molstep): Take one Method of Lines time step: `MoL_step_forward_in_time()` 1. [Step 3.d](#free): Memory deallocation: `MoL_free_memory()` 1. [Step 3.e](#nrpybasicdefines): Define & register `MoL_gridfunctions_struct` in `NRPy_basic_defines.h`: `NRPy_basic_defines_MoL_timestepping_struct()` 1. [Step 3.f](#setupall): Add all MoL C codes to C function dictionary, and add MoL definitions to `NRPy_basic_defines.h`: `register_C_functions_and_NRPy_basic_defines()` 1. [Step 4](#code_validation): Code Validation against `MoLtimestepping.RK_Butcher_Table_Generating_C_Code` NRPy+ module 1. [Step 5](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file <a id='initializenrpy'></a> # Step 1: Initialize needed Python/NRPy+ modules [Back to [top](#toc)\] $$\label{initializenrpy}$$ Let's start by importing all the needed modules from Python/NRPy+: ```python import sympy as sp # Import SymPy, a computer algebra system written entirely in Python import os, sys # Standard Python modules for multiplatform OS-level functions from MoLtimestepping.RK_Butcher_Table_Dictionary import Butcher_dict from outputC import add_to_Cfunction_dict, indent_Ccode, outC_NRPy_basic_defines_h_dict # NRPy+: Basic C code output functionality ``` <a id='diagonal'></a> # Step 2: Checking if a Butcher table is Diagonal [Back to [top](#toc)\] $$\label{diagonal}$$ A diagonal Butcher table takes the form $$\begin{array}{c|cccccc} 0 & \\ a_1 & a_1 & \\ a_2 & 0 & a_2 & \\ a_3 & 0 & 0 & a_3 & \\ \vdots & \vdots & \ddots & \ddots & \ddots \\ a_s & 0 & 0 & 0 & \cdots & a_s \\ \hline & b_1 & b_2 & b_3 & \cdots & b_{s-1} & b_s \end{array}$$ where $s$ is the number of required predictor-corrector steps for a given RK method (see [Butcher, John C. (2008)](https://onlinelibrary.wiley.com/doi/book/10.1002/9780470753767)). One known diagonal RK method is the classic RK4 represented in Butcher table form as: $$\begin{array}{c|cccc} 0 & \\ 1/2 & 1/2 & \\ 1/2 & 0 & 1/2 & \\ 1 & 0 & 0 & 1 & \\ \hline & 1/6 & 1/3 & 1/3 & 1/6 \end{array} $$ Diagonal Butcher tables are nice when it comes to saving required memory space. Each new step for a diagonal RK method, when computing the new $k_i$, does not depend on the previous calculation, and so there are ways to save memory. Significantly so in large three-dimensional spatial grid spaces. ```python # Check if Butcher Table is diagonal def diagonal(key): Butcher = Butcher_dict[key][0] L = len(Butcher)-1 # Establish the number of rows to check for diagonal trait, all bust last row row_idx = 0 # Initialize the Butcher table row index for i in range(L): # Check all the desired rows for j in range(1,row_idx): # Check each element before the diagonal element in a row if Butcher[i][j] != sp.sympify(0): # If any non-diagonal coeffcient is non-zero, # then the table is not diagonal return False row_idx += 1 # Update to check the next row return True # Loop over all Butcher tables to check whether each is diagonal or not for key, value in Butcher_dict.items(): if diagonal(key) == True: print("The MoL method "+str(key)+" is diagonal!") else: print("The MoL method "+str(key)+" is NOT diagonal!") ``` The MoL method Euler is diagonal! The MoL method RK2 Heun is diagonal! The MoL method RK2 MP is diagonal! 
The MoL method RK2 Ralston is diagonal! The MoL method RK3 is NOT diagonal! The MoL method RK3 Heun is diagonal! The MoL method RK3 Ralston is diagonal! The MoL method SSPRK3 is NOT diagonal! The MoL method RK4 is diagonal! The MoL method DP5 is NOT diagonal! The MoL method DP5alt is NOT diagonal! The MoL method CK5 is NOT diagonal! The MoL method DP6 is NOT diagonal! The MoL method L6 is NOT diagonal! The MoL method DP8 is NOT diagonal! <a id='ccode'></a> # Step 3: Generating the C Code [Back to [top](#toc)\] $$\label{ccode}$$ The following sections build up the C code for implementing the [Method of Lines timestepping algorithm](http://www.scholarpedia.org/article/Method_of_lines) for solving hyperbolic PDEs. **First an important note on efficiency:** Memory efficiency is incredibly important here, as $\vec{f}$ is usually the largest object in memory. If we only made use of the Butcher tables without concern for memory efficiency, `generate_gridfunction_names()` and `MoL_step_forward_in_time()` would be very simple functions. It turns out that several of the Runge-Kutta-like methods in MoL can be made more efficient; for example "RK4" can be performed using only 4 "timelevels" of $\vec{f}$ in memory (i.e., a total memory usage of `sizeof(f) * 4`). A naive implementation might use 5 or 6 copies. RK-like methods that have diagonal Butcher tables can be made far more efficient than the naive approach. **Exercise to student:** Improve the efficiency of other RK-like methods. <a id='generategfnames'></a> ## Step 3.a: `generate_gridfunction_names()`: Uniquely and descriptively assign names to sets of gridfunctions [Back to [top](#toc)\] $$\label{generategfnames}$$ `generate_gridfunction_names()` names gridfunctions to be consistent with a given RK substep. For example we might call the set of gridfunctions stored at substep $k_1$ `k1_gfs`. ```python # Each MoL method has its own set of names for groups of gridfunctions, # aiming to be sufficiently descriptive. So for example a set of # gridfunctions that store "k_1" in an RK-like method could be called # "k1_gfs". def generate_gridfunction_names(MoL_method = "RK4"): # Step 3.a: MoL gridfunctions fall into 3 overlapping categories: # 1) y_n=y_i(t_n) gridfunctions y_n_gfs, which stores data for the vector of gridfunctions y_i at t_n, # the start of each MoL timestep. # 2) non-y_n gridfunctions, needed to compute the data at t_{n+1}. Often labeled with k_i in the name, # these gridfunctions are *not* needed at the start of each timestep, so are available for temporary # storage when gridfunctions needed for diagnostics are computed at the start of each timestep. # These gridfunctions can also be freed during a regrid, to enable storage for the post-regrid # destination y_n_gfs. # 3) Diagnostic output gridfunctions diagnostic_output_gfs, which simply uses the memory from auxiliary # gridfunctions at one auxiliary time to compute diagnostics at t_n. 
# Here we specify which gridfunctions fall into each category, starting with the obvious: y_n_gridfunctions y_n_gridfunctions = "y_n_gfs" # Next the less-obvious non-y_n gridfunctions, whose names and number depend on the chosen MoL method non_y_n_gridfunctions_list = [] # No matter the method we define gridfunctions "y_n_gfs" to store the initial data if diagonal(MoL_method) and "RK3" in MoL_method: non_y_n_gridfunctions_list.append("k1_or_y_nplus_a21_k1_or_y_nplus1_running_total_gfs") non_y_n_gridfunctions_list.append("k2_or_y_nplus_a32_k2_gfs") diagnostic_gridfunctions_point_to = "k1_or_y_nplus_a21_k1_or_y_nplus1_running_total_gfs" else: if not diagonal(MoL_method): # Allocate memory for non-diagonal Butcher tables # Determine the number of k_i steps based on length of Butcher Table num_k = len(Butcher_dict[MoL_method][0])-1 # For non-diagonal tables an intermediate gridfunction "next_y_input" is used for rhs evaluations non_y_n_gridfunctions_list.append("next_y_input_gfs") for i in range(num_k): # Need to allocate all k_i steps for a given method non_y_n_gridfunctions_list.append("k" + str(i + 1) + "_gfs") diagnostic_gridfunctions_point_to = "k1_gfs" else: # Allocate memory for diagonal Butcher tables, which use a "y_nplus1_running_total" gridfunction non_y_n_gridfunctions_list.append("y_nplus1_running_total_gfs") if MoL_method != 'Euler': # Allocate memory for diagonal Butcher tables that aren't Euler # Need k_odd for k_1,3,5... and k_even for k_2,4,6... non_y_n_gridfunctions_list.append("k_odd_gfs") non_y_n_gridfunctions_list.append("k_even_gfs") diagnostic_gridfunctions_point_to = "y_nplus1_running_total_gfs" non_y_n_gridfunctions_list.append("auxevol_gfs") return y_n_gridfunctions, non_y_n_gridfunctions_list, diagnostic_gridfunctions_point_to ``` <a id='alloc'></a> ## Step 3.b: Memory allocation: `MoL_malloc_y_n_gfs()` and `MoL_malloc_non_y_n_gfs()`: [Back to [top](#toc)\] $$\label{alloc}$$ The generated C functions `MoL_malloc_y_n_gfs()` and `MoL_malloc_non_y_n_gfs()` read the lists of needed gridfunctions, provided by the (Python) function `generate_gridfunction_names()`, and allocate space for all of them.
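To see exactly which gridfunction groups these allocators will loop over, one can query `generate_gridfunction_names()` directly. The short cell below is purely illustrative (it is not part of the NRPy+ module); it just prints the lists returned by the function defined above for one diagonal and one non-diagonal method.

```python
# Illustrative only: print the gridfunction groupings for a diagonal (RK4) and
# a non-diagonal (DP5) method, as returned by generate_gridfunction_names().
for method in ["RK4", "DP5"]:
    y_n_gfs, non_y_n_gfs_list, diag_point_to = generate_gridfunction_names(MoL_method=method)
    print(method, "->", y_n_gfs, non_y_n_gfs_list, "| diagnostics alias:", diag_point_to)
```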
```python # add_to_Cfunction_dict_MoL_malloc() registers # MoL_malloc_y_n_gfs() and # MoL_malloc_non_y_n_gfs(), which allocate memory for # the indicated sets of gridfunctions def add_to_Cfunction_dict_MoL_malloc(MoL_method, which_gfs): includes = ["NRPy_basic_defines.h", "NRPy_function_prototypes.h"] desc = "Method of Lines (MoL) for \"" + MoL_method + "\" method: Allocate memory for \""+which_gfs+"\" gridfunctions\n" desc += " * y_n_gfs are used to store data for the vector of gridfunctions y_i at t_n, at the start of each MoL timestep\n" desc += " * non_y_n_gfs are needed for intermediate (e.g., k_i) storage in chosen MoL method\n" c_type = "void" y_n_gridfunctions, non_y_n_gridfunctions_list, diagnostic_gridfunctions_point_to = \ generate_gridfunction_names(MoL_method = MoL_method) gridfunctions_list = [] if which_gfs == "y_n_gfs": gridfunctions_list = [y_n_gridfunctions] elif which_gfs == "non_y_n_gfs": gridfunctions_list = non_y_n_gridfunctions_list else: print("ERROR: which_gfs = \"" + which_gfs + "\" unrecognized.") sys.exit(1) name = "MoL_malloc_" + which_gfs params = "const paramstruct *restrict params, MoL_gridfunctions_struct *restrict gridfuncs" body = "const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;\n" for gridfunctions in gridfunctions_list: num_gfs = "NUM_EVOL_GFS" if gridfunctions == "auxevol_gfs": num_gfs = "NUM_AUXEVOL_GFS" body += "gridfuncs->" + gridfunctions + " = (REAL *restrict)malloc(sizeof(REAL) * " + num_gfs + " * Nxx_plus_2NGHOSTS_tot);\n" body += "\ngridfuncs->diagnostic_output_gfs = gridfuncs->" + diagnostic_gridfunctions_point_to + ";\n" add_to_Cfunction_dict( includes=includes, desc=desc, c_type=c_type, name=name, params=params, body=indent_Ccode(body, " "), rel_path_to_Cparams=os.path.join(".")) ``` <a id='molstep'></a> ## Step 3.c: Take one Method of Lines time step: `MoL_step_forward_in_time()` [Back to [top](#toc)\] $$\label{molstep}$$ An MoL step consists in general of a series of Runge-Kutta-like substeps, and the `MoL_step_forward_in_time()` C function pulls together all of these substeps. The basic C code for an MoL substep, set up by the Python function `single_RK_substep()` below, is as follows. 1. Evaluate the right-hand side of $\partial_t \vec{f}=$ `RHS`, to get the time derivative of the set of gridfunctions $\vec{f}$ at our current time. 1. Perform the Runge-Kutta update, which depends on $\partial_t \vec{f}$ on the current and sometimes previous times. 1. Call post-right-hand side functions as desired. The `single_RK_substep()` function generates the C code for performing the above steps, applying substitutions for e.g., `RK_INPUT_GFS` and `RK_OUTPUT_GFS` as appropriate. ```python # single_RK_substep() performs necessary replacements to # define C code for a single RK substep # (e.g., computing k_1 and then updating the outer boundaries) def single_RK_substep(commentblock, RHS_str, RHS_input_str, RHS_output_str, RK_lhss_list, RK_rhss_list, post_RHS_list, post_RHS_output_list, indent=" "): addl_indent = "" return_str = commentblock + "\n" if not isinstance(RK_lhss_list, list): RK_lhss_list = [RK_lhss_list] if not isinstance(RK_rhss_list, list): RK_rhss_list = [RK_rhss_list] if not isinstance(post_RHS_list, list): post_RHS_list = [post_RHS_list] if not isinstance(post_RHS_output_list, list): post_RHS_output_list = [post_RHS_output_list] # Part 1: RHS evaluation: return_str += indent_Ccode(RHS_str.replace("RK_INPUT_GFS", RHS_input_str). 
replace("RK_OUTPUT_GFS", RHS_output_str)+"\n", indent=addl_indent) # Part 2: RK update return_str += addl_indent + "LOOP_ALL_GFS_GPS(i) {\n" for lhs, rhs in zip(RK_lhss_list, RK_rhss_list): return_str += addl_indent + indent + lhs + "[i] = " + rhs + ";\n" return_str += addl_indent + "}\n" # Part 3: Call post-RHS functions for post_RHS, post_RHS_output in zip(post_RHS_list, post_RHS_output_list): return_str += indent_Ccode(post_RHS.replace("RK_OUTPUT_GFS", post_RHS_output), indent=addl_indent) return return_str ``` In the `add_to_Cfunction_dict_MoL_step_forward_in_time()` Python function below we construct and register the core C function for MoL timestepping: `MoL_step_forward_in_time()`. `MoL_step_forward_in_time()` implements Butcher tables for Runge-Kutta-like methods, leveraging the `single_RK_substep()` helper function above as needed. Again, we aim for maximum memory efficiency so that e.g., RK4 needs to store only 4 levels of $\vec{f}$. ```python ######################################################################################################################## # EXAMPLE # ODE: y' = f(t,y), y(t_0) = y_0 # Starting at time t_n with solution having value y_n and trying to update to y_nplus1 with timestep dt # Example of scheme for RK4 with k_1, k_2, k_3, k_4 (Using non-diagonal algorithm) Notice this requires storage of # y_n, y_nplus1, k_1 through k_4 # k_1 = dt*f(t_n, y_n) # k_2 = dt*f(t_n + 1/2*dt, y_n + 1/2*k_1) # k_3 = dt*f(t_n + 1/2*dt, y_n + 1/2*k_2) # k_4 = dt*f(t_n + dt, y_n + k_3) # y_nplus1 = y_n + 1/3k_1 + 1/6k_2 + 1/6k_3 + 1/3k_4 # Example of scheme RK4 using only k_odd and k_even (Diagonal algroithm) Notice that this only requires storage # k_odd = dt*f(t_n, y_n) # y_nplus1 = 1/3*k_odd # k_even = dt*f(t_n + 1/2*dt, y_n + 1/2*k_odd) # y_nplus1 += 1/6*k_even # k_odd = dt*f(t_n + 1/2*dt, y_n + 1/2*k_even) # y_nplus1 += 1/6*k_odd # k_even = dt*f(t_n + dt, y_n + k_odd) # y_nplus1 += 1/3*k_even ######################################################################################################################## def add_to_Cfunction_dict_MoL_step_forward_in_time(MoL_method, RHS_string = "", post_RHS_string = "", enable_rfm=False, enable_curviBCs=False): includes = ["NRPy_basic_defines.h", "NRPy_function_prototypes.h"] desc = "Method of Lines (MoL) for \"" + MoL_method + "\" method: Step forward one full timestep.\n" c_type = "void" name = "MoL_step_forward_in_time" params = "const paramstruct *restrict params, " if enable_rfm: params += "const rfm_struct *restrict rfmstruct, " else: params += "REAL *xx[3], " if enable_curviBCs: params += "const bc_struct *restrict bcstruct, " params += "MoL_gridfunctions_struct *restrict gridfuncs, const REAL dt" indent = "" # We don't bother with an indent here. 
body = indent + "// C code implementation of -={ " + MoL_method + " }=- Method of Lines timestepping.\n\n" y_n_gridfunctions, non_y_n_gridfunctions_list, _throwaway = generate_gridfunction_names(MoL_method) body += "// First set gridfunction aliases from gridfuncs struct\n\n" body += "// y_n gridfunctions:\n" body += "REAL *restrict " + y_n_gridfunctions + " = gridfuncs->" + y_n_gridfunctions + ";\n" body += "\n" body += "// Temporary timelevel & AUXEVOL gridfunctions:\n" for gf in non_y_n_gridfunctions_list: body += "REAL *restrict " + gf + " = gridfuncs->" + gf + ";\n" body += "\n" body += "// Next perform a full step forward in time\n" # Implement Method of Lines (MoL) Timestepping Butcher = Butcher_dict[MoL_method][0] # Get the desired Butcher table from the dictionary num_steps = len(Butcher)-1 # Specify the number of required steps to update solution # Diagonal RK3 only!!! if diagonal(MoL_method) and "RK3" in MoL_method: # In a diagonal RK3 method, only 3 gridfunctions need be defined. Below implements this approach. # k_1 body += """ // In a diagonal RK3 method like this one, only 3 gridfunctions need be defined. Below implements this approach. // Using y_n_gfs as input, k1 and apply boundary conditions\n""" body += single_RK_substep( commentblock = """// -={ START k1 substep }=- // RHS evaluation: // 1. We will store k1_or_y_nplus_a21_k1_or_y_nplus1_running_total_gfs now as // ... the update for the next rhs evaluation y_n + a21*k1*dt // Post-RHS evaluation: // 1. Apply post-RHS to y_n + a21*k1*dt""", RHS_str = RHS_string, RHS_input_str = "y_n_gfs", RHS_output_str = "k1_or_y_nplus_a21_k1_or_y_nplus1_running_total_gfs", RK_lhss_list = ["k1_or_y_nplus_a21_k1_or_y_nplus1_running_total_gfs"], RK_rhss_list = ["("+sp.ccode(Butcher[1][1]).replace("L","")+")*k1_or_y_nplus_a21_k1_or_y_nplus1_running_total_gfs[i]*dt + y_n_gfs[i]"], post_RHS_list = [post_RHS_string], post_RHS_output_list = ["k1_or_y_nplus_a21_k1_or_y_nplus1_running_total_gfs"]) + "// -={ END k1 substep }=-\n\n" # k_2 body += single_RK_substep( commentblock="""// -={ START k2 substep }=- // RHS evaluation: // 1. Reassign k1_or_y_nplus_a21_k1_or_y_nplus1_running_total_gfs to be the running total y_{n+1}; a32*k2*dt to the running total // 2. Store k2_or_y_nplus_a32_k2_gfs now as y_n + a32*k2*dt // Post-RHS evaluation: // 1. Apply post-RHS to both y_n + a32*k2 (stored in k2_or_y_nplus_a32_k2_gfs) // ... and the y_{n+1} running total, as they have not been applied yet to k2-related gridfunctions""", RHS_str=RHS_string, RHS_input_str="k1_or_y_nplus_a21_k1_or_y_nplus1_running_total_gfs", RHS_output_str="k2_or_y_nplus_a32_k2_gfs", RK_lhss_list=["k1_or_y_nplus_a21_k1_or_y_nplus1_running_total_gfs","k2_or_y_nplus_a32_k2_gfs"], RK_rhss_list=["("+sp.ccode(Butcher[3][1]).replace("L","")+")*(k1_or_y_nplus_a21_k1_or_y_nplus1_running_total_gfs[i] - y_n_gfs[i])/("+sp.ccode(Butcher[1][1]).replace("L","")+") + y_n_gfs[i] + ("+sp.ccode(Butcher[3][2]).replace("L","")+")*k2_or_y_nplus_a32_k2_gfs[i]*dt", "("+sp.ccode(Butcher[2][2]).replace("L","")+")*k2_or_y_nplus_a32_k2_gfs[i]*dt + y_n_gfs[i]"], post_RHS_list=[post_RHS_string,post_RHS_string], post_RHS_output_list=["k2_or_y_nplus_a32_k2_gfs","k1_or_y_nplus_a21_k1_or_y_nplus1_running_total_gfs"]) + "// -={ END k2 substep }=-\n\n" # k_3 body += single_RK_substep( commentblock="""// -={ START k3 substep }=- // RHS evaluation: // 1. Add k3 to the running total and save to y_n // Post-RHS evaluation: // 1. 
Apply post-RHS to y_n""", RHS_str=RHS_string, RHS_input_str="k2_or_y_nplus_a32_k2_gfs", RHS_output_str="y_n_gfs", RK_lhss_list=["y_n_gfs","k2_or_y_nplus_a32_k2_gfs"], RK_rhss_list=["k1_or_y_nplus_a21_k1_or_y_nplus1_running_total_gfs[i] + ("+sp.ccode(Butcher[3][3]).replace("L","")+")*y_n_gfs[i]*dt"], post_RHS_list=[post_RHS_string], post_RHS_output_list=["y_n_gfs"]) + "// -={ END k3 substep }=-\n\n" else: y_n = "y_n_gfs" if not diagonal(MoL_method): for s in range(num_steps): next_y_input = "next_y_input_gfs" # If we're on the first step (s=0), we use y_n gridfunction as input. # Otherwise next_y_input is input. Output is just the reverse. if s == 0: # If on first step: RHS_input = y_n else: # If on second step or later: RHS_input = next_y_input RHS_output = "k" + str(s + 1) + "_gfs" if s == num_steps-1: # If on final step: RK_lhs = y_n RK_rhs = y_n + "[i] + dt*(" else: # If on anything but the final step: RK_lhs = next_y_input RK_rhs = y_n + "[i] + dt*(" for m in range(s+1): if Butcher[s+1][m+1] != 0: if Butcher[s+1][m+1] != 1: RK_rhs += " + k"+str(m+1)+"_gfs[i]*("+sp.ccode(Butcher[s+1][m+1]).replace("L","")+")" else: RK_rhs += " + k"+str(m+1)+"_gfs[i]" RK_rhs += " )" post_RHS = post_RHS_string if s == num_steps-1: # If on final step: post_RHS_output = y_n else: # If on anything but the final step: post_RHS_output = next_y_input body += single_RK_substep( commentblock="// -={ START k" + str(s + 1) + " substep }=-", RHS_str=RHS_string, RHS_input_str=RHS_input, RHS_output_str=RHS_output, RK_lhss_list=[RK_lhs], RK_rhss_list=[RK_rhs], post_RHS_list=[post_RHS], post_RHS_output_list=[post_RHS_output]) + "// -={ END k" + str(s + 1) + " substep }=-\n\n" else: # diagonal case: y_nplus1_running_total = "y_nplus1_running_total_gfs" if MoL_method == 'Euler': # Euler's method doesn't require any k_i, and gets its own unique algorithm body += single_RK_substep( commentblock=indent + "// ***Euler timestepping only requires one RHS evaluation***", RHS_str=RHS_string, RHS_input_str=y_n, RHS_output_str=y_nplus1_running_total, RK_lhss_list=[y_n], RK_rhss_list=[y_n+"[i] + "+y_nplus1_running_total+"[i]*dt"], post_RHS_list=[post_RHS_string], post_RHS_output_list=[y_n]) else: for s in range(num_steps): # If we're on the first step (s=0), we use y_n gridfunction as input. # and k_odd as output. 
if s == 0: RHS_input = "y_n_gfs" RHS_output = "k_odd_gfs" # For the remaining steps the inputs and ouputs alternate between k_odd and k_even elif s % 2 == 0: RHS_input = "k_even_gfs" RHS_output = "k_odd_gfs" else: RHS_input = "k_odd_gfs" RHS_output = "k_even_gfs" RK_lhs_list = [] RK_rhs_list = [] if s != num_steps-1: # For anything besides the final step if s == 0: # The first RK step RK_lhs_list.append(y_nplus1_running_total) RK_rhs_list.append(RHS_output+"[i]*dt*("+sp.ccode(Butcher[num_steps][s+1]).replace("L","")+")") RK_lhs_list.append(RHS_output) RK_rhs_list.append(y_n+"[i] + "+RHS_output+"[i]*dt*("+sp.ccode(Butcher[s+1][s+1]).replace("L","")+")") else: if Butcher[num_steps][s+1] != 0: RK_lhs_list.append(y_nplus1_running_total) if Butcher[num_steps][s+1] != 1: RK_rhs_list.append(y_nplus1_running_total+"[i] + "+RHS_output+"[i]*dt*("+sp.ccode(Butcher[num_steps][s+1]).replace("L","")+")") else: RK_rhs_list.append(y_nplus1_running_total+"[i] + "+RHS_output+"[i]*dt") if Butcher[s+1][s+1] != 0: RK_lhs_list.append(RHS_output) if Butcher[s+1][s+1] != 1: RK_rhs_list.append(y_n+"[i] + "+RHS_output+"[i]*dt*("+sp.ccode(Butcher[s+1][s+1]).replace("L","")+")") else: RK_rhs_list.append(y_n+"[i] + "+RHS_output+"[i]*dt") post_RHS_output = RHS_output if s == num_steps-1: # If on the final step if Butcher[num_steps][s+1] != 0: RK_lhs_list.append(y_n) if Butcher[num_steps][s+1] != 1: RK_rhs_list.append(y_n+"[i] + "+y_nplus1_running_total+"[i] + "+RHS_output+"[i]*dt*("+sp.ccode(Butcher[num_steps][s+1]).replace("L","")+")") else: RK_rhs_list.append(y_n+"[i] + "+y_nplus1_running_total+"[i] + "+RHS_output+"[i]*dt)") post_RHS_output = y_n body += single_RK_substep( commentblock=indent + "// -={ START k" + str(s + 1) + " substep }=-", RHS_str=RHS_string, RHS_input_str=RHS_input, RHS_output_str=RHS_output, RK_lhss_list=RK_lhs_list, RK_rhss_list=RK_rhs_list, post_RHS_list=[post_RHS_string], post_RHS_output_list=[post_RHS_output]) + "// -={ END k" + str(s + 1) + " substep }=-\n\n" add_to_Cfunction_dict( includes=includes, desc=desc, c_type=c_type, name=name, params=params, body=indent_Ccode(body, " "), rel_path_to_Cparams=os.path.join(".")) ``` <a id='free'></a> ## Step 3.d: Memory deallocation: `MoL_free_memory()` [Back to [top](#toc)\] $$\label{free}$$ We define the function `MoL_free_memory()` which generates the C code for freeing the memory that was being occupied by the grid functions lists that had been allocated. 
```python # add_to_Cfunction_dict_MoL_free_memory() registers # MoL_free_memory_y_n_gfs() and # MoL_free_memory_non_y_n_gfs(), which free memory for # the indicated sets of gridfunctions def add_to_Cfunction_dict_MoL_free_memory(MoL_method, which_gfs): includes = ["NRPy_basic_defines.h", "NRPy_function_prototypes.h"] desc = "Method of Lines (MoL) for \"" + MoL_method + "\" method: Free memory for \""+which_gfs+"\" gridfunctions\n" desc += " - y_n_gfs are used to store data for the vector of gridfunctions y_i at t_n, at the start of each MoL timestep\n" desc += " - non_y_n_gfs are needed for intermediate (e.g., k_i) storage in chosen MoL method\n" c_type = "void" y_n_gridfunctions, non_y_n_gridfunctions_list, diagnostic_gridfunctions_point_to = \ generate_gridfunction_names(MoL_method = MoL_method) gridfunctions_list = [] if which_gfs == "y_n_gfs": gridfunctions_list = [y_n_gridfunctions] elif which_gfs == "non_y_n_gfs": gridfunctions_list = non_y_n_gridfunctions_list else: print("ERROR: which_gfs = \"" + which_gfs + "\" unrecognized.") sys.exit(1) name = "MoL_free_memory_" + which_gfs params = "const paramstruct *restrict params, MoL_gridfunctions_struct *restrict gridfuncs" body = "" for gridfunctions in gridfunctions_list: body += " free(gridfuncs->" + gridfunctions + ");\n" add_to_Cfunction_dict( includes=includes, desc=desc, c_type=c_type, name=name, params=params, body=indent_Ccode(body, " "), rel_path_to_Cparams=os.path.join(".")) ``` <a id='nrpybasicdefines'></a> ## Step 3.e: Define & register `MoL_gridfunctions_struct` in `NRPy_basic_defines.h`: `NRPy_basic_defines_MoL_timestepping_struct()` [Back to [top](#toc)\] $$\label{nrpybasicdefines}$$ `MoL_gridfunctions_struct` stores pointers to all the gridfunctions needed by MoL, and we define this struct within `NRPy_basic_defines.h`. 
```python def NRPy_basic_defines_MoL_timestepping_struct(MoL_method="RK4"): y_n_gridfunctions, non_y_n_gridfunctions_list, diagnostic_gridfunctions_point_to = \ generate_gridfunction_names(MoL_method=MoL_method) # Step 3.b: Create MoL_timestepping struct: indent = " " Nbd = "typedef struct __MoL_gridfunctions_struct__ {\n" Nbd += indent + "REAL *restrict " + y_n_gridfunctions + ";\n" for gfs in non_y_n_gridfunctions_list: Nbd += indent + "REAL *restrict " + gfs + ";\n" Nbd += indent + "REAL *restrict diagnostic_output_gfs;\n" Nbd += "} MoL_gridfunctions_struct;\n" Nbd += """#define LOOP_ALL_GFS_GPS(ii) _Pragma("omp parallel for") \\ for(int (ii)=0;(ii)<Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2*NUM_EVOL_GFS;(ii)++)\n""" outC_NRPy_basic_defines_h_dict["MoL"] = Nbd ``` <a id='setupall'></a> ## Step 3.f: Add all MoL C codes to C function dictionary, and add MoL definitions to `NRPy_basic_defines.h`: `register_C_functions_and_NRPy_basic_defines()` \[Back to [top](#toc)\] $$\label{setupall}$$ ```python def register_C_functions_and_NRPy_basic_defines(MoL_method = "RK4", RHS_string = "rhs_eval(Nxx,Nxx_plus_2NGHOSTS,dxx, RK_INPUT_GFS, RK_OUTPUT_GFS);", post_RHS_string = "apply_bcs(Nxx,Nxx_plus_2NGHOSTS, RK_OUTPUT_GFS);", enable_rfm=False, enable_curviBCs=False): for which_gfs in ["y_n_gfs", "non_y_n_gfs"]: add_to_Cfunction_dict_MoL_malloc(MoL_method, which_gfs) add_to_Cfunction_dict_MoL_free_memory(MoL_method, which_gfs) add_to_Cfunction_dict_MoL_step_forward_in_time(MoL_method, RHS_string, post_RHS_string, enable_rfm=enable_rfm, enable_curviBCs=enable_curviBCs) NRPy_basic_defines_MoL_timestepping_struct(MoL_method = MoL_method) ``` <a id='code_validation'></a> # Step 4: Code Validation against `MoLtimestepping.MoL_new_way` NRPy+ module [Back to [top](#toc)\] $$\label{code_validation}$$ As a code validation check, we verify agreement in the dictionary of Butcher tables between 1. this tutorial and 2. the NRPy+ [MoLtimestepping.MoL_new_way](../edit/MoLtimestepping/MoL_new_way.py) module. We generate the header files for each RK method and check for agreement with the NRPy+ module. 
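Before running the full validation, it can be helpful to spot-check one piece of the machinery registered above. The cell below is purely illustrative (it is not part of the validation itself): it generates and prints the `MoL_gridfunctions_struct` contribution that Step 3.e produces for the RK4 method.

```python
# Illustration only: generate the NRPy_basic_defines.h contribution for RK4
# and print the resulting MoL_gridfunctions_struct definition.
NRPy_basic_defines_MoL_timestepping_struct(MoL_method="RK4")
print(outC_NRPy_basic_defines_h_dict["MoL"])
```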
```python import sys import MoLtimestepping.MoL_new_way as MoLC import difflib import pprint # Courtesy https://stackoverflow.com/questions/12956957/print-diff-of-python-dictionaries , # which itself is an adaptation of some Cpython core code def compare_dicts(d1, d2): return ('\n' + '\n'.join(difflib.ndiff( pprint.pformat(d1).splitlines(), pprint.pformat(d2).splitlines()))) print("\n\n ### BEGIN VALIDATION TESTS ###") import filecmp for key, value in Butcher_dict.items(): register_C_functions_and_NRPy_basic_defines(key, "rhs_eval(Nxx,Nxx_plus_2NGHOSTS,dxx, RK_INPUT_GFS, RK_OUTPUT_GFS);", "apply_bcs(Nxx,Nxx_plus_2NGHOSTS, RK_OUTPUT_GFS);") from outputC import outC_function_dict notebook_dict = outC_function_dict.copy() outC_function_dict.clear() from outputC import outC_function_dict if outC_function_dict != {}: print("Error in clearing dictionary.") sys.exit(1) MoLC.register_C_functions_and_NRPy_basic_defines(key, "rhs_eval(Nxx,Nxx_plus_2NGHOSTS,dxx, RK_INPUT_GFS, RK_OUTPUT_GFS);", "apply_bcs(Nxx,Nxx_plus_2NGHOSTS, RK_OUTPUT_GFS);") from outputC import outC_function_dict trusted_dict = outC_function_dict if notebook_dict != trusted_dict: print("VALIDATION TEST FAILED.\n") print(compare_dicts(notebook_dict, trusted_dict)) sys.exit(1) print("VALIDATION TEST PASSED on all files from "+str(key)+" method") print("### END VALIDATION TESTS ###") ``` ### BEGIN VALIDATION TESTS ### VALIDATION TEST PASSED on all files from Euler method VALIDATION TEST PASSED on all files from RK2 Heun method VALIDATION TEST PASSED on all files from RK2 MP method VALIDATION TEST PASSED on all files from RK2 Ralston method VALIDATION TEST PASSED on all files from RK3 method VALIDATION TEST PASSED on all files from RK3 Heun method VALIDATION TEST PASSED on all files from RK3 Ralston method VALIDATION TEST PASSED on all files from SSPRK3 method VALIDATION TEST PASSED on all files from RK4 method VALIDATION TEST PASSED on all files from DP5 method VALIDATION TEST PASSED on all files from DP5alt method VALIDATION TEST PASSED on all files from CK5 method VALIDATION TEST PASSED on all files from DP6 method VALIDATION TEST PASSED on all files from L6 method VALIDATION TEST PASSED on all files from DP8 method ### END VALIDATION TESTS ### <a id='latex_pdf_output'></a> # Step 5: Output this notebook to $\LaTeX$-formatted PDF \[Back to [top](#toc)\] $$\label{latex_pdf_output}$$ The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename [Tutorial-Method_of_Lines-C_Code_Generation_new_way.pdf](Tutorial-Method_of_Lines-C_Code_Generation_new_way.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.) ```python import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-Method_of_Lines-C_Code_Generation_new_way") ``` Created Tutorial-Method_of_Lines-C_Code_Generation_new_way.tex, and compiled LaTeX file to PDF file Tutorial-Method_of_Lines-C_Code_Generation_new_way.pdf
b820a5a173d343e100e75ded8f20e0078025663c
50,487
ipynb
Jupyter Notebook
Tutorial-Method_of_Lines-C_Code_Generation_new_way.ipynb
GeraintPratten/nrpytutorial
9d9ecb6c4f020adca29b51c79fb33787644c05e1
[ "BSD-2-Clause" ]
2
2021-07-16T02:35:40.000Z
2021-08-02T15:08:20.000Z
Tutorial-Method_of_Lines-C_Code_Generation_new_way.ipynb
GeraintPratten/nrpytutorial
9d9ecb6c4f020adca29b51c79fb33787644c05e1
[ "BSD-2-Clause" ]
null
null
null
Tutorial-Method_of_Lines-C_Code_Generation_new_way.ipynb
GeraintPratten/nrpytutorial
9d9ecb6c4f020adca29b51c79fb33787644c05e1
[ "BSD-2-Clause" ]
null
null
null
52.921384
545
0.5749
true
10,412
Qwen/Qwen-72B
1. YES 2. YES
0.808067
0.640636
0.517677
__label__eng_Latn
0.8254
0.041066
# Solving Equations ## A simple linear equation in one variable \begin{equation}x + 16 = -25\end{equation} \begin{equation}x + 16 - 16 = -25 - 16\end{equation} \begin{equation}x = -25 - 16\end{equation} \begin{equation}x = -41\end{equation} ```python x = -41 # verify the solution of the equation x + 16 == -25 ``` True ## An equation with a coefficient \begin{equation}3x - 2 = 10 \end{equation} \begin{equation}3x - 2 + 2 = 10 + 2 \end{equation} \begin{equation}3x = 12 \end{equation} \begin{equation}x = 4\end{equation} ```python x = 4 # substitute x = 4 3 * x - 2 == 10 ``` True ## An equation with a fractional coefficient \begin{equation}\frac{x}{3} + 1 = 16 \end{equation} \begin{equation}\frac{x}{3} = 15 \end{equation} \begin{equation}\frac{3}{1} \cdot \frac{x}{3} = 15 \cdot 3 \end{equation} \begin{equation}x = 45 \end{equation} ```python x = 45 x/3 + 1 == 16 ``` True ## An example that requires combining like terms \begin{equation}3x + 2 = 5x - 1 \end{equation} \begin{equation}3x + 3 = 5x \end{equation} \begin{equation}3 = 2x \end{equation} \begin{equation}\frac{3}{2} = x \end{equation} \begin{equation}x = \frac{3}{2} \end{equation} \begin{equation}x = 1\frac{1}{2} \end{equation} ```python x = 1.5 3*x + 2 == 5*x -1 ``` True ## Practice: an equation in one variable \begin{equation}\textbf{4(x + 2)} + \textbf{3(x - 2)} = 16 \end{equation} \begin{equation}4x + 8 + 3x - 6 = 16 \end{equation} \begin{equation}7x + 2 = 16 \end{equation} \begin{equation}7x = 14 \end{equation} \begin{equation}\frac{7x}{7} = \frac{14}{7} \end{equation} \begin{equation}x = 2 \end{equation} ```python x = 2 4 * (x + 2) + 3 * (x - 2) == 16 ``` True
# Linear Equations ## Example \begin{equation}2y + 3 = 3x - 1 \end{equation} \begin{equation}2y + 4 = 3x \end{equation} \begin{equation}2y = 3x - 4 \end{equation} \begin{equation}y = \frac{3x - 4}{2} \end{equation} ```python import numpy as np from tabulate import tabulate x = np.array(range(-10, 11)) # 21 data points from -10 to 10 y = (3 * x - 4) / 2 # the corresponding function values print(tabulate(np.column_stack((x,y)), headers=['x', 'y'])) ``` x y --- ----- -10 -17 -9 -15.5 -8 -14 -7 -12.5 -6 -11 -5 -9.5 -4 -8 -3 -6.5 -2 -5 -1 -3.5 0 -2 1 -0.5 2 1 3 2.5 4 4 5 5.5 6 7 7 8.5 8 10 9 11.5 10 13 ```python from IPython.core.interactiveshell import InteractiveShell InteractiveShell.ast_node_interactivity = "last_expr" # %matplotlib inline ``` ```python from matplotlib import pyplot as plt plt.plot(x, y, color="grey", marker = "o") plt.xlabel('x') plt.ylabel('y') plt.grid() plt.show() ``` ## Intercepts ```python plt.plot(x, y, color="grey") plt.xlabel('x') plt.ylabel('y') plt.grid() plt.axhline() # draw the coordinate axes plt.axvline() plt.show() ``` - The intercept on the x-axis \begin{equation}0 = \frac{3x - 4}{2} \end{equation} \begin{equation}\frac{3x - 4}{2} = 0 \end{equation} \begin{equation}3x - 4 = 0 \end{equation} \begin{equation}3x = 4 \end{equation} \begin{equation}x = \frac{4}{3} \end{equation} \begin{equation}x = 1\frac{1}{3} \end{equation} - The intercept on the y-axis \begin{equation}y = \frac{3\cdot0 - 4}{2} \end{equation} \begin{equation}y = \frac{-4}{2} \end{equation} \begin{equation}y = -2 \end{equation} ```python plt.plot(x, y, color="grey") plt.xlabel('x') plt.ylabel('y') plt.grid() plt.axhline() plt.axvline() plt.annotate('x',(1.333, 0)) # mark the intercept points plt.annotate('y',(0,-2)) plt.show() ``` > The usefulness of the intercepts is clear: two points determine a line, so connecting the intercepts is enough to draw the graph of the function. ## Slope \begin{equation}slope = \frac{\Delta{y}}{\Delta{x}} \end{equation} \begin{equation}m = \frac{y_{2} - y_{1}}{x_{2} - x_{1}} \end{equation} \begin{equation}m = \frac{7 - -2}{6 - 0} \end{equation} \begin{equation}m = \frac{7 + 2}{6 - 0} \end{equation} \begin{equation}m = 1.5 \end{equation} ```python plt.plot(x, y, color="grey") plt.xlabel('x') plt.ylabel('y') plt.grid() plt.axhline() plt.axvline() m = 1.5 xInt = 4 / 3 yInt = -2 mx = [0, xInt] my = [yInt, yInt + m * xInt] plt.plot(mx, my, color='red', lw=5) # highlight in red plt.show() ``` ```python plt.grid() # zoomed-in view plt.axhline() plt.axvline() m = 1.5 xInt = 4 / 3 yInt = -2 mx = [0, xInt] my = [yInt, yInt + m * xInt] plt.plot(mx, my, color='red', lw=5) plt.show() ``` ### The slope-intercept form of a line \begin{equation}y = mx + b \end{equation} \begin{equation}y = \frac{3x - 4}{2} \end{equation} \begin{equation}y = 1\frac{1}{2}x + -2 \end{equation} ```python m = 1.5 yInt = -2 x = np.array(range(-10, 11)) y2 = m * x + yInt # slope-intercept form plt.plot(x, y2, color="grey") plt.xlabel('x') plt.ylabel('y') plt.grid() plt.axhline() plt.axvline() plt.annotate('y', (0, yInt)) plt.show() ```
# Systems of Linear Equations > The meaning of the solution: the intersection point of the lines \begin{equation}x + y = 16 \end{equation} \begin{equation}10x + 25y = 250 \end{equation} ```python l1p1 = [16, 0] # line 1, point 1 l1p2 = [0, 16] # line 1, point 2 l2p1 = [25,0] # line 2, point 1 l2p2 = [0,10] # line 2, point 2 plt.plot(l1p1,l1p2, color='blue') plt.plot(l2p1, l2p2, color="orange") plt.xlabel('x') plt.ylabel('y') plt.grid() plt.show() ``` ### Solving a system of linear equations (elimination method) \begin{equation}x + y = 16 \end{equation} \begin{equation}10x + 25y = 250 \end{equation} \begin{equation}-10(x + y) = -10(16) \end{equation} \begin{equation}10x + 25y = 250 \end{equation} \begin{equation}-10x + -10y = -160 \end{equation} \begin{equation}10x + 25y = 250 \end{equation} \begin{equation}15y = 90 \end{equation} \begin{equation}y = \frac{90}{15} \end{equation} \begin{equation}y = 6 \end{equation} \begin{equation}x + 6 = 16 \end{equation} \begin{equation}x = 10 \end{equation} ```python x = 10 y = 6 print ((x + y == 16) & ((10 * x) + (25 * y) == 250)) ``` True
# Exponents, Roots and Logarithms ## Exponents \begin{equation}2^{2} = 2 \cdot 2 = 4\end{equation} \begin{equation}2^{3} = 2 \cdot 2 \cdot 2 = 8\end{equation} ```python x = 5**3 print(x) ``` 125 ## Roots \begin{equation}?^{2} = 9 \end{equation} \begin{equation}\sqrt{9} = 3 \end{equation} \begin{equation}\sqrt[3]{64} = 4 \end{equation} ```python import math x = math.sqrt(9) # square root print (x) cr = round(64 ** (1. / 3)) # cube root print(cr) ``` 3.0 4 ### Roots as fractional exponents \begin{equation} 8^{\frac{1}{3}} = \sqrt[3]{8} = 2 \end{equation} \begin{equation} 9^{\frac{1}{2}} = \sqrt{9} = 3 \end{equation} ```python print (9**0.5) print (math.sqrt(9)) ``` 3.0 3.0 ## Logarithms > A logarithm is the inverse operation of exponentiation \begin{equation}4^{?} = 16 \end{equation} \begin{equation}log_{4}(16) = 2 \end{equation} ```python x = math.log(16, 4) print(x) ``` 2.0 ### Logarithms base 10 \begin{equation}log(64) = 1.8061 \end{equation} ### Natural logarithms \begin{equation}log_{e}(64) = ln(64) = 4.1589 \end{equation} ```python print(math.log10(64)) print (math.log(64)) ``` 1.806179973983887 4.1588830833596715 ## Working with exponents (combining like terms) \begin{equation}2y = 2x^{4} ( \frac{x^{2} + 2x^{2}}{x^{3}} ) \end{equation} \begin{equation}2y = 2x^{4} ( \frac{3x^{2}}{x^{3}} ) \end{equation} \begin{equation}2y = 2x^{4} ( 3x^{-1} ) \end{equation} \begin{equation}2y = 6x^{3} \end{equation} \begin{equation}y = 3x^{3} \end{equation} ```python x = np.array(range(-10, 11)) y3 = 3 * x ** 3 print(tabulate(np.column_stack((x, y3)), headers=['x', 'y'])) ``` x y --- ----- -10 -3000 -9 -2187 -8 -1536 -7 -1029 -6 -648 -5 -375 -4 -192 -3 -81 -2 -24 -1 -3 0 0 1 3 2 24 3 81 4 192 5 375 6 648 7 1029 8 1536 9 2187 10 3000 ```python plt.plot(x, y3, color="magenta") # y3 is a curve plt.xlabel('x') plt.ylabel('y') plt.grid() plt.axhline() plt.axvline() plt.show() ``` ### An example of exponential growth \begin{equation}y = 2^{x} \end{equation} ```python y4 = 2.0**x plt.plot(x, y4, color="magenta") plt.xlabel('x') plt.ylabel('y') plt.grid() plt.axhline() plt.axvline() plt.show() ``` ### Calculating compound interest > Deposit 100 at 5% annual interest; what is the balance after 20 years (with compounding)? \begin{equation}y1 = 100 + (100 \cdot 0.05) \end{equation} \begin{equation}y1 = 100 \cdot 1.05 \end{equation} \begin{equation}y2 = 100 \cdot 1.05 \cdot 1.05 \end{equation} \begin{equation}y2 = 100 \cdot 1.05^{2} \end{equation} \begin{equation}y20 = 100 \cdot 1.05^{20} \end{equation} ```python year = np.array(range(1, 21)) # years balance = 100 * (1.05 ** year) # balance plt.plot(year, balance, color="green") plt.xlabel('Year') plt.ylabel('Balance') plt.show() ```
# Polynomials \begin{equation}12x^{3} + 2x - 16 \end{equation} Three terms: - 12x<sup>3</sup> - 2x - -16 - two coefficients (12 and 2) and one constant, -16 - the variable x - the exponent <sup>3</sup> ## Standard form Arranged with the powers of x increasing (ascending order) \begin{equation}3x + 4xy^{2} - 3 + x^{3} \end{equation} With the highest-degree term first (descending order) \begin{equation}x^{3} + 4xy^{2} + 3x - 3 \end{equation} ## Simplifying polynomials \begin{equation}x^{3} + 2x^{3} - 3x - x + 8 - 3 \end{equation} \begin{equation}3x^{3} - 4x + 5 \end{equation} ```python from random import randint x = randint(1,100) # plug in an arbitrary value to verify the simplification (x**3 + 2*x**3 - 3*x - x + 8 - 3) == (3*x**3 - 4*x + 5) ``` True ## Adding polynomials \begin{equation}(3x^{3} - 4x + 5) + (2x^{3} + 3x^{2} - 2x + 2) \end{equation} \begin{equation}3x^{3} + 2x^{3} + 3x^{2} - 4x -2x + 5 + 2 \end{equation} \begin{equation}5x^{3} + 3x^{2} - 6x + 7 \end{equation} ```python x = randint(1,100) (3*x**3 - 4*x + 5) + (2*x**3 + 3*x**2 - 2*x + 2) == 5*x**3 + 3*x**2 - 6*x + 7 ``` True ## Subtracting polynomials \begin{equation}(2x^{2} - 4x + 5) - (x^{2} - 2x + 2) \end{equation} \begin{equation}(2x^{2} - 4x + 5) + (-x^{2} + 2x - 2) \end{equation} \begin{equation}2x^{2} + -x^{2} + -4x + 2x + 5 + -2 \end{equation} \begin{equation}x^{2} - 2x + 3 \end{equation} ```python from random import randint x = randint(1,100) (2*x**2 - 4*x + 5) - (x**2 - 2*x + 2) == x**2 - 2*x + 3 ``` True ## Multiplying polynomials 1. Multiply each term of the first polynomial by the second polynomial 2. Combine like terms in the resulting products \begin{equation}(x^{4} + 2)(2x^{2} + 3x - 3) \end{equation} \begin{equation}2x^{6} + 3x^{5} - 3x^{4} + 4x^{2} + 6x - 6 \end{equation} ```python x = randint(1,100) (x**4 + 2)*(2*x**2 + 3*x - 3) == 2*x**6 + 3*x**5 - 3*x**4 + 4*x**2 + 6*x - 6 ``` True ## Dividing polynomials ### A simple example \begin{equation}(4x + 6x^{2}) \div 2x \end{equation} \begin{equation}\frac{4x + 6x^{2}}{2x} \end{equation} \begin{equation}\frac{4x}{2x} + \frac{6x^{2}}{2x}\end{equation} \begin{equation}2 + 3x\end{equation} ```python x = randint(1,100) (4*x + 6*x**2) / (2*x) == 2 + 3*x ``` True ### Long division \begin{equation}(x^{2} + 2x - 3) \div (x - 2) \end{equation} \begin{equation} x - 2 |\overline{x^{2} + 2x - 3} \end{equation} \begin{equation} \;\;\;\;x \end{equation} \begin{equation}x - 2 |\overline{x^{2} + 2x - 3} \end{equation} \begin{equation} \;x^{2} -2x \end{equation} \begin{equation} \;\;\;\;x \end{equation} \begin{equation}x - 2 |\overline{x^{2} + 2x - 3} \end{equation} \begin{equation}- (x^{2} -2x) \end{equation} \begin{equation}\;\;\;\;\;\overline{\;\;\;\;\;\;\;\;\;\;4x -3} \end{equation} \begin{equation} \;\;\;\;\;\;\;\;x + 4 \end{equation} \begin{equation}x - 2 |\overline{x^{2} + 2x - 3} \end{equation} \begin{equation}- (x^{2} -2x) \end{equation} \begin{equation}\;\;\;\;\;\overline{\;\;\;\;\;\;\;\;\;\;4x -3} \end{equation} \begin{equation}- (\;\;\;\;\;\;\;\;\;\;\;\;4x -8) \end{equation} \begin{equation}\;\;\;\;\;\overline{\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;5} \end{equation} \begin{equation}x + 4 + \frac{5}{x-2} \end{equation} ```python x = randint(3,100) (x**2 + 2*x -3)/(x-2) == x + 4 + (5/(x-2)) ``` True
# Factors 16 can be expressed as - 1 x 16 - 2 x 8 - 4 x 4 Alternatively, 1, 2, 4, 8 are the factors of 16 ## Expressing a polynomial as a product of polynomials \begin{equation}-6x^{2}y^{3} \end{equation} \begin{equation}(2xy^{2})(-3xy) \end{equation} Another example: \begin{equation}(x + 2)(2x^{2} - 3y + 2) = 2x^{3} + 4x^{2} - 3xy + 2x - 6y + 4 \end{equation} Then **x+2** and **2x<sup>2</sup> - 3y + 2** are both factors of **2x<sup>3</sup> + 4x<sup>2</sup> - 3xy + 2x - 6y + 4** ## Greatest common factor | 16 | 24 | |--------|--------| | 1 x 16 | 1 x 24 | | 2 x 8 | 2 x 12 | | 2 x **8** | 3 x **8** | | 4 x 4 | 4 x 6 | 8 is the greatest common divisor of 16 and 24 \begin{equation}15x^{2}y\;\;\;\;\;\;\;\;9xy^{3}\end{equation} What is the greatest common factor of these two polynomials? ## Greatest common factor Look at the coefficients first: they both contain **3** - 3 x 5 = 15 - 3 x 3 = 9 Next look at the ***x*** terms, x<sup>2</sup> and x. Finally look at the ***y*** terms, y and y<sup>3</sup>. The greatest common factor is \begin{equation}3xy\end{equation} ## Greatest common factor As we can see, the greatest common factor always consists of - the greatest common divisor of the coefficients - the smallest exponent of each variable Verify by polynomial division: \begin{equation}\frac{15x^{2}y}{3xy}\;\;\;\;\;\;\;\;\frac{9xy^{3}}{3xy}\end{equation} \begin{equation}3xy(5x) = 15x^{2}y\end{equation} \begin{equation}3xy(3y^{2}) = 9xy^{3}\end{equation} ```python x = randint(1,100) y = randint(1,100) print((3*x*y)*(5*x) == 15*x**2*y) print((3*x*y)*(3*y**2) == 9*x*y**3) ``` True True ## Factoring out the greatest common divisor of the coefficients \begin{equation}6x + 15y \end{equation} \begin{equation}6x + 15y = 3(2x) + 3(5y) \end{equation} \begin{equation}6x + 15y = 3(2x) + 3(5y) = \mathbf{3(2x + 5y)} \end{equation} ```python x = randint(1,100) y = randint(1,100) (6*x + 15*y) == (3*(2*x) + 3*(5*y)) == (3*(2*x + 5*y)) ``` True ## Factoring out the greatest common factor \begin{equation}15x^{2}y + 9xy^{3}\end{equation} \begin{equation}3xy(5x) = 15x^{2}y\end{equation} \begin{equation}3xy(3y^{2}) = 9xy^{3}\end{equation} \begin{equation}15x^{2}y + 9xy^{3} = \mathbf{3xy(5x + 3y^{2})}\end{equation} ```python x = randint(1,100) y = randint(1,100) (15*x**2*y + 9*x*y**3) == (3*x*y*(5*x + 3*y**2)) ``` True ## Factoring with the difference of squares \begin{equation}x^{2} - 9\end{equation} \begin{equation}x^{2} - 3^{2}\end{equation} \begin{equation}(x - 3)(x + 3)\end{equation} ```python x = randint(1,100) (x**2 - 9) == (x - 3)*(x + 3) ``` True ## Factoring with perfect squares \begin{equation}x^{2} + 10x + 25\end{equation} \begin{equation}(x + 5)(x + 5)\end{equation} \begin{equation}(x + 5)^{2}\end{equation} In general, \begin{equation}(a + b)^{2} = a^{2} + b^{2}+ 2ab \end{equation} ```python a = randint(1,100) b = randint(1,100) a**2 + b**2 + (2*a*b) == (a + b)**2 ``` True
# Quadratic Equations \begin{equation}y = 2(x - 1)(x + 2)\end{equation} \begin{equation}y = 2x^{2} + 2x - 4\end{equation} ```python x = np.array(range(-9, 9)) y = 2 * x **2 + 2 * x - 4 plt.plot(x, y, color="grey") # plot the quadratic curve (a parabola) plt.xlabel('x') plt.ylabel('y') plt.grid() plt.axhline() plt.axvline() plt.show() ``` > What shape will this one have? \begin{equation}y = -2x^{2} + 6x + 7\end{equation} ```python x = np.array(range(-8, 12)) y = -2 * x ** 2 + 6 * x + 7 plt.plot(x, y, color="grey") plt.xlabel('x') plt.ylabel('y') plt.grid() plt.axhline() plt.axvline() plt.show() ``` ## The vertex of a parabola \begin{equation}y = ax^{2} + bx + c\end{equation} ***a***, ***b***, ***c*** are coefficients The parabola it produces has a vertex, which is either the highest or the lowest point of the curve. ```python def plot_parabola(a, b, c): vx = (-1*b)/(2*a) # vertex vy = a*vx**2 + b*vx + c minx = int(vx - 10) # plotting range maxx = int(vx + 11) x = np.array(range(minx, maxx)) y = a * x ** 2 + b * x + c miny = y.min() maxy = y.max() plt.plot(x, y, color="grey") # plot the curve plt.xlabel('x') plt.ylabel('y') plt.grid() plt.axhline() plt.axvline() sx = [vx, vx] # draw the axis of symmetry sy = [miny, maxy] plt.plot(sx, sy, color='magenta') plt.scatter(vx,vy, color="red") # plot the vertex ``` ```python plot_parabola(2, 2, -4) plt.show() plot_parabola(-2, 3, 5) plt.show() ``` ## The intercepts of a parabola (the solutions of the quadratic equation) \begin{equation}y = 2(x - 1)(x + 2)\end{equation} \begin{equation}2(x - 1)(x + 2) = 0\end{equation} \begin{equation}x = 1\end{equation} \begin{equation}x = -2\end{equation} ```python # plot the curve plot_parabola(2, 2, -4) # plot the intercepts x1 = -2 x2 = 1 plt.scatter([x1,x2],[0,0], color="green") plt.annotate('x1',(x1, 0)) plt.annotate('x2',(x2, 0)) plt.show() ``` ## The quadratic formula \begin{equation}ax^{2} + bx + c = 0\end{equation} \begin{equation}x = \frac{-b \pm \sqrt{b^{2} - 4ac}}{2a}\end{equation} ```python def plot_parabola_from_formula (a, b, c): plot_parabola(a, b, c) # plot the curve x1 = (-b + (b*b - 4*a*c)**0.5)/(2 * a) x2 = (-b - (b*b - 4*a*c)**0.5)/(2 * a) plt.scatter([x1, x2], [0, 0], color="green") # plot the solutions plt.annotate('x1', (x1, 0)) plt.annotate('x2', (x2, 0)) plt.show() plot_parabola_from_formula (2, -16, 2) ```
# Functions \begin{equation}f(x) = x^{2} + 2\end{equation} \begin{equation}f(3) = 11\end{equation} ```python def f(x): return x**2 + 2 f(3) ``` 11 ```python x = np.array(range(-100, 101)) plt.xlabel('x') plt.ylabel('f(x)') plt.grid() plt.plot(x, f(x), color='purple') plt.show() ``` ## The domain of a function \begin{equation}f(x) = x + 1, \{x \in \rm I\!R\}\end{equation} \begin{equation}g(x) = (\frac{12}{2x})^{2}, \{x \in \rm I\!R\;\;|\;\; x \ne 0 \}\end{equation} In simplified form: \begin{equation}g(x) = (\frac{12}{2x})^{2},\;\; x \ne 0\end{equation} ```python def g(x): if x != 0: return (12/(2*x))**2 x = range(-100, 101) y = [g(a) for a in x] print(g(0.1)) plt.xlabel('x') plt.ylabel('g(x)') plt.grid() plt.plot(x, y, color='purple') # mark the excluded point; if values very close to 0 were included, the shape of the function near that point would become invisible plt.plot(0, g(1), color='purple', marker='o', markerfacecolor='w', markersize=8) plt.show() ``` \begin{equation}h(x) = 2\sqrt{x}, \{x \in \rm I\!R\;\;|\;\; x \ge 0 \}\end{equation} ```python def h(x): if x >= 0: return 2 * np.sqrt(x) x = range(-100, 101) y = [h(a) for a in x] plt.xlabel('x') plt.ylabel('h(x)') plt.grid() plt.plot(x, y, color='purple') # mark the boundary point plt.plot(0, h(0), color='purple', marker='o', markerfacecolor='purple', markersize=8) plt.show() ``` \begin{equation}j(x) = x + 2,\;\; x \ge 0 \text{ and } x \le 5\end{equation} \begin{equation}\{x \in \rm I\!R\;\;|\;\; 0 \le x \le 5 \}\end{equation} ```python def j(x): if x >= 0 and x <= 5: return x + 2 x = range(-100, 101) y = [j(a) for a in x] plt.xlabel('x') plt.ylabel('j(x)') plt.grid() plt.plot(x, y, color='purple') # the two boundary points plt.plot(0, j(0), color='purple', marker='o', markerfacecolor='purple', markersize=8) plt.plot(5, j(5), color='purple', marker='o', markerfacecolor='purple', markersize=8) plt.show() ``` ### Step functions \begin{equation} k(x) = \begin{cases} 0, & \text{if } x = 0, \\ 1, & \text{if } x = 100 \end{cases} \end{equation} ```python def k(x): if x == 0: return 0 elif x == 100: return 1 x = range(-100, 101) y = [k(a) for a in x] plt.xlabel('x') plt.ylabel('k(x)') plt.grid() plt.scatter(x, y, color='purple') plt.show() ``` ### The range of a function \begin{equation}p(x) = x^{2} + 1\end{equation} \begin{equation}\{p(x) \in \rm I\!R\;\;|\;\; p(x) \ge 1 \}\end{equation} ```python def p(x): return x**2 + 1 x = np.array(range(-100, 101)) plt.xlabel('x') plt.ylabel('p(x)') plt.grid() plt.plot(x, p(x), color='purple') plt.show() ```
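As a quick numeric sanity check of the stated range (an illustrative extra cell using the `p` and NumPy array defined above), the smallest value attained on the sampled grid should be 1, reached at x = 0:

```python
# Minimal check: the sampled minimum of p(x) = x**2 + 1 is 1, consistent with p(x) >= 1
print(p(np.array(range(-100, 101))).min())
```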
c5e27801c1566ab5a2e40f8cd6879810463de442
364,996
ipynb
Jupyter Notebook
基础教程/A1-Python与基础知识/数学基础/01_代数.ipynb
microsoft/ai-edu
2f59fa4d3cf19f14e0b291e907d89664bcdc8df3
[ "Apache-2.0" ]
11,094
2019-05-07T02:48:50.000Z
2022-03-31T08:49:42.000Z
基础教程/A1-Python与基础知识/数学基础/01_代数.ipynb
microsoft/ai-edu
2f59fa4d3cf19f14e0b291e907d89664bcdc8df3
[ "Apache-2.0" ]
157
2019-05-13T15:07:19.000Z
2022-03-23T08:52:32.000Z
基础教程/A1-Python与基础知识/数学基础/01_代数.ipynb
microsoft/ai-edu
2f59fa4d3cf19f14e0b291e907d89664bcdc8df3
[ "Apache-2.0" ]
2,412
2019-05-07T02:55:15.000Z
2022-03-30T06:56:52.000Z
145.127634
17,920
0.893119
true
8,695
Qwen/Qwen-72B
1. YES 2. YES
0.870597
0.752013
0.6547
__label__yue_Hant
0.596231
0.359419
# How to define a compartment population model in Compartor $$ \def\n{\mathbf{n}} \def\x{\mathbf{x}} \def\N{\mathbb{\mathbb{N}}} \def\X{\mathbb{X}} \def\NX{\mathbb{\N_0^\X}} \def\C{\mathcal{C}} \def\Jc{\mathcal{J}_c} \def\DM{\Delta M_{c,j}} \newcommand\diff{\mathop{}\!\mathrm{d}} \def\Xc{\mathbf{X}_c} \def\Yc{\mathbf{Y}_c} \newcommand{\muset}[1]{\dot{\{}#1\dot{\}}} $$ Whenever using Compartor in a Jupyter notebook, run the following commands: ```python # initialize sympy printing (for latex output) from sympy import init_printing, Symbol init_printing() # import functions and classes for compartment models from compartor import * ``` ## Usage of the constructor TransitionClass The population dynamics are specified in Compartor through a set of transition classes. These are stoichiometric-like equations whose left-hand and right-hand sides specify how some `Compartments` are modified by the occurrence of a transition. To define a compartment $[\x]$, it is first necessary to define some `Content` variables $\x \in \N_0^D$ that Compartor can interpret as symbols on which to perform symbolic computation. For instance, ```python x = Content('x') y = Content('y') Compartment(x) ``` Content variables are $D$-dimensional, with `x[d]` denoting the copy number of chemical species `d`, for $d=0,1,...,D-1$. Once some content variables have been defined, the fastest way to define a transition class is the constructor `TransitionClass`. For instance, ```python Exit = TransitionClass( [x] -to> {}, 'k_E', name='E') display(Exit) ``` defines a transition class that randomly removes one compartment from the population with rate $k_E$. In particular: * The first argument of `TransitionClass` is the compartment stoichiometry, where the left-hand side and right-hand side are separated by the keyword `-to>`. The notation `[x]` denotes a compartment of content `x`, while `{}` denotes the empty set. * The second argument assigns a name to the rate constant * The optional parameter `name` defines the subscript of the transition propensity Similarly, we can define a transition class that randomly fuses two compartments as follows ```python Fusion = TransitionClass( [x] + [y] -to> [x+y], 'k_F', name='F') display(Fusion) ``` Note that the population dependency of the propensity `h` is automatically inferred with the law of mass action. In the compartment notation we can also use compound expressions inside compartment brackets. In the above example, we have used `x+y` to denote the content formed by adding content vectors `x` and `y`. Content vectors can also be written explicitly as $D$-tuples, listing the copy number of each chemical species $d=0,1,...,D-1$. For example, in the expression `[x + (-1, 0)]`, the tuple `(-1, 0)` denotes a change by $-1$ in chemical species $d=0$ (in a model with $D=2$ species). The expression could be equivalently written as `[(x[0]-1,x[1])]`. We will see more examples of this notation below.
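As a small illustration of the tuple notation (this transition is not part of the case study discussed later, and the rate name `k_b` is just a placeholder), a birth event that adds one molecule of species $d=0$ in a two-species model could be declared like this:

```python
# Hypothetical birth transition, written with explicit D-tuple notation (D = 2):
# one molecule of species d=0 is added to the content of the selected compartment.
Birth = TransitionClass( [x] -to> [x + (1, 0)], 'k_b', name='b')
display(Birth)
```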
### Propensities with content dependency It is possible to tune the propensity as a function of the compartment contents by providing a third argument to `TransitionClass`, such as for the following chemical events given in the example of the paper: ```python Conversion = TransitionClass( [x] -to> [x + (-1,1)], 'k_c', x[0], name='c') Degradation = TransitionClass( [x] -to> [x + (0,-1)], 'k_d', x[1], name='d') display(Conversion, Degradation) ``` The Conversion class transforms the first chemical species (indexed by `0`) to the second type with propensity $k_cx_0$ in any compartment across the population. The Degradation class, instead, removes one molecule of the second chemical species with rate $k_dx_1$, for a given compartment. Some transition classes involve compartments on the product side (i.e. right-hand side) whose content is drawn in probabilistic fashion with respect to the reactant compartments. In such cases, a conditional distribution can be passed as optional argument `pi` in `TransitionClass`. The type of $\pi$ is `OutcomeDistribution`, which is a class comprising * an expression or symbol to use for displaying $\pi$ in compound expressions * a function `expectation` that takes an expression over reactant contents, and returns its expectation over all product compartment variables. There are generators for several predefined outcome distributions. If nothing is specified, as in the above "Exit" transition example, `OutcomeDistribution.Identity()` is used by default. Instead, when the content of product compartments follows a distribution, other generators can be used or created. Compartor currently includes the following `OutcomeDistribution` generators * `Poisson()` * `NegativeBinomial()` * `Uniform()` For example, the model in the paper has an "Intake" transition class where new compartments are created with Poisson-distributed content ```python from sympy import Symbol pi_I = OutcomeDistribution.Poisson(Symbol('\pi_{I}(y; \lambda)'),y[0],Symbol('\lambda')) Intake = TransitionClass( {} -to> [(y[0],0)], 'k_I', pi=pi_I, name='I') display(Intake) ``` ## Model definition The declaration of a model consists in defining a list of transition classes. We provide some examples of model declaration here below. 
### Example: case study shown in the paper ```python x = Content('x') y = Content('y') # Intake Distribution pi_I = OutcomeDistribution.Poisson(Symbol('\pi_{I}(y; \lambda)'),y[0],Symbol('\lambda')) Intake = TransitionClass( {} -to> [(y[0],0)], 'k_I', pi=pi_I, name='I') Fusion = TransitionClass( [x] + [y] -to> [x+y], 'k_F', name='F') Conversion = TransitionClass( [x] -to> [x + (-1,1)], 'k_c', x[0], name='c') Degradation = TransitionClass( [x] -to> [x + (0,-1)], 'k_d', x[1], name='d') transitions = [ Intake, Fusion, Conversion, Degradation] ``` The transition classes stored into the variable `transitions` can be displayed with the function `display_transition_classes()` as follows ```python display_transition_classes(transitions) ``` $\displaystyle \begin{align} \emptyset&\overset{h_{I}}{\longrightarrow}\left[\left( {y}_{0}, \ 0\right)\right] && h_{I} = \pi_{I}(y; \lambda) k_{I}\\\left[x\right] + \left[y\right]&\overset{h_{F}}{\longrightarrow}\left[x + y\right] && h_{F} = \frac{k_{F} \left(n{\left(y \right)} - \delta_{x y}\right) n{\left(x \right)}}{\delta_{x y} + 1}\\\left[x\right]&\overset{h_{c}}{\longrightarrow}\left[\left( -1, \ 1\right) + x\right] && h_{c} = k_{c} n{\left(x \right)} {x}_{0}\\\left[x\right]&\overset{h_{d}}{\longrightarrow}\left[\left( 0, \ -1\right) + x\right] && h_{d} = k_{d} n{\left(x \right)} {x}_{1} \end{align}$ ### Example: nested birth-death process ```python x = Content('x') y = Content('y') # Intake pi_I = OutcomeDistribution.NegativeBinomial(Symbol('\pi_{NB}(y; \lambda)'), y[0],Symbol('r'),Symbol('p')) Intake = TransitionClass( {} -to> [y], 'k_I', pi=pi_I, name='I') Exit = TransitionClass( [x] -to> {}, 'k_E', name='E') Birth = TransitionClass( [x] -to> [x+1], 'k_b', name='b') Death = TransitionClass( [x] -to> [x-1], 'k_d', x[0], name='d') transitions = [Intake, Exit, Birth, Death] display_transition_classes(transitions) ``` $\displaystyle \begin{align} \emptyset&\overset{h_{I}}{\longrightarrow}\left[y\right] && h_{I} = \pi_{NB}(y; \lambda) k_{I}\\\left[x\right]&\overset{h_{E}}{\longrightarrow}\emptyset && h_{E} = k_{E} n{\left(x \right)}\\\left[x\right]&\overset{h_{b}}{\longrightarrow}\left[\left( 1\right) + x\right] && h_{b} = k_{b} n{\left(x \right)}\\\left[x\right]&\overset{h_{d}}{\longrightarrow}\left[\left( -1\right) + x\right] && h_{d} = k_{d} n{\left(x \right)} {x}_{0} \end{align}$ ### Example: coagulation-fragmentation system with intake and export ```python x = Content('x') y = Content('y') pi_I = OutcomeDistribution.Poisson(Symbol("\pi_{Poiss}(y; \lambda)"), y[0], Symbol("\lambda")) pi_F = OutcomeDistribution.Uniform(Symbol("\pi_F(y|x)"), y[0], 0, x[0]) Intake = TransitionClass( {} -to> [y], 'k_I', pi=pi_I, name='I') Exit = TransitionClass( [x] -to> {}, 'k_E', name='E') Coagulation = TransitionClass( [x] + [y] -to> [x+y], 'k_C', name='C') Fragmentation = TransitionClass( [x] -to> [y] + [x-y], 'k_F', g=x[0], pi=pi_F, name='F') transitions = [Intake, Exit, Coagulation, Fragmentation] display_transition_classes(transitions) ``` $\displaystyle \begin{align} \emptyset&\overset{h_{I}}{\longrightarrow}\left[y\right] && h_{I} = \pi_{Poiss}(y; \lambda) k_{I}\\\left[x\right]&\overset{h_{E}}{\longrightarrow}\emptyset && h_{E} = k_{E} n{\left(x \right)}\\\left[x\right] + \left[y\right]&\overset{h_{C}}{\longrightarrow}\left[x + y\right] && h_{C} = \frac{k_{C} \left(n{\left(y \right)} - \delta_{x y}\right) n{\left(x \right)}}{\delta_{x y} + 1}\\\left[x\right]&\overset{h_{F}}{\longrightarrow}\left[x - y\right] + \left[y\right] && h_{F} = \pi_F(y|x) k_{F} n{\left(x 
\right)} {x}_{0} \end{align}$
4da835bcff48e819a3c441b5bbd5e1bc5e5b7206
37,316
ipynb
Jupyter Notebook
(1) HOWTO - define a model.ipynb
zechnerlab/Compartor
93c1b0752b6fdfffddd4f1ac6b9631729eae9a95
[ "BSD-2-Clause" ]
1
2021-02-10T15:56:02.000Z
2021-02-10T15:56:02.000Z
(1) HOWTO - define a model.ipynb
zechnerlab/Compartor
93c1b0752b6fdfffddd4f1ac6b9631729eae9a95
[ "BSD-2-Clause" ]
null
null
null
(1) HOWTO - define a model.ipynb
zechnerlab/Compartor
93c1b0752b6fdfffddd4f1ac6b9631729eae9a95
[ "BSD-2-Clause" ]
1
2021-12-05T11:24:22.000Z
2021-12-05T11:24:22.000Z
82.557522
6,428
0.78599
true
2,649
Qwen/Qwen-72B
1. YES 2. YES
0.884039
0.76908
0.679897
__label__eng_Latn
0.951447
0.41796
# Statistics Fundamentals Statistics is primarily about analyzing data samples, and that starts with understanding the distribution of data in a sample. ## Analyzing Data Distribution A great deal of statistical analysis is based on the way that data values are distributed within the dataset. In this section, we'll explore some statistics that you can use to tell you about the values in a dataset. ### Measures of Central Tendency The term *measures of central tendency* sounds a bit grand, but really it's just a fancy way of saying that we're interested in knowing where the middle value in our data is. For example, suppose we decide to conduct a study into the comparative salaries of people who graduated from the same school. You might record the results like this: | Name | Salary | |----------|-------------| | Dan | 50,000 | | Joann | 54,000 | | Pedro | 50,000 | | Rosie | 189,000 | | Ethan | 55,000 | | Vicky | 40,000 | | Frederic | 59,000 | Now, some of the former students may earn a lot, and others may earn less; but what's the salary in the middle of the range of all salaries? #### Mean A common way to define the central value is to use the *mean*, often called the *average*. This is calculated as the sum of the values in the dataset, divided by the number of observations in the dataset. When the dataset consists of the full population, the mean is represented by the Greek symbol ***&mu;*** (*mu*), and the formula is written like this: \begin{equation}\mu = \frac{\displaystyle\sum_{i=1}^{N}X_{i}}{N}\end{equation} More commonly, when working with a sample, the mean is represented by ***x&#772;*** (*x-bar*), and the formula is written like this (note the lower case letters used to indicate values from a sample): \begin{equation}\bar{x} = \frac{\displaystyle\sum_{i=1}^{n}x_{i}}{n}\end{equation} In the case of our list of salaries, this can be calculated as: \begin{equation}\bar{x} = \frac{50000+54000+50000+189000+55000+40000+59000}{7}\end{equation} Which is **71,000**. >In technical terminology, ***x&#772;*** is a *statistic* (an estimate based on a sample of data) and ***&mu;*** is a *parameter* (a true value based on the entire population). A lot of the time, the parameters for the full population will be impossible (or at the very least, impractical) to measure; so we use statistics obtained from a representative sample to approximate them. In this case, we can use the sample mean of salary for our selection of surveyed students to try to estimate the actual average salary of all students who graduate from our school. In Python, when working with data in a *pandas.dataframe*, you can use the ***mean*** function, like this: ```python import pandas as pd df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Salary':[50000,54000,50000,189000,55000,40000,59000]}) print (df['Salary'].mean()) ``` So, is **71,000** really the central value? Or put another way, would it be reasonable for a graduate of this school to expect to earn $71,000? After all, that's the average salary of a graduate from this school. If you look closely at the salaries, you can see that out of the seven former students, six earn less than the mean salary. The data is *skewed* by the fact that Rosie has clearly managed to find a much higher-paid job than her classmates. #### Median OK, let's see if we can find another definition for the central value that more closely reflects the expected earning potential of students attending our school.
Another measure of central tendency we can use is the *median*. To calculate the median, we need to sort the values into ascending order and then find the middle-most value. When there are an odd number of observations, you can find the position of the median value using this formula (where *n* is the number of observations): \begin{equation}\frac{n+1}{2}\end{equation} Remember that this formula returns the *position* of the median value in the sorted list; not the value itself. If the number of observations is even, then things are a little (but not much) more complicated. In this case you calculate the median as the average of the two middle-most values, which are found like this: \begin{equation}\frac{n}{2} \;\;\;\;and \;\;\;\; \frac{n}{2} + 1\end{equation} So, for our graduate salaries, first let's sort the dataset: | Salary | |-------------| | 40,000 | | 50,000 | | 50,000 | | 54,000 | | 55,000 | | 59,000 | | 189,000 | There's an odd number of observations (7), so the median value is at position (7 + 1) &div; 2; in other words, position 4: | Salary | |-------------| | 40,000 | | 50,000 | | 50,000 | |***>54,000*** | | 55,000 | | 59,000 | | 189,000 | So the median salary is **54,000**. The *pandas.dataframe* class in Python has a ***median*** function to find the median: ```python import pandas as pd df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Salary':[50000,54000,50000,189000,55000,40000,59000]}) print (df['Salary'].median()) ``` #### Mode Another related statistic is the *mode*, which indicates the most frequently occurring value. If you think about it, this is potentially a good indicator of how much a student might expect to earn when they graduate from the school; out of all the salaries that are being earned by former students, the modal salary is earned by more students than any other. Looking at our list of salaries, there are two instances of former students earning **50,000**, but only one instance each for all other salaries: | Salary | |-------------| | 40,000 | |***>50,000***| |***>50,000***| | 54,000 | | 55,000 | | 59,000 | | 189,000 | The mode is therefore **50,000**. As you might expect, the *pandas.dataframe* class has a ***mode*** function to return the mode: ```python import pandas as pd df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Salary':[50000,54000,50000,189000,55000,40000,59000]}) print (df['Salary'].mode()) ``` ##### Multimodal Data It's not uncommon for a set of data to have more than one value as the mode. For example, suppose Ethan receives a raise that takes his salary to **59,000**: | Salary | |-------------| | 40,000 | |***>50,000***| |***>50,000***| | 54,000 | |***>59,000***| |***>59,000***| | 189,000 | Now there are two values with the highest frequency. This dataset is *bimodal*. More generally, when there is more than one mode value, the data is considered *multimodal*. The *pandas.dataframe.**mode*** function returns all of the modes: ```python import pandas as pd df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Salary':[50000,54000,50000,189000,59000,40000,59000]}) print (df['Salary'].mode()) ``` 0 50000 1 59000 dtype: int64 ### Distribution and Density Now we know something about finding the center, we can start to explore how the data is distributed around it.
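Before moving on, here is a minimal pure-Python sketch (added for illustration; it is not part of the original notebook) showing how the three measures of central tendency we just covered map to code, without relying on pandas:

```python
# Mean, median and mode computed "by hand" for the salary sample
salaries = [50000, 54000, 50000, 189000, 55000, 40000, 59000]

# Mean: sum of the values divided by the number of observations
mean = sum(salaries) / len(salaries)

# Median: middle value of the sorted list (valid here because n is odd)
ordered = sorted(salaries)
median = ordered[(len(ordered) - 1) // 2]

# Mode: the value (or values) that occur most frequently
counts = {s: salaries.count(s) for s in set(salaries)}
highest = max(counts.values())
modes = sorted(s for s, c in counts.items() if c == highest)

print(mean, median, modes)   # 71000.0 54000 [50000]
```

The results match the pandas ***mean***, ***median***, and ***mode*** functions used above.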
What we're interested in here is understanding the general "shape" of the data distribution so that we can begin to get a feel for what a 'typical' value might be expected to be. We can start by finding the extremes - the minimum and maximum. In the case of our salary data, the lowest paid graduate from our school is Vicky, with a salary of **40,000**; and the highest-paid graduate is Rosie, with **189,000**. The *pandas.dataframe* class has ***min*** and ***max*** functions to return these values. Run the following code to compare the minimum and maximum salaries to the central measures we calculated previously: ```python import pandas as pd df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Salary':[50000,54000,50000,189000,55000,40000,59000]}) print ('Min: ' + str(df['Salary'].min())) print ('Mode: ' + str(df['Salary'].mode()[0])) print ('Median: ' + str(df['Salary'].median())) print ('Mean: ' + str(df['Salary'].mean())) print ('Max: ' + str(df['Salary'].max())) ``` We can examine these values, and get a sense for how the data is distributed - for example, we can see that the *mean* is closer to the max than the *median*, and that both are closer to the *min* than to the *max*. However, it's generally easier to get a sense of the distribution by visualizing the data. Let's start by creating a histogram of the salaries, highlighting the *mean* and *median* salaries (the *min*, *max* are fairly self-evident, and the *mode* is wherever the highest bar is): ```python %matplotlib inline import pandas as pd import matplotlib.pyplot as plt df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Salary':[50000,54000,50000,189000,55000,40000,59000]}) salary = df['Salary'] salary.plot.hist(title='Salary Distribution', color='lightblue', bins=25) plt.axvline(salary.mean(), color='magenta', linestyle='dashed', linewidth=2) plt.axvline(salary.median(), color='green', linestyle='dashed', linewidth=2) plt.show() ``` The <span style="color:magenta">***mean***</span> and <span style="color:green">***median***</span> are shown as dashed lines. Note the following: - *Salary* is a continuous data value - graduates could potentially earn any value along the scale, even down to a fraction of cent. - The number of bins in the histogram determines the size of each salary band for which we're counting frequencies. Fewer bins means merging more individual salaries together to be counted as a group. - The majority of the data is on the left side of the histogram, reflecting the fact that most graduates earn between 40,000 and 55,000 - The mean is a higher value than the median and mode. - There are gaps in the histogram for salary bands that nobody earns. The histogram shows the relative frequency of each salary band, based on the number of bins. It also gives us a sense of the *density* of the data for each point on the salary scale. With enough data points, and small enough bins, we could view this density as a line that shows the shape of the data distribution. 
Run the following cell to show the density of the salary data as a line on top of the histogram: ```python %matplotlib inline import pandas as pd import matplotlib.pyplot as plt import numpy as np import scipy.stats as stats df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Salary':[50000,54000,50000,189000,55000,40000,59000]}) salary = df['Salary'] density = stats.gaussian_kde(salary) n, x, _ = plt.hist(salary, histtype='step', normed=True, bins=25) plt.plot(x, density(x)*5) plt.axvline(salary.mean(), color='magenta', linestyle='dashed', linewidth=2) plt.axvline(salary.median(), color='green', linestyle='dashed', linewidth=2) plt.show() ``` Note that the density line takes the form of an asymmetric curve that has a "peak" on the left and a long tail on the right. We describe this sort of data distribution as being *skewed*; that is, the data is not distributed symmetrically but "bunched together" on one side. In this case, the data is bunched together on the left, creating a long tail on the right; and is described as being *right-skewed* because some infrequently occurring high values are pulling the *mean* to the right. Let's take a look at another set of data. We know how much money our graduates make, but how many hours per week do they need to work to earn their salaries? Here's the data: | Name | Hours | |----------|-------| | Dan | 41 | | Joann | 40 | | Pedro | 36 | | Rosie | 30 | | Ethan | 35 | | Vicky | 39 | | Frederic | 40 | Run the following code to show the distribution of the hours worked: ```python %matplotlib inline import pandas as pd import matplotlib.pyplot as plt import numpy as np import scipy.stats as stats df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Hours':[41,40,36,30,35,39,40]}) hours = df['Hours'] density = stats.gaussian_kde(hours) n, x, _ = plt.hist(hours, histtype='step', normed=True, bins=25) plt.plot(x, density(x)*7) plt.axvline(hours.mean(), color='magenta', linestyle='dashed', linewidth=2) plt.axvline(hours.median(), color='green', linestyle='dashed', linewidth=2) plt.show() ``` Once again, the distribution is skewed, but this time it's **left-skewed**. Note that the curve is asymmetric with the <span style="color:magenta">***mean***</span> to the left of the <span style="color:green">***median***</span> and the *mode*; and the average weekly working hours skewed to the lower end. Once again, Rosie seems to be getting the better of the deal. She earns more than her former classmates for working fewer hours. Maybe a look at the test scores the students achieved on their final grade at school might help explain her success: | Name | Grade | |----------|-------| | Dan | 50 | | Joann | 50 | | Pedro | 46 | | Rosie | 95 | | Ethan | 50 | | Vicky | 5 | | Frederic | 57 | Let's take a look at the distribution of these grades: ```python %matplotlib inline import pandas as pd import matplotlib.pyplot as plt import numpy as np import scipy.stats as stats df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Grade':[50,50,46,95,50,5,57]}) grade = df['Grade'] density = stats.gaussian_kde(grade) n, x, _ = plt.hist(grade, histtype='step', normed=True, bins=25) plt.plot(x, density(x)*7.5) plt.axvline(grade.mean(), color='magenta', linestyle='dashed', linewidth=2) plt.axvline(grade.median(), color='green', linestyle='dashed', linewidth=2) plt.show() ``` This time, the distribution is symmetric, forming a "bell-shaped" curve. 
The <span style="color:magenta">***mean***</span>, <span style="color:green">***median***</span>, and mode are at the same location, and the data tails off evenly on both sides from a central peak. Statisticians call this a *normal* distribution (or sometimes a *Gaussian* distribution), and it occurs quite commonly in many scenarios due to something called the *Central Limit Theorem*, which reflects the way continuous probability works - more about that later. #### Skewness and Kurtosis You can measure *skewness* (in which direction the data is skewed and to what degree) and kurtosis (how "peaked" the data is) to get an idea of the shape of the data distribution. In Python, you can use the ***skew*** and ***kurt*** functions to find this: ```python %matplotlib inline import pandas as pd import numpy as np from matplotlib import pyplot as plt import scipy.stats as stats df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Salary':[50000,54000,50000,189000,55000,40000,59000], 'Hours':[41,40,36,30,35,39,40], 'Grade':[50,50,46,95,50,5,57]}) numcols = ['Salary', 'Hours', 'Grade'] for col in numcols: print(df[col].name + ' skewness: ' + str(df[col].skew())) print(df[col].name + ' kurtosis: ' + str(df[col].kurt())) density = stats.gaussian_kde(df[col]) n, x, _ = plt.hist(df[col], histtype='step', normed=True, bins=25) plt.plot(x, density(x)*6) plt.show() print('\n') ``` Now let's look at the distribution of a real dataset - let's see how the heights of the father's measured in Galton's study of parent and child heights are distributed: ```python %matplotlib inline import pandas as pd import matplotlib.pyplot as plt import numpy as np import scipy.stats as stats import statsmodels.api as sm df = sm.datasets.get_rdataset('GaltonFamilies', package='HistData').data fathers = df['father'] density = stats.gaussian_kde(fathers) n, x, _ = plt.hist(fathers, histtype='step', normed=True, bins=50) plt.plot(x, density(x)*2.5) plt.axvline(fathers.mean(), color='magenta', linestyle='dashed', linewidth=2) plt.axvline(fathers.median(), color='green', linestyle='dashed', linewidth=2) plt.show() ``` As you can see, the father's height measurements are approximately normally distributed - in other words, they form a more or less *normal* distribution that is symmetric around the mean. ### Measures of Variance We can see from the distribution plots of our data that the values in our dataset can vary quite widely. We can use various measures to quantify this variance. #### Range A simple way to quantify the variance in a dataset is to identify the difference between the lowest and highest values. This is called the *range*, and is calculated by subtracting the minimim value from the maximum value. The following Python code creates a single Pandas dataframe for our school graduate data, and calculates the *range* for each of the numeric features: ```python import pandas as pd df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Salary':[50000,54000,50000,189000,55000,40000,59000], 'Hours':[41,40,36,30,35,39,40], 'Grade':[50,50,46,95,50,5,57]}) numcols = ['Salary', 'Hours', 'Grade'] for col in numcols: print(df[col].name + ' range: ' + str(df[col].max() - df[col].min())) ``` #### Percentiles and Quartiles The range is easy to calculate, but it's not a particularly useful statistic. 
For example, a range of 149,000 between the lowest and highest salary does not tell us which value within that range a graduate is most likely to earn - it tells us nothing about how the salaries are distributed around the mean within that range. The range tells us very little about the comparative position of an individual value within the distribution - for example, Frederic scored 57 in his final grade at school, which is a pretty good score (it's more than all but one of his classmates scored); but this isn't immediately apparent from a score of 57 and range of 90. ##### Percentiles A percentile tells us where a given value is ranked in the overall distribution. For example, 25% of the data in a distribution has a value lower than the 25th percentile; 75% of the data has a value lower than the 75th percentile, and so on. Note that half of the data has a value lower than the 50th percentile - so the 50th percentile is also the median! Let's examine Frederic's grade using this approach. We know he scored 57, but how does he rank compared to his fellow students? Well, there are seven students in total, and five of them scored less than Frederic; so we can calculate the percentile for Frederic's grade like this: \begin{equation}\frac{5}{7} \times 100 \approx 71.4\end{equation} So Frederic's score puts him at the 71.4th percentile in his class. In Python, you can use the ***percentileofscore*** function in the *scipy.stats* package to calculate the percentile for a given value in a set of values: ```python import pandas as pd from scipy import stats df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Salary':[50000,54000,50000,189000,55000,40000,59000], 'Hours':[41,40,36,30,35,39,40], 'Grade':[50,50,46,95,50,5,57]}) print(stats.percentileofscore(df['Grade'], 57, 'strict')) ``` We've used the strict definition of percentile; but sometimes it's calculated as being the percentage of values that are less than *or equal to* the value you're comparing. In this case, the calculation for Frederic's percentile would include his own score: \begin{equation}\frac{6}{7} \times 100 \approx 85.7\end{equation} You can calculate this way in Python by using the ***weak*** mode of the ***percentileofscore*** function: ```python import pandas as pd from scipy import stats df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Salary':[50000,54000,50000,189000,55000,40000,59000], 'Hours':[41,40,36,30,35,39,40], 'Grade':[50,50,46,95,50,5,57]}) print(stats.percentileofscore(df['Grade'], 57, 'weak')) ``` We've considered the percentile of Frederic's grade, and used it to rank him compared to his fellow students. So what about Dan, Joann, and Ethan? How do they compare to the rest of the class? They scored the same grade (50), so in a sense they share a percentile. To deal with this *grouped* scenario, we can average the percentage rankings for the matching scores. We treat half of the scores matching the one we're ranking as if they are below it, and half as if they are above it. In this case, there were three matching scores of 50, and for each of these we calculate the percentile as if 1 was below and 1 was above.
So the calculation for a percentile for Joann based on scores being less than or equal to 50 is: \begin{equation}(\frac{4}{7}) \times 100 \approx 57.14\end{equation} The value of **4** consists of the two scores that are below Joann's score of 50, Joann's own score, and half of the scores that are the same as Joann's (of which there are two, so we count one). In Python, the ***percentileofscore*** function has a ***rank*** function that calculates grouped percentiles like this: ```python import pandas as pd from scipy import stats df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Salary':[50000,54000,50000,189000,55000,40000,59000], 'Hours':[41,40,36,30,35,39,40], 'Grade':[50,50,46,95,50,5,57]}) print(stats.percentileofscore(df['Grade'], 50, 'rank')) ``` ##### Quartiles Rather than using individual percentiles to compare data, we can consider the overall spread of the data by dividing those percentiles into four *quartiles*. The first quartile contains the values from the minimum to the 25th percentile, the second from the 25th percentile to the 50th percentile (which is the median), the third from the 50th percentile to the 75th percentile, and the fourth from the 75th percentile to the maximum. In Python, you can use the ***quantile*** function of the *pandas.dataframe* class to find the threshold values at the 25th, 50th, and 75th percentiles (*quantile* is a generic term for a ranked position, such as a percentile or quartile). Run the following code to find the quartile thresholds for the weekly hours worked by our former students: ```python # Quartiles import pandas as pd df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Salary':[50000,54000,50000,189000,55000,40000,59000], 'Hours':[41,40,36,17,35,39,40], 'Grade':[50,50,46,95,50,5,57]}) print(df['Hours'].quantile([0.25, 0.5, 0.75])) ``` 0.25 35.5 0.50 39.0 0.75 40.0 Name: Hours, dtype: float64 Its usually easier to understand how data is distributed across the quartiles by visualizing it. You can use a histogram, but many data scientists use a kind of visualization called a *box plot* (or a *box and whiskers* plot). Let's create a box plot for the weekly hours: ```python %matplotlib inline import pandas as pd from matplotlib import pyplot as plt df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Salary':[50000,54000,50000,189000,55000,40000,59000], 'Hours':[41,40,36,30,35,39,40], 'Grade':[50,50,46,95,50,5,57]}) # Plot a box-whisker chart df['Hours'].plot(kind='box', title='Weekly Hours Distribution', figsize=(10,8)) plt.show() ``` The box plot consists of: - A rectangular *box* that shows where the data between the 25th and 75th percentile (the second and third quartile) lie. This part of the distribution is often referred to as the *interquartile range* - it contains the middle 50 data values. - *Whiskers* that extend from the box to the bottom of the first quartile and the top of the fourth quartile to show the full range of the data. - A line in the box that shows that location of the median (the 50th percentile, which is also the threshold between the second and third quartile) In this case, you can see that the interquartile range is between 35 and 40, with the median nearer the top of that range. The range of the first quartile is from around 30 to 35, and the fourth quartile is from 40 to 41. 
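To tie the quartile thresholds to the parts of the box plot, here is a small NumPy sketch (added for illustration) that computes the same values directly from the weekly hours used in the box plot cell:

```python
import numpy as np

# Weekly hours as used in the box plot cell above
hours = np.array([41, 40, 36, 30, 35, 39, 40])

q1, median, q3 = np.percentile(hours, [25, 50, 75])
iqr = q3 - q1   # height of the "box": the interquartile range

print('Q1 (25th percentile):', q1)          # 35.5
print('Median (50th percentile):', median)  # 39.0
print('Q3 (75th percentile):', q3)          # 40.0
print('Interquartile range:', iqr)          # 4.5
```

The box in the plot spans Q1 to Q3, and the line inside it sits at the median.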
#### Outliers Let's take a look at another box plot - this time showing the distribution of the salaries earned by our former classmates: ```python %matplotlib inline import pandas as pd from matplotlib import pyplot as plt df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Salary':[50000,54000,50000,189000,55000,40000,59000], 'Hours':[41,40,36,30,35,39,40], 'Grade':[50,50,46,95,50,5,57]}) # Plot a box-whisker chart df['Salary'].plot(kind='box', title='Salary Distribution', figsize=(10,8)) plt.show() ``` So what's going on here? Well, as we've already noticed, Rosie earns significantly more than her former classmates. So much more in fact, that her salary has been identified as an *outlier*. An outlier is a value that is so far from the center of the distribution compared to other values that it skews the distribution by affecting the mean. There are all sorts of reasons that you might have outliers in your data, including data entry errors, failures in sensors or data-generating equipment, or genuinely anomalous values. So what should we do about it? This really depends on the data, and what you're trying to use it for. In this case, let's assume we're trying to figure out what's a reasonable expectation of salary for a graduate of our school to earn. Ignoring for the moment that we have an extremely small dataset on which to base our judgement, it looks as if Rosie's salary could be either an error (maybe she mis-typed it in the form used to collect data) or a genuine anomaly (maybe she became a professional athlete or landed some other extremely highly paid job). Either way, it doesn't seem to represent a salary that a typical graduate might earn. Let's see what the distribution of the data looks like without the outlier: ```python %matplotlib inline import pandas as pd from matplotlib import pyplot as plt df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Salary':[50000,54000,50000,189000,55000,40000,59000], 'Hours':[41,40,36,17,35,39,40], 'Grade':[50,50,46,95,50,5,57]}) # Plot a box-whisker chart df['Salary'].plot(kind='box', title='Salary Distribution', figsize=(10,8), showfliers=False) plt.show() ``` Now it looks like there's a more even distribution of salaries. It's still not quite symmetrical, but there's much less overall variance. There's potentially some cause here to disregard Rosie's salary data when we compare the salaries, as it is tending to skew the analysis. So is that OK? Can we really just ignore a data value we don't like? Again, it depends on what you're analyzing. Let's take a look at the distribution of final grades: ```python %matplotlib inline import pandas as pd from matplotlib import pyplot as plt df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Salary':[50000,54000,50000,189000,55000,40000,59000], 'Hours':[41,40,36,17,35,39,40], 'Grade':[50,50,46,95,50,5,57]}) # Plot a box-whisker chart df['Grade'].plot(kind='box', title='Grade Distribution', figsize=(10,8)) plt.show() ``` Once again there are outliers, this time at both ends of the distribution. However, think about what this data represents. If we assume that the grade for the final test is based on a score out of 100, it seems reasonable to expect that some students will score very low (maybe even 0) and some will score very well (maybe even 100); but most will get a score somewhere in the middle.
The reason that the low and high scores here look like outliers might just be because we have so few data points. Let's see what happens if we include a few more students in our data: ```python %matplotlib inline import pandas as pd from matplotlib import pyplot as plt df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic', 'Jimmie', 'Rhonda', 'Giovanni', 'Francesca', 'Rajab', 'Naiyana', 'Kian', 'Jenny'], 'Grade':[50,50,46,95,50,5,57,42,26,72,78,60,40,17,85]}) # Plot a box-whisker chart df['Grade'].plot(kind='box', title='Grade Distribution', figsize=(10,8)) plt.show() ``` With more data, there are some more high and low scores; so we no longer consider the isolated cases to be outliers. The key point to take away here is that you need to really understand the data and what you're trying to do with it, and you need to ensure that you have a reasonable sample size, before determining what to do with outlier values. #### Variance and Standard Deviation We've seen how to understand the *spread* of our data distribution using the range, percentiles, and quartiles; and we've seen the effect of outliers on the distribution. Now it's time to look at how to measure the amount of variance in the data. ##### Variance Variance is measured as the average of the squared difference from the mean. For a full population, it's indicated by a squared Greek letter *sigma* (***&sigma;<sup>2</sup>***) and calculated like this: \begin{equation}\sigma^{2} = \frac{\displaystyle\sum_{i=1}^{N} (X_{i} -\mu)^{2}}{N}\end{equation} For a sample, it's indicated as ***s<sup>2</sup>*** calculated like this: \begin{equation}s^{2} = \frac{\displaystyle\sum_{i=1}^{n} (x_{i} -\bar{x})^{2}}{n-1}\end{equation} In both cases, we sum the difference between the individual data values and the mean and square the result. Then, for a full population we just divide by the number of data items to get the average. When using a sample, we divide by the total number of items **minus 1** to correct for sample bias. Let's work this out for our student grades (assuming our data is a sample from the larger student population). First, we need to calculate the mean grade: \begin{equation}\bar{x} = \frac{50+50+46+95+50+5+57}{7}\approx 50.43\end{equation} Then we can plug that into our formula for the variance: \begin{equation}s^{2} = \frac{(50-50.43)^{2}+(50-50.43)^{2}+(46-50.43)^{2}+(95-50.43)^{2}+(50-50.43)^{2}+(5-50.43)^{2}+(57-50.43)^{2}}{7-1}\end{equation} So: \begin{equation}s^{2} = \frac{0.185+0.185+19.625+1986.485+0.185+2063.885+43.165}{6}\end{equation} Which simplifies to: \begin{equation}s^{2} = \frac{4113.715}{6}\end{equation} Giving the result: \begin{equation}s^{2} \approx 685.619\end{equation} The higher the variance, the more spread your data is around the mean. In Python, you can use the ***var*** function of the *pandas.dataframe* class to calculate the variance of a column in a dataframe: ```python import pandas as pd df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Salary':[50000,54000,50000,189000,55000,40000,59000], 'Hours':[41,40,36,17,35,39,40], 'Grade':[50,50,46,95,50,5,57]}) print(df['Grade'].var()) ``` ##### Standard Deviation To calculate the variance, we squared the difference of each value from the mean. If we hadn't done this, the numerator of our fraction would always end up being zero (because the mean is at the center of our values). 
However, this means that the variance is not in the same unit of measurement as our data - in our case, since we're calculating the variance for grade points, it's in grade points squared; which is not very helpful. To get the measure of variance back into the same unit of measurement, we need to find its square root: \begin{equation}s = \sqrt{685.619} \approx 26.184\end{equation} So what does this value represent? It's the *standard deviation* for our grades data. More formally, it's calculated like this for a full population: \begin{equation}\sigma = \sqrt{\frac{\displaystyle\sum_{i=1}^{N} (X_{i} -\mu)^{2}}{N}}\end{equation} Or like this for a sample: \begin{equation}s = \sqrt{\frac{\displaystyle\sum_{i=1}^{n} (x_{i} -\bar{x})^{2}}{n-1}}\end{equation} Note that in both cases, it's just the square root of the corresponding variance formula! In Python, you can calculate it using the ***std*** function: ```python import pandas as pd df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Salary':[50000,54000,50000,189000,55000,40000,59000], 'Hours':[41,40,36,17,35,39,40], 'Grade':[50,50,46,95,50,5,57]}) print(df['Grade'].std()) ``` #### Standard Deviation in a Normal Distribution In statistics and data science, we spend a lot of time considering *normal* distributions, because they occur so frequently. The standard deviation has an important role to play in a normal distribution. Run the following cell to show a histogram of a *standard normal* distribution (which is a distribution with a mean of 0 and a standard deviation of 1): ```python %matplotlib inline import pandas as pd import matplotlib.pyplot as plt import numpy as np import scipy.stats as stats # Create a random standard normal distribution df = pd.DataFrame(np.random.randn(100000, 1), columns=['Grade']) # Plot the distribution as a histogram with a density curve grade = df['Grade'] density = stats.gaussian_kde(grade) n, x, _ = plt.hist(grade, color='lightgrey', density=True, bins=100) plt.plot(x, density(x)) # Get the mean and standard deviation s = df['Grade'].std() m = df['Grade'].mean() # Annotate 1 stdev x1 = [m-s, m+s] y1 = [0.25, 0.25] plt.plot(x1,y1, color='magenta') plt.annotate('1s (68.26%)', (x1[1],y1[1])) # Annotate 2 stdevs x2 = [m-(s*2), m+(s*2)] y2 = [0.05, 0.05] plt.plot(x2,y2, color='green') plt.annotate('2s (95.45%)', (x2[1],y2[1])) # Annotate 3 stdevs x3 = [m-(s*3), m+(s*3)] y3 = [0.005, 0.005] plt.plot(x3,y3, color='orange') plt.annotate('3s (99.73%)', (x3[1],y3[1])) # Show the location of the mean plt.axvline(grade.mean(), color='grey', linestyle='dashed', linewidth=1) plt.show() ``` The horizontal colored lines show the percentage of data within 1, 2, and 3 standard deviations of the mean (plus or minus). In any normal distribution: - Approximately 68.26% of values fall within one standard deviation from the mean. - Approximately 95.45% of values fall within two standard deviations from the mean. - Approximately 99.73% of values fall within three standard deviations from the mean. #### Z Score So in a normal (or close to normal) distribution, standard deviation provides a way to evaluate how far from a mean a given range of values falls, allowing us to compare where a particular value lies within the distribution. For example, suppose Rosie tells you she was the highest scoring student among her friends - that doesn't really help us assess how well she scored. She may have scored only a fraction of a point above the second-highest scoring student.
Even if we know she was in the top quartile, if we don't know how the rest of the grades are distributed it's still not clear how well she performed compared to her friends. However, if she tells you how many standard deviations higher than the mean her score was, this will help you compare her score to that of her classmates. So how do we know how many standard deviations above or below the mean a particular value is? We call this a *Z Score*, and it's calculated like this for a full population: \begin{equation}Z = \frac{x - \mu}{\sigma}\end{equation} or like this for a sample: \begin{equation}Z = \frac{x - \bar{x}}{s}\end{equation} So, let's examine Rosie's grade of 95. Now that we know the *mean* grade is 50.43 and the *standard deviation* is 26.184, we can calculate the Z Score for this grade like this: \begin{equation}Z = \frac{95 - 50.43}{26.184} = 1.702\end{equation}. So Rosie's grade is 1.702 standard deviations above the mean. ### Summarizing Data Distribution in Python We've seen how to obtain individual statistics in Python, but you can also use the ***describe*** function to retrieve summary statistics for all numeric columns in a dataframe. These summary statistics include many of the statistics we've examined so far (note that the *median* isn't labeled as such, but it appears as the *50%* percentile row): ```python import pandas as pd df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Salary':[50000,54000,50000,189000,55000,40000,59000], 'Hours':[41,40,36,17,35,39,40], 'Grade':[50,50,46,95,50,5,57]}) print(df.describe()) ```
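As a final check, here is a short sketch (added for illustration) that reproduces Rosie's Z score from the sample mean and standard deviation:

```python
import pandas as pd

df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],
                   'Grade': [50, 50, 46, 95, 50, 5, 57]})

grades = df['Grade']
# pandas uses the sample formulas (dividing by n - 1) for .var() and .std()
z = (95 - grades.mean()) / grades.std()
print(round(z, 3))   # approximately 1.702
```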
# Superposition Kata **Superposition** quantum kata is a series of exercises designed to get you familiar with the concept of superposition and with programming in Q#. It covers the following topics: * basic single-qubit and multi-qubit gates, * superposition, * flow control and recursion in Q#. It is recommended to complete the [BasicGates kata](./../BasicGates/BasicGates.ipynb) before this one to get familiar with the basic gates used in quantum computing. The list of basic gates available in Q# can be found at [Microsoft.Quantum.Intrinsic](https://docs.microsoft.com/qsharp/api/qsharp/microsoft.quantum.intrinsic). Each task is wrapped in one operation preceded by the description of the task. Your goal is to fill in the blank (marked with `// ...` comments) with some Q# code that solves the task. To verify your answer, run the cell using Ctrl/⌘+Enter. The tasks are given in approximate order of increasing difficulty; harder ones are marked with asterisks. To begin, first prepare this notebook for execution (if you skip this step, you'll get "Syntax does not match any known patterns" error when you try to execute Q# code in the next cells): ```qsharp %package Microsoft.Quantum.Katas::0.10.1911.1607 ``` <ul><li>Microsoft.Quantum.Standard::0.10.1911.1607</li><li>Microsoft.Quantum.Katas::0.10.1911.1607</li></ul> > The package versions in the output of the cell above should always match. If you are running the Notebooks locally and the versions do not match, please install the IQ# version that matches the version of the `Microsoft.Quantum.Katas` package. > <details> > <summary><u>How to install the right IQ# version</u></summary> > For example, if the version of `Microsoft.Quantum.Katas` package above is 0.1.2.3, the installation steps are as follows: > > 1. Stop the kernel. > 2. Uninstall the existing version of IQ#: > dotnet tool uninstall microsoft.quantum.iqsharp -g > 3. Install the matching version: > dotnet tool install microsoft.quantum.iqsharp -g --version 0.1.2.3 > 4. Reinstall the kernel: > dotnet iqsharp install > 5. Restart the Notebook. > </details> ### <a name="plus-state"></a> Task 1. Plus state. **Input:** A qubit in the $|0\rangle$ state. **Goal:** Change the state of the qubit to $|+\rangle = \frac{1}{\sqrt{2}} \big(|0\rangle + |1\rangle\big)$. ```qsharp %kata T01_PlusState_Test operation PlusState (q : Qubit) : Unit { H(q); } ``` The desired state: # wave function for qubits with ids (least to most significant): 0 ∣0❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣1❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] The actual state: # wave function for qubits with ids (least to most significant): 0 ∣0❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣1❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] Test case passed Success! *Can't come up with a solution? See the explained solution in the [Superposition Workbook](./Workbook_Superposition.ipynb#plus-state).* ### <a name="minus-state"></a> Task 2. Minus state. **Input**: A qubit in the $|0\rangle$ state. **Goal**: Change the state of the qubit to $|-\rangle = \frac{1}{\sqrt{2}} \big(|0\rangle - |1\rangle\big)$. 
```qsharp %kata T02_MinusState_Test operation MinusState (q : Qubit) : Unit { H(q); Z(q); } ``` The desired state: # wave function for qubits with ids (least to most significant): 0 ∣0❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣1❭: -0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 3.14159 rad ] The actual state: # wave function for qubits with ids (least to most significant): 0 ∣0❭: -0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 3.14159 rad ] ∣1❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] Test case passed Success! *Can't come up with a solution? See the explained solution in the [Superposition Workbook](./Workbook_Superposition.ipynb#minus-state).* ### <a name="unequal-superposition"></a> Task 3*. Unequal superposition. **Inputs:** 1. A qubit in the $|0\rangle$ state. 2. Angle $\alpha$, in radians, represented as `Double`. **Goal** : Change the state of the qubit to $\cos{α} |0\rangle + \sin{α} |1\rangle$. <br/> <details> <summary><b>Need a hint? Click here</b></summary> Experiment with rotation gates from Microsoft.Quantum.Intrinsic namespace. Note that all rotation operators rotate the state by <i>half</i> of its angle argument. </details> ```qsharp %kata T03_UnequalSuperposition_Test operation UnequalSuperposition (q : Qubit, alpha : Double) : Unit { Ry(2.0 * alpha, q); } ``` The desired state for α = 0.5 π # wave function for qubits with ids (least to most significant): 0 ∣0❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣1❭: 1.000000 + 0.000000 i == ******************** [ 1.000000 ] --- [ 0.00000 rad ] The actual state: # wave function for qubits with ids (least to most significant): 0 ∣0❭: 0.000000 + 0.000000 i == * [ 0.000000 ] --- [ 0.00000 rad ] ∣1❭: 1.000000 + 0.000000 i == ******************** [ 1.000000 ] --- [ 0.00000 rad ] Test case passed The desired state for α = 0.25 π # wave function for qubits with ids (least to most significant): 0 ∣0❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣1❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] The actual state: # wave function for qubits with ids (least to most significant): 0 ∣0❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣1❭: 0.707107 + 0.000000 i == ********** [ 0.500000 ] --- [ 0.00000 rad ] Test case passed The desired state for α = 0.75 π # wave function for qubits with ids (least to most significant): 0 ∣0❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣1❭: -0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 3.14159 rad ] The actual state: # wave function for qubits with ids (least to most significant): 0 ∣0❭: -0.707107 + 0.000000 i == ********** [ 0.500000 ] --- [ 3.14159 rad ] ∣1❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] Test case passed Testing on hidden test cases... Success! *Can't come up with a solution? See the explained solution in the [Superposition Workbook](./Workbook_Superposition.ipynb#unequal-superposition).* ### <a name="superposition-of-all-basis-vectors-on-two-qubits"></a>Task 4. Superposition of all basis vectors on two qubits. **Input:** Two qubits in the $|00\rangle$ state (stored in an array of length 2). **Goal:** Change the state of the qubits to $|+\rangle \otimes |+\rangle = \frac{1}{2} \big(|00\rangle + |01\rangle + |10\rangle + |11\rangle\big)$. 
```qsharp %kata T04_AllBasisVectors_TwoQubits_Test operation AllBasisVectors_TwoQubits (qs : Qubit[]) : Unit { for (q in qs) { H(q); } } ``` The desired state: # wave function for qubits with ids (least to most significant): 0;1 ∣0❭: 0.500000 + 0.000000 i == ****** [ 0.250000 ] --- [ 0.00000 rad ] ∣1❭: 0.500000 + 0.000000 i == ****** [ 0.250000 ] --- [ 0.00000 rad ] ∣2❭: 0.500000 + 0.000000 i == ****** [ 0.250000 ] --- [ 0.00000 rad ] ∣3❭: 0.500000 + 0.000000 i == ****** [ 0.250000 ] --- [ 0.00000 rad ] The actual state: # wave function for qubits with ids (least to most significant): 0;1 ∣0❭: 0.500000 + 0.000000 i == ****** [ 0.250000 ] --- [ 0.00000 rad ] ∣1❭: 0.500000 + 0.000000 i == ****** [ 0.250000 ] --- [ 0.00000 rad ] ∣2❭: 0.500000 + 0.000000 i == ****** [ 0.250000 ] --- [ 0.00000 rad ] ∣3❭: 0.500000 + 0.000000 i == ****** [ 0.250000 ] --- [ 0.00000 rad ] Test case passed Success! *Can't come up with a solution? See the explained solution in the [Superposition Workbook](./Workbook_Superposition.ipynb#superposition-of-all-basis-vectors-on-two-qubits).* ### <a name="superposition-of-basis-vectors-with-phases"></a>Task 5. Superposition of basis vectors with phases. **Input:** Two qubits in the $|00\rangle$ state (stored in an array of length 2). **Goal:** Change the state of the qubits to $\frac{1}{2} \big(|00\rangle + i|01\rangle - |10\rangle - i|11\rangle\big)$. <br/> <details> <summary><b>Need a hint? Click here</b></summary> Is this state separable? </details> 📝 $ \frac{1}{2} \big(|00\rangle + i|01\rangle - |10\rangle - i|11\rangle\big) = \frac{1}{2} \big(|0\rangle - |1\rangle \big) \otimes \big(|0\rangle + i|1\rangle\big) $ ```qsharp %kata T05_AllBasisVectorsWithPhases_TwoQubits_Test operation AllBasisVectorsWithPhases_TwoQubits (qs : Qubit[]) : Unit { for (q in qs) { H(q); } Z(qs[0]); S(qs[1]); } ``` The desired state: # wave function for qubits with ids (least to most significant): 0;1 ∣0❭: 0.500000 + 0.000000 i == ****** [ 0.250000 ] --- [ 0.00000 rad ] ∣1❭: -0.500000 + 0.000000 i == ****** [ 0.250000 ] --- [ 3.14159 rad ] ∣2❭: 0.000000 + 0.500000 i == ****** [ 0.250000 ] ↑ [ 1.57080 rad ] ∣3❭: 0.000000 + -0.500000 i == ****** [ 0.250000 ] ↓ [ -1.57080 rad ] The actual state: # wave function for qubits with ids (least to most significant): 0;1 ∣0❭: 0.000000 + -0.500000 i == ****** [ 0.250000 ] ↓ [ -1.57080 rad ] ∣1❭: 0.000000 + 0.500000 i == ****** [ 0.250000 ] ↑ [ 1.57080 rad ] ∣2❭: 0.500000 + 0.000000 i == ****** [ 0.250000 ] --- [ 0.00000 rad ] ∣3❭: -0.500000 + 0.000000 i == ****** [ 0.250000 ] --- [ 3.14159 rad ] Test case passed Success! *Can't come up with a solution? See the explained solution in the [Superposition Workbook](./Workbook_Superposition.ipynb#superposition-of-basis-vectors-with-phases).* ### <a name="bell-state"></a>Task 6. Bell state $|\Phi^{+}\rangle$. **Input:** Two qubits in the $|00\rangle$ state (stored in an array of length 2). **Goal:** Change the state of the qubits to $|\Phi^{+}\rangle = \frac{1}{\sqrt{2}} \big (|00\rangle + |11\rangle\big)$. > You can find detailed coverage of Bell states and their creation [in this blog post](https://blogs.msdn.microsoft.com/uk_faculty_connection/2018/02/06/a-beginners-guide-to-quantum-computing-and-q/). 
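The state-vector walk-through in the note below can also be reproduced classically; here is a small NumPy sketch (added for illustration only, not Q#; the qubit-ordering convention is an assumption spelled out in the comments):

```python
# Classical sanity check of the Bell-state construction.
# Convention: qubit 0 is the least significant bit of the basis index,
# matching the "least to most significant: 0;1" ordering of the kata output.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)

# H on qubit 0 (the least significant factor) is I (x) H in this ordering
H_on_q0 = np.kron(I2, H)

# CNOT with control qubit 0 and target qubit 1: swaps basis states 1 <-> 3
CNOT_01 = np.array([[1, 0, 0, 0],
                    [0, 0, 0, 1],
                    [0, 0, 1, 0],
                    [0, 1, 0, 0]])

state = np.array([1, 0, 0, 0], dtype=float)  # |00>
state = H_on_q0 @ state                       # amplitudes [1, 1, 0, 0] / sqrt(2)
state = CNOT_01 @ state
print(state)  # [0.70710678 0. 0. 0.70710678] -- the Bell state amplitudes
```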
📝 $$ \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} \longrightarrow \frac{1}{\sqrt 2} \begin{bmatrix} 1 \\ 1 \\ 0 \\ 0 \end{bmatrix} \longrightarrow \frac{1}{\sqrt 2} \begin{bmatrix} 1 \\ 0 \\ 0 \\ 1 \end{bmatrix} $$ ```qsharp %kata T06_BellState_Test operation BellState (qs : Qubit[]) : Unit { H(qs[0]); CNOT(qs[0], qs[1]); } ``` The desired state: # wave function for qubits with ids (least to most significant): 0;1 ∣0❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣1❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣2❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣3❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] The actual state: # wave function for qubits with ids (least to most significant): 0;1 ∣0❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣1❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣2❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣3❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] Test case passed Success! *Can't come up with a solution? See the explained solution in the [Superposition Workbook](./Workbook_Superposition.ipynb#bell-state).* ### <a name="all-bell-states"></a> Task 7. All Bell states. **Inputs:** 1. Two qubits in the $|00\rangle$ state (stored in an array of length 2). 2. An integer index. **Goal:** Change the state of the qubits to one of the Bell states, based on the value of index: <table> <col width="50"/> <col width="200"/> <tr> <th style="text-align:center">Index</th> <th style="text-align:center">State</th> </tr> <tr> <td style="text-align:center">0</td> <td style="text-align:center">$|\Phi^{+}\rangle = \frac{1}{\sqrt{2}} \big (|00\rangle + |11\rangle\big)$</td> </tr> <tr> <td style="text-align:center">1</td> <td style="text-align:center">$|\Phi^{-}\rangle = \frac{1}{\sqrt{2}} \big (|00\rangle - |11\rangle\big)$</td> </tr> <tr> <td style="text-align:center">2</td> <td style="text-align:center">$|\Psi^{+}\rangle = \frac{1}{\sqrt{2}} \big (|01\rangle + |10\rangle\big)$</td> </tr> <tr> <td style="text-align:center">3</td> <td style="text-align:center">$|\Psi^{-}\rangle = \frac{1}{\sqrt{2}} \big (|01\rangle - |10\rangle\big)$</td> </tr> </table> ```qsharp %kata T07_AllBellStates_Test operation AllBellStates (qs : Qubit[], index : Int) : Unit { H(qs[0]); CNOT(qs[0], qs[1]); if (index % 2 == 1) { Z(qs[1]); } if (index >= 2) { X(qs[0]); } } ``` The desired state for index = 0 # wave function for qubits with ids (least to most significant): 0;1 ∣0❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣1❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣2❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣3❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] The actual state: # wave function for qubits with ids (least to most significant): 0;1 ∣0❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣1❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣2❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣3❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] Test case passed The desired state for index = 1 # wave function for qubits with ids (least to most significant): 0;1 ∣0❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣1❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣2❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣3❭: -0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 3.14159 rad ] The actual state: # wave function for qubits with ids (least to most significant): 0;1 ∣0❭: -0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 
3.14159 rad ] ∣1❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣2❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣3❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] Test case passed The desired state for index = 2 # wave function for qubits with ids (least to most significant): 0;1 ∣0❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣1❭: -0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 3.14159 rad ] ∣2❭: -0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 3.14159 rad ] ∣3❭: 0.000000 + 0.000000 i == [ 0.000000 ] The actual state: # wave function for qubits with ids (least to most significant): 0;1 ∣0❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣1❭: -0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 3.14159 rad ] ∣2❭: -0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 3.14159 rad ] ∣3❭: 0.000000 + 0.000000 i == [ 0.000000 ] Test case passed The desired state for index = 3 # wave function for qubits with ids (least to most significant): 0;1 ∣0❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣1❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣2❭: -0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 3.14159 rad ] ∣3❭: 0.000000 + 0.000000 i == [ 0.000000 ] The actual state: # wave function for qubits with ids (least to most significant): 0;1 ∣0❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣1❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣2❭: -0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 3.14159 rad ] ∣3❭: 0.000000 + 0.000000 i == [ 0.000000 ] Test case passed Success! *Can't come up with a solution? See the explained solution in the [Superposition Workbook](./Workbook_Superposition.ipynb#all-bell-states).* ### Task 8. Greenberger–Horne–Zeilinger state. **Input:** $N$ ($N \ge 1$) qubits in the $|0 \dots 0\rangle$ state (stored in an array of length $N$). **Goal:** Change the state of the qubits to the GHZ state $\frac{1}{\sqrt{2}} \big (|0\dots0\rangle + |1\dots1\rangle\big)$. > For the syntax of flow control statements in Q#, see [the Q# documentation](https://docs.microsoft.com/quantum/language/statements#control-flow). 📝 $$ \begin{align} |0\rangle^{\otimes N} & \xrightarrow{\text{H}(0)} \frac{1}{\sqrt{2}} \big( |0\rangle + |1\rangle \big) \otimes |0\rangle^{\otimes N-1} = \frac{1}{\sqrt{2}} \big( |00\rangle + |10\rangle \big) \otimes |0\rangle^{\otimes N-2} \\ & \xrightarrow{\text{CNOT}(0,1)} \frac{1}{\sqrt{2}} \big( |00\rangle + |11\rangle \big) \otimes |0\rangle^{\otimes N-2} = \frac{1}{\sqrt{2}} \big( |000\rangle + |110\rangle \big) \otimes |0\rangle^{\otimes N-3} \\ & \xrightarrow{\text{CNOT}(0,2)} \frac{1}{\sqrt{2}} \big( |000\rangle + |111\rangle \big) \otimes |0\rangle^{\otimes N-3} \\ & \dots \phantom{\frac{1}{\sqrt{2}}} \\ & \xrightarrow{\text{CNOT}(0,N-1)} \frac{1}{\sqrt{2}} \big( |0\rangle^{\otimes N} + |1\rangle^{\otimes N} \big) \\ \end{align} $$ ```qsharp %kata T08_GHZ_State_Test operation GHZ_State (qs : Qubit[]) : Unit { H(qs[0]); for (i in 1 .. 
Length(qs) - 1) { CNOT(qs[0], qs[i]); } } ``` The desired state for N = 1 # wave function for qubits with ids (least to most significant): 0 ∣0❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣1❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] The actual state: # wave function for qubits with ids (least to most significant): 0 ∣0❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣1❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] Test case passed The desired state for N = 2 # wave function for qubits with ids (least to most significant): 0;1 ∣0❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣1❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣2❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣3❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] The actual state: # wave function for qubits with ids (least to most significant): 0;1 ∣0❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣1❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣2❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣3❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] Test case passed Testing on hidden test cases... Success! ### Task 9. Superposition of all basis vectors. **Input:** $N$ ($N \ge 1$) qubits in the $|0 \dots 0\rangle$ state. **Goal:** Change the state of the qubits to an equal superposition of all basis vectors $\frac{1}{\sqrt{2^N}} \big (|0 \dots 0\rangle + \dots + |1 \dots 1\rangle\big)$. <br/> <details> <summary><b>Need a hint? Click here</b></summary> Is this state separable? </details> 📝 $$ \frac{1}{\sqrt{2^N}} \big (|0 \dots 0\rangle + \dots + |1 \dots 1\rangle\big) = \left( \frac{1}{\sqrt 2} \big (|0\rangle + |1\rangle \big) \right) ^{\otimes N} $$ ```qsharp %kata T09_AllBasisVectorsSuperposition_Test operation AllBasisVectorsSuperposition (qs : Qubit[]) : Unit { for (i in 0 .. Length(qs) - 1) { H(qs[i]); } } ``` The desired state for N = 1 # wave function for qubits with ids (least to most significant): 0 ∣0❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣1❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] The actual state: # wave function for qubits with ids (least to most significant): 0 ∣0❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣1❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] Test case passed The desired state for N = 2 # wave function for qubits with ids (least to most significant): 0;1 ∣0❭: 0.500000 + 0.000000 i == ****** [ 0.250000 ] --- [ 0.00000 rad ] ∣1❭: 0.500000 + 0.000000 i == ****** [ 0.250000 ] --- [ 0.00000 rad ] ∣2❭: 0.500000 + 0.000000 i == ****** [ 0.250000 ] --- [ 0.00000 rad ] ∣3❭: 0.500000 + 0.000000 i == ****** [ 0.250000 ] --- [ 0.00000 rad ] The actual state: # wave function for qubits with ids (least to most significant): 0;1 ∣0❭: 0.500000 + 0.000000 i == ****** [ 0.250000 ] --- [ 0.00000 rad ] ∣1❭: 0.500000 + 0.000000 i == ****** [ 0.250000 ] --- [ 0.00000 rad ] ∣2❭: 0.500000 + 0.000000 i == ****** [ 0.250000 ] --- [ 0.00000 rad ] ∣3❭: 0.500000 + 0.000000 i == ****** [ 0.250000 ] --- [ 0.00000 rad ] Test case passed Testing on hidden test cases... Success! ### <a name="superposition-of-all-even-or-all-odd-numbers"></a> Task 10. Superposition of all even or all odd numbers. **Inputs:** 1. $N$ ($N \ge 1$) qubits in the $|0 \dots 0\rangle$ state (stored in an array of length $N$). 2. A boolean `isEven`. 
**Goal:** Prepare a superposition of all *even* numbers if `isEven` is `true`, or of all *odd* numbers if `isEven` is `false`. A basis state encodes an integer number using [big-endian](https://en.wikipedia.org/wiki/Endianness) binary notation: state $|01\rangle$ corresponds to the integer $1$, and state $|10 \rangle$ - to the integer $2$. > For example, for $N = 2$ and `isEven = false` you need to prepare superposition $\frac{1}{\sqrt{2}} \big (|01\rangle + |11\rangle\big )$, and for $N = 2$ and `isEven = true` - superposition $\frac{1}{\sqrt{2}} \big (|00\rangle + |10\rangle\big )$. 📝 $$ \begin{cases} \left( \frac{1}{\sqrt 2} \big (|0\rangle + |1\rangle \big) \right) ^{\otimes N-1} \otimes |0\rangle, & \text{if } \texttt{isEven} \\ \left( \frac{1}{\sqrt 2} \big (|0\rangle + |1\rangle \big) \right) ^{\otimes N-1} \otimes |1\rangle, & \text{if } \texttt{!isEven} \end{cases} $$ ```qsharp %kata T10_EvenOddNumbersSuperposition_Test operation EvenOddNumbersSuperposition (qs : Qubit[], isEven : Bool) : Unit { let len = Length(qs); for (i in 0 .. len - 2) { H(qs[i]); } if (!isEven) { X(qs[len - 1]); } } ``` /snippet_.qs(7,9): warning QS3301: Deprecated syntax. Use "not" to denote the logical NOT operator. The desired state for N = 1, isEven = False # wave function for qubits with ids (least to most significant): 0 ∣0❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣1❭: 1.000000 + 0.000000 i == ******************** [ 1.000000 ] --- [ 0.00000 rad ] The actual state: # wave function for qubits with ids (least to most significant): 0 ∣0❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣1❭: 1.000000 + 0.000000 i == ******************** [ 1.000000 ] --- [ 0.00000 rad ] Test case passed The desired state for N = 1, isEven = True # wave function for qubits with ids (least to most significant): 0 ∣0❭: 1.000000 + 0.000000 i == ******************** [ 1.000000 ] --- [ 0.00000 rad ] ∣1❭: 0.000000 + 0.000000 i == [ 0.000000 ] The actual state: # wave function for qubits with ids (least to most significant): 0 ∣0❭: 1.000000 + 0.000000 i == ******************** [ 1.000000 ] --- [ 0.00000 rad ] ∣1❭: 0.000000 + 0.000000 i == [ 0.000000 ] Test case passed The desired state for N = 2, isEven = False # wave function for qubits with ids (least to most significant): 0;1 ∣0❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣1❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣2❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣3❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] The actual state: # wave function for qubits with ids (least to most significant): 0;1 ∣0❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣1❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣2❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣3❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] Test case passed The desired state for N = 2, isEven = True # wave function for qubits with ids (least to most significant): 0;1 ∣0❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣1❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣2❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣3❭: 0.000000 + 0.000000 i == [ 0.000000 ] The actual state: # wave function for qubits with ids (least to most significant): 0;1 ∣0❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣1❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣2❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣3❭: 0.000000 + 0.000000 i == [ 0.000000 ] Test case passed Testing on hidden test 
cases... Success! *Can't come up with a solution? See the explained solution in the [Superposition Workbook](./Workbook_Superposition_Part2.ipynb#superposition-of-all-even-or-all-odd-numbers).* ### <a name="threestates-twoqubits"></a>Task 11*. $\frac{1}{\sqrt{3}} \big(|00\rangle + |01\rangle + |10\rangle\big)$ state. **Input:** Two qubits in the $|00\rangle$ state. **Goal:** Change the state of the qubits to $\frac{1}{\sqrt{3}} \big(|00\rangle + |01\rangle + |10\rangle\big)$. <br/> <details> <summary><b>Need a hint? Click here</b></summary> If you need trigonometric functions, you can find them in Microsoft.Quantum.Math namespace; you'll need to add <pre>open Microsoft.Quantum.Math;</pre> to the code before the operation definition. </details> 📝 We want components with amplitude $1/\sqrt{3}$, so let's start by creating one of them. Since an $R_y$ rotation acts on $|0\rangle$ as shown below, we can pick an angle $\theta \in [0, \pi/2]$ such that $$ R_y(\theta) |0\rangle = \cos\frac{\theta}{2} |0\rangle + \sin\frac{\theta}{2} |1\rangle = \frac{1}{\sqrt 3} |0\rangle + {\sqrt \frac{2}{3}} |1\rangle $$ which gives $\theta = 2 \arccos \frac{1}{\sqrt{3}}$. Applying this $R_y(\theta)$ to qubit 0 turns the state into $$ \left(R_y(\theta) \otimes I \right) |00\rangle = \frac{1}{\sqrt 3} |00\rangle + {\sqrt \frac{2}{3}} |10\rangle $$ Next, it looks like we should apply $H$ to qubit 1 of the ${\sqrt {2/3}} |10\rangle$ component: $$ \begin{align} \frac{1}{\sqrt 3} |00\rangle + {\sqrt \frac{2}{3}} |10\rangle &\xrightarrow{\text{C-H}} \frac{1}{\sqrt 3} |00\rangle + {\sqrt \frac{2}{3}} |1\rangle \otimes \frac{1}{\sqrt{2}} \big( |0\rangle + |1\rangle \big) \\ & = \frac{1}{\sqrt 3} \big( |00\rangle + |10\rangle + |11\rangle \big) \end{align} $$ Comparing this with the goal, we still want to turn $|11\rangle$ into $|01\rangle$, so $$ \xrightarrow{\text{CNOT}} \frac{1}{\sqrt 3} \big( |00\rangle + |01\rangle + |10\rangle \big) $$ ```qsharp %kata T11_ThreeStates_TwoQubits_Test open Microsoft.Quantum.Math; operation ThreeStates_TwoQubits (qs : Qubit[]) : Unit { let theta = 2.0 * ArcCos(Sqrt(1.0 / 3.0)); Ry(theta, qs[0]); Controlled H([qs[0]], qs[1]); CNOT(qs[1], qs[0]); } ``` The desired state: # wave function for qubits with ids (least to most significant): 0;1 ∣0❭: 0.577350 + 0.000000 i == ******* [ 0.333333 ] --- [ 0.00000 rad ] ∣1❭: 0.577350 + 0.000000 i == ******* [ 0.333333 ] --- [ 0.00000 rad ] ∣2❭: 0.577350 + 0.000000 i == ******* [ 0.333333 ] --- [ 0.00000 rad ] ∣3❭: 0.000000 + 0.000000 i == [ 0.000000 ] The actual state: # wave function for qubits with ids (least to most significant): 0;1 ∣0❭: 0.577350 + 0.000000 i == ******* [ 0.333333 ] --- [ 0.00000 rad ] ∣1❭: 0.577350 + 0.000000 i == ******* [ 0.333333 ] --- [ 0.00000 rad ] ∣2❭: 0.577350 + 0.000000 i == ******* [ 0.333333 ] --- [ 0.00000 rad ] ∣3❭: 0.000000 + 0.000000 i == [ 0.000000 ] Test case passed Success! 📝 Looking at the reference solution in `ReferenceImplementations.qs`, it uses [`ControlledOnInt`](https://docs.microsoft.com/en-us/qsharp/api/qsharp/microsoft.quantum.canon.controlledonint?view=qsharp-preview). The usual `Controlled Op(control, target)` applies `Op` to `target` when `control` is `1`, whereas `(ControlledOnInt(num, Op))(control, target)` applies `Op` to `target` when `control` equals `num`. ### Task 12*. Hardy state. **Input:** Two qubits in the $|00\rangle$ state. **Goal:** Change the state of the qubits to $\frac{1}{\sqrt{12}} \big(3|00\rangle + |01\rangle + |10\rangle + |11\rangle\big)$. <br/> <details> <summary><b>Need a hint? Click here</b></summary> If you need trigonometric functions, you can find them in Microsoft.Quantum.Math namespace; you'll need to add <pre>open Microsoft.Quantum.Math;</pre> to the code before the operation definition.
</details> 📝 前の Task で $R_y$ や $H$ を使って同じ係数 $1/\sqrt{3}$ を2つ作った流れを参考にする。$\alpha = 2 \arccos \sqrt{5/6}$ として、 $$ \begin{align} |00\rangle &\xrightarrow{R_y(\alpha) \otimes I} \sqrt\frac{5}{6} |00\rangle + \frac{1}{\sqrt6} |10\rangle \\ &\xrightarrow{\text{C-}H} \sqrt\frac{5}{6} |00\rangle + \frac{1}{\sqrt6} |1\rangle \otimes \frac{1}{\sqrt2} \big( |0\rangle + |1\rangle \big) \\ &= \sqrt\frac{5}{6} |00\rangle + \frac{1}{\sqrt{12}} \big( |10\rangle + |11\rangle \big) \end{align} $$ `ControlledOnInt(0, Ry)` を使って $|00\rangle$ の第1ビットにだけ $R_y(\beta)$ を作用させ、 $$ \begin{align} &\xrightarrow{\text{C-}R_y(\beta)} \sqrt\frac{5}{6} |0\rangle \otimes \big( \frac{3}{\sqrt{10}} |0\rangle + \frac{1}{\sqrt{10}} |1\rangle \big) + \frac{1}{\sqrt{12}} \big( |10\rangle + |11\rangle \big) \\ & = \frac{1}{\sqrt{12}} \big( 3|00\rangle + |01\rangle + |10\rangle + |11\rangle \big) \end{align} $$ とできる。ただし $\beta = 2 \arccos \left(3/\sqrt{10} \right)$。 ```qsharp %kata T12_Hardy_State_Test open Microsoft.Quantum.Canon; open Microsoft.Quantum.Math; operation Hardy_State (qs : Qubit[]) : Unit { let alpha = 2.0 * ArcCos(Sqrt(5.0 / 6.0)); Ry(alpha, qs[0]); Controlled H([qs[0]], qs[1]); let beta = 2.0 * ArcCos(3.0 / Sqrt(10.0)); (ControlledOnInt(0, Ry))([qs[0]], (beta, qs[1])); } ``` /snippet_.qs(2,6): warning QS6003: The namespace is already open. The desired state: # wave function for qubits with ids (least to most significant): 0;1 ∣0❭: 0.866025 + 0.000000 i == **************** [ 0.750000 ] --- [ 0.00000 rad ] ∣1❭: 0.288675 + 0.000000 i == ** [ 0.083333 ] --- [ 0.00000 rad ] ∣2❭: 0.288675 + 0.000000 i == ** [ 0.083333 ] --- [ 0.00000 rad ] ∣3❭: 0.288675 + 0.000000 i == ** [ 0.083333 ] --- [ 0.00000 rad ] The actual state: # wave function for qubits with ids (least to most significant): 0;1 ∣0❭: 0.866025 + 0.000000 i == **************** [ 0.750000 ] --- [ 0.00000 rad ] ∣1❭: 0.288675 + 0.000000 i == ** [ 0.083333 ] --- [ 0.00000 rad ] ∣2❭: 0.288675 + 0.000000 i == ** [ 0.083333 ] --- [ 0.00000 rad ] ∣3❭: 0.288675 + 0.000000 i == ** [ 0.083333 ] --- [ 0.00000 rad ] Test case passed Success! ### Task 13. Superposition of $|0 \dots 0\rangle$ and the given bit string. **Inputs:** 1. $N$ ($N \ge 1$) qubits in the $|0 \dots 0\rangle$ state. 2. A bit string of length $N$ represented as `Bool[]`. Bit values `false` and `true` correspond to $|0\rangle$ and $|1\rangle$ states. You are guaranteed that the first bit of the bit string is `true`. **Goal:** Change the state of the qubits to an equal superposition of $|0 \dots 0\rangle$ and the basis state given by the bit string. > For example, for the bit string `[true, false]` the state required is $\frac{1}{\sqrt{2}}\big(|00\rangle + |10\rangle\big)$. 📝 0番目のビットは必ず `true` なので、0番目のキュビットに H をかけて $$ |0 \dots 0\rangle \longrightarrow \frac{1}{\sqrt{2}}\big( |00 \dots 0\rangle + |\color{blue}{1}0 \dots 0\rangle \big) $$ 1番目のビットが `true` のとき、第2項の1番目のキュビットを反転させればよい。0番目のキュビットを制御側とした CNOT を使えば、 $$ \frac{1}{\sqrt{2}}\big( |00 \dots 0\rangle + |10 \dots 0\rangle \big) \longrightarrow \frac{1}{\sqrt{2}}\big( |000 \dots 0\rangle + |\color{blue}{11}0 \dots 0\rangle \big) $$ 一方、1番目のビットが `false` の場合はそのままで $$ \frac{1}{\sqrt{2}}\big( |00 \dots 0\rangle + |10 \dots 0\rangle \big) = \frac{1}{\sqrt{2}}\big( |000 \dots 0\rangle + |\color{blue}{10}0 \dots 0\rangle \big) $$ 以下繰り返し。 ```qsharp %kata T13_ZeroAndBitstringSuperposition_Test operation ZeroAndBitstringSuperposition (qs : Qubit[], bits : Bool[]) : Unit { H(qs[0]); for (i in 1 .. 
Length(qs) - 1) { if (bits[i]) { CNOT(qs[0], qs[i]); } } } ``` The desired state for bits = [True] # wave function for qubits with ids (least to most significant): 0 ∣0❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣1❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] The actual state: # wave function for qubits with ids (least to most significant): 0 ∣0❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣1❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] Test case passed The desired state for bits = [True,True] # wave function for qubits with ids (least to most significant): 0;1 ∣0❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣1❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣2❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣3❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] The actual state: # wave function for qubits with ids (least to most significant): 0;1 ∣0❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣1❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣2❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣3❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] Test case passed The desired state for bits = [True,False] # wave function for qubits with ids (least to most significant): 0;1 ∣0❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣1❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣2❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣3❭: 0.000000 + 0.000000 i == [ 0.000000 ] The actual state: # wave function for qubits with ids (least to most significant): 0;1 ∣0❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣1❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣2❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣3❭: 0.000000 + 0.000000 i == [ 0.000000 ] Test case passed Testing on hidden test cases... Success! ### Task 14. Superposition of two bit strings. **Inputs:** 1. $N$ ($N \ge 1$) qubits in the $|0 \dots 0\rangle$ state. 2. Two bit strings of length $N$ represented as `Bool[]`s. Bit values `false` and `true` correspond to $|0\rangle$ and $|1\rangle$ states. You are guaranteed that the two bit strings differ in at least one bit. **Goal:** Change the state of the qubits to an equal superposition of the basis states given by the bit strings. > For example, for bit strings `[false, true, false]` and `[false, false, true]` the state required is $\frac{1}{\sqrt{2}}\big(|010\rangle + |001\rangle\big)$. > If you need to define any helper functions, you'll need to create an extra code cell for it and execute it before returning to this cell. 📝 最初に `bits1[k] != bits2[k]` なるインデックス `k` を探してそこに H をかけ、`qs[k]` を制御キュビットとして前問のように順次セットしていけばよい。 ```qsharp open Microsoft.Quantum.Canon; operation SetFromBits (qs : Qubit[], bits : Bool[], k: Int) : Unit { let kValue = bits[k] ? 1 | 0; for (i in 0 .. Length(qs) - 1) { if (i != k and bits[i]) { (ControlledOnInt(kValue, X))([qs[k]], qs[i]); } } } ``` /snippet_.qs(1,90): warning QS6003: The namespace is already open. 
<ul><li>SetFromBits</li></ul> ```qsharp %kata T14_TwoBitstringSuperposition_Test open Microsoft.Quantum.Arrays; open Microsoft.Quantum.Logical; operation TwoBitstringSuperposition (qs : Qubit[], bits1 : Bool[], bits2 : Bool[]) : Unit { let k = IndexOf(NotEqualB, Zip(bits1, bits2)); H(qs[k]); SetFromBits(qs, bits1, k); SetFromBits(qs, bits2, k); } ``` The desired state for bits1 = [True], bits2 = [False] # wave function for qubits with ids (least to most significant): 0 ∣0❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣1❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] The actual state: # wave function for qubits with ids (least to most significant): 0 ∣0❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣1❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] Test case passed The desired state for bits1 = [False,True], bits2 = [True,False] # wave function for qubits with ids (least to most significant): 0;1 ∣0❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣1❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣2❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣3❭: 0.000000 + 0.000000 i == [ 0.000000 ] The actual state: # wave function for qubits with ids (least to most significant): 0;1 ∣0❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣1❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣2❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣3❭: 0.000000 + 0.000000 i == [ 0.000000 ] Test case passed Testing on hidden test cases... Success! ### Task 15*. Superposition of four bit strings. **Inputs:** 1. $N$ ($N \ge 1$) qubits in the $|0 \dots 0\rangle$ state. 2. Four bit strings of length $N$, represented as `Bool[][]` `bits`. `bits` is an $4 \times N$ which describes the bit strings as follows: `bits[i]` describes the `i`-th bit string and has $N$ elements. You are guaranteed that all four bit strings will be distinct. **Goal:** Change the state of the qubits to an equal superposition of the four basis states given by the bit strings. > For example, for $N = 3$ and `bits = [[false, true, false], [true, false, false], [false, false, true], [true, true, false]]` the state required is $\frac{1}{2}\big(|010\rangle + |100\rangle + |001\rangle + |110\rangle\big)$. <br/> <details> <summary><b>Need a hint? Click here</b></summary> Remember that you can allocate extra qubits. If you do, you'll need to return them to the $|0\rangle$ state before releasing them. </details> 📝 hint に従い、[`using` 文](https://docs.microsoft.com/en-us/quantum/language/statements?view=qsharp-preview#clean-qubits)で補助キュビットを用意して使ってみる。4状態を区別するには2キュビットで十分。 ```qsharp %kata T15_FourBitstringSuperposition_Test operation FourBitstringSuperposition (qs : Qubit[], bits : Bool[][]) : Unit { using (ancillae = Qubit[2]) { H(ancillae[0]); H(ancillae[1]); let N = Length(qs); for (i in 0 .. 3) { for (j in 0 .. N - 1) { if (bits[i][j]) { (ControlledOnInt(i, X))(ancillae, qs[j]); } } } // using 文を抜ける前にすべて |0> に戻す必要がある for (i in 0 .. 
3) { if (i % 2 == 1) { (ControlledOnBitString(bits[i], X))(qs, ancillae[0]); } if (i / 2 == 1) { (ControlledOnBitString(bits[i], X))(qs, ancillae[1]); } } } } ``` The desired state for bits = [[False,False],[False,True],[True,False],[True,True]] # wave function for qubits with ids (least to most significant): 0;1 ∣0❭: 0.500000 + 0.000000 i == ****** [ 0.250000 ] --- [ 0.00000 rad ] ∣1❭: 0.500000 + 0.000000 i == ****** [ 0.250000 ] --- [ 0.00000 rad ] ∣2❭: 0.500000 + 0.000000 i == ****** [ 0.250000 ] --- [ 0.00000 rad ] ∣3❭: 0.500000 + 0.000000 i == ****** [ 0.250000 ] --- [ 0.00000 rad ] The actual state: # wave function for qubits with ids (least to most significant): 0;1 ∣0❭: 0.500000 + 0.000000 i == ****** [ 0.250000 ] --- [ 0.00000 rad ] ∣1❭: 0.500000 + 0.000000 i == ****** [ 0.250000 ] --- [ 0.00000 rad ] ∣2❭: 0.500000 + 0.000000 i == ****** [ 0.250000 ] --- [ 0.00000 rad ] ∣3❭: 0.500000 + 0.000000 i == ****** [ 0.250000 ] --- [ 0.00000 rad ] Test case passed Testing on hidden test cases... Success! 📝 `using` 文は、ここから抜ける前に補助キュビットをすべて $|0\rangle$ に戻しておかないとエラーになる。この戻し方は、解答を見ないとわからなかった。[`ResetAll`](https://docs.microsoft.com/en-us/qsharp/api/qsharp/microsoft.quantum.intrinsic.resetall?view=qsharp-preview) というそれっぽい操作があるが、これは内部で測定をするらしく、状態が収束してしまう。 📝 前問の方針を応用してもできそう? 4 つのビット列はたかだか 3 つの桁 `k01`, `k02`, `k03` で区別できるので、それらを制御キュビットとして状態をセットしていく方法。 ### Task 16**. W state on $2^k$ qubits. **Input:** $N = 2^k$ qubits in the $|0 \dots 0\rangle$ state. **Goal:** Change the state of the qubits to the [W state](https://en.wikipedia.org/wiki/W_state) - an equal superposition of $N$ basis states on $N$ qubits which have Hamming weight of 1. > For example, for $N = 4$ the required state is $\frac{1}{2}\big(|1000\rangle + |0100\rangle + |0010\rangle + |0001\rangle\big)$. <br/> <details> <summary><b>Need a hint? Click here</b></summary> You can use Controlled modifier to perform arbitrary controlled gates. </details> ```qsharp %kata T16_WState_PowerOfTwo_Test open Microsoft.Quantum.Canon; open Microsoft.Quantum.Convert; open Microsoft.Quantum.Math; operation WState_PowerOfTwo (qs : Qubit[]) : Unit { let N = Length(qs); let k = Ceiling(Lg(IntAsDouble(N))); using (anc = Qubit[k]) { ApplyToEach(H, anc); // 各キュビットに H をかけて均等な重ね合わせにする for (i in 0 .. N - 1) { (ControlledOnInt(i, X))(anc, qs[i]); } // anc をすべて |0> に戻す for (i in 0 .. N - 1) { let bits = IntAsBoolArray(i, k); for (j in 0 .. k - 1) { if (bits[j]) { // (ControlledOnInt(2 ^ i, X))(qs, anc[j]); // qs[i] の1つを見るだけで状態を判別できるので、下のように書ける (ControlledOnInt(1, X))([qs[i]], anc[j]); } } } } } ``` /snippet_.qs(2,6): warning QS6003: The namespace is already open. 
The desired state for N = 1 # wave function for qubits with ids (least to most significant): 0 ∣0❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣1❭: 1.000000 + 0.000000 i == ******************** [ 1.000000 ] --- [ 0.00000 rad ] The actual state: # wave function for qubits with ids (least to most significant): 0 ∣0❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣1❭: 1.000000 + 0.000000 i == ******************** [ 1.000000 ] --- [ 0.00000 rad ] Test case passed The desired state for N = 2 # wave function for qubits with ids (least to most significant): 0;1 ∣0❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣1❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣2❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣3❭: 0.000000 + 0.000000 i == [ 0.000000 ] The actual state: # wave function for qubits with ids (least to most significant): 0;1 ∣0❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣1❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣2❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣3❭: 0.000000 + 0.000000 i == [ 0.000000 ] Test case passed Testing on hidden test cases... Success! ### Task 17**. W state on an arbitrary number of qubits. **Input:** $N$ qubits in the $|0 \dots 0\rangle$ state ($N$ is not necessarily a power of 2). **Goal:** Change the state of the qubits to the [W state](https://en.wikipedia.org/wiki/W_state) - an equal superposition of $N$ basis states on $N$ qubits which have Hamming weight of 1. > For example, for $N = 3$ the required state is $\frac{1}{\sqrt{3}}\big(|100\rangle + |010\rangle + |001\rangle\big)$. <br/> <details> <summary><b>Need a hint? Click here</b></summary> You can modify the signature of the given operation to specify its controlled specialization. </details> 📝 $N$ キュビットからなる W state を $|W_N\rangle$ とすると $$ \begin{align} |W_N\rangle &= \frac{1}{\sqrt{N}}\big(|100\dots0\rangle + |010\dots0\rangle + \dots + |000\dots1\rangle \big) \end{align} $$ であり、特に $|W_0\rangle = |1\rangle$。 $N > 1$ のとき、Task 11 や Task 12 の操作を参考にして、 $$ R_y (\theta_N) |0\rangle = \sqrt\frac{N - 1}{N} |0\rangle + \sqrt\frac{1}{N} |1\rangle $$ なるキュビットを用意できる($\theta_N = 2\arcsin1/\sqrt{N}$)。 $|0\rangle ^{\otimes N-1} \rightarrow |W_{N-1}\rangle$ とする操作を、用意したキュビットが $|0\rangle$ のときに適用させれば、 $$ \begin{align} |0\rangle ^{\otimes N-1} \otimes \left( \sqrt\frac{N - 1}{N} |0\rangle + \sqrt\frac{1}{N} |1\rangle \right) &\longrightarrow |W_{N-1}\rangle \otimes \sqrt\frac{N - 1}{N} |0\rangle + |0\rangle ^{\otimes N-1} \otimes \sqrt\frac{1}{N} |1\rangle \\ &= \frac{1}{\sqrt{N-1}}\big(|100\dots0\rangle + |010\dots0\rangle + \dots + |000\dots1\rangle \big) \otimes \sqrt\frac{N - 1}{N} |0\rangle + \frac{1}{\sqrt{N}} |000\dots01\rangle \\ &= \frac{1}{\sqrt{N}}\big(|100\dots00\rangle + |010\dots00\rangle + \dots + |000\dots10\rangle + |000\dots01\rangle \big)\\ &= |W_N\rangle \end{align} $$ となり、$|W_N\rangle$ が得られる。 ```qsharp %kata T17_WState_Arbitrary_Test open Microsoft.Quantum.Convert; open Microsoft.Quantum.Math; operation WState_Arbitrary (qs : Qubit[]) : Unit is Ctl { let N = Length(qs); if (N == 1) { X(qs[0]); } else { let theta = 2.0 * ArcSin(1.0 / Sqrt(IntAsDouble(N))); Ry(theta, qs[N - 1]); // qs[N - 1] が |0> であることを条件にして、再帰的に WState_Arbitrary を適用 X(qs[N - 1]); Controlled WState_Arbitrary([qs[N - 1]], qs[... 
N - 2]); X(qs[N - 1]); // ちなみに、ControlledOnInt は `Adj+Ctl` でないと使えないらしい } } ``` The desired state for N = 1 # wave function for qubits with ids (least to most significant): 0 ∣0❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣1❭: 1.000000 + 0.000000 i == ******************** [ 1.000000 ] --- [ 0.00000 rad ] The actual state: # wave function for qubits with ids (least to most significant): 0 ∣0❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣1❭: 1.000000 + 0.000000 i == ******************** [ 1.000000 ] --- [ 0.00000 rad ] Test case passed The desired state for N = 2 # wave function for qubits with ids (least to most significant): 0;1 ∣0❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣1❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣2❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣3❭: 0.000000 + 0.000000 i == [ 0.000000 ] The actual state: # wave function for qubits with ids (least to most significant): 0;1 ∣0❭: 0.000000 + 0.000000 i == [ 0.000000 ] ∣1❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ] ∣2❭: 0.707107 + 0.000000 i == ********** [ 0.500000 ] --- [ 0.00000 rad ] ∣3❭: 0.000000 + 0.000000 i == [ 0.000000 ] Test case passed Testing on hidden test cases... Success! ```qsharp ```
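Although the kata solutions above are written in Q#, the amplitude bookkeeping in the Task 17 recursion is easy to verify numerically. The snippet below is a plain NumPy check, added only as an illustration (it is not part of the kata): combining the $\sqrt{(N-1)/N}\,|0\rangle + \sqrt{1/N}\,|1\rangle$ split on the last qubit with a $|W_{N-1}\rangle$ on its $|0\rangle$ branch keeps every basis amplitude equal to $1/\sqrt{N}$.

```python
import numpy as np

# Check the recursion used in Task 17: if each |W_{N-1}> term has amplitude
# 1/sqrt(N-1), then multiplying it by sqrt((N-1)/N) (the |0> branch of the
# rotated last qubit) gives 1/sqrt(N), matching the amplitude of the |0...01> term.
for n in range(2, 10):
    w_prev = 1.0 / np.sqrt(n - 1)             # amplitude of each |W_{N-1}> term
    branch0 = np.sqrt((n - 1) / n) * w_prev    # terms with the last qubit in |0>
    branch1 = np.sqrt(1.0 / n)                 # the |0...01> term
    assert np.allclose([branch0, branch1], 1.0 / np.sqrt(n))

print("all amplitudes equal 1/sqrt(N) for N = 2..9")
```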
eb9e417e61ba279ab14b0066bca46bed5469684e
72,813
ipynb
Jupyter Notebook
Superposition/Superposition.ipynb
mashabow/QuantumKatas
828b1d1e72597192417124098925c4ed4ddf277a
[ "MIT" ]
null
null
null
Superposition/Superposition.ipynb
mashabow/QuantumKatas
828b1d1e72597192417124098925c4ed4ddf277a
[ "MIT" ]
null
null
null
Superposition/Superposition.ipynb
mashabow/QuantumKatas
828b1d1e72597192417124098925c4ed4ddf277a
[ "MIT" ]
null
null
null
41.183824
362
0.462486
true
19,547
Qwen/Qwen-72B
1. YES 2. YES
0.865224
0.73412
0.635178
__label__eng_Latn
0.485293
0.314062
# Batch Normalization One way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train. One idea along these lines is batch normalization which was proposed by [1] in 2015. The idea is relatively straightforward. Machine learning methods tend to work better when their input data consists of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features; this will ensure that the first layer of the network sees data that follows a nice distribution. However, even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated. The authors of [1] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, [1] proposes to insert batch normalization layers into the network. At training time, a batch normalization layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features. It is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension. [1] [Sergey Ioffe and Christian Szegedy, "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", ICML 2015.](https://arxiv.org/abs/1502.03167) ```python # As usual, a bit of setup import time import numpy as np import matplotlib.pyplot as plt from cs231n.classifiers.fc_net import * from cs231n.data_utils import get_CIFAR10_data from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array from cs231n.solver import Solver %matplotlib inline plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # for auto-reloading external modules # see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython %load_ext autoreload %autoreload 2 def rel_error(x, y): """ returns relative error """ return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y)))) def print_mean_std(x,axis=0): print(' means: ', x.mean(axis=axis)) print(' stds: ', x.std(axis=axis)) print() ``` The autoreload extension is already loaded. To reload it, use: %reload_ext autoreload ```python # Load the (preprocessed) CIFAR10 data. 
data = get_CIFAR10_data() for k, v in data.items(): print('%s: ' % k, v.shape) ``` X_train: (49000, 3, 32, 32) y_train: (49000,) X_val: (1000, 3, 32, 32) y_val: (1000,) X_test: (1000, 3, 32, 32) y_test: (1000,) ## Batch normalization: forward In the file `cs231n/layers.py`, implement the batch normalization forward pass in the function `batchnorm_forward`. Once you have done so, run the following to test your implementation. Referencing the paper linked to above in [1] may be helpful! ```python # Check the training-time forward pass by checking means and variances # of features both before and after batch normalization # Simulate the forward pass for a two-layer network np.random.seed(231) N, D1, D2, D3 = 200, 50, 60, 3 X = np.random.randn(N, D1) W1 = np.random.randn(D1, D2) W2 = np.random.randn(D2, D3) a = np.maximum(0, X.dot(W1)).dot(W2) print('Before batch normalization:') print_mean_std(a,axis=0) gamma = np.ones((D3,)) beta = np.zeros((D3,)) # Means should be close to zero and stds close to one print('After batch normalization (gamma=1, beta=0)') a_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'}) print_mean_std(a_norm,axis=0) gamma = np.asarray([1.0, 2.0, 3.0]) beta = np.asarray([11.0, 12.0, 13.0]) # Now means should be close to beta and stds close to gamma print('After batch normalization (gamma=', gamma, ', beta=', beta, ')') a_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'}) print_mean_std(a_norm,axis=0) ``` Before batch normalization: means: [ -2.3814598 -13.18038246 1.91780462] stds: [27.18502186 34.21455511 37.68611762] After batch normalization (gamma=1, beta=0) means: [5.32907052e-17 7.04991621e-17 1.85962357e-17] stds: [0.99999999 1. 1. ] After batch normalization (gamma= [1. 2. 3.] , beta= [11. 12. 13.] ) means: [11. 12. 13.] stds: [0.99999999 1.99999999 2.99999999] ```python # Check the test-time forward pass by running the training-time # forward pass many times to warm up the running averages, and then # checking the means and variances of activations after a test-time # forward pass. np.random.seed(231) N, D1, D2, D3 = 200, 50, 60, 3 W1 = np.random.randn(D1, D2) W2 = np.random.randn(D2, D3) bn_param = {'mode': 'train'} gamma = np.ones(D3) beta = np.zeros(D3) for t in range(50): X = np.random.randn(N, D1) a = np.maximum(0, X.dot(W1)).dot(W2) batchnorm_forward(a, gamma, beta, bn_param) bn_param['mode'] = 'test' X = np.random.randn(N, D1) a = np.maximum(0, X.dot(W1)).dot(W2) a_norm, _ = batchnorm_forward(a, gamma, beta, bn_param) # Means should be close to zero and stds close to one, but will be # noisier than training-time forward passes. print('After batch normalization (test-time):') print_mean_std(a_norm,axis=0) ``` After batch normalization (test-time): means: [-0.03927354 -0.04349152 -0.10452688] stds: [1.01531428 1.01238373 0.97819988] ## Batch normalization: backward Now implement the backward pass for batch normalization in the function `batchnorm_backward`. To derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass. Once you have finished, run the following to numerically check your backward pass. 
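For orientation, the staged (computation-graph) strategy can be sketched in a few lines of NumPy, as below. This is only an illustrative sketch: the real `batchnorm_forward`/`batchnorm_backward` in `cs231n/layers.py` have signatures and a cache layout prescribed by the assignment, and the running-average bookkeeping needed at test time is omitted here.

```python
import numpy as np

def batchnorm_forward_sketch(x, gamma, beta, eps=1e-5):
    # Training-time forward pass only; returns the output plus the
    # intermediates the backward pass needs (cache layout is an illustrative choice).
    mu = x.mean(axis=0)
    xmu = x - mu
    var = (xmu ** 2).mean(axis=0)
    std = np.sqrt(var + eps)
    xhat = xmu / std
    out = gamma * xhat + beta
    cache = (xhat, xmu, std, gamma)
    return out, cache

def batchnorm_backward_sketch(dout, cache):
    # Backprop node by node through the graph, summing gradients wherever a
    # value feeds more than one branch (xmu feeds both xhat and var; mu feeds xmu).
    xhat, xmu, std, gamma = cache
    N = dout.shape[0]
    dbeta = dout.sum(axis=0)
    dgamma = (dout * xhat).sum(axis=0)
    dxhat = dout * gamma
    dxmu1 = dxhat / std                         # branch through xhat = xmu / std
    divar = (dxhat * xmu).sum(axis=0)           # branch through 1/std
    dstd = -divar / std ** 2
    dvar = 0.5 * dstd / std                     # std = sqrt(var + eps)
    dxmu2 = 2.0 * xmu * dvar / N                # var = mean(xmu**2)
    dx1 = dxmu1 + dxmu2                         # sum the two xmu branches
    dmu = -dx1.sum(axis=0)
    dx = dx1 + dmu / N                          # sum the x and mu branches
    return dx, dgamma, dbeta
```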
```python # Gradient check batchnorm backward pass np.random.seed(231) N, D = 4, 5 x = 5 * np.random.randn(N, D) + 12 gamma = np.random.randn(D) beta = np.random.randn(D) dout = np.random.randn(N, D) bn_param = {'mode': 'train'} fx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0] fg = lambda a: batchnorm_forward(x, a, beta, bn_param)[0] fb = lambda b: batchnorm_forward(x, gamma, b, bn_param)[0] dx_num = eval_numerical_gradient_array(fx, x, dout) da_num = eval_numerical_gradient_array(fg, gamma.copy(), dout) db_num = eval_numerical_gradient_array(fb, beta.copy(), dout) _, cache = batchnorm_forward(x, gamma, beta, bn_param) dx, dgamma, dbeta = batchnorm_backward(dout, cache) #You should expect to see relative errors between 1e-13 and 1e-8 print('dx error: ', rel_error(dx_num, dx)) print('dgamma error: ', rel_error(da_num, dgamma)) print('dbeta error: ', rel_error(db_num, dbeta)) ``` dx error: 1.7029258328157158e-09 dgamma error: 7.420414216247087e-13 dbeta error: 2.8795057655839487e-12 ## Batch normalization: alternative backward In class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. For example, you can derive a very simple formula for the sigmoid function's backward pass by simplifying gradients on paper. Surprisingly, it turns out that you can do a similar simplification for the batch normalization backward pass too! In the forward pass, given a set of inputs $X=\begin{bmatrix}x_1\\x_2\\...\\x_N\end{bmatrix}$, we first calculate the mean $\mu$ and variance $v$. With $\mu$ and $v$ calculated, we can calculate the standard deviation $\sigma$ and normalized data $Y$. The equations and graph illustration below describe the computation ($y_i$ is the i-th element of the vector $Y$). \begin{align} & \mu=\frac{1}{N}\sum_{k=1}^N x_k & v=\frac{1}{N}\sum_{k=1}^N (x_k-\mu)^2 \\ & \sigma=\sqrt{v+\epsilon} & y_i=\frac{x_i-\mu}{\sigma} \end{align} # The meat of our problem during backpropagation is to compute $\frac{\partial L}{\partial X}$, given the upstream gradient we receive, $\frac{\partial L}{\partial Y}.$ To do this, recall the chain rule in calculus gives us $\frac{\partial L}{\partial X} = \frac{\partial L}{\partial Y} \cdot \frac{\partial Y}{\partial X}$. The unknown/hart part is $\frac{\partial Y}{\partial X}$. We can find this by first deriving step-by-step our local gradients at $\frac{\partial v}{\partial X}$, $\frac{\partial \mu}{\partial X}$, $\frac{\partial \sigma}{\partial v}$, $\frac{\partial Y}{\partial \sigma}$, and $\frac{\partial Y}{\partial \mu}$, and then use the chain rule to compose these gradients (which appear in the form of vectors!) appropriately to compute $\frac{\partial Y}{\partial X}$. If it's challenging to directly reason about the gradients over $X$ and $Y$ which require matrix multiplication, try reasoning about the gradients in terms of individual elements $x_i$ and $y_i$ first: in that case, you will need to come up with the derivations for $\frac{\partial L}{\partial x_i}$, by relying on the Chain Rule to first calculate the intermediate $\frac{\partial \mu}{\partial x_i}, \frac{\partial v}{\partial x_i}, \frac{\partial \sigma}{\partial x_i},$ then assemble these pieces to calculate $\frac{\partial y_i}{\partial x_i}$. 
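When the pieces are assembled and simplified, the per-element gradient collapses to a compact closed form. Written in the notation above, with $\frac{\partial L}{\partial y_k}$ the upstream gradient and before folding in the $\gamma$ scaling, one common way to state it is

$$
\frac{\partial L}{\partial x_i} = \frac{1}{N\sigma}\left(N\,\frac{\partial L}{\partial y_i} \;-\; \sum_{k=1}^{N}\frac{\partial L}{\partial y_k} \;-\; y_i\sum_{k=1}^{N} y_k\,\frac{\partial L}{\partial y_k}\right).
$$

In the full layer the output is $\gamma y_i + \beta$, so the upstream gradient is first multiplied by $\gamma$ before this expression is applied.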
You should make sure each of the intermediary gradient derivations are all as simplified as possible, for ease of implementation. After doing so, implement the simplified batch normalization backward pass in the function `batchnorm_backward_alt` and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster. ```python np.random.seed(231) N, D = 100, 500 x = 5 * np.random.randn(N, D) + 12 gamma = np.random.randn(D) beta = np.random.randn(D) dout = np.random.randn(N, D) bn_param = {'mode': 'train'} out, cache = batchnorm_forward(x, gamma, beta, bn_param) t1 = time.time() dx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache) t2 = time.time() dx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache) t3 = time.time() print('dx difference: ', rel_error(dx1, dx2)) print('dgamma difference: ', rel_error(dgamma1, dgamma2)) print('dbeta difference: ', rel_error(dbeta1, dbeta2)) print('speedup: %.2fx' % ((t2 - t1) / (t3 - t2))) ``` dx difference: 9.20004371222927e-13 dgamma difference: 0.0 dbeta difference: 0.0 speedup: 2.00x ## Fully Connected Nets with Batch Normalization Now that you have a working implementation for batch normalization, go back to your `FullyConnectedNet` in the file `cs231n/classifiers/fc_net.py`. Modify your implementation to add batch normalization. Concretely, when the `normalization` flag is set to `"batchnorm"` in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. Once you are done, run the following to gradient-check your implementation. HINT: You might find it useful to define an additional helper layer similar to those in the file `cs231n/layer_utils.py`. If you decide to do so, do it in the file `cs231n/classifiers/fc_net.py`. ```python np.random.seed(231) N, D, H1, H2, C = 2, 15, 20, 30, 10 X = np.random.randn(N, D) y = np.random.randint(C, size=(N,)) # You should expect losses between 1e-4~1e-10 for W, # losses between 1e-08~1e-10 for b, # and losses between 1e-08~1e-09 for beta and gammas. 
for reg in [0, 3.14]: print('Running check with reg = ', reg) model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C, reg=reg, weight_scale=5e-2, dtype=np.float64, normalization='batchnorm') loss, grads = model.loss(X, y) print('Initial loss: ', loss) for name in sorted(grads): f = lambda _: model.loss(X, y)[0] grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5) print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))) if reg == 0: print() ``` Running check with reg = 0 Initial loss: 2.2611955101340957 W1 relative error: 1.10e-04 W2 relative error: 2.85e-06 W3 relative error: 4.05e-10 b1 relative error: 4.44e-08 b2 relative error: 4.44e-08 b3 relative error: 1.01e-10 beta1 relative error: 7.33e-09 beta2 relative error: 1.89e-09 gamma1 relative error: 6.96e-09 gamma2 relative error: 1.96e-09 Running check with reg = 3.14 Initial loss: 6.996533220108303 W1 relative error: 1.98e-06 W2 relative error: 2.28e-06 W3 relative error: 1.11e-08 b1 relative error: 2.78e-09 b2 relative error: 4.44e-08 b3 relative error: 2.10e-10 beta1 relative error: 6.65e-09 beta2 relative error: 4.23e-09 gamma1 relative error: 6.27e-09 gamma2 relative error: 5.28e-09 # Batchnorm for deep networks Run the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization. ```python np.random.seed(231) # Try training a very deep net with batchnorm hidden_dims = [100, 100, 100, 100, 100] num_train = 1000 small_data = { 'X_train': data['X_train'][:num_train], 'y_train': data['y_train'][:num_train], 'X_val': data['X_val'], 'y_val': data['y_val'], } weight_scale = 2e-2 bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization='batchnorm') model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=None) print('Solver with batch norm:') bn_solver = Solver(bn_model, small_data, num_epochs=10, batch_size=50, update_rule='adam', optim_config={ 'learning_rate': 1e-3, }, verbose=True,print_every=20) bn_solver.train() print('\nSolver without batch norm:') solver = Solver(model, small_data, num_epochs=10, batch_size=50, update_rule='adam', optim_config={ 'learning_rate': 1e-3, }, verbose=True, print_every=20) solver.train() ``` Solver with batch norm: (Iteration 1 / 200) loss: 2.340974 (Epoch 0 / 10) train acc: 0.107000; val_acc: 0.115000 (Epoch 1 / 10) train acc: 0.315000; val_acc: 0.266000 (Iteration 21 / 200) loss: 2.039365 (Epoch 2 / 10) train acc: 0.385000; val_acc: 0.278000 (Iteration 41 / 200) loss: 2.041103 (Epoch 3 / 10) train acc: 0.493000; val_acc: 0.309000 (Iteration 61 / 200) loss: 1.753903 (Epoch 4 / 10) train acc: 0.535000; val_acc: 0.308000 (Iteration 81 / 200) loss: 1.246585 (Epoch 5 / 10) train acc: 0.573000; val_acc: 0.313000 (Iteration 101 / 200) loss: 1.320590 (Epoch 6 / 10) train acc: 0.632000; val_acc: 0.339000 (Iteration 121 / 200) loss: 1.159473 (Epoch 7 / 10) train acc: 0.680000; val_acc: 0.325000 (Iteration 141 / 200) loss: 1.151109 (Epoch 8 / 10) train acc: 0.780000; val_acc: 0.339000 (Iteration 161 / 200) loss: 0.628461 (Epoch 9 / 10) train acc: 0.809000; val_acc: 0.341000 (Iteration 181 / 200) loss: 0.890583 (Epoch 10 / 10) train acc: 0.807000; val_acc: 0.331000 Solver without batch norm: (Iteration 1 / 200) loss: 2.302332 (Epoch 0 / 10) train acc: 0.129000; val_acc: 0.131000 (Epoch 1 / 10) train acc: 0.283000; val_acc: 0.250000 (Iteration 21 / 200) loss: 2.041970 (Epoch 2 / 10) train acc: 0.316000; val_acc: 0.277000 (Iteration 41 / 200) loss: 1.900473 
(Epoch 3 / 10) train acc: 0.373000; val_acc: 0.282000 (Iteration 61 / 200) loss: 1.713156 (Epoch 4 / 10) train acc: 0.390000; val_acc: 0.310000 (Iteration 81 / 200) loss: 1.662208 (Epoch 5 / 10) train acc: 0.434000; val_acc: 0.300000 (Iteration 101 / 200) loss: 1.696062 (Epoch 6 / 10) train acc: 0.536000; val_acc: 0.346000 (Iteration 121 / 200) loss: 1.550785 (Epoch 7 / 10) train acc: 0.530000; val_acc: 0.310000 (Iteration 141 / 200) loss: 1.436307 (Epoch 8 / 10) train acc: 0.622000; val_acc: 0.342000 (Iteration 161 / 200) loss: 1.000868 (Epoch 9 / 10) train acc: 0.654000; val_acc: 0.328000 (Iteration 181 / 200) loss: 0.925455 (Epoch 10 / 10) train acc: 0.726000; val_acc: 0.335000 Run the following to visualize the results from two networks trained above. You should find that using batch normalization helps the network to converge much faster. ```python def plot_training_history(title, label, baseline, bn_solvers, plot_fn, bl_marker='.', bn_marker='.', labels=None): """utility function for plotting training history""" plt.title(title) plt.xlabel(label) bn_plots = [plot_fn(bn_solver) for bn_solver in bn_solvers] bl_plot = plot_fn(baseline) num_bn = len(bn_plots) for i in range(num_bn): label='with_norm' if labels is not None: label += str(labels[i]) plt.plot(bn_plots[i], bn_marker, label=label) label='baseline' if labels is not None: label += str(labels[0]) plt.plot(bl_plot, bl_marker, label=label) plt.legend(loc='lower center', ncol=num_bn+1) plt.subplot(3, 1, 1) plot_training_history('Training loss','Iteration', solver, [bn_solver], \ lambda x: x.loss_history, bl_marker='o', bn_marker='o') plt.subplot(3, 1, 2) plot_training_history('Training accuracy','Epoch', solver, [bn_solver], \ lambda x: x.train_acc_history, bl_marker='-o', bn_marker='-o') plt.subplot(3, 1, 3) plot_training_history('Validation accuracy','Epoch', solver, [bn_solver], \ lambda x: x.val_acc_history, bl_marker='-o', bn_marker='-o') plt.gcf().set_size_inches(15, 15) plt.show() ``` # Batch normalization and initialization We will now run a small experiment to study <span class="mark">the interaction of batch normalization and weight initialization</span>. The first cell will train 8-layer networks both with and without batch normalization using different scales for weight initialization. The second layer will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale. 
```python np.random.seed(231) # Try training a very deep net with batchnorm hidden_dims = [50, 50, 50, 50, 50, 50, 50] num_train = 1000 small_data = { 'X_train': data['X_train'][:num_train], 'y_train': data['y_train'][:num_train], 'X_val': data['X_val'], 'y_val': data['y_val'], } bn_solvers_ws = {} solvers_ws = {} weight_scales = np.logspace(-4, 0, num=20) for i, weight_scale in enumerate(weight_scales): print('Running weight scale %d / %d' % (i + 1, len(weight_scales))) bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization='batchnorm') model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=None) bn_solver = Solver(bn_model, small_data, num_epochs=10, batch_size=50, update_rule='adam', optim_config={ 'learning_rate': 1e-3, }, verbose=False, print_every=200) bn_solver.train() bn_solvers_ws[weight_scale] = bn_solver solver = Solver(model, small_data, num_epochs=10, batch_size=50, update_rule='adam', optim_config={ 'learning_rate': 1e-3, }, verbose=False, print_every=200) solver.train() solvers_ws[weight_scale] = solver ``` Running weight scale 1 / 20 Running weight scale 2 / 20 Running weight scale 3 / 20 Running weight scale 4 / 20 Running weight scale 5 / 20 Running weight scale 6 / 20 Running weight scale 7 / 20 Running weight scale 8 / 20 Running weight scale 9 / 20 Running weight scale 10 / 20 Running weight scale 11 / 20 Running weight scale 12 / 20 Running weight scale 13 / 20 Running weight scale 14 / 20 Running weight scale 15 / 20 Running weight scale 16 / 20 Running weight scale 17 / 20 Running weight scale 18 / 20 Running weight scale 19 / 20 Running weight scale 20 / 20 ```python # Plot results of weight scale experiment best_train_accs, bn_best_train_accs = [], [] best_val_accs, bn_best_val_accs = [], [] final_train_loss, bn_final_train_loss = [], [] for ws in weight_scales: best_train_accs.append(max(solvers_ws[ws].train_acc_history)) bn_best_train_accs.append(max(bn_solvers_ws[ws].train_acc_history)) best_val_accs.append(max(solvers_ws[ws].val_acc_history)) bn_best_val_accs.append(max(bn_solvers_ws[ws].val_acc_history)) final_train_loss.append(np.mean(solvers_ws[ws].loss_history[-100:])) bn_final_train_loss.append(np.mean(bn_solvers_ws[ws].loss_history[-100:])) plt.subplot(3, 1, 1) plt.title('Best val accuracy vs weight initialization scale') plt.xlabel('Weight initialization scale') plt.ylabel('Best val accuracy') plt.semilogx(weight_scales, best_val_accs, '-o', label='baseline') plt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm') plt.legend(ncol=2, loc='lower right') plt.subplot(3, 1, 2) plt.title('Best train accuracy vs weight initialization scale') plt.xlabel('Weight initialization scale') plt.ylabel('Best training accuracy') plt.semilogx(weight_scales, best_train_accs, '-o', label='baseline') plt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm') plt.legend() plt.subplot(3, 1, 3) plt.title('Final training loss vs weight initialization scale') plt.xlabel('Weight initialization scale') plt.ylabel('Final training loss') plt.semilogx(weight_scales, final_train_loss, '-o', label='baseline') plt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm') plt.legend() plt.gca().set_ylim(1.0, 3.5) plt.gcf().set_size_inches(15, 15) plt.show() ``` ## Inline Question 1: Describe the results of this experiment. How does the scale of weight initialization affect models with/without batch normalization differently, and why? 
## Answer: [FILL THIS IN]:The weight initialization scale of 10^-1 is a turning point.As we can see, with_bn is more stable than without_bn using different weight initialization scales. # Batch normalization and batch size We will now run a small experiment to study the interaction of batch normalization and batch size. The first cell will train 6-layer networks both with and without batch normalization using different batch sizes. The second layer will plot training accuracy and validation set accuracy over time. ```python def run_batchsize_experiments(normalization_mode): np.random.seed(231) # Try training a very deep net with batchnorm hidden_dims = [100, 100, 100, 100, 100] num_train = 1000 small_data = { 'X_train': data['X_train'][:num_train], 'y_train': data['y_train'][:num_train], 'X_val': data['X_val'], 'y_val': data['y_val'], } n_epochs=10 weight_scale = 2e-2 batch_sizes = [5,10,50] lr = 10**(-3.5) solver_bsize = batch_sizes[0] print('No normalization: batch size = ',solver_bsize) model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=None) solver = Solver(model, small_data, num_epochs=n_epochs, batch_size=solver_bsize, update_rule='adam', optim_config={ 'learning_rate': lr, }, verbose=False) solver.train() bn_solvers = [] for i in range(len(batch_sizes)): b_size=batch_sizes[i] print('Normalization: batch size = ',b_size) bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=normalization_mode) bn_solver = Solver(bn_model, small_data, num_epochs=n_epochs, batch_size=b_size, update_rule='adam', optim_config={ 'learning_rate': lr, }, verbose=False) bn_solver.train() bn_solvers.append(bn_solver) return bn_solvers, solver, batch_sizes batch_sizes = [5,10,50] bn_solvers_bsize, solver_bsize, batch_sizes = run_batchsize_experiments('batchnorm') ``` No normalization: batch size = 5 Normalization: batch size = 5 Normalization: batch size = 10 Normalization: batch size = 50 ```python plt.subplot(2, 1, 1) plot_training_history('Training accuracy (Batch Normalization)','Epoch', solver_bsize, bn_solvers_bsize, \ lambda x: x.train_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes) plt.subplot(2, 1, 2) plot_training_history('Validation accuracy (Batch Normalization)','Epoch', solver_bsize, bn_solvers_bsize, \ lambda x: x.val_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes) plt.gcf().set_size_inches(15, 10) plt.show() ``` ## Inline Question 2: Describe the results of this experiment. What does this imply about the relationship between batch normalization and batch size? Why is this relationship observed? ## Answer: [FILL THIS IN]: When the batch size is small, without_bn may have a good reslut than with_bn. But when the batch size becomes larger, as the increasment of batch size, the train/valiadation acc will be high. # Layer Normalization <span class="mark">Batch normalization has proved to be effective in making networks easier to train, but the dependency on batch size makes it less useful in complex networks which have a cap on the input batch size due to hardware limitations.</span> Several alternatives to batch normalization have been proposed to mitigate this problem; one such technique is Layer Normalization [2]. Instead of normalizing over the batch, we normalize over the features. In other words, when using Layer Normalization, each feature vector corresponding to a single datapoint is normalized based on the sum of all terms within that feature vector. 
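To make the batch-versus-layer distinction concrete, here is a small NumPy illustration (not part of the assignment code) of the two normalization axes on an $(N, D)$ activation matrix:

```python
import numpy as np

x = np.random.randn(4, 5)   # a tiny (N, D) block of activations

# Batch normalization statistics: per feature, across the batch (axis 0).
bn_mu, bn_sigma = x.mean(axis=0), x.std(axis=0)
x_bn = (x - bn_mu) / bn_sigma            # each column now has mean 0, std 1

# Layer normalization statistics: per datapoint, across its features (axis 1).
ln_mu = x.mean(axis=1, keepdims=True)
ln_sigma = x.std(axis=1, keepdims=True)
x_ln = (x - ln_mu) / ln_sigma            # each row now has mean 0, std 1

print(x_bn.mean(axis=0), x_bn.std(axis=0))
print(x_ln.mean(axis=1), x_ln.std(axis=1))
```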
[2] [Ba, Jimmy Lei, Jamie Ryan Kiros, and Geoffrey E. Hinton. "Layer Normalization." stat 1050 (2016): 21.](https://arxiv.org/pdf/1607.06450.pdf) ## Inline Question 3: Which of these data preprocessing steps is analogous to batch normalization, and which is analogous to layer normalization? 1. Scaling each image in the dataset, so that the RGB channels for each row of pixels within an image sums up to 1. 2. Scaling each image in the dataset, so that the RGB channels for all pixels within an image sums up to 1. 3. Subtracting the mean image of the dataset from each image in the dataset. 4. Setting all RGB values to either 0 or 1 depending on a given threshold. ## Answer: [FILL THIS IN]1 2 # Layer Normalization: Implementation Now you'll implement layer normalization. This step should be relatively straightforward, as conceptually the implementation is almost identical to that of batch normalization. <span class="mark">One significant difference though is that for layer normalization, <span class="burk">we do not keep track of the moving moments</span>, and <span class="burk">the testing phase is identical to the training</span> phase, where the mean and variance are directly calculated per datapoint</span>. Here's what you need to do: * In `cs231n/layers.py`, implement the forward pass for layer normalization in the function `layernorm_backward`. Run the cell below to check your results. * In `cs231n/layers.py`, implement the backward pass for layer normalization in the function `layernorm_backward`. Run the second cell below to check your results. * Modify `cs231n/classifiers/fc_net.py` to add layer normalization to the `FullyConnectedNet`. When the `normalization` flag is set to `"layernorm"` in the constructor, you should insert a layer normalization layer before each ReLU nonlinearity. Run the third cell below to run the batch size experiment on layer normalization. ```python # Check the training-time forward pass by checking means and variances # of features both before and after layer normalization # Simulate the forward pass for a two-layer network np.random.seed(231) N, D1, D2, D3 =4, 50, 60, 3 X = np.random.randn(N, D1) W1 = np.random.randn(D1, D2) W2 = np.random.randn(D2, D3) a = np.maximum(0, X.dot(W1)).dot(W2) print('Before layer normalization:') print_mean_std(a,axis=1) gamma = np.ones(D3) beta = np.zeros(D3) # Means should be close to zero and stds close to one print('After layer normalization (gamma=1, beta=0)') a_norm, _ = layernorm_forward(a, gamma, beta, {'mode': 'train'}) print_mean_std(a_norm,axis=1) gamma = np.asarray([3.0,3.0,3.0]) beta = np.asarray([5.0,5.0,5.0]) # Now means should be close to beta and stds close to gamma print('After layer normalization (gamma=', gamma, ', beta=', beta, ')') a_norm, _ = layernorm_forward(a, gamma, beta, {'mode': 'train'}) print_mean_std(a_norm,axis=1) ``` Before layer normalization: means: [-59.06673243 -47.60782686 -43.31137368 -26.40991744] stds: [10.07429373 28.39478981 35.28360729 4.01831507] After layer normalization (gamma=1, beta=0) means: [ 4.81096644e-16 -7.40148683e-17 2.22044605e-16 -5.92118946e-16] stds: [0.99999995 0.99999999 1. 0.99999969] After layer normalization (gamma= [3. 3. 3.] , beta= [5. 5. 5.] ) means: [5. 5. 5. 5.] 
stds: [2.99999985 2.99999998 2.99999999 2.99999907] ```python # Gradient check batchnorm backward pass np.random.seed(231) N, D = 4, 5 x = 5 * np.random.randn(N, D) + 12 gamma = np.random.randn(D) beta = np.random.randn(D) dout = np.random.randn(N, D) ln_param = {} fx = lambda x: layernorm_forward(x, gamma, beta, ln_param)[0] fg = lambda a: layernorm_forward(x, a, beta, ln_param)[0] fb = lambda b: layernorm_forward(x, gamma, b, ln_param)[0] dx_num = eval_numerical_gradient_array(fx, x, dout) da_num = eval_numerical_gradient_array(fg, gamma.copy(), dout) db_num = eval_numerical_gradient_array(fb, beta.copy(), dout) _, cache = layernorm_forward(x, gamma, beta, ln_param) dx, dgamma, dbeta = layernorm_backward(dout, cache) #You should expect to see relative errors between 1e-12 and 1e-8 print('dx error: ', rel_error(dx_num, dx)) print('dgamma error: ', rel_error(da_num, dgamma)) print('dbeta error: ', rel_error(db_num, dbeta)) ``` dx error: 1.433615657860454e-09 dgamma error: 4.519489546032799e-12 dbeta error: 2.276445013433725e-12 [autoreload of cs231n.classifiers.fc_net failed: Traceback (most recent call last): File "e:\019_anaconda\envs\cs231n\lib\site-packages\IPython\extensions\autoreload.py", line 244, in check superreload(m, reload, self.old_objects) File "e:\019_anaconda\envs\cs231n\lib\site-packages\IPython\extensions\autoreload.py", line 378, in superreload module = reload(module) File "e:\019_anaconda\envs\cs231n\lib\imp.py", line 314, in reload return importlib.reload(module) File "e:\019_anaconda\envs\cs231n\lib\importlib\__init__.py", line 169, in reload _bootstrap._exec(spec, module) File "<frozen importlib._bootstrap>", line 630, in _exec File "<frozen importlib._bootstrap_external>", line 724, in exec_module File "<frozen importlib._bootstrap_external>", line 860, in get_code File "<frozen importlib._bootstrap_external>", line 791, in source_to_code File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "D:\002_selflearning\005_cs231n\assignment2\cs231n\classifiers\fc_net.py", line 289 w = self.params['W%d' % (lay + 1) ^ IndentationError: expected an indented block ] # Layer Normalization and batch size We will now run the previous batch size experiment with layer normalization instead of batch normalization. Compared to the previous experiment, you should see a markedly smaller influence of batch size on the training history! ```python ln_solvers_bsize, solver_bsize, batch_sizes = run_batchsize_experiments('layernorm') plt.subplot(2, 1, 1) plot_training_history('Training accuracy (Layer Normalization)','Epoch', solver_bsize, ln_solvers_bsize, \ lambda x: x.train_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes) plt.subplot(2, 1, 2) plot_training_history('Validation accuracy (Layer Normalization)','Epoch', solver_bsize, ln_solvers_bsize, \ lambda x: x.val_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes) plt.gcf().set_size_inches(15, 10) plt.show() ``` ## Inline Question 4: When is layer normalization likely to not work well, and why? 1. Using it in a very deep network 2. Having a very small dimension of features 3. Having a high regularization term ## Answer: [FILL THIS IN] 2
58248f79a23e88c308bfa7852baa13742c3fa5ec
441,422
ipynb
Jupyter Notebook
assignment2/BatchNormalization.ipynb
sqzhang-jeremy/CS231nAssignment
67fccaa7b8ac5986dda252bb95ecad5ea926e623
[ "MIT" ]
null
null
null
assignment2/BatchNormalization.ipynb
sqzhang-jeremy/CS231nAssignment
67fccaa7b8ac5986dda252bb95ecad5ea926e623
[ "MIT" ]
null
null
null
assignment2/BatchNormalization.ipynb
sqzhang-jeremy/CS231nAssignment
67fccaa7b8ac5986dda252bb95ecad5ea926e623
[ "MIT" ]
null
null
null
344.591725
115,852
0.923941
true
9,465
Qwen/Qwen-72B
1. YES 2. YES
0.622459
0.826712
0.514594
__label__eng_Latn
0.904472
0.033905
# Taylor problem 5.32 last revised: 12-Jan-2019 by Dick Furnstahl [furnstahl.1@osu.edu] **Replace ### by appropriate expressions.** The equation for an underdamped oscillator, such as a mass on the end of a spring, takes the form $\begin{align} x(t) = e^{-\beta t} [B_1 \cos(\omega_1 t) + B_2 \sin(\omega_1 t)] \end{align}$ where $\begin{align} \omega_1 = \sqrt{\omega_0^2 - \beta^2} \end{align}$ and the mass is released from rest at position $x_0$ at $t=0$. **Goal: plot $x(t)$ for $0 \leq t \leq 20$, with $x_0 = 1$, $\omega_0=1$, and $\beta = 0.$, 0.02, 0.1, 0.3, and 1.** ```python import numpy as np import matplotlib.pyplot as plt ``` ```python def underdamped(t, betas, omega_0=1, x_0=1): """Solution x(t) for an underdamped harmonic oscillator.""" omega_1 = np.sqrt(omega_0**2 - betas**2) B_1 = x B_2 = 5 return np.exp(-betas*t) \ * ( B_1 * np.cos(omega_1*t) + B_2 * np.sin(omega_1*t) ) ``` ```python t_pts = np.arange(0., 20., .01) betas = [0., 0.02, 0.1, 0.3, 0.9999] fig = plt.figure(figsize=(10,6)) # look up "python enumerate" to find out how this works! for i, beta in enumerate(betas): ax = fig.add_subplot(2, 3, i+1) ax.plot(t_pts, underdamped(t_pts, beta), color='blue') ax.set_title(rf'$\beta = {beta:.2f}$') ax.set_xlabel('t') ax.set_ylabel('x(t)') ax.set_ylim(-1.1,1.1) ax.axhline(0., color='black', alpha=0.3) # lightened black zero line fig.tight_layout() plt.show() ``` ## Bonus: Widgetized! ```python from ipywidgets import interact, fixed import ipywidgets as widgets omega_0 = 1. def plot_beta(beta): """Plot function for underdamped harmonic oscillator.""" t_pts = np.arange(0., 20., .01) fig = plt.figure() ax = fig.add_subplot(1,1,1) ax.plot(t_pts, underdamped(t_pts, beta), color='blue') ax.set_title(rf'$\beta = {beta:.2f}$') ax.set_xlabel('t') ax.set_ylabel('x(t)') ax.set_ylim(-1.1,1.1) ax.axhline(0., color='black', alpha=0.3) fig.tight_layout() max_value = omega_0 - 0.0001 interact(plot_beta, beta=widgets.FloatSlider(min=0., max=max_value, step=0.01, value=0., readout_format='.2f', continuous_update=False)); ``` interactive(children=(FloatSlider(value=0.0, continuous_update=False, description='beta', max=0.9999, step=0.0… Now let's allow for complex numbers! This will enable us to take $\beta > \omega_0$. ```python # numpy.lib.scimath version of sqrt handles complex numbers. # numpy exp, cos, and sin already can. import numpy.lib.scimath as smath def all_beta(t, beta, omega_0=1, x_0=1): """Solution x(t) for damped harmonic oscillator, allowing for overdamped as well as underdamped solution. """ omega_1 = smath.sqrt(omega_0**2 - beta**2) return np.real( x_0 * np.exp(-beta*t) \ * (np.cos(omega_1*t) + (beta/omega_1)*np.sin(omega_1*t)) ) ``` ```python from ipywidgets import interact, fixed import ipywidgets as widgets omega_0 = 1. 
def plot_all_beta(beta): """Plot of x(t) for damped harmonic oscillator, allowing for overdamped as well as underdamped cases.""" t_pts = np.arange(0., 20., .01) fig = plt.figure() ax = fig.add_subplot(1,1,1) ax.plot(t_pts, all_beta(t_pts, beta), color='blue') ax.set_title(rf'$\beta = {beta:.2f}$') ax.set_xlabel('t') ax.set_ylabel('x(t)') ax.set_ylim(-1.1,1.1) ax.axhline(0., color='black', alpha=0.3) fig.tight_layout() interact(plot_all_beta, beta=widgets.FloatSlider(min=0., max=2, step=0.01, value=0., readout_format='.2f', continuous_update=False)); ``` interactive(children=(FloatSlider(value=0.0, continuous_update=False, description='beta', max=2.0, step=0.01),… ```python import numpy as np import matplotlib.pyplot as plt ``` ```python def underdamped(t, betas, omega_0=1, x_0=1): """Solution x(t) for an underdamped harmonic oscillator.""" omega_1 = np.sqrt(omega_0**2 - betas**2) B_1 = x_0 B_2 = x_0*betas/omega_1 return np.exp(-betas*t) \ * ( B_1 * np.cos(omega_1*t) + B_2 * np.sin(omega_1*t) ) ``` ```python t_pts = np.arange(0., 20., .01) betas = [0., 0.02, 0.1, 0.3, 0.9999] fig = plt.figure(figsize=(10,6)) # look up "python enumerate" to find out how this works! for i, beta in enumerate(betas): ax = fig.add_subplot(2, 3, i+1) ax.plot(t_pts, underdamped(t_pts, beta), color='blue') ax.set_title(rf'$\beta = {beta:.2f}$') ax.set_xlabel('t') ax.set_ylabel('x(t)') ax.set_ylim(-1.1,1.1) ax.axhline(0., color='black', alpha=0.3) # lightened black zero line fig.tight_layout() plt.show() ``` ```python ```
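As a cross-check of the coefficients used in the completed `underdamped` above ($B_1 = x_0$, $B_2 = x_0\beta/\omega_1$), a short sympy calculation (added here as an illustration, not part of the original notebook) recovers them from the release-from-rest conditions $x(0) = x_0$ and $\dot x(0) = 0$:

```python
import sympy as sp

t, x0, beta, omega_1, B1, B2 = sp.symbols('t x_0 beta omega_1 B_1 B_2', positive=True)
x = sp.exp(-beta * t) * (B1 * sp.cos(omega_1 * t) + B2 * sp.sin(omega_1 * t))

# Released from rest at x_0:  x(0) = x_0  and  x'(0) = 0.
sol = sp.solve([sp.Eq(x.subs(t, 0), x0),
                sp.Eq(sp.diff(x, t).subs(t, 0), 0)], [B1, B2])
print(sol)   # expect {B_1: x_0, B_2: beta*x_0/omega_1}
```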
bc8cb5191bcb31f55bccba98e3cd641b6f27a2e2
79,019
ipynb
Jupyter Notebook
2020_week_2/Taylor_problem_5.32_Copy_CDL.ipynb
CLima86/Physics_5300_CDL
d9e8ee0861d408a85b4be3adfc97e98afb4a1149
[ "MIT" ]
null
null
null
2020_week_2/Taylor_problem_5.32_Copy_CDL.ipynb
CLima86/Physics_5300_CDL
d9e8ee0861d408a85b4be3adfc97e98afb4a1149
[ "MIT" ]
null
null
null
2020_week_2/Taylor_problem_5.32_Copy_CDL.ipynb
CLima86/Physics_5300_CDL
d9e8ee0861d408a85b4be3adfc97e98afb4a1149
[ "MIT" ]
null
null
null
238.009036
38,576
0.911262
true
1,588
Qwen/Qwen-72B
1. YES 2. YES
0.841826
0.746139
0.628119
__label__eng_Latn
0.543295
0.297661
# Euler Problem 243

A positive fraction whose numerator is less than its denominator is called a proper fraction.
For any denominator, d, there will be d−1 proper fractions; for example, with d = 12:

1/12 , 2/12 , 3/12 , 4/12 , 5/12 , 6/12 , 7/12 , 8/12 , 9/12 , 10/12 , 11/12 .

We shall call a fraction that cannot be cancelled down a resilient fraction.
Furthermore we shall define the resilience of a denominator, R(d), to be the ratio of its proper fractions that are resilient; for example, R(12) = 4/11 .

In fact, d = 12 is the smallest denominator having a resilience R(d) < 4/10 .

Find the smallest denominator d, having a resilience R(d) < 15499/94744 .

```python
import heapq
from sympy import primerange

smallprimes = list(primerange(1, 100))
queue = []
heapq.heappush(queue, (2, 1, 1))
r = 1
while queue and r >= 15499/94744:
    n, i, phi = heapq.heappop(queue)
    r = phi / (n - 1)
    p, q = smallprimes[i-1:i+1]
    heapq.heappush(queue, (p*n, i, p*phi))
    heapq.heappush(queue, (q*n, i+1, (q-1)*phi))
print(n)
```

    892371480

```python

```
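As a sanity check on the definition R(d) = φ(d)/(d−1) and on the claim that d = 12 is the smallest denominator with R(d) < 4/10, here is a short brute-force snippet, separate from the heap-based search above:

```python
from sympy import totient, Rational

def resilience(d):
    # k/d is resilient exactly when gcd(k, d) == 1, so R(d) = phi(d) / (d - 1).
    return totient(d) / (d - 1)

# Reproduce the statement above: d = 12 is the smallest denominator with R(d) < 4/10.
first = next(d for d in range(2, 1000) if resilience(d) < Rational(4, 10))
print(first, resilience(first))   # 12, 4/11
```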
0e0cb1ef4ff58e27d6964c19769dfc5bff29e2d3
2,153
ipynb
Jupyter Notebook
Euler 243 - Resilience.ipynb
Radcliffe/project-euler
5eb0c56e2bd523f3dc5329adb2fbbaf657e7fa38
[ "MIT" ]
6
2016-05-11T18:55:35.000Z
2019-12-27T21:38:43.000Z
Euler 243 - Resilience.ipynb
Radcliffe/project-euler
5eb0c56e2bd523f3dc5329adb2fbbaf657e7fa38
[ "MIT" ]
null
null
null
Euler 243 - Resilience.ipynb
Radcliffe/project-euler
5eb0c56e2bd523f3dc5329adb2fbbaf657e7fa38
[ "MIT" ]
null
null
null
25.630952
163
0.542034
true
359
Qwen/Qwen-72B
1. YES 2. YES
0.893309
0.757794
0.676945
__label__eng_Latn
0.97466
0.411101
# Calculus Computations, No. 3: Indefinite Integrals, Part 5 (Integration by Parts)

### Student ID [_________]  Class [_____]  Class No. [_____]  Name [_______________]

##### Integration technique

(1) Integration by parts

$$ \int f'(x)g(x)dx =f(x)g(x)- \int f(x)g'(x)dx $$

```python
from sympy import *
x, n , y, a = symbols('x n y a')
init_printing()
m ='3//5'
i =0
```

### Worked examples 2

```python
expr = x*cos(x)
itg = Integral(expr,x)
i=i+1
print( 'No.',m,'---',i)
itg
```

```python
simplify(itg.doit())
```

```python
expr = log(x)
itg = Integral(expr,x)
i=i+1
print( 'No.',m,'---',i)
itg
```

```python
simplify(itg.doit())
```

```python
expr = x**2*sin(x)
itg = Integral(expr,x)
i=i+1
print( 'No.',m,'---',i)
itg
```

```python
simplify(itg.doit())
```

```python
expr = x**3*exp(x)
itg = Integral(expr,x)
i=i+1
print( 'No.',m,'---',i)
itg
```

```python
simplify(itg.doit())
```

```python
expr = atan(x)
itg = Integral(expr,x)
i=i+1
print( 'No.',m,'---',i)
itg
```

```python
simplify(itg.doit())
```

```python
expr = log(x)**2
itg = Integral(expr,x)
i=i+1
print( 'No.',m,'---',i)
itg
```

```python
simplify(itg.doit())
```

```python
expr = x**3*exp(2*x)
itg = Integral(expr,x)
i=i+1
print( 'No.',m,'---',i)
itg
```

```python
simplify(itg.doit())
```

```python
expr = x**2*sin(x)
itg = Integral(expr,x)
i=i+1
print( 'No.',m,'---',i)
itg
```

```python
simplify(itg.doit())
```

```python

```
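Each result above can be cross-checked by differentiating the antiderivative; the short sympy loop below (added as an illustration, not part of the original worksheet) does this for the worked examples:

```python
from sympy import symbols, integrate, diff, simplify, sin, cos, log, exp, atan

x = symbols('x')

# Differentiating each antiderivative should recover the original integrand.
for f in [x*cos(x), log(x), x**2*sin(x), x**3*exp(x), atan(x), log(x)**2, x**3*exp(2*x)]:
    F = integrate(f, x)
    assert simplify(diff(F, x) - f) == 0

print("all integration-by-parts results verified")
```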
86b9419512e35e790ad2492091b53562250d55f0
43,767
ipynb
Jupyter Notebook
05_20181106-sekibun-5-1-Ex&ans.ipynb
kt-pro-git-1/Calculus_Differential_Equation-public
d5deaf117e6841c4f6ceb53bc80b020220fd4814
[ "MIT" ]
1
2019-07-10T11:33:18.000Z
2019-07-10T11:33:18.000Z
05_20181106-sekibun-5-1-Ex&ans.ipynb
kt-pro-git-1/Calculus_Differential_Equation-public
d5deaf117e6841c4f6ceb53bc80b020220fd4814
[ "MIT" ]
null
null
null
05_20181106-sekibun-5-1-Ex&ans.ipynb
kt-pro-git-1/Calculus_Differential_Equation-public
d5deaf117e6841c4f6ceb53bc80b020220fd4814
[ "MIT" ]
null
null
null
74.943493
2,764
0.814723
true
636
Qwen/Qwen-72B
1. YES 2. YES
0.921922
0.855851
0.789028
__label__roh_Latn
0.233203
0.671509
The classical free term coefficient for a smooth surface is either 0.5 or -0.5, depending on whether we are evaluating from the interior or the exterior. But with a halfspace surface, the free term can end up being either 0 or 1. The integrations below demonstrate this fact. ```python import numpy as np import sympy as sp from tectosaur2 import gauss_rule, refine_surfaces, integrate_term from tectosaur2.laplace2d import double_layer from tectosaur2.elastic2d import elastic_t qx, qw = gauss_rule(12) t = sp.var("t") circle = refine_surfaces( [(t, sp.cos(sp.pi * t), sp.sin(sp.pi * t))], (qx, qw), max_curvature=0.125 ) A = integrate_term(double_layer, circle.pts, circle) print(A[:, 0, :, 0].sum(axis=1)[0]) A2 = integrate_term(double_layer, circle.pts, circle, limit_direction=-1) print(A2[:, 0, :, 0].sum(axis=1)[0]) line = refine_surfaces( [(t, 100 * t, 0.0 * t)], (qx, qw), control_points=np.array([[0, 0, 100, 1]]), ) A3 = integrate_term( double_layer, line.pts, line, singularities=np.array([[-100, 0], [100, 0]]) ) print(A3[:, 0, :, 0].sum(axis=1)[A3.shape[0] // 2]) A4 = integrate_term( double_layer, line.pts, line, limit_direction=-1, singularities=np.array([[-100, 0], [100, 0]]), ) print(A4[:, 0, :, 0].sum(axis=1)[A3.shape[0] // 2]) ``` -0.9999999999999523 -1.687538997430238e-14 /Users/tbent/Dropbox/active/eq/tectosaur2/tectosaur2/integrate.py:174: UserWarning: Some integrals failed to converge during adaptive integration. This an indication of a problem in either the integration or the problem formulation. warnings.warn( -0.5000000000000002 0.5000000000000002 ```python qx, qw = gauss_rule(12) t = sp.var("t") circle = refine_surfaces( [(t, sp.cos(sp.pi * t), sp.sin(sp.pi * t))], (qx, qw), max_curvature=0.125 ) A = integrate_term(elastic_t(0.25), circle.pts, circle) print(A[:, :, :, :].sum(axis=2)[0]) A2 = integrate_term(elastic_t(0.25), circle.pts, circle, limit_direction=-1) print(A2[:, :, :, :].sum(axis=2)[0]) line = refine_surfaces( [(t, 100 * t, 0.0 * t)], (qx, qw), control_points=np.array([[0, 0, 100, 1]]) ) A3 = integrate_term( elastic_t(0.25), line.pts, line, singularities=np.array([[-100, 0], [100, 0]]), ) print(A3[:, :, :, :].sum(axis=2)[A3.shape[0] // 2]) A4 = integrate_term( elastic_t(0.25), line.pts, line, singularities=np.array([[-100, 0], [100, 0]]), limit_direction=-1, ) print(A4[:, :, :, :].sum(axis=2)[A3.shape[0] // 2]) ``` [[-1.00000000e+00 -4.98212582e-15] [ 3.46944695e-16 -1.00000000e+00]] [[-2.88380431e-14 -5.15559817e-15] [-5.00988140e-15 -9.43689571e-15]] /Users/tbent/Dropbox/active/eq/tectosaur2/tectosaur2/integrate.py:180: UserWarning: Some expanded integrals reached maximum expansion order. These integrals may be inaccurate. warnings.warn( [[-5.00000000e-01 7.64249014e-06] [-7.64249014e-06 -5.00000000e-01]] [[ 5.00000000e-01 7.64249014e-06] [-7.64249014e-06 5.00000000e-01]] ```python ```
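For a library-independent sanity check (added here), the interior/exterior jump can also be seen with a bare trapezoid rule applied to the constant-density double-layer potential over the unit circle. The kernel below is written as $(y-p)\cdot n_y/|y-p|^2$ divided by $2\pi$, so an interior point gives $+1$ and an exterior point gives $0$; the tectosaur2 results above evidently use the opposite sign convention.

```python
import numpy as np

# Trapezoid rule on the unit circle for W(p) = (1/2π) ∮ (y - p)·n_y / |y - p|² ds_y
theta = np.linspace(0.0, 2*np.pi, 2000, endpoint=False)
ys = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # quadrature points on the circle
normals = ys.copy()                                     # outward unit normal of the unit circle
ds = 2*np.pi / len(theta)                               # arc-length weight

def W(p):
    d = ys - p
    r2 = np.sum(d*d, axis=1)
    return np.sum(np.sum(d*normals, axis=1) / r2) * ds / (2*np.pi)

print(W(np.array([0.2, 0.1])))   # interior point: ~1
print(W(np.array([2.0, 0.5])))   # exterior point: ~0
```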
05e46c9be174b4c401e3ce6ae4db74f13a26e41e
5,197
ipynb
Jupyter Notebook
experiments/free_terms.ipynb
tbenthompson/BIE_tutorials
02cd56ab7e63e36afc4a10db17072076541aab77
[ "MIT" ]
1
2021-06-18T18:02:55.000Z
2021-06-18T18:02:55.000Z
experiments/free_terms.ipynb
tbenthompson/BIE_tutorials
02cd56ab7e63e36afc4a10db17072076541aab77
[ "MIT" ]
null
null
null
experiments/free_terms.ipynb
tbenthompson/BIE_tutorials
02cd56ab7e63e36afc4a10db17072076541aab77
[ "MIT" ]
1
2021-07-14T19:47:00.000Z
2021-07-14T19:47:00.000Z
29.697143
281
0.533769
true
1,064
Qwen/Qwen-72B
1. YES 2. YES
0.884039
0.782662
0.691904
__label__eng_Latn
0.446097
0.445857
```python import sympy as sm ``` ```python from sympy.physics.vector import init_vprinting init_vprinting(use_latex='mathjax', pretty_print=False) ``` ```python from IPython.display import Image Image('fig/2rp_new.png', width=300) ``` ```python from sympy.physics.mechanics import dynamicsymbols ``` ```python theta1, theta2, l1, l2 = dynamicsymbols('theta1 theta2 l1 l2') theta1, theta2, l1, l2 ``` $\displaystyle \left( \theta_{1}, \ \theta_{2}, \ l_{1}, \ l_{2}\right)$ ```python px = l1*sm.cos(theta1) + l2*sm.cos(theta1 + theta2) # tip psition in x-direction py = l1*sm.sin(theta1) + l2*sm.sin(theta1 + theta2) # tip position in y-direction ``` ```python # evaluating the jacobian matrix a11 = sm.diff(px, theta1) # differentiate px with theta_1 a12 = sm.diff(px, theta2) # differentiate px with theta_2 a21 = sm.diff(py, theta1) # differentiate py with theta_1 a22 = sm.diff(py, theta2) # differentiate py with theta_2 J = sm.Matrix([[a11, a12], [a21, a22]]) # assemble into matix form Jsim = sm.simplify(J) # simplified result Jsim ``` $\displaystyle \left[\begin{matrix}- l_{1} \sin{\left(\theta_{1} \right)} - l_{2} \sin{\left(\theta_{1} + \theta_{2} \right)} & - l_{2} \sin{\left(\theta_{1} + \theta_{2} \right)}\\l_{1} \cos{\left(\theta_{1} \right)} + l_{2} \cos{\left(\theta_{1} + \theta_{2} \right)} & l_{2} \cos{\left(\theta_{1} + \theta_{2} \right)}\end{matrix}\right]$ ```python # Manipulator singularities Jdet = sm.det(Jsim) #determinant of the jacobian matrix detJ = sm.simplify(Jdet) detJ ``` $\displaystyle l_{1} l_{2} \sin{\left(\theta_{2} \right)}$ ```python sm.solve(detJ, (theta2)) # slove detJ for theta_2 ``` $\displaystyle \left[ 0, \ \pi\right]$ ```python # This means the manipulator will be in singular configuration when the angle θ2 is either zero or it is ±π , Image('fig/2rp_sing_config1.png', width=300) # θ2=0 ``` ```python Image('fig/2rp_sing_config2.png', width=300) # θ2=±π ``` ```python ```
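A quick numerical spot-check (added here; the link lengths $l_1 = l_2 = 1$ are an arbitrary choice) confirms that the determinant vanishes exactly at the two singular configurations found above:

```python
import numpy as np

# det(J) should equal l1*l2*sin(theta2) for the 2R planar arm
def detJ_num(theta1, theta2, l1=1.0, l2=1.0):
    J = np.array([
        [-l1*np.sin(theta1) - l2*np.sin(theta1 + theta2), -l2*np.sin(theta1 + theta2)],
        [ l1*np.cos(theta1) + l2*np.cos(theta1 + theta2),  l2*np.cos(theta1 + theta2)],
    ])
    return np.linalg.det(J)

for th2 in [0.0, np.pi/4, np.pi/2, np.pi]:
    print(th2, detJ_num(0.3, th2))   # zero (up to round-off) at theta2 = 0 and theta2 = pi
```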
dc04763a33b59fe50c6495e74aeeb38157e1e830
213,195
ipynb
Jupyter Notebook
Jacobian--Singularity--2RP.ipynb
Eddy-Morgan/Jacobian--Singularity--2RP
37c86725c9b5cc87b926b20ef7eee5b68f9b4fc5
[ "Apache-2.0" ]
null
null
null
Jacobian--Singularity--2RP.ipynb
Eddy-Morgan/Jacobian--Singularity--2RP
37c86725c9b5cc87b926b20ef7eee5b68f9b4fc5
[ "Apache-2.0" ]
null
null
null
Jacobian--Singularity--2RP.ipynb
Eddy-Morgan/Jacobian--Singularity--2RP
37c86725c9b5cc87b926b20ef7eee5b68f9b4fc5
[ "Apache-2.0" ]
null
null
null
795.503731
70,612
0.951861
true
687
Qwen/Qwen-72B
1. YES 2. YES
0.954647
0.890294
0.849917
__label__eng_Latn
0.420717
0.812975
# Práctica del tema 2 Blanca Cano Camarero ```python # import necesarios para la práctica import sympy as sp import numpy as np ## ficheros propios #### De vuelve el polinomio de interpolación de newton import polinomioNewton as pn # polinomioNewton([a-h,a,a+h]) ##### Algunas fórmulas de integración import formulasIntegracion as fs #### cuadratura gaussina import nodosGaussiana as ng ``` # Ejercicio 1 1.- Obtenga mediante interpolación en el espacio $\mathbb{P}_2$ una fórmula para aproximar $f''(a)$ del tipo combinación de $f(a-h)$, $f(a)$ y $f(a+h)$. ```python a,h,z = sp.symbols('a,h,z') x = [a-h, a, a+h] #llamamos a nuestra funcioncilla que nos calcula polinomios de interpolación de newton ## (para ver su código mirar en polinomioNewton.py) p = pn.polinomioNewton(x) ## Finalemente, derivamos pa obtener nuestra fórmula buscada formula=sp.diff(p,z,2).subs({z:a}).simplify() # esta será la fórmula # obtenida esta vez para aproximar la derivada segunda f''(a) print(formula) formula ``` (-2*f(a) + f(a - h) + f(a + h))/h**2 $\displaystyle \frac{- 2 f{\left(a \right)} + f{\left(a - h \right)} + f{\left(a + h \right)}}{h^{2}}$ ## Ejercicio 2 2.- Con la fórmula obtenida en el ejercicio 1, halle una tabla de aproximaciones y errores de $f_1''(2.5)$, siendo $f_1(x)=x^x$, para $h=10^{-i},\; i=1,\ldots,5.$ ```python ## derivada aproximada def d2(f): return (-2*f(a) + f(a - h) + f(a + h))/h**2 def g(x): return x**x ### derivada exacta g2 = sp.lambdify(z,sp.diff(z**z, z, 2), 'numpy') ##### def g2_aprox(formula, g,mia, eh): """Devuelve la derivada aproximada de g con la fórmula "formula" en el punto mia, con h valiendo eh Argumentos: formula: aplicación lineal, con la que este caso aproximamos derivadas g la función de la que queremos calcular la derivada mia: punto en el que geremos calcular a eh: variación de h """ return formula(g).evalf(subs={a:mia,h:10**(-eh)}) ### hacemos una función pa representar la tablilla def tabla(formula,faprox, fexac, nodos, vh): print(f' h | a \t | f(a) aprox \t | error'.expandtabs(25)) print('-'*65) for x in nodos: valor_exacto = fexac(x) for mih in vh: v = g2_aprox(formula,faprox,x,mih) print(f'10^(-{mih}) | {x} \t | {v} \t|{abs(v-valor_exacto)}'.expandtabs(25)) tabla(d2,g,g2,[2.5],[*range(1,6)]) ``` h | a | f(a) aprox | error ----------------------------------------------------------------- 10^(-1) | 2.5 | 40.4205682979584 |0.178903441570895 10^(-2) | 2.5 | 40.2434502309567 |0.00178537456920225 10^(-3) | 2.5 | 40.2416827097676 |0.0000178533801502567 10^(-4) | 2.5 | 40.2416650349212 |1.78533753114607E-7 10^(-5) | 2.5 | 40.2416648581728 |1.78533099415290E-9 ## Ejercicio 3 3.- Sea $f_2(x)=\frac{x^2+40}{x+\sqrt{5x}+7}$. Calcule una tabla que recoja las derivadas de $f_2$ en $x_i=1,2,\ldots,10$, utilizando alguna de las fórmulas de derivación numérica de primer orden obtenidas al inicio de la práctica, con $h=10^{-3}$, y muestre al mismo tiempo el error cometido en cada punto. Repita el ejercicio con la fórmula centrada obtenida para la derivada primera y, finalmente, para la obtenida en el ejercicio 1 (con respecto a la segunda derivada). 
```python def f2(x): return (x**2 +40) / (x+(5*x)**(1/2)+7) ### Vamos a comenzar con la segunda derivada ddf=sp.diff( (z**2 +40) / (z+(5*z)**(1/2)+7), z, 2) ddf=sp.lambdify(z,ddf, 'numpy') x = [*range(1,11)] ### tabla para la segunda derivada print('Las aproximaciones para la segunda derivada son ') tabla(d2,f2,ddf,x,[3]) ## ________Para la fórmula centrada ____ print('\nDerivamos una vez el polinomio de newton, para obtener la primera derivada') formula=sp.diff(p,z,1).subs({z:a}).simplify() # esta será la fórmula # obtenida esta vez para aproximar la derivada segunda f''(a) print(formula) #copiamso en el return la fórmula anterior def df_aprox(f): return (-f(a - h) + f(a + h))/(2*h) #calculamos primera derivada exacta df=sp.diff( (z**2 +40) / (z+(5*z)**(1/2)+7), z, 1) df=sp.lambdify(z,df, 'numpy') print('\n\n Las proximaciones para la primera derivada') tabla(df_aprox,f2,df,x,[3] ) ``` Las aproximaciones para la segunda derivada son h | a | f(a) aprox | error ----------------------------------------------------------------- 10^(-3) | 1 | 0.676265261235160 |1.62949784665578E-7 10^(-3) | 2 | 0.283220380779308 |1.66032020909590E-8 10^(-3) | 3 | 0.168340324507543 |4.57942220255525E-9 10^(-3) | 4 | 0.114907314792673 |1.89762004720873E-9 10^(-3) | 5 | 0.0846224312804510 |9.93457316411650E-10 10^(-3) | 6 | 0.0654364319768901 |6.12947123612706E-10 10^(-3) | 7 | 0.0523721747995802 |4.30544377927333E-10 10^(-3) | 8 | 0.0430109452392692 |3.36394023747744E-10 10^(-3) | 9 | 0.0360420060101230 |2.86374354785401E-10 10^(-3) | 10 | 0.0306970069224294 |2.60408326846484E-10 Derivamos una vez el polinomio de newton, para obtener la primera derivada (-f(a - h) + f(a + h))/(2*h) Las proximaciones para la primera derivada h | a | f(a) aprox | error ----------------------------------------------------------------- 10^(-3) | 1 | -0.633413983453862 |1.41948961474014E-7 10^(-3) | 2 | -0.203730021215820 |2.98523985808874E-8 10^(-3) | 3 | 0.0135536643822118 |1.22135472690327E-8 10^(-3) | 4 | 0.152356375979550 |6.46680242688547E-9 10^(-3) | 5 | 0.250865047976407 |3.92670773674553E-9 10^(-3) | 6 | 0.325234483749073 |2.59700044802358E-9 10^(-3) | 7 | 0.383753087445717 |1.82151538297148E-9 10^(-3) | 8 | 0.431201819322740 |1.33390964940361E-9 10^(-3) | 9 | 0.470566738047861 |1.00977470829378E-9 10^(-3) | 10 | 0.503824069630687 |7.84850628932077E-10 ```python ``` ```python # Algunas de las definidas en formulasSimples.py # declaramos las fórmulas def rectangulo_izquierdo(f,a,b): return f(a)*(b-a) def simpson(f, a, b): return 1/6*(f(a)+4*f((a+b)/2)+f(b))*(b-a) def trapecio(f,a,b): return (f(a)+f(b))/2*(b-a) def formula_compuesta (f,formula,a,b,n): h = (b-a)/n return sum([formula(f,a+i*h,a+(i+1)*h) for i in range(n)]) ``` # Ejericicios 4 y 5 4.- Divida el intervalo $[1,2]$ en 100 partes iguales y aplique las fórmulas del rectángulo, Simpson y trapecio compuestas para aproximar la integral en dicho intervalo de $f_1$. Compare dichos resultados. 5.- Repita el ejercicio 4 para $f_2$. 
```python def f1(x): return x**x print("___Integral para f1___") print(f'Integra por rectángulo izquierdo {formula_compuesta(f1,rectangulo_izquierdo,1,2,100)}') print(f'Integra por trapecio {formula_compuesta(f1,trapecio,1,2,100)}') print(f'Integra por simpson {formula_compuesta(f1,simpson,1,2,100)}') def f2(x): return (x*x+40)/(x+sqrt(5*x)+7) print("___Integral para f2___") print(f'Integra por rectángulo izquierdo {formula_compuesta(f2,rectangulo_izquierdo,1,2,100)}') print(f'Integra por trapecio {formula_compuesta(f2,trapecio,1,2,100)}') print(f'Integra por simpson {formula_compuesta(f2,simpson,1,2,100)}') ``` ___Integral para f1___ Integra por rectángulo izquierdo 2.0354943390855578 Integra por trapecio 2.0504943390855574 Integra por simpson 2.0504462346235286 ___Integral para f2___ Integra por rectángulo izquierdo 3.77852320278209 Integra por trapecio 3.77658469845732 Integra por simpson 3.77658111777024 Comentario: Como podemos observar dan un resultado similar, siend trapecio y simpson muy parecidas entre sí. ## Ejercicio 6 6.- Sea $f_3(x)=x^{15} e^x$ en $[0,2]$. Vamos a dividir el intervalo en $10\times 2^n$ subintervalos, es decir, $10,\,20,\,40,\, 80,\ldots $ y a aplicar la fórmula de Simpson compuesta hasta que la diferencia entre dos aproximaciones consecutivas (por ejemplo, podrían ser con $20$ y $40$ subintervalos) sea menor que $10^{-2}$, dando en tal caso por buena la última aproximación obtenida. Programe y calcule dicha aproximación. Compare ambas aproximaciones con el valor exacto. ```python def f3(x): return x**(15) * np.exp(x) ## cálculo de valor exacto por regla de barrow x = sp.symbols("x") f3s = x**15 * sp.exp(x) If3s = sp.integrate(f3s,x) exacto = If3s.subs(x,2)-If3s.subs(x,0) print(f'Aplicando la regla de barrow el valor es {exacto.evalf()}') diferencia = 9 y = fs.formula_compuesta(f3,fs.simpson,0,2,10) n= 1 while diferencia > 10**(-2): ys = fs.formula_compuesta(f3,fs.simpson,0,2,10*2**(n)) n += 1 diferencia = abs(ys-y) y = ys print (f'La a integral aproximada (en la iteración {n}) es {y}') print (f'El error es de {abs(exacto-y).evalf()}') ``` Aplicando la regla de barrow el valor es 27062.7024138996 La a integral aproximada (en la iteración 6) es 27062.702480891196 El error es de 0.0000213214760707824 ## Ejercicio 7 7.- Calcule las fórmulas gaussianas con $2$ y $3$ nodos,en el intervalo $[-1,1]$, siendo la función peso el valor absoluto de la variable. Aplíquelas para aproximar la función $x\; e^x$ en $[-1,1]$ y compare los resultados con el valor exacto (organizando los cálculos de forma adecuada). 
```python #calcula nodos def nodos_gaussiana (a, b, n,w): """ Devuelve los nodos de la fórmula de la gaussiana a,b: intervalo [a,b] de definición n: número de nodos w: función peso """ x = sp.Symbol("x") c = list(sp.symbols('c0:'+ str(n))) pi = np.prod([ (x - c[i]) for i in range(n)]) #print('Voy a computar la integral (esto puede tardar)...') I = [sp.integrate(pi*w(x)*x**i,(x, a, b)) for i in range(n)] #print(f'El valor de la integrales es {I}') #print('la solución buscada es: ') s = sp.solve(I,c) return list(s[0]) ``` ```python a = -1 b= 1 n = 3 def w(x): return abs(x)#sp.sqrt(1/(1-x**2)) z=sp.Symbol('z') for i in range(2,n+1): q =pn.polinomioNewton(nodos_gaussiana(a,b,i,w)) print(f'fóruma para {i} nodos') print(sp.integrate(q,(z,a,b))) f_exacta = x*sp.exp(x) print(f'\nEl valor exacto es {sp.N(sp.integrate(f_exacta,(x,a,b)))}') ## vemos ahora a por el valro aproximado def f_aprox(x): return x*np.exp(x) def dosNodos(f): return f(-np.sqrt(2)/2) + f(np.sqrt(2)/2) def tresNodos(f): return f(0) + f(-np.sqrt(6)/3)/2 + f(np.sqrt(6)/3)/2 print(f'Para la aproximación de dos nodos el valor es {dosNodos(f_aprox)}') print(f'Para la aproximación de tres nodos el valor es {tresNodos(f_aprox)}') ``` fóruma para 2 nodos f(-sqrt(2)/2) + f(sqrt(2)/2) fóruma para 3 nodos f(0) + f(-sqrt(6)/3)/2 + f(sqrt(6)/3)/2 El valor exacto es 0.735758882342885 Para dos nodos el valor es 1.0854416412726071 Para tres nodos el valor es 0.7432494342785245 ## Ejercicio 8 8.- Programar las técnicas de integración de Romberg y adaptativa, para después aplicarlas a la aproximación de la siguiente integral $$\int_a^b p(x)\, dx$$ siendo $\;a=\displaystyle\min_{0\leq i\leq 7}{d_i}$, $\;b=\displaystyle\max_{0\leq i\leq 7}{d_i}$ y $$p(x)=d_0 + d_1 x + d_2 x^2 + d_3 x^3+ d_4 x^4 + d_5 x^5 + d_6 x^6 + d_7 x^7 $$ (siendo $d_0, d_1, \ldots, d_7$ los dígitos). 
```python ## El polinomio pedido es: coef = [2,5,3,7,8,2,9,1] a = min(coef) b = max(coef) def p(x): return sum([coef[i]*x*i for i in range(len(coef))]) # fórmula en fichero formulasIntegracion.py ### ADAPTATIVA ####### print(f'El resultado de la adaptativa es {fs.adaptativa(fs.simpson , p, a , b)}') ### ROMBER def T2n(tn,f,h,a,n): """Fórmula recursiva trapecio tn: valor de T_n f: función a evaluar h: diferencia partición h_{n+1}=h_{n}/2, por eso los nodos por calcular son x_i+h_{n+1} = (a + m*h_n)+h_{n+1} = (a + m*2h_{n+1})+h_{n+1} a: intervalo inferior partición n: proviene de h{n+1}=(b-a)/2^{n+1} """ return 1/2*(tn)+h*sum(map((lambda e: f(a+(e*2+1)*h )),[*range(n)])) def Romberg(f,a,b,tol=(10**(-5)),length=20): """ Integrar calculada por el método de Romberg, devuelve el número ancho de columan exploradas f: función sobre la que integrar a: intervalo inferior sobre el que integrar b: intervalo superior integración length: Ancho de la tabla maximo tol: diferencia que debe haber entre R(n-1,n-1) y R(n,n) """ #matriz que contentendrá datos integración Romberg l = [ [None]*(i+1) for i in range(length)] # ojo que viene dada por l[fila][columan] #Calulamos R(N,0), es decir la primera columa de la tabla l[0][0]=fs.formula_compuesta(f,fs.trapecio, a, b,1) h = (b-a) n=1 for i in range(1,length): h=h/2 n = n*2 l[i][0] = T2n(l[i-1][0],f,h,a,n) ## camos a proceder ahora con los elementos de la forma R(j,k) columna = 1 diferencia = 99999 #for columna in range(1,length): while( columna < length and diferencia > tol): for fila in range(columna, length): l[fila][columna] = (4**columna*l[fila][columna -1] - l[fila - 1][columna -1] ) / (4**columna -1) diferencia = abs(l[columna][columna]-l[columna-1][columna-1]) columna += 1 columna -= 1 return (l[columna][columna],columna) r = Romberg(p,a,b) print(f'El resultado de mi fórmula de romber es {r[0]} tras {r[1]} iteraciones') ``` El resultado de la adaptativa es 5400.0 El resultado de mi fórmula de romber es 19439.983687473403 tras 19 iteraciones ```python ```
c79665ebdf0d16d3595e10571bdbe9e7740eafad
20,064
ipynb
Jupyter Notebook
practica2-integracia_numeria/Entregable.ipynb
BlancaCC/metodosNumericosII
73fd6d8bc202a34af0d95d9ac17a3b671895314d
[ "MIT" ]
null
null
null
practica2-integracia_numeria/Entregable.ipynb
BlancaCC/metodosNumericosII
73fd6d8bc202a34af0d95d9ac17a3b671895314d
[ "MIT" ]
null
null
null
practica2-integracia_numeria/Entregable.ipynb
BlancaCC/metodosNumericosII
73fd6d8bc202a34af0d95d9ac17a3b671895314d
[ "MIT" ]
null
null
null
35.2
482
0.516697
true
5,029
Qwen/Qwen-72B
1. YES 2. YES
0.879147
0.908618
0.798808
__label__spa_Latn
0.778292
0.694232
# Local search

In unconstrained optimization, we wish to solve problems of the form

\begin{align}
\text{minimize} & & E(w)
\end{align}

* The local search algorithms have the form:

\begin{align}
w_0 & = \text{some initial value} \\
\text{for}\;\; & \tau = 1, 2,\dots \\
& w_\tau = w_{\tau-1} + g_\tau
\end{align}

Here, $g_\tau$ is a search direction. The loop is executed until a convergence condition is satisfied or the maximum number of iterations is reached. The algorithm iteratively searches for solutions that achieve a lower objective value by moving in the search direction.

# Gradient Descent

* Gradient descent is a popular local search method with the search direction chosen as the negative gradient direction:

\begin{align}
g_\tau & = - \eta \nabla E(w_{\tau-1})
\end{align}

* When the gradient vanishes, i.e., $\nabla E(w) = 0$, the algorithm does not make any progress. Such points are also called fixed points.

* The iterates, under certain conditions, converge to the minimum $w^* = \arg\min_{w} E(w)$. A natural question here is finding the conditions for guaranteed convergence to a fixed point and the rate, that is, how fast convergence happens as a function of the number of iterations.

* The parameter $\eta$ is called the *learning rate*, to be chosen depending on the problem. If the learning rate is not properly chosen, the algorithm can (and will) diverge.

* There is a well developed theory on how to choose $\eta$ adaptively to speed up convergence.

* Even for minimizing quadratic objective functions, or equivalently for solving linear systems, gradient descent can have quite poor convergence properties: it takes a lot of iterations to find the minimum. However, it is applicable as a practical method in many problems as it requires only the calculation of the gradient.

* For maximization problems

\begin{align}
\text{maximize} & & E(w)
\end{align}

we just move in the direction of the gradient, so the search direction is $g_\tau = \eta \nabla E(w_{\tau-1})$.

```python
%matplotlib inline
import numpy as np
import matplotlib as mpl
import matplotlib.pylab as plt
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets
from IPython.display import clear_output, display, HTML
from matplotlib import rc

mpl.rc('font',**{'size': 20, 'family':'sans-serif','sans-serif':['Helvetica']})
mpl.rc('text', usetex=True)

import time
import numpy as np

y = np.array([7.04, 7.95, 7.58, 7.81, 8.33, 7.96, 8.24, 8.26, 7.84, 6.82, 5.68])
x = np.array(np.linspace(-1,1,11))
N = len(x)

# Design matrix
#A = np.vstack((np.ones(N), x, x**2, x**3)).T
degree = 9
A = np.hstack([np.power(x.reshape(N,1),i) for i in range(degree+1)])

# Learning rate
eta = 0.001

# initial parameters
w = np.array(np.random.randn(degree+1))

W = []
Err = []
for epoch in range(50000):
    # Error
    err = y-A.dot(w)

    # Total error
    E = np.sum(err**2)/N

    # Gradient
    dE = -2.*A.T.dot(err)/N

    if epoch%100 == 0:
        #print(epoch,':',E)
        # print(w)
        W.append(w)
        Err.append(E)

    # Perform one descent step
    w = w - eta*dE
```

The following cell demonstrates interactively the progress of plain gradient descent and how its solution differs from the optimum found by solving the corresponding least squares problem.
```python fig = plt.figure(figsize=(5,5)) left = -1.5 right = 1.5 xx = np.linspace(left,right,50) AA = np.hstack((np.power(xx.reshape(len(xx),1),i) for i in range(degree+1))) # Find best A_orth, R = np.linalg.qr(A) w_orth, res, rank, s = np.linalg.lstsq(A_orth, y) w_star = np.linalg.solve(R, w_orth) yy = AA.dot(w_star) #ax.set_xlim((2,15)) #dots = plt.Line2D(x,y, linestyle='', markerfacecolor='b',marker='o', alpha=0.5, markersize=5) #ax.add_line(dots) plt.plot(x,y, linestyle='', markerfacecolor='b',marker='o', alpha=0.5, markersize=5) plt.plot(xx, yy, linestyle=':', color='k', alpha=0.3) ln = plt.Line2D(xdata=[], ydata=[], linestyle='-',linewidth=2) ax = fig.gca() ax.add_line(ln) plt.close(fig) ax.set_xlim((left,right)) ax.set_ylim((5,9)) def plot_gd(iteration=0): w = W[iteration] f = AA.dot(w) #print(w) ln.set_ydata(f) ln.set_xdata(xx) ax.set_title('$E = '+str(Err[iteration])+'$') display(fig) res = interact(plot_gd, iteration=(0,len(W)-1)) ``` <p>Failed to display Jupyter Widget of type <code>interactive</code>.</p> <p> If you're reading this message in Jupyter Notebook or JupyterLab, it may mean that the widgets JavaScript is still loading. If this message persists, it likely means that the widgets JavaScript library is either not installed or not enabled. See the <a href="https://ipywidgets.readthedocs.io/en/stable/user_install.html">Jupyter Widgets Documentation</a> for setup instructions. </p> <p> If you're reading this message in another notebook frontend (for example, a static rendering on GitHub or <a href="https://nbviewer.jupyter.org/">NBViewer</a>), it may mean that your frontend doesn't currently support widgets. </p> Plotting the Error Surface ```python %matplotlib inline import scipy as sc import numpy as np import pandas as pd import matplotlib as mpl import matplotlib.pylab as plt df_arac = pd.read_csv(u'data/arac.csv',sep=';') #df_arac[['Year','Car']] BaseYear = 1995 x = np.matrix(df_arac.Year[0:]).T-BaseYear y = np.matrix(df_arac.Car[0:]).T/1000000. plt.plot(x+BaseYear, y, 'o-') plt.xlabel('Yil') plt.ylabel('Araba (Millions)') plt.show() ``` ```python from itertools import product def Error_Surface(y, A, left=0, right=1, bottom=0, top=1, step=0.1): W0 = np.arange(left,right, step) W1 = np.arange(bottom,top, step) ErrSurf = np.zeros((len(W1),len(W0))) for i,j in product(range(len(W1)), range(len(W0))): e = y - A*np.matrix([W0[j], W1[i]]).T ErrSurf[i,j] = e.T*e/2 return ErrSurf ``` ```python BaseYear = 1995 x = np.matrix(df_arac.Year[0:]).T-BaseYear y = np.matrix(df_arac.Car[0:]).T/1000000. 
# Setup the vandermonde matrix N = len(x) A = np.hstack((np.ones((N,1)), x)) left = -5 right = 15 bottom = -4 top = 6 step = 0.05 ErrSurf = Error_Surface(y, A, left=left, right=right, top=top, bottom=bottom) plt.figure(figsize=(10,10)) #plt.imshow(ErrSurf, interpolation='nearest', # vmin=0, vmax=10000,origin='lower', # extent=(left,right,bottom,top), cmap='jet') plt.contour(ErrSurf, vmin=0, vmax=10000,origin='lower', levels=np.linspace(100,5000,10), extent=(left,right,bottom,top), cmap='jet') plt.xlabel('$w_0$') plt.ylabel('$w_1$') plt.title('Error Surface') #plt.colorbar(orientation='horizontal') plt.show() ``` ### Animation of Gradient descent ```python %matplotlib inline import matplotlib.pylab as plt import time from IPython import display import numpy as np # Setup the Design matrix N = len(x) A = np.hstack((np.ones((N,1)), x)) # Starting point w = np.matrix('[15; -6]') # Number of iterations EPOCH = 200 # Learning rate: The following is the largest possible fixed rate for this problem #eta = 0.0001696 eta = 0.0001696 fig = plt.figure() ax = fig.gca() plt.plot(x+BaseYear, y, 'o-') plt.xlabel('x') plt.ylabel('y') f = A.dot(w) ln = plt.Line2D(xdata=x+BaseYear, ydata=f, linestyle='-',linewidth=2,color='red') ax.add_line(ln) for epoch in range(EPOCH): f = A.dot(w) err = y-f ln.set_xdata(x) ln.set_ydata(f) E = np.sum(err.T*err)/2 dE = -A.T.dot(err) # if epoch%1 == 0: # print(epoch,':',E) # print(w) w = w - eta*dE ax.set_title(E) display.clear_output(wait=True) display.display(plt.gcf()) time.sleep(0.1) ``` ```python # An implementation of Gradient Descent for solving linear a system # Setup the Design matrix N = len(x) A = np.hstack((np.ones((N,1)), x)) # Starting point w = np.matrix('[15; -6]') # Number of iterations EPOCH = 5000 # Learning rate: The following is the largest possible fixed rate for this problem #eta = 0.00016 eta = 0.000161 Error = np.zeros((EPOCH)) W = np.zeros((2,EPOCH)) for tau in range(EPOCH): # Calculate the error e = y - A*w # Store the intermediate results W[0,tau] = w[0] W[1,tau] = w[1] Error[tau] = (e.T*e)/2 # Compute the gradient descent step g = -A.T*e w = w - eta*g #print(w.T) w_star = w plt.figure(figsize=(8,8)) plt.imshow(ErrSurf, interpolation='nearest', vmin=0, vmax=1000,origin='lower', extent=(left,right,bottom,top)) plt.xlabel('w0') plt.ylabel('w1') ln = plt.Line2D(W[0,:300:1], W[1,:300:1], marker='o',markerfacecolor='w') plt.gca().add_line(ln) ln = plt.Line2D(w_star[0], w_star[1], marker='x',markerfacecolor='w') plt.gca().add_line(ln) plt.show() plt.figure(figsize=(8,3)) plt.semilogy(Error) plt.xlabel('Iteration tau') plt.ylabel('Error') plt.show() ``` * The illustration shows the convergence of GD with learning rate near the limit where the convergence is oscillatory. * $\eta$, Learning rate is a parameter of the algorithm * $w$, the variable are the parameters of the Model * $y$: Targets * $x$: Inputs, # Accelerating Gradient descent ## Momentum methods, a.k.a., heavy ball \begin{align} p(\tau) & = \nabla E(w(\tau-1)) + \beta p(\tau-1) \\ w(\tau) & = w(\tau-1) - \alpha p(\tau) \end{align} When $\beta=0$, we recover gradient descent. 
```python %matplotlib inline import matplotlib.pylab as plt from notes_utilities import pnorm_ball_line import time from IPython import display import numpy as np #y = np.array([7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]) y = np.array([8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]) #y = np.array([9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]) x = np.array([10., 8., 13., 9., 11., 14., 6., 4., 12., 7., 5.]) N = len(x) # Design matrix A = np.vstack((np.ones(N), x)).T w_best, E, rank, s = np.linalg.lstsq(A, y) err = y-A.dot(w_best) E_min = np.sum(err**2)/N def inspect_momentum(alpha = 0.005, beta = 0.97): ln = pnorm_ball_line(mu=w_best, A=np.linalg.cholesky(np.linalg.inv(A.T.dot(A))),linewidth=1) ln2 = pnorm_ball_line(mu=w_best, A=4*np.linalg.cholesky(np.linalg.inv(A.T.dot(A))),linewidth=1) # initial parameters w0 = np.array([2., 1.]) w = w0.copy() p = np.zeros(2) EPOCHS = 100 W = np.zeros((2,EPOCHS)) for epoch in range(EPOCHS): # Error err = y-A.dot(w) W[:,epoch] = w # Mean square error E = np.sum(err**2)/N # Gradient dE = -2.*A.T.dot(err)/N p = dE + beta*p # if epoch%10 == 1: # print(epoch,':',E) # print(w) # Perfom one descent step w = w - alpha*p # print(E_min) plt.plot(W[0,:],W[1,:],'.-b') plt.plot(w_best[0],w_best[1],'ro') plt.plot(w0[0],w0[1],'ko') plt.xlim((1.8,4.3)) plt.ylim((0,1.2)) plt.title('$\\alpha = $'+str(alpha)+' $\\beta = $'+str(beta)) plt.gca().add_line(ln) plt.gca().add_line(ln2) plt.show() inspect_momentum(alpha=0.0014088, beta=0.95) ``` ```python %matplotlib inline from __future__ import print_function from ipywidgets import interact, interactive, fixed import ipywidgets as widgets import matplotlib.pylab as plt from IPython.display import clear_output, display, HTML interact(inspect_momentum, alpha=(0, 0.02, 0.001), beta=(0, 0.99, 0.001)) ``` <p>Failed to display Jupyter Widget of type <code>interactive</code>.</p> <p> If you're reading this message in Jupyter Notebook or JupyterLab, it may mean that the widgets JavaScript is still loading. If this message persists, it likely means that the widgets JavaScript library is either not installed or not enabled. See the <a href="https://ipywidgets.readthedocs.io/en/stable/user_install.html">Jupyter Widgets Documentation</a> for setup instructions. </p> <p> If you're reading this message in another notebook frontend (for example, a static rendering on GitHub or <a href="https://nbviewer.jupyter.org/">NBViewer</a>), it may mean that your frontend doesn't currently support widgets. </p> <function __main__.inspect_momentum> # Advanced Material https://distill.pub/2017/momentum/ http://blog.mrtz.org/2013/09/07/the-zen-of-gradient-descent.html A great talk by Ben Recht: https://simons.berkeley.edu/talks/ben-recht-2013-09-04 Backpropagation http://www.offconvex.org/2016/12/20/backprop/ ## Analysis of convergence of Gradient descent for a quadratic function Recall that the error function we minimize is $$ E(w) = \frac{1}{2} (y-Aw)^T(y-Aw) = \frac{1}{2}(y^\top y - 2 y^\top A w + w^\top A^\top A w) $$ The gradient at the point $w$ will be denoted as $\nabla E(w) = g(w)$ where $$g(w) = -A^\top (y - Aw) = A^\top A w - A^\top y$$ Moreover, the gradient at the minimum will vanish: $$ g(w_\star) = 0 $$ Indeed, we can solve $$0 = A^\top (Aw_\star - y)$$ as $$w_\star = (A^\top A)^{-1}A^\top y $$ but this is not our point. 
For a constant learning rate $\eta$, gradient descent executes the following iteration $$ w_t = w_{t-1} - \eta g(w_{t-1}) = w_{t-1} - \eta A^\top (Aw_{t-1} - y) $$ $$ w_t = (I - \eta A^\top A) w_{t-1} + \eta A^\top y $$ This is a fixed point equation of form $$ w_t = T(w_{t-1}) $$ where $T$ is an affine transformation. We will assume that $T$ is a contraction, i.e. for any two different parameters $w$ and $w'$ in the domain we have $$ \| T(w) - T(w') \| \leq L_\eta \|w-w' \| $$ where $L_\eta < 1$, then the distance shrinks. Hence the mapping converges to a fixed point (this is a consequence of a deeper result in analysis called the Brouwer fixed-point theorem (https://en.0wikipedia.org/wiki/Brouwer_fixed-point_theorem)) We will consider in particular the distance between the optimum and the current point $w(t)$ $$ \| T(w_t) - T(w_\star) \| \leq L_\eta \|w_t - w_\star \| $$ But we have $T(w_\star) = w_\star$ and $w_t = T(w_{t-1})$ so $\|w_t - w_\star \| = \|T(w_{t-1}) - T(w_\star) \|$. \begin{align} \| T(w_t) - T(w_\star) \| & \leq L_\eta \|T(w_{t-1}) - T(w_\star) \| \\ & \leq L^2_\eta \|T(w_{t-2}) - T(w_\star) \| \\ \vdots \\ & \leq L^{t+1}_\eta \| w_{0} - w_\star \| \end{align} $$ T(w) = (I - \eta A^\top A) w + \eta A^\top y $$ $$ T(w_\star) = (I - \eta A^\top A) w_\star + \eta A^\top y $$ $$ \| T(w) - T(w') \| = \| (I - \eta A^\top A) (w-w') \| \leq \| I - \eta A^\top A \| \| w-w' \| $$ When the norm of the matrix $\| I - \eta A^\top A \| < 1$ we have convergence. Here we take the operator norm, i.e., the magnitude of the largest eigenvalue. Below, we plot the absolute value of the maximum eigenvalues of $I - \eta A^\top A$ as a function of $\eta$. ```python left = 0.0000 right = 0.015 N = 1000 ETA = np.linspace(left,right,N) def compute_largest_eig(ETA, A): LAM = np.zeros(N) D = A.shape[1] n = A.shape[0] for i,eta in enumerate(ETA): #print(eta) lam,v = np.linalg.eig(np.eye(D) - 2*eta*A.T.dot(A)/n) LAM[i] = np.max(np.abs(lam)) return LAM # This number is L_\eta LAM = compute_largest_eig(ETA, A) plt.plot(ETA, LAM) #plt.plot(ETA, np.ones((N,1))) #plt.gca().set_ylim([0.98, 1.02]) plt.ylim([0.997,1.01]) plt.xlabel('eta') plt.ylabel('absolute value of the largest eigenvalue') plt.show() ``` ```python plt.semilogy(ETA,LAM) plt.ylim([0.997,1]) plt.show() ``` If $E$ is twice differentiable, contractivity means that $E$ is convex. For $t>0$ \begin{align} \|T(x + t \Delta x) - T(x) \| & \leq \rho \|t \Delta x\| \\ \frac{1}{t} \|T(x + t \Delta x) - T(x) \| &\leq \rho \|\Delta x\| \end{align} If we can show that $\rho< 1$, then $T$ is a contraction. By definitions $$ T(x) = x - \alpha \nabla E(x) $$ $$ T(x + t \Delta x) = x + t \Delta x - \alpha \nabla E(x + t \Delta x) $$ \begin{align} \frac{1}{t} \|T(x + t \Delta x) - T(x) \| & = \frac{1}{t} \|x + t \Delta x - \alpha \nabla E(x + t \Delta x) - x + \alpha \nabla E(x) \| \\ & = \| \Delta x - \frac{\alpha}{t} (\nabla E(x + t \Delta x) - \nabla E(x) ) \| \\ \end{align} As this relation holds for all $t$, we take the limit when $t\rightarrow 0^+$ \begin{align} \| \Delta x - \alpha \nabla^2 E(x) \Delta x \| & = \| (I - \alpha \nabla^2 E(x)) \Delta x \| \\ & \leq \| I - \alpha \nabla^2 E(x) \| \| \Delta x \| \end{align} If we can choose $\alpha$ for all $\xi$ in the domain such that $$ \| I - \alpha \nabla^2 E(\xi) \| \leq \rho < 1 $$ is satisfied, we have a sufficient condition for a contraction. 
Lemma: Assume that for $0 \leq \rho < 1$, $\alpha> 0$ and $U(\xi)$ is a symmetric matrix valued function for all $\xi \in \mathcal{D}$ and we have $$ \| I - \alpha U(\xi) \| \leq \rho $$ then $U = U(\xi)$ is positive semidefinite with $$\frac{1 - \rho}{\alpha} I \preceq U $$ for every $\xi$. Proof: $$ \|I - \alpha U \| = \sup_{x\neq 0} \frac{x^\top(I - \alpha U )x }{x^\top x} \leq \rho $$ $$ x^\top(I - \alpha U )x \leq \rho x^\top x $$ $$ (1- \rho) x^\top x \leq \alpha x^\top U x $$ This implies that for all $x$ we have $$ 0 \leq x^\top (U - \frac{1 - \rho}{\alpha} I) x $$ In other words, the matrix $U - \frac{1 - \rho}{\alpha} I$ is positive semidefinite, or: $$ \frac{1 - \rho}{\alpha} I \preceq U $$ We now see that $\rho<1$ we have the guarantee that $U$ is positive semidefinite. $$ T(x) = M x + b $$ $$ \|T(x) - T(x_\star) \| = \|Mx + b - M x_\star + b \| = \| M(x-x_\star) \| $$ By Schwarz inequality $$ \|T(x) - T(x_\star) \| \leq \|M\| \|x-x_\star\| $$ If $\|M\| < 1$, we have a contraction. Assume the existence of a fixed point $x_\star$ such that $x_\star = T(x_\star)$. (Does a fixed point always exist for a contraction?) ```python # Try to fit with GD to the original data BaseYear2 = 0 x2 = np.matrix(df_arac.Year[31:]).T-BaseYear2 # Setup the vandermonde matrix N = len(x2) A = np.hstack((np.ones((N,1)), x2)) left = -8 right = -7.55 N = 100 ETA = np.logspace(left,right,N) LAM = compute_largest_eig(ETA, A) plt.plot(ETA, LAM) plt.plot(ETA, np.ones((N,1))) plt.gca().set_ylim([0.98, 1.02]) plt.xlabel('eta') plt.ylabel('absolute value of the largest eigenvalue') plt.show() ``` Analysis of Momentum \begin{align} p(\tau) & = \nabla E(w(\tau-1)) + \beta p(\tau-1) \\ w(\tau) & = w(\tau-1) - \alpha p(\tau) \\ w(\tau-1) & = w(\tau-2) - \alpha p(\tau-1) \\ \end{align} \begin{align} \left(\begin{array}{c} w(\tau) \\ w(\tau-1) \end{array} \right) & = &\left(\begin{array}{cc} \cdot & \cdot \\ \cdot & \cdot \end{array} \right) \left(\begin{array}{c} w(\tau-1) \\ w(\tau-2) \end{array} \right) \end{align} \begin{align} \frac{1}{\alpha}(w(\tau-1) - w(\tau)) & = p(\tau) = \nabla E(w(\tau-1)) + \beta \frac{1}{\alpha}(w(\tau-2) - w(\tau-1)) = \\ \frac{1}{\alpha}(w(\tau-2) - w(\tau-1)) & = p(\tau-1) \\ \end{align} \begin{align} \frac{1}{\alpha}(w(\tau-1) - w(\tau)) & = \nabla E(w(\tau-1)) + \beta \frac{1}{\alpha}(w(\tau-2) - w(\tau-1)) \\ w(\tau) & = -\alpha \nabla E(w(\tau-1)) - \beta w(\tau-2) + (\beta+1) w(\tau-1) \end{align} * Note that GD is sensetive to scaling of data * For example, if we would not have shifted the $x$ axis our original data, GD might not have worked. The maximum eigenvalue is very close to $1$ for all $\eta$ upto numerical precision ```python w = 7 alpha = 0.7 EPOCH = 100 W = [] for tau in range(EPOCH): W.append(w) w = w - alpha*(2*w - 4) plt.plot(W) plt.show() ``` ```python w = 7 alpha = 0.1 beta = 0.95 p = 0 EPOCH = 1000 W = [] for tau in range(EPOCH): W.append(w) p = (2*w - 4) + beta*p w = w - alpha*p plt.plot(W) plt.show() ```
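For the scalar example just above, $\nabla E(w) = 2w - 4$, so $E''(w) = 2$ and the fixed-step iteration $w \leftarrow w - \alpha(2w-4)$ contracts exactly when $|1 - 2\alpha| < 1$, i.e. $\alpha < 2/E'' = 1$. A small added check of this bound:

```python
def run_gd(alpha, w=7.0, n=500):
    # plain gradient descent on E(w) with gradient 2w - 4 (minimizer w* = 2)
    for _ in range(n):
        w = w - alpha*(2*w - 4)
    return w

print(run_gd(0.7))    # converges to w* = 2
print(run_gd(0.99))   # still converges, just inside the bound alpha < 1
print(run_gd(1.01))   # just outside the bound: the iterate runs away from w* = 2
```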
5c4c354023948714c83275bd893730a97109e350
220,257
ipynb
Jupyter Notebook
GradientDescent.ipynb
ardakdemir/notes
4f7f2059622218210d3f5ab867197e4ece086128
[ "MIT" ]
1
2021-08-16T21:07:04.000Z
2021-08-16T21:07:04.000Z
GradientDescent.ipynb
onurboyar/notes
2ec14820af044c2cfbc99bc989338346572a5e24
[ "MIT" ]
1
2018-07-12T09:38:40.000Z
2018-07-12T09:38:40.000Z
GradientDescent.ipynb
onurboyar/notes
2ec14820af044c2cfbc99bc989338346572a5e24
[ "MIT" ]
null
null
null
186.185123
55,760
0.874188
true
6,631
Qwen/Qwen-72B
1. YES 2. YES
0.907312
0.897695
0.81449
__label__eng_Latn
0.825164
0.730666
# Lesson 1 - Resistive Sensors

```python
# Important Libraries
import sympy as sp
import numpy as np

# Display Settings
from sympy.interactive import printing
from IPython.display import display, Latex, Markdown, Math
printing.init_printing(use_latex=True)

def mydisp(textvar,contentvar):
    display(Math(textvar+" = "+sp.latex(contentvar)))
```

# Wheatstone Bridge

The Wheatstone bridge is a circuit widely used to convert a resistance variation into a voltage variation, thus allowing it to be measured. It consists of four resistances connected to a voltage source in a bridge arrangement.

We can obtain the relation between the resistances by analysing the circuit as two voltage dividers (Kirchhoff's voltage law). In this way we obtain the following equation:

```python
Vb, R1,R2,R3,Rx = sp.symbols("V_b R_1 R_2 R_3 R_x",real=True)

Vd = Vb*R2/(R1+R2)
Vb = Vb*Rx/(Rx+R3)

Vg = Vb - Vd
mydisp("V_G",Vg)

# Rewriting the equation
Vg = ((Vg*((Rx+R3)*(R1+R2))).expand()/((Rx+R3)*(R1+R2))).simplify()
mydisp("V_G",Vg)
```

$$V_G = - \frac{R_{2} V_{b}}{R_{1} + R_{2}} + \frac{R_{x} V_{b}}{R_{3} + R_{x}}$$

$$V_G = \frac{V_{b} \left(R_{1} R_{x} - R_{2} R_{3}\right)}{R_{1} R_{3} + R_{1} R_{x} + R_{2} R_{3} + R_{2} R_{x}}$$

Assuming $R_2 = 0$ and $R_1 = R_3 = R$, we have

```python
R = sp.symbols("R")
dR =sp.symbols("Delta_R")

Vg1 = Vg.subs([(R1,R),(R2,0),(R3,R)]).simplify()
mydisp("V_G",Vg1)
```

$$V_G = \frac{R_{x} V_{b}}{R + R_{x}}$$

## Potentiometer

Assuming $R = R_1 + R_2$, with $V_0$ the voltage measured across $R_2$ and $x = \frac{R_2}{R}$ the fraction of the total travel, the output voltage in the ideal case, that is, with no load ($R_L \to \infty$), is

```python
'''
Potentiometer variables

R: total potentiometer resistance
x: fraction of the total potentiometer travel
Vi: potentiometer input voltage
Vo: potentiometer output voltage
'''
R, x, Vi = sp.symbols("R x V_s");

# No-load model
Vo = Vi*x
mydisp("V_0",Vo)
```

$$V_0 = V_{s} x$$

For a finite load $R_L$, we have

```python
'''
Rl: load resistance
Vol: loaded output voltage
'''
Rl = sp.Symbol("R_L");

# Model including the load
Vol = Vi*x*(Rl/(R*x*(1-x)+Rl))
mydisp("V_0",Vol)
```

$$V_0 = \frac{R_{L} V_{s} x}{R x \left(- x + 1\right) + R_{L}}$$

Defining $k = \frac{R_L}{R}$ and substituting into $V_0$, we get

```python
# Study of the loaded model
k = sp.symbols("k") # Rl/R

Vol2 = Vi*x*(1/(x*(1-x)/k +1))
mydisp("V_0",Vol2)
```

$$V_0 = \frac{V_{s} x}{1 + \frac{x}{k} \left(- x + 1\right)}$$

# Hot-wire anemometer

The hot-wire anemometer is a fairly simple and common sensor used to measure the air flow in a duct. It consists of a low-resistance wire which is driven by a constant voltage. The wire heats up, dissipating power through the Joule effect. Part of this heat is removed by convection; as a consequence the resistance of the wire changes and can be measured with a [Wheatstone bridge](#Wheatstone-Bridge) [1].

[1]: http://www.dee.ufrn.br/~luciano/arquivos/ins_ele/Apresenta%E7%F5es_2008_2/CAYO%20CID/Instrumentao_Eletrnica_Cayo_Cid_200321285_AFQ.pdf
[2]: http://www.abcm.org.br/anais/conem/2010/PDF/CON10-1587.pdf
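To get a feel for the loading effect described by the last expression, one can evaluate the relative error between the loaded and ideal outputs, $-x(1-x)/(x(1-x)+k)$, for a few illustrative values of $k = R_L/R$ (the numbers below are an added example, not from the original lesson):

```python
# Relative loading error of the potentiometer at wiper fraction x, with k = R_L / R
def loading_error(x, k):
    return -x*(1 - x) / (x*(1 - x) + k)

# Worst case is near mid-travel (x = 0.5); larger k means a lighter load and a smaller error
for k in [1, 10, 100]:
    print(k, loading_error(0.5, k))
```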
83f5b190be85bd6c2932e1a412171b126fae36a6
8,524
ipynb
Jupyter Notebook
notes/sismed-sensores-resistivos.ipynb
akafael/unb-sismed
4552cc21f1e193d4a49fb7857bdab3867d482e07
[ "MIT" ]
1
2018-03-27T00:19:49.000Z
2018-03-27T00:19:49.000Z
notes/sismed-sensores-resistivos.ipynb
akafael/unb-sismed
4552cc21f1e193d4a49fb7857bdab3867d482e07
[ "MIT" ]
null
null
null
notes/sismed-sensores-resistivos.ipynb
akafael/unb-sismed
4552cc21f1e193d4a49fb7857bdab3867d482e07
[ "MIT" ]
null
null
null
24.707246
451
0.51924
true
1,261
Qwen/Qwen-72B
1. YES 2. YES
0.896251
0.859664
0.770475
__label__por_Latn
0.983825
0.628403
$\newcommand{\xv}{\mathbf{x}}
\newcommand{\wv}{\mathbf{w}}
\newcommand{\vv}{\mathbf{v}}
\newcommand{\yv}{\mathbf{y}}
\newcommand{\zv}{\mathbf{z}}
\newcommand{\tv}{\mathbf{t}}
\newcommand{\Chi}{\mathcal{X}}
\newcommand{\R}{\rm I\!R}
\newcommand{\sign}{\text{sign}}
\newcommand{\Tm}{\mathbf{T}}
\newcommand{\Xm}{\mathbf{X}}
\newcommand{\Zm}{\mathbf{Z}}
\newcommand{\Wm}{\mathbf{W}}
\newcommand{\Ym}{\mathbf{Y}}
\newcommand{\I}{\mathbf{I}}
\newcommand{\muv}{\boldsymbol\mu}
\newcommand{\Sigmav}{\boldsymbol\Sigma}
$

# Support Vector Machines (SVM)

What is a good classifier? There can be multiple hyperplanes that divide the data into two classes. SVMs, as large margin classifiers, choose the hyperplane that has a large margin.

The idea of support vector machines, or large margin linear classifiers, was first suggested in 1963 by Vapnik et al. In 1992, [Boser, et al.](https://dl.acm.org/citation.cfm?id=130401) extended this to nonlinear classification, and in 1995 [Cortes, et al.](http://image.diku.dk/imagecanon/material/cortes_vapnik95.pdf) further examined non-separable data. After this, lots of large margin algorithms have been developed.

Let us take a look at the classifier choice with a large margin and talk about other issues.

## What is the best classifier?

## Large Margin Classifier

The margin is the distance, or room, between the hyperplane and the closest data samples from each class. The margin gives the classifier flexibility against changes in the data or unseen future data. Thus, having the largest margin can be a good choice.

# Geometric Margin

The margin, $d$, can be calculated by the linear projection onto a unit vector in the direction of $\wv$, $\hat{\wv}$.

Let us say that there is a point $\zv$ on the hyperplane,

$$ \wv^\top \zv + b = 0. $$

Also, let $\zv$ be the orthogonal projection of a data point $\xv$ onto this hyperplane:

$$ \zv = \xv + d \hat{\wv}. $$

Plugging this into the hyperplane equation $\yv = \wv^\top \xv + b$,

$$
\begin{align*}
\wv^\top \zv + b &= \wv^\top (\xv + d \hat{\wv}) + b \\
0 &= \wv^\top \xv + d \wv^\top \hat{\wv} + b \\
0 &= y + d \wv^\top \frac{\wv}{\Vert \wv \Vert} = y + d \frac{\Vert \wv \Vert^2}{\Vert \wv \Vert}\\
d &= -\frac{y}{\Vert \wv \Vert}
\end{align*}
$$

Now, let us check the sign of the label. We assume that the data point $\xv$ here was from the negative samples. That is, the $\yv$ is negative. Looking back at our perceptron, we use our label to unify the objective function:

$$ t_i y_i = \tv_i (\wv^\top \xv_i + b) \gt 0. $$

By rescaling the weights and bias, we can get the following canonical hyperplane with the minimum value equal to 1:

$$ \min_{n=1, \dots, N} t_n (\wv^\top \xv + b) = 1. $$

Under this assumption, the margin equals

$$d = \frac{1}{\Vert \wv \Vert}.$$

Remember that we want to *maximize* this margin $\frac{1}{\Vert \wv \Vert}$, so we can equivalently *minimize* $\Vert \wv \Vert$.

## The Linear SVM

The objective function can be written as:

$$
\begin{equation*}
\begin{aligned}
& \underset{\wv, b}{\text{minimize}} & & \frac{1}{2} \Vert \wv \Vert^2 \\
& \text{subject to} & & t_i (\wv^\top \xv_i + b) \ge 1, \; i = 1, \ldots, N.
\end{aligned}
\end{equation*}
$$

The Lagrangian is

$$
L = \frac{\Vert \wv \Vert^2 }{2} - \sum_i^N \lambda_i \big( t_i (\wv^\top \xv_i + b) - 1 \big).
$$

Let us find the optimal solution with the gradient!
With positive Lagrangian multipliers $\lambda = [\lambda_1, \ldots, \lambda_N]$ and the constraint, we can find the two additional gradient against $\lambda$ and $\wv$ and set to to zero to obtain the optimal conditions. That is, $$ \begin{cases} \frac{\partial L}{\partial \wv} = 0, \\ \frac{\partial L}{\partial b} = 0, \\ t_i (\wv^\top \xv_i + b) - 1 \ge 0, \\ \lambda \ge 0, \\ \lambda_i \big(t_i (\wv^\top \xv_i + b) - 1\big) = 0. \end{cases} $$ This is called **Karush–Kuhn–Tucker (KKT)** conditions (See http://www.svms.org/kkt/ and http://mat.gsia.cmu.edu/classes/QUANT/NOTES/chap4/node6.html). Going through few steps for the derivaiton, $$ \begin{align*} \frac{\partial L}{\partial \wv} &= \wv - \sum_i^N \lambda_i t_i \xv_i = 0\\ \wv &= \sum_i^N \lambda_i t_i \xv_i, \end{align*} $$ and $$ \begin{align*} \frac{\partial L}{\partial b} &= - \sum_i^N \lambda_i t_i = 0\\ \sum_i^N \lambda_i t_i &= 0. \end{align*} $$ Rewriting and summarizing the KKT conditions, $$ \begin{align*} &\wv = \sum_i^N \lambda_i t_i \xv_i, \\ &\sum_i^N \lambda_i t_i = 0, \\ &t_i (\wv^\top \xv_i + b) - 1 \ge 0, \\ \\ &\lambda \ge 0, \\ \\ &\lambda_i \big(t_i (\wv^\top \xv_i + b) - 1\big) = 0. &\end{align*} $$ # Dual Problem For duality convention, let us change the notation $\lambda$ to $\alpha$: $$ \begin{equation} \wv = \sum_i^N \alpha_i t_i \xv_i. \label{eq_w} \tag{1} \end{equation} $$ Since there is no solution for $b$, use the KKT condition and find it: $$ \begin{align} \lambda_i \big(t_i (\wv^\top \xv_i + b) - 1\big) &= 0 \\ t_i (\wv^\top \xv_i + b) &= 1 \\ \wv^\top \xv_i + b = t_i \\ \\ b = t_i - \wv^\top \xv_i. \label{eq_b} \tag{2} \end{align} $$ With the KKT conditions, $$ \alpha_i > 0 \\ \text{and}\\ t_i (\wv^\top \xv + b) = 1, $$ the $\xv_i$ are the data samples that lie on the re-weighted margin *1*. The corresponding $\xv_i$ are called **support vectors**, that support the hyperplane construction. Now, using the KKT conditions, replace $\wv$ and $b$ in the objective function $L$ with $\eqref{eq_w}$ and $\eqref{eq_b}$, $$ \begin{align*} L &= \frac{\Vert \wv \Vert^2 }{2} - \sum_i^N \alpha_i \big( t_i (\wv^\top \xv_i + b) - 1 \big) \\ &= \frac{1}{2} \Big( \sum_i^N \alpha_i t_i \xv_i \Big)^\top \Big( \sum_i^N \alpha_i t_i \xv_i \Big) - \sum_i^N \alpha_i t_i \Big( \sum_j^N \alpha_j t_j \xv_j \Big)^\top \xv_i - b \sum_i^N \alpha_i t_i + \sum_i^N \alpha_i \\ &= \frac{1}{2} \sum_i^N \sum_j^N \alpha_i \alpha_j t_i t_j \xv_i^\top \xv_j - \sum_i^N \sum_j^N \alpha_i \alpha_j t_i t_j \xv_i^\top \xv_j - b \sum_i^N \alpha_i t_i + \sum_i^N \alpha_i \\ &= - \frac{1}{2} \sum_i^N \sum_j^N \alpha_i \alpha_j t_i t_j \xv_i^\top \xv_j + \sum_i^N \alpha_i. \end{align*} $$ Let us rewrite the optimization problem in duality: $$ \begin{equation*} \begin{aligned} & \underset{\alpha}{\text{maximize}} & & \sum_i^N \alpha_i - \frac{1}{2} \sum_i^N \sum_j^N \alpha_i \alpha_j t_i t_j \xv_i^\top \xv_j \\ & \text{subject to} & & \sum_i^N \alpha_i t_i = 0, \\ & & &\alpha_i \ge 0. \end{aligned} \end{equation*} $$ # Soft Margin SVM So far, we constrained the data is clearly separable: $$ t_i (\wv^\top \xv_i + b) \ge 1. $$ To relax this constraints, we can define additional slack variables $\xi_i \ge 0$: $$ t_i (\wv^\top \xv_i + b) \ge 1 - \xi_i. $$ With the slack variables, the objective function is $$ \begin{equation*} \begin{aligned} & \underset{\wv, b, \xi}{\text{minimize}} & & \frac{1}{2} \Vert \wv \Vert^2 + C \sum_i \xi_i \\ & \text{subject to} & & t_i (\wv^\top \xv_i + b) \ge 1 - \xi_i,\\ & & & \xi_i \ge 0 \quad \; i = 1, \ldots, N. 
\end{aligned} \end{equation*} $$ From the Lagrangin, $$ L = \frac{\Vert \wv \Vert^2 }{2} - \sum_i^N \alpha_i \big( t_i (\wv^\top \xv_i + b) - 1 + \xi_i \big) - \sum_i^N \beta_i \xi_i + C \sum_i^N \xi_i, $$ we get the same saddle point equations as in $\eqref{eq_w}$ and $\eqref{eq_b}$ along with the following: $$ \frac{\partial L}{\partial \xi_i} = C - \alpha_i - \beta_i = 0. $$ This leads us to dual problem of $$ \begin{equation*} \begin{aligned} & \underset{\alpha}{\text{maximize}} & & \sum_i^N \alpha_i - \frac{1}{2} \sum_i^N \sum_j^N \alpha_i \alpha_j t_i t_j \xv_i^\top \xv_j \\ & \text{subject to} & & \sum_i^N \alpha_i t_i = 0, \\ & & &\alpha_i \ge 0, \\ & & &\beta_i \ge 0, \\ & & &C - \alpha_i - \beta_i = 0. \end{aligned} \end{equation*} $$ Using the last saddle point equaiton, we can replace $\beta_i$ inequality with the $$ 0 \le \alpha_i \le C. $$ $$ \begin{equation*} \begin{aligned} & \underset{\alpha}{\text{maximize}} & & \sum_i^N \alpha_i - \frac{1}{2} \sum_i^N \sum_j^N \alpha_i \alpha_j t_i t_j \xv_i^\top \xv_j \\ & \text{subject to} & & \sum_i^N \alpha_i t_i = 0, \\ & & &0 \le \alpha_i \le C. \end{aligned} \end{equation*} $$ # Kernels For the data that cannot be classified with a linear classifier, we used a non-linear function mapping from $\xv$ to $\phi(\xv)$. We can use this nonlinear mapping for the SVM as well: $$ \begin{equation*} \begin{aligned} & \underset{\wv, b, \xi}{\text{minimize}} & & \frac{1}{2} \Vert \wv \Vert^2 + C \sum_i \xi_i \\ & \text{subject to} & & t_i (\wv^\top \phi(\xv_i) + b) \ge 1 - \xi_i,\\ & & & \xi_i \ge 0 \quad \; i = 1, \ldots, N. \end{aligned} \end{equation*} $$ In duality, we can write the optimization problem as $$ \begin{equation*} \ \begin{aligned} & \underset{\alpha}{\text{maximize}} & & \sum_i^N \alpha_i - \frac{1}{2} \sum_i^N \sum_j^N \alpha_i \alpha_j t_i t_j \phi(\xv_i)^\top \phi(\xv_j) \\ & \text{subject to} & & \sum_i^N \alpha_i t_i = 0, \\ & & &0 \le \alpha_i \le C. \end{aligned} \end{equation*} $$ In nonlinear feature space $\Phi$, the inner product $\phi^\top \phi$ compute the euclidean distance between betweeen two feature vectors. We define this distance, or similarity measure as a kernel $k$: $$ k(\xv, \vv) = \phi(\xv)^\top \phi(\vv). $$ With this kernel notation, the dual problem can be written as $$ \begin{equation*} \begin{aligned} & \underset{\alpha}{\text{maximize}} & & \sum_i^N \alpha_i - \frac{1}{2} \sum_i^N \sum_j^N \alpha_i \alpha_j t_i t_j k(\xv_i, \xv_j) \\ & \text{subject to} & & \sum_i^N \alpha_i t_i = 0, \\ & & &0 \le \alpha_i \le C. \end{aligned} \end{equation*} $$ Here are summary of well-known stard kernels. - Linear Kernel $$ k(\xv, \yv) = \xv^\top \yv $$ - Polynomial Kernel $$ k(\xv, \yv) = (\xv^\top \yv + 1)^d $$ - Gaussian Kernel $$ k(\xv, \yv) = \exp(-\gamma \Vert \xv - \yv \Vert^2) $$ # Then, How do we solve this? So far, we set up the optimization problems. How do we need to solve or program? Any of optimization tools can be used to solve this, but there are some fast optimization tools for the SVM including the sequential minimal optimization (SMO) and libsvm. Scikit-learn uses liblinear for the primal and libsvm for the dual problem. Now, let us implement svm and try to classify the circles data. You can use any libsvm wrapper like scikit-learn, but for exercise, let us use gradient descent for learning. The derivative of the objective function w.r.t. $\alpha$ is $$ \frac{\partial L}{\partial \alpha_i} = 1 - t_i \sum_j^N \alpha_j t_j k(\xv_i, \xv_j) - \sum_j^N t_i. 
$$

From

$$ \wv = \sum_i^N \alpha_i t_i \xv_i, $$

$$ \wv^\top \xv_i = \sum_j^N \alpha_j t_j \xv_j^\top \xv_i = \sum_j^N \alpha_j t_j k(\xv_j, \xv_i). $$

Thus,

$$ b = t_i - \wv^\top \xv_i = t_i - \sum_j^N \alpha_j t_j k(\xv_j, \xv_i), $$

and we can make predictions using $\wv$, $b$, and the kernel evaluations only.

```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.datasets import make_circles

X, T = make_circles(n_samples=800, noise=0.07, factor=0.4)
T[T==0] = -1

plt.figure(figsize=(8, 8))
plt.scatter(X[:, 0], X[:, 1], marker='o', c=T)
plt.title("Circles")
```

```python
# linear kernel
def linear_kernel(x, y):
    return x @ y

def polynomial_kernel(x, y, d=3):
    return (x @ y +1) ** d

def gaussian_kernel(x, y, r=1.):
    d = (x - y)
    return np.exp(-r * (d @ d))
```

```python
from functools import partial

kernelf = partial(linear_kernel)

###### build a kernel matrix (optional)
def build_Kmat(X, Xn=X, kernelf=linear_kernel):
    N = X.shape[0]
    Nn = Xn.shape[0]
    # K[i, j] = k(X_i, Xn_j)
    K = np.array([[kernelf(X[i], Xn[j]) for j in range(Nn)] for i in range(N)])
    return K
```

```python
K = build_Kmat(X, X, kernelf)
T.shape, K[0, :].shape
```

((800,), (800,))

```python
def train(X, T, K, n_epochs=1, learning_rate=0.01, C=10):
    N = X.shape[0]
    #alpha = np.random.rand(N)
    alpha = np.zeros(N)
    T_sum = np.sum(T)
    for n in range(n_epochs):
        for i in range(N):
            # gradient of the dual objective w.r.t. alpha_i
            # (the last term is t_i * sum_j t_j, hence the precomputed T_sum)
            grad = 1 - T[i] * (K[i, :] @ (alpha * T)) - T[i] * T_sum
            alpha[i] += learning_rate * grad

        # project back onto the box constraints 0 <= alpha_i <= C
        alpha[alpha <= 0] = 0
        alpha[alpha >= C] = C

    # bias from the support vectors: b = t_i - sum_j alpha_j t_j k(x_j, x_i)
    sv = np.where((alpha > 0) & (alpha < C))[0]
    if len(sv) == 0:
        sv = np.where(alpha > 0)[0]
    b = np.mean(T[sv] - K[sv, :] @ (alpha * T))

    return alpha, b
```

```python
# predict
#
# Parameters
# ===========
# Xtest    test input
# alpha    SVM learned parameters
# X        Support Vectors
# T        Target for SVs
# K        kernel matrix if exists
def predict(Xtest, alpha, b, **params):

    K = params.pop('K', None)
    T = params.pop('T', None)
    if T is None:
        raise ValueError("predict: None support vector targets!")

    if K is None:
        kernelf = params.pop('kernelf', linear_kernel)
        X = params.pop('X', None)
        if X is None:
            raise ValueError("predict: None support vectors!")
        print("rebuilding kernel matrix")
        K = build_Kmat(X, Xtest, kernelf)

    # decision function: y_m = sum_i alpha_i t_i k(x_i, x_m) + b
    # (non-support vectors have alpha_i == 0, so summing over all samples is fine)
    y = (alpha * T) @ K + b

    return y
```

```python
# train
alpha, b = train(X, T, K)

# prediction
y = predict(X, alpha, b, K=K, T=T)

# plot
plt.plot(T)
plt.plot(np.sign(y))
plt.show()

print("Accuracy: ", 100 * np.sum(T == np.sign(y)) / len(T), "%")
```

```python
x = np.linspace(-1.5, 1.5, 100)
y = np.linspace(-1.5, 1.5, 100)
xs, ys = np.meshgrid(x, y)
Xt = np.vstack((xs.flat, ys.flat)).T

Y = predict(Xt, alpha, b, X=X, T=T)
classes = np.sign(Y)
zs = classes.reshape(xs.shape)

plt.figure(figsize=(6,6))
plt.contourf(xs, ys, zs.reshape(xs.shape), alpha=0.3)
plt.title("Decision Boundary")
plt.scatter(X[:, 0], X[:, 1], marker='o', c=T+3)
```

```python
# nonlinear now
gausskf = partial(gaussian_kernel, r=0.1)
K = build_Kmat(X, X, gausskf)
```

```python
alpha, b = train(X, T, K)
```

```python
# predict
y = predict(X, alpha, b, K=K, T=T)
print("Accuracy: ", 100 * np.sum(T == np.sign(y)) / len(T), "%")
```

Accuracy:  100.0 %

```python
x = np.linspace(-1.5, 1.5, 100)
y = np.linspace(-1.5, 1.5, 100)
xs, ys = np.meshgrid(x, y)
Xt = np.vstack((xs.flat, ys.flat)).T

Y = predict(Xt, alpha, b, X=X, T=T, kernelf=gausskf)
classes = np.sign(Y)
zs = classes.reshape(xs.shape)

plt.figure(figsize=(6,6))
plt.contourf(xs, ys, zs.reshape(xs.shape), alpha=0.3)
plt.title("Decision Boundary")
plt.scatter(X[:, 0], X[:, 1], marker='o', c=T+3)
```
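As an optional sanity check (an addition, not part of the original exercise), the same data can be fed to scikit-learn's `SVC`; its `gamma` plays the role of `r` in `gaussian_kernel` above, and `C=10` matches the default used in `train`:

```python
from sklearn.svm import SVC

clf = SVC(kernel='rbf', C=10, gamma=0.1).fit(X, T)
print("sklearn accuracy:", 100 * clf.score(X, T), "%")
```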
# Control based on León paper A. E. Leon and J. A. Solsona, "Performance Improvement of Full-Converter Wind Turbines Under Distorted Conditions," in IEEE Transactions on Sustainable Energy, vol. 4, no. 3, pp. 652-660, July 2013, doi: 10.1109/TSTE.2013.2239317. ```python %matplotlib widget ``` ```python import numpy as np import sympy as sym import matplotlib.pyplot as plt import scipy.signal as sctrl import pydae.ssa as ssa from sympy.physics.quantum import TensorProduct ``` ### Plant model ```python Δt = 50e-6 R_t = 0.039269908169872414 L_t = 0.00125 C_m = 4e-06 G_d = 1.0 R_s = 0.039269908169872414 L_s = 0.00125 A = np.array([ [-R_t/L_t, 0, -1/L_t, 0, 0, 0], [ 0,-R_t/L_t, 0, -1/L_t, 0, 0], [ 1/C_m, 0, -G_d/C_m, 0, -1/C_m, 0], [ 0, 1/C_m, 0, -G_d/C_m, 0, -1/C_m], [ 0, 0, 1/L_s, 0, -R_s/L_s, 0], [ 0, 0, 0, 1/L_s, 0, -R_s/L_s], ]) B = np.array([ [ 1/L_t, 0], [ 0, 1/L_t], [ 0, 0], [ 0, 0], [ 0, 0], [ 0, 0], ]) B_g = np.array([ [ 0, 0], [ 0, 0], [ 0, 0], [ 0, 0], [-1/L_s, 0], [ 0, -1/L_s], ]) C_c = np.array([ [ 0, 0, 0, 0, 1, 0], [ 0, 0, 0, 0, 0, 1], ]) D_c = np.array([ [ 0, 0], [ 0, 0], ]) C_o = np.array([ [ 0, 0, 1, 0, 0, 0], [ 0, 0, 0, 1, 0, 0], [ 0, 0, 0, 0, 1, 0], [ 0, 0, 0, 0, 0, 1], ]) D_o = np.array([ [ 0, 0], [ 0, 0], ]) # plant discretization A_d,B_d,C_d,D_d,Dt = sctrl.cont2discrete((A,B,C_c,D_c),Δt,method='zoh') A_,B_gd,C_,D_,Dt = sctrl.cont2discrete((A,B_g,C_c,D_c),Δt,method='zoh') A_,B_,C_o,D_o,Dt = sctrl.cont2discrete((A,B,C_o,D_o),Δt,method='zoh') ``` ## Park application ```python omega_g,t_k,Δt_sym = sym.symbols('omega_g,t_k,Δt_sym') theta_k = omega_g*t_k theta_kp1 = omega_g*(t_k + Δt_sym) P_k = sym.Matrix([ [ sym.cos(theta_k), -sym.sin(theta_k)], [ -sym.sin(theta_k), -sym.cos(theta_k)], ]) P_kp1 = sym.Matrix([ [ sym.cos(theta_kp1), -sym.sin(theta_kp1)], [ -sym.sin(theta_kp1), -sym.cos(theta_kp1)], ]) W_sym = sym.simplify(P_kp1 @ P_k.inv()) m2 = sym.Matrix([[1,0,0],[0,1,0],[0,0,1]]) P_kp1_3 = TensorProduct(m2, P_kp1) P_ki_3 = TensorProduct(m2, P_k.inv()) A_b_sym = sym.simplify(P_kp1_3 @ A_d @ P_ki_3 ) B_b_sym = sym.simplify(P_kp1_3 @ B_d @ P_k.inv() ) B_g_bsym = sym.simplify(P_kp1_3 @ B_gd @ P_k.inv() ) ``` ```python A_b_eval = sym.lambdify([omega_g,Δt_sym], A_b_sym) B_b_eval = sym.lambdify([omega_g,Δt_sym], B_b_sym) B_g_b_eval = sym.lambdify([omega_g,Δt_sym], B_g_bsym) W_eval = sym.lambdify([omega_g,Δt_sym], W_sym) ``` ```python omega_b = 2*np.pi*50 A_b = A_b_eval(omega_b,Δt) B_b = B_b_eval(omega_b,Δt) B_g_b = B_g_b_eval(omega_b,Δt) W = W_eval(omega_b,Δt) ``` ## Controller ```python # Controller ################################################################################## N_x_c,N_u_d = B_b.shape N_z_c,N_x_c = C_c.shape O_ux = np.zeros((N_u_d,N_x_c)) O_xu = np.zeros((N_x_c,N_u_d)) O_uu = np.zeros((N_u_d,N_u_d)) I_uu = np.eye(N_u_d) # discretized plant: # Δx_d = A_d*Δx_d + B_d*Δu_d # Δz_c = C_c*Δx_d + D_c*Δu_d # dynamic extension: # Δx_d = A_d*Δx_d + B_d*Δu_d # Δx_i = Δx_i + Δt*(Δz_c-Δz_c_ref) = Δx_i + Δt*C_c*Δx_d - Dt*Δz_c_ref # Δz_c = z_c - z_c_0 # Δz_c_ref = z_c_ref - z_c_0 # (Δz_c-Δz_c_ref) = z_c - z_c_ref A_e = np.block([ [ A_b, B_b, O_xu], # Δx_d [ O_ux, O_uu, O_uu], # Δx_r [ Δt*C_d, O_uu, I_uu], # Δx_i ]) B_e = np.block([ [ O_xu], [ W], [ O_uu], ]) # weighting matrices Q_c = np.eye(A_e.shape[0]) Q_c[-1,-1] = 1e8 Q_c[-2,-2] = 1e8 R_c = np.eye(B_e.shape[1])*10 K_c,S_c,E_c = ssa.dlqr(A_e,B_e,Q_c,R_c) E_cont = np.log(E_c)/Δt ``` ## Observer ```python N_z_o = C_o.shape[0] Q_o = np.eye(A_d.shape[0]) R_o = np.diag([1]*N_z_o) K_o_T,S_o,E_o = ssa.dlqr(A_d.T,C_o.T,Q_o,R_o) K_o
= K_o_T.T print('damp_ctrl',-E_c.real/np.abs(E_c)) print('damp_obs',-E_o.real/np.abs(E_o)) ``` damp_ctrl [-0.99989883 -0.99989883 -0.99881325 -0.99881325 -0.9995029 -0.9995029 -0.99987663 -0.99987663 -1. -1. ] damp_obs [-1. -1. -1. -1. -1. -1.] ## Simulink ```python # Control without observer Du_r = -K_c*Dx_e x_d_1,x_d_2,x_d_3,x_d_4,x_d_5,x_d_6 = sym.symbols('Dx_d_1,Dx_d_2,Dx_d_3,Dx_d_4,Dx_d_5,Dx_d_6') x_r_1,x_r_2 = sym.symbols('Dx_r_1,Dx_r_2') x_i_1,x_i_2 = sym.symbols('Dx_i_1,Dx_i_2') x_e = sym.Matrix([x_d_1,x_d_2,x_d_3,x_d_4,x_d_5,x_d_6,x_r_1,x_r_2,x_i_1,x_i_2]) u_r = -K_c * x_e u_r_d = str(sym.N(u_r[0],8)) u_r_q = str(sym.N(u_r[1],8)) print(f'Du_r_1 = {u_r_d};') print(f'Du_r_2 = {u_r_q};') print('\nWarning: Control output is v_t_dq!!') ``` Du_r_1 = -1.8262175*Dx_d_1 + 0.019754652*Dx_d_2 - 0.0048070784*Dx_d_3 + 0.0010494948*Dx_d_4 - 3.4543535*Dx_d_5 + 0.39761572*Dx_d_6 - 2715.7977*Dx_i_1 + 1040.9568*Dx_i_2 - 0.071871552*Dx_r_1 + 0.00052148736*Dx_r_2; Du_r_2 = -0.019754652*Dx_d_1 - 1.8262175*Dx_d_2 - 0.0010494948*Dx_d_3 - 0.0048070784*Dx_d_4 - 0.39761572*Dx_d_5 - 3.4543535*Dx_d_6 - 1040.9568*Dx_i_1 - 2715.7977*Dx_i_2 - 0.00052148736*Dx_r_1 - 0.071871552*Dx_r_2; Warning: Control output is v_t_dq!! ```python it_ini = 4 Δx_o = sym.Matrix([sym.Symbol(f'xD[{it+it_ini}]') for it in range(6)]) Δz_o = sym.Matrix([sym.Symbol(item) - sym.Symbol(item+'_0') for item in ['v_md', 'v_mq', 'i_sd', 'i_sq']]) Δx_r = sym.Matrix([sym.Symbol(f'Dx_r_{it+1}') for it in range(2)]) Δu_pert = sym.Matrix([sym.Symbol(f'Du_pert_{it+1}') for it in range(2)]) Δx_o_kp1 = A_d @ Δx_o + B_b@(Δx_r) + K_o @ (Δz_o - C_o @ Δx_o) # + B_pert@Δu_pert + K_o @ (Δz_o - C_o @ Δx_o - D_o @ (Δx_r)) for it in range(6): print(f'xD[{it+it_ini}] = {Δx_o_kp1[it]};') ``` xD[4] = 0.0392952872921423*Dx_r_1 + 0.000617299701089028*Dx_r_2 + 0.167670572903802*i_sd - 0.167670572903802*i_sd_0 + 0.427599429013122*v_md - 0.427599429013122*v_md_0 + 0.962811636697318*xD[4] - 0.430586049216921*xD[6] - 0.132051772873075*xD[8]; xD[5] = -0.000617299701089028*Dx_r_1 + 0.0392952872921423*Dx_r_2 + 0.167670572903802*i_sq - 0.167670572903802*i_sq_0 + 0.427599429013122*v_mq - 0.427599429013122*v_mq_0 + 0.962811636697318*xD[5] - 0.430586049216921*xD[7] - 0.132051772873075*xD[9]; xD[6] = 0.0356406531431186*Dx_r_1 + 0.000559888120127431*Dx_r_2 - 0.425448653071038*i_sd + 0.425448653071038*i_sd_0 + 0.443158057967852*v_md - 0.443158057967852*v_md_0 + 0.933318813687106*xD[4] - 0.449166750687225*xD[6] - 0.507870160616068*xD[8]; xD[7] = -0.000559888120127431*Dx_r_1 + 0.0356406531431186*Dx_r_2 - 0.425448653071038*i_sq + 0.425448653071038*i_sq_0 + 0.443158057967853*v_mq - 0.443158057967853*v_mq_0 + 0.933318813687106*xD[5] - 0.449166750687225*xD[7] - 0.507870160616068*xD[9]; xD[8] = 0.000668382397147673*Dx_r_1 + 1.0499789730636e-5*Dx_r_2 + 0.590326602628365*i_sd - 0.590326602628365*i_sd_0 - 0.0126468749218836*v_md + 0.0126468749218836*v_md_0 + 0.0356188000307263*xD[4] + 0.0156334951256824*xD[6] + 0.372485034068953*xD[8]; xD[9] = -1.0499789730636e-5*Dx_r_1 + 0.000668382397147673*Dx_r_2 + 0.590326602628365*i_sq - 0.590326602628365*i_sq_0 - 0.0126468749218836*v_mq + 0.0126468749218836*v_mq_0 + 0.0356188000307263*xD[5] + 0.0156334951256824*xD[7] + 0.372485034068953*xD[9]; ```python #eta_dq[0] = Du_r_1*2/v_dc[0] + (v_sd)*2/v_dc[0] ; #eta_dq[1] = Du_r_2*2/v_dc[0] + (v_sq)*2/v_dc[0] ; #Du_r_1 = eta_dq[0]*v_dc[0]/2 #Du_r_2 = eta_dq[1]*v_dc[0]/2 Du_r_1,Du_r_2 = sym.symbols('Du_r_1,Du_r_2') Du_r = sym.Matrix([Du_r_1,Du_r_2 ]) Dx_r = W@Du_r Dx_r_1 = str(sym.N(Dx_r[0],8)) Dx_r_1 = 
str(sym.N(Dx_r[1],8)) print(f'Dx_r_1 = {Dx_r_1};') print(f'Dx_r_2 = {Dx_r_1};') ``` Dx_r_1 = -0.015707317*Du_r_1 + 0.99987663*Du_r_2; Dx_r_2 = -0.015707317*Du_r_1 + 0.99987663*Du_r_2; ```python # Control with observer Du_r = -K_c*Dx_o x_d_1,x_d_2,x_d_3,x_d_4,x_d_5,x_d_6 = sym.symbols('xD[4],xD[5],xD[6],xD[7],xD[8],xD[9]') x_r_1,x_r_2 = sym.symbols('Dx_r_1,Dx_r_2') x_i_1,x_i_2 = sym.symbols('Dx_i_1,Dx_i_2') x_e = sym.Matrix([x_d_1,x_d_2,x_d_3,x_d_4,x_d_5,x_d_6,x_r_1,x_r_2,x_i_1,x_i_2]) u_r = -K_c * x_e u_r_d = str(sym.N(u_r[0],8)) u_r_q = str(sym.N(u_r[1],8)) print(f'Du_r_1 = {u_r_d};') print(f'Du_r_2 = {u_r_q};') print('\nWarning: Control output is eta_dq!!') ``` Du_r_1 = -2715.7977*Dx_i_1 + 1040.9568*Dx_i_2 - 0.071871552*Dx_r_1 + 0.00052148736*Dx_r_2 - 1.8262175*xD[4] + 0.019754652*xD[5] - 0.0048070784*xD[6] + 0.0010494948*xD[7] - 3.4543535*xD[8] + 0.39761572*xD[9]; Du_r_2 = -1040.9568*Dx_i_1 - 2715.7977*Dx_i_2 - 0.00052148736*Dx_r_1 - 0.071871552*Dx_r_2 - 0.019754652*xD[4] - 1.8262175*xD[5] - 0.0010494948*xD[6] - 0.0048070784*xD[7] - 0.39761572*xD[8] - 3.4543535*xD[9]; Warning: Control output is eta_dq!! ## Symbolic obtention of the plant model # as in pydae example (synchronous dq): di_tD = 1/L_t*(eta_D/2*v_dc - R_t*i_tD + omega*L_t*i_tQ - v_mD) di_tQ = 1/L_t*(eta_Q/2*v_dc - R_t*i_tQ - omega*L_t*i_tD - v_mQ) dv_mD = 1/C_m*(i_tD + C_m*omega*v_mQ - G_d*v_mD - i_sD) dv_mQ = 1/C_m*(i_tQ - C_m*omega*v_mD - G_d*v_mQ - i_sQ) di_sD = 1/L_s*(v_mD - R_s*i_sD + omega*L_s*i_sQ - v_sD) di_sQ = 1/L_s*(v_mQ - R_s*i_sQ - omega*L_s*i_sD - v_sQ) # equivalent to pydae example (stationary dq): di_tD = 1/L_t*(v_tD - R_t*i_tD - v_mD) di_tQ = 1/L_t*(v_tQ - R_t*i_tQ - v_mQ) dv_mD = 1/C_m*(i_tD - G_d*v_mD - i_sD) dv_mQ = 1/C_m*(i_tQ - G_d*v_mQ - i_sQ) di_sD = 1/L_s*(v_mD - R_s*i_sD - v_sD) di_sQ = 1/L_s*(v_mQ - R_s*i_sQ - v_sQ) ```python R_t,L_t,C_m,G_d,R_s,L_s = sym.symbols('R_t,L_t,C_m,G_d,R_s,L_s', real=True) i_tD,i_tQ,v_mD,v_mQ,i_sD,i_sQ = sym.symbols('i_tD,i_tQ,v_mD,v_mQ,i_sD,i_sQ', real=True) v_tD,v_tQ = sym.symbols('v_tD,v_tQ', real=True) v_sD,v_sQ = sym.symbols('v_sD,v_sQ', real=True) A = sym.Matrix([ [-R_t/L_t, 0, -1/L_t, 0, 0, 0], [ 0,-R_t/L_t, 0, -1/L_t, 0, 0], [ 1/C_m, 0, -G_d/C_m, 0, -1/C_m, 0], [ 0, 1/C_m, 0, -G_d/C_m, 0, -1/C_m], [ 0, 0, 1/L_s, 0, -R_s/L_s, 0], [ 0, 0, 0, 1/L_s, 0, -R_s/L_s], ]) B = sym.Matrix([ [ 1/L_t, 0], [ 0, 1/L_t], [ 0, 0], [ 0, 0], [ 0, 0], [ 0, 0], ]) B_g = sym.Matrix([ [ 0, 0], [ 0, 0], [ 0, 0], [ 0, 0], [-1/L_s, 0], [ 0, -1/L_s], ]) x = sym.Matrix([i_tD,i_tQ,v_mD,v_mQ,i_sD,i_sQ]) u = sym.Matrix([v_tD,v_tQ]) u_g = sym.Matrix([v_sD,v_sQ]) dx_new = A@x + B@u + B_g@u_g di_tD = 1/L_t*(v_tD - R_t*i_tD - v_mD) di_tQ = 1/L_t*(v_tQ - R_t*i_tQ - v_mQ) dv_mD = 1/C_m*(i_tD - G_d*v_mD - i_sD) dv_mQ = 1/C_m*(i_tQ - G_d*v_mQ - i_sQ) di_sD = 1/L_s*(v_mD - R_s*i_sD - v_sD) di_sQ = 1/L_s*(v_mQ - R_s*i_sQ - v_sQ) dx = sym.Matrix([di_tD,di_tQ,dv_mD,dv_mQ,di_sD,di_sQ]) dx = sym.Matrix([di_tD,di_tQ,dv_mD,dv_mQ,di_sD,di_sQ]) sym.simplify(dx_new - dx) # just to check the model is ok ``` $\displaystyle \left[\begin{matrix}0\\0\\0\\0\\0\\0\end{matrix}\right]$
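As a quick sanity check of the LQR and observer designs above, the closed-loop eigenvalues implied by the gains can be compared with the eigenvalues returned by `ssa.dlqr`. This is only a sketch: it assumes the cells above have been executed (so `A_e`, `B_e`, `K_c`, `E_c`, `A_d`, `C_o`, `K_o`, `E_o` are in scope) and that `dlqr` returns the gain for a control law `u = -K x` together with the corresponding closed-loop eigenvalues.

```python
# Sanity check (assumes the controller and observer cells above have been run):
# the closed-loop eigenvalues should match E_c and E_o returned by dlqr.
poles_ctrl = np.linalg.eigvals(A_e - B_e @ K_c)  # state-feedback closed loop
poles_obs = np.linalg.eigvals(A_d - K_o @ C_o)   # observer error dynamics

print('controller poles match:', np.allclose(np.sort_complex(poles_ctrl), np.sort_complex(E_c)))
print('observer poles match:  ', np.allclose(np.sort_complex(poles_obs), np.sort_complex(E_o)))
```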
```python import numpy as np import math import scipy as sp from scipy import linalg import pandas as pd import matplotlib.pyplot as plt import sympy ``` ```python sp.info(sp.linalg.solve) ``` solve(a, b, sym_pos=False, lower=False, overwrite_a=False, overwrite_b=False, debug=None, check_finite=True, assume_a='gen', transposed=False) Solves the linear equation set ``a * x = b`` for the unknown ``x`` for square ``a`` matrix. If the data matrix is known to be a particular type then supplying the corresponding string to ``assume_a`` key chooses the dedicated solver. The available options are =================== ======== generic matrix 'gen' symmetric 'sym' hermitian 'her' positive definite 'pos' =================== ======== If omitted, ``'gen'`` is the default structure. The datatype of the arrays define which solver is called regardless of the values. In other words, even when the complex array entries have precisely zero imaginary parts, the complex solver will be called based on the data type of the array. Parameters ---------- a : (N, N) array_like Square input data b : (N, NRHS) array_like Input data for the right hand side. sym_pos : bool, optional Assume `a` is symmetric and positive definite. This key is deprecated and assume_a = 'pos' keyword is recommended instead. The functionality is the same. It will be removed in the future. lower : bool, optional If True, only the data contained in the lower triangle of `a`. Default is to use upper triangle. (ignored for ``'gen'``) overwrite_a : bool, optional Allow overwriting data in `a` (may enhance performance). Default is False. overwrite_b : bool, optional Allow overwriting data in `b` (may enhance performance). Default is False. check_finite : bool, optional Whether to check that the input matrices contain only finite numbers. Disabling may give a performance gain, but may result in problems (crashes, non-termination) if the inputs do contain infinities or NaNs. assume_a : str, optional Valid entries are explained above. transposed: bool, optional If True, depending on the data type ``a^T x = b`` or ``a^H x = b`` is solved (only taken into account for ``'gen'``). Returns ------- x : (N, NRHS) ndarray The solution array. Raises ------ ValueError If size mismatches detected or input a is not square. LinAlgError If the matrix is singular. RuntimeWarning If an ill-conditioned input a is detected. Examples -------- Given `a` and `b`, solve for `x`: >>> a = np.array([[3, 2, 0], [1, -1, 0], [0, 5, 1]]) >>> b = np.array([2, 4, -1]) >>> from scipy import linalg >>> x = linalg.solve(a, b) >>> x array([ 2., -2., 9.]) >>> np.dot(a, x) == b array([ True, True, True], dtype=bool) Notes ----- If the input b matrix is a 1D array with N elements, when supplied together with an NxN input a, it is assumed as a valid column vector despite the apparent size mismatch. This is compatible with the numpy.dot() behavior and the returned result is still 1D array. The generic, symmetric, hermitian and positive definite solutions are obtained via calling ?GESVX, ?SYSVX, ?HESVX, and ?POSVX routines of LAPACK respectively. ```python sp.source(sp.linalg.lu) ``` In file: /home/me/anaconda3/lib/python3.6/site-packages/scipy/linalg/decomp_lu.py def lu(a, permute_l=False, overwrite_a=False, check_finite=True): """ Compute pivoted LU decomposition of a matrix. The decomposition is:: A = P L U where P is a permutation matrix, L lower triangular with unit diagonal elements, and U upper triangular. 
Parameters ---------- a : (M, N) array_like Array to decompose permute_l : bool, optional Perform the multiplication P*L (Default: do not permute) overwrite_a : bool, optional Whether to overwrite data in a (may improve performance) check_finite : bool, optional Whether to check that the input matrix contains only finite numbers. Disabling may give a performance gain, but may result in problems (crashes, non-termination) if the inputs do contain infinities or NaNs. Returns ------- **(If permute_l == False)** p : (M, M) ndarray Permutation matrix l : (M, K) ndarray Lower triangular or trapezoidal matrix with unit diagonal. K = min(M, N) u : (K, N) ndarray Upper triangular or trapezoidal matrix **(If permute_l == True)** pl : (M, K) ndarray Permuted L matrix. K = min(M, N) u : (K, N) ndarray Upper triangular or trapezoidal matrix Notes ----- This is a LU factorization routine written for Scipy. """ if check_finite: a1 = asarray_chkfinite(a) else: a1 = asarray(a) if len(a1.shape) != 2: raise ValueError('expected matrix') overwrite_a = overwrite_a or (_datacopied(a1, a)) flu, = get_flinalg_funcs(('lu',), (a1,)) p, l, u, info = flu(a1, permute_l=permute_l, overwrite_a=overwrite_a) if info < 0: raise ValueError('illegal value in %d-th argument of ' 'internal lu.getrf' % -info) if permute_l: return l, u return p, l, u ```python dir(sp.linalg) ``` ['LinAlgError', 'Tester', '__all__', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', '__version__', '_decomp_polar', '_decomp_qz', '_decomp_update', '_expm_frechet', '_fblas', '_flapack', '_flinalg', '_matfuncs_sqrtm', '_procrustes', '_solve_toeplitz', '_solvers', 'absolute_import', 'basic', 'blas', 'block_diag', 'cho_factor', 'cho_solve', 'cho_solve_banded', 'cholesky', 'cholesky_banded', 'circulant', 'companion', 'coshm', 'cosm', 'cython_blas', 'cython_lapack', 'decomp', 'decomp_cholesky', 'decomp_lu', 'decomp_qr', 'decomp_schur', 'decomp_svd', 'det', 'dft', 'diagsvd', 'division', 'eig', 'eig_banded', 'eigh', 'eigvals', 'eigvals_banded', 'eigvalsh', 'expm', 'expm2', 'expm3', 'expm_cond', 'expm_frechet', 'find_best_blas_type', 'flinalg', 'fractional_matrix_power', 'funm', 'get_blas_funcs', 'get_lapack_funcs', 'hadamard', 'hankel', 'helmert', 'hessenberg', 'hilbert', 'inv', 'invhilbert', 'invpascal', 'kron', 'lapack', 'leslie', 'linalg_version', 'logm', 'lstsq', 'lu', 'lu_factor', 'lu_solve', 'matfuncs', 'matrix_balance', 'misc', 'norm', 'ordqz', 'orth', 'orthogonal_procrustes', 'pascal', 'pinv', 'pinv2', 'pinvh', 'polar', 'print_function', 'qr', 'qr_delete', 'qr_insert', 'qr_multiply', 'qr_update', 'qz', 'rq', 'rsf2csf', 'schur', 'signm', 'sinhm', 'sinm', 'solve', 'solve_banded', 'solve_circulant', 'solve_continuous_are', 'solve_discrete_are', 'solve_discrete_lyapunov', 'solve_lyapunov', 'solve_sylvester', 'solve_toeplitz', 'solve_triangular', 'solveh_banded', 'special_matrices', 'sqrtm', 'svd', 'svdvals', 'tanhm', 'tanm', 'test', 'toeplitz', 'tri', 'tril', 'triu'] # Matrix Creation ```python a = np.empty((4,5)) a ``` array([[ 6.93542489e-310, 2.13381972e-316, 6.93538267e-310, 6.93542578e-310, 6.93538762e-310], [ 6.93542582e-310, 6.93542582e-310, 6.93542582e-310, 6.93538267e-310, 6.93540180e-310], [ 6.93538751e-310, 4.00193173e-322, 2.12576764e-316, 6.93542489e-310, 6.93542157e-310], [ 6.93541501e-310, 6.93542583e-310, 6.93542143e-310, 6.93541525e-310, 6.93538782e-310]]) ```python A = np.zeros((3,4)) A ``` array([[ 0., 0., 0., 0.], [ 0., 0., 0., 0.], [ 0., 0., 0., 0.]]) ```python B = 
np.ones((4,2)) B ``` array([[ 1., 1.], [ 1., 1.], [ 1., 1.], [ 1., 1.]]) ```python A = np.eye(3,3) A ``` array([[ 1., 0., 0.], [ 0., 1., 0.], [ 0., 0., 1.]]) ```python B = np.arange(9).reshape((3,3)) B ``` array([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) ```python C = np.fromfunction(lambda x, y: x + y, (3,3)) C ``` array([[ 0., 1., 2.], [ 1., 2., 3.], [ 2., 3., 4.]]) ```python fn = lambda x: math.sin(x) vfn = np.vectorize(fn) D = vfn(np.arange(9)).reshape(3,3) D ``` array([[ 0. , 0.84147098, 0.90929743], [ 0.14112001, -0.7568025 , -0.95892427], [-0.2794155 , 0.6569866 , 0.98935825]]) ```python fn2 = lambda x: np.sin(x) D = fn2(np.arange(9)).reshape(3,3) D ``` array([[ 0. , 0.84147098, 0.90929743], [ 0.14112001, -0.7568025 , -0.95892427], [-0.2794155 , 0.6569866 , 0.98935825]]) ```python map1 = map(math.sin, np.arange(9)) E = np.fromiter(map1, dtype = 'float').reshape(3,3) E ``` array([[ 0. , 0.84147098, 0.90929743], [ 0.14112001, -0.7568025 , -0.95892427], [-0.2794155 , 0.6569866 , 0.98935825]]) ```python F = np.array([math.sin(x) for x in range(9)]).reshape(3,3) F ``` array([[ 0. , 0.84147098, 0.90929743], [ 0.14112001, -0.7568025 , -0.95892427], [-0.2794155 , 0.6569866 , 0.98935825]]) ## Time required in each 3x3 matrix creation ```python %%timeit -n 5 fn = lambda x: math.sin(x) vfn = np.vectorize(fn) D = vfn(np.arange(9)).reshape(3,3) ``` 5 loops, best of 3: 28.7 µs per loop ```python %%timeit -n 5 fn2 = lambda x: np.sin(x) D = fn2(np.arange(9)).reshape(3,3) ``` 5 loops, best of 3: 5.22 µs per loop ```python %%timeit -n 5 map1 = map(math.sin, np.arange(9)) E = np.fromiter(map1, dtype = 'float').reshape(3,3) ``` 5 loops, best of 3: 8.61 µs per loop ```python %%timeit -n 5 F = np.array([math.sin(x) for x in range(9)]).reshape(3,3) ``` 5 loops, best of 3: 3.41 µs per loop ## Time required to create 100 x 100 matrix Note: Numpy can has some overhead costs. Yet, applying numpy function to numpy array directly is fast for large matrix. 
(faster than vectorize non-numpy function / map non-numpy function / array of list comprehension) ```python %%timeit -n 5 fn = lambda x: math.sin(x) vfn = np.vectorize(fn) D = vfn(np.arange(10000)).reshape(100,100) ``` 5 loops, best of 3: 2.82 ms per loop ```python %%timeit -n 5 fn2 = lambda x: np.sin(x) D = fn2(np.arange(10000)).reshape(100,100) ``` 5 loops, best of 3: 290 µs per loop ```python %%timeit -n 5 map1 = map(math.sin, np.arange(10000)) E = np.fromiter(map1, dtype = 'float').reshape(100,100) ``` 5 loops, best of 3: 1.39 ms per loop ```python %%timeit -n 5 F = np.array([math.sin(x) for x in range(10000)]).reshape(100,100) ``` 5 loops, best of 3: 1.82 ms per loop # Matrix Concatenation ```python A = np.eye(3,3) B = np.zeros((3,3)) C = np.ones((3,3)) ``` ```python A ``` array([[ 1., 0., 0.], [ 0., 1., 0.], [ 0., 0., 1.]]) ```python B ``` array([[ 0., 0., 0.], [ 0., 0., 0.], [ 0., 0., 0.]]) ```python C ``` array([[ 1., 1., 1.], [ 1., 1., 1.], [ 1., 1., 1.]]) ```python A + B # not for concatenation ``` array([[ 1., 0., 0.], [ 0., 1., 0.], [ 0., 0., 1.]]) ```python np.concatenate([A,B,C]) ``` array([[ 1., 0., 0.], [ 0., 1., 0.], [ 0., 0., 1.], [ 0., 0., 0.], [ 0., 0., 0.], [ 0., 0., 0.], [ 1., 1., 1.], [ 1., 1., 1.], [ 1., 1., 1.]]) ```python np.vstack([A,B,C]) ``` array([[ 1., 0., 0.], [ 0., 1., 0.], [ 0., 0., 1.], [ 0., 0., 0.], [ 0., 0., 0.], [ 0., 0., 0.], [ 1., 1., 1.], [ 1., 1., 1.], [ 1., 1., 1.]]) ```python np.r_[A,B,C] ``` array([[ 1., 0., 0.], [ 0., 1., 0.], [ 0., 0., 1.], [ 0., 0., 0.], [ 0., 0., 0.], [ 0., 0., 0.], [ 1., 1., 1.], [ 1., 1., 1.], [ 1., 1., 1.]]) ```python np.concatenate([A,B,C],axis = 1) ``` array([[ 1., 0., 0., 0., 0., 0., 1., 1., 1.], [ 0., 1., 0., 0., 0., 0., 1., 1., 1.], [ 0., 0., 1., 0., 0., 0., 1., 1., 1.]]) ```python np.hstack([A,B,C]) ``` array([[ 1., 0., 0., 0., 0., 0., 1., 1., 1.], [ 0., 1., 0., 0., 0., 0., 1., 1., 1.], [ 0., 0., 1., 0., 0., 0., 1., 1., 1.]]) ```python np.c_[A,B,C] ``` array([[ 1., 0., 0., 0., 0., 0., 1., 1., 1.], [ 0., 1., 0., 0., 0., 0., 1., 1., 1.], [ 0., 0., 1., 0., 0., 0., 1., 1., 1.]]) # Row Swapping ```python A = np.eye(4,4) A ``` array([[ 1., 0., 0., 0.], [ 0., 1., 0., 0.], [ 0., 0., 1., 0.], [ 0., 0., 0., 1.]]) ```python A[0], A[3] = A[3], A[0] #does not swap! ``` ```python A ``` array([[ 0., 0., 0., 1.], [ 0., 1., 0., 0.], [ 0., 0., 1., 0.], [ 0., 0., 0., 1.]]) ```python A = np.eye(4,4) ``` ```python A[0,:], A[3,:] = A[3,:], A[0,:] #does not swap either! ``` ```python A ``` array([[ 0., 0., 0., 1.], [ 0., 1., 0., 0.], [ 0., 0., 1., 0.], [ 0., 0., 0., 1.]]) ```python A = np.eye(4,4) A[[0,3]] ``` array([[ 1., 0., 0., 0.], [ 0., 0., 0., 1.]]) ```python A[[0,3]] = A[[3,0]] #Will swap! 
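# Note: fancy indexing on the right-hand side (A[[3,0]]) returns a copy, so both
# rows are read before either is written and the swap succeeds. The tuple
# assignment A[0], A[3] = A[3], A[0] above failed because A[3] and A[0] are
# views: by the time row 3 was assigned, row 0 had already been overwritten.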
A ``` array([[ 0., 0., 0., 1.], [ 0., 1., 0., 0.], [ 0., 0., 1., 0.], [ 1., 0., 0., 0.]]) # Column Swapping ```python A = np.arange(16).reshape((4,4)) A ``` array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11], [12, 13, 14, 15]]) ```python A[:,[0,3]] = A[:,[3,0]] ``` ```python A ``` array([[ 3, 1, 2, 0], [ 7, 5, 6, 4], [11, 9, 10, 8], [15, 13, 14, 12]]) # Matrix Expansion ```python A = np.eye(3,3) A ``` array([[ 1., 0., 0.], [ 0., 1., 0.], [ 0., 0., 1.]]) ```python A[:,:,np.newaxis] ``` array([[[ 1.], [ 0.], [ 0.]], [[ 0.], [ 1.], [ 0.]], [[ 0.], [ 0.], [ 1.]]]) ```python A[:,np.newaxis,:] ``` array([[[ 1., 0., 0.]], [[ 0., 1., 0.]], [[ 0., 0., 1.]]]) ```python A[np.newaxis,:,:] ``` array([[[ 1., 0., 0.], [ 0., 1., 0.], [ 0., 0., 1.]]]) # Element by element operation ```python %%timeit -n 5 A = np.arange(100) B = np.arange(100) C = A * B ``` The slowest run took 4.87 times longer than the fastest. This could mean that an intermediate result is being cached. 5 loops, best of 3: 4.87 µs per loop ```python np.__version__ ``` '1.13.1' ```python %%timeit -n 5 a = range(100) b = range(100) c = [] for i in range(len(a)): c.append(a[i]*b[i]) ``` 5 loops, best of 3: 27.2 µs per loop ```python A = np.arange(100) B = np.arange(100) C = A * B C ``` array([ 0, 1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144, 169, 196, 225, 256, 289, 324, 361, 400, 441, 484, 529, 576, 625, 676, 729, 784, 841, 900, 961, 1024, 1089, 1156, 1225, 1296, 1369, 1444, 1521, 1600, 1681, 1764, 1849, 1936, 2025, 2116, 2209, 2304, 2401, 2500, 2601, 2704, 2809, 2916, 3025, 3136, 3249, 3364, 3481, 3600, 3721, 3844, 3969, 4096, 4225, 4356, 4489, 4624, 4761, 4900, 5041, 5184, 5329, 5476, 5625, 5776, 5929, 6084, 6241, 6400, 6561, 6724, 6889, 7056, 7225, 7396, 7569, 7744, 7921, 8100, 8281, 8464, 8649, 8836, 9025, 9216, 9409, 9604, 9801]) ```python a = range(100) b = range(100) c = [] for i in range(len(a)): c.append(a[i]*b[i]) c ``` [0, 1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144, 169, 196, 225, 256, 289, 324, 361, 400, 441, 484, 529, 576, 625, 676, 729, 784, 841, 900, 961, 1024, 1089, 1156, 1225, 1296, 1369, 1444, 1521, 1600, 1681, 1764, 1849, 1936, 2025, 2116, 2209, 2304, 2401, 2500, 2601, 2704, 2809, 2916, 3025, 3136, 3249, 3364, 3481, 3600, 3721, 3844, 3969, 4096, 4225, 4356, 4489, 4624, 4761, 4900, 5041, 5184, 5329, 5476, 5625, 5776, 5929, 6084, 6241, 6400, 6561, 6724, 6889, 7056, 7225, 7396, 7569, 7744, 7921, 8100, 8281, 8464, 8649, 8836, 9025, 9216, 9409, 9604, 9801] # Matrix - Matrix Multiplication Time for numpy ```python %%timeit -n 5 A = np.array([[1,2],[3,4]]) np.dot(A,A) ``` 5 loops, best of 3: 3.63 µs per loop ```python A = np.array([[1,2],[3,4]]) np.dot(A,A) ``` array([[ 7, 10], [15, 22]]) ```python A.dot(A) ``` array([[ 7, 10], [15, 22]]) Time with no direct multiplication function ```python %%timeit -n 5 B = [[1,2],[3,4]] C = [] for i in range(2): C.append([]) for j in range(2): sum = 0 for k in range(2): sum = sum + B[i][k] * B[k][j] C[i].append(sum) ``` 5 loops, best of 3: 3.6 µs per loop ```python B = [[1,2],[3,4]] C = [] for i in range(2): C.append([]) for j in range(2): sum = 0 for k in range(2): sum = sum + B[i][k] * B[k][j] C[i].append(sum) #printing part N = 2 print('[', end = '') for i in range(N): if i != 0: print(' ', end = '') print('[', end = '') for j in range(N): print('{:2d}'.format(C[i][j]), end = '') if j != N-1: print(', ', end = '') print(']', end = '') if i != N-1: print(',') print(']', end = '') ``` [[ 7, 10], [15, 22]] 4 x 4 Matrix Multiplication ```python np.random.seed(1) A = 
np.ceil(10*np.random.random((4,4))) np.random.seed(2) B = np.ceil(10*np.random.random((4,4))) print('A = ') print(A, end = '\n\n') print('B = ') print(B) ``` A = [[ 5. 8. 1. 4.] [ 2. 1. 2. 4.] [ 4. 6. 5. 7.] [ 3. 9. 1. 7.]] B = [[ 5. 1. 6. 5.] [ 5. 4. 3. 7.] [ 3. 3. 7. 6.] [ 2. 6. 2. 8.]] ```python np.dot(A,B) ``` array([[ 76., 64., 69., 119.], [ 29., 36., 37., 61.], [ 79., 85., 91., 148.], [ 77., 84., 66., 140.]]) ```python ans = np.zeros((4,4)) for i in range(4): for j in range(4): sum = 0 for k in range(4): sum = sum + A[i,k] * B[k,j] ans[i,j] = sum ans ``` array([[ 76., 64., 69., 119.], [ 29., 36., 37., 61.], [ 79., 85., 91., 148.], [ 77., 84., 66., 140.]]) Pythonic way for matrix multiplication ```python %%timeit -n 5 np.dot(A,B) ``` The slowest run took 54.15 times longer than the fastest. This could mean that an intermediate result is being cached. 5 loops, best of 3: 1.36 µs per loop non-Pythonic way for matrix multiplication ```python %%timeit -n 5 ans = np.zeros((4,4)) for i in range(4): for j in range(4): sum = 0 for k in range(4): sum = sum + A[i,k] * B[k,j] ans[i,j] = sum ans ``` 5 loops, best of 3: 32.8 µs per loop # Gaussian Elimination ```python np.random.seed(2) a = np.ceil(10*np.random.random((4,4))) a ``` array([[ 5., 1., 6., 5.], [ 5., 4., 3., 7.], [ 3., 3., 7., 6.], [ 2., 6., 2., 8.]]) ```python b = np.floor(10*np.random.random((4,1))) b ``` array([[ 8.], [ 4.], [ 8.], [ 0.]]) ```python A = np.hstack((a,b)) A ``` array([[ 5., 1., 6., 5., 8.], [ 5., 4., 3., 7., 4.], [ 3., 3., 7., 6., 8.], [ 2., 6., 2., 8., 0.]]) ```python A.shape[0] ``` 4 ### Use: &nbsp; $L_{ij}^{new} = \left[L_{ij} - \frac{L_{ik}}{L_{kk}} L_{kj}\right]^{old}$ ```python N = A.shape[0] for k in range(N-1): for i in range(k+1, N): r = -A[i,k] / A[k,k] for j in range(k+1, N+1): A[i,j] = A[i,j] + r * A[k,j] #lines below are not used during back substitution for j in range(k+1): A[i,j] = 0 ``` ```python A ``` array([[ 5. , 1. , 6. , 5. , 8. ], [ 0. , 3. , -3. , 2. , -4. ], [ 0. , 0. , 5.8 , 1.4 , 6.4 ], [ 0. , 0. , 0. 
, 1.01149425, -1.47126437]]) # Back Substitution: &nbsp; $x_i = \frac{a_{im} - \sum_{j=i+1}^n a_{ij}x_j}{a_{ii}}$ ```python #For ax = b # A = np.hstack((a,b)) A[N-1,N] = A[N-1,-1] / A[N-1, -2] for i in range(N-2, -1, -1): #2, 1, 0 sum = 0 for j in range(i+1, N): #i+1 to N-1 sum = sum + A[i,j] * A[j,N] A[i,N] = (A[i,N] - sum)/A[i,i] A[:,N] ``` array([ 1.09090909, 1.09090909, 1.45454545, -1.45454545]) ```python ans = A[:,N] ans ``` array([ 1.09090909, 1.09090909, 1.45454545, -1.45454545]) ```python ans = ans[:,np.newaxis] ans ``` array([[ 1.09090909], [ 1.09090909], [ 1.45454545], [-1.45454545]]) ## Check answer Ax = b, by calculating A* ans ```python np.dot(a,ans) ``` array([[ 8.], [ 4.], [ 8.], [ 0.]]) ```python b ``` array([[ 8.], [ 4.], [ 8.], [ 0.]]) ### Answer from Scipy ```python sp.__version__ ``` '0.19.1' ```python ans_sp = linalg.solve(a,b) ans_sp ``` array([[ 1.09090909], [ 1.09090909], [ 1.45454545], [-1.45454545]]) ```python ans - ans_sp ``` array([[ 2.22044605e-16], [ -6.66133815e-16], [ -2.22044605e-16], [ 4.44089210e-16]]) ```python a.dot(ans) - b ``` array([[ 0.00000000e+00], [ -8.88178420e-16], [ -1.77635684e-15], [ 0.00000000e+00]]) ```python a.dot(ans_sp) - b ``` array([[ -1.77635684e-15], [ -3.55271368e-15], [ -1.77635684e-15], [ -1.77635684e-15]]) ### Up-scaling to 10 x 10 ```python def user_gaussian_solve(a,b): A = np.hstack((a,b)) N = A.shape[0] for k in range(N-1): for i in range(k+1, N): r = -A[i,k] / A[k,k] for j in range(k+1, N+1): A[i,j] = A[i,j] + r * A[k,j] #lines below are not used during back substitution A[N-1,N] = A[N-1,-1] / A[N-1, -2] for i in range(N-2, -1, -1): #2, 1, 0 sum = 0 for j in range(i+1, N): #i+1 to N-1 sum = sum + A[i,j] * A[j,N] A[i,N] = (A[i,N] - sum)/A[i,i] return A[:,N][:,np.newaxis] ``` ```python x = user_gaussian_solve(a,b) a.dot(x) - b ``` array([[ 0.00000000e+00], [ -8.88178420e-16], [ -1.77635684e-15], [ 0.00000000e+00]]) ```python x ``` array([[ 1.09090909], [ 1.09090909], [ 1.45454545], [-1.45454545]]) ```python np.random.seed(2) a = np.ceil(10*np.random.random((10,10))) a ``` array([[ 5., 1., 6., 5., 5., 4., 3., 7., 3., 3.], [ 7., 6., 2., 6., 2., 8., 9., 5., 9., 1.], [ 6., 1., 5., 1., 2., 6., 3., 2., 3., 4.], [ 5., 3., 7., 5., 6., 4., 8., 6., 2., 8.], [ 10., 6., 9., 4., 6., 5., 5., 8., 6., 10.], [ 6., 1., 4., 9., 5., 1., 3., 1., 10., 10.], [ 9., 7., 8., 2., 3., 6., 4., 1., 10., 5.], [ 6., 4., 3., 4., 9., 8., 4., 1., 8., 3.], [ 6., 1., 7., 4., 5., 5., 4., 6., 10., 2.], [ 4., 1., 8., 7., 3., 5., 7., 7., 2., 9.]]) ```python b = np.ceil(10*np.random.random((10,1))) b ``` array([[ 8.], [ 2.], [ 9.], [ 8.], [ 8.], [ 6.], [ 3.], [ 10.], [ 6.], [ 4.]]) ```python x = user_gaussian_solve(a,b) a.dot(x) - b ``` array([[ 1.77635684e-15], [ 2.66453526e-15], [ 1.77635684e-15], [ 0.00000000e+00], [ 0.00000000e+00], [ -8.88178420e-16], [ -4.44089210e-16], [ -3.37507799e-14], [ 1.77635684e-15], [ 8.88178420e-15]]) ```python x ``` array([[ 2.07221037], [-1.39430277], [-0.12657769], [-0.01830467], [ 1.07867709], [-0.07065234], [ 0.4425574 ], [-0.48488272], [-0.7342585 ], [-0.31908441]]) ```python x2 = linalg.solve(a,b) a.dot(x2) - b ``` array([[ 0.00000000e+00], [ -8.88178420e-16], [ 0.00000000e+00], [ 1.77635684e-15], [ 3.55271368e-15], [ -8.88178420e-16], [ 4.88498131e-15], [ -1.77635684e-15], [ -1.77635684e-15], [ -8.88178420e-16]]) ```python x2 ``` array([[ 2.07221037], [-1.39430277], [-0.12657769], [-0.01830467], [ 1.07867709], [-0.07065234], [ 0.4425574 ], [-0.48488272], [-0.7342585 ], [-0.31908441]]) ```python %%timeit -n 5 
user_gaussian_solve(a,b) ``` 5 loops, best of 3: 208 µs per loop ```python %%timeit -n 5 linalg.solve(a,b) ``` 5 loops, best of 3: 59.7 µs per loop # Partial Pivoting: max( abs(diagonal)) on each step of elimination Find the maximum value that is on the same column as the pivot element, then swap ```python np.random.seed(1) a2 = np.ceil(10*np.random.random((4,4))) a2[3,:] = a2[3,:] * 10000 a2 ``` array([[ 5.00000000e+00, 8.00000000e+00, 1.00000000e+00, 4.00000000e+00], [ 2.00000000e+00, 1.00000000e+00, 2.00000000e+00, 4.00000000e+00], [ 4.00000000e+00, 6.00000000e+00, 5.00000000e+00, 7.00000000e+00], [ 3.00000000e+04, 9.00000000e+04, 1.00000000e+04, 7.00000000e+04]]) ```python np.random.seed(1) b2 = np.ceil(10*np.random.random((4,1))) b2 ``` array([[ 5.], [ 8.], [ 1.], [ 4.]]) ```python ans2 = user_gaussian_solve(a2,b2) ``` ```python a2.dot(ans2) - b2 ``` array([[ 1.77635684e-15], [ 0.00000000e+00], [ 1.77635684e-15], [ 1.45519152e-11]]) ```python ans2b = linalg.solve(a2,b2) ``` ```python a2.dot(ans2b) - b2 ``` array([[ 8.88178420e-16], [ 0.00000000e+00], [ -5.32907052e-15], [ -5.82076609e-11]]) ```python np.random.seed(1) a2 = np.floor(10*np.random.random((10,10))) - 6 a2[0,0] = 0 ``` ```python a2 ``` array([[ 0., 1., -6., -3., -5., -6., -5., -3., -3., -1.], [-2., 0., -4., 2., -6., 0., -2., -1., -5., -5.], [ 2., 3., -3., 0., 2., 2., -6., -6., -5., 2.], [-6., -2., 3., -1., 0., -3., 0., 2., -6., 1.], [ 3., 1., -4., 1., -5., -2., 3., -4., -4., -5.], [-6., 0., -4., -4., -2., -6., -1., -5., -1., 0.], [-5., -2., 0., -2., -6., -1., 0., -1., 3., -1.], [ 3., -5., -5., 2., -3., -5., 3., -3., 1., 1.], [ 2., 0., 1., -3., -4., 2., -2., 3., 0., 0.], [-5., 3., -2., -1., -2., -4., 3., -1., -6., 0.]]) ```python b ``` array([[ 8.], [ 2.], [ 9.], [ 8.], [ 8.], [ 6.], [ 3.], [ 10.], [ 6.], [ 4.]]) ```python linalg.solve(a2,b) ``` array([[ 0.77432279], [-1.39446307], [ 0.61638638], [-1.23315046], [-1.0049791 ], [ 0.19168532], [ 0.28772884], [-1.2307709 ], [-1.45290852], [ 1.09384768]]) ```python user_gaussian_solve(a2,b) ``` /home/me/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:6: RuntimeWarning: divide by zero encountered in double_scalars /home/me/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:6: RuntimeWarning: invalid value encountered in double_scalars array([[ nan], [ nan], [ nan], [ nan], [ nan], [ nan], [ nan], [ nan], [ nan], [ nan]]) ### Get "<font color=#FF0000> Not A Number </font>" (nan) because of zeros in the diagonal terms ```python def user_gaussian_solve_pp(a,b): A = np.hstack((a,b)) N = A.shape[0] for k in range(N-1): maxidx = np.abs(A[k:,k]).argmax() + k #get index of the max arg # +k is needed, because, argmax restart at 0 for the new slice A[[k,maxidx]] = A[[maxidx, k]] for i in range(k+1, N): r = -A[i,k] / A[k,k] for j in range(k+1, N+1): A[i,j] = A[i,j] + r * A[k,j] A[N-1,N] = A[N-1,-1] / A[N-1, -2] for i in range(N-2, -1, -1): #2, 1, 0 sum = 0 for j in range(i+1, N): #i+1 to N-1 sum = sum + A[i,j] * A[j,N] A[i,N] = (A[i,N] - sum)/A[i,i] return A[:,N][:,np.newaxis] ``` ```python ans = user_gaussian_solve_pp(a2,b) ans ``` array([[ 0.77432279], [-1.39446307], [ 0.61638638], [-1.23315046], [-1.0049791 ], [ 0.19168532], [ 0.28772884], [-1.2307709 ], [-1.45290852], [ 1.09384768]]) ```python err_usr = a2.dot(ans)-b err_usr ``` array([[ -2.66453526e-15], [ 0.00000000e+00], [ -3.55271368e-15], [ 0.00000000e+00], [ 1.77635684e-15], [ -2.66453526e-15], [ 0.00000000e+00], [ -1.77635684e-15], [ 8.88178420e-16], [ 5.32907052e-15]]) ```python ans_sp = linalg.solve(a2,b) 
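# residual of SciPy's solver, for comparison with err_usr from the hand-written partial-pivoting solver above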
err_sp = a2.dot(ans_sp)-b err_sp ``` array([[ 0.00000000e+00], [ 8.88178420e-16], [ 0.00000000e+00], [ 0.00000000e+00], [ 1.77635684e-15], [ 0.00000000e+00], [ 2.66453526e-15], [ 0.00000000e+00], [ 8.88178420e-16], [ 2.66453526e-15]]) ```python ans_sp ``` array([[ 0.77432279], [-1.39446307], [ 0.61638638], [-1.23315046], [-1.0049791 ], [ 0.19168532], [ 0.28772884], [-1.2307709 ], [-1.45290852], [ 1.09384768]]) ```python print('err scipy', np.linalg.norm(err_sp)) print('err user ', np.linalg.norm(err_usr)) ``` err scipy 4.35116785763e-15 err user 7.89430247156e-15 # Checking the condition number of A (square matrix) well conditioned: 1 <br>ill conditioned: very large number ## For Ax = b, if x change a little (contain a little error), what will happen to b ## $A(x+\delta x) = b + \delta b$ ## $A \delta x = \delta b$ ## $\frac{\|\delta x\|}{\|x\|} = \frac{\|A^{-1}\delta b\|}{\|x\|} <= \frac{\|A^{-1}\|\|\delta b\|}{\|x\|}$ ## $\frac{\|A^{-1}\|\|\delta b\|}{\|x\|} \cdot \frac{\|b\|}{\|b\|} = \frac{\|A^{-1}\|\| b\|}{\|x\|} \cdot \frac{\| \delta b\|}{\|b\|} = \|A^{-1}\|\cdot\|A\|\cdot \frac{\| \delta b\|}{\|b\|}$ ## $\therefore \; \frac{\|\delta x\|}{\|x\|} <= \|A^{-1}\|\cdot\|A\|\cdot \frac{\| \delta b\|}{\|b\|}$ ## $\|A^{-1}\|\cdot\|A\|$ is the condition number of matrix A ### The relative error in x (answer) is bounded by the relative error in b * condition number ```python a2 ``` array([[ 0., 1., -6., -3., -5., -6., -5., -3., -3., -1.], [-2., 0., -4., 2., -6., 0., -2., -1., -5., -5.], [ 2., 3., -3., 0., 2., 2., -6., -6., -5., 2.], [-6., -2., 3., -1., 0., -3., 0., 2., -6., 1.], [ 3., 1., -4., 1., -5., -2., 3., -4., -4., -5.], [-6., 0., -4., -4., -2., -6., -1., -5., -1., 0.], [-5., -2., 0., -2., -6., -1., 0., -1., 3., -1.], [ 3., -5., -5., 2., -3., -5., 3., -3., 1., 1.], [ 2., 0., 1., -3., -4., 2., -2., 3., 0., 0.], [-5., 3., -2., -1., -2., -4., 3., -1., -6., 0.]]) ```python np.linalg.cond(a2) ``` 18.062377254421843 ```python a ``` array([[ 5., 1., 6., 5., 5., 4., 3., 7., 3., 3.], [ 7., 6., 2., 6., 2., 8., 9., 5., 9., 1.], [ 6., 1., 5., 1., 2., 6., 3., 2., 3., 4.], [ 5., 3., 7., 5., 6., 4., 8., 6., 2., 8.], [ 10., 6., 9., 4., 6., 5., 5., 8., 6., 10.], [ 6., 1., 4., 9., 5., 1., 3., 1., 10., 10.], [ 9., 7., 8., 2., 3., 6., 4., 1., 10., 5.], [ 6., 4., 3., 4., 9., 8., 4., 1., 8., 3.], [ 6., 1., 7., 4., 5., 5., 4., 6., 10., 2.], [ 4., 1., 8., 7., 3., 5., 7., 7., 2., 9.]]) ```python np.linalg.cond(a) ``` 40.145162515000834 ```python c_num_test = a + 50 * np.eye((a.shape[0])) c_num_test ``` array([[ 55., 1., 6., 5., 5., 4., 3., 7., 3., 3.], [ 7., 56., 2., 6., 2., 8., 9., 5., 9., 1.], [ 6., 1., 55., 1., 2., 6., 3., 2., 3., 4.], [ 5., 3., 7., 55., 6., 4., 8., 6., 2., 8.], [ 10., 6., 9., 4., 56., 5., 5., 8., 6., 10.], [ 6., 1., 4., 9., 5., 51., 3., 1., 10., 10.], [ 9., 7., 8., 2., 3., 6., 54., 1., 10., 5.], [ 6., 4., 3., 4., 9., 8., 4., 51., 8., 3.], [ 6., 1., 7., 4., 5., 5., 4., 6., 60., 2.], [ 4., 1., 8., 7., 3., 5., 7., 7., 2., 59.]]) ```python np.linalg.cond(c_num_test,p='fro') # p = 'fro' is just to select Frobenius norm ``` 11.250797180551318 ## large diagonal term make it easier to get the answer, better accuracy (lower condition number) ```python np.linalg.norm(sp.linalg.inv(c_num_test)) * np.linalg.norm(c_num_test) ``` 11.250797180551317 <font size = '4', style = "line-height:2"> Frobenius norm is defined as <br>$\|A\|_{\rm F}=\sqrt{\sum_{i=1}^m\sum_{j=1}^n |a_{ij}|^2}=\sqrt{\operatorname{trace}(A^{\dagger}A)}$ <br> dagger is conjugate transpose (or just transpose for real number) <br> trace is 
summation of the diagonal term ```python np.linalg.norm(c_num_test) ``` 183.01639270841287 ```python np.linalg.norm(sp.linalg.inv(c_num_test)) ``` 0.061474259294774868 ```python for row in c_num_test: print(row) ``` [ 55. 1. 6. 5. 5. 4. 3. 7. 3. 3.] [ 7. 56. 2. 6. 2. 8. 9. 5. 9. 1.] [ 6. 1. 55. 1. 2. 6. 3. 2. 3. 4.] [ 5. 3. 7. 55. 6. 4. 8. 6. 2. 8.] [ 10. 6. 9. 4. 56. 5. 5. 8. 6. 10.] [ 6. 1. 4. 9. 5. 51. 3. 1. 10. 10.] [ 9. 7. 8. 2. 3. 6. 54. 1. 10. 5.] [ 6. 4. 3. 4. 9. 8. 4. 51. 8. 3.] [ 6. 1. 7. 4. 5. 5. 4. 6. 60. 2.] [ 4. 1. 8. 7. 3. 5. 7. 7. 2. 59.] ```python #this is non-pythonic way. #use just to show the direct calculation sum_ = 0 for row in c_num_test: for i in row: sum_ = sum_ + i**2 sum_**0.5 ``` 183.01639270841287 ```python #this is non-pythonic way. #use just to show the direct calculation sum_ = 0 for i in range(c_num_test.shape[0]): dia = c_num_test.dot(c_num_test.T)[i,i] sum_ += dia sum_**0.5 ``` 183.01639270841287 ## Condition number before and after partial pivoting is the same ```python def partial_pivoting(a): A = a.copy() N = A.shape[0] p = np.eye(N,N) for k in range(N-1): maxidx = np.abs(A[k:,k]).argmax() + k #get index of the max arg # +k is needed, because, argmax restart at 0 for the new slice if (k != maxidx): p[[k, maxidx]] = p[[maxidx, k]] A[[k, maxidx]] = A[[maxidx, k]] return A, p ``` ```python np.random.seed(1) a2 = np.floor(10*np.random.random((10,10))) - 6 a2[0,0] = 0 a2 ``` array([[ 0., 1., -6., -3., -5., -6., -5., -3., -3., -1.], [-2., 0., -4., 2., -6., 0., -2., -1., -5., -5.], [ 2., 3., -3., 0., 2., 2., -6., -6., -5., 2.], [-6., -2., 3., -1., 0., -3., 0., 2., -6., 1.], [ 3., 1., -4., 1., -5., -2., 3., -4., -4., -5.], [-6., 0., -4., -4., -2., -6., -1., -5., -1., 0.], [-5., -2., 0., -2., -6., -1., 0., -1., 3., -1.], [ 3., -5., -5., 2., -3., -5., 3., -3., 1., 1.], [ 2., 0., 1., -3., -4., 2., -2., 3., 0., 0.], [-5., 3., -2., -1., -2., -4., 3., -1., -6., 0.]]) ```python a2_pp, p = partial_pivoting(a2) print(a2_pp) ``` [[-6. -2. 3. -1. 0. -3. 0. 2. -6. 1.] [ 3. -5. -5. 2. -3. -5. 3. -3. 1. 1.] [ 0. 1. -6. -3. -5. -6. -5. -3. -3. -1.] [-6. 0. -4. -4. -2. -6. -1. -5. -1. 0.] [-5. -2. 0. -2. -6. -1. 0. -1. 3. -1.] [-5. 3. -2. -1. -2. -4. 3. -1. -6. 0.] [ 2. 3. -3. 0. 2. 2. -6. -6. -5. 2.] [ 3. 1. -4. 1. -5. -2. 3. -4. -4. -5.] [-2. 0. -4. 2. -6. 0. -2. -1. -5. -5.] [ 2. 0. 1. -3. -4. 2. -2. 3. 0. 
0.]] ```python np.linalg.cond(a2,p='fro') ``` 33.563995874635459 ```python np.linalg.cond(a2_pp,p='fro') ``` 33.563995874635467 ```python a2 ``` array([[ 0., 1., -6., -3., -5., -6., -5., -3., -3., -1.], [-2., 0., -4., 2., -6., 0., -2., -1., -5., -5.], [ 2., 3., -3., 0., 2., 2., -6., -6., -5., 2.], [-6., -2., 3., -1., 0., -3., 0., 2., -6., 1.], [ 3., 1., -4., 1., -5., -2., 3., -4., -4., -5.], [-6., 0., -4., -4., -2., -6., -1., -5., -1., 0.], [-5., -2., 0., -2., -6., -1., 0., -1., 3., -1.], [ 3., -5., -5., 2., -3., -5., 3., -3., 1., 1.], [ 2., 0., 1., -3., -4., 2., -2., 3., 0., 0.], [-5., 3., -2., -1., -2., -4., 3., -1., -6., 0.]]) ```python a2_pp ``` array([[-6., -2., 3., -1., 0., -3., 0., 2., -6., 1.], [ 3., -5., -5., 2., -3., -5., 3., -3., 1., 1.], [ 0., 1., -6., -3., -5., -6., -5., -3., -3., -1.], [-6., 0., -4., -4., -2., -6., -1., -5., -1., 0.], [-5., -2., 0., -2., -6., -1., 0., -1., 3., -1.], [-5., 3., -2., -1., -2., -4., 3., -1., -6., 0.], [ 2., 3., -3., 0., 2., 2., -6., -6., -5., 2.], [ 3., 1., -4., 1., -5., -2., 3., -4., -4., -5.], [-2., 0., -4., 2., -6., 0., -2., -1., -5., -5.], [ 2., 0., 1., -3., -4., 2., -2., 3., 0., 0.]]) <font size = '4', style = "line-height:2"> In the above case, the condition number does not change after row switching <br> Can condition number change after just row switching? <br> No. <br>Scaling / adding with other row can change the condition number. <br> https://math.stackexchange.com/questions/149516/effects-of-elementary-row-operation-on-condition-number </font> ```python p ``` array([[ 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.], [ 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 1., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 1., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], [ 0., 0., 1., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 1., 0., 0., 0., 0., 0.], [ 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 1., 0.]]) ```python np.linalg.cond(p) ``` 1.0 ```python p2, _ = partial_pivoting(p) ``` ```python p2 ``` array([[ 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 1., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 1., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 1., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 1., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 1., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.]]) ```python np.linalg.cond(p2) ``` 1.0 ```python p[[2]] = p[[2]] * 5 p ``` array([[ 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.], [ 5., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 1., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 1., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], [ 0., 0., 1., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 1., 0., 0., 0., 0., 0.], [ 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 1., 0.]]) ```python np.linalg.cond(p) ``` 5.0 ```python p[[2]] = p[[2]] + p[[4]] * -5 p ``` array([[ 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.], [ 5., 0., 0., 0., 0., 0., -5., 0., 0., 0.], [ 0., 0., 0., 0., 0., 1., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 1., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], [ 0., 0., 1., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 1., 0., 0., 0., 0., 0.], [ 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 1., 0.]]) ```python 
np.linalg.cond(p) ``` 10.100999900019994 # LU Decomposition ```python np.random.seed(1) B = np.ceil(10*np.random.random((4,4))) + 10 * np.eye(4,4) B ``` array([[ 15., 8., 1., 4.], [ 2., 11., 2., 4.], [ 4., 6., 15., 7.], [ 3., 9., 1., 17.]]) ```python p, l, u = sp.linalg.lu(B ,permute_l = False) print(p, end = '\n\n') print(l, end = '\n\n') print(u, end = '\n\n') ``` [[ 1. 0. 0. 0.] [ 0. 1. 0. 0.] [ 0. 0. 1. 0.] [ 0. 0. 0. 1.]] [[ 1. 0. 0. 0. ] [ 0.13333333 1. 0. 0. ] [ 0.26666667 0.38926174 1. 0. ] [ 0.2 0.74496644 -0.04216579 1. ]] [[ 15. 8. 1. 4. ] [ 0. 9.93333333 1.86666667 3.46666667] [ 0. 0. 14.00671141 4.58389262] [ 0. 0. 0. 13.81073311]] ```python i = 2 j = 2 B[0:i,j].dot(B[i,0:i]) ``` 16.0 ```python B[0:i,j] ``` array([ 1., 2.]) ```python B[i,0:i] ``` array([ 4., 6.]) ```python N = B.shape[0] A = B.copy() for i in range(1, N): #loop for L for j in range(i): #A[a:b] is [a,b) not [a,b] Sum = A[0:j,j].dot(A[i,0:j]) A[i,j] = (A[i,j] - Sum)/A[j,j] #loop for U for j in range(i,N): Sum = A[0:i,j].dot(A[i,0:i]) A[i,j] = A[i,j] - Sum ``` ```python U_ans = np.triu(A) U_ans ``` array([[ 15. , 8. , 1. , 4. ], [ 0. , 9.93333333, 1.86666667, 3.46666667], [ 0. , 0. , 14.00671141, 4.58389262], [ 0. , 0. , 0. , 13.81073311]]) ```python u ``` array([[ 15. , 8. , 1. , 4. ], [ 0. , 9.93333333, 1.86666667, 3.46666667], [ 0. , 0. , 14.00671141, 4.58389262], [ 0. , 0. , 0. , 13.81073311]]) ```python L_ans = np.tril(A) L_ans[np.diag_indices(4)] = 1 L_ans ``` array([[ 1. , 0. , 0. , 0. ], [ 0.13333333, 1. , 0. , 0. ], [ 0.26666667, 0.38926174, 1. , 0. ], [ 0.2 , 0.74496644, -0.04216579, 1. ]]) ```python l ``` array([[ 1. , 0. , 0. , 0. ], [ 0.13333333, 1. , 0. , 0. ], [ 0.26666667, 0.38926174, 1. , 0. ], [ 0.2 , 0.74496644, -0.04216579, 1. ]]) ```python L_ans.dot(U_ans) ``` array([[ 15., 8., 1., 4.], [ 2., 11., 2., 4.], [ 4., 6., 15., 7.], [ 3., 9., 1., 17.]]) ```python B ``` array([[ 15., 8., 1., 4.], [ 2., 11., 2., 4.], [ 4., 6., 15., 7.], [ 3., 9., 1., 17.]]) ```python L_ans.dot(U_ans) - B ``` array([[ 0., 0., 0., 0.], [ 0., 0., 0., 0.], [ 0., 0., 0., 0.], [ 0., 0., 0., 0.]]) # A = PLU (LU with partial pivoting) Just row permutation (use permutation matrix p) ## $p = p^T$ and $pp^T = I$ ### $Ax = d$, do left-multiply the permutation matrix on both sides ### $PAx = Pd$, do LU-decomposition with "PA" (so that there is no division by zero) ### $PAx = LUx = Pd$ ### $Ly = c$, Solve for y. y and c are column vectors where $y = Ux$, and $c = Pd$ ### $Ux = y$, Use backward substitution to solve for x ### $PA = LU$ and $A = P^TLU$ because $P^T = P^{-1}$ ```python np.random.seed(1) B = np.ceil(10*np.random.random((4,4))) B ``` array([[ 5., 8., 1., 4.], [ 2., 1., 2., 4.], [ 4., 6., 5., 7.], [ 3., 9., 1., 7.]]) ```python p, l, u = sp.linalg.lu(B ,permute_l = False) print(p, end = '\n\n') print(l, end = '\n\n') print(u, end = '\n\n') ``` [[ 1. 0. 0. 0.] [ 0. 0. 0. 1.] [ 0. 0. 1. 0.] [ 0. 1. 0. 0.]] [[ 1. 0. 0. 0. ] [ 0.6 1. 0. 0. ] [ 0.8 -0.0952381 1. 0. ] [ 0.4 -0.52380952 0.42696629 1. ]] [[ 5. 8. 1. 4. ] [ 0. 4.2 0.4 4.6 ] [ 0. 0. 4.23809524 4.23809524] [ 0. 0. 0. 3. 
]] ```python np.dot(np.dot(p,l),u) ``` array([[ 5., 8., 1., 4.], [ 2., 1., 2., 4.], [ 4., 6., 5., 7.], [ 3., 9., 1., 7.]]) ```python np.dot(l,u) ``` array([[ 5., 8., 1., 4.], [ 3., 9., 1., 7.], [ 4., 6., 5., 7.], [ 2., 1., 2., 4.]]) ```python np.dot(p,p) ``` array([[ 1., 0., 0., 0.], [ 0., 1., 0., 0.], [ 0., 0., 1., 0.], [ 0., 0., 0., 1.]]) ```python B ``` array([[ 5., 8., 1., 4.], [ 2., 1., 2., 4.], [ 4., 6., 5., 7.], [ 3., 9., 1., 7.]]) ```python max(range(4), key = lambda i:B[1,i]) ``` 3 ```python N = B.shape[0] A = B.copy() p_ans = np.eye(N,N) for i in range(N): maxidx = np.abs(A[i:,i]).argmax() + i #may use max(range(i,N), key = lambda x: A[x,i]) if (i != maxidx): p_ans[[i, maxidx]] = p_ans[[maxidx, i]] A[[i, maxidx]] = A[[maxidx, i]] ``` ```python p_ans ``` array([[ 1., 0., 0., 0.], [ 0., 0., 0., 1.], [ 0., 0., 1., 0.], [ 0., 1., 0., 0.]]) ```python p_ans.dot(p_ans) ``` array([[ 1., 0., 0., 0.], [ 0., 1., 0., 0.], [ 0., 0., 1., 0.], [ 0., 0., 0., 1.]]) ```python N = B.shape[0] A = p_ans.dot(B.copy()) for i in range(1, N): #loop for L for j in range(i): #A[a:b] is [a,b) not [a,b] Sum = A[0:j,j].dot(A[i,0:j]) A[i,j] = (A[i,j] - Sum)/A[j,j] #loop for U for j in range(i,N): Sum = A[0:i,j].dot(A[i,0:i]) A[i,j] = A[i,j] - Sum ``` ```python U_ans = np.triu(A) U_ans ``` array([[ 5. , 8. , 1. , 4. ], [ 0. , 4.2 , 0.4 , 4.6 ], [ 0. , 0. , 4.23809524, 4.23809524], [ 0. , 0. , 0. , 3. ]]) ```python L_ans = np.tril(A) L_ans[np.diag_indices(4)] = 1 L_ans ``` array([[ 1. , 0. , 0. , 0. ], [ 0.6 , 1. , 0. , 0. ], [ 0.8 , -0.0952381 , 1. , 0. ], [ 0.4 , -0.52380952, 0.42696629, 1. ]]) ```python (p_ans.T).dot(L_ans.dot(U_ans)) ``` array([[ 5., 8., 1., 4.], [ 2., 1., 2., 4.], [ 4., 6., 5., 7.], [ 3., 9., 1., 7.]]) ```python B ``` array([[ 5., 8., 1., 4.], [ 2., 1., 2., 4.], [ 4., 6., 5., 7.], [ 3., 9., 1., 7.]]) # Forward substitution $x_m = \frac{b_m - \sum_{i=1}^{m-1} \ell_{m,i}x_i}{\ell_{m,m}}$ ## After we know that code works, wrap it up into functions ```python #LU with partial pivoting #PA = LU #take A, return PLU def p_LU(B): N = B.shape[0] A = B.copy() p_ans = np.eye(N,N) #pivoting for i in range(N): maxidx = np.abs(A[i:,i]).argmax() + i #may use max(range(i,N), key = lambda x: A[x,i]) if (i != maxidx): p_ans[[i, maxidx]] = p_ans[[maxidx, i]] A[[i, maxidx]] = A[[maxidx, i]] for i in range(1, N): #loop for L for j in range(i): #A[a:b] is [a,b) not [a,b] # Sum = A[0:j,j].dot(A[i,0:j]) Sum = 0 for k in range(j): Sum = Sum + A[k,j] * A[i,k] A[i,j] = (A[i,j] - Sum)/A[j,j] #loop for U for j in range(i,N): # Sum = A[0:i,j].dot(A[i,0:i]) Sum = 0 for k in range(i): Sum = Sum + A[k,j] * A[i,k] A[i,j] = A[i,j] - Sum U_ans = np.triu(A) L_ans = np.tril(A) L_ans[np.diag_indices(N)] = 1 #need to change from 4 to N return p_ans, L_ans, U_ans ``` ```python def solve_LU(B,d): p, l, u = p_LU(B) c = p.dot(d) #PAx = LUx = Ly = Pd = c #Ly = c #solve for y (forward sub) N = d.shape[0] y = np.empty(N) for i in range(N): Sum = 0 for k in range(i): Sum = Sum + l[i,k] * y[k] y[i] = (c[i] - Sum)/l[i,i] #solve for x (backward sub) #Ux = y x = np.empty(N) for i in range(N-1,-1,-1): #3 to 0 Sum = 0 for k in range(i+1, N): Sum = Sum + u[i,k] * x[k] x[i] = (y[i] - Sum)/u[i,i] return x ``` ```python np.random.seed(1) B = np.ceil(10*np.random.random((4,4))) np.random.seed(10) D = np.ceil(10*np.random.random(4)) ``` ```python x = solve_LU(B,D) ``` ```python B.dot(x) ``` array([ 8., 1., 7., 8.]) ```python D ``` array([ 8., 1., 7., 8.]) ```python x ``` array([ 0.06741573, 1.04494382, 0.51685393, -0.30337079]) ```python 
np.random.seed(1) a2 = np.ceil(10*np.random.random((5,5))) np.random.seed(2) b = np.ceil(10*np.random.random((5,1))) ``` ```python a2 ``` array([[ 5., 8., 1., 4., 2.], [ 1., 2., 4., 4., 6.], [ 5., 7., 3., 9., 1.], [ 7., 5., 6., 2., 2.], [ 9., 10., 4., 7., 9.]]) ```python b ``` array([[ 5.], [ 1.], [ 6.], [ 5.], [ 5.]]) ```python p, l, u = p_LU(a2) ``` ```python x = solve_LU(a2,b) x ``` array([-0.03360414, 0.64881832, 0.43426883, 0.07884047, -0.38607829]) ```python a2.dot(x) -b.T ``` array([[ 0.00000000e+00, 0.00000000e+00, 8.88178420e-16, 0.00000000e+00, 0.00000000e+00]]) ```python a2.dot(x) ``` array([ 5., 1., 6., 5., 5.]) ```python p ``` array([[ 0., 0., 0., 0., 1.], [ 1., 0., 0., 0., 0.], [ 0., 0., 0., 1., 0.], [ 0., 0., 1., 0., 0.], [ 0., 1., 0., 0., 0.]]) ```python l ``` array([[ 1. , 0. , 0. , 0. , 0. ], [ 0.55555556, 1. , 0. , 0. , 0. ], [ 0.77777778, -1.13636364, 1. , 0. , 0. ], [ 0.55555556, 0.59090909, 1. , 1. , 0. ], [ 0.11111111, 0.36363636, 2.66666667, 1.4384058 , 1. ]]) ```python u ``` array([[ 9. , 10. , 4. , 7. , 9. ], [ 0. , 2.44444444, -1.22222222, 0.11111111, -3. ], [ 0. , 0. , 1.5 , -3.31818182, -8.40909091], [ 0. , 0. , 0. , 8.36363636, 6.18181818], [ 0. , 0. , 0. , 0. , 19.62318841]]) ```python (p.T).dot(l.dot(u)) ``` array([[ 5., 8., 1., 4., 2.], [ 1., 2., 4., 4., 6.], [ 5., 7., 3., 9., 1.], [ 7., 5., 6., 2., 2.], [ 9., 10., 4., 7., 9.]]) ```python a2 ``` array([[ 5., 8., 1., 4., 2.], [ 1., 2., 4., 4., 6.], [ 5., 7., 3., 9., 1.], [ 7., 5., 6., 2., 2.], [ 9., 10., 4., 7., 9.]]) ```python x2 = user_gaussian_solve_pp(a2,b) ``` ```python x2 ``` array([[-0.03360414], [ 0.64881832], [ 0.43426883], [ 0.07884047], [-0.38607829]]) ```python a2.dot(x2) ``` array([[ 5.], [ 1.], [ 6.], [ 5.], [ 5.]]) ```python p, l, u = p_LU(a2) ``` ```python p.dot(p) ``` array([[ 0., 1., 0., 0., 0.], [ 0., 0., 0., 0., 1.], [ 0., 0., 1., 0., 0.], [ 0., 0., 0., 1., 0.], [ 1., 0., 0., 0., 0.]]) ```python p2, l2, u2 = sp.linalg.lu(a2) ``` ```python p2 ``` array([[ 0., 0., 0., 1., 0.], [ 0., 0., 1., 0., 0.], [ 0., 0., 0., 0., 1.], [ 0., 1., 0., 0., 0.], [ 1., 0., 0., 0., 0.]]) ```python sp.linalg.inv(p2).dot(p2) ``` array([[ 1., 0., 0., 0., 0.], [ 0., 1., 0., 0., 0.], [ 0., 0., 1., 0., 0.], [ 0., 0., 0., 1., 0.], [ 0., 0., 0., 0., 1.]]) ```python sp.linalg.inv(p2) ``` array([[ 0., 0., -0., 0., 1.], [ 0., 0., -0., 1., 0.], [ 0., 1., -0., 0., 0.], [ 1., 0., -0., 0., 0.], [ 0., 0., 1., 0., 0.]]) ```python (p2.T).dot(p2) ``` array([[ 1., 0., 0., 0., 0.], [ 0., 1., 0., 0., 0.], [ 0., 0., 1., 0., 0.], [ 0., 0., 0., 1., 0.], [ 0., 0., 0., 0., 1.]]) ```python (l.dot(u)).astype(int) ``` array([[ 9, 10, 4, 7, 9], [ 5, 8, 1, 4, 2], [ 7, 5, 6, 2, 2], [ 5, 7, 3, 9, 1], [ 1, 2, 3, 4, 6]]) ```python np.random.seed(1) a2 = np.ceil(10*np.random.random((10,10))) - 6 np.random.seed(2) b = np.ceil(10*np.random.random((10,1))) ``` ```python x = solve_LU(a2,b) x ``` array([ 0.48316746, -0.9890326 , 0.67385345, -0.73804389, -1.47130591, 0.14987401, 0.21031571, -1.27113437, -1.25877634, 1.40582528]) ```python a2.dot(x) ``` array([ 5., 1., 6., 5., 5., 4., 3., 7., 3., 3.]) ```python b.T ``` array([[ 5., 1., 6., 5., 5., 4., 3., 7., 3., 3.]]) ```python p, l, u = p_LU(a2) ``` ```python p2, l2, u2 = sp.linalg.lu(a2) ``` ```python p ``` array([[ 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.], [ 0., 0., 1., 0., 0., 0., 0., 0., 0., 0.], [ 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 1., 0., 0., 0.], [ 0., 0., 0., 0., 0., 1., 0., 0., 0., 0.], [ 0., 0., 0., 0., 1., 0., 0., 0., 
0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 1., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], [ 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.]]) ```python p2 ``` array([[ 0., 0., 0., 0., 0., 0., 1., 0., 0., 0.], [ 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], [ 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.], [ 0., 0., 1., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 1., 0.], [ 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 1., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 1., 0., 0., 0., 0.]]) The linalg.lu and p_lu use different pivoting method ```python p2, l2, u2 = sp.linalg.lu(a2) ``` ```python LU = sp.linalg.lu_factor(a2) LU ``` (array([[ -5.00000000e+00, -1.00000000e+00, 4.00000000e+00, 0.00000000e+00, 1.00000000e+00, -2.00000000e+00, 1.00000000e+00, 3.00000000e+00, -5.00000000e+00, 2.00000000e+00], [ -8.00000000e-01, -4.80000000e+00, -8.00000000e-01, 3.00000000e+00, -1.20000000e+00, -5.60000000e+00, 4.80000000e+00, 4.00000000e-01, -2.00000000e+00, 3.60000000e+00], [ 1.00000000e+00, -4.16666667e-01, -7.33333333e+00, -1.75000000e+00, -2.50000000e+00, -5.33333333e+00, 1.00000000e+00, -6.83333333e+00, 4.16666667e+00, 5.00000000e-01], [ 2.00000000e-01, -2.50000000e-01, 5.45454545e-01, 4.70454545e+00, -4.13636364e+00, 2.90909091e+00, -5.45454545e-01, 3.22727273e+00, -5.77272727e+00, -3.77272727e+00], [ -6.00000000e-01, -8.33333333e-02, -5.90909091e-01, -5.91787440e-01, -6.42512077e+00, -9.66183575e-02, 2.68115942e-01, 3.70531401e+00, -3.12077295e+00, 5.62801932e-01], [ 8.00000000e-01, -1.00000000e+00, 6.81818182e-01, 8.91304348e-01, -3.72180451e-01, -5.99248120e+00, 7.90413534e+00, 1.16165414e+00, -1.85714286e+00, 6.23120301e+00], [ 2.00000000e-01, -4.58333333e-01, 8.40909091e-01, 1.79951691e-01, 2.96240602e-01, 5.30112923e-01, -7.01226474e+00, 1.03531995e+00, -2.47264743e+00, -1.96151192e+00], [ -8.00000000e-01, -2.50000000e-01, -3.02788098e-17, 5.84541063e-01, 1.68421053e-01, 9.48557089e-01, 1.74528407e-01, -4.29311432e+00, -1.40684676e+00, -4.95778592e+00], [ 8.00000000e-01, 4.16666667e-02, 2.95454545e-01, -1.29227053e-01, 8.63157895e-01, -6.45545797e-01, -6.42461899e-01, 4.10865556e-01, 6.59057235e+00, 1.92826107e+00], [ -6.00000000e-01, -7.08333333e-01, 2.27272727e-02, 6.72705314e-01, -8.69924812e-01, 6.81932246e-01, 8.28924943e-01, 7.82507580e-01, -4.43952969e-01, 1.18784156e+01]]), array([3, 7, 5, 7, 8, 9, 7, 8, 8, 9], dtype=int32)) ```python LU[1] ``` array([3, 7, 5, 7, 8, 9, 7, 8, 8, 9], dtype=int32) ```python len(LU) ``` 2 ```python ans_scipy = sp.linalg.lu_solve(LU,b) ans_scipy ``` array([[ 0.48316746], [-0.9890326 ], [ 0.67385345], [-0.73804389], [-1.47130591], [ 0.14987401], [ 0.21031571], [-1.27113437], [-1.25877634], [ 1.40582528]]) ```python x ``` array([ 0.48316746, -0.9890326 , 0.67385345, -0.73804389, -1.47130591, 0.14987401, 0.21031571, -1.27113437, -1.25877634, 1.40582528]) ```python np.set_printoptions(precision=8) ``` ```python x ``` array([ 0.48316746, -0.9890326 , 0.67385345, -0.73804389, -1.47130591, 0.14987401, 0.21031571, -1.27113437, -1.25877634, 1.40582528]) ```python a2.dot(x) ``` array([ 5., 1., 6., 5., 5., 4., 3., 7., 3., 3.]) ```python b.T ``` array([[ 5., 1., 6., 5., 5., 4., 3., 7., 3., 3.]]) ```python a2.dot(ans_scipy).T ``` array([[ 5., 1., 6., 5., 5., 4., 3., 7., 3., 3.]]) ```python ans_scipy.T ``` array([[ 0.48316746, -0.9890326 , 0.67385345, -0.73804389, -1.47130591, 0.14987401, 0.21031571, -1.27113437, -1.25877634, 1.40582528]]) ```python x ``` array([ 
0.48316746, -0.9890326 , 0.67385345, -0.73804389, -1.47130591, 0.14987401, 0.21031571, -1.27113437, -1.25877634, 1.40582528]) # Speed Test ```python a2 ``` array([[-1., 2., -5., -2., -4., -5., -4., -2., -2., 0.], [-1., 1., -3., 3., -5., 1., -1., 0., -4., -4.], [ 3., 4., -2., 1., 3., 3., -5., -5., -4., 3.], [-5., -1., 4., 0., 1., -2., 1., 3., -5., 2.], [ 4., 2., -3., 2., -4., -1., 4., -3., -3., -4.], [-5., 1., -3., -3., -1., -5., 0., -4., 0., 1.], [-4., -1., 1., -1., -5., 0., 1., 0., 4., 0.], [ 4., -4., -4., 3., -2., -4., 4., -2., 2., 2.], [ 3., 1., 2., -2., -3., 3., -1., 4., 1., 1.], [-4., 4., -1., 0., -1., -3., 4., 0., -5., 1.]]) ```python b ``` array([[ 5.], [ 1.], [ 6.], [ 5.], [ 5.], [ 4.], [ 3.], [ 7.], [ 3.], [ 3.]]) ```python %%timeit -n 5 LU = sp.linalg.lu_factor(a2) ans = sp.linalg.lu_solve(LU,b) ``` 5 loops, best of 3: 46.7 µs per loop ```python %%timeit -n 5 x = solve_LU(a2,b) ``` 5 loops, best of 3: 427 µs per loop ```python LU = sp.linalg.lu_factor(a2) ans = sp.linalg.lu_solve(LU,b) ans.T ``` array([[ 0.48316746, -0.9890326 , 0.67385345, -0.73804389, -1.47130591, 0.14987401, 0.21031571, -1.27113437, -1.25877634, 1.40582528]]) ```python x = solve_LU(a2,b) x ``` array([ 0.48316746, -0.9890326 , 0.67385345, -0.73804389, -1.47130591, 0.14987401, 0.21031571, -1.27113437, -1.25877634, 1.40582528]) ```python sp.linalg.lu ``` <function scipy.linalg.decomp_lu.lu> ```python sp_solve_ans = sp.linalg.solve(a2,b) ``` ```python sp_solve_ans.T ``` array([[ 0.48316746, -0.9890326 , 0.67385345, -0.73804389, -1.47130591, 0.14987401, 0.21031571, -1.27113437, -1.25877634, 1.40582528]]) ```python %%timeit -n 5 sp_solve_ans = sp.linalg.solve(a2,b) ``` 5 loops, best of 3: 61.1 µs per loop # Determinant: $det(A)=(-1)^s\Pi_{i=0}^{n-1}L_{ii}\Pi_{i=0}^{n-1}U_{ii}$ # $det(A)=(-1)^s\Pi_{i=0}^{n-1}U_{ii}$ ```python def det_LU(B): N = B.shape[0] A = B.copy() p_ans = np.eye(N,N) s = 0 #pivoting for i in range(N): maxidx = np.abs(A[i:,i]).argmax() + i #may use max(range(i,N), key = lambda x: A[x,i]) if (i != maxidx): p_ans[[i, maxidx]] = p_ans[[maxidx, i]] A[[i, maxidx]] = A[[maxidx, i]] s = s+1 for i in range(1, N): #loop for L for j in range(i): #A[a:b] is [a,b) not [a,b] # Sum = A[0:j,j].dot(A[i,0:j]) Sum = 0 for k in range(j): Sum = Sum + A[k,j] * A[i,k] A[i,j] = (A[i,j] - Sum)/A[j,j] #loop for U for j in range(i,N): # Sum = A[0:i,j].dot(A[i,0:i]) Sum = 0 for k in range(i): Sum = Sum + A[k,j] * A[i,k] A[i,j] = A[i,j] - Sum U_ans = np.triu(A) L_ans = np.tril(A) L_ans[np.diag_indices(N)] = 1 #need to change from 4 to N U_ii = np.product(U_ans[range(N),range(N)]) det_ans = (-1)**s * U_ii return det_ans ``` ```python #debug (do during making function) np.random.seed(3) a = np.ceil(10*np.random.random((5,5))) a[range(5),range(5)] = a[range(5),range(5)] * 10 print(a, end = '\n\n') a[range(5),range(5)] ``` [[ 60. 8. 3. 6. 9.] [ 9. 20. 3. 1. 5.] [ 1. 5. 70. 3. 7.] [ 6. 1. 6. 30. 5.] [ 3. 7. 5. 2. 
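As a sanity check on `det_LU`, it can help to compare against a log-domain computation: for larger or badly scaled matrices the raw product of the `U` diagonal can overflow or underflow, while NumPy's standard `np.linalg.slogdet` returns the sign and `log|det|` separately. A minimal sketch (`det_via_slogdet` is a name introduced here):

```python
import numpy as np

def det_via_slogdet(a):
    # sign * exp(log|det|) avoids forming the raw product of pivots
    sign, logabsdet = np.linalg.slogdet(a)
    return sign * np.exp(logabsdet)
```

For the matrices used below, `det_via_slogdet` should agree with `det_LU` and `sp.linalg.det` to within floating-point error.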
60.]] array([ 60., 20., 70., 30., 60.]) ```python np.product(_) ``` 151200000.0 ```python prod = 1 for i in a[range(5),range(5)]: prod = prod * i prod ``` 151200000.0 ```python a ``` array([[ 60., 8., 3., 6., 9.], [ 9., 20., 3., 1., 5.], [ 1., 5., 70., 3., 7.], [ 6., 1., 6., 30., 5.], [ 3., 7., 5., 2., 60.]]) ```python det_LU(a) ``` 131629550.0 ```python sp.linalg.det(a) ``` 131629549.99999996 ```python a2 ``` array([[-1., 2., -5., -2., -4., -5., -4., -2., -2., 0.], [-1., 1., -3., 3., -5., 1., -1., 0., -4., -4.], [ 3., 4., -2., 1., 3., 3., -5., -5., -4., 3.], [-5., -1., 4., 0., 1., -2., 1., 3., -5., 2.], [ 4., 2., -3., 2., -4., -1., 4., -3., -3., -4.], [-5., 1., -3., -3., -1., -5., 0., -4., 0., 1.], [-4., -1., 1., -1., -5., 0., 1., 0., 4., 0.], [ 4., -4., -4., 3., -2., -4., 4., -2., 2., 2.], [ 3., 1., 2., -2., -3., 3., -1., 4., 1., 1.], [-4., 4., -1., 0., -1., -3., 4., 0., -5., 1.]]) ```python sp.linalg.det(a2) ``` -75132998.00000001 ```python det_LU(a2) ``` -75132997.99999994 ### det_LU gives the same result as the one from scipy.linalg.det ! ```python %%timeit -n 5 sp.linalg.det(a2) ``` 5 loops, best of 3: 19.5 µs per loop ```python %%timeit -n 5 det_LU(a2) ``` 5 loops, best of 3: 386 µs per loop ```python # det from LU is a lot slower ``` ```python p, l, u = sp.linalg.lu(a2) ``` ```python sp.linalg.det(p) ``` 1.0 # Determinant if LU does not return s ```python p, l, u = p_LU(a2) ``` ```python p = p.astype(int) p ``` array([[0, 0, 0, 1, 0, 0, 0, 0, 0, 0], [0, 0, 1, 0, 0, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 1, 0, 0, 0], [0, 0, 0, 0, 0, 1, 0, 0, 0, 0], [0, 0, 0, 0, 1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 1, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 1], [0, 0, 0, 0, 0, 0, 0, 1, 0, 0]]) ```python p = np.eye(10,10) p[[2,5]] = p[[5,2]] ``` ```python def det_p(p): p = p.astype(int) N = p.shape[0] if N == 1: return 1 for i in range(N): for j in range(N): if p[i,j] == 1: new_p = np.delete(p,i,0) new_p = np.delete(new_p,j,1) return (-1)**(i+j) * det_p(new_p) ``` ```python p ``` array([[ 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 1., 0., 0., 0., 0.], [ 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 1., 0., 0., 0., 0., 0.], [ 0., 0., 1., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 1., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 1., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.]]) ```python p2 = np.delete(p,2,0) p2 = np.delete(p2,5,1) p2 ``` array([[ 1., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 1., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 1., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 1., 0., 0., 0., 0.], [ 0., 0., 1., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 1., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 1., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 1., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 1.]]) ```python p ``` array([[ 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 1., 0., 0., 0., 0.], [ 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 1., 0., 0., 0., 0., 0.], [ 0., 0., 1., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 1., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 1., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.]]) ## Testing function det_p ```python p = np.eye(10,10) p[[2,5]] = p[[5,2]] det_p(p) ``` -1 ```python p = np.eye(10,10) det_p(p) ``` 1 ```python p = np.eye(10,10) p[[2,5]] = 
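`det_p` expands along the single 1 in the first row at every level, which is fine at these sizes but does more work (and more `np.delete` copies) than necessary: the determinant of a permutation matrix is just the parity of the permutation. A cheaper sketch, assuming the input really is a 0/1 permutation matrix (`perm_sign` is a name introduced here):

```python
import numpy as np

def perm_sign(p):
    """Sign of a permutation matrix: +1 for even parity, -1 for odd."""
    perm = p.argmax(axis=1).copy()   # column index of the 1 in each row
    sign = 1
    for i in range(len(perm)):
        while perm[i] != i:          # cycle-sort; each swap flips the sign
            j = perm[i]
            perm[i], perm[j] = perm[j], perm[i]
            sign = -sign
    return sign
```

On the test matrices in this section, `perm_sign(p)` should return the same plus or minus 1 as `det_p(p)`.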
p[[5,2]] p[[0,1]] = p[[1,0]] det_p(p) ``` 1 ```python p = np.eye(10,10) p[[2,5]] = p[[5,2]] p[[0,1]] = p[[1,0]] p[[7,4]] = p[[4,7]] det_p(p) ``` -1 ```python p = np.eye(10,10) p[[2,5]] = p[[5,2]] p[[0,1]] = p[[1,0]] p[[7,4]] = p[[4,7]] p[[8,1]] = p[[1,8]] det_p(p) ``` 1 ```python p = np.eye(10,10) p[[2,5]] = p[[5,2]] p[[0,1]] = p[[1,0]] p[[7,4]] = p[[4,7]] p[[8,1]] = p[[1,8]] p[[8,1]] = p[[1,8]] det_p(p) ``` -1 ```python def det_LU_no_S(p,u): N = u.shape[0] return det_p(p) * np.product(u[range(N),range(N)]) ``` ```python a2 ``` array([[-1., 2., -5., -2., -4., -5., -4., -2., -2., 0.], [-1., 1., -3., 3., -5., 1., -1., 0., -4., -4.], [ 3., 4., -2., 1., 3., 3., -5., -5., -4., 3.], [-5., -1., 4., 0., 1., -2., 1., 3., -5., 2.], [ 4., 2., -3., 2., -4., -1., 4., -3., -3., -4.], [-5., 1., -3., -3., -1., -5., 0., -4., 0., 1.], [-4., -1., 1., -1., -5., 0., 1., 0., 4., 0.], [ 4., -4., -4., 3., -2., -4., 4., -2., 2., 2.], [ 3., 1., 2., -2., -3., 3., -1., 4., 1., 1.], [-4., 4., -1., 0., -1., -3., 4., 0., -5., 1.]]) ```python p,l,u = p_LU(a2) ``` ```python det_LU_no_S(p,u) ``` -75132997.99999994 ```python sp.linalg.det(a2) ``` -75132998.00000001 ```python det_p(p) ``` 1 ```python np.random.seed(6) a3 = np.ceil(10*np.random.random((15,15))) a3[range(15),range(15)] = a3[range(15),range(15)] - 10 print(a3, end = '\n\n') ``` [[ -1. 4. 9. 1. 2. 6. 6. 5. 4. 7. 5. 8. 6. 6. 7.] [ 10. -1. 5. 9. 9. 1. 8. 9. 8. 8. 6. 2. 10. 5. 3.] [ 8. 10. -7. 7. 6. 8. 10. 4. 3. 5. 8. 8. 5. 10. 5.] [ 4. 8. 1. -6. 8. 8. 3. 2. 5. 4. 8. 2. 4. 9. 7.] [ 9. 10. 9. 2. -3. 5. 5. 8. 9. 7. 2. 8. 1. 2. 10.] [ 5. 2. 7. 2. 7. -7. 1. 10. 1. 6. 4. 7. 9. 7. 6.] [ 2. 8. 3. 5. 1. 4. -1. 6. 6. 4. 9. 3. 2. 6. 3.] [ 8. 2. 8. 7. 1. 6. 7. 0. 8. 8. 5. 1. 9. 1. 6.] [ 6. 2. 1. 6. 5. 4. 1. 7. -8. 1. 3. 4. 2. 1. 8.] [ 5. 3. 6. 3. 9. 5. 7. 10. 4. -7. 10. 6. 10. 10. 8.] [ 1. 7. 3. 2. 4. 3. 8. 10. 1. 3. -7. 2. 10. 3. 4.] [ 1. 3. 9. 10. 7. 3. 1. 9. 2. 3. 6. -7. 9. 6. 5.] [ 10. 10. 3. 1. 7. 3. 1. 1. 5. 2. 3. 8. -5. 2. 3.] [ 10. 1. 7. 7. 7. 6. 8. 9. 7. 2. 8. 3. 9. -4. 1.] [ 10. 5. 10. 1. 2. 8. 1. 5. 9. 5. 2. 9. 2. 3. -7.]] ```python p,l,u = p_LU(a3) ``` ```python det_p(p) ``` -1 ```python np.float128(det_LU_no_S(p,u)) ``` 12001154699853958.0 ```python np.float128(sp.linalg.det(a3)) ``` 12001154699853984.0 ```python np.float128(det_LU(a3)) ``` 12001154699853958.0 # Matrix Inversion: Gauss-Jordan ### Use: &nbsp; $L_{ij}^{new} = \left[L_{ij} - \frac{L_{ik}}{L_{kk}} L_{kj}\right]^{old}$ ```python np.random.seed(3) a = np.ceil(10 * np.random.random([4,4])) a ``` array([[ 6., 8., 3., 6.], [ 9., 9., 2., 3.], [ 1., 5., 1., 5.], [ 7., 3., 7., 6.]]) ```python I = np.eye(4,4) I ``` array([[ 1., 0., 0., 0.], [ 0., 1., 0., 0.], [ 0., 0., 1., 0.], [ 0., 0., 0., 1.]]) ```python aI = np.hstack([a,I]) aI ``` array([[ 6., 8., 3., 6., 1., 0., 0., 0.], [ 9., 9., 2., 3., 0., 1., 0., 0.], [ 1., 5., 1., 5., 0., 0., 1., 0.], [ 7., 3., 7., 6., 0., 0., 0., 1.]]) ```python N = I.shape[0] N ``` 4 ```python A = aI.copy() for k in range(N-1): for i in range(k+1, N): r = -A[i,k] / A[k,k] print('r = ', r) for j in range(0, 2*N): A[i,j] = A[i,j] + r * A[k,j] #lines below are not used during back substitution np.set_printoptions(precision=2) print(A, end = '\n\n') ``` r = -1.5 [[ 6. 8. 3. 6. 1. 0. 0. 0. ] [ 0. -3. -2.5 -6. -1.5 1. 0. 0. ] [ 1. 5. 1. 5. 0. 0. 1. 0. ] [ 7. 3. 7. 6. 0. 0. 0. 1. ]] r = -0.166666666667 [[ 6. 8. 3. 6. 1. 0. 0. 0. ] [ 0. -3. -2.5 -6. -1.5 1. 0. 0. ] [ 0. 3.67 0.5 4. -0.17 0. 1. 0. ] [ 7. 3. 7. 6. 0. 0. 0. 1. ]] r = -1.16666666667 [[ 6. 8. 3. 6. 1. 0. 0. 0. ] [ 0. -3. 
-2.5 -6. -1.5 1. 0. 0. ] [ 0. 3.67 0.5 4. -0.17 0. 1. 0. ] [ 0. -6.33 3.5 -1. -1.17 0. 0. 1. ]] r = 1.22222222222 [[ 6. 8. 3. 6. 1. 0. 0. 0. ] [ 0. -3. -2.5 -6. -1.5 1. 0. 0. ] [ 0. 0. -2.56 -3.33 -2. 1.22 1. 0. ] [ 0. -6.33 3.5 -1. -1.17 0. 0. 1. ]] r = -2.11111111111 [[ 6. 8. 3. 6. 1. 0. 0. 0. ] [ 0. -3. -2.5 -6. -1.5 1. 0. 0. ] [ 0. 0. -2.56 -3.33 -2. 1.22 1. 0. ] [ 0. 0. 8.78 11.67 2. -2.11 0. 1. ]] r = 3.4347826087 [[ 6. 8. 3. 6. 1. 0. 0. 0. ] [ 0. -3. -2.5 -6. -1.5 1. 0. 0. ] [ 0. 0. -2.56 -3.33 -2. 1.22 1. 0. ] [ 0. 0. 0. 0.22 -4.87 2.09 3.43 1. ]] ```python for k in range(N-1,-1,-1): for i in range(k-1, -1, -1): r = -A[i,k] / A[k,k] print('r = ', r) for j in range(0, 2*N): A[i,j] = A[i,j] + r * A[k,j] #lines below are not used during back substitution np.set_printoptions(precision=2) print(A, end = '\n\n') ``` r = 15.3333333333 [[ 6. 8. 3. 6. 1. 0. 0. 0. ] [ 0. -3. -2.5 -6. -1.5 1. 0. 0. ] [ 0. 0. -2.56 0. -76.67 33.22 53.67 15.33] [ 0. 0. 0. 0.22 -4.87 2.09 3.43 1. ]] r = 27.6 [[ 6. 8. 3. 6. 1. 0. 0. 0. ] [ 0. -3. -2.5 0. -135.9 58.6 94.8 27.6 ] [ 0. 0. -2.56 0. -76.67 33.22 53.67 15.33] [ 0. 0. 0. 0.22 -4.87 2.09 3.43 1. ]] r = -27.6 [[ 6. 8. 3. 0. 135.4 -57.6 -94.8 -27.6 ] [ 0. -3. -2.5 0. -135.9 58.6 94.8 27.6 ] [ 0. 0. -2.56 0. -76.67 33.22 53.67 15.33] [ 0. 0. 0. 0.22 -4.87 2.09 3.43 1. ]] r = -0.978260869565 [[ 6. 8. 3. 0. 135.4 -57.6 -94.8 -27.6 ] [ 0. -3. 0. 0. -60.9 26.1 42.3 12.6 ] [ 0. 0. -2.56 0. -76.67 33.22 53.67 15.33] [ 0. 0. 0. 0.22 -4.87 2.09 3.43 1. ]] r = 1.17391304348 [[ 6. 8. 0. 0. 45.4 -18.6 -31.8 -9.6 ] [ 0. -3. 0. 0. -60.9 26.1 42.3 12.6 ] [ 0. 0. -2.56 0. -76.67 33.22 53.67 15.33] [ 0. 0. 0. 0.22 -4.87 2.09 3.43 1. ]] r = 2.66666666667 [[ 6. 0. 0. 0. -117. 51. 81. 24. ] [ 0. -3. 0. 0. -60.9 26.1 42.3 12.6 ] [ 0. 0. -2.56 0. -76.67 33.22 53.67 15.33] [ 0. 0. 0. 0.22 -4.87 2.09 3.43 1. ]] ```python for i in range(N): for j in range(N,2*N): A[i,j] = A[i,j]/A[i,i] A[i,i] = 1 print(A, end = '\n\n') ``` [[ 1. 0. 0. 0. -19.5 8.5 13.5 4. ] [ 0. -3. 0. 0. -60.9 26.1 42.3 12.6 ] [ 0. 0. -2.56 0. -76.67 33.22 53.67 15.33] [ 0. 0. 0. 0.22 -4.87 2.09 3.43 1. ]] [[ 1. 0. 0. 0. -19.5 8.5 13.5 4. ] [ 0. 1. 0. 0. 20.3 -8.7 -14.1 -4.2 ] [ 0. 0. -2.56 0. -76.67 33.22 53.67 15.33] [ 0. 0. 0. 0.22 -4.87 2.09 3.43 1. ]] [[ 1. 0. 0. 0. -19.5 8.5 13.5 4. ] [ 0. 1. 0. 0. 20.3 -8.7 -14.1 -4.2 ] [ 0. 0. 1. 0. 30. -13. -21. -6. ] [ 0. 0. 0. 0.22 -4.87 2.09 3.43 1. ]] [[ 1. 0. 0. 0. -19.5 8.5 13.5 4. ] [ 0. 1. 0. 0. 20.3 -8.7 -14.1 -4.2] [ 0. 0. 1. 0. 30. -13. -21. -6. ] [ 0. 0. 0. 1. -22.4 9.6 15.8 4.6]] ```python a_inv = A[:,N:] a_inv ``` array([[-19.5, 8.5, 13.5, 4. ], [ 20.3, -8.7, -14.1, -4.2], [ 30. , -13. , -21. , -6. 
], [-22.4, 9.6, 15.8, 4.6]]) ```python a.dot(a_inv) ``` array([[ 1.00e+00, 1.24e-14, 2.13e-14, -7.11e-15], [ 0.00e+00, 1.00e+00, 1.42e-14, -1.42e-14], [ -1.78e-14, 1.78e-14, 1.00e+00, 1.78e-15], [ -1.42e-14, 0.00e+00, 2.13e-14, 1.00e+00]]) ```python np.set_printoptions(precision=2, suppress=True) a.dot(a_inv) ``` array([[ 1., 0., 0., -0.], [ 0., 1., 0., -0.], [-0., 0., 1., 0.], [-0., 0., 0., 1.]]) ```python ## Making function out of the tested code def GJ_inv(a): N = a.shape[0] I = np.eye(N,N) A = np.hstack([a,I]) for k in range(N-1): for i in range(k+1, N): r = -A[i,k] / A[k,k] for j in range(k+1, N+k+1): #optimize: 0, 2N to k+1, N+k+1 A[i,j] = A[i,j] + r * A[k,j] for k in range(N-1,-1,-1): for i in range(k-1, -1, -1): r = -A[i,k] / A[k,k] for j in range(k+1, 2*N): #optimize by change 0 to k+1 A[i,j] = A[i,j] + r * A[k,j] for i in range(N): for j in range(N,2*N): A[i,j] = A[i,j]/A[i,i] return A[:,N:] ``` ```python np.random.seed(3) a = np.ceil(10 * np.random.random([4,4])) fn_cal = GJ_inv(a) fn_cal ``` array([[-19.5, 8.5, 13.5, 4. ], [ 20.3, -8.7, -14.1, -4.2], [ 30. , -13. , -21. , -6. ], [-22.4, 9.6, 15.8, 4.6]]) ```python np.set_printoptions(precision=2, suppress=True) a.dot(fn_cal) ``` array([[ 1., 0., 0., -0.], [ 0., 1., 0., -0.], [-0., 0., 1., 0.], [-0., 0., 0., 1.]]) ```python np.set_printoptions(precision=2, suppress=False) a.dot(fn_cal) ``` array([[ 1.00e+00, 1.24e-14, 2.13e-14, -7.11e-15], [ 0.00e+00, 1.00e+00, 1.42e-14, -1.42e-14], [ -1.78e-14, 1.78e-14, 1.00e+00, 1.78e-15], [ -1.42e-14, 0.00e+00, 2.13e-14, 1.00e+00]]) ```python def pivoting(B): A = B.copy() N = A.shape[0] p_ans = np.eye(N,N) #pivoting for i in range(N): maxidx = np.abs(A[i:,i]).argmax() + i if (i != maxidx): p_ans[[i, maxidx]] = p_ans[[maxidx, i]] A[[i, maxidx]] = A[[maxidx, i]] return A, p_ans def GJ_inv_pp(a): N = a.shape[0] I = np.eye(N,N) b, p = pivoting(a) A = np.hstack([b.copy(),I]) for k in range(N-1): for i in range(k+1, N): r = -A[i,k] / A[k,k] for j in range(k+1, N+k+1): #optimize: 0, 2N to k+1, N+k+1 A[i,j] = A[i,j] + r * A[k,j] for k in range(N-1,-1,-1): for i in range(k-1, -1, -1): r = -A[i,k] / A[k,k] for j in range(k+1, 2*N): #optimize by change 0 to k+1 A[i,j] = A[i,j] + r * A[k,j] for i in range(N): for j in range(N,2*N): A[i,j] = A[i,j]/A[i,i] return A[:,N:].dot(p) ``` ```python a3.shape ``` (15, 15) ```python a3[range(15),range(15)] = 0 a3.astype(int) ``` array([[ 0, 4, 9, 1, 2, 6, 6, 5, 4, 7, 5, 8, 6, 6, 7], [10, 0, 5, 9, 9, 1, 8, 9, 8, 8, 6, 2, 10, 5, 3], [ 8, 10, 0, 7, 6, 8, 10, 4, 3, 5, 8, 8, 5, 10, 5], [ 4, 8, 1, 0, 8, 8, 3, 2, 5, 4, 8, 2, 4, 9, 7], [ 9, 10, 9, 2, 0, 5, 5, 8, 9, 7, 2, 8, 1, 2, 10], [ 5, 2, 7, 2, 7, 0, 1, 10, 1, 6, 4, 7, 9, 7, 6], [ 2, 8, 3, 5, 1, 4, 0, 6, 6, 4, 9, 3, 2, 6, 3], [ 8, 2, 8, 7, 1, 6, 7, 0, 8, 8, 5, 1, 9, 1, 6], [ 6, 2, 1, 6, 5, 4, 1, 7, 0, 1, 3, 4, 2, 1, 8], [ 5, 3, 6, 3, 9, 5, 7, 10, 4, 0, 10, 6, 10, 10, 8], [ 1, 7, 3, 2, 4, 3, 8, 10, 1, 3, 0, 2, 10, 3, 4], [ 1, 3, 9, 10, 7, 3, 1, 9, 2, 3, 6, 0, 9, 6, 5], [10, 10, 3, 1, 7, 3, 1, 1, 5, 2, 3, 8, 0, 2, 3], [10, 1, 7, 7, 7, 6, 8, 9, 7, 2, 8, 3, 9, 0, 1], [10, 5, 10, 1, 2, 8, 1, 5, 9, 5, 2, 9, 2, 3, 0]]) ```python a3_inv = GJ_inv_pp(a3) a3_inv ``` array([[-0.14, -0.07, 0.07, 0.03, 0.08, 0.12, -0.06, 0.02, -0.06, -0.02, -0.06, 0.01, -0.06, 0.07, -0. ], [-0.03, -0.05, 0.02, 0. , 0.03, 0.02, 0.02, 0. 
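A quick numerical check saves squinting at the printed products: compare `a.dot(a_inv)` against the identity with a tolerance. A small helper sketch (`check_inverse` is a name introduced here):

```python
import numpy as np

def check_inverse(a, a_inv, tol=1e-10):
    """Return (is_identity, max_abs_residual) for a.dot(a_inv) vs. the identity."""
    residual = a.dot(a_inv) - np.eye(a.shape[0])
    return np.allclose(residual, 0.0, atol=tol), np.abs(residual).max()
```

For the 15x15 example, `check_inverse(a3, GJ_inv_pp(a3))` should report `True` with a maximum residual on the order of 1e-14.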
, -0.05, -0.03, 0.03, 0.03, 0.05, 0.02, -0.03], [-0.01, -0.07, 0.05, 0.02, 0.11, 0.05, -0.1 , -0.03, -0.12, -0.04, -0.08, 0.11, -0.03, 0.09, -0.03], [ 0.02, 0.05, 0.01, -0.08, -0.05, -0.09, 0.03, 0.02, 0.07, 0.02, 0.01, 0.04, 0.04, -0.08, 0.03], [ 0.08, 0.06, -0.04, 0.05, -0.03, -0.05, -0.05, -0.06, 0.02, -0.03, -0. , 0.04, 0.07, 0.01, -0.02], [ 0.02, -0.03, 0. , 0.06, -0.04, -0.03, -0.02, 0. , 0.07, -0.04, 0.03, -0. , -0.06, 0.02, 0.07], [ 0.02, 0.01, 0.07, 0. , 0.08, -0.02, -0.08, -0.05, -0.07, -0.01, -0.03, 0.02, -0.03, 0.07, -0.06], [-0.02, 0. , -0. , 0.02, 0.04, 0.03, 0.02, -0.07, 0.01, -0.03, 0.01, -0.01, -0.07, 0.05, 0.01], [ 0.06, 0.15, -0.13, -0.06, -0.09, -0.18, 0.1 , 0.04, 0.08, 0.12, 0.08, -0.07, 0.08, -0.17, 0.08], [ 0.05, 0.01, 0.03, 0.07, 0.02, 0.09, -0.01, -0.03, -0.02, -0.15, -0.03, -0.01, -0.04, 0.06, -0.04], [ 0.01, -0.07, 0.02, 0.03, 0.03, 0.09, 0.04, -0.01, -0.06, -0.04, -0.08, -0.02, -0.02, 0.14, -0.09], [ 0.11, 0.05, -0.04, -0.11, -0.12, -0.06, 0.07, 0.02, 0.1 , 0.06, 0.04, -0.08, 0.1 , -0.09, 0.04], [-0.02, -0.01, -0.06, -0.06, -0.12, 0. , 0.08, 0.12, 0.05, 0.07, 0.11, -0.07, 0.05, -0.08, 0.04], [-0.08, 0.02, 0.06, 0.01, 0.04, 0.01, -0.04, -0.01, -0.06, 0.06, -0.03, 0.04, -0.07, -0.08, 0.05], [ 0.01, 0.01, -0.04, 0. , 0.02, -0.03, -0. , 0.04, 0.06, 0.05, 0. , -0.02, 0.01, -0.06, -0.03]]) ```python a3.dot(a3_inv) ``` array([[ 1.00e+00, -1.61e-15, -4.27e-15, 6.41e-16, 4.16e-15, 1.31e-15, -2.65e-15, 5.75e-15, 1.98e-15, 6.76e-15, -3.14e-16, 6.28e-16, -1.99e-15, -6.66e-16, -1.60e-15], [ 2.19e-16, 1.00e+00, 4.86e-17, 1.93e-17, 1.11e-16, 1.77e-16, -6.16e-17, -1.32e-16, -6.94e-17, -2.36e-16, 1.75e-17, -1.08e-16, 1.32e-16, -1.67e-16, -1.60e-16], [ -1.08e-16, -2.56e-16, 1.00e+00, 1.37e-16, 6.59e-16, 2.26e-16, -4.25e-17, -1.73e-16, -2.64e-16, -1.53e-16, -1.02e-16, -3.12e-17, -2.01e-16, 3.33e-16, -2.01e-16], [ 1.77e-16, 1.34e-16, 3.96e-16, 1.00e+00, 1.60e-16, 1.21e-16, 1.15e-16, -1.87e-16, -1.39e-17, -6.94e-17, -1.17e-16, -1.49e-16, -1.18e-16, -2.78e-16, 3.47e-17], [ -1.39e-15, -8.31e-16, 2.93e-15, 2.08e-16, 1.00e+00, 4.23e-15, -2.28e-15, -1.57e-15, -1.30e-15, -7.49e-16, -8.90e-16, 2.71e-16, -1.30e-15, 3.89e-15, -1.51e-15], [ -4.23e-16, 8.52e-16, 1.79e-15, -9.52e-16, 1.71e-15, 1.00e+00, -1.39e-15, -1.60e-15, -9.16e-16, -1.94e-16, -1.16e-15, -1.32e-16, -3.33e-16, 9.44e-16, -3.75e-16], [ -1.98e-16, 1.78e-15, -1.60e-16, -2.10e-15, -2.21e-15, -1.54e-15, 1.00e+00, 4.86e-17, 1.12e-15, 9.02e-16, -6.38e-16, -6.90e-16, 3.68e-16, -1.67e-15, 1.03e-15], [ -4.93e-16, -3.27e-15, 1.80e-16, 3.33e-15, 3.54e-15, 2.74e-15, -2.60e-17, 1.00e+00, -2.69e-15, -1.75e-15, 8.85e-16, 1.91e-15, -7.36e-16, 3.28e-15, -2.76e-15], [ 2.25e-15, 2.78e-15, 3.11e-15, 7.42e-16, -6.66e-16, -3.08e-15, -1.46e-15, -5.61e-15, 1.00e+00, -4.55e-15, -3.09e-15, -1.94e-16, 8.74e-16, -1.61e-15, 2.08e-15], [ -1.39e-16, -2.29e-16, 5.00e-16, 1.34e-16, 0.00e+00, 2.22e-16, 3.26e-16, -4.44e-16, -6.66e-16, 1.00e+00, -1.36e-16, -8.33e-17, -3.61e-16, -1.11e-16, 1.11e-16], [ -9.85e-16, -1.84e-16, -1.94e-16, -1.16e-15, 5.55e-17, 1.01e-15, 1.08e-16, 2.78e-17, 5.55e-17, 5.55e-16, 1.00e+00, -8.47e-16, 3.19e-16, 3.89e-16, 3.19e-16], [ -3.30e-16, -8.11e-16, 4.86e-17, 4.28e-16, 7.63e-17, -7.98e-17, 2.91e-16, -2.57e-16, -8.19e-16, -2.64e-16, -3.10e-16, 1.00e+00, 2.08e-17, 3.33e-16, 2.15e-16], [ -2.92e-15, -2.37e-15, 2.85e-15, -2.29e-15, 4.85e-15, 3.52e-15, -2.52e-15, -1.88e-15, -9.58e-16, 2.15e-15, 6.87e-16, 1.14e-16, 1.00e+00, 3.77e-15, -1.33e-15], [ 2.12e-16, 4.60e-17, 1.69e-15, 4.57e-16, -1.66e-15, -1.12e-15, -1.92e-16, -2.53e-15, -2.21e-15, 
-2.87e-15, -1.64e-15, 7.18e-16, 1.24e-15, 1.00e+00, 1.33e-15], [ 0.00e+00, -4.72e-16, 6.25e-16, 0.00e+00, 2.22e-16, 1.39e-16, 0.00e+00, -3.33e-16, -4.44e-16, -2.22e-16, 0.00e+00, 1.11e-16, -1.80e-16, 1.67e-16, 1.00e+00]]) ```python np.set_printoptions(suppress=True, precision=2) result = a3.dot(a3_inv) result ``` array([[ 1., -0., -0., 0., 0., 0., -0., 0., 0., 0., -0., 0., -0., -0., -0.], [ 0., 1., 0., 0., 0., 0., -0., -0., -0., -0., 0., -0., 0., -0., -0.], [-0., -0., 1., 0., 0., 0., -0., -0., -0., -0., -0., -0., -0., 0., -0.], [ 0., 0., 0., 1., 0., 0., 0., -0., -0., -0., -0., -0., -0., -0., 0.], [-0., -0., 0., 0., 1., 0., -0., -0., -0., -0., -0., 0., -0., 0., -0.], [-0., 0., 0., -0., 0., 1., -0., -0., -0., -0., -0., -0., -0., 0., -0.], [-0., 0., -0., -0., -0., -0., 1., 0., 0., 0., -0., -0., 0., -0., 0.], [-0., -0., 0., 0., 0., 0., -0., 1., -0., -0., 0., 0., -0., 0., -0.], [ 0., 0., 0., 0., -0., -0., -0., -0., 1., -0., -0., -0., 0., -0., 0.], [-0., -0., 0., 0., 0., 0., 0., -0., -0., 1., -0., -0., -0., -0., 0.], [-0., -0., -0., -0., 0., 0., 0., 0., 0., 0., 1., -0., 0., 0., 0.], [-0., -0., 0., 0., 0., -0., 0., -0., -0., -0., -0., 1., 0., 0., 0.], [-0., -0., 0., -0., 0., 0., -0., -0., -0., 0., 0., 0., 1., 0., -0.], [ 0., 0., 0., 0., -0., -0., -0., -0., -0., -0., -0., 0., 0., 1., 0.], [ 0., -0., 0., 0., 0., 0., 0., -0., -0., -0., 0., 0., -0., 0., 1.]]) ```python result.round().astype(int) ``` array([[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1]]) ## without pivoting, error occur ```python GJ_inv(a3) ``` /home/me/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:8: RuntimeWarning: divide by zero encountered in double_scalars /home/me/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:8: RuntimeWarning: invalid value encountered in double_scalars /home/me/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:15: RuntimeWarning: invalid value encountered in double_scalars from ipykernel import kernelapp as app array([[ nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan], [ nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan], [ nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan], [ nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan], [ nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan], [ nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan], [ nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan], [ nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan], [ nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan], [ nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan], [ nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan], [ nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 
nan, nan], [ nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan], [ nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan], [ nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan]]) ```python a3_inv_scipy = sp.linalg.inv(a3) ``` ```python diff = a3_inv - a3_inv_scipy diff ``` array([[-0., 0., 0., 0., -0., 0., 0., -0., -0., -0., -0., 0., 0., 0., 0.], [-0., -0., 0., -0., 0., 0., -0., -0., -0., 0., 0., -0., 0., 0., -0.], [-0., -0., 0., 0., 0., 0., -0., 0., -0., -0., 0., 0., -0., 0., -0.], [ 0., -0., -0., 0., 0., -0., 0., 0., 0., 0., 0., -0., -0., -0., -0.], [-0., -0., -0., -0., 0., 0., -0., 0., 0., 0., 0., 0., -0., -0., 0.], [ 0., 0., -0., 0., -0., -0., 0., -0., 0., -0., -0., 0., 0., -0., 0.], [-0., -0., -0., 0., 0., 0., -0., 0., -0., 0., 0., 0., -0., 0., -0.], [ 0., 0., 0., -0., -0., -0., 0., -0., 0., -0., -0., -0., 0., -0., 0.], [-0., -0., -0., -0., 0., 0., 0., 0., 0., 0., 0., -0., -0., -0., -0.], [ 0., 0., -0., -0., -0., -0., -0., 0., 0., 0., -0., 0., 0., -0., 0.], [-0., 0., 0., -0., -0., -0., -0., -0., -0., -0., -0., 0., 0., -0., 0.], [ 0., 0., -0., -0., 0., -0., -0., 0., 0., 0., 0., -0., -0., -0., -0.], [-0., -0., -0., -0., 0., -0., 0., -0., 0., 0., 0., -0., 0., -0., -0.], [ 0., -0., 0., 0., -0., 0., 0., 0., -0., -0., 0., -0., 0., 0., -0.], [ 0., -0., 0., 0., 0., 0., -0., -0., 0., -0., -0., 0., -0., 0., -0.]]) ```python np.linalg.norm(diff) ``` 3.8769658820079069e-15 ```python Sum = 0 for i in diff: for j in i: Sum = j**2 + Sum Sum**0.5 ``` 3.8769658820079069e-15 # Cholesky Factorization ```python np.random.seed(4) a = np.ceil(10*np.random.random([4,4])).astype(int) b = np.tril(a) b ``` array([[10, 0, 0, 0], [ 7, 3, 0, 0], [ 3, 5, 8, 0], [ 9, 10, 2, 6]]) ```python A = b.dot(b.T) A ``` array([[100, 70, 30, 90], [ 70, 58, 36, 93], [ 30, 36, 98, 93], [ 90, 93, 93, 221]]) ```python sp.linalg.cholesky(A) ``` array([[ 10., 7., 3., 9.], [ 0., 3., 5., 10.], [ 0., 0., 8., 2.], [ 0., 0., 0., 6.]]) ```python sp.linalg.cholesky(A, lower=True) ``` array([[ 10., 0., 0., 0.], [ 7., 3., 0., 0.], [ 3., 5., 8., 0.], [ 9., 10., 2., 6.]]) ```python def cholesky_user(A): N = A.shape[0] L = np.zeros([N,N]) for i in range(N): for j in range(i+1): if i == j: Sum = 0 for k in range(j): Sum = Sum + L[j,k]**2 try: L[i,j] = (A[i,i] - Sum)**0.5 except Exception as e: print(e) return None else: Sum = 0 for k in range(j): Sum = Sum + L[i,k]*L[j,k] L[i,j] = (A[i,j] - Sum) / L[j,j] return L ``` ```python cholesky_user(A) ``` array([[ 10., 0., 0., 0.], [ 7., 3., 0., 0.], [ 3., 5., 8., 0.], [ 9., 10., 2., 6.]]) ```python sp.linalg.cholesky(A, lower=True) ``` array([[ 10., 0., 0., 0.], [ 7., 3., 0., 0.], [ 3., 5., 8., 0.], [ 9., 10., 2., 6.]]) ```python _ - __ ``` array([[ 0., 0., 0., 0.], [ 0., 0., 0., 0.], [ 0., 0., 0., 0.], [ 0., 0., 0., 0.]]) ```python %%timeit -n 5 cholesky_user(A) ``` 5 loops, best of 3: 39.2 µs per loop ```python %%timeit -n 5 sp.linalg.cholesky(A, lower=True) ``` 5 loops, best of 3: 14.8 µs per loop ## with zero on one of the diagonal terms ```python np.random.seed(4) a = np.ceil(10*np.random.random([4,4])).astype(int) b = np.tril(a) b[2,2] = 0 b ``` array([[10, 0, 0, 0], [ 7, 3, 0, 0], [ 3, 5, 0, 0], [ 9, 10, 2, 6]]) ```python A = b.dot(b.T) A ``` array([[100, 70, 30, 90], [ 70, 58, 36, 93], [ 30, 36, 34, 77], [ 90, 93, 77, 221]]) ```python try: sp.linalg.cholesky(A) except Exception as e: print(e) ``` 3-th leading minor not positive definite ```python cholesky_user(A) ``` 
/home/me/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:19: RuntimeWarning: invalid value encountered in double_scalars array([[ 10., 0., 0., 0.], [ 7., 3., 0., 0.], [ 3., 5., 0., 0.], [ 9., 10., nan, nan]]) ## if not symmetric ```python np.random.seed(4) A = np.ceil(10*np.random.random([4,4])).astype(int) A ``` array([[10, 6, 10, 8], [ 7, 3, 10, 1], [ 3, 5, 8, 2], [ 9, 10, 2, 6]]) ```python try: sp.linalg.cholesky(A) except Exception as e: print(e) ``` 2-th leading minor not positive definite ```python cholesky_user(A) ``` /home/me/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:11: RuntimeWarning: invalid value encountered in double_scalars # This is added back by InteractiveShellApp.init_path() array([[ 3.16, 0. , 0. , 0. ], [ 2.21, nan, 0. , 0. ], [ 0.95, nan, nan, 0. ], [ 2.85, nan, nan, nan]]) # Compare Cholesky with LU ```python np.random.seed(42) a = np.ceil(10*np.random.random([400,400])).astype(int) b = np.tril(a) A = b.dot(b.T) ``` ```python %%timeit -n 5 p1,l1,u1 = sp.linalg.lu(A) ``` 5 loops, best of 3: 2.84 ms per loop ```python %%timeit -n 5 ans1 = sp.linalg.cholesky(A, lower = True) ``` 5 loops, best of 3: 836 µs per loop ```python %%timeit -n 5 ans2 = np.linalg.cholesky(A) ``` 5 loops, best of 3: 1.59 ms per loop ```python p,l,u = sp.linalg.lu(A) ``` ```python p ``` array([[ 0., 0., 0., ..., 0., 0., 1.], [ 0., 0., 0., ..., 0., 0., 0.], [ 0., 0., 0., ..., 0., 0., 0.], ..., [ 0., 0., 0., ..., 0., 0., 0.], [ 0., 0., 0., ..., 0., 0., 0.], [ 0., 0., 0., ..., 0., 0., 0.]]) ```python l ``` array([[ 1. , 0. , 0. , ..., 0. , 0. , 0. ], [ 0.1 , 1. , 0. , ..., 0. , 0. , 0. ], [ 0.2 , -0. , 1. , ..., 0. , 0. , 0. ], ..., [ 0.4 , -0. , 0.41, ..., 1. , 0. , 0. ], [ 0.4 , -0.11, 0.37, ..., 0.58, 1. , 0. ], [ 0.4 , -0.21, -0.12, ..., -0.03, -0.25, 1. ]]) ```python u ``` array([[ 40. , 70. , 126. , ..., 718. , 846. , 559. ], [ 0. , 95. , 57.4, ..., 4376.2, 4369.4, 3672.1], [ 0. , 0. , 52.8, ..., 167.4, 178.8, 81.2], ..., [ 0. , 0. , 0. , ..., -0. , -0. , -0. ], [ 0. , 0. , 0. , ..., 0. , 0. , 0. ], [ 0. , 0. , 0. , ..., 0. , 0. , 0. 
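Timing the factorization is only half the picture; the factor is normally reused to solve systems. Since an SPD matrix factors as $A = LL^T$, solving $Ax = b$ reduces to two triangular solves. The sketch below uses the standard `scipy.linalg` routines `cho_factor`, `cho_solve`, `cholesky`, and `solve_triangular`; the function names `chol_solve` and `chol_solve_manual` are introduced here for illustration (in this notebook `scipy` is available as `sp`).

```python
import numpy as np
from scipy import linalg   # accessed as sp.linalg elsewhere in this notebook

def chol_solve(A, b):
    """Solve A x = b for symmetric positive definite A via Cholesky."""
    c, low = linalg.cho_factor(A, lower=True)            # A = L L^T
    return linalg.cho_solve((c, low), b)

def chol_solve_manual(A, b):
    """Same idea with the two triangular solves spelled out."""
    L = linalg.cholesky(A, lower=True)
    y = linalg.solve_triangular(L, b, lower=True)        # forward: L y = b
    return linalg.solve_triangular(L.T, y, lower=False)  # backward: L^T x = y
```

Both should agree with `sp.linalg.solve(A, b)` for the SPD matrices built in this section, and both fail (as expected) when A is not positive definite.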
]]) ```python np.linalg.cholesky(A) ``` array([[ 4., 0., 0., ..., 0., 0., 0.], [ 2., 10., 0., ..., 0., 0., 0.], [ 8., 2., 6., ..., 0., 0., 0.], ..., [ 3., 2., 5., ..., 5., 0., 0.], [ 6., 3., 10., ..., 4., 7., 0.], [ 9., 1., 4., ..., 3., 8., 5.]]) ```python sp.linalg.cholesky(A, lower = True) ``` array([[ 4., 0., 0., ..., 0., 0., 0.], [ 2., 10., 0., ..., 0., 0., 0.], [ 8., 2., 6., ..., 0., 0., 0.], ..., [ 3., 2., 5., ..., 5., 0., 0.], [ 6., 3., 10., ..., 4., 7., 0.], [ 9., 1., 4., ..., 3., 8., 5.]]) # Determinant after Cholesky Decomposition ```python np.random.seed(42) a = np.ceil(10*np.random.random([5,5])).astype(int) b = np.tril(a) A = b.dot(b.T) A ``` array([[ 16, 8, 4, 8, 28], [ 8, 5, 12, 8, 16], [ 4, 12, 182, 96, 54], [ 8, 8, 96, 81, 60], [ 28, 16, 54, 60, 103]]) ```python sp.linalg.det(A) ``` 809999.9999999998 ```python L = cholesky_user(A) ``` ```python det_ans = np.product(L[range(5),range(5)]) **2 #**2 to account for L and L^T det_ans ``` 810000.0 # Iterative method: Jacobi # $\mathbf{x}^{(k+1)} = D^{-1} (\mathbf{b} - R \mathbf{x}^{(k)})$ ```python def jacobi(A,b): """This function use Jacobi method to solve Ax = b A is decomposed into D + R where D is the diagonal term and R is the rest D^-1 is 1/a_ii for all diagonal terms Function input: A = nxn numpy array, b = nx1 numpy array This function return x This function is not the most efficient way of Jacobi method It is just to show the algorithm and how it works""" N = A.shape[0] D_inv = np.zeros((N,N)) D_inv[range(N),range(N)] = 1/A[range(N),range(N)] R = A.copy() R[range(N),range(N)] = 0 x = np.ones((N,1)) for i in range(2000): x_old = x x = D_inv.dot(b-R.dot(x)) diff = np.abs(x_old-x) norm_av = sp.linalg.norm(diff) / N**2 if norm_av < 1e-14: break if i%5 == 0: print(x,end = '\n\n') print('#iteration = ', i, 'diff_av = ', norm_av) return x ``` ```python np.random.seed(100) A = np.ceil(100*np.random.random((5,5))) A[range(5),range(5)] = A[range(5),range(5)] + 200 A ``` array([[ 255., 28., 43., 85., 1.], [ 13., 268., 83., 14., 58.], [ 90., 21., 219., 11., 22.], [ 98., 82., 18., 282., 28.], [ 44., 95., 82., 34., 218.]]) ```python np.random.seed(1) b = np.ceil(100*np.random.random((5,1))) b ``` array([[ 42.], [ 73.], [ 1.], [ 31.], [ 15.]]) ```python jacobi_ans = jacobi(A,b) ``` [[-0.45] [-0.35] [-0.65] [-0.69] [-1.1 ]] [[ 0.27] [ 0.43] [ 0.04] [ 0.13] [ 0.14]] [[ 0.13] [ 0.28] [-0.1 ] [-0.05] [-0.1 ]] [[ 0.16] [ 0.31] [-0.07] [-0.01] [-0.05]] [[ 0.15] [ 0.3 ] [-0.08] [-0.02] [-0.06]] [[ 0.15] [ 0.3 ] [-0.08] [-0.02] [-0.06]] [[ 0.15] [ 0.3 ] [-0.08] [-0.02] [-0.06]] [[ 0.15] [ 0.3 ] [-0.08] [-0.02] [-0.06]] [[ 0.15] [ 0.3 ] [-0.08] [-0.02] [-0.06]] [[ 0.15] [ 0.3 ] [-0.08] [-0.02] [-0.06]] [[ 0.15] [ 0.3 ] [-0.08] [-0.02] [-0.06]] [[ 0.15] [ 0.3 ] [-0.08] [-0.02] [-0.06]] [[ 0.15] [ 0.3 ] [-0.08] [-0.02] [-0.06]] [[ 0.15] [ 0.3 ] [-0.08] [-0.02] [-0.06]] [[ 0.15] [ 0.3 ] [-0.08] [-0.02] [-0.06]] [[ 0.15] [ 0.3 ] [-0.08] [-0.02] [-0.06]] [[ 0.15] [ 0.3 ] [-0.08] [-0.02] [-0.06]] [[ 0.15] [ 0.3 ] [-0.08] [-0.02] [-0.06]] [[ 0.15] [ 0.3 ] [-0.08] [-0.02] [-0.06]] [[ 0.15] [ 0.3 ] [-0.08] [-0.02] [-0.06]] #iteration = 96 diff_av = 8.05882946335e-15 ```python np.set_printoptions(precision=8) jacobi_ans ``` array([[ 0.15168993], [ 0.30401938], [-0.07977452], [-0.02002359], [-0.06116461]]) ```python A.dot(jacobi_ans) ``` array([[ 42.], [ 73.], [ 1.], [ 31.], [ 15.]]) ```python b ``` array([[ 42.], [ 73.], [ 1.], [ 31.], [ 15.]]) Jacobi method converges and give the correct answer ```python np.random.seed(42) a = 
np.ceil(10*np.random.random([50,50])).astype(int) b = np.tril(a) A = b.dot(b.T) A[range(50),range(50)] = A[range(50),range(50)] + 200000 A ``` array([[200016, 40, 4, ..., 28, 16, 12], [ 40, 200164, 66, ..., 150, 48, 110], [ 4, 66, 200066, ..., 105, 31, 97], ..., [ 28, 150, 105, ..., 201804, 1352, 1635], [ 16, 48, 31, ..., 1352, 201658, 1525], [ 12, 110, 97, ..., 1635, 1525, 202131]]) ```python np.random.seed(1) b = np.ceil(100*np.random.random((50,1))) b ``` array([[ 42.], [ 73.], [ 1.], [ 31.], [ 15.], [ 10.], [ 19.], [ 35.], [ 40.], [ 54.], [ 42.], [ 69.], [ 21.], [ 88.], [ 3.], [ 68.], [ 42.], [ 56.], [ 15.], [ 20.], [ 81.], [ 97.], [ 32.], [ 70.], [ 88.], [ 90.], [ 9.], [ 4.], [ 17.], [ 88.], [ 10.], [ 43.], [ 96.], [ 54.], [ 70.], [ 32.], [ 69.], [ 84.], [ 2.], [ 76.], [ 99.], [ 75.], [ 29.], [ 79.], [ 11.], [ 45.], [ 91.], [ 30.], [ 29.], [ 14.]]) ```python try: ans = jacobi(A,b) except Exception as e: print(e) ``` [[-0.00488961] [-0.02266641] [-0.01536993] [-0.02573724] [-0.03386385] [-0.03582099] [-0.05650403] [-0.04919678] [-0.06817672] [-0.05730834] [-0.09476197] [-0.08457448] [-0.05980896] [-0.08441507] [-0.08665247] [-0.1007044 ] [-0.12825209] [-0.08796209] [-0.10641925] [-0.10555284] [-0.14321314] [-0.1484513 ] [-0.14534297] [-0.15250588] [-0.13321586] [-0.11809788] [-0.1561532 ] [-0.17393875] [-0.14555345] [-0.1791533 ] [-0.15114521] [-0.17394804] [-0.15181981] [-0.17476239] [-0.15668771] [-0.17725182] [-0.17085036] [-0.20487094] [-0.17415022] [-0.17865119] [-0.16724991] [-0.18681123] [-0.20059019] [-0.17835191] [-0.18949224] [-0.19752603] [-0.1859551 ] [-0.19036788] [-0.16870642] [-0.19760452]] [[ 0.00020929] [ 0.00036167] [ 0.00000274] [ 0.00015129] [ 0.00007027] [ 0.0000449 ] [ 0.00008733] [ 0.00016804] [ 0.00019041] [ 0.00026192] [ 0.00019699] [ 0.00033318] [ 0.00009684] [ 0.0004282 ] [ 0.00000324] [ 0.00032595] [ 0.00019243] [ 0.00026781] [ 0.00006072] [ 0.00008564] [ 0.00038468] [ 0.00046419] [ 0.00013962] [ 0.00032866] [ 0.00042177] [ 0.00043401] [ 0.00002379] [-0.0000037 ] [ 0.00006535] [ 0.00041532] [ 0.00002969] [ 0.00019083] [ 0.00045935] [ 0.00024614] [ 0.00032852] [ 0.00013633] [ 0.00032183] [ 0.00039151] [-0.00001318] [ 0.00035565] [ 0.00047229] [ 0.00034926] [ 0.0001179 ] [ 0.00037057] [ 0.00002918] [ 0.00019828] [ 0.00042952] [ 0.00012443] [ 0.00012272] [ 0.00004338]] [[ 0.0002089 ] [ 0.00035989] [ 0.00000154] [ 0.00014928] [ 0.00006754] [ 0.00004204] [ 0.0000827 ] [ 0.00016408] [ 0.00018488] [ 0.0002573 ] [ 0.00018909] [ 0.00032618] [ 0.00009174] [ 0.00042104] [-0.00000416] [ 0.00031742] [ 0.00018141] [ 0.00026026] [ 0.00005141] [ 0.00007658] [ 0.00037225] [ 0.00045145] [ 0.0001269 ] [ 0.00031539] [ 0.0004098 ] [ 0.00042343] [ 0.00000999] [-0.0000191 ] [ 0.00005211] [ 0.00039917] [ 0.00001604] [ 0.0001754 ] [ 0.00044532] [ 0.00022996] [ 0.00031425] [ 0.00011996] [ 0.00030613] [ 0.000373 ] [-0.00002933] [ 0.00033931] [ 0.00045666] [ 0.00033207] [ 0.00009951] [ 0.00035397] [ 0.00001186] [ 0.00018035] [ 0.00041237] [ 0.0001069 ] [ 0.00010686] [ 0.00002511]] #iteration = 15 diff_av = 2.16372742506e-15 Jacobi method work fine for the above case ```python np.set_printoptions(suppress=False) sp.linalg.norm(A.dot(ans) - b)/sp.linalg.norm(b) * 100 ``` 4.259427776293408e-08 ## Check convergence condition ### Sufficient but not necessary condition $|a_{ii}| \geq \sum_{j\neq i} |a_{ij}| \quad\text{for all } i$ ### Standard convergence condition $\text{max abs( Eigenvalues( } D^{-1}R\text{ ))}<1$ ```python for i in range(A.shape[0]): off_diag = A[i,:].sum() - A[i,i] 
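    # Sufficient (but not necessary) check: diagonal dominance.
    # Every entry of this A is non-negative, so the off-diagonal row sum
    # computed above equals the sum of |a_ij| for j != i; a comfortably
    # positive margin a_ii - off_diag in every row is what lets Jacobi
    # converge for this matrix.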
    print('diagonal - sum(non-diagonal) = ', A[i,i] - off_diag)
```

diagonal - sum(non-diagonal) = 198996
diagonal - sum(non-diagonal) = 195554
diagonal - sum(non-diagonal) = 196990
diagonal - sum(non-diagonal) = 194956
diagonal - sum(non-diagonal) = 193390
diagonal - sum(non-diagonal) = 192982
diagonal - sum(non-diagonal) = 188984
diagonal - sum(non-diagonal) = 190370
diagonal - sum(non-diagonal) = 186730
diagonal - sum(non-diagonal) = 188753
diagonal - sum(non-diagonal) = 181575
diagonal - sum(non-diagonal) = 183461
diagonal - sum(non-diagonal) = 188272
diagonal - sum(non-diagonal) = 183441
diagonal - sum(non-diagonal) = 183104
diagonal - sum(non-diagonal) = 180328
diagonal - sum(non-diagonal) = 175100
diagonal - sum(non-diagonal) = 182762
diagonal - sum(non-diagonal) = 179265
diagonal - sum(non-diagonal) = 179390
diagonal - sum(non-diagonal) = 172170
diagonal - sum(non-diagonal) = 171043
diagonal - sum(non-diagonal) = 171825
diagonal - sum(non-diagonal) = 170367
diagonal - sum(non-diagonal) = 174042
diagonal - sum(non-diagonal) = 176891
diagonal - sum(non-diagonal) = 169708
diagonal - sum(non-diagonal) = 166278
diagonal - sum(non-diagonal) = 171831
diagonal - sum(non-diagonal) = 165265
diagonal - sum(non-diagonal) = 170693
diagonal - sum(non-diagonal) = 166214
diagonal - sum(non-diagonal) = 170490
diagonal - sum(non-diagonal) = 166219
diagonal - sum(non-diagonal) = 169596
diagonal - sum(non-diagonal) = 165746
diagonal - sum(non-diagonal) = 166969
diagonal - sum(non-diagonal) = 160315
diagonal - sum(non-diagonal) = 166377
diagonal - sum(non-diagonal) = 165411
diagonal - sum(non-diagonal) = 167666
diagonal - sum(non-diagonal) = 163959
diagonal - sum(non-diagonal) = 161427
diagonal - sum(non-diagonal) = 165677
diagonal - sum(non-diagonal) = 163637
diagonal - sum(non-diagonal) = 161749
diagonal - sum(non-diagonal) = 164145
diagonal - sum(non-diagonal) = 163357
diagonal - sum(non-diagonal) = 167608
diagonal - sum(non-diagonal) = 162175

```python
N = A.shape[0]
D_inv = np.zeros((N,N))
D_inv[range(N),range(N)] = 1/A[range(N),range(N)]
R = A.copy()
R[range(N),range(N)] = 0
np.abs(sp.linalg.eigvals(D_inv.dot(R))).max()
```

0.15525468366230136

## Compare values one by one (& export data)

```python
col1 = A.dot(ans)
```

```python
col2 = b
```

```python
dt = pd.DataFrame(data = np.hstack([col1, col2, col1-col2]),
                  columns=['A.dot(ans)', 'b', 'A.dot(ans) - b'])
dt.to_csv('Jacobi_result.csv')
dt
```

    A.dot(ans)     b  A.dot(ans) - b
0         42.0  42.0    7.316743e-10
1         73.0  73.0    3.352710e-09
2          1.0   1.0    2.248211e-09
3         31.0  31.0    3.789875e-09
4         15.0  15.0    5.141253e-09
5         10.0  10.0    5.381203e-09
6         19.0  19.0    8.715883e-09
7         35.0  35.0    7.442544e-09
8         40.0  40.0    1.043262e-08
9         54.0  54.0    8.685554e-09
10        42.0  42.0    1.489620e-08
11        69.0  69.0    1.319873e-08
12        21.0  21.0    9.607835e-09
13        88.0  88.0    1.349062e-08
14         3.0   3.0    1.394296e-08
15        68.0  68.0    1.606581e-08
16        42.0  42.0    2.080829e-08
17        56.0  56.0    1.422748e-08
18        15.0  15.0    1.755305e-08
19        20.0  20.0    1.709546e-08
20        81.0  81.0    2.348943e-08
21        97.0  97.0    2.404808e-08
22        32.0  32.0    2.403634e-08
23        70.0  70.0    2.508342e-08
24        88.0  88.0    2.258433e-08
25        90.0  90.0    1.995809e-08
26         9.0   9.0    2.609357e-08
27         4.0   4.0    2.913193e-08
28        17.0  17.0    2.503499e-08
29        88.0  88.0    3.057720e-08
30        10.0  10.0    2.580600e-08
31        43.0  43.0    2.919178e-08
32        96.0  96.0    2.652394e-08
33        54.0  54.0    3.063180e-08
34        70.0  70.0    2.697900e-08
35        32.0  32.0    3.100639e-08
36        69.0  69.0    2.972990e-08
37        84.0  84.0    3.509113e-08
38         2.0   2.0    3.057989e-08
39        76.0  76.0    3.094534e-08
40        99.0  99.0    2.958207e-08
41        75.0  75.0    3.260220e-08
42        29.0  29.0    3.491621e-08
43        79.0  79.0    3.148396e-08
44        11.0  11.0    3.286018e-08
45        45.0  45.0    3.398931e-08
46        91.0  91.0    3.252404e-08
47        30.0  30.0    3.325217e-08
48        29.0  29.0    3.005018e-08
49        14.0  14.0    3.471172e-08

## Export using numpy.savetxt

```python
np.savetxt(fname = 'jacobi_via_np.csv', X = np.hstack([col1,col2,col1-col2]), delimiter=',')
```

# Case where Jacobi does not work

# Here it looks as though $|a_{ii}| \geq \sum_{j\neq i} |a_{ij}| \quad\text{for all } i$, but the condition does not actually hold

# Standard convergence condition

# $\text{max abs( Eigenvalues( } D^{-1}R\text{ ))}<1$

```python
np.random.seed(42)
a = np.ceil(10*np.random.random([50,50])).astype(int)
b = np.tril(a)
A = b.dot(b.T)
A[range(50),range(50)] = A[range(50),range(50)] + 20000

np.random.seed(1)
b = np.ceil(100*np.random.random((50,1)))

try:
    ans = jacobi(A,b)
except Exception as e:
    print(e)
```

[[-0.04886091]
[-0.22500496] [-0.15324429] [-0.25578508] [-0.33586009] [-0.35561948] [-0.55698258] [-0.4863504 ] [-0.66870565] [-0.56583683] [-0.9216152 ] [-0.8276872 ] [-0.59089339] [-0.82743276] [-0.84828361] [-0.9807739 ] [-1.23234014] [-0.86220049] [-1.03489894] [-1.02866582] [-1.36824597] [-1.42240763] [-1.38623536] [-1.45307244] [-1.28096879] [-1.1459794 ] [-1.48681532] [-1.64418878] [-1.3859483 ] [-1.68309859] [-1.44065788] [-1.64621244] [-1.44573864] [-1.63891087] [-1.48768287] [-1.66170381] [-1.6040919 ] [-1.9021494 ] [-1.63459747] [-1.67558887] [-1.57015704] [-1.73518442] [-1.8440985 ] [-1.65531837] [-1.74639401] [-1.84212961] [-1.72468165] [-1.76192442] [-1.57082833] [-1.8047987 ]] [[ 0.29072651] [ 1.31684399] [ 0.88505599] [ 1.48812413] [ 2.01145546] [ 2.10827454] [ 3.38644254] [ 2.9027519 ] [ 4.03315171] [ 3.38374627] [ 5.69878011] [ 5.08625606] [ 3.73957444] [ 5.20688056] [ 5.36774956] [ 6.1539882 ] [ 7.8438494 ] [ 5.48662393] [ 6.7038683 ] [ 6.5449084 ] [ 8.79772881] [ 9.03740272] [ 8.97590838] [ 9.35946632] [ 8.50590908] [ 7.59292668] [ 9.72071312] [ 10.76119679] [ 9.31436252] [ 11.21378397] [ 9.61413957] [ 10.79531926] [ 9.86572195] [ 11.1984442 ] [ 10.00205229] [ 11.32403094] [ 10.87769789] [ 12.69537919] [ 11.17873976] [ 11.31326287] [ 10.81811989] [ 11.78598878] [ 12.48165736] [ 11.36686077] [ 11.78016624] [ 12.34939317] [ 11.73802582] [ 11.96593493] [ 10.87714578] [ 12.31503771]] [[ -1.92215915] [ -8.75141136] [ -5.90029799] [ -9.90939244] [-13.4043386 ] [-14.05184608] [-22.56940858] [-19.33891133] [-26.87312748] [-22.5383346 ] [-37.97643109] [-33.88271772] [-24.92270977] [-34.67942787] [-35.78418516] [-41.00127694] [-52.27679943] [-36.5565059 ] [-44.68678708] [-43.62545803] [-58.62152384] [-60.21323149] [-59.82791995] [-62.37050251] [-56.67245017] [-50.58501324] [-64.80154204] [-71.74010895] [-62.08902795] [-74.72511119] [-64.09025177] [-71.95298482] [-65.73439808] [-74.63534283] [-66.65358651] [-75.48068185] [-72.49132993] [-84.60419726] [-74.52346966] [-75.39253904] [-72.08242076] [-78.5445577 ] [-83.19963902] [-75.74854409] [-78.53016369] [-82.3119179 ] [-78.21851707] [-79.76097255] [-72.50223539] [-82.09440402]] [[ 12.82772325] [ 58.3580568 ] [ 39.32714949] [ 66.06020199] [ 89.34889074] [ 93.66265527] [ 150.43806204] [ 128.91181838] [ 179.13118097] [ 150.24404071] [ 253.13858133] [ 225.86309534] [ 166.1243639 ] [ 231.18083728] [ 238.51205332] [ 273.30986983] [ 348.45449351] [ 243.67995054] [ 297.85515671] [ 290.78270685] [ 390.75826669] [ 401.37354254] [ 398.78069632] [ 415.74216319] [ 377.77078249] [ 337.19725942] [ 431.92297919] [ 478.1683959 ] [ 413.84685988] [ 498.09648879] [ 427.18284639] [ 479.60187857] [ 438.17459579] [ 497.4856736 ] [ 444.29091742] [ 503.11199637] [ 483.20095464] [ 563.94148725] [ 496.72052621] [ 502.54084998] [ 480.48718752] [ 523.54939858] [ 554.559485 ] [ 504.9150406 ] [ 523.42903696] [ 548.64853656] [ 521.38253466] [ 531.640364 ] [ 483.25906642] [ 547.18723673]] [[ -85.48691341] [ -388.95689795] [ -262.13424152] [ -440.31150118] [ -595.54785174] [ -624.3032088 ] [-1002.73497016] [ -859.24633971] [-1193.98069778] [-1001.42862941] [-1687.27462968] [-1505.4602156 ] [-1107.29078082] [-1540.89806201] [-1589.79642332] [-1821.71606676] [-2322.60079976] [-1624.22276517] [-1985.34180907] [-1938.19894411] [-2604.56126378] [-2675.31105337] [-2658.05313127] [-2771.09497434] [-2517.98983488] [-2247.54695771] [-2878.97059243] [-3187.22047836] [-2758.48106869] [-3320.01852665] [-2847.37452512] [-3196.76071991] [-2920.60673507] [-3315.95963567] [-2961.38526967] 
[-3353.46986546] [-3220.73942709] [-3758.90879689] [-3310.87904443] [-3349.64655116] [-3202.63914456] [-3489.67910971] [-3696.39349539] [-3365.4703494 ] [-3488.90126414] [-3656.98823659] [-3475.22963741] [-3543.6260429 ] [-3221.14134642] [-3647.25948385]] [[ 569.82460153] [ 2592.5995682 ] [ 1747.24226185] [ 2934.88490088] [ 3969.59867308] [ 4161.26396651] [ 6683.68479792] [ 5727.27466642] [ 7958.43094722] [ 6674.99054687] [ 11246.45694997] [ 10034.59262169] [ 7380.59704523] [ 10270.80920287] [ 10596.70640377] [ 12142.57925054] [ 15481.19068212] [ 10826.19357131] [ 13233.1985159 ] [ 12918.97178938] [ 17360.59800358] [ 17832.1831715 ] [ 17717.12674549] [ 18470.61575555] [ 16783.56425233] [ 14980.9421686 ] [ 19189.63237365] [ 21244.25399602] [ 18386.51922188] [ 22129.44474723] [ 18979.0312347 ] [ 21307.85858914] [ 19467.18980306] [ 22102.37793483] [ 19738.98669284] [ 22352.39233578] [ 21467.6982791 ] [ 25054.84330329] [ 22068.49403441] [ 22326.92458226] [ 21347.06350781] [ 23260.30383173] [ 24638.13056063] [ 22432.39860963] [ 23255.09472023] [ 24375.48280445] [ 23163.99790864] [ 23619.86754867] [ 21470.36267512] [ 24310.62479626]] [[ -3798.12300223] [ -17280.8226513 ] [ -11646.16084918] [ -19562.32551036] [ -26459.16744674] [ -27736.70547137] [ -44549.77904603] [ -38174.86651335] [ -53046.53173342] [ -44491.81483012] [ -74962.713151 ] [ -66885.07838714] [ -49195.01385924] [ -68459.56141829] [ -70631.84613628] [ -80935.76422169] [-103189.13109242] [ -72161.46186045] [ -88205.25514951] [ -86110.7901238 ] [-115716.2175134 ] [-118859.5401017 ] [-118092.66156003] [-123114.99256405] [-111870.02263162] [ -99854.72923261] [-127907.58665214] [-141602.57221241] [-122554.47028356] [-147502.73718294] [-126503.83388061] [-142026.51102123] [-129757.59873191] [-147322.33710612] [-131569.25840401] [-148988.80439851] [-143091.90407787] [-167001.84511285] [-147096.50504835] [-148819.03393851] [-142287.80697425] [-155040.42982688] [-164224.28621605] [-149522.0647743 ] [-155005.73316203] [-162473.6140276 ] [-154398.50095196] [-157437.10159819] [-143109.67806033] [-162041.31754326]] [[ 25316.22312791] [ 115184.5239548 ] [ 77626.92788925] [ 130391.75596994] [ 176362.31978073] [ 184877.68630024] [ 296944.42718379] [ 254452.75865839] [ 353579.13328431] [ 296558.08257201] [ 499660.39261904] [ 445819.31487094] [ 327907.01844993] [ 456313.96362288] [ 470793.1911666 ] [ 539473.48856775] [ 687802.23074233] [ 480988.79174849] [ 587927.91283547] [ 573967.35803105] [ 771300.94027602] [ 792252.612681 ] [ 787140.99581871] [ 820617.11725275] [ 745664.31596746] [ 665576.95369001] [ 852561.90108029] [ 943845.16996026] [ 816880.96361587] [ 983172.46946965] [ 843205.25496651] [ 946670.92841095] [ 864893.10194655] [ 981970.00890542] [ 876968.62247727] [ 993077.75880465] [ 953772.26570694] [ 1113142.83847859] [ 980464.71664844] [ 991946.17827739] [ 948412.60666561] [ 1033414.59434261] [ 1094629.13232793] [ 996632.19799102] [ 1033183.30096127] [ 1082960.11658603] [ 1029135.85443113] [ 1049389.47857444] [ 953890.72276006] [ 1080078.65354554]] [[ -168744.06043786] [ -767756.91701429] [ -517418.58121934] [ -869120.01200301] [-1175534.61613739] [-1232293.38712634] [-1979268.89393291] [-1696042.7003902 ] [-2356764.81556265] [-1976693.71927491] [-3330462.47363827] [-2971587.3305107 ] [-2185648.56669125] [-3041538.90321089] [-3138049.53533505] [-3595834.75380113] [-4584512.91209448] [-3206007.80401929] [-3918805.42477354] [-3825752.01172305] [-5141069.55909516] [-5280721.92498842] [-5246650.71150534] [-5469784.18602715] [-4970189.86025429] 
-2.97953027e+69] [ -2.71278711e+69] [ -2.81227754e+69] [ -2.94776774e+69] [ -2.80126057e+69] [ -2.85639001e+69] [ -2.59644678e+69] [ -2.93992454e+69]] [[ 4.59313595e+68] [ 2.08979911e+69] [ 1.40838964e+69] [ 2.36570480e+69] [ 3.19975129e+69] [ 3.35424606e+69] [ 5.38747911e+69] [ 4.61655041e+69] [ 6.41500569e+69] [ 5.38046961e+69] [ 9.06536603e+69] [ 8.08852437e+69] [ 5.94923511e+69] [ 8.27892934e+69] [ 8.54162684e+69] [ 9.78769721e+69] [ 1.24788338e+70] [ 8.72660614e+69] [ 1.06668085e+70] [ 1.04135214e+70] [ 1.39937554e+70] [ 1.43738828e+70] [ 1.42811425e+70] [ 1.48885016e+70] [ 1.35286287e+70] [ 1.20755993e+70] [ 1.54680776e+70] [ 1.71242350e+70] [ 1.48207164e+70] [ 1.78377523e+70] [ 1.52983195e+70] [ 1.71755028e+70] [ 1.56918032e+70] [ 1.78159360e+70] [ 1.59108900e+70] [ 1.80174645e+70] [ 1.73043428e+70] [ 2.01958119e+70] [ 1.77886254e+70] [ 1.79969342e+70] [ 1.72071022e+70] [ 1.87492979e+70] [ 1.98599167e+70] [ 1.80819529e+70] [ 1.87451016e+70] [ 1.96482051e+70] [ 1.86716685e+70] [ 1.90391312e+70] [ 1.73064920e+70] [ 1.95959266e+70]] [[ -3.06153283e+69] [ -1.39294561e+70] [ -9.38755385e+69] [ -1.57684923e+70] [ -2.13277894e+70] [ -2.23575669e+70] [ -3.59099847e+70] [ -3.07713963e+70] [ -4.27589140e+70] [ -3.58632632e+70] [ -6.04247643e+70] [ -5.39136728e+70] [ -3.96543425e+70] [ -5.51828081e+70] [ -5.69338057e+70] [ -6.52394283e+70] [ -8.31770706e+70] [ -5.81667764e+70] [ -7.10991029e+70] [ -6.94108297e+70] [ -9.32747085e+70] [ -9.58084291e+70] [ -9.51902732e+70] [ -9.92385961e+70] [ -9.01744288e+70] [ -8.04893305e+70] [ -1.03101732e+71] [ -1.14140770e+71] [ -9.87867771e+70] [ -1.18896686e+71] [ -1.01970218e+71] [ -1.14482494e+71] [ -1.04592965e+71] [ -1.18751270e+71] [ -1.06053278e+71] [ -1.20094549e+71] [ -1.15341270e+71] [ -1.34614219e+71] [ -1.18569233e+71] [ -1.19957705e+71] [ -1.14693118e+71] [ -1.24972550e+71] [ -1.32375326e+71] [ -1.20524393e+71] [ -1.24944580e+71] [ -1.30964173e+71] [ -1.24455115e+71] [ -1.26904420e+71] [ -1.15355596e+71] [ -1.30615713e+71]] [[ 2.04065009e+70] [ 9.28461246e+70] [ 6.25722918e+70] [ 1.05104132e+71] [ 1.42159361e+71] [ 1.49023294e+71] [ 2.39356288e+71] [ 2.05105272e+71] [ 2.85007499e+71] [ 2.39044868e+71] [ 4.02758381e+71] [ 3.59359011e+71] [ 2.64314126e+71] [ 3.67818372e+71] [ 3.79489563e+71] [ 4.34850293e+71] [ 5.54412791e+71] [ 3.87707870e+71] [ 4.73907675e+71] [ 4.62654570e+71] [ 6.21718115e+71] [ 6.38606508e+71] [ 6.34486219e+71] [ 6.61470123e+71] [ 6.01053348e+71] [ 5.36497788e+71] [ 6.87219673e+71] [ 7.60799854e+71] [ 6.58458545e+71] [ 7.92500181e+71] [ 6.79677615e+71] [ 7.63077595e+71] [ 6.97159410e+71] [ 7.91530922e+71] [ 7.06893057e+71] [ 8.00484483e+71] [ 7.68801728e+71] [ 8.97264650e+71] [ 7.90317561e+71] [ 7.99572354e+71] [ 7.64481497e+71] [ 8.32998562e+71] [ 8.82341415e+71] [ 8.03349587e+71] [ 8.32812127e+71] [ 8.72935438e+71] [ 8.29549622e+71] [ 8.45875347e+71] [ 7.68897214e+71] [ 8.70612795e+71]] [[ -1.36018557e+71] [ -6.18861409e+71] [ -4.17072622e+71] [ -7.00566575e+71] [ -9.47556433e+71] [ -9.93307650e+71] [ -1.59541790e+72] [ -1.36711939e+72] [ -1.89970387e+72] [ -1.59334215e+72] [ -2.68456675e+72] [ -2.39529033e+72] [ -1.76177318e+72] [ -2.45167579e+72] [ -2.52946956e+72] [ -2.89847386e+72] [ -3.69541198e+72] [ -2.58424829e+72] [ -3.15880897e+72] [ -3.08380194e+72] [ -4.14403240e+72] [ -4.25660118e+72] [ -4.22913759e+72] [ -4.40899751e+72] [ -4.00629238e+72] [ -3.57600038e+72] [ -4.58062991e+72] [ -5.07107509e+72] [ -4.38892398e+72] [ -5.28237210e+72] [ -4.53035868e+72] [ -5.08625726e+72] [ -4.64688275e+72] [ -5.27591155e+72] 
[ -4.71176191e+72] [ -5.33559109e+72] [ -5.12441120e+72] [ -5.98067467e+72] [ -5.26782396e+72] [ -5.32951134e+72] [ -5.09561491e+72] [ -5.55231213e+72] [ -5.88120456e+72] [ -5.35468830e+72] [ -5.55106946e+72] [ -5.81850947e+72] [ -5.52932339e+72] [ -5.63814173e+72] [ -5.12504765e+72] [ -5.80302800e+72]] [[ 9.06625194e+71] [ 4.12499116e+72] [ 2.77997764e+72] [ 4.66959304e+72] [ 6.31589215e+72] [ 6.62084470e+72] [ 1.06341818e+73] [ 9.11246903e+72] [ 1.26623854e+73] [ 1.06203460e+73] [ 1.78938514e+73] [ 1.59656933e+73] [ 1.17430150e+73] [ 1.63415279e+73] [ 1.68600585e+73] [ 1.93196391e+73] [ 2.46315920e+73] [ 1.72251835e+73] [ 2.10548903e+73] [ 2.05549345e+73] [ 2.76218500e+73] [ 2.83721719e+73] [ 2.81891146e+73] [ 2.93879623e+73] [ 2.67037505e+73] [ 2.38356597e+73] [ 3.05319699e+73] [ 3.38010088e+73] [ 2.92541632e+73] [ 3.52093990e+73] [ 3.01968894e+73] [ 3.39022049e+73] [ 3.09735750e+73] [ 3.51663365e+73] [ 3.14060239e+73] [ 3.55641275e+73] [ 3.41565181e+73] [ 3.98639013e+73] [ 3.51124290e+73] [ 3.55236033e+73] [ 3.39645778e+73] [ 3.70086713e+73] [ 3.92008881e+73] [ 3.56914191e+73] [ 3.70003883e+73] [ 3.87829970e+73] [ 3.68554409e+73] [ 3.75807644e+73] [ 3.41607604e+73] [ 3.86798060e+73]] [[ -6.04306693e+72] [ -2.74949316e+73] [ -1.85298082e+73] [ -3.11249494e+73] [ -4.20982775e+73] [ -4.41309241e+73] [ -7.08816311e+73] [ -6.07387271e+73] [ -8.44005253e+73] [ -7.07894090e+73] [ -1.19270612e+74] [ -1.06418566e+74] [ -7.82725056e+73] [ -1.08923674e+74] [ -1.12379915e+74] [ -1.28774131e+74] [ -1.64180700e+74] [ -1.14813638e+74] [ -1.40340366e+74] [ -1.37007934e+74] [ -1.84112122e+74] [ -1.89113356e+74] [ -1.87893197e+74] [ -1.95884059e+74] [ -1.77992574e+74] [ -1.58875452e+74] [ -2.03509387e+74] [ -2.25299010e+74] [ -1.94992227e+74] [ -2.34686567e+74] [ -2.01275924e+74] [ -2.25973527e+74] [ -2.06452885e+74] [ -2.34399536e+74] [ -2.09335353e+74] [ -2.37050994e+74] [ -2.27668640e+74] [ -2.65710930e+74] [ -2.34040218e+74] [ -2.36780881e+74] [ -2.26389271e+74] [ -2.46679531e+74] [ -2.61291647e+74] [ -2.37899450e+74] [ -2.46624321e+74] [ -2.58506214e+74] [ -2.45658181e+74] [ -2.50492790e+74] [ -2.27696917e+74] [ -2.57818399e+74]] [[ 4.02797740e+73] [ 1.83266153e+74] [ 1.23509552e+74] [ 2.07461864e+74] [ 2.80604058e+74] [ 2.94152567e+74] [ 4.72458127e+74] [ 4.04851084e+74] [ 5.62567670e+74] [ 4.71843424e+74] [ 7.94992569e+74] [ 7.09327868e+74] [ 5.21721647e+74] [ 7.26025544e+74] [ 7.49062954e+74] [ 8.58337823e+74] [ 1.09433862e+75] [ 7.65284822e+74] [ 9.35432005e+74] [ 9.13219843e+74] [ 1.22719056e+75] [ 1.26052604e+75] [ 1.25239313e+75] [ 1.30565584e+75] [ 1.18640100e+75] [ 1.05897673e+75] [ 1.35648210e+75] [ 1.50171979e+75] [ 1.29971138e+75] [ 1.56429210e+75] [ 1.34159506e+75] [ 1.50621575e+75] [ 1.37610184e+75] [ 1.56237891e+75] [ 1.39531479e+75] [ 1.58005208e+75] [ 1.51751445e+75] [ 1.77108352e+75] [ 1.55998390e+75] [ 1.57825165e+75] [ 1.50898687e+75] [ 1.64423063e+75] [ 1.74162699e+75] [ 1.58570742e+75] [ 1.64386263e+75] [ 1.72306082e+75] [ 1.63742287e+75] [ 1.66964773e+75] [ 1.51770292e+75] [ 1.71847623e+75]] [[ -2.68482910e+74] [ -1.22155179e+75] [ -8.23247017e+74] [ -1.38282714e+75] [ -1.87035294e+75] [ -1.96065988e+75] [ -3.14914708e+75] [ -2.69851557e+75] [ -3.74976794e+75] [ -3.14504981e+75] [ -5.29898500e+75] [ -4.72799103e+75] [ -3.47751072e+75] [ -4.83928858e+75] [ -4.99284334e+75] [ -5.72120976e+75] [ -7.29426178e+75] [ -5.10096943e+75] [ -6.23507735e+75] [ -6.08702325e+75] [ -8.17978004e+75] [ -8.40197615e+75] [ -8.34776660e+75] [ -8.70278664e+75] [ -7.90789919e+75] [ 
-7.05855884e+75] [ -9.04156664e+75] [ -1.00096415e+76] [ -8.66316411e+75] [ -1.04267143e+76] [ -8.94233779e+75] [ -1.00396092e+76] [ -9.17234112e+75] [ -1.04139620e+76] [ -9.30040412e+75] [ -1.05317616e+76] [ -1.01149201e+76] [ -1.18050726e+76] [ -1.03979982e+76] [ -1.05197610e+76] [ -1.00580799e+76] [ -1.09595407e+76] [ -1.16087316e+76] [ -1.05694571e+76] [ -1.09570879e+76] [ -1.14849796e+76] [ -1.09141640e+76] [ -1.11289572e+76] [ -1.01161763e+76] [ -1.14544212e+76]] [[ 1.78956001e+75] [ 8.14219512e+75] [ 5.48731367e+75] [ 9.21716828e+75] [ 1.24667482e+76] [ 1.30686848e+76] [ 2.09904894e+76] [ 1.79868266e+76] [ 2.49938991e+76] [ 2.09631793e+76] [ 3.53201314e+76] [ 3.15141984e+76] [ 2.31791816e+76] [ 3.22560469e+76] [ 3.32795588e+76] [ 3.81344504e+76] [ 4.86195535e+76] [ 3.40002681e+76] [ 4.15596103e+76] [ 4.05727627e+76] [ 5.45219332e+76] [ 5.60029707e+76] [ 5.56416396e+76] [ 5.80080087e+76] [ 5.27097244e+76] [ 4.70484868e+76] [ 6.02661306e+76] [ 6.67187875e+76] [ 5.77439066e+76] [ 6.94987662e+76] [ 5.96047254e+76] [ 6.69185354e+76] [ 6.11378016e+76] [ 6.94137664e+76] [ 6.19913994e+76] [ 7.01989542e+76] [ 6.74205165e+76] [ 7.86861475e+76] [ 6.93073601e+76] [ 7.01189646e+76] [ 6.70416514e+76] [ 7.30502954e+76] [ 7.73774457e+76] [ 7.04502112e+76] [ 7.30339459e+76] [ 7.65525830e+76] [ 7.27478386e+76] [ 7.41795326e+76] [ 6.74288902e+76] [ 7.63488975e+76]] [[ -1.19282268e+76] [ -5.42714127e+76] [ -3.65754272e+76] [ -6.14365949e+76] [ -8.30965148e+76] [ -8.71086944e+76] [ -1.39911105e+77] [ -1.19890334e+77] [ -1.66595641e+77] [ -1.39729070e+77] [ -2.35424649e+77] [ -2.10056384e+77] [ -1.54499728e+77] [ -2.15001140e+77] [ -2.21823310e+77] [ -2.54183358e+77] [ -3.24071312e+77] [ -2.26627163e+77] [ -2.77013597e+77] [ -2.70435812e+77] [ -3.63413341e+77] [ -3.73285126e+77] [ -3.70876691e+77] [ -3.86649612e+77] [ -3.51334150e+77] [ -3.13599442e+77] [ -4.01701015e+77] [ -4.44710891e+77] [ -3.84889252e+77] [ -4.63240707e+77] [ -3.97292450e+77] [ -4.46042301e+77] [ -4.07511097e+77] [ -4.62674145e+77] [ -4.13200712e+77] [ -4.67907776e+77] [ -4.49388233e+77] [ -5.24478758e+77] [ -4.61964898e+77] [ -4.67374609e+77] [ -4.46862925e+77] [ -4.86913255e+77] [ -5.15755669e+77] [ -4.69582518e+77] [ -4.86804278e+77] [ -5.10257586e+77] [ -4.84897244e+77] [ -4.94440131e+77] [ -4.49444047e+77] [ -5.08899930e+77]] [[ 7.95070258e+76] [ 3.61743509e+77] [ 2.43791763e+77] [ 4.09502689e+77] [ 5.53875851e+77] [ 5.80618842e+77] [ 9.32570787e+77] [ 7.99123293e+77] [ 1.11043529e+78] [ 9.31357444e+77] [ 1.56921175e+78] [ 1.40012079e+78] [ 1.02981056e+78] [ 1.43307983e+78] [ 1.47855268e+78] [ 1.69424703e+78] [ 2.16008185e+78] [ 1.51057253e+78] [ 1.84642090e+78] [ 1.80257699e+78] [ 2.42231427e+78] [ 2.48811418e+78] [ 2.47206087e+78] [ 2.57719452e+78] [ 2.34180100e+78] [ 2.09028210e+78] [ 2.67751893e+78] [ 2.96419921e+78] [ 2.56546092e+78] [ 3.08770880e+78] [ 2.64813385e+78] [ 2.97307365e+78] [ 2.71624575e+78] [ 3.08393241e+78] [ 2.75416961e+78] [ 3.11881693e+78] [ 2.99537580e+78] [ 3.49588811e+78] [ 3.07920497e+78] [ 3.11526314e+78] [ 2.97854348e+78] [ 3.24549704e+78] [ 3.43774478e+78] [ 3.12997985e+78] [ 3.24477066e+78] [ 3.40109757e+78] [ 3.23205942e+78] [ 3.29566708e+78] [ 2.99574783e+78] [ 3.39204817e+78]] [[ -5.29950283e+77] [ -2.41118408e+78] [ -1.62498235e+78] [ -2.72952062e+78] [ -3.69183303e+78] [ -3.87008716e+78] [ -6.21600604e+78] [ -5.32651814e+78] [ -7.40155336e+78] [ -6.20791856e+78] [ -1.04595060e+79] [ -9.33243829e+78] [ -6.86415309e+78] [ -9.55212516e+78] [ -9.85522226e+78] [ -1.12929227e+79] [ -1.43979224e+79] 
[ -1.00686491e+79] [ -1.23072303e+79] [ -1.20149908e+79] [ -1.61458201e+79] [ -1.65844062e+79] [ -1.64774037e+79] [ -1.71781670e+79] [ -1.56091627e+79] [ -1.39326755e+79] [ -1.78468745e+79] [ -1.97577282e+79] [ -1.70999573e+79] [ -2.05809756e+79] [ -1.76510097e+79] [ -1.98168804e+79] [ -1.81050063e+79] [ -2.05558042e+79] [ -1.83577860e+79] [ -2.07883253e+79] [ -1.99655343e+79] [ -2.33016752e+79] [ -2.05242936e+79] [ -2.07646376e+79] [ -1.98533393e+79] [ -2.16327055e+79] [ -2.29141236e+79] [ -2.08627312e+79] [ -2.16278639e+79] [ -2.26698534e+79] [ -2.15431377e+79] [ -2.19671115e+79] [ -1.99680141e+79] [ -2.26095351e+79]] [[ 3.53235830e+78] [ 1.60716323e+79] [ 1.08312423e+79] [ 1.81934893e+79] [ 2.46077367e+79] [ 2.57958811e+79] [ 4.14324914e+79] [ 3.55036523e+79] [ 4.93347004e+79] [ 4.13785847e+79] [ 6.97173376e+79] [ 6.22049217e+79] [ 4.57526846e+79] [ 6.36692340e+79] [ 6.56895132e+79] [ 7.52724367e+79] [ 9.59686641e+79] [ 6.71120994e+79] [ 8.20332560e+79] [ 8.00853475e+79] [ 1.07619193e+80] [ 1.10542568e+80] [ 1.09829347e+80] [ 1.14500251e+80] [ 1.04042128e+80] [ 9.28675831e+79] [ 1.18957490e+80] [ 1.31694194e+80] [ 1.13978948e+80] [ 1.37181510e+80] [ 1.17651962e+80] [ 1.32088470e+80] [ 1.20678055e+80] [ 1.37013731e+80] [ 1.22362946e+80] [ 1.38563590e+80] [ 1.33079316e+80] [ 1.55316203e+80] [ 1.36803699e+80] [ 1.38405700e+80] [ 1.32331485e+80] [ 1.44191766e+80] [ 1.52732996e+80] [ 1.39059538e+80] [ 1.44159494e+80] [ 1.51104825e+80] [ 1.43594755e+80] [ 1.46420733e+80] [ 1.33095844e+80] [ 1.50702776e+80]] [[ -2.35447655e+79] [ -1.07124698e+80] [ -7.21951280e+79] [ -1.21267834e+80] [ -1.64021694e+80] [ -1.71941214e+80] [ -2.76166292e+80] [ -2.36647899e+80] [ -3.28838089e+80] [ -2.75806980e+80] [ -4.64697584e+80] [ -4.14623934e+80] [ -3.04962334e+80] [ -4.24384239e+80] [ -4.37850313e+80] [ -5.01724831e+80] [ -6.39674547e+80] [ -4.47332493e+80] [ -5.46788750e+80] [ -5.33805059e+80] [ -7.17330588e+80] [ -7.36816205e+80] [ -7.32062267e+80] [ -7.63195957e+80] [ -6.93487838e+80] [ -6.19004440e+80] [ -7.92905467e+80] [ -8.77801359e+80] [ -7.59721236e+80] [ -9.14376800e+80] [ -7.84203535e+80] [ -8.80429389e+80] [ -8.04373811e+80] [ -9.13258481e+80] [ -8.15604371e+80] [ -9.23588988e+80] [ -8.87033822e+80] [ -1.03525273e+81] [ -9.11858521e+80] [ -9.22536586e+80] [ -8.82049194e+80] [ -9.61103326e+80] [ -1.01803449e+81] [ -9.26894710e+80] [ -9.60888220e+80] [ -1.00718199e+81] [ -9.57123983e+80] [ -9.75960400e+80] [ -8.87143992e+80] [ -1.00450216e+81]] [[ 1.56936510e+80] [ 7.14034556e+80] [ 4.81213177e+80] [ 8.08304955e+80] [ 1.09327877e+81] [ 1.14606595e+81] [ 1.84077323e+81] [ 1.57736526e+81] [ 2.19185458e+81] [ 1.83837825e+81] [ 3.09741955e+81] [ 2.76365603e+81] [ 2.03271187e+81] [ 2.82871287e+81] [ 2.91847035e+81] [ 3.34422290e+81] [ 4.26372014e+81] [ 2.98167337e+81] [ 3.64459430e+81] [ 3.55805212e+81] [ 4.78133277e+81] [ 4.91121322e+81] [ 4.87952608e+81] [ 5.08704620e+81] [ 4.62241006e+81] [ 4.12594453e+81] [ 5.28507352e+81] [ 5.85094303e+81] [ 5.06388562e+81] [ 6.09473489e+81] [ 5.22707121e+81] [ 5.86846004e+81] [ 5.36151522e+81] [ 6.08728079e+81] [ 5.43637198e+81] [ 6.15613829e+81] [ 5.91248157e+81] [ 6.90042766e+81] [ 6.07794942e+81] [ 6.14912355e+81] [ 5.87925677e+81] [ 6.40618832e+81] [ 6.78566029e+81] [ 6.17817242e+81] [ 6.40475454e+81] [ 6.71332348e+81] [ 6.37966419e+81] [ 6.50521743e+81] [ 5.91321591e+81] [ 6.69546115e+81]] [[ -1.04605281e+81] [ -4.75936323e+81] [ -3.20750345e+81] [ -5.38771807e+81] [ -7.28719740e+81] [ -7.63904783e+81] [ -1.22695861e+82] [ -1.05138528e+82] [ 
-1.46097020e+82] [ -1.22536224e+82] [ -2.06457021e+82] [ -1.84210173e+82] [ -1.35489439e+82] [ -1.88546505e+82] [ -1.94529247e+82] [ -2.22907580e+82] [ -2.84196229e+82] [ -1.98742014e+82] [ -2.42928692e+82] [ -2.37160265e+82] [ -3.18697452e+82] [ -3.27354572e+82] [ -3.25242480e+82] [ -3.39074635e+82] [ -3.08104535e+82] [ -2.75012862e+82] [ -3.52274051e+82] [ -3.89991813e+82] [ -3.37530877e+82] [ -4.06241643e+82] [ -3.48407935e+82] [ -3.91159401e+82] [ -3.57369236e+82] [ -4.05744793e+82] [ -3.62358778e+82] [ -4.10334457e+82] [ -3.94093635e+82] [ -4.59944710e+82] [ -4.05122816e+82] [ -4.09866893e+82] [ -3.91879052e+82] [ -4.27001423e+82] [ -4.52294946e+82] [ -4.11803132e+82] [ -4.26905855e+82] [ -4.47473370e+82] [ -4.25233470e+82] [ -4.33602161e+82] [ -3.94142582e+82] [ -4.46282765e+82]] [[ 6.97241507e+81] [ 3.17233083e+82] [ 2.13794610e+82] [ 3.59115775e+82] [ 4.85724662e+82] [ 5.09177085e+82] [ 8.17823398e+82] [ 7.00795839e+82] [ 9.73802709e+82] [ 8.16759350e+82] [ 1.37612941e+83] [ 1.22784411e+83] [ 9.03098383e+82] [ 1.25674773e+83] [ 1.29662541e+83] [ 1.48577983e+83] [ 1.89429639e+83] [ 1.32470540e+83] [ 1.61922959e+83] [ 1.58078042e+83] [ 2.12426265e+83] [ 2.18196626e+83] [ 2.16788822e+83] [ 2.26008578e+83] [ 2.05365606e+83] [ 1.83308510e+83] [ 2.34806586e+83] [ 2.59947181e+83] [ 2.24979594e+83] [ 2.70778427e+83] [ 2.32229645e+83] [ 2.60725430e+83] [ 2.38202757e+83] [ 2.70447254e+83] [ 2.41528513e+83] [ 2.73506473e+83] [ 2.62681230e+83] [ 3.06573949e+83] [ 2.70032678e+83] [ 2.73194820e+83] [ 2.61205110e+83] [ 2.84615759e+83] [ 3.01475036e+83] [ 2.74485410e+83] [ 2.84552058e+83] [ 2.98261238e+83] [ 2.83437338e+83] [ 2.89015449e+83] [ 2.62713855e+83] [ 2.97467646e+83]] [[ -4.64742996e+82] [ -2.11450196e+83] [ -1.42503776e+83] [ -2.39366904e+83] [ -3.23757453e+83] [ -3.39389554e+83] [ -5.45116280e+83] [ -4.67112119e+83] [ -6.49083544e+83] [ -5.44407044e+83] [ -9.17252486e+83] [ -8.18413626e+83] [ -6.01955914e+83] [ -8.37679195e+83] [ -8.64259471e+83] [ -9.90339448e+83] [ -1.26263421e+84] [ -8.82976059e+83] [ -1.07928975e+84] [ -1.05366164e+84] [ -1.41591712e+84] [ -1.45437919e+84] [ -1.44499554e+84] [ -1.50644938e+84] [ -1.36885464e+84] [ -1.22183412e+84] [ -1.56509209e+84] [ -1.73266552e+84] [ -1.49959074e+84] [ -1.80486067e+84] [ -1.54791561e+84] [ -1.73785290e+84] [ -1.58772910e+84] [ -1.80265326e+84] [ -1.60989676e+84] [ -1.82304433e+84] [ -1.75088919e+84] [ -2.04345402e+84] [ -1.79988992e+84] [ -1.82096702e+84] [ -1.74105019e+84] [ -1.89709275e+84] [ -2.00946745e+84] [ -1.82956939e+84] [ -1.89666815e+84] [ -1.98804603e+84] [ -1.88923804e+84] [ -1.92641867e+84] [ -1.75110666e+84] [ -1.98275638e+84]] [[ 3.09772224e+83] [ 1.40941118e+84] [ 9.49852116e+83] [ 1.59548867e+84] [ 2.15798984e+84] [ 2.26218486e+84] [ 3.63344653e+84] [ 3.11351352e+84] [ 4.32643536e+84] [ 3.62871915e+84] [ 6.11390264e+84] [ 5.45509693e+84] [ 4.01230839e+84] [ 5.58351066e+84] [ 5.76068023e+84] [ 6.60106029e+84] [ 8.41602804e+84] [ 5.88543475e+84] [ 7.19395429e+84] [ 7.02313132e+84] [ 9.43772794e+84] [ 9.69409503e+84] [ 9.63154873e+84] [ 1.00411664e+85] [ 9.12403522e+84] [ 8.14407694e+84] [ 1.04320465e+85] [ 1.15489992e+85] [ 9.99545044e+84] [ 1.20302126e+85] [ 1.03175575e+85] [ 1.15835755e+85] [ 1.05829325e+85] [ 1.20154992e+85] [ 1.07306900e+85] [ 1.21514149e+85] [ 1.16704683e+85] [ 1.36205452e+85] [ 1.19970803e+85] [ 1.21375687e+85] [ 1.16048869e+85] [ 1.26449811e+85] [ 1.33940093e+85] [ 1.21949074e+85] [ 1.26421510e+85] [ 1.32512259e+85] [ 1.25926259e+85] [ 1.28404517e+85] [ 1.16719178e+85] [ 1.32159680e+85]] 
[[ -2.06477197e+84] [ -9.39436293e+84] [ -6.33119391e+84] [ -1.06346535e+85] [ -1.43839782e+85] [ -1.50784851e+85] [ -2.42185643e+85] [ -2.07529757e+85] [ -2.88376483e+85] [ -2.41870542e+85] [ -4.07519260e+85] [ -3.63606881e+85] [ -2.67438500e+85] [ -3.72166236e+85] [ -3.83975389e+85] [ -4.39990521e+85] [ -5.60966330e+85] [ -3.92290842e+85] [ -4.79509587e+85] [ -4.68123464e+85] [ -6.29067248e+85] [ -6.46155274e+85] [ -6.41986280e+85] [ -6.69289152e+85] [ -6.08158210e+85] [ -5.42839559e+85] [ -6.95343080e+85] [ -7.69793028e+85] [ -6.66241975e+85] [ -8.01868075e+85] [ -6.87711870e+85] [ -7.72097694e+85] [ -7.05400311e+85] [ -8.00887359e+85] [ -7.15249016e+85] [ -8.09946757e+85] [ -7.77889490e+85] [ -9.07870932e+85] [ -7.99659656e+85] [ -8.09023846e+85] [ -7.73518192e+85] [ -8.42845175e+85] [ -8.92771294e+85] [ -8.12845729e+85] [ -8.42656537e+85] [ -8.83254132e+85] [ -8.39355466e+85] [ -8.55874172e+85] [ -7.77986105e+85] [ -8.80904034e+85]] [[ 1.37626389e+85] [ 6.26176774e+85] [ 4.22002706e+85] [ 7.08847751e+85] [ 9.58757199e+85] [ 1.00504923e+86] [ 1.61427684e+86] [ 1.38327968e+86] [ 1.92215967e+86] [ 1.61217654e+86] [ 2.71630016e+86] [ 2.42360430e+86] [ 1.78259855e+86] [ 2.48065627e+86] [ 2.55936962e+86] [ 2.93273580e+86] [ 3.73909428e+86] [ 2.61479587e+86] [ 3.19614825e+86] [ 3.12025459e+86] [ 4.19301770e+86] [ 4.30691712e+86] [ 4.27912889e+86] [ 4.46111488e+86] [ 4.05364950e+86] [ 3.61827115e+86] [ 4.63477609e+86] [ 5.13101866e+86] [ 4.44080406e+86] [ 5.34481336e+86] [ 4.58391062e+86] [ 5.14638030e+86] [ 4.70181208e+86] [ 5.33827644e+86] [ 4.76745816e+86] [ 5.39866142e+86] [ 5.18498524e+86] [ 6.05137033e+86] [ 5.33009324e+86] [ 5.39250981e+86] [ 5.15584856e+86] [ 5.61794426e+86] [ 5.95072442e+86] [ 5.41798438e+86] [ 5.61668690e+86] [ 5.88728823e+86] [ 5.59468377e+86] [ 5.70478842e+86] [ 5.18562922e+86] [ 5.87162376e+86]] [[ -9.17342124e+85] [ -4.17375138e+86] [ -2.81283888e+86] [ -4.72479083e+86] [ -6.39055031e+86] [ -6.69910761e+86] [ -1.07598851e+87] [ -9.22018464e+86] [ -1.28120635e+87] [ -1.07458857e+87] [ -1.81053690e+87] [ -1.61544187e+87] [ -1.18818255e+87] [ -1.65346959e+87] [ -1.70593559e+87] [ -1.95480104e+87] [ -2.49227543e+87] [ -1.74287970e+87] [ -2.13037735e+87] [ -2.07979079e+87] [ -2.79483592e+87] [ -2.87075503e+87] [ -2.85223292e+87] [ -2.97353481e+87] [ -2.70194071e+87] [ -2.41174135e+87] [ -3.08928787e+87] [ -3.42005598e+87] [ -2.95999674e+87] [ -3.56255981e+87] [ -3.05538373e+87] [ -3.43029521e+87] [ -3.13397039e+87] [ -3.55820266e+87] [ -3.17772646e+87] [ -3.59845199e+87] [ -3.45602715e+87] [ -4.03351199e+87] [ -3.55274819e+87] [ -3.59435166e+87] [ -3.43660623e+87] [ -3.74461391e+87] [ -3.96642694e+87] [ -3.61133161e+87] [ -3.74377582e+87] [ -3.92414385e+87] [ -3.72910975e+87] [ -3.80249947e+87] [ -3.45645639e+87] [ -3.91370277e+87]] [[ 6.11450011e+86] [ 2.78199405e+87] [ 1.87488432e+87] [ 3.14928676e+87] [ 4.25959079e+87] [ 4.46525818e+87] [ 7.17195005e+87] [ 6.14567004e+87] [ 8.53981973e+87] [ 7.16261882e+87] [ 1.20680472e+88] [ 1.07676506e+88] [ 7.91977401e+87] [ 1.10211226e+88] [ 1.13708322e+88] [ 1.30296330e+88] [ 1.66121429e+88] [ 1.16170814e+88] [ 1.41999285e+88] [ 1.38627462e+88] [ 1.86288453e+88] [ 1.91348806e+88] [ 1.90114223e+88] [ 1.98199543e+88] [ 1.80096568e+88] [ 1.60753468e+88] [ 2.05915007e+88] [ 2.27962198e+88] [ 1.97297169e+88] [ 2.37460723e+88] [ 2.03655143e+88] [ 2.28644689e+88] [ 2.08893299e+88] [ 2.37170299e+88] [ 2.11809840e+88] [ 2.39853098e+88] [ 2.30359839e+88] [ 2.68851815e+88] [ 2.36806734e+88] [ 2.39579793e+88] [ 2.29065347e+88] [ 
2.49595452e+88] [ 2.64380293e+88] [ 2.40711584e+88] [ 2.49539589e+88] [ 2.61561934e+88] [ 2.48562029e+88] [ 2.53453786e+88] [ 2.30388450e+88] [ 2.60865989e+88]] [[ -4.07559084e+87] [ -1.85432485e+88] [ -1.24969519e+88] [ -2.09914205e+88] [ -2.83920990e+88] [ -2.97629651e+88] [ -4.78042905e+88] [ -4.09636701e+88] [ -5.69217605e+88] [ -4.77420937e+88] [ -8.04389925e+88] [ -7.17712609e+88] [ -5.27888754e+88] [ -7.34607662e+88] [ -7.57917391e+88] [ -8.68483963e+88] [ -1.10727445e+89] [ -7.74331012e+88] [ -9.46489451e+88] [ -9.24014726e+88] [ -1.24169679e+89] [ -1.27542632e+89] [ -1.26719727e+89] [ -1.32108959e+89] [ -1.20042507e+89] [ -1.07149456e+89] [ -1.37251665e+89] [ -1.51947114e+89] [ -1.31507486e+89] [ -1.58278311e+89] [ -1.35745363e+89] [ -1.52402025e+89] [ -1.39236831e+89] [ -1.58084730e+89] [ -1.41180837e+89] [ -1.59872937e+89] [ -1.53545250e+89] [ -1.79201893e+89] [ -1.57842397e+89] [ -1.59690767e+89] [ -1.52682413e+89] [ -1.66366656e+89] [ -1.76221422e+89] [ -1.60445156e+89] [ -1.66329422e+89] [ -1.74342858e+89] [ -1.65677833e+89] [ -1.68938411e+89] [ -1.53564321e+89] [ -1.73878979e+89]] [[ 2.71656561e+88] [ 1.23599137e+89] [ 8.32978360e+88] [ 1.39917311e+89] [ 1.89246180e+89] [ 1.98383623e+89] [ 3.18637215e+89] [ 2.73041387e+89] [ 3.79409276e+89] [ 3.18222645e+89] [ 5.36162262e+89] [ 4.78387911e+89] [ 3.51861728e+89] [ 4.89649229e+89] [ 5.05186217e+89] [ 5.78883837e+89] [ 7.38048494e+89] [ 5.16126638e+89] [ 6.30878023e+89] [ 6.15897603e+89] [ 8.27647063e+89] [ 8.50129325e+89] [ 8.44644291e+89] [ 8.80565954e+89] [ 8.00137597e+89] [ 7.14199584e+89] [ 9.14844414e+89] [ 1.01279623e+90] [ 8.76556864e+89] [ 1.05499652e+90] [ 9.04804234e+89] [ 1.01582842e+90] [ 9.28076446e+89] [ 1.05370622e+90] [ 9.41034126e+89] [ 1.06562543e+90] [ 1.02344853e+90] [ 1.19446166e+90] [ 1.05209096e+90] [ 1.06441118e+90] [ 1.01769733e+90] [ 1.10890900e+90] [ 1.17459547e+90] [ 1.06943953e+90] [ 1.10866081e+90] [ 1.16207399e+90] [ 1.10431768e+90] [ 1.12605091e+90] [ 1.02357565e+90] [ 1.15898203e+90]] [[ -1.81071383e+89] [ -8.23844144e+89] [ -5.55217747e+89] [ -9.32612151e+89] [ -1.26141137e+90] [ -1.32231656e+90] [ -2.12386114e+90] [ -1.81994432e+90] [ -2.52893441e+90] [ -2.12109784e+90] [ -3.57376395e+90] [ -3.18867177e+90] [ -2.34531754e+90] [ -3.26373355e+90] [ -3.36729460e+90] [ -3.85852257e+90] [ -4.91942699e+90] [ -3.44021745e+90] [ -4.20508733e+90] [ -4.10523605e+90] [ -5.51664197e+90] [ -5.66649641e+90] [ -5.62993618e+90] [ -5.86937031e+90] [ -5.33327894e+90] [ -4.76046321e+90] [ -6.09785175e+90] [ -6.75074492e+90] [ -5.84264791e+90] [ -7.03202891e+90] [ -6.03092940e+90] [ -6.77095582e+90] [ -6.18604923e+90] [ -7.02342846e+90] [ -6.27241802e+90] [ -7.10287539e+90] [ -6.82174732e+90] [ -7.96162716e+90] [ -7.01266205e+90] [ -7.09478187e+90] [ -6.78341296e+90] [ -7.39137999e+90] [ -7.82921000e+90] [ -7.12829809e+90] [ -7.38972571e+90] [ -7.74574869e+90] [ -7.36077678e+90] [ -7.50563854e+90] [ -6.82259458e+90] [ -7.72513937e+90]] [[ 1.20692266e+90] [ 5.49129379e+90] [ 3.70077738e+90] [ 6.21628175e+90] [ 8.40787725e+90] [ 8.81383788e+90] [ 1.41564950e+91] [ 1.21307519e+91] [ 1.68564915e+91] [ 1.41380764e+91] [ 2.38207529e+91] [ 2.12539394e+91] [ 1.56326020e+91] [ 2.17542601e+91] [ 2.24445413e+91] [ 2.57187979e+91] [ 3.27902056e+91] [ 2.29306051e+91] [ 2.80288088e+91] [ 2.73632548e+91] [ 3.67709136e+91] [ 3.77697612e+91] [ 3.75260707e+91] [ 3.91220074e+91] [ 3.55487160e+91] [ 3.17306402e+91] [ 4.06449396e+91] [ 4.49967678e+91] [ 3.89438906e+91] [ 4.68716529e+91] [ 4.01988719e+91] [ 4.51314826e+91] [ 
4.12328157e+91] [ 4.68143270e+91] [ 4.18085027e+91] [ 4.73438766e+91] [ 4.54700309e+91] [ 5.30678455e+91] [ 4.67425640e+91] [ 4.72899297e+91] [ 4.52145151e+91] [ 4.92668903e+91] [ 5.21852253e+91] [ 4.75133305e+91] [ 4.92558638e+91] [ 5.16289180e+91] [ 4.90629061e+91] [ 5.00284752e+91] [ 4.54756783e+91] [ 5.14915475e+91]] [[ -8.04468533e+90] [ -3.66019565e+91] [ -2.46673548e+91] [ -4.14343291e+91] [ -5.60423042e+91] [ -5.87482154e+91] [ -9.43594412e+91] [ -8.08569477e+91] [ -1.12356139e+92] [ -9.42366727e+91] [ -1.58776091e+92] [ -1.41667118e+92] [ -1.04198362e+92] [ -1.45001981e+92] [ -1.49603018e+92] [ -1.71427419e+92] [ -2.18561550e+92] [ -1.52842853e+92] [ -1.86824686e+92] [ -1.82388468e+92] [ -2.45094768e+92] [ -2.51752539e+92] [ -2.50128232e+92] [ -2.60765872e+92] [ -2.36948269e+92] [ -2.11499067e+92] [ -2.70916904e+92] [ -2.99923807e+92] [ -2.59578643e+92] [ -3.12420763e+92] [ -2.67943661e+92] [ -3.00821742e+92] [ -2.74835363e+92] [ -3.12038660e+92] [ -2.78672578e+92] [ -3.15568348e+92] [ -3.03078319e+92] [ -3.53721190e+92] [ -3.11560328e+92] [ -3.15208768e+92] [ -3.01375191e+92] [ -3.28386104e+92] [ -3.47838127e+92] [ -3.16697835e+92] [ -3.28312607e+92] [ -3.44130087e+92] [ -3.27026457e+92] [ -3.33462412e+92] [ -3.03115962e+92] [ -3.43214450e+92]] [[ 5.36214658e+91] [ 2.43968593e+92] [ 1.64419076e+92] [ 2.76178542e+92] [ 3.73547302e+92] [ 3.91583424e+92] [ 6.28948348e+92] [ 5.38948123e+92] [ 7.48904478e+92] [ 6.28130041e+92] [ 1.05831445e+93] [ 9.44275409e+92] [ 6.94529207e+92] [ 9.66503782e+92] [ 9.97171772e+92] [ 1.14264127e+93] [ 1.45681157e+93] [ 1.01876674e+93] [ 1.24527102e+93] [ 1.21570162e+93] [ 1.63366747e+93] [ 1.67804453e+93] [ 1.66721778e+93] [ 1.73812247e+93] [ 1.57936737e+93] [ 1.40973693e+93] [ 1.80578368e+93] [ 1.99912781e+93] [ 1.73020904e+93] [ 2.08242568e+93] [ 1.78596567e+93] [ 2.00511295e+93] [ 1.83190199e+93] [ 2.07987879e+93] [ 1.85747876e+93] [ 2.10340575e+93] [ 2.02015406e+93] [ 2.35771170e+93] [ 2.07669048e+93] [ 2.10100899e+93] [ 2.00880194e+93] [ 2.18884189e+93] [ 2.31849842e+93] [ 2.11093429e+93] [ 2.18835200e+93] [ 2.29378266e+93] [ 2.17977923e+93] [ 2.22267778e+93] [ 2.02040497e+93] [ 2.28767953e+93]] [[ -3.57411319e+92] [ -1.62616100e+93] [ -1.09592750e+93] [ -1.84085488e+93] [ -2.48986170e+93] [ -2.61008061e+93] [ -4.19222518e+93] [ -3.59233297e+93] [ -4.99178703e+93] [ -4.18677078e+93] [ -7.05414442e+93] [ -6.29402264e+93] [ -4.62935126e+93] [ -6.44218479e+93] [ -6.64660082e+93] [ -7.61622083e+93] [ -9.71030793e+93] [ -6.79054103e+93] [ -8.30029452e+93] [ -8.10320111e+93] [ -1.08891325e+94] [ -1.11849256e+94] [ -1.11127605e+94] [ -1.15853723e+94] [ -1.05271978e+94] [ -9.39653415e+93] [ -1.20363649e+94] [ -1.33250909e+94] [ -1.15326258e+94] [ -1.38803089e+94] [ -1.19042689e+94] [ -1.33649846e+94] [ -1.22104552e+94] [ -1.38633327e+94] [ -1.23809359e+94] [ -1.40201506e+94] [ -1.34652404e+94] [ -1.57152147e+94] [ -1.38420812e+94] [ -1.40041750e+94] [ -1.33895734e+94] [ -1.45896211e+94] [ -1.54538405e+94] [ -1.40703317e+94] [ -1.45863558e+94] [ -1.52890987e+94] [ -1.45292143e+94] [ -1.48151526e+94] [ -1.34669128e+94] [ -1.52484186e+94]] [[ 2.38230808e+93] [ 1.08390985e+94] [ 7.30485238e+93] [ 1.22701303e+94] [ 1.65960542e+94] [ 1.73973676e+94] [ 2.79430767e+94] [ 2.39445239e+94] [ 3.32725180e+94] [ 2.79067207e+94] [ 4.70190627e+94] [ 4.19525073e+94] [ 3.08567197e+94] [ 4.29400751e+94] [ 4.43026003e+94] [ 5.07655562e+94] [ 6.47235938e+94] [ 4.52620269e+94] [ 5.53252167e+94] [ 5.40115000e+94] [ 7.25809926e+94] [ 7.45525877e+94] [ 7.40715744e+94] [ 
7.72217456e+94] [ 7.01685339e+94] [ 6.26321496e+94] [ 8.02278152e+94] [ 8.88177572e+94] [ 7.68701661e+94] [ 9.25185359e+94] [ 7.93473357e+94] [ 8.90836666e+94] [ 8.13882060e+94] [ 9.24053821e+94] [ 8.25245372e+94] [ 9.34506441e+94] [ 8.97519168e+94] [ 1.04749013e+95] [ 9.22637313e+94] [ 9.33441599e+94] [ 8.92475618e+94] [ 9.72464225e+94] [ 1.03006836e+95] [ 9.37851239e+94] [ 9.72246576e+94] [ 1.01908757e+95] [ 9.68437843e+94] [ 9.87496919e+94] [ 8.97630641e+94] [ 1.01637606e+95]] [[ -1.58791607e+94] [ -7.22474933e+94] [ -4.86901446e+94] [ -8.17859673e+94] [ -1.10620207e+95] [ -1.15961323e+95] [ -1.86253243e+95] [ -1.59601080e+95] [ -2.21776380e+95] [ -1.86010914e+95] [ -3.13403317e+95] [ -2.79632434e+95] [ -2.05673991e+95] [ -2.86215019e+95] [ -2.95296866e+95] [ -3.38375390e+95] [ -4.31412023e+95] [ -3.01691878e+95] [ -3.68767589e+95] [ -3.60011073e+95] [ -4.83785139e+95] [ -4.96926712e+95] [ -4.93720541e+95] [ -5.14717857e+95] [ -4.67705011e+95] [ -4.17471602e+95] [ -5.34754670e+95] [ -5.92010518e+95] [ -5.12374421e+95] [ -6.16677882e+95] [ -5.28885876e+95] [ -5.93782925e+95] [ -5.42489200e+95] [ -6.15923661e+95] [ -5.50063361e+95] [ -6.22890805e+95] [ -5.98237115e+95] [ -6.98199543e+95] [ -6.14979494e+95] [ -6.22181040e+95] [ -5.94875361e+95] [ -6.48191385e+95] [ -6.86587144e+95] [ -6.25120264e+95] [ -6.48046312e+95] [ -6.79267955e+95] [ -6.45507619e+95] [ -6.58211355e+95] [ -5.98311416e+95] [ -6.77460608e+95]] [[ 1.05841787e+95] [ 4.81562216e+95] [ 3.24541833e+95] [ 5.45140458e+95] [ 7.37333705e+95] [ 7.72934660e+95] [ 1.24146210e+96] [ 1.06381338e+96] [ 1.47823987e+96] [ 1.23984686e+96] [ 2.08897484e+96] [ 1.86387663e+96] [ 1.37091017e+96] [ 1.90775254e+96] [ 1.96828716e+96] [ 2.25542500e+96] [ 2.87555623e+96] [ 2.01091281e+96] [ 2.45800275e+96] [ 2.39963661e+96] [ 3.22464674e+96] [ 3.31224127e+96] [ 3.29087069e+96] [ 3.43082729e+96] [ 3.11746541e+96] [ 2.78263702e+96] [ 3.56438171e+96] [ 3.94601783e+96] [ 3.41520723e+96] [ 4.11043697e+96] [ 3.52526355e+96] [ 3.95783173e+96] [ 3.61593585e+96] [ 4.10540974e+96] [ 3.66642106e+96] [ 4.15184891e+96] [ 3.98752092e+96] [ 4.65381571e+96] [ 4.09911644e+96] [ 4.14711800e+96] [ 3.96511330e+96] [ 4.32048871e+96] [ 4.57641381e+96] [ 4.16670926e+96] [ 4.31952174e+96] [ 4.52762811e+96] [ 4.30260020e+96] [ 4.38727635e+96] [ 3.98801617e+96] [ 4.51558133e+96]] [[ -7.05483378e+95] [ -3.20982995e+96] [ -2.16321809e+96] [ -3.63360769e+96] [ -4.91466260e+96] [ -5.15195906e+96] [ -8.27490630e+96] [ -7.09079725e+96] [ -9.85313723e+96] [ -8.26414004e+96] [ -1.39239620e+97] [ -1.24235807e+97] [ -9.13773624e+96] [ -1.27160334e+97] [ -1.31195241e+97] [ -1.50334276e+97] [ -1.91668827e+97] [ -1.34036433e+97] [ -1.63837000e+97] [ -1.59946632e+97] [ -2.14937288e+97] [ -2.20775860e+97] [ -2.19351414e+97] [ -2.28680154e+97] [ -2.07793168e+97] [ -1.85475342e+97] [ -2.37582161e+97] [ -2.63019934e+97] [ -2.27639007e+97] [ -2.73979213e+97] [ -2.34974758e+97] [ -2.63807383e+97] [ -2.41018477e+97] [ -2.73644126e+97] [ -2.44383545e+97] [ -2.76739506e+97] [ -2.65786302e+97] [ -3.10197863e+97] [ -2.73224649e+97] [ -2.76424170e+97] [ -2.64292733e+97] [ -2.87980112e+97] [ -3.05038677e+97] [ -2.77730016e+97] [ -2.87915658e+97] [ -3.01786889e+97] [ -2.86787761e+97] [ -2.92431810e+97] [ -2.65819313e+97] [ -3.00983917e+97]] [[ 4.70236576e+96] [ 2.13949682e+97] [ 1.44188269e+97] [ 2.42196385e+97] [ 3.27584488e+97] [ 3.43401370e+97] [ 5.51559927e+97] [ 4.72633704e+97] [ 6.56756156e+97] [ 5.50842307e+97] [ 9.28095038e+97] [ 8.28087835e+97] [ 6.09071444e+97] [ 8.47581136e+97] [ 
8.74475609e+97] [ 1.00204594e+98] [ 1.27755941e+98] [ 8.93413440e+97] [ 1.09204770e+98] [ 1.06611664e+98] [ 1.43265423e+98] [ 1.47157095e+98] [ 1.46207637e+98] [ 1.52425665e+98] [ 1.38503544e+98] [ 1.23627703e+98] [ 1.58359255e+98] [ 1.75314681e+98] [ 1.51731693e+98] [ 1.82619536e+98] [ 1.56621303e+98] [ 1.75839551e+98] [ 1.60649715e+98] [ 1.82396185e+98] [ 1.62892685e+98] [ 1.84459396e+98] [ 1.77158590e+98] [ 2.06760904e+98] [ 1.82116585e+98] [ 1.84249210e+98] [ 1.76163059e+98] [ 1.91951768e+98] [ 2.03322073e+98] [ 1.85119616e+98] [ 1.91908807e+98] [ 2.01154610e+98] [ 1.91157013e+98] [ 1.94919026e+98] [ 1.77180593e+98] [ 2.00619392e+98]] [[ -3.13433944e+97] [ -1.42607139e+98] [ -9.61080017e+97] [ -1.61434844e+98] [ -2.18349876e+98] [ -2.28892543e+98] [ -3.67639635e+98] [ -3.15031738e+98] [ -4.37757679e+98] [ -3.67161309e+98] [ -6.18617315e+98] [ -5.51957991e+98] [ -4.05973662e+98] [ -5.64951158e+98] [ -5.82877541e+98] [ -6.67908934e+98] [ -8.51551125e+98] [ -5.95500461e+98] [ -7.27899175e+98] [ -7.10614954e+98] [ -9.54928834e+98] [ -9.80868586e+98] [ -9.74540023e+98] [ -1.01598599e+99] [ -9.23188756e+98] [ -8.24034550e+98] [ -1.05553605e+99] [ -1.16855163e+99] [ -1.01136035e+99] [ -1.21724179e+99] [ -1.04395181e+99] [ -1.17205013e+99] [ -1.07080300e+99] [ -1.21575306e+99] [ -1.08575341e+99] [ -1.22950529e+99] [ -1.18084212e+99] [ -1.37815493e+99] [ -1.21388940e+99] [ -1.22810431e+99] [ -1.17420646e+99] [ -1.27944534e+99] [ -1.35523357e+99] [ -1.23390596e+99] [ -1.27915899e+99] [ -1.34078644e+99] [ -1.27414794e+99] [ -1.29922346e+99] [ -1.18098878e+99] [ -1.33721898e+99]] [[ 2.08917898e+98] [ 9.50541073e+98] [ 6.40603295e+98] [ 1.07603624e+99] [ 1.45540067e+99] [ 1.52567232e+99] [ 2.45048443e+99] [ 2.09982900e+99] [ 2.91785290e+99] [ 2.44729617e+99] [ 4.12336417e+99] [ 3.67904963e+99] [ 2.70599806e+99] [ 3.76565496e+99] [ 3.88514241e+99] [ 4.45191510e+99] [ 5.67597335e+99] [ 3.96927988e+99] [ 4.85177719e+99] [ 4.73657003e+99] [ 6.36503253e+99] [ 6.53793271e+99] [ 6.49574997e+99] [ 6.77200608e+99] [ 6.15347056e+99] [ 5.49256294e+99] [ 7.03562511e+99] [ 7.78892509e+99] [ 6.74117411e+99] [ 8.11346704e+99] [ 6.95841094e+99] [ 7.81224417e+99] [ 7.13738625e+99] [ 8.10354395e+99] [ 7.23703749e+99] [ 8.19520881e+99] [ 7.87084676e+99] [ 9.18602587e+99] [ 8.09112180e+99] [ 8.18587062e+99] [ 7.82661706e+99] [ 8.52808182e+99] [ 9.03324461e+99] [ 8.22454121e+99] [ 8.52617313e+99] [ 8.93694800e+99] [ 8.49277222e+99] [ 8.65991191e+99] [ 7.87182433e+99] [ 8.91316922e+99]] [[ -1.39253227e+099] [ -6.33578612e+099] [ -4.26991067e+099] [ -7.17226816e+099] [ -9.70090364e+099] [ -1.01692959e+100] [ -1.63335869e+100] [ -1.39963099e+100] [ -1.94488091e+100] [ -1.63123357e+100] [ -2.74840869e+100] [ -2.45225296e+100] [ -1.80367008e+100] [ -2.50997933e+100] [ -2.58962312e+100] [ -2.96740275e+100] [ -3.78329293e+100] [ -2.64570455e+100] [ -3.23392891e+100] [ -3.15713813e+100] [ -4.24258203e+100] [ -4.35782782e+100] [ -4.32971112e+100] [ -4.51384831e+100] [ -4.10156641e+100] [ -3.66104159e+100] [ -4.68956231e+100] [ -5.19167081e+100] [ -4.49329741e+100] [ -5.40799270e+100] [ -4.63809558e+100] [ -5.20721404e+100] [ -4.75739072e+100] [ -5.40137852e+100] [ -4.82381277e+100] [ -5.46247729e+100] [ -5.24627532e+100] [ -6.12290167e+100] [ -5.39309859e+100] [ -5.45625296e+100] [ -5.21679422e+100] [ -5.68435220e+100] [ -6.02106605e+100] [ -5.48202866e+100] [ -5.68307998e+100] [ -5.95688000e+100] [ -5.66081676e+100] [ -5.77222292e+100] [ -5.24692691e+100] [ -5.94103037e+100]] [[ 9.28185735e+099] [ 4.22308798e+100] [ 
2.84608856e+100] [ 4.78064109e+100] [ 6.46609099e+100] [ 6.77829564e+100] [ 1.08870743e+101] [ 9.32917353e+100] [ 1.29635108e+101] [ 1.08729095e+101] [ 1.83193869e+101] [ 1.63453750e+101] [ 1.20222768e+101] [ 1.67301473e+101] [ 1.72610092e+101] [ 1.97790813e+101] [ 2.52173583e+101] [ 1.76348173e+101] [ 2.15555987e+101] [ 2.10437534e+101] [ 2.82787279e+101] [ 2.90468932e+101] [ 2.88594827e+101] [ 3.00868403e+101] [ 2.73387950e+101] [ 2.44024979e+101] [ 3.12580536e+101] [ 3.46048339e+101] [ 2.99498593e+101] [ 3.60467171e+101] [ 3.09150046e+101] [ 3.47084365e+101] [ 3.17101606e+101] [ 3.60026305e+101] [ 3.21528936e+101] [ 3.64098815e+101] [ 3.49687976e+101] [ 4.08119086e+101] [ 3.59474411e+101] [ 3.63683935e+101] [ 3.47722927e+101] [ 3.78887781e+101] [ 4.01331282e+101] [ 3.65402002e+101] [ 3.78802982e+101] [ 3.97052992e+101] [ 3.77319037e+101] [ 3.84744762e+101] [ 3.49731407e+101] [ 3.95996542e+101]] [[ -6.18677769e+100] [ -2.81487912e+101] [ -1.89704674e+101] [ -3.18651348e+101] [ -4.30994207e+101] [ -4.51804059e+101] [ -7.25672741e+101] [ -6.21831606e+101] [ -8.64076624e+101] [ -7.24728588e+101] [ -1.22106998e+102] [ -1.08949316e+102] [ -8.01339116e+101] [ -1.11513998e+102] [ -1.15052432e+102] [ -1.31836522e+102] [ -1.68085097e+102] [ -1.17544032e+102] [ -1.43677814e+102] [ -1.40266133e+102] [ -1.88490510e+102] [ -1.93610679e+102] [ -1.92361503e+102] [ -2.00542397e+102] [ -1.82225432e+102] [ -1.62653684e+102] [ -2.08349063e+102] [ -2.30656867e+102] [ -1.99629356e+102] [ -2.40267671e+102] [ -2.06062486e+102] [ -2.31347426e+102] [ -2.11362561e+102] [ -2.39973814e+102] [ -2.14313577e+102] [ -2.42688326e+102] [ -2.33082850e+102] [ -2.72029828e+102] [ -2.39605951e+102] [ -2.42411790e+102] [ -2.31773056e+102] [ -2.52545841e+102] [ -2.67505449e+102] [ -2.43556959e+102] [ -2.52489318e+102] [ -2.64653775e+102] [ -2.51500202e+102] [ -2.56449783e+102] [ -2.33111799e+102] [ -2.63949604e+102]] [[ 4.12376712e+101] [ 1.87624423e+102] [ 1.26446744e+102] [ 2.12395534e+102] [ 2.87277130e+102] [ 3.01147837e+102] [ 4.83693699e+102] [ 4.14478887e+102] [ 5.75946147e+102] [ 4.83064379e+102] [ 8.13898364e+102] [ 7.26196463e+102] [ 5.34128760e+102] [ 7.43291227e+102] [ 7.66876493e+102] [ 8.78750038e+102] [ 1.12036319e+103] [ 7.83484134e+102] [ 9.57677604e+102] [ 9.34937213e+102] [ 1.25637449e+103] [ 1.29050274e+103] [ 1.28217641e+103] [ 1.33670577e+103] [ 1.21461492e+103] [ 1.08416036e+103] [ 1.38874073e+103] [ 1.53743233e+103] [ 1.33061994e+103] [ 1.60149269e+103] [ 1.37349966e+103] [ 1.54203522e+103] [ 1.40882706e+103] [ 1.59953400e+103] [ 1.42849691e+103] [ 1.61762745e+103] [ 1.55360260e+103] [ 1.81320182e+103] [ 1.59708203e+103] [ 1.61578421e+103] [ 1.54487224e+103] [ 1.68333224e+103] [ 1.78304479e+103] [ 1.62341728e+103] [ 1.68295549e+103] [ 1.76403710e+103] [ 1.67636259e+103] [ 1.70935378e+103] [ 1.55379556e+103] [ 1.75934348e+103]] [[ -2.74867728e+102] [ -1.25060163e+103] [ -8.42824734e+102] [ -1.41571229e+103] [ -1.91483199e+103] [ -2.00728653e+103] [ -3.22403725e+103] [ -2.76268923e+103] [ -3.83894153e+103] [ -3.21984254e+103] [ -5.42500067e+103] [ -4.84042784e+103] [ -3.56020974e+103] [ -4.95437217e+103] [ -5.11157863e+103] [ -5.85726640e+103] [ -7.46772733e+103] [ -5.22227607e+103] [ -6.38335433e+103] [ -6.23177934e+103] [ -8.37430418e+103] [ -8.60178435e+103] [ -8.54628564e+103] [ -8.90974845e+103] [ -8.09595771e+103] [ -7.22641911e+103] [ -9.25658501e+103] [ -1.02476818e+104] [ -8.86918366e+103] [ -1.06746730e+104] [ -9.15499639e+103] [ -1.02783621e+104] [ -9.39046945e+103] [ -1.06616175e+104] [ 
-9.52157793e+103] [ -1.07822185e+104] [ -1.03554640e+104] [ -1.20858102e+104] [ -1.06452740e+104] [ -1.07699325e+104] [ -1.02972721e+104] [ -1.12201706e+104] [ -1.18847999e+104] [ -1.08208103e+104] [ -1.12176594e+104] [ -1.17581050e+104] [ -1.11737147e+104] [ -1.13936160e+104] [ -1.03567501e+104] [ -1.17268199e+104]] [[ 1.83211771e+103] [ 8.33582545e+103] [ 5.61780801e+103] [ 9.43636265e+103] [ 1.27632212e+104] [ 1.33794725e+104] [ 2.14896663e+104] [ 1.84145731e+104] [ 2.55882814e+104] [ 2.14617067e+104] [ 3.61600829e+104] [ 3.22636406e+104] [ 2.37304080e+104] [ 3.30231311e+104] [ 3.40709833e+104] [ 3.90413295e+104] [ 4.97757798e+104] [ 3.48088317e+104] [ 4.25479434e+104] [ 4.15376275e+104] [ 5.58185245e+104] [ 5.73347828e+104] [ 5.69648588e+104] [ 5.93875028e+104] [ 5.39632194e+104] [ 4.81673514e+104] [ 6.16993253e+104] [ 6.83054334e+104] [ 5.91171201e+104] [ 7.11515231e+104] [ 6.10221912e+104] [ 6.85099315e+104] [ 6.25917256e+104] [ 7.10645019e+104] [ 6.34656229e+104] [ 7.18683623e+104] [ 6.90238504e+104] [ 8.05573903e+104] [ 7.09555651e+104] [ 7.17864704e+104] [ 6.86359754e+104] [ 7.47875115e+104] [ 7.92175662e+104] [ 7.21255945e+104] [ 7.47707732e+104] [ 7.83730874e+104] [ 7.44778620e+104] [ 7.59436032e+104] [ 6.90324232e+104] [ 7.81645580e+104]] [[ -1.22118931e+104] [ -5.55620464e+104] [ -3.74452310e+104] [ -6.28976245e+104] [ -8.50726411e+104] [ -8.91802347e+104] [ -1.43238344e+105] [ -1.22741457e+105] [ -1.70557468e+105] [ -1.43051981e+105] [ -2.41023305e+105] [ -2.15051755e+105] [ -1.58173901e+105] [ -2.20114103e+105] [ -2.27098512e+105] [ -2.60228117e+105] [ -3.31778083e+105] [ -2.32016605e+105] [ -2.83601285e+105] [ -2.76867072e+105] [ -3.72055709e+105] [ -3.82162256e+105] [ -3.79696545e+105] [ -3.95844563e+105] [ -3.59689262e+105] [ -3.21057180e+105] [ -4.11253906e+105] [ -4.55286604e+105] [ -3.94042340e+105] [ -4.74257079e+105] [ -4.06740501e+105] [ -4.56649676e+105] [ -4.17202157e+105] [ -4.73677043e+105] [ -4.23027078e+105] [ -4.79035136e+105] [ -4.60075178e+105] [ -5.36951438e+105] [ -4.72950930e+105] [ -4.78489290e+105] [ -4.57489816e+105] [ -4.98492586e+105] [ -5.28020904e+105] [ -4.80749705e+105] [ -4.98381018e+105] [ -5.22392071e+105] [ -4.96428632e+105] [ -5.06198460e+105] [ -4.60132319e+105] [ -5.21002128e+105]] [[ 8.13977901e+104] [ 3.70346167e+105] [ 2.49589399e+105] [ 4.19241112e+105] [ 5.67047626e+105] [ 5.94426595e+105] [ 9.54748343e+105] [ 8.18127321e+105] [ 1.13684265e+106] [ 9.53506146e+105] [ 1.60652933e+106] [ 1.43341720e+106] [ 1.05430058e+106] [ 1.46716004e+106] [ 1.51371428e+106] [ 1.73453808e+106] [ 2.21145097e+106] [ 1.54649560e+106] [ 1.89033082e+106] [ 1.84544425e+106] [ 2.47991956e+106] [ 2.54728426e+106] [ 2.53084919e+106] [ 2.63848303e+106] [ 2.39749159e+106] [ 2.13999130e+106] [ 2.74119327e+106] [ 3.03469111e+106] [ 2.62647040e+106] [ 3.16113791e+106] [ 2.71110938e+106] [ 3.04377661e+106] [ 2.78084105e+106] [ 3.15727171e+106] [ 2.81966679e+106] [ 3.19298582e+106] [ 3.06660912e+106] [ 3.57902416e+106] [ 3.15243184e+106] [ 3.18934751e+106] [ 3.04937652e+106] [ 3.32267852e+106] [ 3.51949812e+106] [ 3.20441420e+106] [ 3.32193487e+106] [ 3.48197940e+106] [ 3.30892133e+106] [ 3.37404166e+106] [ 3.06699000e+106] [ 3.47271480e+106]] [[ -5.42553082e+105] [ -2.46852468e+106] [ -1.66362622e+106] [ -2.79443161e+106] [ -3.77962887e+106] [ -3.96212208e+106] [ -6.36382948e+106] [ -5.45318859e+106] [ -7.57757041e+106] [ -6.35554967e+106] [ -1.07082445e+107] [ -9.55437390e+106] [ -7.02739018e+106] [ -9.77928518e+106] [ -1.00895902e+107] [ -1.15614808e+107] [ 
-1.47403208e+107] [ -1.03080926e+107] [ -1.25999098e+107] [ -1.23007205e+107] [ -1.65297854e+107] [ -1.69788016e+107] [ -1.68692544e+107] [ -1.75866826e+107] [ -1.59803657e+107] [ -1.42640098e+107] [ -1.82712928e+107] [ -2.02275887e+107] [ -1.75066130e+107] [ -2.10704137e+107] [ -1.80707701e+107] [ -2.02881476e+107] [ -1.85355632e+107] [ -2.10446438e+107] [ -1.87943543e+107] [ -2.12826945e+107] [ -2.04403366e+107] [ -2.38558146e+107] [ -2.10123838e+107] [ -2.12584435e+107] [ -2.03254735e+107] [ -2.21471550e+107] [ -2.34590466e+107] [ -2.13588698e+107] [ -2.21421982e+107] [ -2.32089674e+107] [ -2.20554571e+107] [ -2.24895135e+107] [ -2.04428753e+107] [ -2.31472146e+107]] [[ 3.61636166e+106] [ 1.64538334e+107] [ 1.10888211e+107] [ 1.86261504e+107] [ 2.51929357e+107] [ 2.64093355e+107] [ 4.24178014e+107] [ 3.63479681e+107] [ 5.05079337e+107] [ 4.23626128e+107] [ 7.13752923e+107] [ 6.36842229e+107] [ 4.68407336e+107] [ 6.51833583e+107] [ 6.72516819e+107] [ 7.70624977e+107] [ 9.82509042e+107] [ 6.87080987e+107] [ 8.39840968e+107] [ 8.19898649e+107] [ 1.10178495e+108] [ 1.13171391e+108] [ 1.12441209e+108] [ 1.17223193e+108] [ 1.06516364e+108] [ 9.50760761e+107] [ 1.21786429e+108] [ 1.34826026e+108] [ 1.16689493e+108] [ 1.40443836e+108] [ 1.20449855e+108] [ 1.35229679e+108] [ 1.23547911e+108] [ 1.40272068e+108] [ 1.25272871e+108] [ 1.41858784e+108] [ 1.36244088e+108] [ 1.59009793e+108] [ 1.40057041e+108] [ 1.41697140e+108] [ 1.35478473e+108] [ 1.47620804e+108] [ 1.56365155e+108] [ 1.42366526e+108] [ 1.47587765e+108] [ 1.54698263e+108] [ 1.47009596e+108] [ 1.49902778e+108] [ 1.36261009e+108] [ 1.54286654e+108]] [[ -2.41046859e+107] [ -1.09672241e+108] [ -7.39120073e+107] [ -1.24151716e+108] [ -1.67922310e+108] [ -1.76030164e+108] [ -2.82733830e+108] [ -2.42275645e+108] [ -3.36658220e+108] [ -2.82365972e+108] [ -4.75748602e+108] [ -4.24484146e+108] [ -3.12214672e+108] [ -4.34476561e+108] [ -4.48262874e+108] [ -5.13656398e+108] [ -6.54886709e+108] [ -4.57970550e+108] [ -5.59791986e+108] [ -5.46499528e+108] [ -7.34389496e+108] [ -7.54338502e+108] [ -7.49471511e+108] [ -7.81345594e+108] [ -7.09979739e+108] [ -6.33725044e+108] [ -8.11761629e+108] [ -8.98676438e+108] [ -7.77788241e+108] [ -9.36121683e+108] [ -8.02852755e+108] [ -9.01366965e+108] [ -8.23502702e+108] [ -9.34976769e+108] [ -8.35000337e+108] [ -9.45552947e+108] [ -9.08128459e+108] [ -1.05987218e+109] [ -9.33543517e+108] [ -9.44475517e+108] [ -9.03025291e+108] [ -9.83959417e+108] [ -1.04224447e+109] [ -9.48937282e+108] [ -9.83739196e+108] [ -1.03113389e+109] [ -9.79885441e+108] [ -9.99169808e+108] [ -9.08241249e+108] [ -1.02839032e+109]] [[ 1.60668633e+108] [ 7.31015081e+108] [ 4.92656954e+108] [ 8.27527333e+108] [ 1.11927813e+109] [ 1.17332065e+109] [ 1.88454884e+109] [ 1.61487674e+109] [ 2.24397928e+109] [ 1.88209690e+109] [ 3.17107958e+109] [ 2.82937880e+109] [ 2.08105199e+109] [ 2.89598277e+109] [ 2.98787477e+109] [ 3.42375219e+109] [ 4.36511608e+109] [ 3.05258083e+109] [ 3.73126674e+109] [ 3.64266650e+109] [ 4.89503811e+109] [ 5.02800726e+109] [ 4.99556656e+109] [ 5.20802174e+109] [ 4.73233604e+109] [ 4.22406401e+109] [ 5.41075836e+109] [ 5.99008488e+109] [ 5.18431037e+109] [ 6.23967437e+109] [ 5.35137669e+109] [ 6.00801845e+109] [ 5.48901793e+109] [ 6.23204300e+109] [ 5.56565487e+109] [ 6.30253801e+109] [ 6.05308687e+109] [ 7.06452739e+109] [ 6.22248972e+109] [ 6.29535645e+109] [ 6.01907194e+109] [ 6.55853450e+109] [ 6.94703073e+109] [ 6.32509613e+109] [ 6.55706662e+109] [ 6.87297367e+109] [ 6.53137960e+109] [ 6.65991864e+109] [ 
6.05383866e+109] [ 6.85468656e+109]] [[ -1.07092910e+109] [ -4.87254611e+109] [ -3.28378140e+109] [ -5.51584392e+109] [ -7.46049494e+109] [ -7.82071277e+109] [ -1.25613703e+110] [ -1.07638838e+110] [ -1.49571367e+110] [ -1.25450270e+110] [ -2.11366795e+110] [ -1.88590893e+110] [ -1.38711526e+110] [ -1.93030348e+110] [ -1.99155366e+110] [ -2.28208567e+110] [ -2.90954727e+110] [ -2.03468317e+110] [ -2.48705803e+110] [ -2.42800196e+110] [ -3.26276427e+110] [ -3.35139422e+110] [ -3.32977103e+110] [ -3.47138202e+110] [ -3.15431599e+110] [ -2.81552969e+110] [ -3.60651514e+110] [ -3.99266246e+110] [ -3.45557731e+110] [ -4.15902515e+110] [ -3.56693457e+110] [ -4.00461601e+110] [ -3.65867868e+110] [ -4.15393850e+110] [ -3.70976067e+110] [ -4.20092661e+110] [ -4.03465614e+110] [ -4.70882699e+110] [ -4.14757081e+110] [ -4.19613977e+110] [ -4.01198366e+110] [ -4.37155985e+110] [ -4.63051015e+110] [ -4.21596262e+110] [ -4.37058144e+110] [ -4.58114777e+110] [ -4.35345988e+110] [ -4.43913696e+110] [ -4.03515725e+110] [ -4.56895859e+110]] [[ 7.13822674e+109] [ 3.24777234e+110] [ 2.18878881e+110] [ 3.67655942e+110] [ 4.97275727e+110] [ 5.21285874e+110] [ 8.37272135e+110] [ 7.17461533e+110] [ 9.96960805e+110] [ 8.36182783e+110] [ 1.40885528e+111] [ 1.25704359e+111] [ 9.24575053e+110] [ 1.28663457e+111] [ 1.32746058e+111] [ 1.52111330e+111] [ 1.93934484e+111] [ 1.35620835e+111] [ 1.65773665e+111] [ 1.61837311e+111] [ 2.17477994e+111] [ 2.23385581e+111] [ 2.21944298e+111] [ 2.31383310e+111] [ 2.10249425e+111] [ 1.87667787e+111] [ 2.40390544e+111] [ 2.66129010e+111] [ 2.30329855e+111] [ 2.77217835e+111] [ 2.37752320e+111] [ 2.66925767e+111] [ 2.43867480e+111] [ 2.76878786e+111] [ 2.47272325e+111] [ 2.80010756e+111] [ 2.68928078e+111] [ 3.13864614e+111] [ 2.76454351e+111] [ 2.79691692e+111] [ 2.67416854e+111] [ 2.91384234e+111] [ 3.08644443e+111] [ 2.81012975e+111] [ 2.91319018e+111] [ 3.05354217e+111] [ 2.90177789e+111] [ 2.95888554e+111] [ 2.68961479e+111] [ 3.04541753e+111]] [[ -4.75795094e+110] [ -2.16478714e+111] [ -1.45892673e+111] [ -2.45059312e+111] [ -3.31456761e+111] [ -3.47460610e+111] [ -5.58079743e+111] [ -4.78220558e+111] [ -6.64519463e+111] [ -5.57353640e+111] [ -9.39065756e+111] [ -8.37876399e+111] [ -6.16271086e+111] [ -8.57600125e+111] [ -8.84812509e+111] [ -1.01389081e+112] [ -1.29266104e+112] [ -9.03974198e+111] [ -1.10495645e+112] [ -1.07871887e+112] [ -1.44958918e+112] [ -1.48896592e+112] [ -1.47935911e+112] [ -1.54227440e+112] [ -1.40140750e+112] [ -1.25089067e+112] [ -1.60231169e+112] [ -1.77387020e+112] [ -1.53525265e+112] [ -1.84778224e+112] [ -1.58472674e+112] [ -1.77918095e+112] [ -1.62548704e+112] [ -1.84552233e+112] [ -1.64818187e+112] [ -1.86639832e+112] [ -1.79252726e+112] [ -2.09204959e+112] [ -1.84269327e+112] [ -1.86427162e+112] [ -1.78245427e+112] [ -1.94220769e+112] [ -2.05725479e+112] [ -1.87307856e+112] [ -1.94177300e+112] [ -2.03532395e+112] [ -1.93416619e+112] [ -1.97223102e+112] [ -1.79274989e+112] [ -2.02990851e+112]] [[ 3.17138947e+111] [ 1.44292853e+112] [ 9.72440641e+111] [ 1.63343114e+112] [ 2.20930921e+112] [ 2.31598210e+112] [ 3.71985387e+112] [ 3.18755629e+112] [ 4.42932274e+112] [ 3.71501407e+112] [ 6.25929795e+112] [ 5.58482512e+112] [ 4.10772549e+112] [ 5.71629267e+112] [ 5.89767552e+112] [ 6.75804075e+112] [ 8.61617042e+112] [ 6.02539684e+112] [ 7.36503442e+112] [ 7.19014908e+112] [ 9.66216746e+112] [ 9.92463124e+112] [ 9.86059752e+112] [ 1.02799564e+113] [ 9.34101478e+112] [ 8.33775202e+112] [ 1.06801321e+113] [ 1.18236471e+113] [ 1.02331532e+113] [ 
1.23163042e+113] [ 1.05629204e+113] [ 1.18590457e+113] [ 1.08346062e+113] [ 1.23012409e+113] [ 1.09858776e+113] [ 1.24403889e+113] [ 1.19480048e+113] [ 1.39444567e+113] [ 1.22823840e+113] [ 1.24262134e+113] [ 1.18808638e+113] [ 1.29456926e+113] [ 1.37125336e+113] [ 1.24849157e+113] [ 1.29427952e+113] [ 1.35663546e+113] [ 1.28920924e+113] [ 1.31458117e+113] [ 1.19494888e+113] [ 1.35302582e+113]] [[ -2.11387451e+112] [ -9.61777118e+112] [ -6.48175664e+112] [ -1.08875572e+113] [ -1.47260450e+113] [ -1.54370681e+113] [ -2.47945083e+113] [ -2.12465042e+113] [ -2.95234392e+113] [ -2.47622488e+113] [ -4.17210515e+113] [ -3.72253851e+113] [ -2.73798480e+113] [ -3.81016758e+113] [ -3.93106745e+113] [ -4.50453978e+113] [ -5.74306724e+113] [ -4.01619949e+113] [ -4.90912851e+113] [ -4.79255953e+113] [ -6.44027157e+113] [ -6.61521555e+113] [ -6.57253418e+113] [ -6.85205582e+113] [ -6.22620879e+113] [ -5.55748879e+113] [ -7.11879101e+113] [ -7.88099551e+113] [ -6.82085940e+113] [ -8.20937377e+113] [ -7.04066412e+113] [ -7.90459024e+113] [ -7.22175504e+113] [ -8.19933338e+113] [ -7.32258422e+113] [ -8.29208179e+113] [ -7.96388556e+113] [ -9.29461098e+113] [ -8.18676439e+113] [ -8.28263321e+113] [ -7.91913302e+113] [ -8.62888958e+113] [ -9.14002375e+113] [ -8.32176092e+113] [ -8.62695833e+113] [ -9.04258884e+113] [ -8.59316260e+113] [ -8.76227799e+113] [ -7.96487467e+113] [ -9.01852897e+113]] [[ 1.40899296e+113] [ 6.41067945e+113] [ 4.32038393e+113] [ 7.25704928e+113] [ 9.81557496e+113] [ 1.02895040e+114] [ 1.65266611e+114] [ 1.41617559e+114] [ 1.96787074e+114] [ 1.65051587e+114] [ 2.78089676e+114] [ 2.48124026e+114] [ 1.82499069e+114] [ 2.53964900e+114] [ 2.62023423e+114] [ 3.00247948e+114] [ 3.82801404e+114] [ 2.67697859e+114] [ 3.27215616e+114] [ 3.19445766e+114] [ 4.29273226e+114] [ 4.40934033e+114] [ 4.38089127e+114] [ 4.56720508e+114] [ 4.15004973e+114] [ 3.70431761e+114] [ 4.74499615e+114] [ 5.25303992e+114] [ 4.54641126e+114] [ 5.47191888e+114] [ 4.69292104e+114] [ 5.26876687e+114] [ 4.81362633e+114] [ 5.46522650e+114] [ 4.88083354e+114] [ 5.52704751e+114] [ 5.30828988e+114] [ 6.19527856e+114] [ 5.45684870e+114] [ 5.52074960e+114] [ 5.27846029e+114] [ 5.75154513e+114] [ 6.09223917e+114] [ 5.54682999e+114] [ 5.75025787e+114] [ 6.02729440e+114] [ 5.72773149e+114] [ 5.84045454e+114] [ 5.30894917e+114] [ 6.01125741e+114]] [[ -9.39157525e+113] [ -4.27300777e+114] [ -2.87973128e+114] [ -4.83715154e+114] [ -6.54252461e+114] [ -6.85841973e+114] [ -1.10157670e+115] [ -9.43945075e+114] [ -1.31167484e+115] [ -1.10014347e+115] [ -1.85359345e+115] [ -1.65385885e+115] [ -1.21643883e+115] [ -1.69279091e+115] [ -1.74650461e+115] [ -2.00128835e+115] [ -2.55154447e+115] [ -1.78432729e+115] [ -2.18104006e+115] [ -2.12925050e+115] [ -2.86130018e+115] [ -2.93902474e+115] [ -2.92006215e+115] [ -3.04424873e+115] [ -2.76619583e+115] [ -2.46909521e+115] [ -3.16275452e+115] [ -3.50138867e+115] [ -3.03038871e+115] [ -3.64728139e+115] [ -3.12804411e+115] [ -3.51187139e+115] [ -3.20849964e+115] [ -3.64282062e+115] [ -3.25329628e+115] [ -3.68402712e+115] [ -3.53821527e+115] [ -4.12943333e+115] [ -3.63723644e+115] [ -3.67982928e+115] [ -3.51833250e+115] [ -3.83366494e+115] [ -4.06075292e+115] [ -3.69721304e+115] [ -3.83280692e+115] [ -4.01746430e+115] [ -3.81779207e+115] [ -3.89292708e+115] [ -3.53865471e+115] [ -4.00677492e+115]] [[ 6.25990963e+114] [ 2.84815292e+115] [ 1.91947113e+115] [ 3.22418025e+115] [ 4.36088853e+115] [ 4.57144692e+115] [ 7.34250689e+115] [ 6.29182081e+115] [ 8.74290600e+115] [ 7.33295375e+115] [ 
[printed array output elided: column vectors of values that alternate in sign and grow in magnitude from roughly 1e+116 up to 1e+175, i.e. the recursion diverges toward numerical overflow]
5.59975314e+175] [ 5.09843389e+175] [ 5.28541702e+175] [ 5.54005839e+175] [ 5.26471162e+175] [ 5.36832234e+175] [ 4.87978295e+175] [ 5.52531781e+175]] [[ -8.63237663e+174] [ -3.92758524e+175] [ -2.64693880e+175] [ -4.44612461e+175] [ -6.01363829e+175] [ -6.30399700e+175] [ -1.01252715e+176] [ -8.67638195e+175] [ -1.20564133e+176] [ -1.01120978e+176] [ -1.70375218e+176] [ -1.52016377e+176] [ -1.11810403e+176] [ -1.55594864e+176] [ -1.60532021e+176] [ -1.83950768e+176] [ -2.34528205e+176] [ -1.64008537e+176] [ -2.00472857e+176] [ -1.95712559e+176] [ -2.62999765e+176] [ -2.70143908e+176] [ -2.68400940e+176] [ -2.79815695e+176] [ -2.54258136e+176] [ -2.26949784e+176] [ -2.90708294e+176] [ -3.21834249e+176] [ -2.78541735e+176] [ -3.35244150e+176] [ -2.87517846e+176] [ -3.22797781e+176] [ -2.94913010e+176] [ -3.34834133e+176] [ -2.99030546e+176] [ -3.38621677e+176] [ -3.25219209e+176] [ -3.79561712e+176] [ -3.34320857e+176] [ -3.38235828e+176] [ -3.23391661e+176] [ -3.52375813e+176] [ -3.73248871e+176] [ -3.39833676e+176] [ -3.52296947e+176] [ -3.69269946e+176] [ -3.50916839e+176] [ -3.57822962e+176] [ -3.25259602e+176] [ -3.68287420e+176]] [[ 5.75386941e+175] [ 2.61791318e+176] [ 1.76430442e+176] [ 2.96354312e+176] [ 4.00836188e+176] [ 4.20189909e+176] [ 6.74895139e+176] [ 5.78320095e+176] [ 8.03614467e+176] [ 6.74017051e+176] [ 1.13562788e+177] [ 1.01325790e+177] [ 7.45266900e+176] [ 1.03711013e+177] [ 1.07001852e+177] [ 1.22611506e+177] [ 1.56323655e+177] [ 1.09319107e+177] [ 1.33624226e+177] [ 1.30451271e+177] [ 1.75301237e+177] [ 1.80063132e+177] [ 1.78901365e+177] [ 1.86509815e+177] [ 1.69474546e+177] [ 1.51272295e+177] [ 1.93770225e+177] [ 2.14517081e+177] [ 1.85660663e+177] [ 2.23455387e+177] [ 1.91643647e+177] [ 2.15159319e+177] [ 1.96572858e+177] [ 2.23182092e+177] [ 1.99317382e+177] [ 2.25706661e+177] [ 2.16773311e+177] [ 2.52995046e+177] [ 2.22839970e+177] [ 2.25449475e+177] [ 2.15555167e+177] [ 2.34874415e+177] [ 2.48787252e+177] [ 2.26514514e+177] [ 2.34821847e+177] [ 2.46135119e+177] [ 2.33901943e+177] [ 2.38505186e+177] [ 2.16800234e+177] [ 2.45480220e+177]] [[ -3.83521417e+176] [ -1.74495753e+177] [ -1.17598868e+177] [ -1.97533551e+177] [ -2.67175446e+177] [ -2.80075577e+177] [ -4.49848131e+177] [ -3.85476497e+177] [ -5.35645385e+177] [ -4.49262846e+177] [ -7.56947338e+177] [ -6.75382214e+177] [ -4.96754093e+177] [ -6.91280805e+177] [ -7.13215735e+177] [ -8.17261136e+177] [ -1.04196786e+178] [ -7.28661287e+177] [ -8.90665893e+177] [ -8.69516718e+177] [ -1.16846202e+178] [ -1.20020221e+178] [ -1.19245850e+178] [ -1.24317227e+178] [ -1.12962449e+178] [ -1.00829825e+178] [ -1.29156618e+178] [ -1.42985336e+178] [ -1.23751228e+178] [ -1.48943121e+178] [ -1.27739157e+178] [ -1.43413417e+178] [ -1.31024700e+178] [ -1.48760958e+178] [ -1.32854049e+178] [ -1.50443697e+178] [ -1.44489215e+178] [ -1.68632640e+178] [ -1.48532918e+178] [ -1.50272271e+178] [ -1.43677267e+178] [ -1.56554419e+178] [ -1.65827955e+178] [ -1.50982167e+178] [ -1.56519381e+178] [ -1.64060188e+178] [ -1.55906223e+178] [ -1.58974493e+178] [ -1.44507160e+178] [ -1.63623668e+178]] [[ 2.55634369e+177] [ 1.16309311e+178] [ 7.83849639e+177] [ 1.31665045e+178] [ 1.78084518e+178] [ 1.86683038e+178] [ 2.99844123e+178] [ 2.56937518e+178] [ 3.57031873e+178] [ 2.99454004e+178] [ 5.04539633e+178] [ 4.50172789e+178] [ 3.31109068e+178] [ 4.60769919e+178] [ 4.75390542e+178] [ 5.44741507e+178] [ 6.94518698e+178] [ 4.85685701e+178] [ 5.93669097e+178] [ 5.79572216e+178] [ 7.78832780e+178] [ 7.99989047e+178] [ 7.94827518e+178] [ 
8.28630536e+178] [ 7.52945811e+178] [ 6.72076386e+178] [ 8.60887267e+178] [ 9.53061928e+178] [ 8.24857901e+178] [ 9.92773259e+178] [ 8.51439252e+178] [ 9.55915279e+178] [ 8.73338879e+178] [ 9.91559058e+178] [ 8.85532320e+178] [ 1.00277528e+179] [ 9.63085960e+178] [ 1.12401280e+179] [ 9.90039069e+178] [ 1.00163264e+179] [ 9.57673962e+178] [ 1.04350601e+179] [ 1.10531831e+179] [ 1.00636442e+179] [ 1.04327246e+179] [ 1.09353535e+179] [ 1.03918548e+179] [ 1.05963689e+179] [ 9.63205576e+178] [ 1.09062575e+179]] [[ -1.70391868e+178] [ -7.75254155e+178] [ -5.22471233e+178] [ -8.77607070e+178] [ -1.18701385e+179] [ -1.24432687e+179] [ -1.99859668e+179] [ -1.71260475e+179] [ -2.37977890e+179] [ -1.99599636e+179] [ -3.36298482e+179] [ -3.00060523e+179] [ -2.20699168e+179] [ -3.07123988e+179] [ -3.16869295e+179] [ -3.63094850e+179] [ -4.62928121e+179] [ -3.23731484e+179] [ -3.95707301e+179] [ -3.86311092e+179] [ -5.19127270e+179] [ -5.33228879e+179] [ -5.29788486e+179] [ -5.52319726e+179] [ -5.01872434e+179] [ -4.47969305e+179] [ -5.73820296e+179] [ -6.35258876e+179] [ -5.49805094e+179] [ -6.61728274e+179] [ -5.67522767e+179] [ -6.37160763e+179] [ -5.82119859e+179] [ -6.60918954e+179] [ -5.90247339e+179] [ -6.68395072e+179] [ -6.41940347e+179] [ -7.49205366e+179] [ -6.59905813e+179] [ -6.67633455e+179] [ -6.38333005e+179] [ -6.95543944e+179] [ -7.36744642e+179] [ -6.70787400e+179] [ -6.95388273e+179] [ -7.28890762e+179] [ -6.92664120e+179] [ -7.06295907e+179] [ -6.42020076e+179] [ -7.26951383e+179]] [[ 1.13573886e+179] [ 5.16741954e+179] [ 3.48250705e+179] [ 5.84964801e+179] [ 7.91198411e+179] [ 8.29400135e+179] [ 1.33215507e+180] [ 1.14152852e+180] [ 1.58623026e+180] [ 1.33042184e+180] [ 2.24158147e+180] [ 2.00003908e+180] [ 1.47105976e+180] [ 2.04712028e+180] [ 2.11207715e+180] [ 2.42019138e+180] [ 3.08562528e+180] [ 2.15781675e+180] [ 2.63756812e+180] [ 2.57493813e+180] [ 3.46021802e+180] [ 3.55421162e+180] [ 3.53127985e+180] [ 3.68146075e+180] [ 3.34520674e+180] [ 2.98591801e+180] [ 3.82477178e+180] [ 4.23428770e+180] [ 3.66469960e+180] [ 4.41071821e+180] [ 3.78279590e+180] [ 4.24696464e+180] [ 3.88009212e+180] [ 4.40532372e+180] [ 3.93426544e+180] [ 4.45515543e+180] [ 4.27882272e+180] [ 4.99379259e+180] [ 4.39857068e+180] [ 4.45007891e+180] [ 4.25477816e+180] [ 4.63611494e+180] [ 4.91073622e+180] [ 4.47110138e+180] [ 4.63507732e+180] [ 4.85838656e+180] [ 4.61691961e+180] [ 4.70778163e+180] [ 4.27935415e+180] [ 4.84545972e+180]] [[ -7.57021310e+179] [ -3.44431882e+180] [ -2.32124844e+180] [ -3.89905494e+180] [ -5.27369522e+180] [ -5.52832698e+180] [ -8.87941602e+180] [ -7.60880382e+180] [ -1.05729420e+181] [ -8.86786325e+180] [ -1.49411542e+181] [ -1.33311650e+181] [ -9.80527858e+180] [ -1.36449824e+181] [ -1.40779494e+181] [ -1.61316700e+181] [ -2.05670879e+181] [ -1.43828245e+181] [ -1.75805843e+181] [ -1.71631271e+181] [ -2.30639180e+181] [ -2.36904278e+181] [ -2.35375772e+181] [ -2.45386008e+181] [ -2.22973157e+181] [ -1.99024939e+181] [ -2.54938336e+181] [ -2.82234424e+181] [ -2.44268801e+181] [ -2.93994315e+181] [ -2.52140454e+181] [ -2.83079399e+181] [ -2.58625687e+181] [ -2.93634749e+181] [ -2.62236584e+181] [ -2.96956257e+181] [ -2.85202885e+181] [ -3.32858860e+181] [ -2.93184628e+181] [ -2.96617884e+181] [ -2.83600206e+181] [ -3.09018027e+181] [ -3.27322777e+181] [ -2.98019127e+181] [ -3.08948865e+181] [ -3.23833436e+181] [ -3.07738571e+181] [ -3.13794936e+181] [ -2.85238307e+181] [ -3.22971803e+181]] [[ 5.04588938e+180] [ 2.29579426e+181] [ 1.54721706e+181] [ 2.59889645e+181] [ 
3.51515636e+181] [ 3.68487995e+181] [ 5.91853233e+181] [ 5.07161185e+181] [ 7.04734400e+181] [ 5.91083189e+181] [ 9.95895500e+181] [ 8.88582434e+181] [ 6.53566160e+181] [ 9.09499786e+181] [ 9.38358992e+181] [ 1.07524876e+182] [ 1.37088942e+182] [ 9.58680295e+181] [ 1.17182545e+182] [ 1.14400004e+182] [ 1.53731444e+182] [ 1.57907415e+182] [ 1.56888596e+182] [ 1.63560872e+182] [ 1.48621693e+182] [ 1.32659122e+182] [ 1.69927930e+182] [ 1.88122007e+182] [ 1.62816203e+182] [ 1.95960507e+182] [ 1.68063015e+182] [ 1.88685222e+182] [ 1.72385716e+182] [ 1.95720839e+182] [ 1.74792543e+182] [ 1.97934775e+182] [ 1.90100621e+182] [ 2.21865483e+182] [ 1.95420813e+182] [ 1.97709234e+182] [ 1.89032363e+182] [ 2.05974490e+182] [ 2.18175434e+182] [ 1.98643226e+182] [ 2.05928390e+182] [ 2.15849630e+182] [ 2.05121675e+182] [ 2.09158516e+182] [ 1.90124231e+182] [ 2.15275313e+182]] [[ -3.36331347e+181] [ -1.53025070e+182] [ -1.03129014e+182] [ -1.73228202e+182] [ -2.34301068e+182] [ -2.45613913e+182] [ -3.94496946e+182] [ -3.38045865e+182] [ -4.69737349e+182] [ -3.93983677e+182] [ -6.63809389e+182] [ -5.92280377e+182] [ -4.35631403e+182] [ -6.06222739e+182] [ -6.25458704e+182] [ -7.16701926e+182] [ -9.13759797e+182] [ -6.39003771e+182] [ -7.81074657e+182] [ -7.62527764e+182] [ -1.02468959e+183] [ -1.05252433e+183] [ -1.04573345e+183] [ -1.09020718e+183] [ -9.90630797e+182] [ -8.84233043e+182] [ -1.13264650e+183] [ -1.25391826e+183] [ -1.08524362e+183] [ -1.30616540e+183] [ -1.12021600e+183] [ -1.25767233e+183] [ -1.14902876e+183] [ -1.30456791e+183] [ -1.16507134e+183] [ -1.31932479e+183] [ -1.26710661e+183] [ -1.47883378e+183] [ -1.30256810e+183] [ -1.31782146e+183] [ -1.25998619e+183] [ -1.37291312e+183] [ -1.45423793e+183] [ -1.32404693e+183] [ -1.37260585e+183] [ -1.43873540e+183] [ -1.36722873e+183] [ -1.39413610e+183] [ -1.26726398e+183] [ -1.43490732e+183]] [[ 2.24180053e+182] [ 1.01998129e+183] [ 6.87401519e+182] [ 1.15464430e+183] [ 1.56172258e+183] [ 1.63712781e+183] [ 2.62950055e+183] [ 2.25322857e+183] [ 3.13101186e+183] [ 2.62607938e+183] [ 4.42458978e+183] [ 3.94781657e+183] [ 2.90368031e+183] [ 4.04074871e+183] [ 4.16896512e+183] [ 4.77714246e+183] [ 6.09062228e+183] [ 4.25924912e+183] [ 5.20621582e+183] [ 5.08259239e+183] [ 6.83001955e+183] [ 7.01555067e+183] [ 6.97028634e+183] [ 7.26672388e+183] [ 6.60300226e+183] [ 5.89381312e+183] [ 7.54960117e+183] [ 8.35793224e+183] [ 7.23363954e+183] [ 8.70618308e+183] [ 7.46674625e+183] [ 8.38295486e+183] [ 7.65879631e+183] [ 8.69553507e+183] [ 7.76572740e+183] [ 8.79389636e+183] [ 8.44583857e+183] [ 9.85709593e+183] [ 8.68220544e+183] [ 8.78387598e+183] [ 8.39837774e+183] [ 9.15108685e+183] [ 9.69315349e+183] [ 8.82537159e+183] [ 9.14903873e+183] [ 9.58982209e+183] [ 9.11319777e+183] [ 9.29254757e+183] [ 8.44688755e+183] [ 9.56430618e+183]] [[ -1.49426143e+183] [ -6.79863659e+183] [ -4.58184199e+183] [ -7.69622646e+183] [ -1.04095873e+184] [ -1.09121972e+184] [ -1.75268103e+184] [ -1.50187874e+184] [ -2.08696100e+184] [ -1.75040067e+184] [ -2.94918918e+184] [ -2.63139828e+184] [ -1.93543424e+184] [ -2.69334175e+184] [ -2.77880379e+184] [ -3.18418149e+184] [ -4.05967519e+184] [ -2.83898216e+184] [ -3.47017829e+184] [ -3.38777768e+184] [ -4.55251691e+184] [ -4.67618179e+184] [ -4.64601107e+184] [ -4.84360009e+184] [ -4.40119962e+184] [ -3.92849298e+184] [ -5.03215060e+184] [ -5.57093982e+184] [ -4.82154788e+184] [ -5.80306475e+184] [ -4.97692405e+184] [ -5.58761853e+184] [ -5.10493410e+184] [ -5.79596737e+184] [ -5.17620851e+184] [ -5.86152962e+184] [ 
-5.62953336e+184] [ -6.57020021e+184] [ -5.78708257e+184] [ -5.85485058e+184] [ -5.59789857e+184] [ -6.09961323e+184] [ -6.46092516e+184] [ -5.88250928e+184] [ -6.09824807e+184] [ -6.39205011e+184] [ -6.07435845e+184] [ -6.19390320e+184] [ -5.63023255e+184] [ -6.37504261e+184]] [[ 9.95992823e+183] [ 4.53159875e+184] [ 3.05400490e+184] [ 5.12988299e+184] [ 6.93846067e+184] [ 7.27347292e+184] [ 1.16824117e+185] [ 1.00107010e+185] [ 1.39105389e+185] [ 1.16672120e+185] [ 1.96576796e+185] [ 1.75394595e+185] [ 1.29005445e+185] [ 1.79523408e+185] [ 1.85219839e+185] [ 2.12240096e+185] [ 2.70595711e+185] [ 1.89231000e+185] [ 2.31303076e+185] [ 2.25810703e+185] [ 3.03445840e+185] [ 3.11688664e+185] [ 3.09677649e+185] [ 3.22847851e+185] [ 2.93359859e+185] [ 2.61851823e+185] [ 3.35415595e+185] [ 3.71328333e+185] [ 3.21377971e+185] [ 3.86800509e+185] [ 3.31734495e+185] [ 3.72440044e+185] [ 3.40266944e+185] [ 3.86327437e+185] [ 3.45017706e+185] [ 3.90697457e+185] [ 3.75233857e+185] [ 4.37933557e+185] [ 3.85735224e+185] [ 3.90252269e+185] [ 3.73125255e+185] [ 4.06566807e+185] [ 4.30649881e+185] [ 3.92095847e+185] [ 4.06475812e+185] [ 4.26059047e+185] [ 4.04883461e+185] [ 4.12851660e+185] [ 3.75280462e+185] [ 4.24925420e+185]] [[ -6.63874259e+184] [ -3.02051550e+185] [ -2.03563238e+185] [ -3.41929900e+185] [ -4.62479782e+185] [ -4.84809864e+185] [ -7.78685573e+185] [ -6.67258495e+185] [ -9.27200325e+185] [ -7.77672447e+185] [ -1.31027325e+186] [ -1.16908430e+186] [ -8.59879631e+185] [ -1.19660470e+186] [ -1.23457399e+186] [ -1.41467623e+186] [ -1.80364279e+186] [ -1.26131019e+186] [ -1.54173961e+186] [ -1.50513046e+186] [ -2.02260376e+186] [ -2.07754590e+186] [ -2.06414158e+186] [ -2.15192693e+186] [ -1.95537613e+186] [ -1.74536082e+186] [ -2.23569663e+186] [ -2.47507127e+186] [ -2.14212952e+186] [ -2.57820032e+186] [ -2.21116044e+186] [ -2.48248133e+186] [ -2.26803306e+186] [ -2.57504708e+186] [ -2.29969904e+186] [ -2.60417524e+186] [ -2.50110335e+186] [ -2.91902521e+186] [ -2.57109971e+186] [ -2.60120786e+186] [ -2.48704857e+186] [ -2.70995163e+186] [ -2.87047621e+186] [ -2.61349614e+186] [ -2.70934511e+186] [ -2.83987623e+186] [ -2.69873137e+186] [ -2.75184302e+186] [ -2.50141399e+186] [ -2.83232009e+186]] [[ 4.42502216e+185] [ 2.01331018e+186] [ 1.35684104e+186] [ 2.27911742e+186] [ 3.08263690e+186] [ 3.23147699e+186] [ 5.19029150e+186] [ 4.44757963e+186] [ 6.18020948e+186] [ 5.18353855e+186] [ 8.73356375e+186] [ 7.79247555e+186] [ 5.73148661e+186] [ 7.97591149e+186] [ 8.22899398e+186] [ 9.42945681e+186] [ 1.20220949e+187] [ 8.40720283e+186] [ 1.02763917e+187] [ 1.00323752e+187] [ 1.34815687e+187] [ 1.38477830e+187] [ 1.37584371e+187] [ 1.43435662e+187] [ 1.30334662e+187] [ 1.16336192e+187] [ 1.49019291e+187] [ 1.64974693e+187] [ 1.42782620e+187] [ 1.71848711e+187] [ 1.47383843e+187] [ 1.65468607e+187] [ 1.51174661e+187] [ 1.71638533e+187] [ 1.53285341e+187] [ 1.73580057e+187] [ 1.66709850e+187] [ 1.94566231e+187] [ 1.71375423e+187] [ 1.73382267e+187] [ 1.65773034e+187] [ 1.80630532e+187] [ 1.91330221e+187] [ 1.74201337e+187] [ 1.80590105e+187] [ 1.89290594e+187] [ 1.79882651e+187] [ 1.83422782e+187] [ 1.66730555e+187] [ 1.88786943e+187]] [[ -2.94947739e+186] [ -1.34196228e+187] [ -9.04395913e+186] [ -1.51913483e+187] [ -2.05471690e+187] [ -2.15392555e+187] [ -3.45956401e+187] [ -2.96451296e+187] [ -4.11938911e+187] [ -3.45506286e+187] [ -5.82131520e+187] [ -5.19403736e+187] [ -3.82029503e+187] [ -5.31630571e+187] [ -5.48499663e+187] [ -6.28515940e+187] [ -8.01326990e+187] [ -5.60378089e+187] [ 
-6.84967982e+187] [ -6.68703176e+187] [ -8.98607524e+187] [ -9.23017361e+187] [ -9.17062053e+187] [ -9.56063552e+187] [ -8.68739463e+187] [ -7.75433331e+187] [ -9.93280966e+187] [ -1.09963094e+188] [ -9.51710733e+187] [ -1.14544937e+188] [ -9.82379964e+187] [ -1.10292310e+188] [ -1.00764748e+188] [ -1.14404844e+188] [ -1.02171612e+188] [ -1.15698958e+188] [ -1.11119654e+188] [ -1.29687192e+188] [ -1.14229470e+188] [ -1.15567122e+188] [ -1.10495224e+188] [ -1.20398418e+188] [ -1.27530245e+188] [ -1.16113069e+188] [ -1.20371472e+188] [ -1.26170741e+188] [ -1.19899922e+188] [ -1.22259579e+188] [ -1.11133455e+188] [ -1.25835035e+188]] [[ 1.96596007e+187] [ 8.94478551e+187] [ 6.02820777e+187] [ 1.01257207e+188] [ 1.36956174e+188] [ 1.43568879e+188] [ 2.30595587e+188] [ 1.97598195e+188] [ 2.74575914e+188] [ 2.30295565e+188] [ 3.88016984e+188] [ 3.46206079e+188] [ 2.54639940e+188] [ 3.54355818e+188] [ 3.65599831e+188] [ 4.18934298e+188] [ 5.34120678e+188] [ 3.73517339e+188] [ 4.56562137e+188] [ 4.45720908e+188] [ 5.98962553e+188] [ 6.15232813e+188] [ 6.11263331e+188] [ 6.37259594e+188] [ 5.79054140e+188] [ 5.16861383e+188] [ 6.62066684e+188] [ 7.32953749e+188] [ 6.34358244e+188] [ 7.63493809e+188] [ 6.54800673e+188] [ 7.35148123e+188] [ 6.71642615e+188] [ 7.62560026e+188] [ 6.81020000e+188] [ 7.71185877e+188] [ 7.40662746e+188] [ 8.64423784e+188] [ 7.61391076e+188] [ 7.70307134e+188] [ 7.36500640e+188] [ 8.02509906e+188] [ 8.50046757e+188] [ 7.73946116e+188] [ 8.02330295e+188] [ 8.40985048e+188] [ 7.99187201e+188] [ 8.14915387e+188] [ 7.40754737e+188] [ 8.38747417e+188]] [[ -1.31040129e+188] [ -5.96210406e+188] [ -4.01807309e+188] [ -6.74925074e+188] [ -9.12874834e+188] [ -9.56951506e+188] [ -1.53702387e+189] [ -1.31708133e+189] [ -1.83017264e+189] [ -1.53502409e+189] [ -2.58630868e+189] [ -2.30762009e+189] [ -1.69729036e+189] [ -2.36194178e+189] [ -2.43688821e+189] [ -2.79238655e+189] [ -3.56015587e+189] [ -2.48966198e+189] [ -3.04319312e+189] [ -2.97093142e+189] [ -3.99235629e+189] [ -4.10080493e+189] [ -4.07434654e+189] [ -4.24762339e+189] [ -3.85965771e+189] [ -3.44511486e+189] [ -4.41297386e+189] [ -4.88546821e+189] [ -4.22828457e+189] [ -5.08903152e+189] [ -4.36454261e+189] [ -4.90009470e+189] [ -4.47680177e+189] [ -5.08280743e+189] [ -4.53930628e+189] [ -5.14030263e+189] [ -4.93685216e+189] [ -5.76177545e+189] [ -5.07501585e+189] [ -5.13444541e+189] [ -4.90910984e+189] [ -5.34909145e+189] [ -5.66594606e+189] [ -5.15870087e+189] [ -5.34789426e+189] [ -5.60554567e+189] [ -5.32694412e+189] [ -5.43177959e+189] [ -4.93746532e+189] [ -5.59063084e+189]] [[ 8.73441722e+188] [ 3.97401199e+189] [ 2.67822743e+189] [ 4.49868084e+189] [ 6.08472361e+189] [ 6.37851456e+189] [ 1.02449592e+190] [ 8.77894272e+189] [ 1.21989283e+190] [ 1.02316297e+190] [ 1.72389170e+190] [ 1.53813315e+190] [ 1.13132078e+190] [ 1.57434101e+190] [ 1.62429620e+190] [ 1.86125192e+190] [ 2.37300489e+190] [ 1.65947230e+190] [ 2.02842584e+190] [ 1.98026015e+190] [ 2.66108602e+190] [ 2.73337194e+190] [ 2.71573623e+190] [ 2.83123308e+190] [ 2.57263641e+190] [ 2.29632486e+190] [ 2.94144665e+190] [ 3.25638550e+190] [ 2.81834289e+190] [ 3.39206965e+190] [ 2.90916503e+190] [ 3.26613472e+190] [ 2.98399084e+190] [ 3.38792102e+190] [ 3.02565292e+190] [ 3.42624417e+190] [ 3.29063523e+190] [ 3.84048391e+190] [ 3.38272758e+190] [ 3.42234006e+190] [ 3.27214372e+190] [ 3.56541136e+190] [ 3.77660928e+190] [ 3.43850743e+190] [ 3.56461338e+190] [ 3.73634970e+190] [ 3.55064916e+190] [ 3.62052674e+190] [ 3.29104393e+190] [ 3.72640829e+190]] [[ 
-5.82188408e+189] [ -2.64885871e+190] [ -1.78515970e+190] [ -2.99857423e+190] [ -4.05574346e+190] [ -4.25156841e+190] [ -6.82872861e+190] [ -5.85156234e+190] [ -8.13113740e+190] [ -6.81984394e+190] [ -1.14905178e+191] [ -1.02523530e+191] [ -7.54076465e+190] [ -1.04936948e+191] [ -1.08266687e+191] [ -1.24060858e+191] [ -1.58171508e+191] [ -1.10611334e+191] [ -1.35203755e+191] [ -1.31993295e+191] [ -1.77373418e+191] [ -1.82191601e+191] [ -1.81016101e+191] [ -1.88714489e+191] [ -1.71477851e+191] [ -1.53060437e+191] [ -1.96060721e+191] [ -2.17052820e+191] [ -1.87855299e+191] [ -2.26096783e+191] [ -1.93909006e+191] [ -2.17702650e+191] [ -1.98896484e+191] [ -2.25820257e+191] [ -2.01673450e+191] [ -2.28374669e+191] [ -2.19335719e+191] [ -2.55985620e+191] [ -2.25474091e+191] [ -2.28114442e+191] [ -2.18103177e+191] [ -2.37650791e+191] [ -2.51728088e+191] [ -2.29192070e+191] [ -2.37597602e+191] [ -2.49044604e+191] [ -2.36666824e+191] [ -2.41324481e+191] [ -2.19362961e+191] [ -2.48381965e+191]] [[ 3.88054903e+190] [ 1.76558412e+191] [ 1.18988967e+191] [ 1.99868533e+191] [ 2.70333643e+191] [ 2.83386262e+191] [ 4.55165644e+191] [ 3.90033093e+191] [ 5.41977080e+191] [ 4.54573440e+191] [ 7.65894973e+191] [ 6.83365694e+191] [ 5.02626066e+191] [ 6.99452216e+191] [ 7.21646433e+191] [ 8.26921722e+191] [ 1.05428463e+192] [ 7.37274562e+191] [ 9.01194173e+191] [ 8.79795001e+191] [ 1.18227404e+192] [ 1.21438942e+192] [ 1.20655418e+192] [ 1.25786741e+192] [ 1.14297743e+192] [ 1.02021703e+192] [ 1.30683337e+192] [ 1.44675521e+192] [ 1.25214052e+192] [ 1.50703731e+192] [ 1.29249122e+192] [ 1.45108661e+192] [ 1.32573502e+192] [ 1.50519414e+192] [ 1.34424475e+192] [ 1.52222045e+192] [ 1.46197176e+192] [ 1.70625993e+192] [ 1.50288678e+192] [ 1.52048592e+192] [ 1.45375631e+192] [ 1.58405000e+192] [ 1.67788155e+192] [ 1.52766880e+192] [ 1.58369547e+192] [ 1.65999492e+192] [ 1.57749141e+192] [ 1.60853680e+192] [ 1.46215334e+192] [ 1.65557812e+192]] [[ -2.58656142e+191] [ -1.17684166e+192] [ -7.93115278e+191] [ -1.33221416e+192] [ -1.80189599e+192] [ -1.88889759e+192] [ -3.03388486e+192] [ -2.59974695e+192] [ -3.61252234e+192] [ -3.02993756e+192] [ -5.10503636e+192] [ -4.55494140e+192] [ -3.35023004e+192] [ -4.66216534e+192] [ -4.81009983e+192] [ -5.51180724e+192] [ -7.02728384e+192] [ -4.91426838e+192] [ -6.00686672e+192] [ -5.86423156e+192] [ -7.88039117e+192] [ -8.09445465e+192] [ -8.04222923e+192] [ -8.38425516e+192] [ -7.61846145e+192] [ -6.80020789e+192] [ -8.71063544e+192] [ -9.64327772e+192] [ -8.34608286e+192] [ -1.00450852e+193] [ -8.61503847e+192] [ -9.67214851e+192] [ -8.83662342e+192] [ -1.00327996e+193] [ -8.95999918e+192] [ -1.01462877e+193] [ -9.74470295e+192] [ -1.13729940e+193] [ -1.00174201e+193] [ -1.01347263e+193] [ -9.68994323e+192] [ -1.05584096e+193] [ -1.11838393e+193] [ -1.01826034e+193] [ -1.05560465e+193] [ -1.10646168e+193] [ -1.05146937e+193] [ -1.07216252e+193] [ -9.74591325e+192] [ -1.10351769e+193]] [[ 1.72406016e+192] [ 7.84418190e+192] [ 5.28647201e+192] [ 8.87980986e+192] [ 1.20104516e+193] [ 1.25903567e+193] [ 2.02222146e+193] [ 1.73284891e+193] [ 2.40790951e+193] [ 2.01959041e+193] [ 3.40273761e+193] [ 3.03607443e+193] [ 2.23307983e+193] [ 3.10754404e+193] [ 3.20614907e+193] [ 3.67386879e+193] [ 4.68400248e+193] [ 3.27558212e+193] [ 4.00384832e+193] [ 3.90877553e+193] [ 5.25263709e+193] [ 5.39532009e+193] [ 5.36050948e+193] [ 5.58848523e+193] [ 5.07804909e+193] [ 4.53264609e+193] [ 5.80603245e+193] [ 6.42768071e+193] [ 5.56304166e+193] [ 6.69550355e+193] [ 5.74231274e+193] [ 
6.44692439e+193] [ 5.89000914e+193] [ 6.68731468e+193] [ 5.97224466e+193] [ 6.76295959e+193] [ 6.49528521e+193] [ 7.58061486e+193] [ 6.67706351e+193] [ 6.75525340e+193] [ 6.45878538e+193] [ 7.03765749e+193] [ 7.45453468e+193] [ 6.78716566e+193] [ 7.03608238e+193] [ 7.37506750e+193] [ 7.00851883e+193] [ 7.14644807e+193] [ 6.49609192e+193] [ 7.35544445e+193]] [[ -1.14916407e+193] [ -5.22850198e+193] [ -3.52367267e+193] [ -5.91879485e+193] [ -8.00550917e+193] [ -8.39204212e+193] [ -1.34790206e+194] [ -1.15502217e+194] [ -1.60498058e+194] [ -1.34614834e+194] [ -2.26807850e+194] [ -2.02368091e+194] [ -1.48844869e+194] [ -2.07131864e+194] [ -2.13704335e+194] [ -2.44879970e+194] [ -3.12209948e+194] [ -2.18332362e+194] [ -2.66874597e+194] [ -2.60537566e+194] [ -3.50112017e+194] [ -3.59622484e+194] [ -3.57302199e+194] [ -3.72497814e+194] [ -3.38474937e+194] [ -3.02121360e+194] [ -3.86998320e+194] [ -4.28433988e+194] [ -3.70801885e+194] [ -4.46285591e+194] [ -3.82751113e+194] [ -4.29716666e+194] [ -3.92595746e+194] [ -4.45739766e+194] [ -3.98077115e+194] [ -4.50781841e+194] [ -4.32940133e+194] [ -5.05282263e+194] [ -4.45056479e+194] [ -4.50268189e+194] [ -4.30507255e+194] [ -4.69091699e+194] [ -4.96878448e+194] [ -4.52395285e+194] [ -4.68986710e+194] [ -4.91581602e+194] [ -4.67149475e+194] [ -4.76343083e+194] [ -4.32993905e+194] [ -4.90273637e+194]] [[ 7.65969820e+193] [ 3.48503302e+194] [ 2.34868719e+194] [ 3.94514444e+194] [ 5.33603390e+194] [ 5.59367559e+194] [ 8.98437679e+194] [ 7.69874509e+194] [ 1.06979214e+195] [ 8.97268746e+194] [ 1.51177689e+195] [ 1.34887485e+195] [ 9.92118368e+194] [ 1.38062755e+195] [ 1.42443604e+195] [ 1.63223574e+195] [ 2.08102050e+195] [ 1.45528393e+195] [ 1.77883988e+195] [ 1.73660070e+195] [ 2.33365492e+195] [ 2.39704649e+195] [ 2.38158074e+195] [ 2.48286639e+195] [ 2.25608853e+195] [ 2.01377550e+195] [ 2.57951882e+195] [ 2.85570628e+195] [ 2.47156226e+195] [ 2.97469529e+195] [ 2.55120926e+195] [ 2.86425592e+195] [ 2.61682819e+195] [ 2.97105712e+195] [ 2.65336400e+195] [ 3.00466483e+195] [ 2.88574178e+195] [ 3.36793479e+195] [ 2.96650271e+195] [ 3.00124111e+195] [ 2.86952554e+195] [ 3.12670831e+195] [ 3.31191957e+195] [ 3.01541917e+195] [ 3.12600852e+195] [ 3.27661369e+195] [ 3.11376252e+195] [ 3.17504207e+195] [ 2.88610019e+195] [ 3.26789551e+195]] [[ -5.10553525e+194] [ -2.32293211e+195] [ -1.56550623e+195] [ -2.62961718e+195] [ -3.55670791e+195] [ -3.72843775e+195] [ -5.98849344e+195] [ -5.13156177e+195] [ -7.13064843e+195] [ -5.98070197e+195] [ -1.00766767e+196] [ -8.99086086e+195] [ -6.61291760e+195] [ -9.20250696e+195] [ -9.49451037e+195] [ -1.08795894e+196] [ -1.38709427e+196] [ -9.70012551e+195] [ -1.18567723e+196] [ -1.15752291e+196] [ -1.55548654e+196] [ -1.59773989e+196] [ -1.58743127e+196] [ -1.65494273e+196] [ -1.50378503e+196] [ -1.34227244e+196] [ -1.71936595e+196] [ -1.90345738e+196] [ -1.64740802e+196] [ -1.98276894e+196] [ -1.70049635e+196] [ -1.90915610e+196] [ -1.74423433e+196] [ -1.98034393e+196] [ -1.76858710e+196] [ -2.00274499e+196] [ -1.92347740e+196] [ -2.24488085e+196] [ -1.97730821e+196] [ -2.00046292e+196] [ -1.91266855e+196] [ -2.08409249e+196] [ -2.20754417e+196] [ -2.00991324e+196] [ -2.08362605e+196] [ -2.18401120e+196] [ -2.07546353e+196] [ -2.11630913e+196] [ -1.92371630e+196] [ -2.17820014e+196]] [[ 3.40307013e+195] [ 1.54833931e+196] [ 1.04348070e+196] [ 1.75275877e+196] [ 2.37070667e+196] [ 2.48517237e+196] [ 3.99160170e+196] [ 3.42041799e+196] [ 4.75289965e+196] [ 3.98640833e+196] [ 6.71656070e+196] [ 5.99281537e+196] [ 
4.40780864e+196] [ 6.13388706e+196] [ 6.32852054e+196] [ 7.25173833e+196] [ 9.24561063e+196] [ 6.46557233e+196] [ 7.90307493e+196] [ 7.71541364e+196] [ 1.03680213e+197] [ 1.06496589e+197] [ 1.05809473e+197] [ 1.10309417e+197] [ 1.00234073e+197] [ 8.94685283e+196] [ 1.14603516e+197] [ 1.26874043e+197] [ 1.09807194e+197] [ 1.32160517e+197] [ 1.13345772e+197] [ 1.27253888e+197] [ 1.16261106e+197] [ 1.31998879e+197] [ 1.17884328e+197] [ 1.33492011e+197] [ 1.28208467e+197] [ 1.49631461e+197] [ 1.31796534e+197] [ 1.33339901e+197] [ 1.27488008e+197] [ 1.38914190e+197] [ 1.47142802e+197] [ 1.33969807e+197] [ 1.38883099e+197] [ 1.45574224e+197] [ 1.38339031e+197] [ 1.41061574e+197] [ 1.28224391e+197] [ 1.45186890e+197]] [[ -2.26830014e+196] [ -1.03203817e+197] [ -6.95527075e+196] [ -1.16829299e+197] [ -1.58018321e+197] [ -1.65647978e+197] [ -2.66058305e+197] [ -2.27986327e+197] [ -3.16802256e+197] [ -2.65712143e+197] [ -4.47689145e+197] [ -3.99448246e+197] [ -2.93800380e+197] [ -4.08851312e+197] [ -4.21824514e+197] [ -4.83361155e+197] [ -6.16261760e+197] [ -4.30959636e+197] [ -5.26775685e+197] [ -5.14267211e+197] [ -6.91075505e+197] [ -7.09847928e+197] [ -7.05267989e+197] [ -7.35262153e+197] [ -6.68105427e+197] [ -5.96348202e+197] [ -7.63884262e+197] [ -8.45672872e+197] [ -7.31914610e+197] [ -8.80909612e+197] [ -7.55500830e+197] [ -8.48204712e+197] [ -7.74932852e+197] [ -8.79832224e+197] [ -7.85752361e+197] [ -8.89784623e+197] [ -8.54567416e+197] [ -9.97361355e+197] [ -8.78483504e+197] [ -8.88770740e+197] [ -8.49765231e+197] [ -9.25925896e+197] [ -9.80773320e+197] [ -8.92969352e+197] [ -9.25718663e+197] [ -9.70318034e+197] [ -9.22092201e+197] [ -9.40239184e+197] [ -8.54673554e+197] [ -9.67736283e+197]] [[ 1.51192463e+197] [ 6.87900113e+197] [ 4.63600250e+197] [ 7.78720112e+197] [ 1.05326357e+198] [ 1.10411868e+198] [ 1.77339892e+198] [ 1.51963197e+198] [ 2.11163031e+198] [ 1.77109160e+198] [ 2.98405062e+198] [ 2.66250320e+198] [ 1.95831240e+198] [ 2.72517890e+198] [ 2.81165115e+198] [ 3.22182069e+198] [ 4.10766333e+198] [ 2.87254087e+198] [ 3.51119817e+198] [ 3.42782353e+198] [ 4.60633077e+198] [ 4.73145746e+198] [ 4.70093010e+198] [ 4.90085476e+198] [ 4.45322481e+198] [ 3.97493046e+198] [ 5.09163406e+198] [ 5.63679214e+198] [ 4.87854188e+198] [ 5.87166094e+198] [ 5.03575470e+198] [ 5.65366800e+198] [ 5.16527791e+198] [ 5.86447967e+198] [ 5.23739483e+198] [ 5.93081691e+198] [ 5.69607830e+198] [ 6.64786447e+198] [ 5.85548984e+198] [ 5.92405892e+198] [ 5.66406956e+198] [ 6.17171484e+198] [ 6.53729772e+198] [ 5.95204456e+198] [ 6.17033353e+198] [ 6.46760851e+198] [ 6.14616152e+198] [ 6.26711938e+198] [ 5.69678575e+198] [ 6.45039997e+198]] [[ -1.00776614e+198] [ -4.58516534e+198] [ -3.09010532e+198] [ -5.19052171e+198] [ -7.02047802e+198] [ -7.35945035e+198] [ -1.18205058e+199] [ -1.01290343e+199] [ -1.40749710e+199] [ -1.18051265e+199] [ -1.98900469e+199] [ -1.77467879e+199] [ -1.30530377e+199] [ -1.81645498e+199] [ -1.87409264e+199] [ -2.14748919e+199] [ -2.73794337e+199] [ -1.91467840e+199] [ -2.34037237e+199] [ -2.28479940e+199] [ -3.07032778e+199] [ -3.15373037e+199] [ -3.13338250e+199] [ -3.26664133e+199] [ -2.96827573e+199] [ -2.64947091e+199] [ -3.39380437e+199] [ -3.75717688e+199] [ -3.25176879e+199] [ -3.91372756e+199] [ -3.35655824e+199] [ -3.76842540e+199] [ -3.44289132e+199] [ -3.90894091e+199] [ -3.49096051e+199] [ -3.95315768e+199] [ -3.79669378e+199] [ -4.43110231e+199] [ -3.90294879e+199] [ -3.94865318e+199] [ -3.77535851e+199] [ -4.11372705e+199] [ -4.35740457e+199] [ 
-3.96730688e+199] [ -4.11280635e+199] [ -4.31095356e+199] [ -4.09669461e+199] [ -4.17731849e+199] [ -3.79716533e+199] [ -4.29948329e+199]] [[ 6.71721707e+198] [ 3.05622006e+199] [ 2.05969494e+199] [ 3.45971746e+199] [ 4.67946609e+199] [ 4.90540648e+199] [ 7.87890169e+199] [ 6.75145947e+199] [ 9.38160467e+199] [ 7.86865067e+199] [ 1.32576157e+200] [ 1.18290367e+200] [ 8.70043997e+199] [ 1.21074939e+200] [ 1.24916750e+200] [ 1.43139867e+200] [ 1.82496308e+200] [ 1.27621974e+200] [ 1.55996402e+200] [ 1.52292213e+200] [ 2.04651232e+200] [ 2.10210392e+200] [ 2.08854114e+200] [ 2.17736418e+200] [ 1.97849001e+200] [ 1.76599218e+200] [ 2.26212409e+200] [ 2.50432830e+200] [ 2.16745095e+200] [ 2.60867641e+200] [ 2.23729786e+200] [ 2.51182595e+200] [ 2.29484277e+200] [ 2.60548590e+200] [ 2.32688305e+200] [ 2.63495837e+200] [ 2.53066811e+200] [ 2.95353008e+200] [ 2.60149187e+200] [ 2.63195592e+200] [ 2.51644718e+200] [ 2.74198511e+200] [ 2.90440721e+200] [ 2.64438945e+200] [ 2.74137143e+200] [ 2.87344551e+200] [ 2.73063222e+200] [ 2.78437169e+200] [ 2.53098242e+200] [ 2.86580005e+200]] [[ -4.47732895e+199] [ -2.03710888e+200] [ -1.37287983e+200] [ -2.30605815e+200] [ -3.11907577e+200] [ -3.26967526e+200] [ -5.25164429e+200] [ -4.50015306e+200] [ -6.25326379e+200] [ -5.24481152e+200] [ -8.83680044e+200] [ -7.88458794e+200] [ -5.79923671e+200] [ -8.07019222e+200] [ -8.32626632e+200] [ -9.54091944e+200] [ -1.21642042e+201] [ -8.50658172e+200] [ -1.03978657e+201] [ -1.01509647e+201] [ -1.36409301e+201] [ -1.40114733e+201] [ -1.39210712e+201] [ -1.45131169e+201] [ -1.31875307e+201] [ -1.17711365e+201] [ -1.50780801e+201] [ -1.66924807e+201] [ -1.44470408e+201] [ -1.73880080e+201] [ -1.49126021e+201] [ -1.67424559e+201] [ -1.52961649e+201] [ -1.73667418e+201] [ -1.55097279e+201] [ -1.75631892e+201] [ -1.68680474e+201] [ -1.96866137e+201] [ -1.73401198e+201] [ -1.75431764e+201] [ -1.67732585e+201] [ -1.82765708e+201] [ -1.93591875e+201] [ -1.76260516e+201] [ -1.82724803e+201] [ -1.91528138e+201] [ -1.82008987e+201] [ -1.85590965e+201] [ -1.68701424e+201] [ -1.91018534e+201]] [[ 2.98434223e+200] [ 1.35782520e+201] [ 9.15086492e+200] [ 1.53709204e+201] [ 2.07900506e+201] [ 2.17938643e+201] [ 3.50045842e+201] [ 2.99955553e+201] [ 4.16808311e+201] [ 3.49590407e+201] [ 5.89012713e+201] [ 5.25543443e+201] [ 3.86545353e+201] [ 5.37914808e+201] [ 5.54983305e+201] [ 6.35945428e+201] [ 8.10799224e+201] [ 5.67002142e+201] [ 6.93064772e+201] [ 6.76607705e+201] [ 9.09229680e+201] [ 9.33928057e+201] [ 9.27902354e+201] [ 9.67364878e+201] [ 8.79008557e+201] [ 7.84599484e+201] [ 1.00502223e+202] [ 1.11262933e+202] [ 9.62960605e+201] [ 1.15898937e+202] [ 9.93992367e+201] [ 1.11596040e+202] [ 1.01955856e+202] [ 1.15757188e+202] [ 1.03379350e+202] [ 1.17066599e+202] [ 1.12433164e+202] [ 1.31220183e+202] [ 1.15579740e+202] [ 1.16933205e+202] [ 1.11801353e+202] [ 1.21821610e+202] [ 1.29037740e+202] [ 1.17485605e+202] [ 1.21794345e+202] [ 1.27662166e+202] [ 1.21317221e+202] [ 1.23704771e+202] [ 1.12447129e+202] [ 1.27322491e+202]] [[ -1.98919906e+201] [ -9.05051900e+201] [ -6.09946530e+201] [ -1.02454136e+202] [ -1.38575089e+202] [ -1.45265961e+202] [ -2.33321384e+202] [ -1.99933941e+202] [ -2.77821589e+202] [ -2.33017816e+202] [ -3.92603611e+202] [ -3.50298472e+202] [ -2.57649959e+202] [ -3.58544546e+202] [ -3.69921471e+202] [ -4.23886388e+202] [ -5.40434350e+202] [ -3.77932569e+202] [ -4.61959013e+202] [ -4.50989634e+202] [ -6.06042701e+202] [ -6.22505286e+202] [ -6.18488882e+202] [ -6.44792439e+202] [ -5.85898956e+202] [ 
-5.22971039e+202] [ -6.69892765e+202] [ -7.41617764e+202] [ -6.41856792e+202] [ -7.72518829e+202] [ -6.62540865e+202] [ -7.43838076e+202] [ -6.79581890e+202] [ -7.71574007e+202] [ -6.89070122e+202] [ -7.80301822e+202] [ -7.49417886e+202] [ -8.74641865e+202] [ -7.70391240e+202] [ -7.79412691e+202] [ -7.45206582e+202] [ -8.11996122e+202] [ -8.60094891e+202] [ -7.83094688e+202] [ -8.11814388e+202] [ -8.50926067e+202] [ -8.08634140e+202] [ -8.24548244e+202] [ -7.49510964e+202] [ -8.48661985e+202]] [[ 1.32589113e+202] [ 6.03258021e+202] [ 4.06556947e+202] [ 6.82903150e+202] [ 9.23665639e+202] [ 9.68263327e+202] [ 1.55519254e+203] [ 1.33265014e+203] [ 1.85180653e+203] [ 1.55316912e+203] [ 2.61688061e+203] [ 2.33489773e+203] [ 1.71735349e+203] [ 2.38986155e+203] [ 2.46569390e+203] [ 2.82539447e+203] [ 3.60223935e+203] [ 2.51909149e+203] [ 3.07916574e+203] [ 3.00604987e+203] [ 4.03954867e+203] [ 4.14927925e+203] [ 4.12250810e+203] [ 4.29783320e+203] [ 3.90528150e+203] [ 3.48583848e+203] [ 4.46513823e+203] [ 4.94321778e+203] [ 4.27826579e+203] [ 5.14918735e+203] [ 4.41613449e+203] [ 4.95801717e+203] [ 4.52972063e+203] [ 5.14288969e+203] [ 4.59296399e+203] [ 5.20106452e+203] [ 4.99520913e+203] [ 5.82988358e+203] [ 5.13500601e+203] [ 5.19513807e+203] [ 4.96713888e+203] [ 5.41232137e+203] [ 5.73292141e+203] [ 5.21968024e+203] [ 5.41111003e+203] [ 5.67180706e+203] [ 5.38991224e+203] [ 5.49598694e+203] [ 4.99582954e+203] [ 5.65671593e+203]] [[ -8.83766401e+202] [ -4.02098753e+203] [ -2.70988591e+203] [ -4.55185833e+203] [ -6.15664920e+203] [ -6.45391296e+203] [ -1.03660616e+204] [ -8.88271583e+203] [ -1.23431280e+204] [ -1.03525746e+204] [ -1.74426928e+204] [ -1.55631493e+204] [ -1.14469377e+204] [ -1.59295080e+204] [ -1.64349649e+204] [ -1.88325319e+204] [ -2.40105543e+204] [ -1.67908840e+204] [ -2.05240322e+204] [ -2.00366818e+204] [ -2.69254188e+204] [ -2.76568227e+204] [ -2.74783809e+204] [ -2.86470019e+204] [ -2.60304673e+204] [ -2.32346899e+204] [ -2.97621656e+204] [ -3.29487820e+204] [ -2.85165763e+204] [ -3.43216624e+204] [ -2.94355335e+204] [ -3.30474266e+204] [ -3.01926365e+204] [ -3.42796856e+204] [ -3.06141821e+204] [ -3.46674472e+204] [ -3.32953279e+204] [ -3.88588105e+204] [ -3.42271373e+204] [ -3.46279446e+204] [ -3.31082270e+204] [ -3.60755696e+204] [ -3.82125139e+204] [ -3.47915294e+204] [ -3.60674954e+204] [ -3.78051590e+204] [ -3.59262026e+204] [ -3.66332384e+204] [ -3.32994632e+204] [ -3.77045698e+204]] [[ 5.89070274e+203] [ 2.68017003e+204] [ 1.80626151e+204] [ 3.03401943e+204] [ 4.10368512e+204] [ 4.30182486e+204] [ 6.90944885e+204] [ 5.92073181e+204] [ 8.22725301e+204] [ 6.90045916e+204] [ 1.16263435e+205] [ 1.03735429e+205] [ 7.62990164e+204] [ 1.06177375e+205] [ 1.09546474e+205] [ 1.25527342e+205] [ 1.60041203e+205] [ 1.11918835e+205] [ 1.36801956e+205] [ 1.33553546e+205] [ 1.79470093e+205] [ 1.84345231e+205] [ 1.83155835e+205] [ 1.90945223e+205] [ 1.73504837e+205] [ 1.54869716e+205] [ 1.98378293e+205] [ 2.19618533e+205] [ 1.90075877e+205] [ 2.28769401e+205] [ 1.96201143e+205] [ 2.20276044e+205] [ 2.01247577e+205] [ 2.28489607e+205] [ 2.04057368e+205] [ 2.31074213e+205] [ 2.21928418e+205] [ 2.59011546e+205] [ 2.28139349e+205] [ 2.30810911e+205] [ 2.20681305e+205] [ 2.40459986e+205] [ 2.54703686e+205] [ 2.31901277e+205] [ 2.40406168e+205] [ 2.51988482e+205] [ 2.39464388e+205] [ 2.44177101e+205] [ 2.21955981e+205] [ 2.51318009e+205]] [[ -3.92641977e+204] [ -1.78645453e+205] [ -1.20395498e+205] [ -2.02231116e+205] [ -2.73529172e+205] [ -2.86736081e+205] [ -4.60546013e+205] [ 
-3.94643551e+205] [ -5.48383620e+205] [ -4.59946809e+205] [ -7.74948376e+205] [ -6.91443544e+205] [ -5.08567450e+205] [ -7.07720220e+205] [ -7.30176787e+205] [ -8.36696502e+205] [ -1.06674699e+206] [ -7.45989652e+205] [ -9.11846905e+205] [ -8.90194779e+205] [ -1.19624933e+206] [ -1.22874433e+206] [ -1.22081647e+206] [ -1.27273627e+206] [ -1.15648820e+206] [ -1.03227669e+206] [ -1.32228104e+206] [ -1.46385684e+206] [ -1.26694168e+206] [ -1.52485152e+206] [ -1.30776935e+206] [ -1.46823945e+206] [ -1.34140611e+206] [ -1.52298656e+206] [ -1.36013464e+206] [ -1.54021413e+206] [ -1.47925327e+206] [ -1.72642909e+206] [ -1.52065193e+206] [ -1.53845910e+206] [ -1.47094070e+206] [ -1.60277455e+206] [ -1.69771525e+206] [ -1.54572689e+206] [ -1.60241583e+206] [ -1.67961719e+206] [ -1.59613844e+206] [ -1.62755080e+206] [ -1.47943699e+206] [ -1.67514819e+206]] [[ 2.61713635e+205] [ 1.19075274e+206] [ 8.02490443e+205] [ 1.34796184e+206] [ 1.82319563e+206] [ 1.91122565e+206] [ 3.06974745e+206] [ 2.63047774e+206] [ 3.65522482e+206] [ 3.06575349e+206] [ 5.16538139e+206] [ 4.60878392e+206] [ 3.38983205e+206] [ 4.71727533e+206] [ 4.86695850e+206] [ 5.57696057e+206] [ 7.11035114e+206] [ 4.97235839e+206] [ 6.07787199e+206] [ 5.93355078e+206] [ 7.97354278e+206] [ 8.19013664e+206] [ 8.13729388e+206] [ 8.48336279e+206] [ 7.70851688e+206] [ 6.88059100e+206] [ 8.81360111e+206] [ 9.75726786e+206] [ 8.44473926e+206] [ 1.01638250e+207] [ 8.71687412e+206] [ 9.78647992e+206] [ 8.94107836e+206] [ 1.01513942e+207] [ 9.06591250e+206] [ 1.02662237e+207] [ 9.85989200e+206] [ 1.15074306e+207] [ 1.01358328e+207] [ 1.02545257e+207] [ 9.80448499e+206] [ 1.06832172e+207] [ 1.13160399e+207] [ 1.03029687e+207] [ 1.06808262e+207] [ 1.11954082e+207] [ 1.06389845e+207] [ 1.08483622e+207] [ 9.86111661e+206] [ 1.11656202e+207]] [[ -1.74443973e+206] [ -7.93690549e+206] [ -5.34896172e+206] [ -8.98477529e+206] [ -1.21524234e+207] [ -1.27391833e+207] [ -2.04612550e+207] [ -1.75333237e+207] [ -2.43637265e+207] [ -2.04346335e+207] [ -3.44296029e+207] [ -3.07196291e+207] [ -2.25947636e+207] [ -3.14427734e+207] [ -3.24404794e+207] [ -3.71729643e+207] [ -4.73937059e+207] [ -3.31430174e+207] [ -4.05117655e+207] [ -3.95497994e+207] [ -5.31472686e+207] [ -5.45909646e+207] [ -5.42387437e+207] [ -5.65454495e+207] [ -5.13807511e+207] [ -4.58622506e+207] [ -5.87466372e+207] [ -6.50366029e+207] [ -5.62880062e+207] [ -6.77464898e+207] [ -5.81019081e+207] [ -6.52313145e+207] [ -5.95963307e+207] [ -6.76636332e+207] [ -6.04284068e+207] [ -6.84290240e+207] [ -6.57206392e+207] [ -7.67022291e+207] [ -6.75599097e+207] [ -6.83510511e+207] [ -6.53513264e+207] [ -7.12084741e+207] [ -7.54265237e+207] [ -6.86739460e+207] [ -7.11925369e+207] [ -7.46224584e+207] [ -7.09136432e+207] [ -7.23092398e+207] [ -6.57288018e+207] [ -7.44239084e+207]] [[ 1.16274797e+207] [ 5.29030645e+207] [ 3.56532489e+207] [ 5.98875906e+207] [ 8.10013976e+207] [ 8.49124180e+207] [ 1.36383518e+208] [ 1.16867532e+208] [ 1.62395255e+208] [ 1.36206073e+208] [ 2.29488874e+208] [ 2.04760220e+208] [ 1.50604318e+208] [ 2.09580304e+208] [ 2.16230466e+208] [ 2.47774618e+208] [ 3.15900483e+208] [ 2.20913199e+208] [ 2.70029238e+208] [ 2.63617299e+208] [ 3.54250580e+208] [ 3.63873467e+208] [ 3.61525756e+208] [ 3.76900993e+208] [ 3.42475942e+208] [ 3.05692641e+208] [ 3.91572905e+208] [ 4.33498370e+208] [ 3.75185017e+208] [ 4.51560992e+208] [ 3.87275493e+208] [ 4.34796211e+208] [ 3.97236496e+208] [ 4.51008715e+208] [ 4.02782659e+208] [ 4.56110391e+208] [ 4.38057782e+208] [ 5.11255045e+208] [ 4.50317351e+208] 
[ 4.55590667e+208] [ 4.35596145e+208] [ 4.74636683e+208] [ 5.02751891e+208] [ 4.57742907e+208] [ 4.74530454e+208] [ 4.97392432e+208] [ 4.72671502e+208] [ 4.81973784e+208] [ 4.38112189e+208] [ 4.96069007e+208]] [[ -7.75024107e+207] [ -3.52622849e+208] [ -2.37645027e+208] [ -3.99177874e+208] [ -5.39910947e+208] [ -5.65979666e+208] [ -9.09057827e+208] [ -7.78974952e+208] [ -1.08243782e+209] [ -9.07875076e+208] [ -1.52964713e+209] [ -1.36481947e+209] [ -1.00384589e+209] [ -1.39694751e+209] [ -1.44127384e+209] [ -1.65152988e+209] [ -2.10561958e+209] [ -1.47248638e+209] [ -1.79986699e+209] [ -1.75712851e+209] [ -2.36124032e+209] [ -2.42538122e+209] [ -2.40973266e+209] [ -2.51221557e+209] [ -2.28275704e+209] [ -2.03757970e+209] [ -2.61001049e+209] [ -2.88946268e+209] [ -2.50077781e+209] [ -3.00985823e+209] [ -2.58136630e+209] [ -2.89811338e+209] [ -2.64776089e+209] [ -3.00617705e+209] [ -2.68472858e+209] [ -3.04018203e+209] [ -2.91985322e+209] [ -3.40774608e+209] [ -3.00156880e+209] [ -3.03671783e+209] [ -2.90344530e+209] [ -3.16366814e+209] [ -3.35106872e+209] [ -3.05106349e+209] [ -3.16296008e+209] [ -3.31534550e+209] [ -3.15056932e+209] [ -3.21257324e+209] [ -2.92021587e+209] [ -3.30652427e+209]] [[ 5.16588617e+208] [ 2.35039076e+209] [ 1.58401158e+209] [ 2.66070106e+209] [ 3.59875063e+209] [ 3.77251043e+209] [ 6.05928152e+209] [ 5.19222034e+209] [ 7.21493757e+209] [ 6.05139796e+209] [ 1.01957899e+210] [ 9.09713899e+209] [ 6.69108680e+209] [ 9.31128689e+209] [ 9.60674198e+209] [ 1.10081936e+210] [ 1.40349068e+210] [ 9.81478763e+209] [ 1.19969274e+210] [ 1.17120562e+210] [ 1.57387346e+210] [ 1.61662627e+210] [ 1.60619579e+210] [ 1.67450529e+210] [ 1.52156080e+210] [ 1.35813902e+210] [ 1.73969003e+210] [ 1.92595755e+210] [ 1.66688151e+210] [ 2.00620663e+210] [ 1.72059738e+210] [ 1.93172363e+210] [ 1.76485238e+210] [ 2.00375295e+210] [ 1.78949301e+210] [ 2.02641881e+210] [ 1.94621422e+210] [ 2.27141688e+210] [ 2.00068134e+210] [ 2.02410976e+210] [ 1.93527760e+210] [ 2.10872789e+210] [ 2.23363885e+210] [ 2.03367179e+210] [ 2.10825593e+210] [ 2.20982771e+210] [ 2.09999693e+210] [ 2.14132535e+210] [ 1.94645594e+210] [ 2.20394796e+210]] [[ -3.44329675e+209] [ -1.56664174e+210] [ -1.05581535e+210] [ -1.77347758e+210] [ -2.39873004e+210] [ -2.51454881e+210] [ -4.03878516e+210] [ -3.46084967e+210] [ -4.80908218e+210] [ -4.03353041e+210] [ -6.79595504e+210] [ -6.06365454e+210] [ -4.45991195e+210] [ -6.20639380e+210] [ -6.40332798e+210] [ -7.33745884e+210] [ -9.35490007e+210] [ -6.54199982e+210] [ -7.99649468e+210] [ -7.80661511e+210] [ -1.04905784e+211] [ -1.07755452e+211] [ -1.07060213e+211] [ -1.11613350e+211] [ -1.01418908e+211] [ -9.05261075e+210] [ -1.15958208e+211] [ -1.28373781e+211] [ -1.11105191e+211] [ -1.33722744e+211] [ -1.14685597e+211] [ -1.28758116e+211] [ -1.17635392e+211] [ -1.33559196e+211] [ -1.19277802e+211] [ -1.35069978e+211] [ -1.29723979e+211] [ -1.51400207e+211] [ -1.33354459e+211] [ -1.34916070e+211] [ -1.28995004e+211] [ -1.40556250e+211] [ -1.48882131e+211] [ -1.35553422e+211] [ -1.40524792e+211] [ -1.47295010e+211] [ -1.39974292e+211] [ -1.42729018e+211] [ -1.29740091e+211] [ -1.46903099e+211]] [[ 2.29511300e+210] [ 1.04423757e+211] [ 7.03748681e+210] [ 1.18210301e+211] [ 1.59886205e+211] [ 1.67606050e+211] [ 2.69203296e+211] [ 2.30681282e+211] [ 3.20547076e+211] [ 2.68853043e+211] [ 4.52981137e+211] [ 4.04169997e+211] [ 2.97273301e+211] [ 4.13684215e+211] [ 4.26810768e+211] [ 4.89074815e+211] [ 6.23546396e+211] [ 4.36053873e+211] [ 5.33002534e+211] [ 5.20346201e+211] [ 
6.99244491e+211] [ 7.18238816e+211] [ 7.13604739e+211] [ 7.43953454e+211] [ 6.76002890e+211] [ 6.03397446e+211] [ 7.72913896e+211] [ 8.55669304e+211] [ 7.40566342e+211] [ 8.91322566e+211] [ 7.64431366e+211] [ 8.58231072e+211] [ 7.84093088e+211] [ 8.90232443e+211] [ 7.95040491e+211] [ 9.00302486e+211] [ 8.64668988e+211] [ 1.00915085e+212] [ 8.88867780e+211] [ 8.99276618e+211] [ 8.59810037e+211] [ 9.36870973e+211] [ 9.92366731e+211] [ 9.03524860e+211] [ 9.36661291e+211] [ 9.81787857e+211] [ 9.32991961e+211] [ 9.51353455e+211] [ 8.64776380e+211] [ 9.79175587e+211]] [[ -1.52979661e+211] [ -6.96031562e+211] [ -4.69080322e+211] [ -7.87925116e+211] [ -1.06571387e+212] [ -1.11717012e+212] [ -1.79436171e+212] [ -1.53759506e+212] [ -2.13659122e+212] [ -1.79202712e+212] [ -3.01932414e+212] [ -2.69397581e+212] [ -1.98146099e+212] [ -2.75739237e+212] [ -2.84488679e+212] [ -3.25990482e+212] [ -4.15621872e+212] [ -2.90649627e+212] [ -3.55270293e+212] [ -3.46834275e+212] [ -4.66078076e+212] [ -4.78738652e+212] [ -4.75649831e+212] [ -4.95878622e+212] [ -4.50586497e+212] [ -4.02191685e+212] [ -5.15182066e+212] [ -5.70342287e+212] [ -4.93620958e+212] [ -5.94106799e+212] [ -5.09528076e+212] [ -5.72049822e+212] [ -5.22633503e+212] [ -5.93380183e+212] [ -5.29930442e+212] [ -6.00092322e+212] [ -5.76340984e+212] [ -6.72644678e+212] [ -5.92470574e+212] [ -5.99408535e+212] [ -5.73102274e+212] [ -6.24466873e+212] [ -6.61457305e+212] [ -6.02240180e+212] [ -6.24327110e+212] [ -6.54406007e+212] [ -6.21881335e+212] [ -6.34120101e+212] [ -5.76412566e+212] [ -6.52664811e+212]] [[ 1.01967862e+212] [ 4.63936514e+212] [ 3.12663248e+212] [ 5.25187723e+212] [ 7.10346487e+212] [ 7.44644408e+212] [ 1.19602323e+213] [ 1.02487665e+213] [ 1.42413467e+213] [ 1.19446711e+213] [ 2.01251608e+213] [ 1.79565671e+213] [ 1.32073336e+213] [ 1.83792672e+213] [ 1.89624570e+213] [ 2.17287399e+213] [ 2.77030774e+213] [ 1.93731120e+213] [ 2.36803717e+213] [ 2.31180730e+213] [ 3.10662115e+213] [ 3.19100962e+213] [ 3.17042123e+213] [ 3.30525527e+213] [ 3.00336278e+213] [ 2.68078947e+213] [ 3.43392145e+213] [ 3.80158928e+213] [ 3.29020692e+213] [ 3.95999049e+213] [ 3.39623506e+213] [ 3.81297077e+213] [ 3.48358865e+213] [ 3.95514727e+213] [ 3.53222605e+213] [ 3.99988671e+213] [ 3.84157330e+213] [ 4.48348097e+213] [ 3.94908431e+213] [ 3.99532896e+213] [ 3.81998583e+213] [ 4.16235411e+213] [ 4.40891207e+213] [ 4.01420316e+213] [ 4.16142253e+213] [ 4.36191198e+213] [ 4.14512034e+213] [ 4.22669725e+213] [ 3.84205043e+213] [ 4.35030613e+213]] [[ -6.79661917e+212] [ -3.09234667e+213] [ -2.08404195e+213] [ -3.50061368e+213] [ -4.73478058e+213] [ -4.96339174e+213] [ -7.97203570e+213] [ -6.83126634e+213] [ -9.49250166e+213] [ -7.96166350e+213] [ -1.34143298e+214] [ -1.19688640e+214] [ -8.80328512e+213] [ -1.22506127e+214] [ -1.26393352e+214] [ -1.44831878e+214] [ -1.84653539e+214] [ -1.29130553e+214] [ -1.57840387e+214] [ -1.54092411e+214] [ -2.07070349e+214] [ -2.12695222e+214] [ -2.11322913e+214] [ -2.20310211e+214] [ -2.00187711e+214] [ -1.78686742e+214] [ -2.28886394e+214] [ -2.53393117e+214] [ -2.19307171e+214] [ -2.63951275e+214] [ -2.26374426e+214] [ -2.54151745e+214] [ -2.32196938e+214] [ -2.63628452e+214] [ -2.35438840e+214] [ -2.66610538e+214] [ -2.56058234e+214] [ -2.98844283e+214] [ -2.63224329e+214] [ -2.66306743e+214] [ -2.54619331e+214] [ -2.77439725e+214] [ -2.93873928e+214] [ -2.67564794e+214] [ -2.77377631e+214] [ -2.90741160e+214] [ -2.76291016e+214] [ -2.81728486e+214] [ -2.56090036e+214] [ -2.89967577e+214]] [[ 4.53025404e+213] [ 
3.44036903e+277] [ 3.94225725e+277] [ 5.02618460e+277] [ 3.51487440e+277] [ 4.29634290e+277] [ 4.19432472e+277] [ 5.63635989e+277] [ 5.78946635e+277] [ 5.75211272e+277] [ 5.99674286e+277] [ 5.44901765e+277] [ 4.86377112e+277] [ 6.23018263e+277] [ 6.89724439e+277] [ 5.96944055e+277] [ 7.18463259e+277] [ 6.16180799e+277] [ 6.91789389e+277] [ 6.32029410e+277] [ 7.17584550e+277] [ 6.40853720e+277] [ 7.25701652e+277] [ 6.96978762e+277] [ 8.13440425e+277] [ 7.16484544e+277] [ 7.24874736e+277] [ 6.93062136e+277] [ 7.55178202e+277] [ 7.99911349e+277] [ 7.28299092e+277] [ 7.55009184e+277] [ 7.91384096e+277] [ 7.52051469e+277] [ 7.66852012e+277] [ 6.97065327e+277] [ 7.89278439e+277]] [[ -1.23311436e+277] [ -5.61046162e+277] [ -3.78108880e+277] [ -6.35118271e+277] [ -8.59033852e+277] [ -9.00510900e+277] [ -1.44637083e+278] [ -1.23940042e+278] [ -1.72222981e+278] [ -1.44448900e+278] [ -2.43376926e+278] [ -2.17151760e+278] [ -1.59718487e+278] [ -2.22263543e+278] [ -2.29316155e+278] [ -2.62769274e+278] [ -3.35017934e+278] [ -2.34282274e+278] [ -2.86370684e+278] [ -2.79570711e+278] [ -3.75688876e+278] [ -3.85894115e+278] [ -3.83404326e+278] [ -3.99710031e+278] [ -3.63201669e+278] [ -3.24192341e+278] [ -4.15269848e+278] [ -4.59732530e+278] [ -3.97890209e+278] [ -4.78888254e+278] [ -4.10712368e+278] [ -4.61108913e+278] [ -4.21276185e+278] [ -4.78302554e+278] [ -4.27157986e+278] [ -4.83712970e+278] [ -4.64567865e+278] [ -5.42194830e+278] [ -4.77569351e+278] [ -4.83161793e+278] [ -4.61957256e+278] [ -5.03360424e+278] [ -5.33177089e+278] [ -4.85444281e+278] [ -5.03247766e+278] [ -5.27493290e+278] [ -5.01276315e+278] [ -5.11141546e+278] [ -4.64625565e+278] [ -5.26089774e+278]] [[ 8.21926489e+277] [ 3.73962640e+278] [ 2.52026668e+278] [ 4.23335050e+278] [ 5.72584911e+278] [ 6.00231239e+278] [ 9.64071571e+278] [ 8.26116429e+278] [ 1.14794405e+279] [ 9.62817243e+278] [ 1.62221727e+279] [ 1.44741469e+279] [ 1.06459595e+279] [ 1.48148703e+279] [ 1.52849588e+279] [ 1.75147605e+279] [ 2.23304604e+279] [ 1.56159731e+279] [ 1.90879011e+279] [ 1.86346522e+279] [ 2.50413626e+279] [ 2.57215879e+279] [ 2.55556322e+279] [ 2.66424812e+279] [ 2.42090338e+279] [ 2.16088856e+279] [ 2.76796133e+279] [ 3.06432522e+279] [ 2.65211818e+279] [ 3.19200678e+279] [ 2.73758368e+279] [ 3.07349944e+279] [ 2.80799629e+279] [ 3.18810283e+279] [ 2.84720116e+279] [ 3.22416570e+279] [ 3.09655492e+279] [ 3.61397375e+279] [ 3.18321570e+279] [ 3.22049186e+279] [ 3.07915403e+279] [ 3.35512486e+279] [ 3.55386642e+279] [ 3.23570567e+279] [ 3.35437395e+279] [ 3.51598133e+279] [ 3.34123333e+279] [ 3.40698956e+279] [ 3.09693951e+279] [ 3.50662626e+279]] [[ -5.47851176e+278] [ -2.49263011e+279] [ -1.67987172e+279] [ -2.82171956e+279] [ -3.81653738e+279] [ -4.00081266e+279] [ -6.42597301e+279] [ -5.50643960e+279] [ -7.65156626e+279] [ -6.41761235e+279] [ -1.08128118e+280] [ -9.64767346e+279] [ -7.09601345e+279] [ -9.87478102e+279] [ -1.01881163e+280] [ -1.16743799e+280] [ -1.48842617e+280] [ -1.04087523e+280] [ -1.27229493e+280] [ -1.24208384e+280] [ -1.66912006e+280] [ -1.71446016e+280] [ -1.70339846e+280] [ -1.77584186e+280] [ -1.61364158e+280] [ -1.44032995e+280] [ -1.84497141e+280] [ -2.04251134e+280] [ -1.76775671e+280] [ -2.12761687e+280] [ -1.82472332e+280] [ -2.04862637e+280] [ -1.87165651e+280] [ -2.12501471e+280] [ -1.89778833e+280] [ -2.14905225e+280] [ -2.06399389e+280] [ -2.40887694e+280] [ -2.12175722e+280] [ -2.14660346e+280] [ -2.05239541e+280] [ -2.23634245e+280] [ -2.36881269e+280] [ -2.15674416e+280] [ -2.23584193e+280] [ 
-2.34356056e+280] [ -2.22708312e+280] [ -2.27091262e+280] [ -2.06425024e+280] [ -2.33732499e+280]] [[ 3.65167584e+279] [ 1.66145070e+280] [ 1.11971047e+280] [ 1.88080369e+280] [ 2.54389476e+280] [ 2.66672257e+280] [ 4.28320162e+280] [ 3.67029102e+280] [ 5.10011495e+280] [ 4.27762886e+280] [ 7.20722803e+280] [ 6.43061067e+280] [ 4.72981387e+280] [ 6.58198813e+280] [ 6.79084023e+280] [ 7.78150219e+280] [ 9.92103356e+280] [ 6.93790412e+280] [ 8.48042112e+280] [ 8.27905054e+280] [ 1.11254401e+281] [ 1.14276523e+281] [ 1.13539211e+281] [ 1.18367891e+281] [ 1.07556509e+281] [ 9.60045050e+280] [ 1.22975688e+281] [ 1.36142618e+281] [ 1.17828979e+281] [ 1.41815287e+281] [ 1.21626062e+281] [ 1.36550212e+281] [ 1.24754371e+281] [ 1.41641841e+281] [ 1.26496175e+281] [ 1.43244051e+281] [ 1.37574527e+281] [ 1.60562542e+281] [ 1.41424714e+281] [ 1.43080829e+281] [ 1.36801436e+281] [ 1.49062338e+281] [ 1.57892078e+281] [ 1.43756752e+281] [ 1.49028976e+281] [ 1.56208910e+281] [ 1.48445162e+281] [ 1.51366596e+281] [ 1.37591614e+281] [ 1.55793281e+281]] [[ -2.43400709e+280] [ -1.10743203e+281] [ -7.46337665e+280] [ -1.25364072e+281] [ -1.69562090e+281] [ -1.77749119e+281] [ -2.85494758e+281] [ -2.44641495e+281] [ -3.39945726e+281] [ -2.85123309e+281] [ -4.80394342e+281] [ -4.28629283e+281] [ -3.15263485e+281] [ -4.38719275e+281] [ -4.52640212e+281] [ -5.18672312e+281] [ -6.61281753e+281] [ -4.62442685e+281] [ -5.65258419e+281] [ -5.51836159e+281] [ -7.41560894e+281] [ -7.61704705e+281] [ -7.56790186e+281] [ -7.88975524e+281] [ -7.16912773e+281] [ -6.39913442e+281] [ -8.19688574e+281] [ -9.07452117e+281] [ -7.85383432e+281] [ -9.45263019e+281] [ -8.10692705e+281] [ -9.10168917e+281] [ -8.31544301e+281] [ -9.44106925e+281] [ -8.43154212e+281] [ -9.54786380e+281] [ -9.16996438e+281] [ -1.07022195e+282] [ -9.42659677e+281] [ -9.53698430e+281] [ -9.11843437e+281] [ -9.93567895e+281] [ -1.05242211e+282] [ -9.58203764e+281] [ -9.93345523e+281] [ -1.04120303e+282] [ -9.89454135e+281] [ -1.00892682e+282] [ -9.17110329e+281] [ -1.03843267e+282]] [[ 1.62237580e+281] [ 7.38153528e+281] [ 4.97467806e+281] [ 8.35608233e+281] [ 1.13020801e+282] [ 1.18477826e+282] [ 1.90295167e+282] [ 1.63064620e+282] [ 2.26589200e+282] [ 1.90047579e+282] [ 3.20204554e+282] [ 2.85700801e+282] [ 2.10137370e+282] [ 2.92426237e+282] [ 3.01705172e+282] [ 3.45718552e+282] [ 4.40774194e+282] [ 3.08238963e+282] [ 3.76770299e+282] [ 3.67823756e+282] [ 4.94283872e+282] [ 5.07710633e+282] [ 5.04434884e+282] [ 5.25887867e+282] [ 4.77854785e+282] [ 4.26531249e+282] [ 5.46359504e+282] [ 6.04857875e+282] [ 5.23493576e+282] [ 6.30060551e+282] [ 5.40363350e+282] [ 6.06668745e+282] [ 5.54261882e+282] [ 6.29289962e+282] [ 5.62000413e+282] [ 6.36408302e+282] [ 6.11219596e+282] [ 7.13351332e+282] [ 6.28325305e+282] [ 6.35683133e+282] [ 6.07784888e+282] [ 6.62257935e+282] [ 7.01486929e+282] [ 6.38686142e+282] [ 6.62109714e+282] [ 6.94008905e+282] [ 6.59515928e+282] [ 6.72495351e+282] [ 6.11295510e+282] [ 6.92162337e+282]] [[ -1.08138684e+282] [ -4.92012708e+282] [ -3.31584790e+282] [ -5.56970677e+282] [ -7.53334753e+282] [ -7.89708293e+282] [ -1.26840335e+283] [ -1.08689943e+283] [ -1.51031949e+283] [ -1.26675306e+283] [ -2.13430816e+283] [ -1.90432504e+283] [ -1.40066060e+283] [ -1.94915312e+283] [ -2.01100141e+283] [ -2.30437050e+283] [ -2.93795934e+283] [ -2.05455209e+283] [ -2.51134444e+283] [ -2.45171168e+283] [ -3.29462554e+283] [ -3.38412097e+283] [ -3.36228663e+283] [ -3.50528046e+283] [ -3.18511825e+283] [ -2.84302367e+283] [ -3.64173318e+283] [ 
-4.03165127e+283] [ -3.48932143e+283] [ -4.19963851e+283] [ -3.60176610e+283] [ -4.04372154e+283] [ -3.69440610e+283] [ -4.19450218e+283] [ -3.74598691e+283] [ -4.24194914e+283] [ -4.07405502e+283] [ -4.75480923e+283] [ -4.18807231e+283] [ -4.23711556e+283] [ -4.05116114e+283] [ -4.41424863e+283] [ -4.67572762e+283] [ -4.25713198e+283] [ -4.41326067e+283] [ -4.62588321e+283] [ -4.39597192e+283] [ -4.48248564e+283] [ -4.07456102e+283] [ -4.61357500e+283]] [[ 7.20793235e+282] [ 3.27948721e+283] [ 2.21016258e+283] [ 3.71246144e+283] [ 5.02131682e+283] [ 5.26376291e+283] [ 8.45448194e+283] [ 7.24467627e+283] [ 1.00669624e+284] [ 8.44348204e+283] [ 1.42261291e+284] [ 1.26931876e+284] [ 9.33603635e+283] [ 1.29919870e+284] [ 1.34042339e+284] [ 1.53596715e+284] [ 1.95828277e+284] [ 1.36945188e+284] [ 1.67392464e+284] [ 1.63417671e+284] [ 2.19601692e+284] [ 2.25566967e+284] [ 2.24111609e+284] [ 2.33642795e+284] [ 2.12302535e+284] [ 1.89500385e+284] [ 2.42737986e+284] [ 2.68727790e+284] [ 2.32579053e+284] [ 2.79924899e+284] [ 2.40073999e+284] [ 2.69532328e+284] [ 2.46248874e+284] [ 2.79582540e+284] [ 2.49686968e+284] [ 2.82745094e+284] [ 2.71554192e+284] [ 3.16929538e+284] [ 2.79153960e+284] [ 2.82422914e+284] [ 2.70028211e+284] [ 2.94229634e+284] [ 3.11658391e+284] [ 2.83757099e+284] [ 2.94163782e+284] [ 3.08336036e+284] [ 2.93011408e+284] [ 2.98777939e+284] [ 2.71587919e+284] [ 3.07515638e+284]] [[ -4.80441288e+283] [ -2.18592653e+284] [ -1.47317331e+284] [ -2.47452344e+284] [ -3.34693474e+284] [ -3.50853602e+284] [ -5.63529456e+284] [ -4.82890437e+284] [ -6.71008572e+284] [ -5.62796263e+284] [ -9.48235842e+284] [ -8.46058359e+284] [ -6.22289044e+284] [ -8.65974689e+284] [ -8.93452805e+284] [ -1.02379157e+285] [ -1.30528402e+285] [ -9.12801610e+284] [ -1.11574647e+285] [ -1.08925268e+285] [ -1.46374459e+285] [ -1.50350585e+285] [ -1.49380523e+285] [ -1.55733489e+285] [ -1.41509241e+285] [ -1.26310576e+285] [ -1.61795845e+285] [ -1.79119225e+285] [ -1.55024457e+285] [ -1.86582604e+285] [ -1.60020178e+285] [ -1.79655486e+285] [ -1.64136011e+285] [ -1.86354406e+285] [ -1.66427656e+285] [ -1.88462392e+285] [ -1.81003149e+285] [ -2.11247870e+285] [ -1.86068738e+285] [ -1.88247644e+285] [ -1.79986014e+285] [ -1.96117357e+285] [ -2.07734412e+285] [ -1.89136939e+285] [ -1.96073464e+285] [ -2.05519912e+285] [ -1.95305355e+285] [ -1.99149009e+285] [ -1.81025630e+285] [ -2.04973080e+285]] [[ 3.20235846e+284] [ 1.45701889e+285] [ 9.81936635e+284] [ 1.64938178e+285] [ 2.23088337e+285] [ 2.33859793e+285] [ 3.75617867e+285] [ 3.21868314e+285] [ 4.47257559e+285] [ 3.75129161e+285] [ 6.32042071e+285] [ 5.63936158e+285] [ 4.14783790e+285] [ 5.77211293e+285] [ 5.95526700e+285] [ 6.82403379e+285] [ 8.70030832e+285] [ 6.08423554e+285] [ 7.43695483e+285] [ 7.26036172e+285] [ 9.75651964e+285] [ 1.00215464e+286] [ 9.95688739e+285] [ 1.03803413e+286] [ 9.43223087e+285] [ 8.41917113e+285] [ 1.07844248e+286] [ 1.19391064e+286] [ 1.03330811e+286] [ 1.24365744e+286] [ 1.06660685e+286] [ 1.19748506e+286] [ 1.09404074e+286] [ 1.24213640e+286] [ 1.10931559e+286] [ 1.25618707e+286] [ 1.20646785e+286] [ 1.40806259e+286] [ 1.24023229e+286] [ 1.25475568e+286] [ 1.19968818e+286] [ 1.30721088e+286] [ 1.38464380e+286] [ 1.26068323e+286] [ 1.30691831e+286] [ 1.36988316e+286] [ 1.30179851e+286] [ 1.32741820e+286] [ 1.20661769e+286] [ 1.36623827e+286]] [[ -2.13451674e+285] [ -9.71168983e+285] [ -6.54505174e+285] [ -1.09938755e+286] [ -1.48698465e+286] [ -1.55878128e+286] [ -2.50366295e+286] [ -2.14539787e+286] [ -2.98117390e+286] [ 
-2.50040551e+286] [ -4.21284624e+286] [ -3.75888953e+286] [ -2.76472154e+286] [ -3.84737430e+286] [ -3.96945478e+286] [ -4.54852713e+286] [ -5.79914895e+286] [ -4.05541814e+286] [ -4.95706672e+286] [ -4.83935943e+286] [ -6.50316157e+286] [ -6.67981389e+286] [ -6.63671574e+286] [ -6.91896693e+286] [ -6.28700843e+286] [ -5.61175830e+286] [ -7.18830683e+286] [ -7.95795434e+286] [ -6.88746588e+286] [ -8.28953925e+286] [ -7.10941702e+286] [ -7.98177948e+286] [ -7.29227631e+286] [ -8.27940082e+286] [ -7.39409010e+286] [ -8.37305492e+286] [ -8.04165382e+286] [ -9.38537393e+286] [ -8.26670909e+286] [ -8.36351408e+286] [ -7.99646427e+286] [ -8.71315168e+286] [ -9.22927713e+286] [ -8.40302387e+286] [ -8.71120157e+286] [ -9.13089076e+286] [ -8.67707582e+286] [ -8.84784264e+286] [ -8.04265259e+286] [ -9.10659595e+286]] [[ 1.42275193e+286] [ 6.47328047e+286] [ 4.36257298e+286] [ 7.32791520e+286] [ 9.91142518e+286] [ 1.03899822e+287] [ 1.66880458e+287] [ 1.43000471e+287] [ 1.98708722e+287] [ 1.66663335e+287] [ 2.80805254e+287] [ 2.50546986e+287] [ 1.84281194e+287] [ 2.56444897e+287] [ 2.64582112e+287] [ 3.03179903e+287] [ 3.86539504e+287] [ 2.70311959e+287] [ 3.30410915e+287] [ 3.22565191e+287] [ 4.33465128e+287] [ 4.45239805e+287] [ 4.42367118e+287] [ 4.61180436e+287] [ 4.19057544e+287] [ 3.74049070e+287] [ 4.79133158e+287] [ 5.30433646e+287] [ 4.59080748e+287] [ 5.52535280e+287] [ 4.73874796e+287] [ 5.32021699e+287] [ 4.86063195e+287] [ 5.51859507e+287] [ 4.92849544e+287] [ 5.58101977e+287] [ 5.36012594e+287] [ 6.25577616e+287] [ 5.51013546e+287] [ 5.57466036e+287] [ 5.33000507e+287] [ 5.80770963e+287] [ 6.15173059e+287] [ 5.60099543e+287] [ 5.80640980e+287] [ 6.08615162e+287] [ 5.78366344e+287] [ 5.89748725e+287] [ 5.36079167e+287] [ 6.06995804e+287]] [[ -9.48328507e+286] [ -4.31473418e+287] [ -2.90785218e+287] [ -4.88438688e+287] [ -6.60641312e+287] [ -6.92539299e+287] [ -1.11233372e+288] [ -9.53162807e+287] [ -1.32448350e+288] [ -1.11088650e+288] [ -1.87169401e+288] [ -1.67000897e+288] [ -1.22831750e+288] [ -1.70932121e+288] [ -1.76355943e+288] [ -2.02083117e+288] [ -2.57646060e+288] [ -1.80175145e+288] [ -2.20233817e+288] [ -2.15004288e+288] [ -2.88924111e+288] [ -2.96772465e+288] [ -2.94857689e+288] [ -3.07397617e+288] [ -2.79320805e+288] [ -2.49320621e+288] [ -3.19363918e+288] [ -3.53558013e+288] [ -3.05998080e+288] [ -3.68289752e+288] [ -3.15858982e+288] [ -3.54616522e+288] [ -3.23983101e+288] [ -3.67839319e+288] [ -3.28506510e+288] [ -3.72000207e+288] [ -3.57276635e+288] [ -4.16975772e+288] [ -3.67275447e+288] [ -3.71576324e+288] [ -3.55268942e+288] [ -3.87110112e+288] [ -4.10040664e+288] [ -3.73331675e+288] [ -3.87023472e+288] [ -4.05669530e+288] [ -3.85507324e+288] [ -3.93094196e+288] [ -3.57321009e+288] [ -4.04590154e+288]] [[ 6.32103837e+287] [ 2.87596546e+288] [ 1.93821498e+288] [ 3.25566474e+288] [ 4.40347311e+288] [ 4.61608762e+288] [ 7.41420731e+288] [ 6.35326117e+288] [ 8.82828149e+288] [ 7.40456089e+288] [ 1.24756870e+289] [ 1.11313650e+289] [ 8.18729160e+288] [ 1.13933989e+289] [ 1.17549212e+289] [ 1.34697536e+289] [ 1.71732751e+289] [ 1.20094882e+289] [ 1.46795799e+289] [ 1.43310081e+289] [ 1.92580986e+289] [ 1.97812269e+289] [ 1.96535985e+289] [ 2.04894414e+289] [ 1.86179948e+289] [ 1.66183469e+289] [ 2.12870494e+289] [ 2.35662405e+289] [ 2.03961559e+289] [ 2.45481775e+289] [ 2.10534296e+289] [ 2.36367949e+289] [ 2.15949388e+289] [ 2.45181541e+289] [ 2.18964445e+289] [ 2.47954961e+289] [ 2.38141034e+289] [ 2.77933209e+289] [ 2.44805695e+289] [ 2.47672424e+289] [ 2.36802816e+289] [ 
2.58026396e+289] [ 2.73310646e+289] [ 2.48842445e+289] [ 2.57968647e+289] [ 2.70397087e+289] [ 2.56958066e+289] [ 2.62015059e+289] [ 2.38170612e+289] [ 2.69677635e+289]] [[ -4.21325793e+288] [ -1.91696104e+289] [ -1.29190794e+289] [ -2.17004779e+289] [ -2.93511397e+289] [ -3.07683116e+289] [ -4.94190447e+289] [ -4.23473589e+289] [ -5.88444886e+289] [ -4.93547470e+289] [ -8.31560959e+289] [ -7.41955819e+289] [ -5.45720011e+289] [ -7.59421561e+289] [ -7.83518656e+289] [ -8.97819994e+289] [ -1.14467645e+290] [ -8.00486703e+289] [ -9.78460386e+289] [ -9.55226500e+289] [ -1.28363936e+290] [ -1.31850823e+290] [ -1.31000122e+290] [ -1.36571393e+290] [ -1.24097355e+290] [ -1.10768797e+290] [ -1.41887811e+290] [ -1.57079650e+290] [ -1.35949603e+290] [ -1.63624705e+290] [ -1.40330629e+290] [ -1.57549928e+290] [ -1.43940033e+290] [ -1.63424585e+290] [ -1.45949705e+290] [ -1.65273195e+290] [ -1.58731769e+290] [ -1.85255053e+290] [ -1.63174067e+290] [ -1.65084871e+290] [ -1.57839786e+290] [ -1.71986262e+290] [ -1.82173906e+290] [ -1.65864743e+290] [ -1.71947769e+290] [ -1.80231887e+290] [ -1.71274172e+290] [ -1.74644886e+290] [ -1.58751484e+290] [ -1.79752339e+290]] [[ 2.80832695e+289] [ 1.27774123e+290] [ 8.61115067e+289] [ 1.44643499e+290] [ 1.95638620e+290] [ 2.05084711e+290] [ 3.29400282e+290] [ 2.82264298e+290] [ 3.92225128e+290] [ 3.28971709e+290] [ 5.54272986e+290] [ 4.94547107e+290] [ 3.63747067e+290] [ 5.06188814e+290] [ 5.22250618e+290] [ 5.98437629e+290] [ 7.62978619e+290] [ 5.33560589e+290] [ 6.52188097e+290] [ 6.36701661e+290] [ 8.55603688e+290] [ 8.78845365e+290] [ 8.73175055e+290] [ 9.10310095e+290] [ 8.27164994e+290] [ 7.38324129e+290] [ 9.45746429e+290] [ 1.04700691e+291] [ 9.06165585e+290] [ 1.09063265e+291] [ 9.35367107e+290] [ 1.05014152e+291] [ 9.59425418e+290] [ 1.08929877e+291] [ 9.72820788e+290] [ 1.10162059e+291] [ 1.05801903e+291] [ 1.23480871e+291] [ 1.08762895e+291] [ 1.10036532e+291] [ 1.05207356e+291] [ 1.14636621e+291] [ 1.21427147e+291] [ 1.10556352e+291] [ 1.14610964e+291] [ 1.20132704e+291] [ 1.14161981e+291] [ 1.16408715e+291] [ 1.05815043e+291] [ 1.19813063e+291]] [[ -1.87187692e+290] [ -8.51672311e+290] [ -5.73972135e+290] [ -9.64114332e+290] [ -1.30401988e+291] [ -1.36698235e+291] [ -2.19560185e+291] [ -1.88141920e+291] [ -2.61435786e+291] [ -2.19274521e+291] [ -3.69448011e+291] [ -3.29638012e+291] [ -2.42453870e+291] [ -3.37397736e+291] [ -3.48103655e+291] [ -3.98885743e+291] [ -5.08559754e+291] [ -3.55642262e+291] [ -4.34712861e+291] [ -4.24390451e+291] [ -5.70298551e+291] [ -5.85790180e+291] [ -5.82010662e+291] [ -6.06762846e+291] [ -5.51342876e+291] [ -4.92126422e+291] [ -6.30382765e+291] [ -6.97877453e+291] [ -6.04000343e+291] [ -7.26955986e+291] [ -6.23464477e+291] [ -6.99966813e+291] [ -6.39500430e+291] [ -7.26066890e+291] [ -6.48429050e+291] [ -7.34279941e+291] [ -7.05217528e+291] [ -8.23055848e+291] [ -7.24953881e+291] [ -7.33443251e+291] [ -7.01254604e+291] [ -7.64104925e+291] [ -8.09366848e+291] [ -7.36908086e+291] [ -7.63933909e+291] [ -8.00738798e+291] [ -7.60941232e+291] [ -7.75916728e+291] [ -7.05305116e+291] [ -7.98608250e+291]] [[ 1.24769062e+291] [ 5.67678111e+291] [ 3.82578385e+291] [ 6.42625803e+291] [ 8.69188220e+291] [ 9.11155555e+291] [ 1.46346793e+292] [ 1.25405098e+292] [ 1.74258774e+292] [ 1.46156385e+292] [ 2.46253807e+292] [ 2.19718643e+292] [ 1.61606469e+292] [ 2.24890850e+292] [ 2.32026828e+292] [ 2.65875387e+292] [ 3.38978075e+292] [ 2.37051651e+292] [ 2.89755781e+292] [ 2.82875428e+292] [ 3.80129776e+292] [ 3.90455647e+292] [ 
3.87936427e+292] [ 4.04434877e+292] [ 3.67494961e+292] [ 3.28024516e+292] [ 4.20178621e+292] [ 4.65166883e+292] [ 4.02593543e+292] [ 4.84549040e+292] [ 4.15567270e+292] [ 4.66559536e+292] [ 4.26255957e+292] [ 4.83956418e+292] [ 4.32207286e+292] [ 4.89430788e+292] [ 4.70059375e+292] [ 5.48603944e+292] [ 4.83214547e+292] [ 4.88873096e+292] [ 4.67417907e+292] [ 5.09310488e+292] [ 5.39479607e+292] [ 4.91182565e+292] [ 5.09196499e+292] [ 5.33728621e+292] [ 5.07201744e+292] [ 5.17183589e+292] [ 4.70117757e+292] [ 5.32308515e+292]] [[ -8.31642222e+291] [ -3.78383134e+292] [ -2.55005795e+292] [ -4.28339160e+292] [ -5.79353256e+292] [ -6.07326383e+292] [ -9.75467556e+292] [ -8.35881690e+292] [ -1.16151354e+293] [ -9.74198402e+292] [ -1.64139299e+293] [ -1.46452412e+293] [ -1.07718020e+293] [ -1.49899922e+293] [ -1.54656374e+293] [ -1.77217969e+293] [ -2.25944217e+293] [ -1.58005646e+293] [ -1.93135332e+293] [ -1.88549266e+293] [ -2.53373686e+293] [ -2.60256347e+293] [ -2.58577173e+293] [ -2.69574136e+293] [ -2.44952011e+293] [ -2.18643174e+293] [ -2.80068053e+293] [ -3.10054764e+293] [ -2.68346804e+293] [ -3.22973849e+293] [ -2.76994379e+293] [ -3.10983030e+293] [ -2.84118872e+293] [ -3.22578839e+293] [ -2.88085702e+293] [ -3.26227754e+293] [ -3.13315831e+293] [ -3.65669339e+293] [ -3.22084348e+293] [ -3.25856027e+293] [ -3.11555174e+293] [ -3.39478473e+293] [ -3.59587555e+293] [ -3.27395393e+293] [ -3.39402494e+293] [ -3.55754263e+293] [ -3.38072900e+293] [ -3.44726251e+293] [ -3.13354745e+293] [ -3.54807698e+293]] [[ 5.54327151e+292] [ 2.52209471e+293] [ 1.69972895e+293] [ 2.85507421e+293] [ 3.86165146e+293] [ 4.04810500e+293] [ 6.50193240e+293] [ 5.57152948e+293] [ 7.74201301e+293] [ 6.49347292e+293] [ 1.09406266e+294] [ 9.76171556e+293] [ 7.17989318e+293] [ 9.99150769e+293] [ 1.03085468e+294] [ 1.18123791e+294] [ 1.50602039e+294] [ 1.05317909e+294] [ 1.28733433e+294] [ 1.25676612e+294] [ 1.68885020e+294] [ 1.73472625e+294] [ 1.72353379e+294] [ 1.79683353e+294] [ 1.63271593e+294] [ 1.45735564e+294] [ 1.86678023e+294] [ 2.06665522e+294] [ 1.78865280e+294] [ 2.15276676e+294] [ 1.84629280e+294] [ 2.07284253e+294] [ 1.89378077e+294] [ 2.15013384e+294] [ 1.92022149e+294] [ 2.17445551e+294] [ 2.08839171e+294] [ 2.43735151e+294] [ 2.14683784e+294] [ 2.17197779e+294] [ 2.07665613e+294] [ 2.26277755e+294] [ 2.39681367e+294] [ 2.18223835e+294] [ 2.26227111e+294] [ 2.37126305e+294] [ 2.25340877e+294] [ 2.29775636e+294] [ 2.08865109e+294] [ 2.36495377e+294]] [[ -3.69484115e+293] [ -1.68109018e+294] [ -1.13294621e+294] [ -1.90303608e+294] [ -2.57396534e+294] [ -2.69824505e+294] [ -4.33383199e+294] [ -3.71367637e+294] [ -5.16040179e+294] [ -4.32819336e+294] [ -7.29242240e+294] [ -6.50662489e+294] [ -4.78572351e+294] [ -6.65979173e+294] [ -6.87111261e+294] [ -7.87348487e+294] [ -1.00383070e+295] [ -7.01991489e+294] [ -8.58066550e+294] [ -8.37691458e+294] [ -1.12569504e+295] [ -1.15627349e+295] [ -1.14881322e+295] [ -1.19767080e+295] [ -1.08827900e+295] [ -9.71393439e+294] [ -1.24429345e+295] [ -1.37751917e+295] [ -1.19221798e+295] [ -1.43491640e+295] [ -1.23063765e+295] [ -1.38164329e+295] [ -1.26229053e+295] [ -1.43316144e+295] [ -1.27991446e+295] [ -1.44937294e+295] [ -1.39200752e+295] [ -1.62460501e+295] [ -1.43096451e+295] [ -1.44772142e+295] [ -1.38418522e+295] [ -1.50824357e+295] [ -1.59758471e+295] [ -1.45456055e+295] [ -1.50790601e+295] [ -1.58055406e+295] [ -1.50199885e+295] [ -1.53155853e+295] [ -1.39218041e+295] [ -1.57634864e+295]] [[ 2.46277872e+294] [ 1.12052263e+295] [ 7.55159887e+294] [ 
1.26845961e+295] [ 1.71566430e+295] [ 1.79850235e+295] [ 2.88869501e+295] [ 2.47533324e+295] [ 3.43964117e+295] [ 2.88493661e+295] [ 4.86072932e+295] [ 4.33695974e+295] [ 3.18990115e+295] [ 4.43905237e+295] [ 4.57990729e+295] [ 5.24803373e+295] [ 6.69098556e+295] [ 4.67909074e+295] [ 5.71940160e+295] [ 5.58359240e+295] [ 7.50326651e+295] [ 7.70708575e+295] [ 7.65735964e+295] [ 7.98301754e+295] [ 7.25387173e+295] [ 6.47477657e+295] [ 8.29377854e+295] [ 9.18178822e+295] [ 7.94667202e+295] [ 9.56436675e+295] [ 8.20275647e+295] [ 9.20927737e+295] [ 8.41373724e+295] [ 9.55266915e+295] [ 8.53120871e+295] [ 9.66072608e+295] [ 9.27835963e+295] [ 1.08287271e+296] [ 9.53802559e+295] [ 9.64971797e+295] [ 9.22622050e+295] [ 1.00531255e+296] [ 1.06486246e+296] [ 9.69530388e+295] [ 1.00508755e+296] [ 1.05351076e+296] [ 1.00115016e+296] [ 1.02085302e+296] [ 9.27951201e+295] [ 1.05070765e+296]] [[ -1.64155339e+295] [ -7.46879007e+295] [ -5.03348216e+295] [ -8.45485694e+295] [ -1.14356785e+296] [ -1.19878315e+296] [ -1.92544586e+296] [ -1.64992155e+296] [ -2.29267639e+296] [ -1.92294072e+296] [ -3.23989591e+296] [ -2.89077980e+296] [ -2.12621337e+296] [ -2.95882915e+296] [ -3.05271533e+296] [ -3.49805182e+296] [ -4.45984447e+296] [ -3.11882559e+296] [ -3.81223982e+296] [ -3.72171685e+296] [ -5.00126645e+296] [ -5.13712120e+296] [ -5.10397649e+296] [ -5.32104221e+296] [ -4.83503355e+296] [ -4.31573140e+296] [ -5.52817847e+296] [ -6.12007708e+296] [ -5.29681628e+296] [ -6.37508297e+296] [ -5.46750815e+296] [ -6.13839984e+296] [ -5.60813637e+296] [ -6.36728600e+296] [ -5.68643642e+296] [ -6.43931083e+296] [ -6.18444630e+296] [ -7.21783632e+296] [ -6.35752540e+296] [ -6.43197343e+296] [ -6.14969321e+296] [ -6.70086276e+296] [ -7.09778984e+296] [ -6.46235850e+296] [ -6.69936303e+296] [ -7.02212565e+296] [ -6.67311857e+296] [ -6.80444706e+296] [ -6.18521441e+296] [ -7.00344169e+296]] [[ 1.09416957e+296] [ 4.97828635e+296] [ 3.35504349e+296] [ 5.63554451e+296] [ 7.62239686e+296] [ 7.99043185e+296] [ 1.28339674e+297] [ 1.09974733e+297] [ 1.52817250e+297] [ 1.28172695e+297] [ 2.15953714e+297] [ 1.92683547e+297] [ 1.41721737e+297] [ 1.97219344e+297] [ 2.03477282e+297] [ 2.33160973e+297] [ 2.97268803e+297] [ 2.07883830e+297] [ 2.54103024e+297] [ 2.48069259e+297] [ 3.33357027e+297] [ 3.42412360e+297] [ 3.40203115e+297] [ 3.54671527e+297] [ 3.22276852e+297] [ 2.87663015e+297] [ 3.68478096e+297] [ 4.07930815e+297] [ 3.53056759e+297] [ 4.24928111e+297] [ 3.64434144e+297] [ 4.09152110e+297] [ 3.73807651e+297] [ 4.24408407e+297] [ 3.79026704e+297] [ 4.29209188e+297] [ 4.12221314e+297] [ 4.81101433e+297] [ 4.23757819e+297] [ 4.28720117e+297] [ 4.09904863e+297] [ 4.46642807e+297] [ 4.73099792e+297] [ 4.30745419e+297] [ 4.46542843e+297] [ 4.68056431e+297] [ 4.44793531e+297] [ 4.53547169e+297] [ 4.12272512e+297] [ 4.66811061e+297]] [[ -7.29313504e+296] [ -3.31825299e+297] [ -2.23628822e+297] [ -3.75634528e+297] [ -5.08067222e+297] [ -5.32598419e+297] [ -8.55441970e+297] [ -7.33031330e+297] [ -1.01859608e+298] [ -8.54328978e+297] [ -1.43942917e+298] [ -1.28432298e+298] [ -9.44639468e+297] [ -1.31455612e+298] [ -1.35626811e+298] [ -1.55412333e+298] [ -1.98143101e+298] [ -1.38563974e+298] [ -1.69371157e+298] [ -1.65349380e+298] [ -2.22197534e+298] [ -2.28233323e+298] [ -2.26760762e+298] [ -2.36404613e+298] [ -2.14812097e+298] [ -1.91740409e+298] [ -2.45607315e+298] [ -2.71904336e+298] [ -2.35328297e+298] [ -2.83233803e+298] [ -2.42911838e+298] [ -2.72718385e+298] [ -2.49159704e+298] [ -2.82887397e+298] [ -2.52638439e+298] [ 
-2.86087334e+298] [ -2.74764148e+298] [ -3.20675862e+298] [ -2.82453750e+298] [ -2.85761346e+298] [ -2.73220129e+298] [ -2.97707630e+298] [ -3.15342407e+298] [ -2.87111302e+298] [ -2.97640999e+298] [ -3.11980779e+298] [ -2.96475004e+298] [ -3.02309699e+298] [ -2.74798274e+298] [ -3.11150683e+298]] [[ 4.86120433e+297] [ 2.21176568e+298] [ 1.49058723e+298] [ 2.50377400e+298] [ 3.38649780e+298] [ 3.55000932e+298] [ 5.70190759e+298] [ 4.88598532e+298] [ 6.78940352e+298] [ 5.69448900e+298] [ 9.59444638e+298] [ 8.56059347e+298] [ 6.29644926e+298] [ 8.76211102e+298] [ 9.04014029e+298] [ 1.03589348e+299] [ 1.32071337e+299] [ 9.23591550e+298] [ 1.12893536e+299] [ 1.10212839e+299] [ 1.48104705e+299] [ 1.52127831e+299] [ 1.51146303e+299] [ 1.57574365e+299] [ 1.43181977e+299] [ 1.27803654e+299] [ 1.63708383e+299] [ 1.81236537e+299] [ 1.56856952e+299] [ 1.88788138e+299] [ 1.61911725e+299] [ 1.81779137e+299] [ 1.66076211e+299] [ 1.88557243e+299] [ 1.68394945e+299] [ 1.90690146e+299] [ 1.83142731e+299] [ 2.13744964e+299] [ 1.88268198e+299] [ 1.90472860e+299] [ 1.82113572e+299] [ 1.98435599e+299] [ 2.10189975e+299] [ 1.91372667e+299] [ 1.98391187e+299] [ 2.07949298e+299] [ 1.97613998e+299] [ 2.01503086e+299] [ 1.83165477e+299] [ 2.07396002e+299]] [[ -3.24021253e+298] [ -1.47424185e+299] [ -9.93543798e+298] [ -1.66887860e+299] [ -2.25725393e+299] [ -2.36624176e+299] [ -3.80057927e+299] [ -3.25673018e+299] [ -4.52544449e+299] [ -3.79563444e+299] [ -6.39513241e+299] [ -5.70602269e+299] [ -4.19686818e+299] [ -5.84034325e+299] [ -6.02566233e+299] [ -6.90469854e+299] [ -8.80315191e+299] [ -6.15615536e+299] [ -7.52486472e+299] [ -7.34618416e+299] [ -9.87184837e+299] [ -1.01400079e+300] [ -1.00745846e+300] [ -1.05030441e+300] [ -9.54372629e+299] [ -8.51869149e+299] [ -1.09119041e+300] [ -1.20802348e+300] [ -1.04552252e+300] [ -1.25835832e+300] [ -1.07921487e+300] [ -1.21164015e+300] [ -1.10697305e+300] [ -1.25681930e+300] [ -1.12242846e+300] [ -1.27103606e+300] [ -1.22072912e+300] [ -1.42470685e+300] [ -1.25489268e+300] [ -1.26958775e+300] [ -1.21386932e+300] [ -1.32266301e+300] [ -1.40101124e+300] [ -1.27558537e+300] [ -1.32236698e+300] [ -1.38607611e+300] [ -1.31718666e+300] [ -1.34310920e+300] [ -1.22088074e+300] [ -1.38238814e+300]] [[ 2.15974818e+299] [ 9.82648865e+299] [ 6.62241873e+299] [ 1.11238306e+300] [ 1.50456183e+300] [ 1.57720714e+300] [ 2.53325796e+300] [ 2.17075795e+300] [ 3.01641341e+300] [ 2.52996201e+300] [ 4.26264496e+300] [ 3.80332217e+300] [ 2.79740244e+300] [ 3.89285290e+300] [ 4.01637645e+300] [ 4.60229383e+300] [ 5.86769886e+300] [ 4.10335595e+300] [ 5.01566264e+300] [ 4.89656396e+300] [ 6.58003338e+300] [ 6.75877386e+300] [ 6.71516626e+300] [ 7.00075386e+300] [ 6.36132517e+300] [ 5.67809313e+300] [ 7.27327753e+300] [ 8.05202280e+300] [ 6.96888044e+300] [ 8.38752728e+300] [ 7.19345519e+300] [ 8.07612957e+300] [ 7.37847601e+300] [ 8.37726900e+300] [ 7.48149330e+300] [ 8.47203016e+300] [ 8.13671167e+300] [ 9.49631547e+300] [ 8.36442725e+300] [ 8.46237653e+300] [ 8.09098795e+300] [ 8.81614709e+300] [ 9.33837350e+300] [ 8.50235336e+300] [ 8.81417393e+300] [ 9.23882413e+300] [ 8.77964479e+300] [ 8.95243019e+300] [ 8.13772225e+300] [ 9.21424214e+300]] [[ -1.43956984e+300] [ -6.54979907e+300] [ -4.41414158e+300] [ -7.41453617e+300] [ -1.00285850e+301] [ -1.05127989e+301] [ -1.68853100e+301] [ -1.44690834e+301] [ -2.01057595e+301] [ -1.68633409e+301] [ -2.84124564e+301] [ -2.53508623e+301] [ -1.86459524e+301] [ -2.59476251e+301] [ -2.67709654e+301] [ -3.06763697e+301] [ -3.91108665e+301] [ 
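Before the last iterate and the error message that close this output, a short aside on why this cell floods the notebook with numbers: a plain Jacobi sweep that prints every iterate and has no divergence check keeps running until an overflow finally raises an exception. Below is a minimal guarded sketch; it is not the notebook's own `jacobi` function, and the name `jacobi_guarded`, the `blowup` threshold, and the silent (non-printing) interface are illustrative assumptions.

```python
import numpy as np

def jacobi_guarded(A, b, maxit=200, tol=1e-10, blowup=1e12):
    # Jacobi splitting A = D + R, with the update x <- D^{-1} (b - R x)
    N = A.shape[0]
    d = np.diag(A).reshape(-1, 1)        # diagonal of A as a column vector
    R = A - np.diagflat(np.diag(A))      # off-diagonal part of A
    x = np.ones((N, 1))
    for i in range(maxit):
        x_new = (b - R.dot(x)) / d
        # bail out early instead of printing ever larger iterates
        if not np.all(np.isfinite(x_new)) or np.abs(x_new).max() > blowup:
            raise RuntimeError('Jacobi iteration diverging at step {}'.format(i))
        if np.abs(x_new - x).max() < tol:
            return x_new
        x = x_new
    return x
```

With a guard like this, the call on the 50x50 matrix `A` used here would stop as soon as an iterate exceeds the threshold instead of printing hundreds of diverging vectors.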
-2.73507231e+301] [ -3.34316598e+301] [ -3.26378132e+301] [ -4.38588982e+301] [ -4.50502844e+301] [ -4.47596199e+301] [ -4.66631905e+301] [ -4.24011091e+301] [ -3.78470585e+301] [ -4.84796840e+301] [ -5.36703734e+301] [ -4.64507397e+301] [ -5.59066625e+301] [ -4.79476320e+301] [ -5.38310559e+301] [ -4.91808794e+301] [ -5.58382864e+301] [ -4.98675363e+301] [ -5.64699124e+301] [ -5.42348630e+301] [ -6.32972372e+301] [ -5.57526903e+301] [ -5.64055666e+301] [ -5.39300937e+301] [ -5.87636074e+301] [ -6.22444825e+301] [ -5.66720302e+301] [ -5.87504554e+301] [ -6.15809410e+301] [ -5.85203030e+301] [ -5.96719959e+301] [ -5.42415990e+301] [ -6.14170909e+301]] [[ 9.59538399e+300] [ 4.36573729e+301] [ 2.94222498e+301] [ 4.94212367e+301] [ 6.68450544e+301] [ 7.00725588e+301] [ 1.12548227e+302] [ 9.64429844e+301] [ 1.34013980e+302] [ 1.12401794e+302] [ 1.89381871e+302] [ 1.68974962e+302] [ 1.24283705e+302] [ 1.72952656e+302] [ 1.78440591e+302] [ 2.04471877e+302] [ 2.60691613e+302] [ 1.82304939e+302] [ 2.22837131e+302] [ 2.17545785e+302] [ 2.92339391e+302] [ 3.00280519e+302] [ 2.98343109e+302] [ 3.11031267e+302] [ 2.82622568e+302] [ 2.52267761e+302] [ 3.23139018e+302] [ 3.57737311e+302] [ 3.09615187e+302] [ 3.72643188e+302] [ 3.19592651e+302] [ 3.58808332e+302] [ 3.27812803e+302] [ 3.72187431e+302] [ 3.32389681e+302] [ 3.76397504e+302] [ 3.61499889e+302] [ 4.21904711e+302] [ 3.71616894e+302] [ 3.75968610e+302] [ 3.59468464e+302] [ 3.91686018e+302] [ 4.14887625e+302] [ 3.77744711e+302] [ 3.91598354e+302] [ 4.10464821e+302] [ 3.90064285e+302] [ 3.97740838e+302] [ 3.61544787e+302] [ 4.09372686e+302]] [[ -6.39575737e+301] [ -2.90996134e+302] [ -1.96112601e+302] [ -3.29414893e+302] [ -4.45552517e+302] [ -4.67065294e+302] [ -7.50184832e+302] [ -6.42836106e+302] [ -8.93263782e+302] [ -7.49208786e+302] [ -1.26231582e+303] [ -1.12629454e+303] [ -8.28407099e+302] [ -1.15280767e+303] [ -1.18938724e+303] [ -1.36289753e+303] [ -1.73762749e+303] [ -1.21514486e+303] [ -1.48531026e+303] [ -1.45004104e+303] [ -1.94857425e+303] [ -2.00150546e+303] [ -1.98859174e+303] [ -2.07316406e+303] [ -1.88380723e+303] [ -1.68147871e+303] [ -2.15386769e+303] [ -2.38448096e+303] [ -2.06372524e+303] [ -2.48383537e+303] [ -2.13022955e+303] [ -2.39161980e+303] [ -2.18502058e+303] [ -2.48079754e+303] [ -2.21552754e+303] [ -2.50885958e+303] [ -2.40956024e+303] [ -2.81218570e+303] [ -2.47699466e+303] [ -2.50600081e+303] [ -2.39601988e+303] [ -2.61076445e+303] [ -2.76541365e+303] [ -2.51783933e+303] [ -2.61018013e+303] [ -2.73593366e+303] [ -2.59995486e+303] [ -2.65112256e+303] [ -2.40985951e+303] [ -2.72865409e+303]] array must not contain infs or NaNs ```python np.abs(A[0,:]).sum() ``` 21036 ```python A[0,0] ``` 20016 ```python N = A.shape[0] D_inv = np.zeros((N,N)) D_inv[range(N),range(N)] = 1/A[range(N),range(N)] R = A.copy() R[range(N),range(N)] = 0 ``` ```python D_inv ``` array([[ 4.99600320e-05, 0.00000000e+00, 0.00000000e+00, ..., 0.00000000e+00, 0.00000000e+00, 0.00000000e+00], [ 0.00000000e+00, 4.95933347e-05, 0.00000000e+00, ..., 0.00000000e+00, 0.00000000e+00, 0.00000000e+00], [ 0.00000000e+00, 0.00000000e+00, 4.98355427e-05, ..., 0.00000000e+00, 0.00000000e+00, 0.00000000e+00], ..., [ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ..., 4.58631444e-05, 0.00000000e+00, 0.00000000e+00], [ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ..., 0.00000000e+00, 4.61723151e-05, 0.00000000e+00], [ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ..., 0.00000000e+00, 0.00000000e+00, 4.51854864e-05]]) ```python R ``` array([[ 0, 40, 4, ..., 
28, 16, 12], [ 40, 0, 66, ..., 150, 48, 110], [ 4, 66, 0, ..., 105, 31, 97], ..., [ 28, 150, 105, ..., 0, 1352, 1635], [ 16, 48, 31, ..., 1352, 0, 1525], [ 12, 110, 97, ..., 1635, 1525, 0]]) ```python D_invR = D_inv.dot(R) D_invR ``` array([[ 0. , 0.0019984 , 0.00019984, ..., 0.00139888, 0.00079936, 0.00059952], [ 0.00198373, 0. , 0.00327316, ..., 0.007439 , 0.00238048, 0.00545527], [ 0.00019934, 0.00328915, 0. , ..., 0.00523273, 0.0015449 , 0.00483405], ..., [ 0.00128417, 0.00687947, 0.00481563, ..., 0. , 0.06200697, 0.07498624], [ 0.00073876, 0.00221627, 0.00143134, ..., 0.06242497, 0. , 0.07041278], [ 0.00054223, 0.0049704 , 0.00438299, ..., 0.07387827, 0.06890787, 0. ]]) ```python eig = sp.linalg.eigvals(D_invR) ``` ```python eig.shape ``` (50,) ```python eig ``` array([ 1.46138930e+00+0.j, 1.38422414e-01+0.j, 3.34053619e-02+0.j, 1.36009235e-02+0.j, -8.00992956e-02+0.j, -7.75198996e-02+0.j, 4.86097750e-03+0.j, -7.52190613e-02+0.j, 2.26808430e-03+0.j, -3.24259739e-04+0.j, -7.10356170e-02+0.j, -6.98061099e-02+0.j, -1.97000545e-03+0.j, -6.66305109e-02+0.j, -6.63882281e-02+0.j, -3.95245554e-03+0.j, -5.81413995e-03+0.j, -6.53925027e-03+0.j, -7.70665939e-03+0.j, -1.00201453e-02+0.j, -1.05857716e-02+0.j, -1.16879851e-02+0.j, -1.33628818e-02+0.j, -6.27296603e-02+0.j, -6.18518058e-02+0.j, -5.99549818e-02+0.j, -5.83651641e-02+0.j, -1.65421983e-02+0.j, -1.85040101e-02+0.j, -1.94203921e-02+0.j, -2.08643741e-02+0.j, -2.22502339e-02+0.j, -2.36202057e-02+0.j, -5.47104157e-02+0.j, -5.42774334e-02+0.j, -5.27933387e-02+0.j, -2.75694746e-02+0.j, -2.92900384e-02+0.j, -3.02246726e-02+0.j, -3.12431801e-02+0.j, -3.39796164e-02+0.j, -3.57691632e-02+0.j, -3.89296685e-02+0.j, -5.09338538e-02+0.j, -4.88809144e-02+0.j, -4.71045906e-02+0.j, -4.52472441e-02+0.j, -4.56075289e-02+0.j, -4.25669129e-02+0.j, -4.20537146e-02+0.j]) ```python max_eig = np.abs(eig).max() max_eig ``` 1.4613893032019745 ```python test = np.array([[1,2,3],[4,5,6]]) print(np.sum(test,axis = 0), end = '\n\n') print(np.sum(test,axis = 1), end = '\n\n') ``` [5 7 9] [ 6 15] ```python A_row_sum = np.sum(A,axis = 1) A_row_sum ``` array([21036, 24774, 23142, 25320, 26978, 27342, 31660, 30144, 34140, 31817, 39683, 37511, 32270, 37459, 37854, 40866, 46718, 38138, 41997, 41774, 49916, 50907, 50341, 51847, 47742, 44471, 52538, 56312, 50413, 57619, 51503, 56320, 51750, 56751, 52784, 57240, 55945, 63139, 56551, 57553, 55252, 59475, 62511, 57795, 60179, 61489, 59361, 60251, 55708, 62087]) ```python A_row_sum - A[range(A.shape[0]),range(A.shape[0])] ``` array([ 1020, 4610, 3076, 5182, 6794, 7180, 11338, 9887, 13705, 11532, 19054, 17025, 11999, 17009, 17375, 20269, 25809, 17688, 21366, 21192, 28873, 29932, 29258, 30740, 26850, 23790, 31415, 35017, 29291, 36177, 30405, 35053, 30630, 35266, 31594, 35747, 34488, 41412, 35087, 36071, 33793, 37758, 40542, 36059, 38271, 39870, 37608, 38447, 34050, 39956]) ```python A[49,:].sum() - A[49,49] ``` 39956 ```python A[49,:] ``` array([ 12, 110, 97, 92, 121, 143, 225, 225, 297, 231, 460, 431, 313, 470, 446, 504, 669, 494, 496, 514, 731, 798, 873, 870, 726, 722, 896, 1014, 881, 1146, 968, 1074, 1070, 1253, 973, 1307, 1210, 1457, 1269, 1305, 1189, 1357, 1511, 1456, 1438, 1457, 1495, 1635, 1525, 22131]) ```python for i in range(A.shape[0]): #non-pithonic way ans = A[i,:].sum() - A[i,i] print('diagonal - sum(non-diagonal) = ', A[i,i] - ans) ``` diagonal - sum(non-diagonal) = 18996 diagonal - sum(non-diagonal) = 15554 diagonal - sum(non-diagonal) = 16990 diagonal - sum(non-diagonal) = 14956 diagonal - sum(non-diagonal) = 13390 
diagonal - sum(non-diagonal) = 12982 diagonal - sum(non-diagonal) = 8984 diagonal - sum(non-diagonal) = 10370 diagonal - sum(non-diagonal) = 6730 diagonal - sum(non-diagonal) = 8753 diagonal - sum(non-diagonal) = 1575 diagonal - sum(non-diagonal) = 3461 diagonal - sum(non-diagonal) = 8272 diagonal - sum(non-diagonal) = 3441 diagonal - sum(non-diagonal) = 3104 diagonal - sum(non-diagonal) = 328 diagonal - sum(non-diagonal) = -4900 diagonal - sum(non-diagonal) = 2762 diagonal - sum(non-diagonal) = -735 diagonal - sum(non-diagonal) = -610 diagonal - sum(non-diagonal) = -7830 diagonal - sum(non-diagonal) = -8957 diagonal - sum(non-diagonal) = -8175 diagonal - sum(non-diagonal) = -9633 diagonal - sum(non-diagonal) = -5958 diagonal - sum(non-diagonal) = -3109 diagonal - sum(non-diagonal) = -10292 diagonal - sum(non-diagonal) = -13722 diagonal - sum(non-diagonal) = -8169 diagonal - sum(non-diagonal) = -14735 diagonal - sum(non-diagonal) = -9307 diagonal - sum(non-diagonal) = -13786 diagonal - sum(non-diagonal) = -9510 diagonal - sum(non-diagonal) = -13781 diagonal - sum(non-diagonal) = -10404 diagonal - sum(non-diagonal) = -14254 diagonal - sum(non-diagonal) = -13031 diagonal - sum(non-diagonal) = -19685 diagonal - sum(non-diagonal) = -13623 diagonal - sum(non-diagonal) = -14589 diagonal - sum(non-diagonal) = -12334 diagonal - sum(non-diagonal) = -16041 diagonal - sum(non-diagonal) = -18573 diagonal - sum(non-diagonal) = -14323 diagonal - sum(non-diagonal) = -16363 diagonal - sum(non-diagonal) = -18251 diagonal - sum(non-diagonal) = -15855 diagonal - sum(non-diagonal) = -16643 diagonal - sum(non-diagonal) = -12392 diagonal - sum(non-diagonal) = -17825 # Gauss–Seidel method:&nbsp; $\mathbf{x}^{(k+1)} = L_*^{-1} (\mathbf{b} - U \mathbf{x}^{(k)})$ ## $A=L_*+U$ ```python def gauss_seidel(A,b, maxit = 200000): N = A.shape[0] x = np.ones((N,1)) L = np.tril(A) U = np.triu(A, k = 1) for i in range(maxit): x_old = x #L x = rhs rhs = b - U.dot(x) x = forward_sub(L,rhs,N) diff = (((x-x_old)**2).sum())**0.5 if diff < 1e-10: print('total iteration = ', i) print('diff = ', diff) SSE = ((A.dot(x) - b)**2).sum() print('SSE = ', SSE) break return x def forward_sub(L,b,N): y = np.empty((N,1)) #(N,1) is needed here for i in range(N): #to return column vector Sum = 0 for k in range(i): Sum = Sum + L[i,k] * y[k] y[i] = (b[i] - Sum)/L[i,i] return y ``` ```python A = np.ones((5,5)) ``` ```python L = np.tril(A) L ``` array([[ 1., 0., 0., 0., 0.], [ 1., 1., 0., 0., 0.], [ 1., 1., 1., 0., 0.], [ 1., 1., 1., 1., 0.], [ 1., 1., 1., 1., 1.]]) ```python U = np.triu(A,k=1) U ``` array([[ 0., 1., 1., 1., 1.], [ 0., 0., 1., 1., 1.], [ 0., 0., 0., 1., 1.], [ 0., 0., 0., 0., 1.], [ 0., 0., 0., 0., 0.]]) ```python U.sum() ``` 10.0 ```python U.dot(U) ``` array([[ 0., 0., 1., 2., 3.], [ 0., 0., 0., 1., 2.], [ 0., 0., 0., 0., 1.], [ 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0.]]) ```python np.random.seed(100) A = np.ceil(100*np.random.random((5,5))) A[range(5),range(5)] = A[range(5),range(5)] + 200 np.random.seed(1) b = np.ceil(100*np.random.random((5,1))) ``` ```python L = np.tril(A) L ``` array([[ 255., 0., 0., 0., 0.], [ 13., 268., 0., 0., 0.], [ 90., 21., 219., 0., 0.], [ 98., 82., 18., 282., 0.], [ 44., 95., 82., 34., 218.]]) ```python b ``` array([[ 42.], [ 73.], [ 1.], [ 31.], [ 15.]]) ```python x = forward_sub(L,b,5) x ``` array([[ 0.16470588], [ 0.2643986 ], [-0.08847443], [-0.01854369], [-0.04348411]]) ```python L.dot(x) ``` array([[ 42.], [ 73.], [ 1.], [ 31.], [ 15.]]) ```python x.shape ``` (5, 1) ```python x = 
gauss_seidel(A,b) x ``` total iteration = 16 diff = 2.90075908798e-11 SSE = 8.46717808723e-19 array([[ 0.15168993], [ 0.30401938], [-0.07977452], [-0.02002359], [-0.06116461]]) ```python A.dot(x) ``` array([[ 42.], [ 73.], [ 1.], [ 31.], [ 15.]]) ```python b ``` array([[ 42.], [ 73.], [ 1.], [ 31.], [ 15.]]) ```python np.random.seed(4) a = np.ceil(10*np.random.random([4,4])).astype(int) b = np.tril(a) A = b.dot(b.T) np.random.seed(40) b = np.ceil(10* np.random.random([4,1])) ``` ```python A ``` array([[100, 70, 30, 90], [ 70, 58, 36, 93], [ 30, 36, 98, 93], [ 90, 93, 93, 221]]) ```python b ``` array([[ 5.], [ 1.], [ 8.], [ 3.]]) ```python gs_ans = gauss_seidel(A,b) gs_ans ``` total iteration = 324 diff = 9.54818421815e-11 SSE = 2.30842019339e-17 array([[ 0.52970679], [-0.89313271], [ 0.13773148], [ 0.11574074]]) ```python try: ans = jacobi(A,b) except Exception as e: print(e) ``` [[-1.85 ] [-3.4137931 ] [-1.54081633] [-1.23529412]] [[ 51.85088851] [ 75.92438803] [ 41.14473748] [ 36.79018941]] [[-1336.94933638] [-1983.2186419 ] [-1064.51135282] [ -952.58893314]] [[ 34691.21979451] [ 51431.71258188] [ 27617.97872464] [ 24712.53155695]] [[ -899915.64514228] [-1334207.46170767] [ -716435.03634274] [ -641066.8730261 ]] [[ 23344743.94262251] [ 34610690.4767043 ] [ 18585062.05663865] [ 16629932.76285485]] [[ -6.05586550e+08] [ -8.97836767e+08] [ -4.82115536e+08] [ -4.31397482e+08]] [[ 1.57095352e+10] [ 2.32908050e+10] [ 1.25065706e+10] [ 1.11908924e+10]] [[ -4.07521428e+11] [ -6.04187330e+11] [ -3.24433248e+11] [ -2.90303207e+11]] [[ 1.05715231e+13] [ 1.56732380e+13] [ 8.41613064e+12] [ 7.53076243e+12]] [[ -2.74236134e+14] [ -4.06579841e+14] [ -2.18323046e+14] [ -1.95355688e+14]] [[ 7.11396610e+15] [ 1.05470974e+16] [ 5.66352336e+15] [ 5.06772658e+15]] [[ -1.84543565e+17] [ -2.73602505e+17] [ -1.46917595e+17] [ -1.31462016e+17]] [[ 4.78724902e+18] [ 7.09752914e+18] [ 3.81119285e+18] [ 3.41025931e+18]] [[ -1.24186141e+20] [ -1.84117173e+20] [ -9.88662447e+19] [ -8.84656184e+19]] [[ 3.22151566e+21] [ 4.77618799e+21] [ 2.56469162e+21] [ 2.29488872e+21]] [[ -8.35694148e+22] [ -1.23899207e+23] [ -6.65307266e+22] [ -5.95317632e+22]] [[ 2.16787619e+24] [ 3.21407229e+24] [ 1.72587517e+24] [ 1.54431489e+24]] [[ -5.62369280e+25] [ -8.33763260e+25] [ -4.47709690e+25] [ -4.00611096e+25]] [[ 1.45884350e+27] [ 2.16286727e+27] [ 1.16140478e+27] [ 1.03922620e+27]] [[ -3.78438940e+28] [ -5.61069915e+28] [ -3.01280291e+28] [ -2.69585917e+28]] [[ 9.81709359e+29] [ 1.45547281e+30] [ 7.81551924e+29] [ 6.99333473e+29]] [[ -2.54665460e+31] [ -3.77564549e+31] [ -2.02742571e+31] [ -1.81414263e+31]] [[ 6.60628277e+32] [ 9.79441100e+32] [ 5.25934990e+32] [ 4.70607174e+32]] [[ -1.71373739e+34] [ -2.54077050e+34] [ -1.36432921e+34] [ -1.22080320e+34]] [[ 4.44561027e+35] [ 6.59101883e+35] [ 3.53920968e+35] [ 3.16688849e+35]] [[ -1.15323682e+37] [ -1.70977777e+37] [ -9.18107228e+36] [ -8.21523301e+36]] [[ 2.99161440e+38] [ 4.43533860e+38] [ 2.38166416e+38] [ 2.13111556e+38]] [[ -7.76055407e+39] [ -1.15057225e+40] [ -6.17828068e+39] [ -5.52833197e+39]] [[ 2.01316719e+41] [ 2.98470222e+41] [ 1.60270927e+41] [ 1.43410592e+41]] [[ -5.22236182e+42] [ -7.74262316e+42] [ -4.15759195e+42] [ -3.72021759e+42]] [[ 1.35473413e+44] [ 2.00851572e+44] [ 1.07852192e+44] [ 9.65062534e+43]] [[ -3.51431904e+45] [ -5.21029544e+45] [ -2.79779629e+45] [ -2.50347103e+45]] [[ 9.11650342e+46] [ 1.35160399e+47] [ 7.25777006e+46] [ 6.49426018e+46]] [[ -2.36491433e+48] [ -3.50619913e+48] [ -1.88273987e+48] [ -1.68467759e+48]] [[ 6.13482993e+49] [ 
[... further Jacobi iterates omitted: the four printed entries keep growing in magnitude, from roughly 1e+49 at this point to roughly 1e+303, until the iteration produces non-finite values and aborts with the error shown below ...]
array must not contain infs or NaNs

```python
A.dot(gs_ans)
```

array([[ 5.], [ 1.], [ 8.], [ 3.]])

```python
b
```

array([[ 5.], [ 1.], [ 8.], [ 3.]])

### Gauss-Seidel works for a positive-definite matrix, but Jacobi does not

# Try Gauss-Seidel with larger matrix (dense)

```python
np.random.seed(46)
a = np.ceil(10*np.random.random([7,7])).astype(int)
b = np.tril(a)
A = b.dot(b.T)
np.random.seed(406)
b = np.ceil(10* np.random.random([7,1]))
```

```python
A
```

array([[ 64, 40, 72, 24, 24, 72, 8], [ 40, 125, 85, 25, 65, 55, 85], [ 72, 85, 101, 39, 49, 89, 43], [ 24, 25, 39, 35, 27, 57, 24], [ 24, 65, 49, 27, 53, 85, 59], [ 72, 55, 89, 57, 85, 236, 62], [ 8, 85, 43, 24, 59, 62, 87]])

```python
b
```

array([[ 10.], [ 6.], [ 4.], [ 9.], [ 2.], [ 9.], [ 7.]])

```python
gs_ans2 = gauss_seidel(A,b)
gs_ans2
```

total iteration = 34006 diff = 9.99998018289e-11 SSE = 4.26448700686e-17

array([[ 6.21458333], [ 79.92416659], [ -60.22916661], [ 42.14166663], [-113.40833322], [ 35.14999996], [ -8.57499999]])

```python
%%timeit -r 1 -n 1 -o
gs_ans2 = gauss_seidel(A,b)
```

total iteration = 34006 diff = 9.99998018289e-11 SSE = 4.26448700686e-17
1 loop, best of 1: 3.46 s per loop

<TimeitResult : 1 loop, best of 1: 3.46 s per loop>

```python
gs_time = _
```

```python
def backward_sub(U,b,N):
    y = np.empty((N,1))           #(N,1) is needed here
    for i in range(N-1,-1,-1):    #to return column vector
        Sum = 0
        for k in range(i+1, N):
            Sum = Sum + U[i,k] * y[k]
        y[i] = (b[i] - Sum)/U[i,i]
    return y
```

```python
L = cholesky_user(A)
y = forward_sub(L,b,L.shape[0])
x = backward_sub(L.T,y,L.shape[0])
A.dot(x)-b
```

array([[ -4.54747351e-13], [ 1.81898940e-12], [ -9.09494702e-13], [ 2.27373675e-13], [ 0.00000000e+00], [ 9.09494702e-13], [ 4.54747351e-13]])

```python
x
```

array([[ 6.21458333], [ 79.92416667], [ -60.22916667], [ 42.14166667], [-113.40833333], [ 35.15 ], [ -8.575 ]])

```python
x - gs_ans2
```

array([[ 1.01020081e-09], [ 8.14847567e-08], [ -5.66639926e-08], [ 4.14630748e-08], [ -1.17847847e-07], [ 3.70258206e-08], [ -9.60257829e-09]])

```python
%%timeit -r 1 -n 1 -o
L = cholesky_user(A)
y = forward_sub(L,b,L.shape[0])
x = backward_sub(L.T,y,L.shape[0])
```

1 loop, best of 1: 813 µs per loop

<TimeitResult : 1 loop, best of 1: 813 µs per loop>

```python
cho_time = _
```

```python
cho_time.best
```

0.0008132590000968776

```python
print('Cholesky decomposition is ', gs_time.best / cho_time.best,
      ' times faster than Gauss-Seidel',
      ' for this 7x7 symmetric positive-definite case')
```

Cholesky decomposition is 4254.432930454434 times faster than Gauss-Seidel for this 7x7 symmetric positive-definite case

## Asymmetric, non-positive definite case

```python
np.random.seed(46)
A = np.ceil(10*np.random.random([5,5])).astype(int)
np.random.seed(406)
b = np.ceil(10* np.random.random([5,1]))
```

```python
A
```

array([[ 8, 7, 3, 8, 4], [10, 1, 5, 10, 5], [ 6, 1, 5, 8, 9], [ 4, 2, 9, 4, 7], [ 1, 3, 1, 4, 3]])

```python
b
```

array([[ 10.], [ 6.], [ 4.], [ 9.], [ 2.]])

```python
try:
    L = cholesky_user(A)
    y = forward_sub(L,b,L.shape[0])
    x = backward_sub(L.T,y,L.shape[0])
except Exception as e:
    print(e)
```

/home/me/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:11: RuntimeWarning: invalid value encountered in double_scalars # This is added back by InteractiveShellApp.init_path()

```python
x
```

array([[ 
nan], [ nan], [ nan], [ nan], [ nan]]) ```python try: gs_ans3 = gauss_seidel(A,b, maxit = 400) except Exception as e: print(e) gs_ans3 ``` /home/me/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:11: RuntimeWarning: overflow encountered in square # This is added back by InteractiveShellApp.init_path() /home/me/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:26: RuntimeWarning: overflow encountered in subtract /home/me/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:25: RuntimeWarning: invalid value encountered in add array([[ nan], [ nan], [ nan], [ nan], [ nan]]) ```python try: sp.linalg.cholesky(A) except Exception as e: print(e) ``` 2-th leading minor not positive definite In this case, both Cholesky and Gauss-Seidel fail ## Slightly off from positive definite case ```python np.random.seed(46) a = np.ceil(10*np.random.random([7,7])).astype(int) b = np.tril(a) A = b.dot(b.T) np.random.seed(406) b = np.ceil(10* np.random.random([7,1])) A[0,1] = A[0,1] + 5 ``` ```python try: gs_ans4 = gauss_seidel(A,b, maxit = 10000) except Exception as e: print(e) gs_ans4 ``` total iteration = 9149 diff = 9.99382550173e-11 SSE = 1.9424032781e-17 array([[-103.22931321], [ 38.5563819 ], [ 64.9451982 ], [ -2.03752094], [-114.74709657], [ 43.18257955], [ -12.59128978]]) ```python A.dot(gs_ans4) - b ``` array([[ 3.81896825e-09], [ 8.27640179e-11], [ -1.35241862e-09], [ -4.87943908e-10], [ 1.61389835e-09], [ -4.01087163e-10], [ -4.54747351e-13]]) ```python try: L = cholesky_user(A) y = forward_sub(L,b,L.shape[0]) x = backward_sub(L.T,y,L.shape[0]) except Exception as e: print(e) x ``` array([[ 6.21458333], [ 79.92416667], [ -60.22916667], [ 42.14166667], [-113.40833333], [ 35.15 ], [ -8.575 ]]) ```python np.set_printoptions(suppress=True) A.dot(x)-b ``` array([[ 399.62083333], [ 0. ], [ -0. ], [ 0. ], [ 0. ], [ 0. ], [ 0. 
]]) ```python try: sp.linalg.cholesky(A) except Exception as e: print(e) ``` 6-th leading minor not positive definite ```python A.dot(gs_ans4) - b ``` array([[ 0.], [ 0.], [-0.], [-0.], [ 0.], [-0.], [-0.]]) ## In this case where it is slightly off from the positive-definite case, Gauss-Seidel works, but Cholesky does not work # SOR: &nbsp; $\mathbf{x}^{(k+1)} = (D+\omega L)^{-1} \big(\omega \mathbf{b} - [\omega U + (\omega-1) D ] \mathbf{x}^{(k)}\big)$ ```python class sor_ans: def __init__(self, x, SSE, i, diff): self.x = x self.SSE = SSE self.i = i self.diff = diff ``` ```python def sor_method(A,b, omega = 1, maxit = 2000): N = A.shape[0] x = np.ones((N,1)) L = np.tril(A, k = -1) U = np.triu(A, k = 1) D = np.zeros([N,N]) D[range(N),range(N)] = A[range(N),range(N)] for i in range(maxit): x_old = x #L x = rhs rhs = omega * b - (omega * U + (omega-1)*D).dot(x) x = forward_sub(D + omega * L,rhs,N) diff = (((x-x_old)**2).sum())**0.5 if diff < 1e-10: print('total iteration = ', i) print('diff = ', diff) SSE = ((A.dot(x) - b)**2).sum() print('SSE = ', SSE) break if i < maxit-1: return sor_ans(x, SSE, i, diff) else: SSE = ((A.dot(x) - b)**2).sum() return sor_ans(x, SSE, i, diff) ``` ```python np.random.seed(46) a = np.ceil(10*np.random.random([6,6])).astype(int) b = np.tril(a) A = b.dot(b.T) np.random.seed(406) b = np.ceil(10* np.random.random([6,1])) ``` ```python try: gs_ans5 = gauss_seidel(A,b, maxit = 10000) except Exception as e: print(e) gs_ans5 ``` total iteration = 476 diff = 9.39128589046e-11 SSE = 4.6035586437e-17 array([[ 0.00198788], [ 1.38833931], [ 0.2764575 ], [ 0.47060577], [-1.2407382 ], [ 0.3032617 ]]) ```python try: ans = sor_method(A,b, omega = 1, maxit = 10000) except Exception as e: print(e) ans.x ``` total iteration = 476 diff = 9.39128589046e-11 SSE = 4.6035586437e-17 array([[ 0.00198788], [ 1.38833931], [ 0.2764575 ], [ 0.47060577], [-1.2407382 ], [ 0.3032617 ]]) ```python try: ans = sor_method(A,b, omega = 1.1, maxit = 10000) except Exception as e: print(e) ans.x ``` total iteration = 457 diff = 9.6279266756e-11 SSE = 5.97252009536e-17 array([[ 0.00198788], [ 1.38833931], [ 0.2764575 ], [ 0.47060577], [-1.2407382 ], [ 0.3032617 ]]) ```python dat = [(j,sor_method(A,b, omega = j, maxit = 10000).i) for j in np.arange(0.1,2.0,0.1)] ``` total iteration = 7360 diff = 9.98746026931e-11 SSE = 9.90919019875e-15 total iteration = 3643 diff = 9.9735919303e-11 SSE = 2.01345432715e-15 total iteration = 2348 diff = 9.99914141769e-11 SSE = 7.18569115455e-16 total iteration = 1677 diff = 9.96869796507e-11 SSE = 3.13170000409e-16 total iteration = 1257 diff = 9.93262881044e-11 SSE = 1.50427150655e-16 total iteration = 958 diff = 9.87749464294e-11 SSE = 7.50908016768e-17 total iteration = 701 diff = 9.86262179718e-11 SSE = 3.75735705042e-17 total iteration = 571 diff = 9.79913809859e-11 SSE = 4.18909907815e-17 total iteration = 513 diff = 9.4095334482e-11 SSE = 3.90344119874e-17 total iteration = 476 diff = 9.39128589046e-11 SSE = 4.6035586437e-17 total iteration = 457 diff = 9.6279266756e-11 SSE = 5.97252009536e-17 total iteration = 477 diff = 9.52289381172e-11 SSE = 2.57478160426e-17 total iteration = 508 diff = 9.74482881038e-11 SSE = 2.25425876879e-17 total iteration = 556 diff = 9.79444714235e-11 SSE = 5.68303613885e-17 total iteration = 667 diff = 9.66550117045e-11 SSE = 2.7488277615e-17 total iteration = 830 diff = 9.54596124626e-11 SSE = 4.18280854715e-17 total iteration = 1134 diff = 9.73376890901e-11 SSE = 2.82304745617e-17 total iteration = 1743 diff = 9.78149412914e-11 SSE = 
4.36048684875e-17 total iteration = 3602 diff = 9.78913569573e-11 SSE = 5.79638759874e-17 ```python ``` ```python x = [i[0] for i in dat] y = [i[1] for i in dat] ``` ```python plt.figure() plt.plot(x,y,'ro') plt.xlabel('omega') plt.ylabel('# iteration') plt.show() ``` ```python dat2 = [(j,sor_method(A,b, omega = j, maxit = 10000).i) for j in np.linspace(0.7,1.3,15)] ``` total iteration = 701 diff = 9.86262048882e-11 SSE = 3.75734271673e-17 total iteration = 632 diff = 9.95852253027e-11 SSE = 3.90352756799e-17 total iteration = 593 diff = 9.5718827768e-11 SSE = 3.0554969864e-17 total iteration = 541 diff = 9.83838694745e-11 SSE = 6.79008327114e-17 total iteration = 538 diff = 9.87193745242e-11 SSE = 2.56788398837e-17 total iteration = 498 diff = 9.82894267441e-11 SSE = 6.67750254964e-17 total iteration = 500 diff = 9.9725461057e-11 SSE = 2.09860710417e-17 total iteration = 476 diff = 9.39128589046e-11 SSE = 4.6035586437e-17 total iteration = 478 diff = 9.98118509541e-11 SSE = 1.82425388378e-17 total iteration = 462 diff = 9.51814041678e-11 SSE = 5.09843406492e-17 total iteration = 472 diff = 9.57024150573e-11 SSE = 2.00739660654e-17 total iteration = 461 diff = 9.84587724858e-11 SSE = 5.0568285984e-17 total iteration = 474 diff = 9.30208635468e-11 SSE = 4.25633161774e-17 total iteration = 490 diff = 9.71650586629e-11 SSE = 2.89511698969e-17 total iteration = 508 diff = 9.74472104447e-11 SSE = 2.25427673552e-17 ```python x = [i[0] for i in dat2] y = [i[1] for i in dat2] plt.figure() plt.plot(x,y,'bo') plt.xlabel('omega') plt.ylabel('# iteration') plt.show() ``` # Thomas Algorighm: Creating tridiagonal matrix ```python N = 10 A = np.zeros((N,N)) for i in range(N): for j in range(N): if i == j: A[i,j] = 6 if j == i+1: A[i,j] = -2 if j == i-1: A[i,j] = -1 print(A) ``` [[ 6. -2. 0. 0. 0. 0. 0. 0. 0. 0.] [-1. 6. -2. 0. 0. 0. 0. 0. 0. 0.] [ 0. -1. 6. -2. 0. 0. 0. 0. 0. 0.] [ 0. 0. -1. 6. -2. 0. 0. 0. 0. 0.] [ 0. 0. 0. -1. 6. -2. 0. 0. 0. 0.] [ 0. 0. 0. 0. -1. 6. -2. 0. 0. 0.] [ 0. 0. 0. 0. 0. -1. 6. -2. 0. 0.] [ 0. 0. 0. 0. 0. 0. -1. 6. -2. 0.] [ 0. 0. 0. 0. 0. 0. 0. -1. 6. -2.] [ 0. 0. 0. 0. 0. 0. 0. 0. -1. 
6.]] ```python N = 10 a = np.array(N-1) a = -1 b = np.array(N) b = 6 c = np.array(N-1) c = -2 A2 = sp.sparse.diags(diagonals = [a,b,c], offsets = [-1,0,1], shape = (N,N)) A2.todense() ``` matrix([[ 6., -2., 0., 0., 0., 0., 0., 0., 0., 0.], [-1., 6., -2., 0., 0., 0., 0., 0., 0., 0.], [ 0., -1., 6., -2., 0., 0., 0., 0., 0., 0.], [ 0., 0., -1., 6., -2., 0., 0., 0., 0., 0.], [ 0., 0., 0., -1., 6., -2., 0., 0., 0., 0.], [ 0., 0., 0., 0., -1., 6., -2., 0., 0., 0.], [ 0., 0., 0., 0., 0., -1., 6., -2., 0., 0.], [ 0., 0., 0., 0., 0., 0., -1., 6., -2., 0.], [ 0., 0., 0., 0., 0., 0., 0., -1., 6., -2.], [ 0., 0., 0., 0., 0., 0., 0., 0., -1., 6.]]) ```python A2 ``` <10x10 sparse matrix of type '<class 'numpy.float64'>' with 28 stored elements (3 diagonals) in DIAgonal format> ```python A2.data ``` array([[-1., -1., -1., -1., -1., -1., -1., -1., -1., 0.], [ 6., 6., 6., 6., 6., 6., 6., 6., 6., 6.], [ 0., -2., -2., -2., -2., -2., -2., -2., -2., -2.]]) ```python A2.offsets ``` array([-1, 0, 1], dtype=int32) ```python A2.data[0] ``` array([-1., -1., -1., -1., -1., -1., -1., -1., -1., 0.]) ```python A3 = sp.sparse.diags([[40]*6,[-1 * i for i in range(1,6)],[range(-20,-10,2)]],[0,1,-1],(6,6)) A3.todense() ``` matrix([[ 40., -1., 0., 0., 0., 0.], [-20., 40., -2., 0., 0., 0.], [ 0., -18., 40., -3., 0., 0.], [ 0., 0., -16., 40., -4., 0.], [ 0., 0., 0., -14., 40., -5.], [ 0., 0., 0., 0., -12., 40.]]) ```python A3.offsets ``` array([ 0, 1, -1], dtype=int32) ```python A3.data ``` array([[ 40., 40., 40., 40., 40., 40.], [ 0., -1., -2., -3., -4., -5.], [-20., -18., -16., -14., -12., 0.]]) ```python A3.data[1,0] ``` 0.0 ```python A3_sort = sorted(zip(A3.data,A3.offsets), key = lambda x: x[1]) A4 = np.array([i[0] for i in A3_sort]) A4 ``` array([[-20., -18., -16., -14., -12., 0.], [ 40., 40., 40., 40., 40., 40.], [ 0., -1., -2., -3., -4., -5.]]) ```python a = A4[0] #A should start with 0, but it does not b = A4[1] c = A4[2] #C should not start with 0, but it does print(a,'\n',b,'\n',c,'\n') ``` [-20. -18. -16. -14. -12. 0.] [ 40. 40. 40. 40. 40. 40.] [ 0. -1. -2. -3. -4. -5.] 
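As a quick cross-check of that padding convention, the sketch below (an editorial addition, not part of the original notebook; the names `A3_arr`, `sub_d`, `main_d`, `sup_d`, `a_pad`, and `c_pad` are introduced here, and `numpy` is assumed to be imported as `np` as in the cells above) pulls the three bands straight out of the dense form of `A3` with `numpy.diag` and pads them the way the Thomas recurrence coded next expects, with `a[0] = 0` and `c[-1] = 0`:

```python
import numpy as np                    # assumed already imported as np in this notebook

A3_arr = np.asarray(A3.todense())     # A3 is the 6x6 tridiagonal matrix built above

sub_d  = np.diag(A3_arr, k=-1)        # sub-diagonal   (length N-1)
main_d = np.diag(A3_arr, k=0)         # main diagonal  (length N)
sup_d  = np.diag(A3_arr, k=1)         # super-diagonal (length N-1)

# pad to length N: the Thomas recurrence uses a[0] = 0 and c[N-1] = 0
a_pad = np.concatenate(([0.0], sub_d))
c_pad = np.concatenate((sup_d, [0.0]))
print(a_pad, '\n', main_d, '\n', c_pad)
```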
# Thomas method coding ```python def thomas(A,d): #imput: A = scipy.sparse.diags #input: d = array shape (N,) #output: x from Ax = d where x.shape is (N,) N = d.shape[0] #sorting array data based on the offsets A_sort = sorted(zip(A.data,A.offsets), key = lambda x: x[1]) A_dat = np.array([i[0] for i in A_sort]) a = np.empty(N) b = np.empty(N) c = np.empty(N) a[0] = 0 a[1:] = A_dat[0,:-1] b = A_dat[1] c[-1] = 0 c[:-1]= A_dat[2,1:] #move 0 to the first cell of A #move 0 to the last cell of C cp = np.empty(N) dp = np.empty(N) x = np.empty(N) cp[0] = c[0] / b[0] dp[0] = d[0] / b[0] for i in range(1,N-1): cp[i] = c[i] / (b[i]-a[i]*cp[i-1]) dp[i] = (d[i] - a[i] * dp[i-1]) / (b[i]-a[i] * cp[i-1]) i = N-1 x[N-1] = (d[i] - a[i] * dp[i-1]) / (b[i]-a[i] * cp[i-1]) for i in range(N-2,-1,-1): x[i] = dp[i] - cp[i] * x[i+1] return x ``` ```python np.random.seed(1) d = 10 * np.random.random(10) d ``` array([ 4.17022005, 7.20324493, 0.00114375, 3.02332573, 1.46755891, 0.92338595, 1.86260211, 3.45560727, 3.96767474, 5.38816734]) ```python x = thomas(sp.sparse.dia_matrix(A),d) ``` ```python A ``` array([[ 6., -2., 0., 0., 0., 0., 0., 0., 0., 0.], [-1., 6., -2., 0., 0., 0., 0., 0., 0., 0.], [ 0., -1., 6., -2., 0., 0., 0., 0., 0., 0.], [ 0., 0., -1., 6., -2., 0., 0., 0., 0., 0.], [ 0., 0., 0., -1., 6., -2., 0., 0., 0., 0.], [ 0., 0., 0., 0., -1., 6., -2., 0., 0., 0.], [ 0., 0., 0., 0., 0., -1., 6., -2., 0., 0.], [ 0., 0., 0., 0., 0., 0., -1., 6., -2., 0.], [ 0., 0., 0., 0., 0., 0., 0., -1., 6., -2.], [ 0., 0., 0., 0., 0., 0., 0., 0., -1., 6.]]) ```python A.dot(x) ``` array([ 4.17022005, 7.20324493, 0.00114375, 3.02332573, 1.46755891, 0.92338595, 1.86260211, 3.45560727, 3.96767474, 5.38816734]) ```python d ``` array([ 4.17022005, 7.20324493, 0.00114375, 3.02332573, 1.46755891, 0.92338595, 1.86260211, 3.45560727, 3.96767474, 5.38816734]) ```python A.dot(x) - d ``` array([ 0., 0., 0., 0., 0., -0., -0., -0., -0., 0.]) # Test function with other matrix A and D ```python A3 ``` <6x6 sparse matrix of type '<class 'numpy.float64'>' with 16 stored elements (3 diagonals) in DIAgonal format> ```python A3.todense() ``` matrix([[ 40., -1., 0., 0., 0., 0.], [-20., 40., -2., 0., 0., 0.], [ 0., -18., 40., -3., 0., 0.], [ 0., 0., -16., 40., -4., 0.], [ 0., 0., 0., -14., 40., -5.], [ 0., 0., 0., 0., -12., 40.]]) ```python np.random.seed(22) d3 = np.random.random(6) * 3 d3 ``` array([ 0.62538161, 1.44504319, 1.26161411, 2.577546 , 0.51348466, 1.01659188]) ```python x3 = thomas(A3,d3) ``` ```python x3 ``` array([ 0.01682291, 0.04753462, 0.05994181, 0.09347838, 0.05063002, 0.0406038 ]) ```python A3.dot(x3) - d3 ``` array([ 0., 0., -0., 0., -0., 0.]) ```python A3.dot(x3) ``` array([ 0.62538161, 1.44504319, 1.26161411, 2.577546 , 0.51348466, 1.01659188]) ```python d3 ``` array([ 0.62538161, 1.44504319, 1.26161411, 2.577546 , 0.51348466, 1.01659188]) ```python from scipy.sparse import linalg ``` ```python A3.todense() ``` matrix([[ 40., -1., 0., 0., 0., 0.], [-20., 40., -2., 0., 0., 0.], [ 0., -18., 40., -3., 0., 0.], [ 0., 0., -16., 40., -4., 0.], [ 0., 0., 0., -14., 40., -5.], [ 0., 0., 0., 0., -12., 40.]]) ```python sp_ans3 = linalg.spsolve(A3,d3) ``` /home/me/anaconda3/lib/python3.6/site-packages/scipy/sparse/linalg/dsolve/linsolve.py:102: SparseEfficiencyWarning: spsolve requires A be CSC or CSR matrix format SparseEfficiencyWarning) ```python A3.dot(sp_ans3) ``` array([ 0.62538161, 1.44504319, 1.26161411, 2.577546 , 0.51348466, 1.01659188]) ```python d3 ``` array([ 0.62538161, 1.44504319, 1.26161411, 2.577546 , 0.51348466, 
1.01659188])

# Re-create A3 in CSC sparse matrix format

Compressed Sparse Column (CSC) format. The Compressed Sparse Row (CSR) format is not covered here, but it is similar.

### Important attributes of a CSC matrix
data <br>indices <br>indptr

```python
A3csc = A3.tocsc()
```

```python
A3csc.data
```

array([ 40., -20., 40., -1., -18., 40., -2., -16., 40., -3., -14., 40., -4., -12., 40., -5.])

```python
A3csc.data.shape
```

(16,)

```python
A3csc.indices
```

array([0, 1, 1, 0, 2, 2, 1, 3, 3, 2, 4, 4, 3, 5, 5, 4], dtype=int32)

```python
A3csc.indices.shape
```

(16,)

```python
A3csc.indptr
```

array([ 0, 2, 5, 8, 11, 14, 16], dtype=int32)

```python
A3csc.indptr.shape
```

(7,)

```python
A3csc.todense()
```

matrix([[ 40., -1., 0., 0., 0., 0.], [-20., 40., -2., 0., 0., 0.], [ 0., -18., 40., -3., 0., 0.], [ 0., 0., -16., 40., -4., 0.], [ 0., 0., 0., -14., 40., -5.], [ 0., 0., 0., 0., -12., 40.]])

## data (`.data`) holds the non-zero values of the sparse matrix
A3csc.data = [ 40., -20., 40., -1., -18., 40., -2., -16., 40., -3., -14., 40., -4., -12., 40., -5.]

## indices (`.indices`) gives the row that each value belongs to
A3csc.indices = [0, 1, 1, 0, 2, 2, 1, 3, 3, 2, 4, 4, 3, 5, 5, 4]

## index pointer (`.indptr`) tells how to split the data before applying the row index
A3csc.indptr = [ 0, 2, 5, 8, 11, 14, 16]

## Explanation

### For the first column, look at A3csc.indptr
<br>1st column pointer is [0:2] (start from 0 to 2 but exclude 2)
<br>Use this index for the data
<br>A3csc.data[0:2] = [40, -20]
<br>The corresponding rows are
<br>A3csc.indices[0:2] = [0, 1]
<br>Thus, for the first column we have 40 in row 0, and -20 in row 1

### For the second column, look at A3csc.indptr
<br>2nd column pointer is [2:5] (start from 2 to 5 but exclude 5)
<br>Use this index for the data
<br>A3csc.data[2:5] = [ 40., -1., -18.]
<br>The corresponding rows are
<br>A3csc.indices[2:5] = [1, 0, 2]
<br>Thus, for the second column we have 40 in row 1, -1 in row 0, and -18 in row 2

```python
ptr = A3csc.indptr
dat = A3csc.data
rix = A3csc.indices
print('data in each column')
for i in range(ptr.shape[0]-1):
    print('column = {:1d} '.format(i), dat[ptr[i]:ptr[i+1]])
    print('row index =', rix[ptr[i]:ptr[i+1]])
```

data in each column
column = 0 [ 40. -20.]
row index = [0 1]
column = 1 [ 40. -1. -18.]
row index = [1 0 2]
column = 2 [ 40. -2. -16.]
row index = [2 1 3]
column = 3 [ 40. -3. -14.]
row index = [3 2 4]
column = 4 [ 40. -4. -12.]
row index = [4 3 5]
column = 5 [ 40. -5.]
row index = [5 4]

```python
A3csc.todense()
```

matrix([[ 40., -1., 0., 0., 0., 0.], [-20., 40., -2., 0., 0., 0.], [ 0., -18., 40., -3., 0., 0.], [ 0., 0., -16., 40., -4., 0.], [ 0., 0., 0., -14., 40., -5.], [ 0., 0., 0., 0., -12., 40.]])

# Sparse matrix solver

```python
sparse_ans = linalg.spsolve(A3csc,d3)
```

```python
sparse_ans
```

array([ 0.01682291, 0.04753462, 0.05994181, 0.09347838, 0.05063002, 0.0406038 ])

```python
A3csc.dot(sparse_ans)
```

array([ 0.62538161, 1.44504319, 1.26161411, 2.577546 , 0.51348466, 1.01659188])

```python
d3
```

array([ 0.62538161, 1.44504319, 1.26161411, 2.577546 , 0.51348466, 1.01659188])

# User-defined Thomas algorithm

```python
thomas(A3,d3)
```

array([ 0.01682291, 0.04753462, 0.05994181, 0.09347838, 0.05063002, 0.0406038 ])

# Solve as dense matrix

```python
A3_dense = A3.todense()
sp.linalg.solve(A3_dense,d3)
```

array([ 0.01682291, 0.04753462, 0.05994181, 0.09347838, 0.05063002, 0.0406038 ])

# Solve time comparison

```python
%%timeit -n 5
thomas(A3,d3)
```

5 loops, best of 3: 38.5 µs per loop

```python
%%timeit -n 5
linalg.spsolve(A3csc,d3)
```

5 loops, best of 3: 52.1 µs per loop

```python
%%timeit -n 5
sp.linalg.solve(A3_dense,d3)
```

5 loops, best of 3: 63.5 µs per loop

Note that `spsolve` can solve any sparse matrix, while `thomas` only solves tri-diagonal systems (with the input in diagonal sparse matrix format).
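For reference, SciPy also ships a dedicated banded solver, `scipy.linalg.solve_banded`, which could be timed against the same system. A minimal sketch (an addition to the notebook; `A3_arr`, `ab`, and `x_banded` are names introduced here, and the `ab` packing follows SciPy's documented `ab[u + i - j, j] = A[i, j]` layout for a matrix with one sub- and one super-diagonal):

```python
import numpy as np
from scipy.linalg import solve_banded

A3_arr = np.asarray(A3.todense())

# pack the three bands into the (l + u + 1) x N array expected by solve_banded
ab = np.zeros((3, A3_arr.shape[0]))
ab[0, 1:]  = np.diag(A3_arr, k=1)    # super-diagonal (first entry unused)
ab[1, :]   = np.diag(A3_arr, k=0)    # main diagonal
ab[2, :-1] = np.diag(A3_arr, k=-1)   # sub-diagonal (last entry unused)

x_banded = solve_banded((1, 1), ab, d3)
print(np.allclose(A3_arr.dot(x_banded), d3))   # expected to print True
```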
94341a25c263b74727f26308c1862084c56fae17
946,690
ipynb
Jupyter Notebook
Ipynb/L05_SysLin.ipynb
epmmko/counting_lines_of_code
efea2f4ceeef269e06da9b39662ab5acb5c0b4cf
[ "Unlicense" ]
null
null
null
Ipynb/L05_SysLin.ipynb
epmmko/counting_lines_of_code
efea2f4ceeef269e06da9b39662ab5acb5c0b4cf
[ "Unlicense" ]
null
null
null
Ipynb/L05_SysLin.ipynb
epmmko/counting_lines_of_code
efea2f4ceeef269e06da9b39662ab5acb5c0b4cf
[ "Unlicense" ]
null
null
null
28.886278
9,478
0.421378
true
312,278
Qwen/Qwen-72B
1. YES 2. YES
0.859664
0.805632
0.692573
__label__yue_Hant
0.114771
0.44741
###### Content provided under a Creative Commons Attribution license, CC-BY 4.0; code under BSD 3-Clause license. (c)2014 Lorena A. Barba, Olivier Mesnard. Thanks: NSF for support via CAREER award #1149784. # Source panel method We are now getting close to the finish line with *AeroPython*! Our first few lessons introduced the fundamental flow solutions of potential flow, and we quickly learned that using our superposition powers we could get some useful results in aerodynamics. The superposition of a [doublet](03_Lesson03_doublet.ipynb) and a free stream gave the flow around a circular cylinder, and we learned about the *D'Alembert paradox*: the result of zero drag for potential flow around a cylinder. Adding a [vortex](06_Lesson06_vortexLift.ipynb) at the center of the cylinder, we learned about lift and the *Kutta-Joukowski theorem* stating that lift is proportional to circulation: $L=\rho U \Gamma$. A most important result! Adding together fundamental solutions of potential flow and seeing what we get when interpreting a dividing streamline as a solid body is often called an *indirect method*. This method goes all the way back to Rankine in 1871! But its applicability is limited because we can't stipulate a geometry and find the flow associated to it. In [Lesson 9](09_Lesson09_flowOverCylinder.ipynb), we learned that it is possible to stipulate first the geometry, and then solve for the source strengths on a panel discretization of the body that makes the flow tangent at the boundary. This is called a *direct method* and it took off in the 1960s with the work of Hess and Smith at Douglas Aircraft Company. A set of panels (line segments in 2D) can represent the surface of any solid body immersed in a potential flow by making the source-sheet strengths such that the normal velocity at each panel is equal to zero. This is a very powerful idea! But you should realize that all the panel strengths are coupled to each other, which is why we end up with a linear system of equations. For an arbitrary geometry, we need to build a set of panels according to some points that define the geometry. In this lesson, we will read from a file a geometry definition corresponding to a **NACA0012 airfoil**, create a set of panels, and solve for the source-sheet strengths to get flow around the airfoil. *Make sure you have studied [Lesson 9](09_Lesson09_flowOverCylinder.ipynb) carefully before proceeding!* We will not repeat the full mathematical formulation in this notebook, so refer back as needed. First, load our favorite Python libraries, and the `integrate` module from SciPy: ```python import os import math import numpy from scipy import integrate from matplotlib import pyplot # display the figures in the Notebook %matplotlib inline ``` Next, we read the body geometry from a file using the NumPy function [`loadtxt()`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.loadtxt.html). The file comes from the [Airfoil Tools](http://airfoiltools.com/airfoil/details?airfoil=n0012-il) website and it contains a set of coordinates for the standard NACA0012 symmetric profile. We saved the file in the `resources` folder and load it from our local copy. The geometry points get loaded into one NumPy array, so we separate the data into two arrays: `x,y` (for better code readability). The subsequent code will plot the geometry of the airfoil. 
```python # read of the geometry from a data file naca_filepath = os.path.join('resources', 'naca0012.dat') with open (naca_filepath, 'r') as file_name: x, y = numpy.loadtxt(file_name, dtype=float, delimiter='\t', unpack=True) # plot the geometry width = 10 pyplot.figure(figsize=(width, width)) pyplot.grid() pyplot.xlabel('x', fontsize=16) pyplot.ylabel('y', fontsize=16) pyplot.plot(x, y, color='k', linestyle='-', linewidth=2) pyplot.axis('scaled', adjustable='box') pyplot.xlim(-0.1, 1.1) pyplot.ylim(-0.1, 0.1); ``` ## Discretization into panels Like in [Lesson 9](09_Lesson09_flowOverCylinder.ipynb), we will create a discretization of the body geometry into panels (line segments in 2D). A panel's attributes are: its starting point, end point and mid-point, its length and its orientation. See the following figure for the nomenclature used in the code and equations below. We can modify the `Panel` class from our previous notebook slightly, to work better for our study of flow over an airfoil. The only difference is that we identify points on the top or bottom surfaces with the words `upper` and `lower`, which is only used later for plotting results with different colors for the top and bottom surfaces of the profile. ```python class Panel: """ Contains information related to a panel. """ def __init__(self, xa, ya, xb, yb): """ Initializes the panel. Sets the end-points and calculates the center, length, and angle (with the x-axis) of the panel. Defines if the panel is on the lower or upper surface of the geometry. Initializes the source-sheet strength, tangential velocity, and pressure coefficient to zero. Parameters ---------- xa: float x-coordinate of the first end-point. ya: float y-coordinate of the first end-point. xb: float x-coordinate of the second end-point. yb: float y-coordinate of the second end-point. """ self.xa, self.ya = xa, ya self.xb, self.yb = xb, yb self.xc, self.yc = (xa + xb) / 2, (ya + yb) / 2 # control-point (center-point) self.length = math.sqrt((xb - xa)**2 + (yb - ya)**2) # length of the panel # orientation of the panel (angle between x-axis and panel's normal) if xb - xa <= 0.0: self.beta = math.acos((yb - ya) / self.length) elif xb - xa > 0.0: self.beta = math.pi + math.acos(-(yb - ya) / self.length) # location of the panel if self.beta <= math.pi: self.loc = 'upper' else: self.loc = 'lower' self.sigma = 0.0 # source strength self.vt = 0.0 # tangential velocity self.cp = 0.0 # pressure coefficient ``` For the circular cylinder, the discretization into panels was really easy. This is the part that gets more complicated when you want to compute the flow around a general geometry, while the solution part is effectively the same as in [Lesson 9](09_Lesson09_flowOverCylinder.ipynb). The function below will create the panels from the geometry data that was read from a file. It is better to have small panels near the leading-edge and the trailing edge, where the curvature is large. One method to get a non uniform distribution around the airfoil is to first discretize a circle with diameter equal to the airfoil's chord, with the leading edge and trailing edge touching the circle at a node, as shown in the following sketch. Then, we store the $x$-coordinates of the circle points, `x_circle`, which will also be the $x$-coordinates of the panel nodes, `x`, and project the $y$-coordinates of the circle points onto the airfoil by interpolation. We end up with a node distribution on the airfoil that is refined near the leading edge and the trailing edge. 
It will look like this: With the discretization method just described, the function `define_panels()` returns an array of objects, each an instance of the class `Panel` and containing all information about a panel, given the desired number of panels and the set of body coordinates. A few remarks about the implementation of the function `define_panels()`: * we just need to compute the $x$-coordinates of the circle (`x_circle`) since the $y$-coordinates of the panel nodes will be computed by interpolation; * we create a circle with `N+1` points, but the first and last points coincide; * we extend our NumPy arrays by adding an extra value that is equal to the first one; thus we don't have to do anything special with the value `x[i+1]` in the different loops; * the *while*-loop is used to find two consecutive points, (`x[I]`,`y[I]`) and (`x[I+1]`,`y[I+1]`), on the foil such that the interval [`x[I]`,`x[I+1]`] contains the value `x_ends[i]`; we use the keyword `break` to get out of the loop; * once the two points have been identified, the value `y_ends[i]` is computed by interpolation. ```python def define_panels(x, y, N=40): """ Discretizes the geometry into panels using the 'cosine' method. Parameters ---------- x: 1D array of floats x-coordinate of the points defining the geometry. y: 1D array of floats y-coordinate of the points defining the geometry. N: integer, optional Number of panels; default: 40. Returns ------- panels: 1D Numpy array of Panel objects The discretization of the geometry into panels. """ R = (x.max() - x.min()) / 2 # radius of the circle x_center = (x.max() + x.min()) / 2 # x-coord of the center # define x-coord of the circle points x_circle = x_center + R * numpy.cos(numpy.linspace(0.0, 2 * math.pi, N + 1)) x_ends = numpy.copy(x_circle) # projection of the x-coord on the surface y_ends = numpy.empty_like(x_ends) # initialization of the y-coord Numpy array x, y = numpy.append(x, x[0]), numpy.append(y, y[0]) # extend arrays using numpy.append # computes the y-coordinate of end-points I = 0 for i in range(N): while I < len(x) - 1: if (x[I] <= x_ends[i] <= x[I + 1]) or (x[I + 1] <= x_ends[i] <= x[I]): break else: I += 1 a = (y[I + 1] - y[I]) / (x[I + 1] - x[I]) b = y[I + 1] - a * x[I + 1] y_ends[i] = a * x_ends[i] + b y_ends[N] = y_ends[0] panels = numpy.empty(N, dtype=object) for i in range(N): panels[i] = Panel(x_ends[i], y_ends[i], x_ends[i + 1], y_ends[i + 1]) return panels ``` Now we can use this function, calling it with a desired number of panels whenever we execute the cell below. We also plot the resulting geometry. ```python N = 40 # number of panels panels = define_panels(x, y, N) # discretizes of the geometry into panels # plot the geometry and the panels width = 10 pyplot.figure(figsize=(width, width)) pyplot.grid() pyplot.xlabel('x', fontsize=16) pyplot.ylabel('y', fontsize=16) pyplot.plot(x, y, color='k', linestyle='-', linewidth=2) pyplot.plot(numpy.append([panel.xa for panel in panels], panels[0].xa), numpy.append([panel.ya for panel in panels], panels[0].ya), linestyle='-', linewidth=1, marker='o', markersize=6, color='#CD2305') pyplot.axis('scaled', adjustable='box') pyplot.xlim(-0.1, 1.1) pyplot.ylim(-0.1, 0.1); ``` ## Freestream conditions The NACA0012 airfoil will be immersed in a uniform flow with velocity $U_\infty$ and an angle of attack $\alpha=0$. Even though it may seem like overkill to create a class for the freestream, we'll do it anyway. When creating a class, one is expecting to also create several instances of its objects. 
Here, we just have one freestream, so why define a class? Well, it makes the code more readable and does not block the programmer from using the variable names `u_inf` and `alpha` for something else outside of the class. Also, every time we need the freestream condition as input to a function, we will just have to pass the object as an argument and not all the attributes of the freestream. ```python class Freestream: """ Freestream conditions. """ def __init__(self, u_inf=1.0, alpha=0.0): """ Sets the freestream speed and angle (with the x-axis). Parameters ---------- u_inf: float, optional Freestream speed; default: 1.0. alpha: float, optional Angle of attack in degrees; default: 0.0. """ self.u_inf = u_inf self.alpha = numpy.radians(alpha) # degrees --> radians ``` ```python # define and creates the object freestream u_inf = 1.0 # freestream spee alpha = 0.0 # angle of attack (in degrees) freestream = Freestream(u_inf, alpha) # instantiation of the object freestream ``` ## Flow tangency boundary condition Enforcing the flow-tangency condition on each *control point* approximately makes the body geometry correspond to a dividing streamline (and the approximation improves if we represented the body with more and more panels). So, for each panel $i$, we make $u_n=0$ at $(x_{c_i},y_{c_i})$, which leads to the equation derived in the previous lesson: $$ \begin{equation} u_{n_i} = \frac{\partial}{\partial n_i}\left\lbrace \phi\left(x_{c_i},y_{c_i}\right) \right\rbrace = 0 \end{equation} $$ i.e. $$ \begin{equation} \begin{split} 0 = & U_\infty \cos\beta_i + \frac{\sigma_i}{2} \\ & + \sum_{j=1,j\neq i}^{N_p} \frac{\sigma_j}{2\pi} \int \frac{\left(x_{c_i}-x_j(s_j)\right) \cos\beta_i + \left(y_{c_i}-y_j(s_j)\right) \sin\beta_i}{\left(x_{c_i}-x_j(s)\right)^2 + \left(y_{c_i}-y_j(s)\right)^2} {\rm d}s_j \end{split} \end{equation} $$ In the equation above, we calculate the derivative of the potential in the normal direction to enforce the flow tangency condition on each panel. But later, we will have to calculate the derivative in the tangential direction to compute the surface pressure coefficient. And, when we are interested in plotting the velocity field onto a mesh, we will have to calculate the derivative in the $x$- and $y$-direction. Therefore the function below is similar to the one implemented in [Lesson 9](09_Lesson09_flowOverCylinder.ipynb) to obtain the integrals along each panel, but we've generalized it to adapt to the direction of derivation (by means of two new arguments, `dxdz` and `dydz`, which respectively represent the value of $\frac{\partial x_{c_i}}{\partial z_i}$ and $\frac{\partial y_{c_i}}{\partial z_i}$, $z_i$ being the desired direction). Moreover, the function is also more general in the sense of allowing any evaluation point, not just a control point on a panel (the argument `p_i` has been replaced by the coordinates `x` and `y` of the control-point, and `p_j` has been replaced with `panel`). ```python def integral(x, y, panel, dxdz, dydz): """ Evaluates the contribution of a panel at one point. Parameters ---------- x: float x-coordinate of the target point. y: float y-coordinate of the target point. panel: Panel object Source panel which contribution is evaluated. dxdz: float Derivative of x in the z-direction. dydz: float Derivative of y in the z-direction. Returns ------- Integral over the panel of the influence at the given target point. 
""" def integrand(s): return (((x - (panel.xa - math.sin(panel.beta) * s)) * dxdz + (y - (panel.ya + math.cos(panel.beta) * s)) * dydz) / ((x - (panel.xa - math.sin(panel.beta) * s))**2 + (y - (panel.ya + math.cos(panel.beta) * s))**2) ) return integrate.quad(integrand, 0.0, panel.length)[0] ``` ## Building the linear system Here, we build and solve the linear system of equations of the form $$ \begin{equation} [A][\sigma] = [b] \end{equation} $$ In building the matrix, below, we call the `integral()` function with the correct values for the last parameters: $\cos \beta_i$ and $\sin\beta_i$, corresponding to a derivative in the normal direction. Finally, we use `linalg.solve()` from NumPy to solve the system and find the strength of each panel. ```python def build_matrix(panels): """ Builds the source matrix. Parameters ---------- panels: 1D array of Panel object The source panels. Returns ------- A: 2D Numpy array of floats The source matrix (NxN matrix; N is the number of panels). """ N = len(panels) A = numpy.empty((N, N), dtype=float) numpy.fill_diagonal(A, 0.5) for i, p_i in enumerate(panels): for j, p_j in enumerate(panels): if i != j: A[i, j] = 0.5 / math.pi * integral(p_i.xc, p_i.yc, p_j, math.cos(p_i.beta), math.sin(p_i.beta)) return A def build_rhs(panels, freestream): """ Builds the RHS of the linear system. Parameters ---------- panels: 1D array of Panel objects The source panels. freestream: Freestream object The freestream conditions. Returns ------- b: 1D Numpy array of floats RHS of the linear system. """ b = numpy.empty(len(panels), dtype=float) for i, panel in enumerate(panels): b[i] = -freestream.u_inf * math.cos(freestream.alpha - panel.beta) return b ``` ```python A = build_matrix(panels) # compute the singularity matrix b = build_rhs(panels, freestream) # compute the freestream RHS ``` ```python # solve the linear system sigma = numpy.linalg.solve(A, b) for i, panel in enumerate(panels): panel.sigma = sigma[i] ``` ## Surface pressure coefficient From Bernoulli's equation, the pressure coefficient on the $i$-th panel is $$ \begin{equation} C_{p_i} = 1-\left(\frac{u_{t_i}}{U_\infty}\right)^2 \end{equation} $$ where $u_{t_i}$ is the tangential component of the velocity at the center point of the $i$-th panel, $$ \begin{equation} \begin{split} u_{t_i} = & -U_\infty \sin\beta_i \\ & + \sum_{j=1}^{N_p} \frac{\sigma_j}{2\pi} \int \frac{\left(x_{c_i}-x_j(s_j)\right) \frac{\partial x_{c_i}}{\partial t_i} + \left(y_{c_i}-y_j(s_j)\right) \frac{\partial y_{c_i}}{\partial t_i}}{\left(x_{c_i}-x_j(s)\right)^2 + \left(y_{c_i}-y_j(s)\right)^2} {\rm d}s_j \end{split} \end{equation} $$ with $$ \begin{equation} \frac{\partial x_{c_i}}{\partial t_i} = -\sin\beta_i \quad\text{and} \quad \frac{\partial y_{c_i}}{\partial t_i} = \cos\beta_i \end{equation} $$ Notice that below we call the function `integral()` with different arguments: $-\sin\beta_i$ and $\cos\beta_i$ to get the derivation in the tangential direction. ```python def get_tangential_velocity(panels, freestream): """ Computes the tangential velocity on the surface of the panels. Parameters --------- panels: 1D array of Panel objects The source panels. freestream: Freestream object The freestream conditions. 
""" N = len(panels) A = numpy.empty((N, N), dtype=float) numpy.fill_diagonal(A, 0.0) for i, p_i in enumerate(panels): for j, p_j in enumerate(panels): if i != j: A[i, j] = 0.5 / math.pi * integral(p_i.xc, p_i.yc, p_j, -math.sin(p_i.beta), math.cos(p_i.beta)) b = freestream.u_inf * numpy.sin([freestream.alpha - panel.beta for panel in panels]) sigma = numpy.array([panel.sigma for panel in panels]) vt = numpy.dot(A, sigma) + b for i, panel in enumerate(panels): panel.vt = vt[i] ``` ```python # compute the tangential velocity at the center-point of each panel get_tangential_velocity(panels, freestream) ``` ```python def get_pressure_coefficient(panels, freestream): """ Computes the surface pressure coefficients on the panels. Parameters --------- panels: 1D array of Panel objects The source panels. freestream: Freestream object The freestream conditions. """ for panel in panels: panel.cp = 1.0 - (panel.vt / freestream.u_inf)**2 ``` ```python # computes the surface pressure coefficients get_pressure_coefficient(panels, freestream) ``` ### Theoretical solution There is a classical method to obtain the theoretical characteristics of airfoils, known as *Theodorsen's method*. It uses the Joukowski transformation but is able to deal with any airfoil by an additional transformation between a "near circle" and a circle. The method is hairy indeed! But the resulting values of pressure coefficient are provided for some airfoils in table form in the 1945 [NACA Report No.824](http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19930090976.pdf), available from the NASA web server (see p. 71). The values of $(u/U_{\infty})^2$ are given for several stations along the chord length. We transcribed them here, saving them into an array: ```python voverVsquared=numpy.array([0.0, 0.64, 1.01, 1.241, 1.378, 1.402, 1.411, 1.411, 1.399, 1.378, 1.35, 1.288, 1.228, 1.166, 1.109, 1.044, 0.956, 0.906, 0.0]) print(voverVsquared) ``` [0. 0.64 1.01 1.241 1.378 1.402 1.411 1.411 1.399 1.378 1.35 1.288 1.228 1.166 1.109 1.044 0.956 0.906 0. ] ```python xtheo=numpy.array([0.0, 0.5, 1.25, 2.5, 5.0, 7.5, 10.0, 15.0, 20.0, 25.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0, 95.0, 100.0]) xtheo /= 100 print(xtheo) ``` [0. 0.005 0.0125 0.025 0.05 0.075 0.1 0.15 0.2 0.25 0.3 0.4 0.5 0.6 0.7 0.8 0.9 0.95 1. ] ### And plot the result! We will use the values from the NACA report (also given in the book by Abbot and von Doenhoff, ["Theory of Wing Sections,"](http://books.google.com/books/about/Theory_of_Wing_Sections_Including_a_Summ.html?id=DPZYUGNyuboC) 1949) to visually compare the pressure distribution with the result of our source panel method. Let's see how it looks! ```python # plot the surface pressure coefficient pyplot.figure(figsize=(10, 6)) pyplot.grid() pyplot.xlabel('x', fontsize=16) pyplot.ylabel('$C_p$', fontsize=16) pyplot.plot([panel.xc for panel in panels if panel.loc == 'upper'], [panel.cp for panel in panels if panel.loc == 'upper'], label='upper', color='r', linewidth=1, marker='x', markersize=8) pyplot.plot([panel.xc for panel in panels if panel.loc == 'lower'], [panel.cp for panel in panels if panel.loc == 'lower'], label='lower', color='b', linewidth=0, marker='d', markersize=6) pyplot.plot(xtheo, 1-voverVsquared, label='theoretical', color='k', linestyle='--',linewidth=2) pyplot.legend(loc='best', prop={'size':14}) pyplot.xlim(-0.1, 1.1) pyplot.ylim(1.0, -0.6) pyplot.title('Number of panels : {}'.format(N)); ``` That looks pretty good! 
The only place where the panel method doesn't quite match the tabulated data from Theordorsen's method is at the trailing edge. But note that the flow-tangency boundary condition in the panel method is applied at the control point of the panel (not at the endpoints), so this discrepancy is not surprising. ##### Accuracy check For a closed body, the sum of all the source strengths must be zero. If not, it means the body would be adding or absorbing mass from the flow! Therefore, we should have $$ \sum_{j=1}^{N} \sigma_j l_j = 0 $$ where $l_j$ is the length of the $j^{\text{th}}$ panel. With this, we can get a get an idea of the accuracy of the source panel method. ```python # calculate the accuracy accuracy = sum([panel.sigma*panel.length for panel in panels]) print('--> sum of source/sink strengths: {}'.format(accuracy)) ``` --> sum of source/sink strengths: 0.004617031175283089 ## Streamlines onto a mesh grid To get a streamline plot, we have to create a mesh (like we've done in all *AeroPython* lessons!) and compute the velocity field onto it. Knowing the strength of every panel, we find the $x$-component of the velocity by taking derivative of the velocity potential in the $x$-direction, and the $y$-component by taking derivative in the $y$-direction: $$ u\left(x,y\right) = \frac{\partial}{\partial x}\left\lbrace \phi\left(x,y\right) \right\rbrace $$ $$ v\left(x,y\right) = \frac{\partial}{\partial y}\left\lbrace \phi\left(x,y\right) \right\rbrace $$ Notice that here we call the function `integral()` with $1,0$ as the final arguments when calculating the derivatives in the $x$-direction, and $0,1$ for the derivatives in th $y$-direction. In addition, we use the function `numpy.vectorize()` (as we did in [Lesson 8](08_Lesson08_sourceSheet.ipynb)) to avoid the nested loops over the domain. ```python def get_velocity_field(panels, freestream, X, Y): """ Computes the velocity field on a given 2D mesh. Parameters --------- panels: 1D array of Panel objects The source panels. freestream: Freestream object The freestream conditions. X: 2D Numpy array of floats x-coordinates of the mesh points. Y: 2D Numpy array of floats y-coordinate of the mesh points. Returns ------- u: 2D Numpy array of floats x-component of the velocity vector field. v: 2D Numpy array of floats y-component of the velocity vector field. """ # freestream contribution u = freestream.u_inf * math.cos(freestream.alpha) * numpy.ones_like(X, dtype=float) v = freestream.u_inf * math.sin(freestream.alpha) * numpy.ones_like(X, dtype=float) # add the contribution from each source (superposition powers!!!) 
vec_intregral = numpy.vectorize(integral) for panel in panels: u += panel.sigma / (2.0 * math.pi) * vec_intregral(X, Y, panel, 1.0, 0.0) v += panel.sigma / (2.0 * math.pi) * vec_intregral(X, Y, panel, 0.0, 1.0) return u, v ``` ```python # define a mesh grid nx, ny = 20, 20 # number of points in the x and y directions x_start, x_end = -1.0, 2.0 y_start, y_end = -0.3, 0.3 X, Y = numpy.meshgrid(numpy.linspace(x_start, x_end, nx), numpy.linspace(y_start, y_end, ny)) # compute the velocity field on the mesh grid u, v = get_velocity_field(panels, freestream, X, Y) ``` ```python # plot the velocity field width = 10 pyplot.figure(figsize=(width, width)) pyplot.xlabel('x', fontsize=16) pyplot.ylabel('y', fontsize=16) pyplot.streamplot(X, Y, u, v, density=1, linewidth=1, arrowsize=1, arrowstyle='->') pyplot.fill([panel.xc for panel in panels], [panel.yc for panel in panels], color='k', linestyle='solid', linewidth=2, zorder=2) pyplot.axis('scaled', adjustable='box') pyplot.xlim(x_start, x_end) pyplot.ylim(y_start, y_end) pyplot.title('Streamlines around a NACA 0012 airfoil (AoA = ${}^o$)'.format(alpha), fontsize=16); ``` We can now calculate the pressure coefficient. In Lesson 9, we computed the pressure coefficient on the surface of the circular cylinder. That was useful because we have an analytical solution for the surface pressure on a cylinder in potential flow. For an airfoil, we are interested to see how the pressure looks all around it, and we make a contour plot in the flow domain. ```python # compute the pressure field cp = 1.0 - (u**2 + v**2) / freestream.u_inf**2 # plot the pressure field width = 10 pyplot.figure(figsize=(width, width)) pyplot.xlabel('x', fontsize=16) pyplot.ylabel('y', fontsize=16) contf = pyplot.contourf(X, Y, cp, levels=numpy.linspace(-2.0, 1.0, 100), extend='both') cbar = pyplot.colorbar(contf, orientation='horizontal', shrink=0.5, pad = 0.1, ticks=[-2.0, -1.0, 0.0, 1.0]) cbar.set_label('$C_p$', fontsize=16) pyplot.fill([panel.xc for panel in panels], [panel.yc for panel in panels], color='k', linestyle='solid', linewidth=2, zorder=2) pyplot.axis('scaled', adjustable='box') pyplot.xlim(x_start, x_end) pyplot.ylim(y_start, y_end) pyplot.title('Contour of pressure field', fontsize=16); ``` ### Final words We've learned to use a source-sheet to represent any solid body: first a [circular cylinder](09_Lesson09_flowOverCylinder.ipynb) (which we knew we could get by superposing a doublet and a freestream), and now an airfoil. But what is the feature of airfoils that makes them interesting? Well, the fact that we can use them to generate lift and make things that fly, of course! But what do we need to generate lift? Think, think ... what is it? ## References 1. [Airfoil Tools](http://airfoiltools.com/index), website providing airfoil data. 1. Ira Herbert Abbott, Albert Edward Von Doenhoff and Louis S. Stivers, Jr. (1945), "Summary of Airfoil Data," NACA Report No.824, [PDF on the NASA web server](http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19930090976.pdf) (see p. 71) 1. Ira Herbert Abbott, Albert Edward Von Doenhoff, "Theory of Wing Sections, Including a Summary of Airfoil Data" (1949), Dover Press. A further reference on Theodorsen's method is: * Roland Schinzinger, Patricio A. A. Laura (1991), "Conformal Mapping: Methods and Applications." Dover edition in 2003. 
[Read on Google Books](https://books.google.com/books?id=qe-7AQAAQBAJ&lpg=PA128&ots=wbg0jLlqq5&dq=method%20theodorsen&pg=PA128#v=onepage&q=%22method%20of%20theodorsen%20and%20garrick%22&f=false) --- ###### Please ignore the cell below. It just loads our style for the notebook. ```python from IPython.core.display import HTML def css_styling(filepath): styles = open(filepath, 'r').read() return HTML(styles) css_styling('../styles/custom.css') ``` <link href='http://fonts.googleapis.com/css?family=Fenix' rel='stylesheet' type='text/css'> <link href='http://fonts.googleapis.com/css?family=Alegreya+Sans:100,300,400,500,700,800,900,100italic,300italic,400italic,500italic,700italic,800italic,900italic' rel='stylesheet' type='text/css'> <link href='http://fonts.googleapis.com/css?family=Source+Code+Pro:300,400' rel='stylesheet' type='text/css'> <style> @font-face { font-family: "Computer Modern"; src: url('http://mirrors.ctan.org/fonts/cm-unicode/fonts/otf/cmunss.otf'); } #notebook_panel { /* main background */ background: rgb(245,245,245); } div.cell { /* set cell width */ width: 750px; } div #notebook { /* centre the content */ background: #fff; /* white background for content */ width: 1000px; margin: auto; padding-left: 0em; } #notebook li { /* More space between bullet points */ margin-top:0.8em; } /* draw border around running cells */ div.cell.border-box-sizing.code_cell.running { border: 1px solid #111; } /* Put a solid color box around each cell and its output, visually linking them*/ div.cell.code_cell { background-color: rgb(256,256,256); border-radius: 0px; padding: 0.5em; margin-left:1em; margin-top: 1em; } div.text_cell_render{ font-family: 'Alegreya Sans' sans-serif; line-height: 140%; font-size: 125%; font-weight: 400; width:600px; margin-left:auto; margin-right:auto; } /* Formatting for header cells */ .text_cell_render h1 { font-family: 'Alegreya Sans', sans-serif; font-style:regular; font-weight: 200; font-size: 50pt; line-height: 100%; color:#CD2305; margin-bottom: 0.5em; margin-top: 0.5em; display: block; } .text_cell_render h2 { font-family: 'Fenix', serif; font-size: 22pt; line-height: 100%; margin-bottom: 0.1em; margin-top: 0.3em; display: block; } .text_cell_render h3 { font-family: 'Fenix', serif; margin-top:12px; font-size: 16pt; margin-bottom: 3px; font-style: regular; } .text_cell_render h4 { /*Use this for captions*/ font-family: 'Fenix', serif; font-size: 2pt; text-align: center; margin-top: 0em; margin-bottom: 2em; font-style: regular; } .text_cell_render h5 { /*Use this for small titles*/ font-family: 'Alegreya Sans', sans-serif; font-weight: 300; font-size: 16pt; color: #CD2305; font-style: italic; margin-bottom: .5em; margin-top: 0.5em; display: block; } .text_cell_render h6 { /*use this for copyright note*/ font-family: 'Source Code Pro', sans-serif; font-weight: 300; font-size: 9pt; line-height: 100%; color: grey; margin-bottom: 1px; margin-top: 1px; } .CodeMirror{ font-family: "Source Code Pro"; font-size: 90%; } .warning{ color: rgb( 240, 20, 20 ) } </style>
5490f2cb2ea938945581b0f00d6fecd2bc5b9605
194,972
ipynb
Jupyter Notebook
lessons/10_Lesson10_sourcePanelMethod.ipynb
tobiassugandi/AeroPython
18cca97ec5c0ef2c253565c3b98dcf9bbf2e4dee
[ "CC-BY-4.0", "BSD-3-Clause" ]
null
null
null
lessons/10_Lesson10_sourcePanelMethod.ipynb
tobiassugandi/AeroPython
18cca97ec5c0ef2c253565c3b98dcf9bbf2e4dee
[ "CC-BY-4.0", "BSD-3-Clause" ]
null
null
null
lessons/10_Lesson10_sourcePanelMethod.ipynb
tobiassugandi/AeroPython
18cca97ec5c0ef2c253565c3b98dcf9bbf2e4dee
[ "CC-BY-4.0", "BSD-3-Clause" ]
null
null
null
159.160816
61,560
0.865314
true
8,854
Qwen/Qwen-72B
1. YES 2. YES
0.874077
0.845942
0.739419
__label__eng_Latn
0.977901
0.55625
An answer to the question https://ru.stackoverflow.com/questions/1293488/

Suppose we are given a point $P(x_p,y_p,z_p)$ and an ellipsoid with known semi-axes and center, whose equation is $x^2/a^2+y^2/a^2+z^2/b^2=1$. The point $P$ lies outside the ellipsoid, at some distance from it. The set of tangent lines to the ellipsoid drawn from $P$ forms a conical surface. We need to find a general expression for the points where this cone touches the ellipsoid (clearly these points lie on an ellipse, so the problem essentially reduces to finding the equation of the ellipse along which the cone touches the ellipsoid). Can the problem be simplified here by switching to spherical coordinates? If so, how should one proceed? If not, the question is the same.

```python
import sympy as sp
import sympy.abc as abc
import sympy.vector as spv
```

$a$ and $b$ are the parameters of the ellipsoid $\frac{x^2}{a^2} + \frac{y^2}{a^2} + \frac{z^2}{b^2}$

```python
a,b = abc.a, abc.b
```

Three coordinate systems:
- $A$ - the original coordinate system, in which the ellipsoid is given
- $B$ - the coordinate system in which the ellipsoid becomes the unit sphere
- $C$ - the coordinate system in which the ellipsoid has been turned into a sphere and the point $P$ lies on the $Z$ axis

Matrices:
- $A2B$ converts coordinates from $A$ to $B$, and the matrix $B2A$ converts them back
- $B2C$ converts coordinates from $B$ to $C$, and the matrix $C2B$ converts them back

```python
A2B = sp.Matrix([[1/a, 0, 0], [0, 1/a, 0], [0, 0, 1/b]])
```

```python
B2A = sp.Matrix([[a, 0, 0], [0, a, 0], [0, 0, b]])
```

Check that the matrices $A2B$ and $B2A$ are inverse to each other

```python
B2A*A2B
```

$\displaystyle \left[\begin{matrix}1 & 0 & 0\\0 & 1 & 0\\0 & 0 & 1\end{matrix}\right]$

In the coordinate system $B$ the vector $OP$ makes an angle $\phi$ with the $x$ axis and an angle $\theta$ with the $z$ axis.

The length of the vector $OP$ in this coordinate system is $R = \sqrt{\frac{x^2}{a^2} + \frac{y^2}{a^2} + \frac{z^2}{b^2}}$, where $x,y,z$ are the coordinates of the point $P$ in the original coordinate system.

The angles $\theta$ and $\phi$ in the system $B$ are defined as follows:
- $\cos\theta = \frac{z}{bR}$
- $\cos\phi = \frac{x}{aR\sin\theta}$
- $\sin\phi = \frac{y}{aR\sin\theta}$

```python
phi, theta = abc.phi, abc.theta
```

The rotation from system $B$ to system $C$, in which the point $P$ ends up on the $Z$ axis, is performed in two steps:
- first by the angle $\phi$ about the $Z$ axis. As a result, the point $P$ ends up in the $ZOX$ plane
- then by the angle $\theta$ about the $Y$ axis. As a result, the point $P$ ends up on the $Z$ axis of the new coordinate system.

```python
B2C = spv.BodyOrienter(phi, theta, 0, 'ZYZ').rotation_matrix()
B2C
```

$\displaystyle \left[\begin{matrix}\cos{\left(\phi \right)} \cos{\left(\theta \right)} & \sin{\left(\phi \right)} \cos{\left(\theta \right)} & - \sin{\left(\theta \right)}\\- \sin{\left(\phi \right)} & \cos{\left(\phi \right)} & 0\\\sin{\left(\theta \right)} \cos{\left(\phi \right)} & \sin{\left(\phi \right)} \sin{\left(\theta \right)} & \cos{\left(\theta \right)}\end{matrix}\right]$

The matrix of the inverse transformation is built in the reverse order:
- first by the angle $-\theta$ about the $Y$ axis,
- then by the angle $-\phi$ about the $Z$ axis.
```python C2B = spv.BodyOrienter(-theta, -phi, 0, 'YZX').rotation_matrix() C2B ``` $\displaystyle \left[\begin{matrix}\cos{\left(\phi \right)} \cos{\left(\theta \right)} & - \sin{\left(\phi \right)} & \sin{\left(\theta \right)} \cos{\left(\phi \right)}\\\sin{\left(\phi \right)} \cos{\left(\theta \right)} & \cos{\left(\phi \right)} & \sin{\left(\phi \right)} \sin{\left(\theta \right)}\\- \sin{\left(\theta \right)} & 0 & \cos{\left(\theta \right)}\end{matrix}\right]$ Проверка того, что матрицы $B2C$ и $C2B$ обратны друг другу ```python sp.trigsimp(C2B*B2C) # упрощение тригонометрических выражений ``` $\displaystyle \left[\begin{matrix}1 & 0 & 0\\0 & 1 & 0\\0 & 0 & 1\end{matrix}\right]$ ```python R = abc.R ``` Точка $P$ в системе координат $C$ находится на оси $Z$ на расстоянии $R$ от начала координат ```python P_C = sp.Matrix([[0],[0],[R]]) ``` Координаты точки $P$ в исходной системе координат ```python P_A = B2A*C2B*P_C ``` ```python P_A ``` $\displaystyle \left[\begin{matrix}R a \sin{\left(\theta \right)} \cos{\left(\phi \right)}\\R a \sin{\left(\phi \right)} \sin{\left(\theta \right)}\\R b \cos{\left(\theta \right)}\end{matrix}\right]$ В системе координат $C$ эллипс представлен единичной сферой. Касательные из точки $P$ в этой системе координат образуют круг радиусом $r$ в плоскости, отстоящей на расстояние $h$ от начала координат. Подробнее о касательной к кругу можно прочитать здесь: https://www.wikiwand.com/en/Tangent_lines_to_circles ```python r = sp.sqrt(1 - 1/(R*R)) r ``` $\displaystyle \sqrt{1 - \frac{1}{R^{2}}}$ ```python h = sp.sqrt(1 - r*r) ``` ```python h ``` $\displaystyle \sqrt{\frac{1}{R^{2}}}$ ```python ``` Построим уравнение касательного круга в параметрическом виде, как вращение точки вокруг оси $Z$ с круговой частотой $\omega$ ```python omega = abc.omega t = abc.t ``` ```python base = sp.Matrix([[0],[0], [1/R]]) # центр круга e1 = sp.Matrix([[r],[0],[0]]) # проекция точки на ось x e2 = sp.Matrix([[0], [r],[0]]) # проекция точки на ось y ``` Радиус-вектор точек, в которых конус касается эллипсоида, в системе координат $C$ ```python tangent = base + e1*sp.cos(t) + e2*sp.sin(t) tangent ``` $\displaystyle \left[\begin{matrix}\sqrt{1 - \frac{1}{R^{2}}} \cos{\left(t \right)}\\\sqrt{1 - \frac{1}{R^{2}}} \sin{\left(t \right)}\\\frac{1}{R}\end{matrix}\right]$ Проверка: касательная плоскость к единичной сфере в точке $(x_0, y_0, z_0)$ задаётся уравнением $2x_0(x-x_0) + 2y_0(y-y_0) + 2z_0(z-z_0) = 0$ Подставим в это уравнение в качестве $(x_0, y_0, z_0)$ координаты из $tangent$, а вместо $(x, y, z)$ координаты точки $P = (0, 0, R)$ Для справки уравнение касательной плоскости к поверхности, заданной уравнением $F(x,y,z)=0$ в точке $(x_0, y_0, z_0)$: $F'_x(x_0)\cdot(x-x_0) + F'_y(y_0)\cdot(y-y_0) + F'_z(z_0)\cdot(z-z_0) = 0$ ```python _t = tangent _u = _t[0]*(-_t[0]) + _t[1]*(-_t[1]) + _t[2]*(R-_t[2]) sp.trigsimp(_u) ``` $\displaystyle 0$ Радиус вектор точек , в которых конус касается эллипсоида, в системе координат $A$ ```python tangent_A = B2A*C2B*tangent ``` ```python tangent_A ``` $\displaystyle \left[\begin{matrix}- a \sqrt{1 - \frac{1}{R^{2}}} \sin{\left(\phi \right)} \sin{\left(t \right)} + a \sqrt{1 - \frac{1}{R^{2}}} \cos{\left(\phi \right)} \cos{\left(t \right)} \cos{\left(\theta \right)} + \frac{a \sin{\left(\theta \right)} \cos{\left(\phi \right)}}{R}\\a \sqrt{1 - \frac{1}{R^{2}}} \sin{\left(\phi \right)} \cos{\left(t \right)} \cos{\left(\theta \right)} + a \sqrt{1 - \frac{1}{R^{2}}} \sin{\left(t \right)} \cos{\left(\phi \right)} + \frac{a \sin{\left(\phi \right)} 
\sin{\left(\theta \right)}}{R}\\- b \sqrt{1 - \frac{1}{R^{2}}} \sin{\left(\theta \right)} \cos{\left(t \right)} + \frac{b \cos{\left(\theta \right)}}{R}\end{matrix}\right]$ Проверим, что найденные точки лежат на эллипсоиде ```python _t = tangent_A _u = _t[0]**2/a**2 + _t[1]**2/a**2 + _t[2]**2/b**2 sp.trigsimp(_u) ``` $\displaystyle 1$ Проверка 2: касательная плоскость к эллипсоиду в точке $(x_0, y_0, z_0)$ задаётся уравнением $2x_0/a^2(x-x_0) + 2y_0/a^2(y-y_0) + 2z_0/b^2(z-z_0) = 0$ ```python _t = tangent_A _p = P_A _u = _t[0]/a**2*(_p[0]-_t[0]) + _t[1]/a**2*(_p[1]-_t[1]) + _t[2]/b**2*(_p[2]-_t[2]) sp.trigsimp(_u) ``` $\displaystyle 0$ Подставим формулы для $R$ и углов $\theta$ и $\phi$ в выражение для точек касания конуса к эллипсу ```python x,y,z = sp.symbols("x y z") ``` ```python t_a = tangent_A.subs({ sp.cos(phi) : x/a/sp.sin(theta)/R, sp.sin(phi) : y/a/sp.sin(theta)/R }) \ .subs({sp.sin(theta) : sp.sqrt(1-sp.cos(theta)**2)}) \ .subs({sp.cos(theta) : z/b/R}) \ .subs({R:sp.sqrt((x/a)**2 + (y/a)**2 + (z/b)**2)}) t_a ``` $\displaystyle \left[\begin{matrix}\frac{x}{\frac{z^{2}}{b^{2}} + \frac{x^{2}}{a^{2}} + \frac{y^{2}}{a^{2}}} - \frac{y \sqrt{1 - \frac{1}{\frac{z^{2}}{b^{2}} + \frac{x^{2}}{a^{2}} + \frac{y^{2}}{a^{2}}}} \sin{\left(t \right)}}{\sqrt{1 - \frac{z^{2}}{b^{2} \left(\frac{z^{2}}{b^{2}} + \frac{x^{2}}{a^{2}} + \frac{y^{2}}{a^{2}}\right)}} \sqrt{\frac{z^{2}}{b^{2}} + \frac{x^{2}}{a^{2}} + \frac{y^{2}}{a^{2}}}} + \frac{x z \sqrt{1 - \frac{1}{\frac{z^{2}}{b^{2}} + \frac{x^{2}}{a^{2}} + \frac{y^{2}}{a^{2}}}} \cos{\left(t \right)}}{b \sqrt{1 - \frac{z^{2}}{b^{2} \left(\frac{z^{2}}{b^{2}} + \frac{x^{2}}{a^{2}} + \frac{y^{2}}{a^{2}}\right)}} \left(\frac{z^{2}}{b^{2}} + \frac{x^{2}}{a^{2}} + \frac{y^{2}}{a^{2}}\right)}\\\frac{x \sqrt{1 - \frac{1}{\frac{z^{2}}{b^{2}} + \frac{x^{2}}{a^{2}} + \frac{y^{2}}{a^{2}}}} \sin{\left(t \right)}}{\sqrt{1 - \frac{z^{2}}{b^{2} \left(\frac{z^{2}}{b^{2}} + \frac{x^{2}}{a^{2}} + \frac{y^{2}}{a^{2}}\right)}} \sqrt{\frac{z^{2}}{b^{2}} + \frac{x^{2}}{a^{2}} + \frac{y^{2}}{a^{2}}}} + \frac{y}{\frac{z^{2}}{b^{2}} + \frac{x^{2}}{a^{2}} + \frac{y^{2}}{a^{2}}} + \frac{y z \sqrt{1 - \frac{1}{\frac{z^{2}}{b^{2}} + \frac{x^{2}}{a^{2}} + \frac{y^{2}}{a^{2}}}} \cos{\left(t \right)}}{b \sqrt{1 - \frac{z^{2}}{b^{2} \left(\frac{z^{2}}{b^{2}} + \frac{x^{2}}{a^{2}} + \frac{y^{2}}{a^{2}}\right)}} \left(\frac{z^{2}}{b^{2}} + \frac{x^{2}}{a^{2}} + \frac{y^{2}}{a^{2}}\right)}\\- b \sqrt{1 - \frac{z^{2}}{b^{2} \left(\frac{z^{2}}{b^{2}} + \frac{x^{2}}{a^{2}} + \frac{y^{2}}{a^{2}}\right)}} \sqrt{1 - \frac{1}{\frac{z^{2}}{b^{2}} + \frac{x^{2}}{a^{2}} + \frac{y^{2}}{a^{2}}}} \cos{\left(t \right)} + \frac{z}{\frac{z^{2}}{b^{2}} + \frac{x^{2}}{a^{2}} + \frac{y^{2}}{a^{2}}}\end{matrix}\right]$ ```python center = B2A*C2B*base v1 = B2A*C2B*e1 v2 = B2A*C2B*e2 ``` ```python center ``` $\displaystyle \left[\begin{matrix}\frac{a \sin{\left(\theta \right)} \cos{\left(\phi \right)}}{R}\\\frac{a \sin{\left(\phi \right)} \sin{\left(\theta \right)}}{R}\\\frac{b \cos{\left(\theta \right)}}{R}\end{matrix}\right]$ ```python v1 ``` $\displaystyle \left[\begin{matrix}a \sqrt{1 - \frac{1}{R^{2}}} \cos{\left(\phi \right)} \cos{\left(\theta \right)}\\a \sqrt{1 - \frac{1}{R^{2}}} \sin{\left(\phi \right)} \cos{\left(\theta \right)}\\- b \sqrt{1 - \frac{1}{R^{2}}} \sin{\left(\theta \right)}\end{matrix}\right]$ ```python v2 ``` $\displaystyle \left[\begin{matrix}- a \sqrt{1 - \frac{1}{R^{2}}} \sin{\left(\phi \right)}\\a \sqrt{1 - \frac{1}{R^{2}}} \cos{\left(\phi \right)}\\0\end{matrix}\right]$ ```python 
import numpy as np ``` ```python def tangent_cone(a,b, P): """ Возвращает три вектора `c,v1,v2`, определяющие эллипс, по точкам которого эллипсоид касается конус с вершиной `P`. Точки r_tan эллипса вычисляются так: ``` c, v1, v2 = tangent_cone(a,b, P) t = np.linspace(0, 2*np.pi, 100) r_tan = c.reshape(3,1) + v1.reshape(3,1)*np.cos(t) + v2.reshape(3,1)*np.sin(t) ``` В этом случае r_tan - матрица из 3 строк и 100 столбцов. Первая строка - координата `x` точек, вторая строка - координата `y`, третья строка - координата `z`. """ assert P.ndim == 1 assert len(P) == 3 x,y,z = P R = np.sqrt(x**2/a**2 + y**2/a**2 + z**2/b**2) cos_theta = (z/b)/R # theta меняется от 0 до pi, поэтому sin(theta) >= 0 sin_theta = np.sqrt(1 - cos_theta**2) sin_phi = (y/a)/(sin_theta*R) cos_phi = (x/a)/(sin_theta*R) center = np.array([a*cos_phi*sin_theta/R, a*sin_phi*sin_theta/R, b*cos_theta/R]) r = np.sqrt(1 - 1/R**2) v1 = np.array([a*r*cos_phi*cos_theta, a*r*sin_phi*cos_theta, -b*r*sin_theta]) v2 = np.array([-a*r*sin_phi, a*r*cos_phi, 0]) return center, v1, v2 ``` ```python # Эллипсоид вытянут вдоль Z a = 1 b = 4 ``` ```python # Вершина конуса на бисектрисе первого квадранта P = np.array([5,5,5]) ``` ```python c, v1, v2 = tangent_cone(a,b, P) ``` ```python t = np.linspace(0, 2*np.pi, 100) r_tan = c.reshape(3,1)+v1.reshape(3,1)*np.cos(t) + v2.reshape(3,1)*np.sin(t) ``` ```python el_X, el_Y = np.meshgrid(np.linspace(-a, a, 200), np.linspace(-a, a, 200)) ``` ```python el_Z = b*np.sqrt(1 - el_X**2/a**2 - el_Y**2/a**2, ) ``` <ipython-input-104-80c6e10389cd>:1: RuntimeWarning: invalid value encountered in sqrt el_Z = b*np.sqrt(1 - el_X**2/a**2 - el_Y**2/a**2, ) ```python import matplotlib.pyplot as plt ``` ```python fig = plt.figure(figsize=(10, 10)) ax = fig.add_subplot(projection='3d') ax.set_xlabel('x') ax.set_ylabel('y') ax.view_init(30, 120) ax.plot_wireframe(el_X, el_Y, el_Z, cstride=10, rstride = 10, alpha=0.3) ax.plot_wireframe(el_X, el_Y, -el_Z, cstride=10, rstride = 10, alpha=0.3) ax.plot(r_tan[0], r_tan[1], r_tan[2], color="black") ax.scatter(*P) ``` ```python ```
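As a quick numerical cross-check of the construction above, one can verify that every point returned by `tangent_cone` lies on the ellipsoid and that the direction from each tangency point to the apex `P` is orthogonal to the ellipsoid normal there (the same two conditions checked symbolically with sympy). This is a minimal sketch reusing `a`, `b`, `P`, and `r_tan` from the cells above; the tolerances are arbitrary choices.

```python
# Numeric sanity check (assumes a, b, P and r_tan from the cells above)
# 1) every tangency point satisfies x^2/a^2 + y^2/a^2 + z^2/b^2 = 1
on_surface = r_tan[0]**2/a**2 + r_tan[1]**2/a**2 + r_tan[2]**2/b**2
assert np.allclose(on_surface, 1.0, atol=1e-9)

# 2) the vector from each tangency point to the apex lies in the tangent plane,
#    i.e. it is orthogonal to the gradient of F = x^2/a^2 + y^2/a^2 + z^2/b^2
normals = np.vstack([2*r_tan[0]/a**2, 2*r_tan[1]/a**2, 2*r_tan[2]/b**2])
to_apex = P.reshape(3, 1) - r_tan
assert np.allclose((normals * to_apex).sum(axis=0), 0.0, atol=1e-9)
print("tangency points verified")
```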
7090b77d2e4afa14d40bf7334e6fc39839f772c2
132,795
ipynb
Jupyter Notebook
math/1293488/.ipynb_checkpoints/Untitled7-checkpoint.ipynb
pakuula/StackOverflow
4f3f37f98963980f592e2a1f5ee9165477166821
[ "MIT" ]
null
null
null
math/1293488/.ipynb_checkpoints/Untitled7-checkpoint.ipynb
pakuula/StackOverflow
4f3f37f98963980f592e2a1f5ee9165477166821
[ "MIT" ]
null
null
null
math/1293488/.ipynb_checkpoints/Untitled7-checkpoint.ipynb
pakuula/StackOverflow
4f3f37f98963980f592e2a1f5ee9165477166821
[ "MIT" ]
null
null
null
135.643514
104,316
0.861998
true
5,514
Qwen/Qwen-72B
1. YES 2. YES
0.92523
0.795658
0.736167
__label__rus_Cyrl
0.154973
0.548694
<a href="https://colab.research.google.com/github/Codingtheedge/testtest/blob/main/Numerical_Optimization_Assignment1.ipynb" target="_parent"></a> ```python import numpy as np from numpy import linalg as la ``` **1. Function Plot** with the functions *meshgrid*, *contour* and *contourf* $ f(x) = 10 (x_2 - x_1^2)^2 + (1-x_1)^2 $ ```python import matplotlib import matplotlib.pyplot as plt #f(x) = 10 (x2 - x1**2)**2 + (1-x1)**2 def rosen(x): f = 10 * (x[1] - x[0]**2)**2 + (1-x[0])**2 return f nx, ny = (240, 200) xv = np.linspace(-1.2, 1.2, nx) xh = np.linspace(-0.5,1.5, ny) x0, x1 = np.meshgrid(xv, xh, sparse= True) F = np.zeros((x1.shape[0],x0.shape[1])) # shape (200,240) for i in range(F.shape[0]): for j in range(F.shape[1]): x = [x0[0,j], x1[i,0]] F[i, j] = rosen(x) plt.figure('Contours') plt.contour(x0[0,:], x1[:,0], F, 50) plt.axis('scaled') plt.colorbar() plt.show() plt.figure('Contours') plt.contourf(x0[0,:], x1[:,0], F, 50) plt.axis('scaled') plt.colorbar() plt.show() ``` **2. Gradient Computation** with the functions *symbols* and *diff* $ ∇f=\begin{bmatrix} -40x_1(x_2-x_1^2)+2x_1-2 \\ 20(x_2-x_1^2) \end{bmatrix} $ ```python from sympy import symbols, diff x_1, x_2 = symbols('x_1 x_2', real= True) g0 = diff((10 * (x_2 - x_1**2)**2 + (1-x_1)**2),x_1) g1 = diff((10 * (x_2 - x_1**2)**2 + (1-x_1)**2),x_2) def rosen_grad(x): g = np.zeros(2) g[0] = g0.subs({x_1:x[0], x_2:x[1]}) g[1] = g1.subs({x_1:x[0], x_2:x[1]}) return g ``` **3. Backtracking Line Search** ```python def backtrack_linesearch(f, gk, pk, xk, alpha = 0.1, beta = 0.8): # Algorithm parameters alpha and beta t = 1 while(f(xk + t*pk) > f(xk) + alpha * t * gk @ pk): t *= beta # reduce t incrementally return t def steepest_descent_bt(f, grad, x0): tol = 1e-5 # converge to a gradient norm of 1e-5 x = x0 history = np.array( [x0] ) while ( la.norm(grad(x)) > tol ): p = -grad(x) t = backtrack_linesearch(f, grad(x), p, x) x += t * p history = np.vstack( (history, x) )# The returned array formed by stacking the given arrays, will be at least 2-D. return x, history # plot convergence behaviour x_startpoint = np.array([-1.2, 1.0]) # start point xstar, hist = steepest_descent_bt(rosen, rosen_grad, x_startpoint) nsteps = hist.shape[0] print('Optimal solution:',xstar) print('The minimum is ',rosen(xstar)) print('Iteration count:', nsteps) ``` Optimal solution: [1.00000578 1.00001197] The minimum is 3.514046547976252e-11 Iteration count: 1311 **4. Convergence Behavior** ```python fhist = np.zeros(nsteps) for i in range(nsteps): fhist[i] = rosen(hist[i,:]) plt.figure('Convergence behaviour') plt.semilogy(np.arange(0, nsteps), np.absolute(fhist)) plt.grid(True, which ="both") plt.title('Convergence of Steepest Descent') #plt.text(0,10e-10,'Y axis in Semilogy') #text annotation plt.xlabel('Iteration count') plt.ylabel(r'$|f^k - f^*|$') plt.show() plt.figure('Contours Behavior') plt.title('Contours Behavior') plt.contour(x0[0,:], x1[:,0], F, 150) plt.axis('scaled') plt.plot(hist[:,0],hist[:,1],'r-') plt.colorbar() plt.show() ``` On a semilog plot, $|f^k-f^*|$ vs *k* looks like a straight piecewise line segment. From the figure *Contours Behavior*, the rate of convergence is fast, but the search path differs from the examples given in the slides.
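Since the gradient above was derived symbolically, a quick way to gain confidence in `rosen_grad` is to compare it against a central finite-difference approximation at an arbitrary test point. This is a small, illustrative check added here; the step size `1e-6` and the test point are arbitrary choices.

```python
def finite_diff_grad(f, x, eps=1e-6):
    # central differences, one coordinate at a time
    g = np.zeros(len(x))
    for i in range(len(x)):
        e = np.zeros(len(x))
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

x_test = np.array([-1.2, 1.0])
print(finite_diff_grad(rosen, x_test))  # numerical approximation
print(rosen_grad(x_test))               # symbolic gradient from above
```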
52bb1f10c8e629b0e50cea6253263a9f9ee861f9
394,680
ipynb
Jupyter Notebook
Numerical_Optimization_Assignment1.ipynb
Codingtheedge/testtest
e735a29d1e29170dde475194cdc025edebda70f6
[ "Apache-2.0" ]
null
null
null
Numerical_Optimization_Assignment1.ipynb
Codingtheedge/testtest
e735a29d1e29170dde475194cdc025edebda70f6
[ "Apache-2.0" ]
null
null
null
Numerical_Optimization_Assignment1.ipynb
Codingtheedge/testtest
e735a29d1e29170dde475194cdc025edebda70f6
[ "Apache-2.0" ]
null
null
null
1,294.032787
198,094
0.955308
true
1,225
Qwen/Qwen-72B
1. YES 2. YES
0.964321
0.951863
0.917902
__label__eng_Latn
0.507403
0.970928
```python %matplotlib notebook ``` ```python import numpy as np import matplotlib.pyplot as plt from matplotlib.gridspec import GridSpec from matplotlib.ticker import ScalarFormatter import math ``` This notebook assumes you have completed the notebook [Introduction of sine waves](TDS_Introduction-sine_waves.ipynb). This notebook follows the same pattern of time domain waveform generation: instantaneous frequency -> phase step -> total phase -> time domain waveform. Our goal is to track features of different acoustic impedance in material using a low power time domain waveform. Time delay spectrometry (TDS) is one implementation of this goal. To understand TDS we need to understand the waveform used by TDS, called a chirp. A chirp is a sinusoid that is constantly varying in frequency. The chirp is generated by integrating a varying phase step which is derived from an instantaneous frequency profile. We will generate a chirp in this notebook. The phase of the chirp can be found by integrating the instantaneous frequency: \begin{equation} f(t)=\frac{f_{end}-f_{start}}{T_c}t + f_{start} \end{equation} \begin{equation} \Delta\phi(t) = 2\pi f(t)\Delta t \end{equation} \begin{equation} \phi (t)=\int_{}^{} \Delta\phi(t) = \int_{}^{} 2\pi f(t) dt = \int_{}^{}\frac{f_{end}-f_{start}}{T_c}tdt + \int_{}^{}f_{start}dt \end{equation} \begin{equation} \phi (t)= \frac{f_{end}-f_{start}}{T_c}\int_{}^{}tdt + f_{start}\int_{}^{}dt \end{equation} \begin{equation} \phi (t)= \frac{f_{end}-f_{start}}{T_c}\frac{t^2}{2} + f_{start}t \end{equation} This gives the time series value of \begin{equation} x(t) = e^{j\phi (t)} = e^{j(\frac{f_{end}-f_{start}}{T_c}\frac{t^2}{2} + f_{start}t)} \end{equation} But the formula for phase requires squaring time which will cause numeric errors as the time increases. Another approach is to implement the formula for phase as a cumulative summation. \begin{equation} \phi_{sum} (N)=\sum_{k=1}^{N} \Delta\phi(k) = \sum_{k=1}^{N} 2\pi f(k) t_s = \sum_{k=1}^{N}(\frac{f_{end}-f_{start}}{T_c}k + f_{start})t_s \end{equation} This allows the phase to always stay between 0 and two pi, by subtracting two pi whenever the phase exceeds that value. We will work with the cumulative sum of phase, but then compare it to the integral to determine how accurate the cumulative sum is. ```python speed_of_ultrasound_m_per_sec={ 'tissue':1540, #In most tissue 'water':1481, 'air': 343 } ``` ```python #Try out different chirp parameters to determine which would produce the best separation of distances for the gel pads c_m_per_sec = speed_of_ultrasound_m_per_sec['water'] f_start_Hz=200_000 #talk about difference and similarity of sine wave example, answer why not 32 samples f_stop_Hz=1_800_000 for Tc_sec in [0.01, 0.1, 1.0, 10.0]: S_Hz_per_sec = (f_stop_Hz - f_start_Hz)/Tc_sec d_m = 0.015 #1.5 cm tau_sec = 2 *d_m/c_m_per_sec f_location_Hz = S_Hz_per_sec * tau_sec print(f"Tc = {Tc_sec}. For a target at {d_m*100} cm the expected freq is {f_location_Hz} Hz with a time delay of {tau_sec} sec") ``` Tc = 0.01. For a target at 1.5 cm the expected freq is 3241.053342336259 Hz with a time delay of 2.025658338960162e-05 sec Tc = 0.1. For a target at 1.5 cm the expected freq is 324.1053342336259 Hz with a time delay of 2.025658338960162e-05 sec Tc = 1.0. For a target at 1.5 cm the expected freq is 32.41053342336259 Hz with a time delay of 2.025658338960162e-05 sec Tc = 10.0.
For a target at 1.5 cm the expected freq is 3.241053342336259 Hz with a time delay of 2.025658338960162e-05 sec ```python #Tc is the max depth we are interested in Tc_sec=1.0 f_start_Hz=200_000 #talk about difference and similarity of sine wave example, answer why not 32 samples f_stop_Hz=1_800_000 #sample rate fs=2_000_000 ts=1/fs total_samples= math.ceil(fs*Tc_sec) n = np.arange(0,total_samples, step=1, dtype=np.float64) t_sec=n*ts t_usec = t_sec *1e6 #This is the frequency of the chirp over time. We assume linear change in frequency chirp_freq_slope_HzPerSec=(f_stop_Hz-f_start_Hz)/Tc_sec #Compute the instantaneous frequency which is a linear function chirp_instantaneous_freq_Hz=chirp_freq_slope_HzPerSec*t_sec+f_start_Hz chirp_instantaneous_angular_freq_radPerSec=2*np.pi*chirp_instantaneous_freq_Hz #Since frequency is a change in phase, we can plot it as a phase step chirp_phase_step_rad=chirp_instantaneous_angular_freq_radPerSec*ts #The phase step can be summed (or integrated) to produce the total phase which is the phase value #for each point in time for the chirp function chirp_phase_rad=np.cumsum(chirp_phase_step_rad) #The time domain chirp function chirp = np.exp(1j*chirp_phase_rad) ``` ```python #We can see, unlike the complex exponential, the chirp's instantaneous frequency is linearly increasing. #This corresponds with the linearly increasing phase step. fig, ax = plt.subplots(2, 1, sharex=True,figsize = [8, 8]) lns1=ax[0].plot(t_usec,chirp_instantaneous_freq_Hz,linewidth=4, label='instantaneous frequency'); ax[0].set_title('Comparing the instantaneous frequency and phase step') ax[0].set_ylabel('instantaneous frequency (Hz)') axt = ax[0].twinx() lns2=axt.plot(t_usec,chirp_phase_step_rad,linewidth=2,color='black', linestyle=':', label='phase step'); axt.set_ylabel('phase step (rad)') #ref: https://stackoverflow.com/questions/5484922/secondary-axis-with-twinx-how-to-add-to-legend lns = lns1+lns2 labs = [l.get_label() for l in lns] ax[0].legend(lns, labs, loc=0) #We see that summing or integrating the linearly increasing phase step gives a quadratic function of total phase. ax[1].plot(t_usec,chirp_phase_rad,linewidth=4,label='chirp'); ax[1].plot([t_usec[0], t_usec[-1]],[chirp_phase_rad[0], chirp_phase_rad[-1]],linewidth=1, linestyle=':',label='linear (x=y)'); ax[1].set_title('Cumulative quadratic phase function of chirp') ax[1].set_xlabel('time ($\mu$sec)') ax[1].set_ylabel('total phase (rad)') ax[1].legend(); ``` <IPython.core.display.Javascript object> ```python print(f'chirp_phase_step_rad[0] = {chirp_phase_step_rad[0]} rad and the end is chirp_phase_step_rad[-1] = {chirp_phase_step_rad[-1]}') print(f'amplitude = {chirp_phase_step_rad[-1] - chirp_phase_step_rad[0]} rad') print(f'offset = {(chirp_phase_step_rad[-1] - chirp_phase_step_rad[0])/2+chirp_phase_step_rad[0]} rad') ``` chirp_phase_step_rad[0] = 0.6283185307179585 rad and the end is chirp_phase_step_rad[-1] = 5.654864263187504 amplitude = 5.026545732469545 rad offset = 3.141591396952731 rad ```python #We now want to load this into GNURadio. So we will only write the raw file and then load that into GNURadio. This will cause the array size to be lost chirp.astype('complex64').tofile(f'../data/make_chirp_for_GNURadio_conf_fs_{int(fs)}_Tc_sec_{Tc_sec}_complex64.bin') print(f'The sample rate is {int(fs)}') ``` The sample rate is 2000000
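Because the chirp was written to disk as headerless raw `complex64` samples, it can be read straight back with `np.fromfile` for a quick round-trip check before loading it into GNURadio. This is a minimal sketch reusing the file name, `fs`, `Tc_sec`, `total_samples`, and `chirp` from the cells above; it is only a sanity check, not part of the original workflow.

```python
#Round-trip check: read the raw complex64 samples back and compare to the array in memory
chirp_from_disk = np.fromfile(
    f'../data/make_chirp_for_GNURadio_conf_fs_{int(fs)}_Tc_sec_{Tc_sec}_complex64.bin',
    dtype=np.complex64)
print(chirp_from_disk.shape[0] == total_samples)                 #expect True
print(np.allclose(chirp_from_disk, chirp.astype('complex64')))   #expect True
```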
be0bb4d58272679d813b037a8d309762b5f61f3d
311,400
ipynb
Jupyter Notebook
course/tds-200/week_01/notebooks/make_chirp_for_GNURadio_conf.ipynb
potto216/tds-tutorials
2acf2002ac5514dc60781c3e2e6797a4595104e6
[ "MIT" ]
6
2020-07-12T19:17:59.000Z
2020-09-24T22:19:02.000Z
course/tds-200/week_01/notebooks/make_chirp_for_GNURadio_conf.ipynb
potto216/tds-tutorials
2acf2002ac5514dc60781c3e2e6797a4595104e6
[ "MIT" ]
7
2020-09-16T12:18:01.000Z
2020-12-17T23:04:37.000Z
course/tds-200/week_01/notebooks/make_chirp_for_GNURadio_conf.ipynb
potto216/tds-tutorials
2acf2002ac5514dc60781c3e2e6797a4595104e6
[ "MIT" ]
null
null
null
298.561841
266,763
0.902521
true
2,187
Qwen/Qwen-72B
1. YES 2. YES
0.872347
0.803174
0.700647
__label__eng_Latn
0.950746
0.466168
```python #import packages import numpy as np from numpy import loadtxt import pylab as pl from IPython import display from RcTorch import * from matplotlib import pyplot as plt from scipy.integrate import odeint import time #this method will ensure that the notebook can use multiprocessing (train multiple #RC's in parallel) on jupyterhub or any other linux based system. try: mp.set_start_method("spawn") except: pass torch.set_default_tensor_type(torch.FloatTensor) %matplotlib inline start_time = time.time() ``` ```python lineW = 3 lineBoxW=2 font = {'family' : 'normal', 'weight' : 'normal',#'bold', 'size' : 24} plt.rc('font', **font) plt.rcParams['text.usetex'] = True ``` ## RC and systems of ODEs: an overview In this notebook we demonstrate that the RC can solve systems of differential equations. Any higher order ODE can be decomposed into a system of first order ODEs, hence solving systems of ODEs means that the RC can solve higher order ODEs. This is a standard procedure followed by standard integrators. To apply the RC to systems, the RC architecture needs to be modified to return multiple outputs $N_j$, where $j$ indicates a different output. Specifically, the number of the outputs needs to be the same as the number of the equations in the system. Each output has a different set of weights $W_{out}^{(j)}$ while all the $N_j$ share the same hidden states. We exploit the RC solver in solving the equations of motion for a nonlinear Hamiltonian system, the nonlinear oscillator. The energy is conserved in this system and thus, we adopt hamiltonian energy regularization that drastically accelerates the training and improves the fidelity of the predicted solutions. ### Hamiltonian systems Hamiltonian systems obey the energy conservation law. In other words, these systems are characterized by a hamiltonian function that represents the total energy of the system, which remains constant in time. The hamiltonian of a nonlinear oscillator with unity mass and frequency is given by: \begin{align} \label{eq:NL_ham} \mathcal{H} = \frac{p^2}{2} + \frac{x^2}{2} + \frac{x^4}{4}, \end{align} and the associated equations of motion read: \begin{align} \label{eq:NL_x} \dot x &= p \\ \label{eq:NL_p} \dot p &= -x - x^3 \end{align} where $p$ is the momentum and $x$ represents the position of the system. The loss function consists of three parts: $L_\text{ODE}$ for the ODEs of $x$ and $p$; a hamiltonian penalty $L_{\mathcal{H}}$ that penalizes violations of energy conservation; and a weight regularization term $L_\text{reg}$. Subsequently, the total $L$ reads: \begin{align} L &= L_\text{ODE}+ L_{\mathcal{H}} + L_\text{reg} \nonumber \\ \label{eq:NL_loss} &= \sum_{n=0}^{K} \Big[ \left(\dot x_n-p_n\right)^2 + \left(\dot p_n + x_n + x_n^3 \right)^2 +\left(E - \mathcal{H}(x_n, p_n)\right)^2 \Big] + \lambda \sum_{j=x,p} W_{out}^{(j)T} W_{out}^{(j)}. \end{align} ```python def plot_predictions(RC, results, integrator_model, ax = None): """plots a RC prediction and integrator model prediction for comparison Parameters ---------- RC: RcTorchPrivate.esn the RcTorch echostate network to evaluate. This model should already have been fit. results: dictionary the dictionary of results returned by the RC after fitting integrator model: function the model to be passed to odeint which is a gold standard integrator numerical method for solving ODE's written in Fortran.
You may find the documentation here: https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.odeint.html ax: matplotlib.axes._subplots.AxesSubplot If provided, the function will plot on this subplot axes """ X = RC.X.cpu() if not ax: fig, ax = plt.subplots(1,1, figsize = (6,6)) for i, y in enumerate(results["ys"]): y = y.cpu() if not i: labels = ["RC", "Integrator Solution"] else: labels = [None, None] ax.plot(X, y, color = "dodgerblue", label = labels[0], linewidth = lineW + 1, alpha = 0.9) #calculate the integrator prediction: int_sol = odeint(integrator_model, y0s[i], np.array(X.cpu().squeeze())) int_sol = torch.tensor(int_sol) #plot the integrator prediction ax.plot(X, int_sol, '--', color = "red", alpha = 0.9, label = labels[1], linewidth = lineW) plt.ylabel(r'$y(t)$'); ax.legend(); ax.tick_params(labelbottom=False) plt.tight_layout() def covert_ode_coefs(t, ode_coefs): """ converts coefficients from the string 't**n' or 't^n' where n is any float Parameters ---------- t: torch.tensor input time tensor ode_coefs: list list of associated floats. List items can either be (int/floats) or ('t**n'/'t^n') """ type_t = type(t) for i, coef in enumerate(ode_coefs): if type(coef) == str: if coef[0] == "t" and (coef[1] == "*" or (coef[1] == "*" and coef[2] == "*")): pow_ = float(re.sub("[^0-9.-]+", "", coef)) ode_coefs[i] = t ** pow_ print("alterning ode_coefs") elif type(coef) in [float, int, type_t]: pass else: assert False, "ode_coefs must be a list floats or strings of the form 't^pow', where pow is a real number." return ode_coefs def plot_rmsr(RC, results, force, ax = None): """plots the residuals of a RC prediction directly from the loss function Parameters ---------- RC: RcTorchPrivate.esn the RcTorch echostate network to evaluate. This model should already have been fit. 
results: dictionary the dictionary of results returned by the RC after fitting force: function the force function describing the force term in the population equation ax: matplotlib.axes._subplots.AxesSubplot If provided, the function will plot on this subplot axes """ if not ax: fig, ax = plt.subplots(1,1, figsize = (10, 4)) X = RC.X.cpu() ys, ydots = results["ys"], results["ydots"] residuals = [] force_t = force(X) for i, y in enumerate(ys): ydot = ydots[i] y = y.cpu() ydot = ydot.cpu() ode_coefs = covert_ode_coefs(t = X, ode_coefs = RC.ode_coefs) resids = custom_loss(X, y, ydot, None, force_t = force_t, ode_coefs = RC.ode_coefs, mean = False) if not i: resids_tensor = resids label = r'{Individual Trajectory RMSR}' else: resids_tensor = torch.cat((resids_tensor, resids), axis = 1) label = None resids_specific_rmsr = torch.sqrt(resids/1) ax.plot(X, resids_specific_rmsr, color = "orangered", alpha = 0.4, label = label, linewidth = lineW-1) residuals.append(resids) mean_resid = torch.mean(resids_tensor, axis =1) rmsr = torch.sqrt(mean_resid) ax.plot(X, rmsr, color = "blue", alpha = 0.9, label = r'{RMSR}', linewidth = lineW-0.5) ax.legend(prop={"size":16}); ax.set_xlabel(r'$t$') ax.set_yscale("log") ax.set_ylabel(r'{RMSR}') ``` ```python def f(u, t ,lam=0,A=0,W=1): x, px = u # unpack current values of u derivs = [px, -x - lam*x**3 +A*np.sin(W*t)] # you write the derivative here return derivs # Scipy Solver def NLosc_solution(t, x0, px0, lam=0, A=0,W=1): u0 = [x0, px0] # Call the ODE solver solPend = odeint(f, u0, t.cpu(), args=(lam,A,W,)) xP = solPend[:,0]; pxP = solPend[:,1]; return xP, pxP def plot_results(RC, results, integrator_model, y0s, ax = None): """plots a RC prediction and integrator model prediction for comparison Parameters ---------- RC: RcTorchPrivate.esn the RcTorch echostate network to evaluate. This model should already have been fit. results: dictionary the dictionary of results returned by the RC after fitting integrator model: function the model to be passed to odeint which is a gold standard integrator numerical method for solving ODE's written in Fortran. You may find the documentation here: https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.odeint.html ax: matplotlib.axes._subplots.AxesSubplot If provided, the function will plot on this subplot axes """ X = RC.X.cpu().detach() #int_sols = [] if not ax: fig, ax = plt.subplots(1,1, figsize = (6,6)) for i, y in enumerate(results["ys"]): y = y.cpu().detach() if not i: labels = ["RC","integrator"] else: labels = [None, None, None, None] try: labels except: pass ax.plot(y[:,0], y[:,1], label = labels[0], linewidth =7, alpha = 0.9, color = "dodgerblue") #calculate the integrator prediction: y_truth, p_truth = NLosc_solution(RC.X.squeeze().data,y0s[i],1,lam=1, A=0, W= 0) #p = y[:,1].cpu()# + v0 #yy = y[:,0].cpu()# + y0 #plot the integrator prediction ax.plot(y_truth, p_truth, color = "red", linewidth =3, alpha = 1.0, label = labels[1]) # ax.plot(X, p, color = "red", alpha = 1.0, linewidth =3, # label = labels[3]) ax.set_xlabel("x") ax.set_ylabel("p") ax.set_ylim(-3,3) ax.set_xlim(-3,3) ax.legend(); #return int_sols def force(X, A = 0): return torch.zeros_like(X) def plot_rmsr(RC, results, force, log = False, ax = None): """plots the residuals of a RC prediction directly from the loss function Parameters ---------- RC: RcTorchPrivate.esn the RcTorch echostate network to evaluate. This model should already have been fit. 
results: dictionary the dictionary of results returned by the RC after fitting force: function the force function describing the force term in the population equation ax: matplotlib.axes._subplots.AxesSubplot If provided, the function will plot on this subplot axes """ if not ax: fig, ax = plt.subplots(1,1, figsize = (10, 4)) X = RC.X.cpu().detach() ys, ydots = results["ys"], results["ydots"] residuals = [] for i, y in enumerate(ys): y = y.cpu().detach() ydot = ydots[i].cpu().detach() resids = custom_loss(X, y, ydot, None, force = force, ode_coefs = RC.ode_coefs, mean = False, init_conds = RC.init_conds, ham = False) if not i: resids_tensor = resids label = r'{Individual Trajectory RMSR}' else: resids_tensor = torch.cat((resids_tensor, resids), axis = 1) label = None resids_specific_rmsr = torch.sqrt(resids/1) ax.plot(X, resids_specific_rmsr, color = "orangered", alpha = 0.4, label = label, linewidth = lineW-1) residuals.append(resids) mean_resid = torch.mean(resids_tensor, axis =1) rmsr = torch.sqrt(mean_resid) ax.plot(X, rmsr, color = "blue", alpha = 0.9, label = r'{RMSR}', linewidth = lineW-0.5) ax.legend(prop={"size":16}); ax.set_xlabel(r'$t$') ax.set_yscale("log") ax.set_ylabel(r'{RMSR}') def driven_force(X, A = 1): return A * torch.sin(X) def no_force(X, A = 0): return A #define a reparameterization function, empirically we find that g= 1-e^(-t) works well) def reparam(t, order = 1): exp_t = torch.exp(-t) derivatives_of_g = [] g = 1 - exp_t g_dot = 1 - g return g, g_dot #first derivative #example code for higher derivatives: ##################################### #derivatives_of_g.append(g_dot) #derivatives_of_g.append(g) # for i in range(order): # if i %2 == 0: # #print("even") # derivatives_of_g.append(g_dot) # else: # #print("odd") # derivatives_of_g.append(-g_dot) # return derivatives_of_g def custom_loss(X, y, ydot, out_weights, force = force, reg = False, ode_coefs = None, mean = True, enet_strength = None, enet_alpha = None, init_conds = None, lam = 1, ham = True): y, p = y[:,0].view(-1,1), y[:,1].view(-1,1) ydot, pdot = ydot[:,0].view(-1,1), ydot[:,1].view(-1,1) #with paramization L = (ydot - p)**2 + (pdot + y + lam * y**3 - force(X))**2 if mean: L = torch.mean(L) if reg: #assert False weight_size_sq = torch.mean(torch.square(out_weights)) weight_size_L1 = torch.mean(torch.abs(out_weights)) L_reg = enet_strength*(enet_alpha * weight_size_sq + (1- enet_alpha) * weight_size_L1) L = L + 0.1 * L_reg if ham: y0, p0 = init_conds ham = hamiltonian(y, p) ham0 = hamiltonian(y0, p0) L_H = (( ham - ham0).pow(2)).mean() assert L_H >0 L = L + 0.1 * L_H #print("L1", hi, "L_elastic", L_reg, "L_H", L_H) return L ``` ```python def optimize_last_layer(esn, SAVE_AFTER_EPOCHS = 1, epochs = 45000, custom_loss = custom_loss, loss_threshold = 10**-10,#10 ** -8, f = force, lr = 0.05, reg = None, plott = True, plot_every_n_epochs = 2000):#gamma 0.1, spikethreshold 0.07 works with torch.enable_grad(): #define new_x new_X = esn.extended_states.detach() spikethreshold = esn.spikethreshold #force detach states_dot esn.states_dot = esn.states_dot.detach().requires_grad_(False) #define criterion criterion = torch.nn.MSELoss() #assert esn.LinOut.weight.requires_grad and esn.LinOut.bias.requires_grad #assert not new_X.requires_grad #define previous_loss (could be used to do a convergence stop) previous_loss = 0 #define best score so that we can save the best weights best_score = 0 #define the optimizer optimizer = optim.Adam(esn.parameters(), lr = lr) #optimizer = torch.optim.SGD(model.parameters(), lr=100) 
if esn.gamma_cyclic: cyclic_scheduler = torch.optim.lr_scheduler.CyclicLR(optimizer, 10**-6, 0.01, gamma = esn.gamma_cyclic,#0.9999, mode = "exp_range", cycle_momentum = False) scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=esn.gamma) lrs = [] #define the loss history loss_history = [] if plott: #use pl for live plotting fig, ax = pl.subplots(1,3, figsize = (16,4)) t = esn.X#.view(*N.shape).detach() g, g_dot = esn.G y0 = esn.init_conds[0] floss_last = 0 try: assert esn.LinOut.weight.requires_grad and esn.LinOut.bias.requires_grad except: esn.LinOut.weight.requires_grad_(True) esn.LinOut.bias.requires_grad_(True) #begin optimization loop for e in range(epochs): optimizer.zero_grad() N = esn.forward( esn.extended_states ) N_dot = esn.calc_Ndot(esn.states_dot) y = g *N ydot = g_dot * N + g * N_dot y[:,0] = y[:,0] + esn.init_conds[0] y[:,1] = y[:,1] + esn.init_conds[1] assert N.shape == N_dot.shape, f'{N.shape} != {N_dot.shape}' loss = custom_loss(esn.X, y, ydot, esn.LinOut.weight, reg = reg, ode_coefs = esn.ode_coefs, init_conds = esn.init_conds, enet_alpha= esn.enet_alpha, enet_strength = esn.enet_strength) loss.backward() optimizer.step() if esn.gamma_cyclic and e > 0 and e <5000: cyclic_scheduler.step() lrs.append(optimizer.param_groups[0]["lr"]) floss = float(loss) if e > 0: loss_delta = float(np.log(floss_last) - np.log(floss)) if loss_delta > esn.spikethreshold:# or loss_delta < -3: lrs.append(optimizer.param_groups[0]["lr"]) scheduler.step() if not e and not best_score: best_bias, best_weight, best_fit = esn.LinOut.bias.detach(), esn.LinOut.weight.detach(), y.clone() if e > SAVE_AFTER_EPOCHS: if not best_score: best_score = min(loss_history) best_bias, best_weight = esn.LinOut.bias.detach(), esn.LinOut.weight.detach() best_score = float(loss) best_fit = y.clone() best_pred = y.clone() best_ydot = ydot.clone() loss_history.append(floss) floss_last = floss if plott and e: if e % plot_every_n_epochs == 0: for param_group in optimizer.param_groups: print('lr', param_group['lr']) ax[0].clear() logloss_str = 'Log(L) ' + '%.2E' % Decimal((loss).item()) delta_loss = ' delta Log(L) ' + '%.2E' % Decimal((loss-previous_loss).item()) print(logloss_str + ", " + delta_loss) ax[0].plot(y.detach().cpu(), label = "exact") ax[0].set_title(f"Epoch {e}" + ", " + logloss_str) ax[0].set_xlabel("t") ax[1].set_title(delta_loss) ax[1].plot(N_dot.detach().cpu()) #ax[0].plot(y_dot.detach(), label = "dy_dx") ax[2].clear() #weight_size = str(weight_size_sq.detach().item()) #ax[2].set_title("loss history \n and "+ weight_size) ax[2].loglog(loss_history) ax[2].set_xlabel("t") [ax[i].legend() for i in range(3)] previous_loss = loss.item() #clear the plot outputt and then re-plot display.clear_output(wait=True) display.display(pl.gcf()) return {"weights": best_weight, "bias" : best_bias, "loss" : {"loss_history" : loss_history}, "ydot" : best_ydot, "y" : best_pred, "best_score" : best_score} ``` ```python def force(X, A = 0): return torch.zeros_like(X) lam =1 def hamiltonian(x, p, lam = lam): return (1/2)*(x**2 + p**2) + lam*x**4/4 ``` ```python BURN_IN = 1000 x0,xf, nsteps = 0, 5, 1000 xtrain = torch.linspace(x0, xf, steps = nsteps, requires_grad=False) #the length of xtrain won't matter above. Only dt , x0, and xf matter for ODEs. #the reason for this is that the input time vector is reconstructed internally in rctorch #in order to satisfy the specified dt. 
xtrain = torch.linspace(x0, xf, steps = nsteps, requires_grad=False).view(-1,1) ``` ```python nl_oscillator_hp_set = {'dt': 0.001, 'regularization': 48.97788193684461, 'n_nodes': 500, 'connectivity': 0.017714821964432213, 'spectral_radius': 2.3660330772399902, 'leaking_rate': 0.0024312976747751236, 'bias': 0.37677669525146484, 'enet_alpha': 0.2082211971282959, 'enet_strength': 0.118459548397668, 'spikethreshold': 0.43705281615257263, 'gamma': 0.09469877928495407, 'gamma_cyclic': 0.999860422666841} ``` ```python base = 10*np.pi#10*np.pi x0, xf= 0, base nsteps = int(abs(xf - x0)/(nl_oscillator_hp_set["dt"])) xtrain = torch.linspace(x0, xf, nsteps, requires_grad=False).view(-1,1) float(xtrain[1]- xtrain[0]) ``` ```python %%time y0s = np.arange(0.7, 1.8, 0.2) v0 = 1 RC = EchoStateNetwork(**nl_oscillator_hp_set, random_state = 209, id_ = 10, activation_f = torch.sin, act_f_prime = torch.cos, dtype = torch.float32, n_outputs = 2) train_args = {"burn_in" : int(BURN_IN), "ODE_order" : 1, "force" : force, "reparam_f" : reparam, "init_conditions" : [y0s, float(v0)], "ode_coefs" : [1, 1], "X" : xtrain.view(-1, 1), "eq_system" : True, #"out_weights" : out_weights } #fit results = RC.fit(**train_args, SOLVE = True, train_score = True, backprop_f = optimize_last_layer, epochs = 10000, ODE_criterion = custom_loss) ``` ```python fig, ax = plt.subplots(1,2,figsize = (14,4)) plot_results(RC, results, NLosc_solution, y0s, ax = ax[0]) plot_data = plot_rmsr(RC, results, force = no_force, log = True, ax = ax[1]) ``` ```python end_time = time.time() print(f'Total notebook runtime: {end_time - start_time:.2f} seconds') ``` ```python ```
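Before fitting the RC it can be useful to confirm that the reference trajectory produced by `NLosc_solution` actually conserves the hamiltonian defined above, since that conserved value is what the energy penalty in the loss is anchored to. This is a minimal sketch reusing `NLosc_solution` and `hamiltonian` from the cells above; the time grid, initial condition, and the expectation of a "small" spread are arbitrary choices.

```python
# Energy-conservation check for the reference integrator (assumes the definitions above)
t_check = torch.linspace(0, 10, 500)
x_ref, p_ref = NLosc_solution(t_check, 1.0, 1.0, lam=1)
H_ref = hamiltonian(torch.tensor(x_ref), torch.tensor(p_ref))
print(float(H_ref.max() - H_ref.min()))  # should be close to zero if energy is conserved
```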
8b3e9bc0a3c2da535419e67754e9765c6417fb84
42,322
ipynb
Jupyter Notebook
final_notebooks/.ipynb_checkpoints/final_systems-checkpoint.ipynb
blindedjoy/RcTorch
5582cf829d2ee2b10dba80b125e44d47aee27f82
[ "MIT" ]
3
2021-05-28T14:55:18.000Z
2022-01-18T08:38:11.000Z
final_notebooks/.ipynb_checkpoints/final_systems-checkpoint.ipynb
blindedjoy/RcTorch
5582cf829d2ee2b10dba80b125e44d47aee27f82
[ "MIT" ]
1
2021-05-07T13:47:59.000Z
2021-05-07T13:47:59.000Z
final_notebooks/.ipynb_checkpoints/final_systems-checkpoint.ipynb
blindedjoy/RcTorch
5582cf829d2ee2b10dba80b125e44d47aee27f82
[ "MIT" ]
1
2021-10-21T08:13:26.000Z
2021-10-21T08:13:26.000Z
55.981481
1,163
0.553164
true
5,799
Qwen/Qwen-72B
1. YES 2. YES
0.689306
0.824462
0.568306
__label__eng_Latn
0.803286
0.158696
```python def f_demo5(x): return x[0]**2+x[1]**2+x[2]**3+(1-x[3])**2,[],[x[0]**2+x[1]**2-1,x[0]**2+x[2]**2-1] ``` # Exercise 5, answers ## Problem 1 ```python import numpy as np import ad #if k=0, returns the gradient of lagrangian, if k=1, returns the hessian def diff_L(f,x,m,k): #Define the lagrangian for given m and f L = lambda x_: f(x_)[0] + (np.matrix(f(x_)[2])*np.matrix(m).transpose())[0,0] return ad.gh(L)[k](x) #Returns the gradients of the equality constraints def grad_h(f,x): return [ad.gh(lambda y: f(y)[2][i])[0](x) for i in range(len(f(x)[2]))] #Solves the quadratic problem inside the SQP method def solve_QP(f,x,m): left_side_first_row = np.concatenate(( np.matrix(diff_L(f,x,m,1)), np.matrix(grad_h(f,x)).transpose()),axis=1) left_side_second_row = np.concatenate(( np.matrix(grad_h(f,x)), np.matrix(np.zeros((len(f(x)[2]),len(f(x)[2]))))),axis=1) right_hand_side = np.concatenate(( -1*np.matrix(diff_L(f,x,m,0)).transpose(), -np.matrix(f(x)[2]).transpose()),axis = 0) left_hand_side = np.concatenate(( left_side_first_row, left_side_second_row),axis = 0) temp = np.linalg.solve(left_hand_side,right_hand_side) return temp[:len(x)],temp[len(x):] def SQP(f,start,precision): x = start m = np.ones(len(f(x)[2])) f_old = float('inf') f_new = f(x)[0] while abs(f_old-f_new)>precision: print x f_old = f_new (p,v) = solve_QP(f,x,m) x = x+np.array(p.transpose())[0] m = m+v f_new = f(x)[0] return x ``` ```python SQP(f_demo5,[0.1,0.1,0.1,1.],0.00001) ``` [0.1, 0.1, 0.1, 1.0] [ 2.66904762 2.43095238 2.43095238 1. ] [ 1.43316532 1.31285412 1.31285412 1. ] [ 0.89811129 0.8391126 0.8391126 1. ] [ 0.7232535 0.72194696 0.72194696 1. ] [ 0.5275908 0.88728073 0.88728073 1. ] [ 3.07171309 -0.66247127 -0.66247127 1. ] [ 1.69814965 -0.33347293 -0.33347293 1. ] [ 1.13944518 -0.18745083 -0.18745083 1. ] [ 1.00162637 -0.13570611 -0.13570611 1. ] [ 0.993672 -0.11456882 -0.11456882 1. ] [ 0.9951009 -0.09994974 -0.09994974 1. ] [ 0.99614324 -0.08849291 -0.08849291 1. ] [ 0.99689678 -0.07926269 -0.07926269 1. ] [ 0.99745626 -0.07168508 -0.07168508 1. ] [ 0.99788143 -0.06536638 -0.06536638 1. ] [ 0.99821115 -0.06002606 -0.06002606 1. ] [ 0.99847142 -0.05545938 -0.05545938 1. ] [ 0.9986801 -0.05151381 -0.05151381 1. ] [ 0.99884973 -0.04807369 -0.04807369 1. ] [ 0.99898932 -0.04504989 -0.04504989 1. ] [ 0.99910547 -0.04237271 -0.04237271 1. ] [ 0.99920306 -0.03998695 -0.03998695 1. ] array([ 0.99928579, -0.03784837, -0.03784837, 1. ]) ### What is going on in here? * Many feasible solutions actually satisfy KKT conditions, but are not global optima * In addition, the problem has multiple local but not global optima ## Problem 2 ```python def augmented_langrangian(f,x,mu,c): second_term = float(numpy.matrix(mu)*numpy.matrix(f(x)[2]). 
transpose()) third_term = 0.5*c*numpy.linalg.norm(f(x)[2])**2 return f(x)[0]-second_term+third_term ``` ```python from scipy.optimize import minimize import numpy def augmented_langrangian_method(f,start,mu0,c0): x_old = [float('inf')]*4 x_new = start mu = mu0 c = c0 while numpy.linalg.norm(f(x_new)[2])>0.000001: res = minimize(lambda x:augmented_langrangian(f,x,mu,c),x_new) x_old = x_new mu = mu-c*numpy.matrix(f(res.x)[2]) x_new = res.x c=2*c return x_new,c ``` ```python %pdb ``` Automatic pdb calling has been turned ON ```python augmented_langrangian_method(f_demo5,[1.,1.,1,1.],[1,1],1) ``` (array([ 1.93341446e-07, 1.00000001e+00, -1.00000002e+00, 1.00000002e+00]), 256) ## Problem 3 Now, the stability rule becomes $$ (2x_1,2x_2)-\mu(1,1) = (0,0), $$ and the complementarity rule becomes $$ \mu(x_1+x_2-1) = 0. $$ Thus, we need to have $$ \left\{ \begin{align} 2x_1-\mu=0\\ 2x_2-\mu=0\\ \mu(x_1+x_2-1) = 0. \end{align} \right. $$ Now subtracting equation (2) from equation (1) gives $2x_1-2x_2=0$, thus $x_1=x_2$. Now if $\mu= 0$, then $x_1=x_2=0$. However, this solution is not feasible. Thus, $\mu\neq0$, which implies $x_1+x_2-1=0$, which gives $x_1=x_2=\frac12$ and, thus, $\mu=1$. These values satisfy the KKT conditions. Because the problem is quadratic, it has an optimal solution. Since only one solution satisfies the KKT conditions, this solution is optimal. ## Problem 4 Now, $$ \begin{align} \nabla_x L_c(x^*,\mu^*)& = \nabla f(x^*)+\sum_{k=1}^K \mu^*_k\nabla h_k(x^*)+c\nabla(\sum_{k=1}^Kh_k(x^*)^2)\\ &=\nabla f(x^*)+\sum_{k=1}^K \mu^*_k\nabla h_k(x^*)+2c\sum_{k=1}^Kh_k(x^*)\nabla h_k(x^*)\\ &=0+2c\sum_{k=1}^K0\nabla h_k(x^*)=0. \end{align} $$ The first zero is given by the KKT conditions and the second zero is due to the solution being feasible.
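As a quick check of the Problem 3 derivation, one can let sympy solve the stationarity and complementarity system directly and then keep only the feasible root. This is a small illustrative sketch, assuming (as the infeasibility remark above implies) that the constraint is $x_1+x_2\geq 1$; it is not part of the original exercise.

```python
import sympy as sp

x1, x2, mu = sp.symbols('x1 x2 mu', real=True)
# stationarity and complementarity conditions from Problem 3
kkt_system = [2*x1 - mu, 2*x2 - mu, mu*(x1 + x2 - 1)]
solutions = sp.solve(kkt_system, [x1, x2, mu], dict=True)
# keep only the solutions satisfying the constraint x1 + x2 >= 1
feasible = [s for s in solutions if s[x1] + s[x2] - 1 >= 0]
print(feasible)  # expected: x1 = x2 = 1/2, mu = 1
```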
b5892fe093e96f8bd3f5e0c0b8061951405e5151
9,243
ipynb
Jupyter Notebook
Exercise 5, answers.ipynb
maeehart/TIES483
cce5c779aeb0ade5f959a2ed5cca982be5cf2316
[ "CC-BY-3.0" ]
4
2019-04-26T12:46:14.000Z
2021-11-23T03:38:59.000Z
Exercise 5, answers.ipynb
maeehart/TIES483
cce5c779aeb0ade5f959a2ed5cca982be5cf2316
[ "CC-BY-3.0" ]
null
null
null
Exercise 5, answers.ipynb
maeehart/TIES483
cce5c779aeb0ade5f959a2ed5cca982be5cf2316
[ "CC-BY-3.0" ]
6
2016-01-08T16:28:11.000Z
2021-04-10T05:18:10.000Z
26.713873
309
0.480039
true
1,993
Qwen/Qwen-72B
1. YES 2. YES
0.828939
0.705785
0.585053
__label__eng_Latn
0.502102
0.197603
## A Quick Tour of DifferentialEquations.jl DifferentialEquations.jl is a metapackage for solving differential equations in Julia. The basic workflow is: - Define a problem - Solve a problem - Plot the solution The API between different types of differential equations is unified through multiple dispatch. See the [DifferentialEquations.jl Documentation](http://docs.juliadiffeq.org/latest/index.html). ## Example: Lotka-Volterra ODE $$\begin{align} x' &= ax - bxy\\ y' &= -cy + dxy \end{align}$$ ```julia using DifferentialEquations # Define a problem p = (1.0,2.0,1.5,1.25) # a,b,c,d f = function (du,u,p,t) # Define f as an in-place update into du a,b,c,d = p du[1] = a*u[1] - b*u[1]*u[2] du[2] = -c*u[2]+ d*u[1]*u[2] end u0 = [1.0;1.0]; tspan = (0.0,10.0) prob = ODEProblem(f,u0,tspan,p); ``` INFO: Recompiling stale cache file C:\Users\Chris\.julia\lib\v0.6\DifferentialEquations.ji for module DifferentialEquations. ```julia # Solve the problem sol = solve(prob); ``` ```julia # Plot the solution using the plot recipe using Plots; gr() # Using the GR backend plot(sol,title="All Plots.jl Attributes are Available") ``` The plot recipe [contains special fields](http://docs.juliadiffeq.org/latest/basics/plot.html) for plotting phase diagrams and other transformations: ```julia plot(sol,title="Phase Diagram",vars=(1,2)) ``` ## Extra Features The solution object acts both as an array and as an interpolation of the solution. ```julia @show sol.t[3] # Time at the 3rd timestep @show sol[3] # Value at the third timestep @show sol(5) # Value at t=5 using the interpolation ``` sol.t[3] = 0.2927716363825574 sol[3] = [0.768635, 0.887673] sol(5) = [1.45932, 0.99208] 2-element Array{Float64,1}: 1.45932 0.99208 ## Stochastic Differential Equations Also included are problems for stochastic differential equations: ```julia g = function (du,u,p,t) du[1] = .5*u[1] du[2] = .1*u[2] end prob = SDEProblem(f,g,u0,tspan,p) sol = solve(prob,dt=1/2^4) plot(sol) ``` ## Documentation and Extended Tutorials For more information, see the documentation: https://github.com/JuliaDiffEq/DifferentialEquations.jl. The repository [DiffEqTutorials.jl](https://github.com/JuliaDiffEq/DiffEqTutorials.jl) has a large array of tutorials for using the package in depth. ## Problems ### Problem 1 The DifferentialEquations.jl algorithms choose the number type of their calculation given their input. Use this fact to solve the [Lorenz equation](https://en.wikipedia.org/wiki/Lorenz_system) using BigFloats. You may want to [check out the example notebooks](https://github.com/JuliaDiffEq/DiffEqTutorials.jl). Make a 3D plot of the Lorenz attractor using the plot recipe. ### Problem 2 Use [event handling](http://docs.juliadiffeq.org/latest/features/callback_functions.html) to model a bouncing ball with friction, i.e. at every bounce the velocity flips but is decreased to 80%. Does the ball eventually stop bouncing?
58abb25a656fb762adc67b3371431a1ae68e0694
106,988
ipynb
Jupyter Notebook
Notebooks/.ipynb_checkpoints/DiffEq-checkpoint.ipynb
physicswangzhi/CPS
59d84b3d0afbdf5d42212d8d61822361401897bf
[ "MIT" ]
null
null
null
Notebooks/.ipynb_checkpoints/DiffEq-checkpoint.ipynb
physicswangzhi/CPS
59d84b3d0afbdf5d42212d8d61822361401897bf
[ "MIT" ]
null
null
null
Notebooks/.ipynb_checkpoints/DiffEq-checkpoint.ipynb
physicswangzhi/CPS
59d84b3d0afbdf5d42212d8d61822361401897bf
[ "MIT" ]
null
null
null
94.42895
382
0.64096
true
915
Qwen/Qwen-72B
1. YES 2. YES
0.867036
0.877477
0.760804
__label__eng_Latn
0.873847
0.605934
# "Wild Magic Surges" - categories: [dnd] - image: images/dnd/rolls.png ### A Comparison of Two Homebrew Methods In order to make Wild Magic Surges a more frequent occurrence, we can tweak the rules to trigger them. Two such tweaks are: 1. The "Increasing Count" method. Start as usual with a Wild Magic Surge triggering when the player rolls a `1` on their Surge roll. Every time a Surge *does not occur*, increase the D.C. for avoiding the Surge by one: `1` $\rightarrow$ `2` $\rightarrow$ `3`, etc. When a Surge *does* occur, reset the D.C. to 1. 2. The "Decreasing Die" method. Start as usual with a Wild Magic Surge triggering when the player rolls a `1` on their `d20` Surge roll. Every time a surge *does not occur*, decrease the size of the die by one: `d20` $\rightarrow$ `d12` $\rightarrow$ `d10`, etc. When a Surge *does* occur, reset the die to a `d20`. If a player avoids triggering a Surge all the way down through a `d2` (a coin flip), their next Surge is automatic. Or, we can think of this as rolling a `1` on a "`d1`". Below we calculate the probabilities of triggering a Wild Magic Surge under both of the above systems. ```python import numpy as np import pandas as pd import altair as alt dice = [20, 12, 10, 8, 6, 4, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1] count = [*range(1,21)] def pDiceSurge(d): return 1.0/d def pDiceNoSurge(d): return 1.0-(1.0/d) def pCountSurge(dc): return dc/20.0 def pCountNoSurge(dc): return 1-(dc/20.0) ``` First, we calculate the $PDF$ for the Decreasing Die method &mdash; that is, the probability of rolling a Wild Magic surge on the $i^{th}$ roll exactly, no earlier and no later, for each $i$. This is equal to the probability that we *don't* roll a surge for the first $i-1$ rolls, times the probability that we *do* roll a surge on the $i^{th}$ roll: ```python marginalP = list(map(pDiceNoSurge, dice)) dicePDF = [] for i in range(0,20): if (i == 0): P = pDiceSurge(dice[0]) else: P = np.prod(marginalP[:i]) * pDiceSurge(dice[i]) dicePDF.append(P) ``` We also calculate the $PDF$ for the Increasing Count method: ```python marginalP = list(map(pCountNoSurge, count)) countPDF = [] for i in range(0,20): if (i == 0): P = pCountSurge(count[0]) else: P = np.prod(marginalP[:i]) * pCountSurge(count[i]) countPDF.append(P) ``` Then we calculate the $CDF$ &mdash; that is, the probability of encountering a Wild Magic Surge in $k$ rolls or fewer, for each $k$. This is just the sum from $i=1$ to $i=k$ of the probabilities of getting a surge in exactly $i$ rolls &mdash; a partial sum of the $PDF$ we calculated above: ```python diceCDF = [] countCDF = [] for i in range(1,21): diceCDF.append(np.sum(dicePDF[:i])) countCDF.append(np.sum(countPDF[:i])) ``` And now, the fun part, we plot the results!
```python #collapse cData = [] pData = [] for i in range(0, len(diceCDF)): cData.append([i+1, diceCDF[i], 'Decreasing Die']) cData.append([i+1, countCDF[i], 'Increasing Count']) pData.append([i+1, dicePDF[i], 'Decreasing Die']) pData.append([i+1, countPDF[i], 'Increasing Count']) df = pd.DataFrame(cData, columns=['Number of Rolls', 'Probability', 'Method']) df.reset_index() # Create a selection that chooses the nearest point & selects based on x-value nearest = alt.selection(type='single', nearest=True, on='mouseover', fields=['Number of Rolls'], empty='none') points = alt.Chart(df).mark_circle().encode( x='Number of Rolls:O', y=alt.Y('Probability', title='Probability of a Surge'), color='Method', opacity=alt.condition(nearest, alt.value(1), alt.value(.6)) ) # Transparent selectors across the chart. This is what tells us # the x-value of the cursor selectors = alt.Chart(df).mark_point().encode( x='Number of Rolls:O', opacity=alt.value(0), ).add_selection( nearest ) # Draw text labels near the points, and highlight based on selection text = points.mark_text(align='left', dx=5, dy=-5).encode( text=alt.condition(nearest, 'Probability:Q',alt.value(' ')) ) # Draw a rule at the location of the selection rules = alt.Chart(df).mark_rule(color='gray').encode( x='Number of Rolls:O', ).transform_filter( nearest ) # Put the five layers into a chart and bind the data alt.layer( selectors, points, rules, text ).properties( width=600, height=300 ) ``` <div id="altair-viz-29ea6b60a555442ab8e119cb06a27c0b"></div> As you can see, the probability of a Wild Magic Surge is generally higher with the Increasing Count method. This is somewhat expected, as the probability is the same initially for both methods, then at the second roll we have: $$\begin{align} P_{\text{Increasing Count}}(S) = \frac{2}{20} &= \frac{1}{10}\\ P_{\text{Decreasing Die}}(S) &= \frac{1}{12} \end{align}$$ Similarly for the third roll, where the Increasing Count gives a $3/20$ probability of a surge vs. a $1/10$ for the Decreasing Die. So the Increasing Count takes an early lead which it maintains until the seventh roll, where the probability is about even between the methods. After this roll, the Decreasing Die method takes the lead because it gives an automatic Surge from here on out. My personal preference would lean towards the Increasing Count method, since you could conceivably get a string of rolls that build to a fairly high D.C., which feels a little more dramatic. On the other hand, the Decreasing Die method gives you a guaranteed Surge a fair bit sooner, which is part of the point of these tweaks in the first place. We can also compute the expected number of rolls to get a Wild Magic Surge for both methods: ```python def E(pdf): ex = 0 for i in range(0, len(pdf)): ex += (i+1)*pdf[i] return ex print("Expectation for Dice Method:", E(dicePDF)) print("Expectation for Count Method:", E(countPDF)) ``` Expectation for Dice Method: 5.504768880208333 Expectation for Count Method: 5.293584586000901 Which is to say, on average the Increasing Count method will give us a Wild Magic Surge slightly sooner. Perhaps another reason to favor it. 
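As an empirical cross-check of the analytic PDFs and expectations above, we can also simulate both homebrew rules directly and compare the average number of rolls to the expectations just computed. This is a minimal Monte Carlo sketch; the seed and number of trials are arbitrary choices.

```python
rng = np.random.default_rng(0)

def rolls_until_surge_decreasing_die(rng):
    # walk down the die ladder until a 1 comes up; the final "d1" is an automatic surge
    for rolls, d in enumerate([20, 12, 10, 8, 6, 4, 2, 1], start=1):
        if rng.integers(1, d + 1) == 1:
            return rolls
    return rolls

def rolls_until_surge_increasing_count(rng):
    rolls = 0
    while True:
        rolls += 1
        if rng.integers(1, 21) <= rolls:  # the D.C. grows by one after every miss
            return rolls

n_trials = 100_000
print(np.mean([rolls_until_surge_decreasing_die(rng) for _ in range(n_trials)]))
print(np.mean([rolls_until_surge_increasing_count(rng) for _ in range(n_trials)]))
```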
For completeness, here is the same data as in the graph, but in a table view: ```python df.pivot(index='Number of Rolls', columns='Method') ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead tr th { text-align: left; } .dataframe thead tr:last-of-type th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr> <th></th> <th colspan="2" halign="left">Probability</th> </tr> <tr> <th>Method</th> <th>Decreasing Die</th> <th>Increasing Count</th> </tr> <tr> <th>Number of Rolls</th> <th></th> <th></th> </tr> </thead> <tbody> <tr> <th>1</th> <td>0.050000</td> <td>0.050000</td> </tr> <tr> <th>2</th> <td>0.129167</td> <td>0.145000</td> </tr> <tr> <th>3</th> <td>0.216250</td> <td>0.273250</td> </tr> <tr> <th>4</th> <td>0.314219</td> <td>0.418600</td> </tr> <tr> <th>5</th> <td>0.428516</td> <td>0.563950</td> </tr> <tr> <th>6</th> <td>0.571387</td> <td>0.694765</td> </tr> <tr> <th>7</th> <td>0.785693</td> <td>0.801597</td> </tr> <tr> <th>8</th> <td>1.000000</td> <td>0.880958</td> </tr> <tr> <th>9</th> <td>1.000000</td> <td>0.934527</td> </tr> <tr> <th>10</th> <td>1.000000</td> <td>0.967264</td> </tr> <tr> <th>11</th> <td>1.000000</td> <td>0.985269</td> </tr> <tr> <th>12</th> <td>1.000000</td> <td>0.994107</td> </tr> <tr> <th>13</th> <td>1.000000</td> <td>0.997938</td> </tr> <tr> <th>14</th> <td>1.000000</td> <td>0.999381</td> </tr> <tr> <th>15</th> <td>1.000000</td> <td>0.999845</td> </tr> <tr> <th>16</th> <td>1.000000</td> <td>0.999969</td> </tr> <tr> <th>17</th> <td>1.000000</td> <td>0.999995</td> </tr> <tr> <th>18</th> <td>1.000000</td> <td>1.000000</td> </tr> <tr> <th>19</th> <td>1.000000</td> <td>1.000000</td> </tr> <tr> <th>20</th> <td>1.000000</td> <td>1.000000</td> </tr> </tbody> </table> </div>
0690bbe07a1416077b20b1e3cb718d64ded36d7f
22,967
ipynb
Jupyter Notebook
_notebooks/2020-05-03-Wild Magic Surges.ipynb
buggins333/mmnxi
8f322e7aae952bb18b4957cc1b62370e22e5f4f4
[ "Apache-2.0" ]
null
null
null
_notebooks/2020-05-03-Wild Magic Surges.ipynb
buggins333/mmnxi
8f322e7aae952bb18b4957cc1b62370e22e5f4f4
[ "Apache-2.0" ]
7
2020-05-03T23:08:57.000Z
2022-02-26T07:37:30.000Z
_notebooks/2020-05-03-Wild Magic Surges.ipynb
buggins333/mmnxi
8f322e7aae952bb18b4957cc1b62370e22e5f4f4
[ "Apache-2.0" ]
null
null
null
45.121807
5,310
0.493099
true
2,727
Qwen/Qwen-72B
1. YES 2. YES
0.90599
0.841826
0.762685
__label__eng_Latn
0.924659
0.610306
```python # needed imports import numpy as np from numpy import empty import matplotlib.pyplot as plt ``` # B-Splines We recall the Cox-DeBoor formula, used as a definition for B-Splines: The j-th B-spline of degree $p$ is defined by the recurrence relation: \begin{align} N_j^p = \frac{t-t_j}{t_{j+p}-t_{j}} N_j^{p-1} + \frac{t_{j+p+1}-t}{t_{j+p+1}-t_{j+1}} N_{j+1}^{p-1}, \label{eq:bspline-reccurence} \end{align} where $T=\{t_i\}_{0\leqslant i \leqslant m}$ is a sequence of non-decreasing real numbers, called the knot vector. ## Examples **Example 1.** We consider a linear B-Spline with the knot vector $T = [0, 1, 2]$ ```python def N1_0(t): if t >= 0 and t< 1: return t if t >= 1 and t< 2: return 2-t return 0. xs = np.linspace(0., 2., 200) plt.plot(xs, [N1_0(x) for x in xs], label='$N_0^1$') plt.legend() ``` **Example 2.** We consider a linear B-Spline with the knot vector $T = [0, 0, 1]$ ```python def N1_0(t): if t >= 0 and t< 1: return 1-t return 0. xs = np.linspace(0., 1., 200) plt.plot(xs, [N1_0(x) for x in xs], label='$N_0^1$') plt.legend() ``` **Example 3.** We consider a linear B-Spline with the knot vector $T = [0, 1, 1]$ ```python def N1_0(t): if t >= 0 and t< 1: return t return 0. xs = np.linspace(0., 1., 201)[:-1] plt.plot(xs, [N1_0(x) for x in xs], label='$N_0^1$') plt.legend() ``` **Example 4.** We consider linear B-Splines with the knot vector $T = [0, 0, 1, 1]$ ```python def N1_0(t): if t >= 0 and t< 1: return 1-t return 0. def N1_1(t): if t >= 0 and t< 1: return t return 0. xs = np.linspace(0., 1., 201)[:-1] plt.plot(xs, [N1_0(x) for x in xs], label='$N_0^1$') plt.plot(xs, [N1_1(x) for x in xs], label='$N_1^1$') plt.legend() ``` **Example 5.** We consider linear B-Splines with the knot vector $T = [0, 0, 1, 2]$ ```python def N1_0(t): if t >= 0 and t< 1: return 1-t return 0. def N1_1(t): if t >= 0 and t< 1: return t if t >= 1 and t< 2: return 2-t return 0. xs = np.linspace(0., 2., 201)[:-1] plt.plot(xs, [N1_0(x) for x in xs], label='$N_0^1$') plt.plot(xs, [N1_1(x) for x in xs], label='$N_1^1$') plt.legend() ``` **Example 6.** We consider a quadratic B-Spline with the knot vector $T = [0, 0, 1, 1]$ ```python def N2_0(t): if t >= 0 and t< 1: return 2*t*(1-t) return 0. xs = np.linspace(0., 1., 201)[:-1] plt.plot(xs, [N2_0(x) for x in xs], label='$N_0^2$') plt.legend() ``` **Example 7.** We consider a quadratic B-Spline with the knot vector $T = [0, 0, 1, 2]$ ```python T = [0, 0, 1, 2] def N2_0(t): if t >= 0 and t< 1: return 2*t-3./2.*t**2 if t >= 1 and t< 2: return 0.5*(2-t)**2 return 0. xs = np.linspace(0., 2., 200) plt.plot(xs, [N2_0(x) for x in xs], label='$N_0^2$') plt.legend() ``` **Example 8.** We consider linear B-Splines with the knot vector $T = [0, 0, 1, 2, 3, 3]$ ```python def N1_0(t): if t >= 0 and t< 1: return 1-t return 0. def N1_1(t): if t >= 0 and t< 1: return t if t >= 1 and t< 2: return 2-t return 0. def N1_2(t): if t >= 1 and t< 2: return t-1 if t >= 2 and t< 3: return 3-t return 0. def N1_3(t): if t >= 2 and t< 3: return t-2 return 0. xs = np.linspace(0., 3., 201)[:-1] plt.plot(xs, [N1_0(x) for x in xs], label='$N_0^1$') plt.plot(xs, [N1_1(x) for x in xs], label='$N_1^1$') plt.plot(xs, [N1_2(x) for x in xs], label='$N_2^1$') plt.plot(xs, [N1_3(x) for x in xs], label='$N_3^1$') plt.legend() ``` **Example 9.** We consider quadratic B-Splines with the knot vector $T = [0, 0, 0, 1, 1, 1]$ ```python def N2_0(t): if t >= 0 and t< 1: return (1-t)**2 return 0. def N2_1(t): if t >= 0 and t< 1: return 2*t*(1-t) return 0. 
def N2_2(t): if t >= 0 and t< 1: return t**2 return 0. xs = np.linspace(0., 1., 201)[:-1] plt.plot(xs, [N2_0(x) for x in xs], label='$N_0^2$') plt.plot(xs, [N2_1(x) for x in xs], label='$N_1^2$') plt.plot(xs, [N2_2(x) for x in xs], label='$N_2^2$') plt.legend() ``` ## Evaluation of B-Splines Given a knot sequence $T=\{t_i\}_{0\leqslant i \leqslant n + p}$, we are interested in the algorithmic evaluation of B-Splines of degree $p$. For a given real point $x$, it is done in two steps: 1. find the knot span index $j$, such that $x \in~ ] t_j,t_{j+1} [ $ 2. evaluate all non-vanishing B-Splines $N_{j-p}^p, \cdots, N_j^p$ The first point is achieved by the function implemented by the following function: ```python def find_span( knots, degree, x ): # Knot index at left/right boundary low = degree high = 0 high = len(knots)-1-degree # Check if point is exactly on left/right boundary, or outside domain if x <= knots[low ]: returnVal = low elif x >= knots[high]: returnVal = high-1 else: # Perform binary search span = (low+high)//2 while x < knots[span] or x >= knots[span+1]: if x < knots[span]: high = span else: low = span span = (low+high)//2 returnVal = span return returnVal ``` The second point is implemented by the following function, that returns all non-vanishing B-Splines at $x$ ```python def all_bsplines( knots, degree, x, span ): left = empty( degree , dtype=float ) right = empty( degree , dtype=float ) values = empty( degree+1, dtype=float ) values[0] = 1.0 for j in range(0,degree): left [j] = x - knots[span-j] right[j] = knots[span+1+j] - x saved = 0.0 for r in range(0,j+1): temp = values[r] / (right[r] + left[j-r]) values[r] = saved + right[r] * temp saved = left[j-r] * temp values[j+1] = saved return values ``` The following function plots all B-Splines given a knot vector and a polynomial degree. ```python def plot_splines(knots, degree, nx=100): xmin = knots[degree] xmax = knots[-degree-1] # grid points for evaluation xs = np.linspace(xmin,xmax,nx) # this is the number of the BSplines in the Schoenberg space N = len(knots) - degree - 1 ys = np.zeros((N,nx), dtype=np.double) for ix,x in enumerate(xs): span = find_span( knots, degree, x ) b = all_bsplines( knots, degree, x, span ) ys[span-degree:span+1, ix] = b[:] for i in range(0,N): plt.plot(xs,ys[i,:], label='$N_{}$'.format(i+1)) plt.legend(loc=9, ncol=4) ``` ### Knots vector families There are two kind of **knots vectors**, called **clamped** and **unclamped**. Both families contains **uniform** and **non-uniform** sequences. 
The following are examples of such knots vectors #### Clamped knots (open knots vector) ##### uniform ```python T = np.array([0, 0, 0, 1, 2, 3, 4, 5, 5, 5]) plot_splines(T, degree=2, nx=100) ``` ```python T = [-0.2, -0.2, 0.0, 0.2, 0.4, 0.6, 0.8, 0.8] plot_splines(T, degree=2, nx=100) ``` ##### non-uniform ```python T = [0, 0, 0, 1, 3, 4, 5, 5, 5] plot_splines(T, degree=2, nx=100) ``` ```python T = [-0.2, -0.2, 0.4, 0.6, 0.8, 0.8] plot_splines(T, degree=2, nx=100) ``` #### Unclamped knots ##### uniform ```python T = [0, 1, 2, 3, 4, 5, 6, 7, 8] plot_splines(T, degree=2, nx=100) ``` ```python T = [-0.2, 0.0, 0.2, 0.4, 0.6, 0.8, 1.0] plot_splines(T, degree=2, nx=100) ``` ##### non-uniform ```python T = [0, 0, 3, 4, 7, 8, 9] plot_splines(T, degree=2, nx=100) ``` ```python T = [-0.2, 0.2, 0.4, 0.6, 1.0, 2.0, 2.5] plot_splines(T, degree=2, nx=100) ``` ```python # css style from IPython.core.display import HTML def css_styling(): styles = open("../styles/custom.css", "r").read() return HTML(styles) css_styling() ``` <link href='http://fonts.googleapis.com/css?family=Fenix' rel='stylesheet' type='text/css'> <link href='http://fonts.googleapis.com/css?family=Alegreya+Sans:100,300,400,500,700,800,900,100italic,300italic,400italic,500italic,700italic,800italic,900italic' rel='stylesheet' type='text/css'> <link href='http://fonts.googleapis.com/css?family=Source+Code+Pro:300,400' rel='stylesheet' type='text/css'> <style> @font-face { font-family: "Computer Modern"; src: url('http://mirrors.ctan.org/fonts/cm-unicode/fonts/otf/cmunss.otf'); } div.cell{ width:600px; margin-left:16% !important; margin-right:auto; } h1 { font-family: 'Alegreya Sans', sans-serif; } h2 { font-family: 'Fenix', serif; } h3{ font-family: 'Fenix', serif; margin-top:12px; margin-bottom: 3px; } h4{ font-family: 'Fenix', serif; } h5 { font-family: 'Alegreya Sans', sans-serif; } div.text_cell_render{ font-family: 'Alegreya Sans',Computer Modern, "Helvetica Neue", Arial, Helvetica, Geneva, sans-serif; line-height: 135%; font-size: 120%; width:600px; margin-left:auto; margin-right:auto; } .CodeMirror{ font-family: "Source Code Pro"; font-size: 90%; } /* .prompt{ display: None; }*/ .text_cell_render h1 { font-weight: 200; font-size: 50pt; line-height: 100%; color:#054BCD; margin-bottom: 0.5em; margin-top: 0.5em; display: block; } .text_cell_render h5 { font-weight: 300; font-size: 16pt; color: #054BCD; font-style: italic; margin-bottom: .5em; margin-top: 0.5em; display: block; } .warning{ color: rgb( 240, 20, 20 ) } </style> ```python ``` ```python ```
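As a closing cross-check that is not part of the original notebook, the Cox-DeBoor recurrence can also be coded as a direct (slow but readable) recursion and compared against the `find_span`/`all_bsplines` routines defined above. The helper name `bspline_rec` is chosen here purely for illustration, the test knot vector reuses the clamped quadratic example above, and the usual conventions $N_j^0(t)=1$ on $[t_j,t_{j+1})$ with $0/0:=0$ for repeated knots are assumed.

```python
import numpy as np

def bspline_rec(knots, degree, j, t):
    # N_j^p(t) straight from the Cox-DeBoor recurrence
    if degree == 0:
        return 1.0 if knots[j] <= t < knots[j+1] else 0.0
    left = right = 0.0
    d1 = knots[j+degree] - knots[j]
    d2 = knots[j+degree+1] - knots[j+1]
    if d1 > 0:
        left = (t - knots[j]) / d1 * bspline_rec(knots, degree-1, j, t)
    if d2 > 0:
        right = (knots[j+degree+1] - t) / d2 * bspline_rec(knots, degree-1, j+1, t)
    return left + right

# clamped quadratic knot vector, as in the first clamped example above
T = [0., 0., 0., 1., 2., 3., 4., 5., 5., 5.]
p = 2
n_basis = len(T) - p - 1

xs = np.linspace(0., 5., 101)[:-1]
vals = np.array([[bspline_rec(T, p, j, x) for j in range(n_basis)] for x in xs])

# partition of unity on the clamped knot vector
assert np.allclose(vals.sum(axis=1), 1.0)

# agreement with the vectorized evaluation routines defined above
for ix, x in enumerate(xs):
    span = find_span(T, p, x)
    assert np.allclose(all_bsplines(T, p, x, span), vals[ix, span-p:span+1])

print("recursive evaluation matches all_bsplines")
```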
be68eeedb881ca5dccd8806afde932e6491fdfd1
429,363
ipynb
Jupyter Notebook
cad/B-Splines.ipynb
ratnania/MA5938
c302cc419b4b422bbb66bbb886b8d5a705b8be90
[ "MIT" ]
4
2019-04-19T23:04:12.000Z
2022-01-14T00:18:28.000Z
cad/B-Splines.ipynb
ratnania/MA5938
c302cc419b4b422bbb66bbb886b8d5a705b8be90
[ "MIT" ]
null
null
null
cad/B-Splines.ipynb
ratnania/MA5938
c302cc419b4b422bbb66bbb886b8d5a705b8be90
[ "MIT" ]
2
2019-10-12T12:45:55.000Z
2020-09-11T22:33:12.000Z
429.363
38,388
0.938157
true
3,509
Qwen/Qwen-72B
1. YES 2. YES
0.865224
0.859664
0.743802
__label__eng_Latn
0.760135
0.566433
```python import numpy as np import matplotlib.pyplot as plt ``` Fourier Analysis of a Plucked String ------------------------------------ Let's assume we're studying a plucked string with a shape like this: \begin{eqnarray} f(x) & = & 2 A \frac{x}{L} & (x<L/2) \\ \\ f(x) & = & 2 A \left(\frac{L-x}{L}\right) & (x >= L/2) \end{eqnarray} Let's graph that and see what it looks like: ```python L=1.0 N=500 # make sure N is even for simpson's rule A=1.0 def fLeft(x): return 2*A*x/L def fRight(x): return 2*A*(L-x)/L def fa_vec(x): """ vector version 'where(cond, A, B)', returns A when cond is true and B when cond is false. """ return np.where(x<L/2, fLeft(x), fRight(x)) x=np.linspace(0,L,N) # define the 'x' array h=x[1]-x[0] # get x spacing y=fa_vec(x) plt.title("Vertical Displacement of a Plucked String at t=0") plt.xlabel("x") plt.ylabel("y") plt.plot(x,y) plt.grid() ``` Let's define the basis functions: \begin{equation} |n\rangle = b_n(x) = \sqrt{\frac{2}{L}} \sin(n \pi x/L) \end{equation} Let's look at a few of those. In python well use the function `basis(x,n)` for $|n\rangle$: ```python def basis(x, n): return np.sqrt(2/L)*np.sin(n*np.pi*x/L) for n in range(1,5): plt.plot(x,basis(x,n),label="n=%d"%n) plt.legend(loc=3) ``` If we guess that we can express $f(x)$ as a superposition of $b_n(x)$ then we have: \begin{equation} f(x) = c_1 b_1(x) + c_2 b_2(x) + c_3 b_3(x) + \cdots = \sum_{n=1}^\infty c_n b_n(x) \end{equation} What happens if we multiply $f(x)$ with $b_m(x)$ and then integrate from $x=0$ to $x=L$? \begin{equation} \int_0^L f(x) b_m(x) dx = c_1 \int_0^L b_m(x) b_1(x)dx + c_2 \int_0^L b_m(x) b_2(x)dx + c_3 \int_0^L b_m(x) b_3(x)dx + \cdots \end{equation} or more compactly in dirac notation: \begin{equation} \langle m|f\rangle = c_1 \langle m|1\rangle + c_2 \langle m|2\rangle + c_3 \langle m|3\rangle + \cdots \end{equation} or equivalently: \begin{equation} \langle m|f\rangle = \sum_{n=1}^\infty c_n \langle m|n\rangle \end{equation} Remember that the $b_n(x)$ are *orthonormal* so that means: $$\langle n|m \rangle = \int_0^L b_n(x) b_m(x)dx = \delta_{nm}$$ where $\delta_{nm}$ is defined as 0 if $n<>m$ and 1 if $n=m$. So: $$\sum_{n=1}^\infty c_n \langle m|n\rangle = c_m$$ or in other words: $$c_m = \langle m|f\rangle = \int_{0}^{L} b_m(x) f(x) dx $$ Yeah! Let's do that integral for this case. Note that the function $f(x)$ is symmetric about the midpoint, just like $b_m$ when $m$ is odd. When $m$ is even, the integral is zero. SO, for *odd* m we can write: $$ c_m = \int_{0}^{L} b_m(x) f(x) dx = 2 \int_{0}^{L/2} b_m(x) f(x) dx $$ (when $m$ is odd) or: $$ c_m = 2 \int_0^{L/2} \sqrt{\frac{2}{L}} \sin(\frac{m\pi x}{L}) 2A \frac{x}{L} dx $$ $$ c_m = \frac{4A}{L} \sqrt{\frac{2}{L}} \int_0^{L/2} x\sin(\frac{m\pi x}{L}) dx $$ $$ c_m = \frac{4A}{L} \sqrt{\frac{2}{L}} \frac{L^2}{\pi^2 m^2} (-1)^{\frac{m-1}{2}}$$ Or simplifying: $$ c_m = \frac{4A \sqrt{2L}}{\pi^2 m^2} (-1)^{\frac{m-1}{2}}$$ ```python def simpson_array(f, h): """ Use Simpson's Rule to estimate an integral of an array of function samples f: function samples (already in an array format) h: spacing in "x" between sample points The array is assumed to have an even number of elements. 
""" if len(f)%2 != 0: raise ValueError("Sorry, f must be an array with an even number of elements.") evens = f[2:-2:2] odds = f[1:-1:2] return (f[0] + f[-1] + 2*odds.sum() + 4*evens.sum())*h/3.0 def braket(n): """ Evaluate <n|f> """ return simpson_array(basis(x,n)*fa_vec(x),h) M=20 coefs = [0] coefs_th = [0] ys = [[]] sup = np.zeros(N) for n in range(1,M): coefs.append(braket(n)) # do numerical integral if n%2==0: coefs_th.append(0.0) else: coefs_th.append(4*A*np.sqrt(2*L)*(-1)**((n-1)/2.0)/(np.pi**2*n**2)) # compare theory ys.append(coefs[n]*basis(x,n)) sup += ys[n] plt.plot(x,sup) print("%10s\t%10s\t%10s" % ('n', 'coef','coef(theory)')) print("%10s\t%10s\t%10s" % ('---','-----','------------')) for n in range(1,M): print("%10d\t%10.5f\t%10.5f" % (n, coefs[n],coefs_th[n])) ``` Project 11 ============ Pick your own function and compute it's fourier coefficients analytically. Then, check your answer both graphically and numerically using simpson's rule. ```python ```
33324e80d4e3e6cce49cd9c4b155221c86a04f2e
100,457
ipynb
Jupyter Notebook
P11-FourierSeries.ipynb
sspickle/sci-comp-notebooks
c7f8afc087cc4013b5d13c698a96e5652a226dc3
[ "MIT" ]
20
2015-11-08T09:49:18.000Z
2021-08-09T16:07:51.000Z
P11-FourierSeries.ipynb
sspickle/sci-comp-notebooks
c7f8afc087cc4013b5d13c698a96e5652a226dc3
[ "MIT" ]
null
null
null
P11-FourierSeries.ipynb
sspickle/sci-comp-notebooks
c7f8afc087cc4013b5d13c698a96e5652a226dc3
[ "MIT" ]
15
2015-11-03T18:19:13.000Z
2022-01-26T20:15:30.000Z
311.012384
42,988
0.919289
true
1,658
Qwen/Qwen-72B
1. YES 2. YES
0.893309
0.912436
0.815088
__label__eng_Latn
0.79488
0.732055
# Problem: Find the equations of motion of the 2-DoF SCARA arm.

## Frames and Variables

Initially, we need to define our reference frames and the symbolic variables. The first **Reference Frame**, `B0`, is located at the same height as the arm, at the first rotating joint. The next frame, `B1`, is a **Moving Frame**: it follows the rotation of the first link and is therefore related to `B0` by a rotation of $\theta_1$. Finally, the last frame, `B2`, is rotated by $\theta_2$ with respect to `B1`.

```python
# Functions and libraries used
from sympy import symbols, pprint, simplify, Eq, diff
from sympy.physics.mechanics import *
from sympy.physics.mechanics.functions import inertia

init_vprinting()

# Symbolic variables
theta_1, theta_2 = dynamicsymbols('theta_1 theta_2')
dtheta_1, dtheta_2 = dynamicsymbols('theta_1 theta_2', 1)
tau_1, tau_2 = symbols('tau_1 tau_2')
l_1, l_2 = symbols('l_1 l_2', positive = True)
r_1, r_2 = symbols('r_1 r_2', positive = True)
m_1, m_2, g = symbols('m_1 m_2 g')
I_1_zz, I_2_zz = symbols('I_{1zz}, I_{2zz}')

# Reference frames
B0 = ReferenceFrame('B0')                 # Inertial frame
B1 = ReferenceFrame('B1')
B1.orient(B0, 'Axis', [theta_1, B0.z])    # Moving frame: theta_1 about B0.z
B2 = ReferenceFrame('B2')
B2.orient(B1, 'Axis', [theta_2, B1.z])    # Moving frame: theta_2 about B1.z
```

## Points and Rigid Bodies

Now that the frames are defined, we can start to place the fixed points. The points `CM_1` and `CM_2` represent the **Centers of Mass** of the arm's links. Each of these links has its own moment of inertia, `I_1` and `I_2`. Furthermore, points `O` and `A` represent the fixed joints.

```python
# Points and centers of mass
O = Point('O')
O.set_vel(B0, 0)

A = Point('A')
A.set_pos(O, l_1 * B1.x)
A.v2pt_theory(O, B0, B1)

CM_1 = Point('CM_1')
CM_1.set_pos(O, r_1 * B1.x)
CM_1.v2pt_theory(O, B0, B1)

CM_2 = Point('CM_2')
CM_2.set_pos(A, r_2 * B2.x)
CM_2.v2pt_theory(A, B0, B2)   # CM_2 moves with link 2, which rotates about A

# Rigid bodies
I_1 = inertia(B1, 0, 0, I_1_zz)
E_1 = RigidBody('Elo_1', CM_1, B1, m_1, (I_1, CM_1))   # Link 1

I_2 = inertia(B2, 0, 0, I_2_zz)
E_2 = RigidBody('Elo_2', CM_2, B2, m_2, (I_2, CM_2))   # Link 2
```

## Potential Energy and Generalized Forces

**Sympy** needs an explicit definition of the external potential energies of the bodies. As the arm only moves in the $(x, y)$ plane, gravity has no influence on the equations of motion.

```python
# Potential energy
P_1 = -m_1 * g * B0.z
r_1_CM = CM_1.pos_from(O).express(B0)
E_1.potential_energy = r_1_CM.dot(P_1)

P_2 = -m_2 * g * B0.z
r_2_CM = CM_2.pos_from(O).express(B0).simplify()
E_2.potential_energy = r_2_CM.dot(P_2)

# Generalized forces/torques
FL = [(B1, tau_1 * B0.z), (B2, tau_2 * B0.z)]

(E_2.potential_energy, E_1.potential_energy)
```

## Lagrangian and Equations of Motion

Finally, we can obtain the Lagrangian and the equations of motion of the system. In **Sympy**, we call the `Lagrangian` function and the `LagrangesMethod` class to solve the system.

```python
# Lagrange's method
L = Lagrangian(B0, E_1, E_2)
L = L.simplify()
LM = LagrangesMethod(L, [theta_1, theta_2], frame=B0, forcelist=FL)

# Equations of motion
L_eq = LM.form_lagranges_equations()
L_eq
```

We can also put the **equations of motion** in explicit first-order form using the `rhs` method of the previous class.

```python
# Equations ready for numerical solution
rhs = LM.rhs()
rhs
```

```python

```
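The notebook stops at the symbolic equations. As a hedged sketch of how one might continue to an actual trajectory, the cell below lambdifies the `rhs` expression obtained above and integrates it with `scipy.integrate.odeint`. It reuses the symbols defined at the top of the notebook; every numerical value (lengths, masses, inertias, torques, initial state, time span) is invented here for demonstration only.

```python
# Illustrative numerical simulation (not part of the original derivation).
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
from sympy import lambdify

# invented parameter values, for the demo only
params = {l_1: 0.5, r_1: 0.25, r_2: 0.2,
          m_1: 1.0, m_2: 0.8,
          I_1_zz: 0.02, I_2_zz: 0.01,
          tau_1: 0.1, tau_2: 0.0, g: 9.81}

state = [theta_1, theta_2, dtheta_1, dtheta_2]
rhs_num = lambdify(state, rhs.subs(params))

def xdot(y, t):
    # y = [theta_1, theta_2, dtheta_1, dtheta_2]
    return np.array(rhs_num(*y), dtype=float).ravel()

ts = np.linspace(0.0, 5.0, 500)
traj = odeint(xdot, [0.0, 0.0, 0.0, 0.0], ts)

plt.plot(ts, traj[:, 0], label=r'$\theta_1$')
plt.plot(ts, traj[:, 1], label=r'$\theta_2$')
plt.xlabel('t [s]')
plt.legend()
```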
9b7b74e416ff35261d8fc4f32e64e6e60fb44202
62,039
ipynb
Jupyter Notebook
examples/scara_arm/Dynamics/NotebookDynamics_2DoF-Scara.ipynb
abhikamath/pydy
0d11df897c40178bb0ffd9caa9e25bccd1d8392a
[ "BSD-3-Clause" ]
298
2015-01-31T11:43:22.000Z
2022-03-15T02:18:21.000Z
examples/scara_arm/Dynamics/NotebookDynamics_2DoF-Scara.ipynb
abhikamath/pydy
0d11df897c40178bb0ffd9caa9e25bccd1d8392a
[ "BSD-3-Clause" ]
359
2015-01-17T16:56:42.000Z
2022-02-08T05:27:08.000Z
examples/scara_arm/Dynamics/NotebookDynamics_2DoF-Scara.ipynb
pydy/pydy
4a2c46faae44d06017b64335e48992ee8c53e1b6
[ "BSD-3-Clause" ]
109
2015-02-03T13:02:45.000Z
2021-12-21T12:57:21.000Z
161.14026
28,252
0.72132
true
1,199
Qwen/Qwen-72B
1. YES 2. YES
0.90053
0.855851
0.770719
__label__eng_Latn
0.749614
0.628972
``` from sympy import symbols from sympy.physics.mechanics import * import sympy.physics.mechanics as me ``` ``` q1, q2 = dynamicsymbols('q1 q2') q1d, q2d = dynamicsymbols('q1 q2', 1) u1, u2 = dynamicsymbols('u1 u2') u1d, u2d = dynamicsymbols('u1 u2', 1) l, m, g = symbols('l m g') ``` ``` N = ReferenceFrame('N') A = N.orientnew('A', 'Axis', [q1, N.z]) B = N.orientnew('B', 'Axis', [q2, N.z]) ``` ``` A.set_ang_vel(N, u1 * N.z) B.set_ang_vel(N, u2 * N.z) ``` ``` O = Point('O') P = O.locatenew('P', l * A.x) R = P.locatenew('R', l * B.x) ``` ``` O.v2pt_theory? ``` ``` O.set_vel(N, 0) P.v2pt_theory(O, N, A) R.v2pt_theory(P, N, B) ``` l*u1*A.y + l*u2*B.y ``` ParP = Particle('ParP', P, m) ParR = Particle('ParR', R, m) ``` ``` kd = [q1d - u1, q2d - u2] FL = [(P, m * g * N.x), (R, m * g * N.x)] BL = [ParP, ParR] ``` ``` KM = KanesMethod(N, q_ind=[q1, q2], u_ind=[u1, u2], kd_eqs=kd) ``` ``` (fr, frstar) = KM.kanes_equations(FL, BL) kdd = KM.kindiffdict() mm = KM.mass_matrix_full fo = KM.forcing_full qudots = mm.inv() * fo qudots = qudots.subs(kdd) ``` ``` qudots.simplify() ``` ``` qudots ``` ``` angular_momentum? ``` ``` me.rigidbody? ``` ``` am = me.angular_momentum(O, N, ParP, ParR) am ``` (l**2*m*u1 + l*m*(l*(sin(q1)*sin(q2) + cos(q1)*cos(q2)) + l)*u1)*A.z + l*m*(l*(sin(q1)*sin(q2) + cos(q1)*cos(q2)) + l)*u2*B.z ``` ```
f6ad0c65e76d07f607cdc1cc1c6b3e6fba5a3546
21,799
ipynb
Jupyter Notebook
examples/double pendulum.ipynb
isaacyeaton/snaketurn
b7894735f487dfd317bf037b081cdd1ffe0d9524
[ "MIT" ]
null
null
null
examples/double pendulum.ipynb
isaacyeaton/snaketurn
b7894735f487dfd317bf037b081cdd1ffe0d9524
[ "MIT" ]
null
null
null
examples/double pendulum.ipynb
isaacyeaton/snaketurn
b7894735f487dfd317bf037b081cdd1ffe0d9524
[ "MIT" ]
null
null
null
76.22028
11,099
0.713932
true
592
Qwen/Qwen-72B
1. YES 2. YES
0.933431
0.685949
0.640286
__label__kor_Hang
0.142971
0.32593
# 1 - Importing cmpx

```python
import cmpx
```

```python
cmpx.__version__
```

'0.8.4'

# 2 - Instantiate the Complex class

```python
number = cmpx.Complex(re=1, im=3.2, restore=True)
```

```python
number
```

1 + 3.2j

```python
type(number)
```

cmpx.number.Complex

### As you can see, whenever you create a new object you have to give:
- re: the real part of the complex number.
- im: the imaginary part of the complex number.
- restore: this argument is used when you want to restore the last number you had, to avoid assigning None to your object; some operations are not allowed (such as dividing by zero), and in those cases the Python interpreter would otherwise assign None.

### Default values of the constructor are as follows:
- re = 0
- im = 0
- restore = True

```python
number = cmpx.Complex()
number
```

0

## There's another way to create a complex number

```python
number = cmpx.Complex.fromComplex(1 - 1j)
number
```

1.0 - j

- As you can see, you can pass the complex number directly in the usual notation, but keep in mind that the imaginary part must always be multiplied by a coefficient, as above.

```python
number = cmpx.Complex.fromComplex(2 - 2j)
number
```

2.0 - 2.0j

## To avoid the long writing, you can import the Complex class directly

```python
from cmpx import Complex

number = Complex(1,3)
number
```

1 + 3j

# 3 - Mathematical operations

- There are a lot of operations you can do!

```python
num1 = cmpx.Complex.fromComplex(1 - 2j)
num2 = cmpx.Complex.fromComplex(-2 + 1j)
```

## Addition

```python
result = num1 + num2
result
```

-1.0 - j

## Subtraction

```python
result = num1 - num2
result
```

3.0 - 3.0j

## Multiplication

```python
result = num1 * num2
result
```

5.0j

## Division

```python
result = num1 / num2
result
```

-0.8 + 0.6j

```python
result = num1 // num2
result
```

-1.0

- You can also modify the number in place with the augmented assignment operators (+=, -=, *=, /=, //=).
- You can also perform operations with non-instantiated complex numbers, as follows.

```python
num1 += 1
num1
```

2.0 - 2.0j

```python
num1 *= 1 - 3j
num1
```

-4.0 + 10.0j

```python
num1 /= 0
num1
```

Float division by zero
Restoring last number

-4.0 + 10.0j

- Oh oh! Did you see what happened there? As mentioned above, the restore field plays a major role here: because restore is True by default, the last number you had is restored.
- Also, you can set restore=False if you don't want to recover the last number.

## Modulus

```python
num2.mod()
```

2.23606797749979

## Conjugate

```python
num2.con()
```

-2.0 - j

# 4 - Comparisons

- You can also compare complex numbers based on their moduli.

```python
num1 = cmpx.Complex(1, 2)
num2 = cmpx.Complex(-2, -3)
```

```python
if num1 > num2:
    print('({}) > ({})'.format(num1, num2))
```

```python
if num1 >= num2:
    print('({}) >= ({})'.format(num1, num2))
```

```python
if num1 < num2:
    print('({}) < ({})'.format(num1, num2))
```

(1 + 2j) < (-2 - 3j)

```python
if num1 <= num2:
    print('({}) <= ({})'.format(num1, num2))
```

(1 + 2j) <= (-2 - 3j)

```python
if num1 == num2:
    print('({}) == ({})'.format(num1, num2))
```

```python
if num1 != num2:
    print('({}) != ({})'.format(num1, num2))
```

(1 + 2j) != (-2 - 3j)

- You can also do comparisons with non-instantiated numbers, as follows.
```python
if num1 > (1 - 0.5j):
    print('({}) > ({})'.format(num1, (1-0.5j)))
```

(1 + 2j) > ((1-0.5j))

# 5 - Solving linear and second-degree equations

- You can solve a first- or second-degree equation using the solve function; it takes three arguments, the coefficients (a, b and c) of:

\begin{equation}
ax^2 + bx + c = 0
\end{equation}

- The solve function returns a tuple of solutions, which are Complex objects.

```python
from cmpx.equations import solve
```

```python
solutions = solve(-1, 2.5, -4)
```

```python
solutions
```

(1.25 + 1.5612494995995996j, 1.25 - 1.5612494995995996j)

```python
type(solutions[0])
```

cmpx.number.Complex
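As an extra sanity check, not part of the cmpx API, the roots returned above can be compared against the quadratic formula evaluated with the standard-library `cmath` module:

```python
import cmath

# same coefficients as in solve(-1, 2.5, -4): a*x**2 + b*x + c = 0
a, b, c = -1, 2.5, -4
disc = cmath.sqrt(b*b - 4*a*c)
roots = ((-b + disc) / (2*a), (-b - disc) / (2*a))
print(roots)   # the same two roots as above, possibly in a different order
```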
894032da6090a406990048f5f199264b8e7d0a98
13,484
ipynb
Jupyter Notebook
cmpx tutorial.ipynb
Omar-Belghaouti/PythonComplex
4f286ee4a4c8c042a02a5a2e92d063377c15c713
[ "MIT" ]
7
2019-09-10T20:35:44.000Z
2021-09-30T11:14:25.000Z
cmpx tutorial.ipynb
Omar-Belghaouti/PythonComplex
4f286ee4a4c8c042a02a5a2e92d063377c15c713
[ "MIT" ]
null
null
null
cmpx tutorial.ipynb
Omar-Belghaouti/PythonComplex
4f286ee4a4c8c042a02a5a2e92d063377c15c713
[ "MIT" ]
null
null
null
18.075067
251
0.468185
true
1,323
Qwen/Qwen-72B
1. YES 2. YES
0.880797
0.855851
0.753831
__label__eng_Latn
0.981002
0.589735
# Sistemas de Recomendacion avanzados Diego Galeano, Ph.D. Material basado en: https://developers.google.com/machine-learning/recommendation/labs/movie-rec-programming-exercise ``` import matplotlib.pyplot as plt import seaborn as sns import pandas as pd import numpy as np import copy import random import scipy.io from scipy.optimize import minimize from scipy.optimize import differential_evolution # If you want to have direct access to the datasets and codes you can clone the following github repository ! git clone https://github.com/saminehbagheri/Recommender-System.git %cd Recommender-System ``` /usr/local/lib/python3.6/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead. import pandas.util.testing as tm Cloning into 'Recommender-System'... remote: Enumerating objects: 66, done. remote: Total 66 (delta 0), reused 0 (delta 0), pack-reused 66 Unpacking objects: 100% (66/66), done. /content/Recommender-System ### Leemos el Movielens dataset ``` mat = scipy.io.loadmat('ex8_movies.mat') movie_names = pd.read_csv('movie_ids.txt',delimiter=';',header=None)[1] Y=mat['Y'] R=mat['R'] num_user=Y.shape[1] num_movie=Y.shape[0] density= 100*np.sum(np.where(Y > 0,1,0))/(num_user*num_movie) print("numero de usuarios:"+str(num_user)) print("numero de peliculas:"+str(num_movie)) print("densidad del rating matrix: "+str(density) + '%') ``` numero de usuarios:943 numero de peliculas:1682 densidad del rating matrix: 6.304669364224532% ## Matrix factorizacion: collaborative filtering Recordando el sistema de recomendación basado en contenido, la idea era describir cada item y cada usuario con un vector de características importantes. En nuestro ejemplo de la clase anterior, el vector de características que usamos tenía solo tres elementos, pero somos conscientes de que hay criterios mucho más importantes que esos tres para capturar nuestro interés por una película y, a veces, estas características pueden ser más complicadas que simplemente el género de la película. Matrix Factorization supone que podemos describir una película y un usuario en forma de un vector de características. La idea principal de la factorización matricial es encontrar el vector de característica de usuario adecuado $ \vec {x}_i $ y el vector de característica de elemento $ \vec{\theta}_j $ para todos los usuarios $ i = 1 \cdots n_u $ y películas $ j = 1 \cdots n_m $ que sus productos punto dan una buena estimación de la calificación que el $ i $ -ésimo usuario daría a la $ j $ -ésima película $ y_ {ij} $. **Porque este metodo se llama decomposicion de matrices?** Porque el rating que un usuario da a una pelicula se modela como el producto escalar entre el vector de caracteristicas del usuario y el vector de caracteristicas de la pelicula, es decir: $\vec{x}_i\cdot\vec{\theta}_j=y_{ij}$. Supongamos que apilamos todos los vectores de características del usuario en la matriz de características del usuario $\mathbf{X}=\begin{bmatrix}-\vec{x}_1^T-\\ -\vec{x}_2^T- \\ \vdots \\-\vec{x}_{n_u}^T- \end{bmatrix}_{n_u \times n_f}$ y todos los vectores de características de la película juntos en la matriz de características de la película $\mathbf{\Theta}=\begin{bmatrix}-\vec{\theta}_1^T-\\ -\vec{\theta}_2^T- \\ \vdots \\-\vec{\theta}_{n_m}^T- \end{bmatrix}_{n_m \times n_f}$. 
Entonces, la matriz de calificación se puede determinar de la siguiente manera: \begin{equation}\mathbf{\Theta} \cdot \mathbf{X}^T=\mathbf{R}\end{equation} \begin{equation}\begin{bmatrix}- \vec{\theta}_1^T-\\ - \vec{\theta}_2^T- \\ \vdots \\- \vec{\theta}_{n_m}^T- \end{bmatrix} \cdot \begin{bmatrix}|&|&\cdots&|\\ \vec{x_1}&\vec{x_2}&\cdots&\vec{x}_{n_u}\\ |&|&\cdots&|\end{bmatrix}=\mathbf{R}\end{equation} Sería perfecto si tuviéramos las matrices de características adecuadas, pero no las tenemos. Lo que en realidad tenemos es la matriz de calificación $ \mathbf{Y}$ e intentamos *aprender* las matrices de características de $ \mathbf{Y}$. El nombre **factorización de matrices** proviene de este punto que este algoritmo tiende a encontrar matrices de características razonablemente óptimas al factorizar la matriz de calificación $ \mathbf{Y}$. **¿Cómo funciona la factorización de matrices?** Supongamos que iniciamos las matrices de características con valores completamente aleatorios. El producto escalar del vector de características de usuario $ i$ -ésimo y el vector de características de película $ j $ -ésimo dará $ p_{ij} $ que probablemente sea muy diferente a la calificación real dada $ y_{ij} $. La idea es encontrar las matrices de características de manera que el error $ | p_{ij} -y_{ij} | $ sea lo más mínimo posible para todas las calificaciones dadas. En otras palabras, la factorización matricial se convirtió en un problema de optimización de minimizar una función de costo que es la suma de todos los errores $ \color {green}{\text{al cuadrado}} $ para todas las celdas con una calificación. La función de costo a minimizar se define de la siguiente manera: \begin{equation}J(\mathbf{\Theta},\mathbf{X})=\frac{1}{2} \sum\limits_{(i,j):r(i,j)=\{1\}} (\vec{\theta}_{j} \vec{x}_{i}-y_{i,j})^2\end{equation} **¿Cómo resolver un problema de optimización de este tipo?** Podemos pasar la función de costo con un punto de partida aleatorio a un optimizador y esperar hasta que el optimizador encuentre una solución óptima, pero esto llevará un tiempo insoportablemente largo ya que el problema de optimización es dimensional muy alto. ¿Puede adivinar cuál es la dimensión de entrada de este problema de optimización? $ (n_u + n_m) * n_f $, donde $ n_f $ es el número de caracteristicas. Es más lógico calcular los gradientes hacia la solución óptima de manera iterativa mediante un descenso de gradiente o métodos de gradiente conjugado. Es muy sencillo calcular los gradientes ya que nuestra función de costo es una función cuadrática. ## The Gradients \begin{equation}\frac{\partial J}{\partial x_i^{(k)}}=\sum\limits_{j:r(i,j)=1}(\vec{\theta}_{j} \vec{x}_{i}-y_{i,j})\theta^{(k)}_j,\end{equation} \begin{equation}\frac{\partial J}{\partial \theta_j^{(k)}}=\sum\limits_{i:r(i,j)=1}(\vec{\theta}_{j} \vec{x}_{i}-y_{i,j})x^{(k)}_i,\end{equation} donde $x^{(k)}_j$ es el elemento $k$-esimo del $i$-esimo vector de caracteristicas del usuario $\vec{x}_i$. Los pasos principales del algoritmo de factorización matricial se pueden resumir a continuación: 1. Inicialice $ \mathbf {\Theta} $ y $ \mathbf{X} $ con números pequeños aleatorios 2. Minimice la función de costo $ J (\mathbf{\Theta}, \mathbf{X}) $ 3. 
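As a concrete illustration of the masked cost $J(\mathbf{\Theta},\mathbf{X})$ before turning to its gradients below, the following tiny numeric example evaluates $J$ for a 3-movie, 2-user rating matrix with two missing entries, using the same orientation as the formula above ($\mathbf{\Theta}$ stacks movie features, $\mathbf{X}$ stacks user features). All numbers are invented for the demonstration.

```python
import numpy as np

# invented toy data: rows = movies, columns = users; 0 marks a missing rating
Y_toy = np.array([[5.0, 1.0],
                  [3.0, 0.0],
                  [0.0, 4.0]])
observed = Y_toy > 0

# invented feature matrices with n_f = 2
Theta_toy = np.array([[2.0, 1.0],    # movie features, n_m x n_f
                      [1.0, 0.0],
                      [0.5, 2.0]])
X_toy = np.array([[1.0, 0.5],        # user features, n_u x n_f
                  [0.2, 1.0]])

pred = Theta_toy @ X_toy.T                      # n_m x n_u predicted ratings
J = 0.5 * np.sum((pred - Y_toy)[observed]**2)   # sum only over rated cells
print(pred)
print("J =", J)
```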
Utilice las matrices de características optimizadas para predecir <a id='MFPy'> </a> # Implementación de factorización matricial ## Función de inicialización de parámetros Esta función simplemente obtiene el número de usuarios, el número de elementos y el número de características como entrada y devuelve la matriz de características del usuario iniciada aleatoriamente y la matriz de características del elemento. ``` def initilizeFeat(nu,ni,nf,seed=42): ''' Inicializacion de las matrices de caracteristicas. ''' # semilla de randomizacion np.random.seed(seed) # inicializacion aleatoria de matrices de caracteristicas Theta = np.random.rand(nu,nf)*0.05 X = np.random.rand(ni,nf)*0.05 return X, Theta ``` ## Funciones auxiliares Tenemos dos funciones auxiliares. Una para aplanar las matrices de características en una matriz 1-D, y otra hace lo inverso. ``` def flatterRev(x,nu,ni,nf): ''' Convierte un vector 1-D a las matrices de caracteristicas X y Theta. x: vector 1-D. nu: numero de usuarios. ni: numero de items. nf: numero de caracteristicas. ''' X=x[0:ni*nf].reshape((ni,nf),order='F') Theta=x[ni*nf:].reshape((nu,nf),order='F') return X,Theta def flatter(X, Theta): ''' Convierte las matrices de caracteristicas a un vector 1-D. X: matriz de caracteristicas de usuarios. Theta: matriz de caracteristicas de peliculas. ''' x=np.concatenate([X.reshape(X.shape[0]*X.shape[1],order='F'),Theta.reshape(Theta.shape[0]*Theta.shape[1],order='F')]) return(x) ``` # Función de costo La función de costo es simplemente la implementación de la fórmula de la función de costo que hemos discutido anteriormente en una forma vectorizada para evitar bucles for anidados para las sumas. La única diferencia es un término adicional vinculado con el parámetro $ \lambda $, el parámetro de regularización. ``` def costFunc(X,Theta,R,M,la=0): ''' Funcion auxiliar de la funcion de costo. ''' R=np.ma.array(R, mask=M) e=0.5*np.sum(np.power((np.dot(Theta,X.T)-R),2))+la*0.5*np.sum(np.power(Theta, 2))+la*0.5*np.sum(np.power(X, 2)) return(e/np.sum(M==False)) def CF(x,R,M,nu,ni,nf,la=0): ''' Funcion de costo con termino de regularizacion. x: 1-D vector que contiene X y Theta. R: matriz de ratings. M: matriz de enmascaramiento para solo optimizar en los ratings observados. nu: numero de usuarios. ni: numero de items. nf: numero de caracteristicas. la: lambda, termino de regularizacion L2. ''' X, Theta=flatterRev(x,nu,ni,nf) error=costFunc(X,Theta,R,M,la=la) return error ``` # Gradiente ``` def gradFunc(x,R,M,nu,ni,nf,la=0 ): ''' Retorna los gradientes para el optimizador. ''' X, Theta=flatterRev(x,nu,ni,nf) R=np.ma.array(R, mask=M) e=np.dot(Theta,X.T)-R TG=np.dot(e,X)+la*Theta XG=np.dot(e.T,Theta)+la*X grads=np.concatenate([XG.reshape(XG.shape[0]*XG.shape[1],order='F'),TG.reshape(TG.shape[0]*TG.shape[1],order='F')]) return grads/np.sum(M==False) ``` ## Función de entrenamiento La función de entrenamiento en realidad es solo aplicar un algoritmo de optimización en la función de costo. Usamos un optimizador de gradiente conjugado incorporado de la biblioteca scipy. Es posible que desee probar diferentes métodos. 
``` def trainMF(R,M,nf,la=0,seed=42): nu=R.shape[0] ni=R.shape[1] R=np.ma.array(R, mask=M) X, Theta=initilizeFeat(nu,ni,nf,seed=seed) x=flatter(X, Theta) res = minimize(CF, x, args=(R,M,nu,ni,nf,la), method='CG',jac=gradFunc,options={ 'disp': True,'gtol':1e-5}) MSE=CF(res.x,R,M,nu,ni,nf,la) return(MSE, res,nu,ni,nf) ``` # Funcion de prediccion ``` def Predict(res,nu,ni,nf,la=0): X, Theta=flatterRev(res.x,nu,ni,nf) predict=np.dot(Theta,X.T) return(predict) ``` # Construyendo el modelo del sistema de recomendación ``` runEXAMPLE=True def buildRSModel(R,M,mu=None, nf=10,la=0,seed=42, movie_names=None): trainR=copy.copy(R) trainM=copy.copy(M) trainR=np.ma.array(trainR, mask=trainM) if mu is None: mu=np.average(trainR,axis=1) trainR=trainR-mu[:,None] trainingError, res,nu,ni,nf=trainMF(trainR,M,nf=nf,la=la,seed=seed) model={'trainingError': trainingError, 'res':res,'nu':nu,'ni':ni,'nf':nf, 'la':la, 'movie_names':movie_names, 'mu':mu, 'R':R,'M':M} return model #Ejemplo if runEXAMPLE: R=mat['Y'] M=mat['R'] trainR=copy.copy(R) trainM= (M==0) mymodel=buildRSModel(R=trainR,M=trainM,mu=None, nf=100,la=0,seed=42, movie_names=movie_names) #print(mymodel) ``` Warning: Desired error not necessarily achieved due to precision loss. Current function value: 0.158532 Iterations: 16 Function evaluations: 335 Gradient evaluations: 324 # Predicción para el usuario X ``` runEXAMPLE=True def predictForUserX(user_Id,model,movie_Id=None): trainingError=model['trainingError'] res=model['res'] nu=model['nu'] ni=model['ni'] nf=model['nf'] la=model['la'] movie_names=model['movie_names'] mu=model['mu'] R=model['R'] M=model['M'] mypredict=Predict(res,nu,ni,nf,la=0) mydata=pd.DataFrame() Pred=mypredict[:,user_Id]+mu[user_Id] mydata['names']=movie_names mydata['predictedRating']=Pred mydata['originalrating']=R[:,user_Id] mydata=mydata.sort_values(by=['predictedRating'], ascending=False) output=mydata[mydata['originalrating'] == 0] return(output) #Example if runEXAMPLE: user_id = 125 print(predictForUserX(user_id,mymodel,movie_Id=None).head()) ``` names predictedRating originalrating 471 Dragonheart(1996) 4.268084 0 679 Kull the Conqueror(1997) 4.233533 0 320 Mother(1996) 4.195186 0 627 Sleepers(1996) 4.182644 0 221 Star Trek: First Contact(1996) 4.162360 0 # Ingrese su propia calificación Puede usar esta función para ingresar su propia calificación y ver lo que sugiere el sistema. Si no configura el parámetro model, se utilizarán los valores predeterminados de nf = 100, la = 0.1. 
``` def weRecommend(myratings,modelparam=None): movie_names = pd.read_csv('movie_ids.txt',delimiter=';',header=None)[1] mat = scipy.io.loadmat('ex8_movies.mat') print("Reading the data") R=mat['Y'] M=mat['R'] trainR=copy.copy(R) trainM= (M==0) num_user=R.shape[1] num_movie=R.shape[0] myratings=myratings.sort_values(by=['names'], ascending=False) movies=copy.copy(movie_names) movies=movies.sort_values( ascending=False) indices=movies[movies.isin( myratings['names'])].index newuserratingR=np.zeros(num_movie) newuserratingM=np.zeros(num_movie) newuserratingR[indices]=myratings['rating'] newuserratingM[indices]=1 newuserratingM= (newuserratingM==0) trainR=np.concatenate((newuserratingR[:,None],trainR),axis=1) trainM=np.concatenate((newuserratingM[:,None],trainM),axis=1) print("Training the Recommender System...") if modelparam is None: mymodel=buildRSModel(R=trainR,M=trainM,mu=None, nf=100,la=0.1, movie_names=movie_names) else: nf=modelparam['nf'] la=modelparam['la'] mymodel=buildRSModel(R=trainR,M=trainM,mu=None, nf=nf,la=la, movie_names=movie_names) print("Training is successfully finished") bests=predictForUserX(0,mymodel,movie_Id=None).head(15) worsts=predictForUserX(0,mymodel,movie_Id=None).tail(15) print("Predicting you're ratings:") bests=bests.iloc[:, :-1] worsts=worsts.iloc[:, :-1] output={'bests':bests,'worsts':worsts} return output ``` ``` movie_names = pd.read_csv('movie_ids.txt',delimiter=';',header=None)[1] ``` ``` df = pd.DataFrame() df['rating'] = [5,4,5,5,5] df['names'] = ['Toy Story(1995)', 'Batman Forever(1995)', 'Ace Ventura: Pet Detective(1994)', 'Lion King The(1994)', 'Mask The(1994)' ] ``` ``` weRecommend(df) ``` Reading the data Training the Recommender System... Warning: Desired error not necessarily achieved due to precision loss. Current function value: 0.164909 Iterations: 16 Function evaluations: 274 Gradient evaluations: 263 Training is successfully finished Predicting you're ratings: {'bests': names predictedRating 94 Aladdin(1992) 4.137037 587 Beauty and the Beast(1991) 4.103507 741 Ransom(1996) 4.080346 236 Jerry Maguire(1996) 4.039222 185 Blues Brothers The(1980) 4.032696 392 Mrs. Doubtfire(1993) 4.028297 293 Liar Liar(1997) 4.027884 158 Basic Instinct(1992) 4.027414 767 Casper(1995) 4.020870 229 Star Trek IV: The Voyage Home(1986) 4.020013 635 Escape from New York(1981) 4.018811 595 Hunchback of Notre Dame The(1996) 4.017570 6 Twelve Monkeys(1995) 4.015794 68 Forrest Gump(1994) 4.013471 95 Terminator 2: Judgment Day(1991) 4.012567, 'worsts': names predictedRating 273 Sabrina(1995) 3.787733 215 When Harry Met Sally...(1989) 3.784047 184 Psycho(1960) 3.782446 270 Starship Troopers(1997) 3.779023 746 Benny & Joon(1993) 3.778370 495 Its a Wonderful Life(1946) 3.776042 339 Boogie Nights(1997) 3.768823 244 Devils Own The(1997) 3.767892 474 Trainspotting(1996) 3.763347 214 Field of Dreams(1989) 3.761476 684 Executive Decision(1996) 3.748189 172 Princess Bride The(1987) 3.732079 143 Die Hard(1988) 3.730593 149 Swingers(1996) 3.724997 116 Rock The(1996) 3.685122} # Optimizar los parametros del sistema de recomendacion ## Función de error de prueba Existen diferentes enfoques para evaluar el desempeño de un sistema de recomendación en los datos de prueba. Un método consiste en calcular el error cuadrático medio de los datos de prueba. Otro enfoque es medir algún tipo de precisión de predicción. En la siguiente celda, definimos la precisión como el porcentaje de las calificaciones pronosticadas que tienen un error de 1 o menos. 
``` def testMF(tR,tM,predict): tR=np.ma.array(tR, mask=tM) e=np.abs(tR-predict) testMSE=np.sum(np.power(e,2))/np.sum(tM==False) return(testMSE) ``` ``` def splitMatrix(R,M,testPer): trainPer=1-testPer num_user=R.shape[1] num_movie=R.shape[0] overallRating=np.sum(M) testsize=testPer*overallRating testsize=testsize.astype(int) #split tarining and test dataset random.seed( 9273482 ) ind1, ind2=np.where(M==1) testSamples=random.sample(range(ind1.shape[0]), testsize) testInd1=ind1[testSamples] testInd2=ind2[testSamples] trainR=copy.copy(R) trainM=copy.copy(M) trainR[testInd1,testInd2]=0 trainM[testInd1,testInd2]=0 M= (trainM==0) trainR=np.ma.array(trainR, mask=M) mu=np.average(trainR,axis=1) testR=copy.copy(R) testM=np.zeros(shape = (testR.shape[0],testR.shape[1])) testM[testInd1,testInd2]=1 tM=(testM==0) testR=testR*testM testR=np.ma.array(testR, mask=tM) return trainR, M, testR, tM, mu ``` ``` DORUN=True trainR, M, testR, tM, mu=splitMatrix(Y,R,0.1) NF=[1,5,10,20] myseed=5623 if DORUN: trainR=trainR-mu[:,None] testR=testR-mu[:,None] for nf in NF: trainingError, res,nu,ni,nf=trainMF(trainR,M,nf=nf,la=0.1,seed=myseed) mypredict=Predict(res,nu,ni,nf,la=0) zeropredict=np.zeros(shape = (mypredict.shape[0],mypredict.shape[1])) testError=testMF(testR,tM,mypredict) print('[nf='+str(nf)+']'+'Training Error:'+str(trainingError)) print('[nf='+str(nf)+']'+'Test Error:'+str(testError)) ``` Warning: Desired error not necessarily achieved due to precision loss. Current function value: 0.467095 Iterations: 4 Function evaluations: 74 Gradient evaluations: 63 [nf=1]Training Error:0.46709498777841846 [nf=1]Test Error:0.995016231120181 Warning: Desired error not necessarily achieved due to precision loss. Current function value: 0.466449 Iterations: 3 Function evaluations: 103 Gradient evaluations: 91 [nf=5]Training Error:0.46644884839687906 [nf=5]Test Error:0.9962897332608236 Warning: Desired error not necessarily achieved due to precision loss. Current function value: 0.463523 Iterations: 2 Function evaluations: 59 Gradient evaluations: 48 [nf=10]Training Error:0.4635231472060253 [nf=10]Test Error:0.9924336179545064 Warning: Desired error not necessarily achieved due to precision loss. Current function value: 0.466339 Iterations: 3 Function evaluations: 92 Gradient evaluations: 80 [nf=20]Training Error:0.4663389769881152 [nf=20]Test Error:0.9980363109345022 ``` ```
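To close, here is a small self-contained sketch of the same algorithm on an invented 5-movie, 4-user matrix: masked squared error plus L2 regularization, minimized by plain gradient descent instead of scipy's conjugate-gradient optimizer. The matrix, number of features, learning rate and iteration count are all made up for illustration; it is meant only for stepping through the update rules by hand.

```python
import numpy as np

rng = np.random.RandomState(0)

# invented toy ratings: rows = movies, columns = users; 0 = not observed
Y_toy = np.array([[5, 4, 1, 0],
                  [4, 0, 1, 1],
                  [0, 1, 5, 4],
                  [1, 0, 4, 5],
                  [0, 1, 5, 4]], dtype=float)
Obs = Y_toy > 0

n_movies, n_users = Y_toy.shape
nf, lam, lr = 2, 0.05, 0.05            # features, L2 weight, learning rate

Theta = 0.1 * rng.rand(n_movies, nf)   # movie features
X = 0.1 * rng.rand(n_users, nf)        # user features

for it in range(3000):
    E = (Theta @ X.T - Y_toy) * Obs            # error only on observed cells
    grad_Theta = E @ X + lam * Theta           # dJ/dTheta
    grad_X = E.T @ Theta + lam * X             # dJ/dX
    Theta -= lr * grad_Theta
    X -= lr * grad_X

J = 0.5 * np.sum(E[Obs]**2) + 0.5 * lam * (np.sum(Theta**2) + np.sum(X**2))
print("final cost J =", round(J, 3))
print("predicted ratings:")
print(np.round(Theta @ X.T, 1))
```

The predictions on the observed cells should end up close to the given ratings, while the masked cells receive the model's extrapolated values.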
d9c08d720d0bf67ebea85c60c59ee174f4da3984
28,226
ipynb
Jupyter Notebook
clase 3.ipynb
orlandochr/RecommenderSystemClasses
5d73e01d195faef7ecc4f6297e03865be83d0c50
[ "MIT" ]
null
null
null
clase 3.ipynb
orlandochr/RecommenderSystemClasses
5d73e01d195faef7ecc4f6297e03865be83d0c50
[ "MIT" ]
null
null
null
clase 3.ipynb
orlandochr/RecommenderSystemClasses
5d73e01d195faef7ecc4f6297e03865be83d0c50
[ "MIT" ]
2
2020-09-05T17:35:49.000Z
2020-09-14T22:46:04.000Z
28,226
28,226
0.658329
true
5,988
Qwen/Qwen-72B
1. YES 2. YES
0.779993
0.718594
0.560499
__label__spa_Latn
0.642329
0.140556
# Working with outliers and missing data. Working with dataset: - https://archive.ics.uci.edu/ml/datasets/Vertebral+Column ```python import numpy as np import pandas as pd import matplotlib.pyplot as plt ``` An outlier is an observation that lies an abnormal distance from other values in a random sample from a population. In a sense, this definition leaves it up to the analyst (or a consensus process) to decide what will be considered abnormal. Before abnormal observations can be singled out, it is necessary to characterize normal observations. ```python names=["pelvic_incidence","pelvic_tilt","lumbar_lordosis_angle","sacral_slope","pelvic_radius","degree_spondylolisthesis","class"] data=pd.read_csv("../Datas/VertebralColumn/column_2C.dat",delimiter="\s+",names=names) ``` ```python data ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>pelvic_incidence</th> <th>pelvic_tilt</th> <th>lumbar_lordosis_angle</th> <th>sacral_slope</th> <th>pelvic_radius</th> <th>degree_spondylolisthesis</th> <th>class</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>63.03</td> <td>22.55</td> <td>39.61</td> <td>40.48</td> <td>98.67</td> <td>-0.25</td> <td>AB</td> </tr> <tr> <th>1</th> <td>39.06</td> <td>10.06</td> <td>25.02</td> <td>29.00</td> <td>114.41</td> <td>4.56</td> <td>AB</td> </tr> <tr> <th>2</th> <td>68.83</td> <td>22.22</td> <td>50.09</td> <td>46.61</td> <td>105.99</td> <td>-3.53</td> <td>AB</td> </tr> <tr> <th>3</th> <td>69.30</td> <td>24.65</td> <td>44.31</td> <td>44.64</td> <td>101.87</td> <td>11.21</td> <td>AB</td> </tr> <tr> <th>4</th> <td>49.71</td> <td>9.65</td> <td>28.32</td> <td>40.06</td> <td>108.17</td> <td>7.92</td> <td>AB</td> </tr> <tr> <th>...</th> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> </tr> <tr> <th>305</th> <td>47.90</td> <td>13.62</td> <td>36.00</td> <td>34.29</td> <td>117.45</td> <td>-4.25</td> <td>NO</td> </tr> <tr> <th>306</th> <td>53.94</td> <td>20.72</td> <td>29.22</td> <td>33.22</td> <td>114.37</td> <td>-0.42</td> <td>NO</td> </tr> <tr> <th>307</th> <td>61.45</td> <td>22.69</td> <td>46.17</td> <td>38.75</td> <td>125.67</td> <td>-2.71</td> <td>NO</td> </tr> <tr> <th>308</th> <td>45.25</td> <td>8.69</td> <td>41.58</td> <td>36.56</td> <td>118.55</td> <td>0.21</td> <td>NO</td> </tr> <tr> <th>309</th> <td>33.84</td> <td>5.07</td> <td>36.64</td> <td>28.77</td> <td>123.95</td> <td>-0.20</td> <td>NO</td> </tr> </tbody> </table> <p>310 rows × 7 columns</p> </div> ```python vertebral_stats=data.describe().T ``` ```python import scipy.stats as st # La mediana puede ser también calculada, así como el coeficiente de asimetría y curtosis respectivamente print(data.columns) print("median =",np.median(data.loc[:, data.columns != 'class'],axis=0)) print("skewness =",st.skew(data.loc[:, data.columns != 'class'],axis=0)) print("kurtosis =",st.kurtosis(data.loc[:, data.columns != 'class'],axis=0)) ``` Index(['pelvic_incidence', 'pelvic_tilt', 'lumbar_lordosis_angle', 'sacral_slope', 'pelvic_radius', 'degree_spondylolisthesis', 'class'], dtype='object') median = [ 58.69 16.36 49.565 42.405 118.265 11.765] skewness = [ 0.51786409 0.67329913 0.5964695 0.78883655 -0.17607403 4.29696644] kurtosis = [ 0.2006561 0.64597047 0.13977901 2.94050906 0.90035703 37.4374569 ] ```python vertebral_stats["median"]=np.median(data.loc[:, 
data.columns != 'class'],axis=0) vertebral_stats["skewness"]=st.skew(data.loc[:, data.columns != 'class'],axis=0) vertebral_stats["kurtosis"]=st.kurtosis(data.loc[:, data.columns != 'class'],axis=0) ``` ```python vertebral_stats ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>count</th> <th>mean</th> <th>std</th> <th>min</th> <th>25%</th> <th>50%</th> <th>75%</th> <th>max</th> <th>median</th> <th>skewness</th> <th>kurtosis</th> </tr> </thead> <tbody> <tr> <th>pelvic_incidence</th> <td>310.0</td> <td>60.496484</td> <td>17.236109</td> <td>26.15</td> <td>46.4325</td> <td>58.690</td> <td>72.8800</td> <td>129.83</td> <td>58.690</td> <td>0.517864</td> <td>0.200656</td> </tr> <tr> <th>pelvic_tilt</th> <td>310.0</td> <td>17.542903</td> <td>10.008140</td> <td>-6.55</td> <td>10.6675</td> <td>16.360</td> <td>22.1200</td> <td>49.43</td> <td>16.360</td> <td>0.673299</td> <td>0.645970</td> </tr> <tr> <th>lumbar_lordosis_angle</th> <td>310.0</td> <td>51.930710</td> <td>18.553766</td> <td>14.00</td> <td>37.0000</td> <td>49.565</td> <td>63.0000</td> <td>125.74</td> <td>49.565</td> <td>0.596469</td> <td>0.139779</td> </tr> <tr> <th>sacral_slope</th> <td>310.0</td> <td>42.953871</td> <td>13.422748</td> <td>13.37</td> <td>33.3475</td> <td>42.405</td> <td>52.6925</td> <td>121.43</td> <td>42.405</td> <td>0.788837</td> <td>2.940509</td> </tr> <tr> <th>pelvic_radius</th> <td>310.0</td> <td>117.920548</td> <td>13.317629</td> <td>70.08</td> <td>110.7100</td> <td>118.265</td> <td>125.4675</td> <td>163.07</td> <td>118.265</td> <td>-0.176074</td> <td>0.900357</td> </tr> <tr> <th>degree_spondylolisthesis</th> <td>310.0</td> <td>26.296742</td> <td>37.558883</td> <td>-11.06</td> <td>1.6000</td> <td>11.765</td> <td>41.2850</td> <td>418.54</td> <td>11.765</td> <td>4.296966</td> <td>37.437457</td> </tr> </tbody> </table> </div> ```python vertebral_stats["LII"]=vertebral_stats["25%"]-((vertebral_stats["75%"]-vertebral_stats["25%"])*1.5) vertebral_stats["LIS"]=vertebral_stats["75%"]+((vertebral_stats["75%"]-vertebral_stats["25%"])*1.5) vertebral_stats["LEI"]=vertebral_stats["25%"]-((vertebral_stats["75%"]-vertebral_stats["25%"])*3.0) vertebral_stats["LES"]=vertebral_stats["75%"]+((vertebral_stats["75%"]-vertebral_stats["25%"])*3.0) ``` ```python vertebral_stats ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>count</th> <th>mean</th> <th>std</th> <th>min</th> <th>25%</th> <th>50%</th> <th>75%</th> <th>max</th> <th>median</th> <th>skewness</th> <th>kurtosis</th> <th>LII</th> <th>LIS</th> <th>LEI</th> <th>LES</th> </tr> </thead> <tbody> <tr> <th>pelvic_incidence</th> <td>310.0</td> <td>60.496484</td> <td>17.236109</td> <td>26.15</td> <td>46.4325</td> <td>58.690</td> <td>72.8800</td> <td>129.83</td> <td>58.690</td> <td>0.517864</td> <td>0.200656</td> <td>6.76125</td> <td>112.55125</td> <td>-32.9100</td> <td>152.2225</td> </tr> <tr> <th>pelvic_tilt</th> <td>310.0</td> <td>17.542903</td> <td>10.008140</td> <td>-6.55</td> <td>10.6675</td> <td>16.360</td> <td>22.1200</td> <td>49.43</td> <td>16.360</td> <td>0.673299</td> <td>0.645970</td> <td>-6.51125</td> <td>39.29875</td> 
<td>-23.6900</td> <td>56.4775</td> </tr> <tr> <th>lumbar_lordosis_angle</th> <td>310.0</td> <td>51.930710</td> <td>18.553766</td> <td>14.00</td> <td>37.0000</td> <td>49.565</td> <td>63.0000</td> <td>125.74</td> <td>49.565</td> <td>0.596469</td> <td>0.139779</td> <td>-2.00000</td> <td>102.00000</td> <td>-41.0000</td> <td>141.0000</td> </tr> <tr> <th>sacral_slope</th> <td>310.0</td> <td>42.953871</td> <td>13.422748</td> <td>13.37</td> <td>33.3475</td> <td>42.405</td> <td>52.6925</td> <td>121.43</td> <td>42.405</td> <td>0.788837</td> <td>2.940509</td> <td>4.33000</td> <td>81.71000</td> <td>-24.6875</td> <td>110.7275</td> </tr> <tr> <th>pelvic_radius</th> <td>310.0</td> <td>117.920548</td> <td>13.317629</td> <td>70.08</td> <td>110.7100</td> <td>118.265</td> <td>125.4675</td> <td>163.07</td> <td>118.265</td> <td>-0.176074</td> <td>0.900357</td> <td>88.57375</td> <td>147.60375</td> <td>66.4375</td> <td>169.7400</td> </tr> <tr> <th>degree_spondylolisthesis</th> <td>310.0</td> <td>26.296742</td> <td>37.558883</td> <td>-11.06</td> <td>1.6000</td> <td>11.765</td> <td>41.2850</td> <td>418.54</td> <td>11.765</td> <td>4.296966</td> <td>37.437457</td> <td>-57.92750</td> <td>100.81250</td> <td>-117.4550</td> <td>160.3400</td> </tr> </tbody> </table> </div> ```python plt.figure(figsize=(16,6)) plt.subplot(121) plt.hist(data["pelvic_incidence"],label="pelvic_incidence",alpha=0.7,color='b') plt.hist(data["pelvic_tilt"],label="pelvic_tilt",alpha=0.7,color='r') plt.hist(data["pelvic_radius"],label="pelvic_radius",alpha=0.7,color='g') plt.legend(fontsize=14) plt.ylabel("Counts",fontsize=14) plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.subplot(122) plt.hist(data["degree_spondylolisthesis"],label="degree_spondylolisthesis",alpha=0.7,color='b') plt.hist(data["sacral_slope"],label="sacral_slope",alpha=0.7,color='r') plt.hist(data["lumbar_lordosis_angle"],label="lumbar_lordosis_angle",alpha=0.7,color='g') plt.legend(fontsize=14) plt.ylabel("Counts",fontsize=14) plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.savefig("../../figures/vertebral_hist1.png",bbox_inches ="tight") plt.show() ``` ```python plt.figure(figsize=(12,6)) data.boxplot(grid=True,fontsize=15) plt.legend(fontsize=16) plt.xticks(range(1,7),np.array(["pelvic \n incidence","pelvic \n tilt","lumbar \n lordosis \n angle","sacral \n slope","pelvic \n radius","degree \n spondylolisthesis"]),fontsize=16) plt.yticks(fontsize=16) plt.savefig("../../figures/vertebral_boxplot1.png",bbox_inches ="tight") plt.show() ``` Contabilizando los valores atípicos y extremos. 
```python vertebral_stats2=pd.DataFrame() vertebral_stats2.index=vertebral_stats.index atipicos,extremos=[],[] for i in vertebral_stats2.index: LII=vertebral_stats[vertebral_stats.index==i]["LII"][0] LIS=vertebral_stats[vertebral_stats.index==i]["LIS"][0] LEI=vertebral_stats[vertebral_stats.index==i]["LEI"][0] LES=vertebral_stats[vertebral_stats.index==i]["LES"][0] atipicos.append(len(data[((data[i]<=LII)&(data[i]>=LIS))|((data[i]>=LIS)&(data[i]<=LES))])) extremos.append(len(data[((data[i]<=LEI)|(data[i]>=LES))])) vertebral_stats2["Valores atipicos"]=atipicos vertebral_stats2["Valores extremos"]=extremos vertebral_stats2["Valores atipicos (%)"]=np.round(vertebral_stats2["Valores atipicos"]/len(data)*100,2) vertebral_stats2["Valores extremos (%)"]=np.round(vertebral_stats2["Valores extremos"]/len(data)*100,2) vertebral_stats2.loc['Total'] = vertebral_stats2.sum(axis=0) ``` ```python vertebral_stats2 ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>Valores atipicos</th> <th>Valores extremos</th> <th>Valores atipicos (%)</th> <th>Valores extremos (%)</th> </tr> </thead> <tbody> <tr> <th>pelvic_incidence</th> <td>3.0</td> <td>0.0</td> <td>0.97</td> <td>0.00</td> </tr> <tr> <th>pelvic_tilt</th> <td>12.0</td> <td>0.0</td> <td>3.87</td> <td>0.00</td> </tr> <tr> <th>lumbar_lordosis_angle</th> <td>1.0</td> <td>0.0</td> <td>0.32</td> <td>0.00</td> </tr> <tr> <th>sacral_slope</th> <td>0.0</td> <td>1.0</td> <td>0.00</td> <td>0.32</td> </tr> <tr> <th>pelvic_radius</th> <td>5.0</td> <td>0.0</td> <td>1.61</td> <td>0.00</td> </tr> <tr> <th>degree_spondylolisthesis</th> <td>9.0</td> <td>1.0</td> <td>2.90</td> <td>0.32</td> </tr> <tr> <th>Total</th> <td>30.0</td> <td>2.0</td> <td>9.67</td> <td>0.64</td> </tr> </tbody> </table> </div> Supongamos que emiminamos los valores atípicos de los datos, ¿cómo luciría la distribución de frecuencias de los datos? ```python def filter_outlier(col): LII=vertebral_stats[vertebral_stats.index==col]["LII"][0] LIS=vertebral_stats[vertebral_stats.index==col]["LIS"][0] LEI=vertebral_stats[vertebral_stats.index==col]["LEI"][0] LES=vertebral_stats[vertebral_stats.index==col]["LES"][0] return data[~(((data[col]<=LII)&(data[col]>=LIS))|((data[col]>=LIS)&(data[col]<=LES)))][col] ``` ```python filter_outlier("degree_spondylolisthesis") ``` 0 -0.25 1 4.56 2 -3.53 3 11.21 4 7.92 ... 
305 -4.25 306 -0.42 307 -2.71 308 0.21 309 -0.20 Name: degree_spondylolisthesis, Length: 301, dtype: float64 ```python plt.figure(figsize=(16,6)) plt.subplot(121) plt.hist(filter_outlier("pelvic_tilt"),label="pelvic_incidence (Outliers removed) ({} rows)".format(len(filter_outlier("pelvic_tilt"))),alpha=0.7,color='b') plt.hist(data["pelvic_tilt"],label="pelvic_incidence (All data) ({} rows)".format(len(data)),alpha=0.7,color='r') plt.legend(title="Absolute distribution",fontsize=14) plt.ylabel("Counts",fontsize=14) plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.subplot(122) plt.hist(filter_outlier("pelvic_tilt"),label="pelvic_incidence (Outliers removed) ({} rows)".format(len(filter_outlier("pelvic_tilt"))),alpha=0.7,cumulative=True,color='b') plt.hist(data["pelvic_tilt"],label="pelvic_incidence (All data) ({} rows)".format(len(data)),alpha=0.7,cumulative=True,color='r') plt.legend(title="Cumulative distribution",fontsize=14) plt.ylabel("Counts",fontsize=14) plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.savefig("../../figures/vertebral_hist2.png",bbox_inches ="tight") plt.show() ``` ## Criterio de Chauvenet Una manera de identificar datos extremos y con la idea de eliminarlos, es usar el criterio de Chauvenet, el cual se basa en la idea de encontrar una banda de probabilidad, centrada en la media de una distribución normal, que debería contener razonablemente todas las $n$ muestras de un conjunto de datos. De este modo, los puntos de datos de las $n$ muestras que se encuentran fuera de esta banda de probabilidad pueden considerarse valores atípicos, eliminarse del conjunto de datos y calcularse una nueva media y desviación típica basadas en los valores restantes y el nuevo tamaño de la muestra. Esta identificación de los valores atípicos se logra encontrando el número de desviaciones estándar que corresponden a los límites de la banda de probabilidad alrededor de la media ($D_{max}$) y comparando ese valor con el valor absoluto de la diferencia entre los presuntos valores atípicos y la media dividido por la desviación estándar \begin{equation} D_{max} \leq \frac{|x-\bar x|}{s_x}, \label{chauvenet1} \end{equation} donde $D_{max}$ es la desviación máxima permitida, $|\cdot |$ es el valor absoluto, $x$ es el valor del presunto valor atípico, $\bar x$ es la media de la muestra, y $s_x$ es la desviación estándar de la muestra. Para que se considere que se incluyen todas las $n$ observaciones de la muestra, la banda de probabilidad (centrada en la media) sólo debe tener en cuenta $n-1/2$ muestras (si $n=3$ entonces sólo 2.5 de las muestras deben tenerse en cuenta en la banda de probabilidad). En resumen, estamos buscando la probabilidad, $P$, que es igual a $n-1/2$ de $n$ muestras \begin{equation} P=\frac{n-1/2}{n}=1-\frac{1}{2n}, \end{equation} donde, $P$ es la banda de probabilidad centrada en la media de la muestra y $n$ es el tamaño de la muestra. La cantidad $1/(2n)$ corresponde a la probabilidad combinada representada por las dos colas de la distribución normal que caen fuera de la banda de probabilidad $P$. Para encontrar el nivel de desviación estándar asociado a $P$, sólo es necesario analizar la probabilidad de una de las colas de la distribución normal debido a su simetría $P_z=1(4n)$, donde $P_z$ es la probabilidad representada por una cola de la distribución normal y $n$ es el tamaño de la muestra. 
La ecuación \ref{chauvenet1} es análoga a la ecuación de puntuación $Z$ para una distribución normal \begin{equation} Z=\frac{x-\mu}{\sigma}, \label{normal1} \end{equation} donde, $Z$ es el valor $Z$ de la distribución normal, $x$ es el valor de la muestra, $\mu$ es la media de la distribución normal estándar, y $\sigma=1$ es la desviación estándar de la distribución normal estándar. Basado en la Ecuación \ref{normal1}, para encontrar el $D_{max}$ debemos encontrar la puntuación $Z$ correspondiente a $P_z$ en una tabla de puntuación de una distribución normal. $D_{max}$ es igual a la puntuación de $P_z$. Usando este método $D_{max}$ puede determinarse para cualquier tamaño de muestra. Para aplicar el criterio de Chauvenet, primero hay que calcular la media y la desviación estándar de los datos observados. En función de la diferencia entre el dato sospechoso y la media, utilice la función de distribución normal (o una tabla de la misma) para determinar la probabilidad de que un punto de datos dado se encuentre en el valor del punto de datos sospechoso. Multiplique esta probabilidad por el número de puntos de datos tomados. Si el resultado es inferior a 0.5, el punto de datos sospechoso puede ser descartado, es decir, una lectura puede ser rechazada si la probabilidad de obtener la desviación particular de la media es inferior a $\frac{1}{2n}$. Veamos esto de manera sencilla para el conjunto de datos 'Vertebral Column' y además sólo para la variable 'pelvic_tilt' en la que recordemos habíamos conseguido 12 valores atípicos. ```python print("La media es =",data["pelvic_tilt"].mean()) print("La desviación estándar es =",data["pelvic_tilt"].std()) # El z-score para todos los datos es: Z=abs(data["pelvic_tilt"]-data["pelvic_tilt"].mean())/data["pelvic_tilt"].std() ``` La media es = 17.542903225806448 La desviación estándar es = 10.008140050586714 ```python Z ``` 0 0.500302 1 0.747682 2 0.467329 3 0.710132 4 0.788648 ... 305 0.391971 306 0.317451 307 0.514291 308 0.884570 309 1.246276 Name: pelvic_tilt, Length: 310, dtype: float64 ```python import scipy.stats as st n=len(data["pelvic_tilt"]) PZ=1-(1/(4*n)) Z_score=st.norm.ppf(PZ) print("Z_score es = {} para n={} observaciones con una probabilidad de P_Z=1-(1((4*n))) = {}".format(Z_score,len(data["pelvic_tilt"]),PZ)) ``` Z_score es = 3.1535631591215094 para n=310 observaciones con una probabilidad de P_Z=1-(1((4*n))) = 0.9991935483870967 Luego veamos cuales Z están por encima del valor Z-score de la distribución normal. 
```python
Z[Z>=Z_score]
```

179    3.186116
Name: pelvic_tilt, dtype: float64

```python
1-(1/(4*30))
```

0.9916666666666667

```python
plt.figure(figsize=(16,6))
plt.subplot(121)
#plt.hist(filter_outlier("pelvic_tilt"),label="pelvic_incidence (Outliers removed) ({} rows)".format(len(filter_outlier("pelvic_tilt"))),alpha=0.7,color='b')
plt.scatter(data["pelvic_tilt"],Z,label="data",alpha=0.99,color='b')
plt.scatter(data["pelvic_tilt"].iloc[179],Z.iloc[179],label="outlier (Chauvenet's criterion)",alpha=0.5,color='r',s=150,edgecolors='face',facecolor=None)
plt.text(data["pelvic_tilt"].iloc[179]-7,Z.iloc[179],str(data["pelvic_tilt"].iloc[179]),fontsize=14)
plt.legend(fontsize=14)
plt.xlabel("pelvic_tilt",fontsize=14)
plt.ylabel("Z_score",fontsize=14)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.subplot(122)
hist=plt.hist(data["pelvic_tilt"],alpha=0.7,color='b',density=True)
plt.vlines(data["pelvic_tilt"].iloc[179],0,hist[0].max()/2,alpha=0.99,color='r')
plt.text(data["pelvic_tilt"].iloc[179]-7,hist[0].max()/2,str(data["pelvic_tilt"].iloc[179]),fontsize=14)
#plt.scatter(data["pelvic_tilt"].iloc[179],Z.iloc[179],label="outlier (Chauvenet's criterion)",alpha=0.5,color='r',s=150,edgecolors='face',facecolor=None)
#plt.text(data["pelvic_tilt"].iloc[179]-7,Z.iloc[179],str(data["pelvic_tilt"].iloc[179]),fontsize=14)
plt.legend(fontsize=14)
plt.xlabel("pelvic_tilt",fontsize=14)
plt.ylabel("Density counts",fontsize=14)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.savefig("../../figures/vertebral_hist3.png",bbox_inches ="tight")
plt.show()
```

## Grubbs' test

Grubbs' test (named after Frank E. Grubbs, who published the test in 1950), also known as the maximum normalized residual test or the extreme studentized deviate test, is used to detect outliers in a univariate data set that is assumed to come from a normally distributed population.

Grubbs' test relies on the assumption of normality. That is, before applying it one should verify that the data can be reasonably approximated by a normal distribution. The test detects one outlier at a time: the flagged outlier is removed from the data set and the test is repeated until no further outliers are detected. However, multiple iterations change the detection probabilities, and the test should not be used for sample sizes of six or fewer, since it then tends to flag most of the points as outliers.

The test is very similar to Chauvenet's criterion from the previous section, but it uses a different reference distribution and a slightly different test statistic. The Grubbs test statistic is defined as
\begin{equation}
G_{max}=\frac{\max_{i=1,\ldots,N}|x_i-\bar x|}{s_x},
\end{equation}
where $G_{max}$ is the maximum normalized deviation, $|\cdot|$ is the absolute value, $x_i$ are the sample values, $\bar x$ is the sample mean, and $s_x$ is the sample standard deviation. A two-sided hypothesis test is then set up, for which the hypothesis of no outliers is rejected at significance level $\alpha$ if
\begin{equation}
G_{max}>\frac{N-1}{\sqrt{N}}\sqrt{\frac{t^2_{\alpha/(2N),N-2}}{N-2+t^2_{\alpha/(2N),N-2}}},
\end{equation}
with $t_{\alpha/(2N),N-2}$ denoting the upper critical value of the $t$ distribution with $N-2$ degrees of freedom and a significance level of $\alpha/(2N)$.
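The rejection threshold on the right-hand side can be evaluated directly with scipy. This is a minimal sketch added for illustration (not code from the original notebook; the helper name `grubbs_critical_value` is an assumption):

```python
# Minimal illustrative sketch of the two-sided Grubbs critical value (assumed helper, not from the original notebook).
import numpy as np
import scipy.stats as st

def grubbs_critical_value(n, alpha=0.05):
    """Critical value of the two-sided Grubbs test for a sample of size n."""
    # upper critical value of the t distribution with n-2 dof at significance alpha/(2n)
    t = st.t.ppf(1.0 - alpha / (2.0 * n), n - 2)
    return (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))

# Example usage: flag an outlier if the observed G_max exceeds this value.
# grubbs_critical_value(310, alpha=0.05)
```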
Let us look at this in a simple way for the 'Vertebral Column' data set, again only for the variable 'pelvic_tilt', in which, as we recall, we had found 12 outliers. In this case the mean and the standard deviation are 17.5429 and 10.0081, respectively, so we can compute the normalized deviation for each of the 310 data points in the sample and take its maximum, $G_{max}$. We can also obtain the critical value of Student's t distribution for a significance level of $\alpha=0.05$ and 310-2=308 degrees of freedom.

```python
alpha = 0.05
cv = st.t.ppf(1-alpha/2, 6)
print("t_test is = {} for n={} observations at a significance level of {}".format(cv,len(data["pelvic_tilt"]),alpha))
```

t_test is = 2.4469118487916806 for n=310 observations at a significance level of 0.05

```python
print("The mean is =",data["pelvic_tilt"].mean())
print("The standard deviation is =",data["pelvic_tilt"].std())
# The maximum normalized deviation (Grubbs statistic) over all the data is:
G=np.max(abs(data["pelvic_tilt"]-data["pelvic_tilt"].mean())/data["pelvic_tilt"].std())
```

The mean is = 17.542903225806448
The standard deviation is = 10.008140050586714

```python
Gi=abs(data["pelvic_tilt"]-data["pelvic_tilt"].mean())/data["pelvic_tilt"].std()
Gi[Gi==Gi.max()]
```

179    3.186116
Name: pelvic_tilt, dtype: float64

```python
print("The test statistic is",Gi.max(), "i.e. the data point in row", Gi[Gi==Gi.max()])
```

The test statistic is 3.186116162745366 i.e. the data point in row 179    3.186116
Name: pelvic_tilt, dtype: float64

```python
if G>cv:
    print("The value is an outlier")
if G<=cv:
    print("The value is NOT an outlier")
```

The value is an outlier

```python

```
ef129ac91098edc43765e35d6d68e751cc8010bf
176,465
ipynb
Jupyter Notebook
Notebooks/working_with_outliers.ipynb
sierraporta/Data_Mining_Excersices
3790466d1d8314d83178b61035fc6c28b567ab59
[ "MIT" ]
null
null
null
Notebooks/working_with_outliers.ipynb
sierraporta/Data_Mining_Excersices
3790466d1d8314d83178b61035fc6c28b567ab59
[ "MIT" ]
null
null
null
Notebooks/working_with_outliers.ipynb
sierraporta/Data_Mining_Excersices
3790466d1d8314d83178b61035fc6c28b567ab59
[ "MIT" ]
null
null
null
133.888467
42,204
0.829366
true
9,120
Qwen/Qwen-72B
1. YES 2. YES
0.718594
0.785309
0.564318
__label__spa_Latn
0.532458
0.14943
###### Content under Creative Commons Attribution license CC-BY 4.0, code under MIT license (c) 2019 Daniel Koehn, based on (c)2018 L.A. Barba, G.F. Forsyth [CFD Python](https://github.com/barbagroup/CFDPython#cfd-python), (c)2014 L.A. Barba, I. Hawke, B. Knaepen [Practical Numerical Methods with Python](https://github.com/numerical-mooc/numerical-mooc#practical-numerical-methods-with-python), also under CC-BY.

```python
from IPython.core.display import HTML
css_file = '../style/custom.css'
HTML(open(css_file, 'r').read())
```

# 1D Diffusion

We introduced finite-difference methods for partial differential equations (PDEs) in the [second module](https://github.com/daniel-koehn/Differential-equations-earth-system/tree/master/02_finite_difference_intro#numerical-solution-of-differential-equations-introduction-to-the-finite-difference-method), and looked at
advection problems in more depth in [module 4](https://github.com/daniel-koehn/Differential-equations-earth-system/tree/master/04_Advection_1D#differential-equations-in-earth-sciences-1d-nonlinear-advection). Now we'll look at solving problems dominated by diffusion. Why do we separate the discussion of how to solve advection-dominated and diffusion-dominated problems, you might ask? It's all about the harmony between mathematical model and numerical method. Advection and diffusion are inherently different physical processes. * _Advection_—imagine a surfer on a tall wave, moving fast towards the beach ... advection implies transport, speed, direction. The physics has a directional bias, and we discovered that numerical methods should be compatible with that. That's why we use _upwind_ methods for advection, and we pay attention to problems where waves move in opposite directions, needing special schemes like the _Marker-in-Cell_ approach * _Diffusion_—now imagine a drop of food dye in a cup of water, slowly spreading in all directions until all the liquid takes a uniform color. [Diffusion](http://en.wikipedia.org/wiki/Diffusion) spreads the concentration of something around (atoms, people, ideas, dirt, anything!). Since it is not a directional process, we need numerical methods that are isotropic (like central differences). ```python from IPython.display import Image Image(url='http://upload.wikimedia.org/wikipedia/commons/f/f9/Blausen_0315_Diffusion.png') ``` In the previous Jupyter notebooks of this series, we studied the numerical solution of the linear and non-linear advection equations using the finite-difference method, and learned about the CFL condition. Now, we will look at the one-dimensional diffusion equation: $$ \begin{equation} \frac{\partial u}{\partial t}= \nu \frac{\partial^2 u}{\partial x^2} \tag{1} \end{equation} $$ where $\nu$ is a constant known as the *diffusion coefficient*. The first thing you should notice is that this equation has a second-order derivative. We first need to learn what to do with it! ### Discretizing 2nd-order derivatives The second-order derivative can be represented geometrically as the line tangent to the curve given by the first derivative. We will discretize the second-order derivative with a Central Difference scheme: a combination of forward difference and backward difference of the first derivative. Consider the Taylor expansion of $u_{i+1}$ and $u_{i-1}$ around $u_i$: $$ u_{i+1} = u_i + \Delta x \frac{\partial u}{\partial x}\big|_i + \frac{\Delta x^2}{2!} \frac{\partial ^2 u}{\partial x^2}\big|_i + \frac{\Delta x^3}{3!} \frac{\partial ^3 u}{\partial x^3}\big|_i + {\mathcal O}(\Delta x^4) $$ $$ u_{i-1} = u_i - \Delta x \frac{\partial u}{\partial x}\big|_i + \frac{\Delta x^2}{2!} \frac{\partial ^2 u}{\partial x^2}\big|_i - \frac{\Delta x^3}{3!} \frac{\partial ^3 u}{\partial x^3}\big|_i + {\mathcal O}(\Delta x^4) $$ If we add these two expansions, the odd-numbered derivatives will cancel out. Neglecting any terms of ${\mathcal O}(\Delta x^4)$ or higher (and really, those are very small), we can rearrange the sum of these two expansions to solve for the second-derivative. $$ u_{i+1} + u_{i-1} = 2u_i+\Delta x^2 \frac{\partial ^2 u}{\partial x^2}\big|_i + {\mathcal O}(\Delta x^4) $$ And finally: $$ \begin{equation} \frac{\partial ^2 u}{\partial x^2}=\frac{u_{i+1}-2u_{i}+u_{i-1}}{\Delta x^2} + {\mathcal O}(\Delta x^2)\notag \end{equation} $$ The central difference approximation of the 2nd-order derivative is 2nd-order accurate. 
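Before moving on, here is a quick numerical sanity check of that claim. This is a small illustrative sketch added here (not part of the original notebook): it applies the stencil to $u=\sin(x)$, whose exact second derivative is $-\sin(x)$, and the error drops by roughly a factor of four every time $\Delta x$ is halved.

```python
# Illustrative sketch (not from the original notebook): check the order of accuracy
# of the central-difference approximation of the 2nd derivative.
import numpy

def stencil_error(dx, x=1.0):
    """Error of (u[i+1] - 2 u[i] + u[i-1]) / dx^2 for u = sin(x), where u'' = -sin(x)."""
    approx = (numpy.sin(x + dx) - 2.0 * numpy.sin(x) + numpy.sin(x - dx)) / dx**2
    return abs(approx - (-numpy.sin(x)))

for dx in [0.1, 0.05, 0.025]:
    print(dx, stencil_error(dx))  # error shrinks ~4x per halving of dx
```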
### Back to diffusion We can now write the discretized version of the diffusion equation in 1D: $$ \begin{equation} \frac{u_{i}^{n+1}-u_{i}^{n}}{\Delta t}=\nu\frac{u_{i+1}^{n}-2u_{i}^{n}+u_{i-1}^{n}}{\Delta x^2} \notag \end{equation} $$ As before, we notice that once we have an initial condition, the only unknown is $u_{i}^{n+1}$, so we re-arrange the equation to isolate this term: $$ \begin{equation} u_{i}^{n+1}=u_{i}^{n}+\frac{\nu\Delta t}{\Delta x^2}(u_{i+1}^{n}-2u_{i}^{n}+u_{i-1}^{n}) \notag \end{equation} $$ This discrete equation allows us to write a program that advances a solution in time—but we need an initial condition. Let's continue using our favorite: the hat function. So, at $t=0$, $u=2$ in the interval $0.5\le x\le 1$ and $u=1$ everywhere else. ### Stability of the diffusion equation The diffusion equation is not free of stability constraints. Just like the linear and non-linear advection equations, there are a set of discretization parameters $\Delta x$ and $\Delta t$ that will make the numerical solution blow up. For the diffusion equation and the discretization used here, the stability condition for diffusion is $$ \begin{equation} \nu \frac{\Delta t}{\Delta x^2} \leq \frac{1}{2} \notag \end{equation} $$ ### And solve! We are ready for some number-crunching! The next two code cells initialize the problem by loading the needed libraries, then defining the solution parameters and initial condition. This time, we don't let the user choose just *any* $\Delta t$, though; we have decided this is not safe: people just like to blow things up. Instead, the code calculates a value of $\Delta t$ that will be in the stable range, according to the spatial discretization chosen! You can now experiment with different solution parameters to see how the numerical solution changes, but it won't blow up. ```python import numpy from matplotlib import pyplot %matplotlib inline ``` ```python # Set the font family and size to use for Matplotlib figures. pyplot.rcParams['font.family'] = 'serif' pyplot.rcParams['font.size'] = 16 ``` ```python # Set parameters. nx = 41 # number spatial grid points L = 2.0 # length of the domain dx = L / (nx - 1) # spatial grid size nu = 0.3 # viscosity sigma = 0.2 # CFL limit dt = sigma * dx**2 / nu # time-step size nt = 20 # number of time steps to compute # Get the grid point coordinates. x = numpy.linspace(0.0, L, num=nx) # Set the initial conditions. u0 = numpy.ones(nx) mask = numpy.where(numpy.logical_and(x >= 0.5, x <= 1.0)) u0[mask] = 2.0 ``` ```python # Integrate in time. u = u0.copy() # loop over time steps for n in range(nt): un = u.copy() # store old field u # loop over spatial grid points for i in range(1,nx-1): u[i] = un[i] + nu * dt / dx**2 * (un[i+1] - 2 * un[i] + un[i-1]) ``` ```python # Plot the solution after nt time steps # along with the initial conditions. pyplot.figure(figsize=(6.0, 4.0)) pyplot.xlabel('x') pyplot.ylabel('u') pyplot.grid() pyplot.plot(x, u0, label='Initial', color='C0', linestyle='--', linewidth=2) pyplot.plot(x, u, label='nt = {}'.format(nt), color='C1', linestyle='-', linewidth=2) pyplot.legend(loc='upper right') pyplot.xlim(0.0, L) pyplot.ylim(0.5, 2.5); ``` ## Animations Looking at before-and-after plots of the wave in motion is helpful, but it's even better if we can see it changing! We are going to create an animation. This takes a few steps, but it's actually not hard to do! 
First, we define a function, called `diffusion`, that computes and plots the numerical solution of the 1D diffusion equation over the time steps: ```python def diffusion(u0, sigma=0.5, nt=20): """ Computes the numerical solution of the 1D diffusion equation over the time steps. Parameters ---------- u0 : numpy.ndarray The initial conditions as a 1D array of floats. sigma : float, optional The value of nu * dt / dx^2; default: 0.5. nt : integer, optional The number of time steps to compute; default: 20. """ # copy initial condition u0 -> u u = u0.copy() # plot initial condition fig = pyplot.figure(figsize=(9.0, 6.0)) pyplot.xlabel('x') pyplot.ylabel('u') pyplot.grid() # initial u0 init = pyplot.plot(x, u0, color='C0', linestyle='-', linewidth=3, label='Initial u0') # initialize finite-difference solution u # Note: comma is needed to update the variable line, = pyplot.plot(x, u, color='C1', linestyle='-', linewidth=3, label='FD solution u') pyplot.xlim(0.0, L) pyplot.ylim(0.5, 2.5) pyplot.legend(loc='upper right') fig.tight_layout() # activate interactive plot (will not work in JupyterLab) pyplot.ion() pyplot.show(block=False) # finite difference solution of the 1D diffusion eq. # loop over timesteps for n in range(nt): un = u.copy() # store old field u # loop over spatial grid for i in range(1,nx-1): u[i] = un[i] + nu * dt / dx**2 * (un[i+1] - 2 * un[i] + un[i-1]) # update field u line.set_ydata(u) # update figure fig.canvas.draw() ``` We now call the function `diffusion` to compute and animate the history of the solution: ```python %matplotlib notebook # Compute the history of the numerical solution. diffusion(u0, sigma=sigma, nt=500) ``` <IPython.core.display.Javascript object> <IPython.core.display.Javascript object> ## What we learned: - How to solve the 1D diffusion equation using the FTCS finite difference method - Animating the time evolution of the 1D diffusion equation solution
8180e3be30b2f6ad716883d804ac8fa59f89d65c
114,373
ipynb
Jupyter Notebook
05_Diffusion_1D/01_Diffusion_1D.ipynb
daniel-koehn/Differential-equations-earth-system
3916cbc968da43d0971b7139476350c1dd798746
[ "MIT" ]
30
2019-10-16T19:07:36.000Z
2022-02-10T03:48:44.000Z
05_Diffusion_1D/01_Diffusion_1D.ipynb
daniel-koehn/Differential-equations-earth-system
3916cbc968da43d0971b7139476350c1dd798746
[ "MIT" ]
null
null
null
05_Diffusion_1D/01_Diffusion_1D.ipynb
daniel-koehn/Differential-equations-earth-system
3916cbc968da43d0971b7139476350c1dd798746
[ "MIT" ]
9
2020-11-19T08:21:55.000Z
2021-08-10T09:33:37.000Z
52.682174
19,228
0.584902
true
3,730
Qwen/Qwen-72B
1. YES 2. YES
0.675765
0.822189
0.555606
__label__eng_Latn
0.937139
0.129189
# Hydrogen storage tank A hydrogen-powered vehicle is storing fuel using three onboard cylindrical tanks, each with height 65 cm and diameter 28 cm. To refuel, the tanks are connected to a reservoir of hydrogen, supplied at 300 bar and ambient temperature; a fill valve connects the reservoir to the tanks. Initially, the storage tanks have hydrogen at 60 bar and ambient temperature. The ambient temperature is 25°C. The mass flow rate through the valve is given by \begin{equation} \dot{m} = C_{\text{valve}} \sqrt{ P_{\text{supply}} - P } \;, \end{equation} where $C_{\text{valve}} = 2.68 \cdot 10^{-6} \, \frac{\text{kg}}{\text{s Pa}^{0.5}}$ is the valve coefficient. The heat transfer rate from the tank walls to the hydrogen is given by \begin{equation} \dot{Q} = h_{\text{conv}} A_s \left( T_{\text{wall}} - T \right) \;, \end{equation} where $h_{\text{conv}} = 40 \frac{\text{W}}{\text{K m}^2}$ is the convection heat transfer coefficient. The tanks are connected for filling for three minutes. **Problem:** Determine the pressure and temperature of the hydrogen in the storage tanks as a function of time. Determine the mass of fuel added as a function of time. ```python import numpy as np import cantera as ct from scipy.integrate import solve_ivp from pint import UnitRegistry ureg = UnitRegistry() Q_ = ureg.Quantity import matplotlib.pyplot as plt %matplotlib inline # these are mostly for making the saved figures nicer import matplotlib_inline.backend_inline matplotlib_inline.backend_inline.set_matplotlib_formats('pdf', 'png') plt.rcParams['figure.dpi']= 150 plt.rcParams['savefig.dpi'] = 150 ``` First, let's specify the thermodynamic states of the supply and hydrogen in the tank (initially): ```python temp_ambient = Q_(25, 'degC') # supply of hydrogen has constant conditions supply = ct.Hydrogen() supply.TP = ( temp_ambient.to('K').magnitude, Q_(300, 'bar').to('Pa').magnitude ) # tank properties tank = {'number': 3, 'height': Q_(65, 'cm'), 'diameter': Q_(28, 'cm') } volume_tank = np.pi * tank['height'] * tank['diameter']**2 / 4.0 # initial state in the tanks, fixed by temperature and pressure temp_initial = temp_ambient pres_initial = Q_(60, 'bar') hydrogen = ct.Hydrogen() hydrogen.TP = temp_initial.to('K').magnitude, pres_initial.to('Pa').magnitude ``` To figure out how the system changes with time, we can do a mass balance and energy balance for the control volume. The mass balance gives \begin{equation} \frac{dm}{dt} = \dot{m} = C_{\text{valve}} \sqrt{ P_{\text{supply}} - P(t) } \end{equation} and the energy balance gives \begin{equation} \dot{m} h_{\text{supply}} + \dot{Q} = \frac{dU}{dt} \;, \end{equation} which provide the governing equations for how this system evolves. We can use mass ($m$) and total internal energy ($U$) as our state variables, and construct an ODE system for these variables. We need to be able to convert between internal system properties and these: $$ m = \frac{ N_{\text{tank}} V_{\text{tank}} }{v} \\ U = u \, m \;. 
$$ We need to define a function that evaluates the time derivative system: ```python def tank_rates(t, y, supply, tank, temp_ambient): '''Evaluates time derivatives for mass and internal energy (dm/dt and dU/dt) ''' mass = Q_(y[0], 'kg') internal_energy = Q_(y[1], 'J') surface_area_tank = tank['number'] * ( 2 * np.pi*tank['diameter']**2 / 4.0 + np.pi*tank['diameter']*tank['height'] ) volume_tank = np.pi * tank['height'] * tank['diameter']**2 / 4.0 # specify state based on mass and internal_energy specific_volume = (tank['number'] * volume_tank) / mass specific_internal_energy = internal_energy / mass f = ct.Hydrogen() f.UV = ( specific_internal_energy.to('J/kg').magnitude, specific_volume.to('m^3/kg').magnitude ) # evaluate dm/dt C_valve = Q_(2.68e-6, 'kg/(s*Pa**0.5)') mdot = C_valve * np.sqrt(Q_(supply.P, 'Pa') - Q_(f.P, 'Pa')) dmdt = mdot # evaluate dU/dt h_conv = Q_(40, 'W/(m**2 K)') Qdot = h_conv * surface_area_tank * (temp_ambient - Q_(f.T, 'K')) dUdt = (mdot * Q_(supply.h, 'J/kg')) + Qdot return [dmdt.to('kg/s').magnitude, dUdt.to('J/s').magnitude] ``` ```{margin} Choice of method Here we use the `'BDF'` for the `method` argument, which tells the `solve_ivp()` function to use an implicit integration method based on backward differentiation formula. Other methods [are available](https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_ivp.html) and may be more efficient for some problems, but this is fairly accurate and can handle stiffness in the ODE system. ``` Finally, we can calcluate the initial mass and internal energy, then integrate over the filling time: ```python # initial system mass and (total) internal energy mass_initial = (tank['number'] * volume_tank) / Q_(hydrogen.v, 'm^3/kg') int_energy_initial = mass_initial * Q_(hydrogen.u, 'J/kg') # Integrate over 3 minutes, # using backward differentiation formula (BDF) method sol = solve_ivp( tank_rates, [0, 60*3], [mass_initial.to('kg').magnitude, int_energy_initial.to('J').magnitude], args=(supply, tank, temp_ambient,), method='BDF' ) ``` The integration was successful (it did not report an error), so now plot temperature and pressure of the tank as a function of time. We have the system mass and internal energy as a function of time, and we'll need to use those to specify the system at each state to obtain its properties as a function of time. 
```python pressures = np.zeros(len(sol.y[0])) temperatures = np.zeros(len(sol.y[0])) for idx in range(len(sol.y[0])): mass = Q_(sol.y[0][idx], 'kg') internal_energy = Q_(sol.y[1][idx], 'J') volume_tank = np.pi * tank['height'] * tank['diameter']**2 / 4.0 # calculate specific volume and specific internal energy # based on mass and (total) internal energys specific_volume = (tank['number'] * volume_tank) / mass specific_internal_energy = internal_energy / mass f = ct.Hydrogen() f.UV = ( specific_internal_energy.to('J/kg').magnitude, specific_volume.to('m^3/kg').magnitude ) pressures[idx] = f.P temperatures[idx] = f.T pressures *= ureg.Pa temperatures *= ureg.K fig, ax1 = plt.subplots(figsize=(5, 3)) color = 'red' ax1.set_xlabel('Time (s)') ax1.set_ylabel('Pressure (bar)', color=color) ax1.plot(sol.t, pressures.to('bar').magnitude, color=color) ax1.tick_params(axis='y', labelcolor=color) ax1.axis([0, 180, 50, 300]) ax1.grid(True) ax2 = ax1.twinx() # instantiate a second axes that shares the same x-axis color = 'blue' # we already handled the x-label with ax1 ax2.set_ylabel('Temperature (°C)', color=color) ax2.plot(sol.t, temperatures.to('degC').magnitude, color=color) ax2.tick_params(axis='y', labelcolor=color) ax2.axis([0, 180, 20, 120]) ax2.grid(True) fig.tight_layout() # otherwise the right y-label is slightly clipped plt.show() ``` Now plot the mass of fuel added as a function of time: ```python mass_added = sol.y[0] - mass_initial.to('kg').magnitude fig, ax = plt.subplots(figsize=(5, 3)) ax.plot(sol.t, mass_added) plt.xlabel('Time (s)') plt.axis([0, 180, 0, 1.6]) plt.ylabel('Fuel added (kg)') plt.grid(True) fig.tight_layout() plt.show() ```
5a7e95bacdef19a227518dd83bb55584395144ee
117,048
ipynb
Jupyter Notebook
book/content/first-law/hydrogen-storage-tanks.ipynb
kyleniemeyer/computational-thermo
3f0d1d4a6d4247ac3bf3b74867411f2090c70cbd
[ "CC-BY-4.0", "BSD-3-Clause" ]
13
2020-04-01T05:52:06.000Z
2022-03-27T20:25:59.000Z
book/content/first-law/hydrogen-storage-tanks.ipynb
kyleniemeyer/computational-thermo
3f0d1d4a6d4247ac3bf3b74867411f2090c70cbd
[ "CC-BY-4.0", "BSD-3-Clause" ]
1
2020-04-28T04:02:05.000Z
2020-04-29T17:49:52.000Z
book/content/first-law/hydrogen-storage-tanks.ipynb
kyleniemeyer/computational-thermo
3f0d1d4a6d4247ac3bf3b74867411f2090c70cbd
[ "CC-BY-4.0", "BSD-3-Clause" ]
6
2020-04-03T14:52:24.000Z
2022-03-29T02:29:43.000Z
351.495495
41,768
0.931216
true
2,099
Qwen/Qwen-72B
1. YES 2. YES
0.90053
0.779993
0.702407
__label__eng_Latn
0.929611
0.470258
## Solving *steady* state series equations \begin{equation} \Psi _{\eta \eta \tau }^{(n)}=\frac{1}{R_s}\Psi_{\eta \eta \eta \eta }^{(n)}+ \sum\limits_{i = 0}^n{(2i+1)\left[ \Psi ^{(i)} \Psi _{\eta \eta \eta }^{(n-i)} -\Psi_{\eta}^{(n-i)} \Psi _{\eta \eta}^{(i)}\right] } \\ \quad \Psi^{(n)}(\eta=\mp 1) = 0, \qquad \Psi^{(n)}_\eta(\eta=\mp 1) = \frac{(-1)^{n+1}}{(2n+1)!}, \qquad -1 \le \eta \le1 \end{equation} The steady state equation above will be solved using the two point boundary value solver of Julia and shooting methods. ```julia using DifferentialEquations using Plots using Sundials using BenchmarkTools using Calculus using DelimitedFiles using Interpolations using LaTeXStrings using FFTW ``` ## Solving the leading order equation $\Psi(\eta)^{(0)}$ steady state solution ### Low Reynolds number $R<17$ For small values of the Reynolds number ($R<17)$, the first order approximation $\Psi^{(0)}$ can be calculated fairly quickly using standard boundary value solvers (memory allocation is around 15GB at R=50 using MIRK4 two point boundary value solver). With a good initial guess value and correct values for ```abstol``` and ```reltol``` the Shooting method also works very well with all solvers e.g. Vern7() ### Intermediate Reynolds numbers $17<R<65$ Despite the equations being unstable to computation for $R>65$, they are still solvable for $17<R<65$ and the only reliable way to solve for $\Psi^{(0)}$ is to use the ```TwoPointBVProblem``` or Shooting with GeneralMirk4/Rosenbrock23. The latter solver requirement is due to the equations becoming stiff. ```julia sol_psi = Array{ODESolution,1}(undef, 20) #Declaring an array for every \psi(\eta)^{(n)} solution series_term = 0 Reynolds_number = 1.2345 p=(Reynolds_number, series_term) const etaspan=(-1.0,1.0) ψ0= [0.0, -1.0, 3, -3] #This contains psi^{(0)}, psi^{(0)}_eta, psi^{(0)}_eta_eta, psi^{(0)}_eta_eta_eta """This is the differential equation for the ``\\psi(\\eta)^{(0)}`` term in the series. ``\\Psi_{\\eta \\eta \\eta \\eta }^{(n)} = R_s( \\Psi_{\\eta}^{(0)} \\Psi _{\\eta \\eta}^{(0)} - \\Psi ^{(0)} \\Psi _{\\eta \\eta \\eta }^{(0)})`` """ function steady_diffeq_psi0!(dψ, ψ, p, η) dψ[1] = ψ[2] dψ[2] = ψ[3] dψ[3] = ψ[4] dψ[4] = (ψ[2]*ψ[3]-ψ[4]*ψ[1])*p[1] end """This is the differential equation for the ``\\psi(\\eta)^{(n)}`` term in the series. 
``\\Psi_{\\eta \\eta \\eta \\eta }^{(n)} = R_s\\sum\\limits_{i = 0}^n{(2i+1)( \\Psi_{\\eta}^{(n-i)} \\Psi _{\\eta \\eta}^{(i)} - \\Psi ^{(i)} \\Psi _{\\eta \\eta \\eta }^{(n-i)})}`` """ function steady_diffeq_series!(dψ, ψ, p, η) solut_psi0 = p[3]::ODESolution #This avoids the use of Global sol_psi0 and should speed up the code, this variable type checked solut_psi = p[4]::Vector{ODESolution} #This avoids the use of Global sol_psi and should speed up the code, this variable type checked sum_of_nonlinears = 0.0 n = round(Int, p[2]) # This rounding off prevents errors are thrown when p vector is changed by Julia from Int to Floats for i= 1:(n-1) sum_of_nonlinears += (2*i+1)*(solut_psi[n-i](η)[2]*solut_psi[i](η)[3] - solut_psi[i](η)[1]*solut_psi[n-i](η)[4]) end dψ[1] = ψ[2]::Float64 dψ[2] = ψ[3]::Float64 dψ[3] = ψ[4]::Float64 dψ[4] = (ψ[2]*solut_psi0(η)[3]-solut_psi0(η)[1]*ψ[4])*p[1] + (2*p[2]+1)*(solut_psi0(η)[2]*ψ[3]-ψ[1]*solut_psi0(η)[4])*p[1] + sum_of_nonlinears*p[1] end """ This is the Jacobian function for the steady ``\\psi(\\eta)^{(0)}`` differential equation """ function jacobian_steadypsi0(J,ψ,p,η) J[1,1] = J[1,3] = J[1,4] = 0.0; J[1,2] = 1.0 J[2,1] = J[2,2] = J[2,4] = 0.0; J[2,3] = 1.0 J[3,1] = J[3,2] = J[3,3] = 0.0; J[3,4] = 1.0 J[3,1] = -ψ[4]*p[1] J[3,2] = ψ[3]*p[1] J[3,3] = ψ[2]*p[1] J[3,4] = -ψ[1]*p[1] nothing end """ This is the boundary condition for the nth term ``\\psi(\\eta)^{(n)}``. The p-vector supplies the correct value of n in the 2nd position i.e. p[2] = n """ function bc!(residual, ψ, p, η) # psi[1] is the beginning of the etaspan, and psi[end] is the ending residual[1] = ψ[1][1] # The psi[1] (i.e,. psi^{0}) solution at the beginning of the time span should be 0 residual[2] = ψ[1][2] - (-1)^(p[2]+1)/factorial(2*p[2]+1) #First derivative of should be -1 at first time step residual[3] = ψ[end][1] # the solution at the end of the time span should be 0 residual[4] = ψ[end][2] - (-1)^(p[2]+1)/factorial(2*p[2]+1) #First derivative should be -1 at end time step end function psi_smallr(i, η ,r) return((-1.0)^(i+1)*η*(η^2-1)/(2.0*factorial(2*i+1)) + r*(-1)^i*(2.0)^(2*i-3)*η*(η^2-1)^2.0*(2.0+η^2)/(35.0*factorial(2*i+1))+r^2.0*η*(η^2-1)^2*((-591 -2294*η^2 + 161*η^4 + 1428*η^6)*(-1.0)^i + 3.0*(423 + 166*η^2 - 553*η^4 - 84*η^6)*(-1)^i*(3)^(2*i+1))/(62092800*factorial(2*i+1))) end function psi_smallr_d1(i, η, r ) return((-1.0)^(i+1)*(3.0*η^2-1)/(2.0*factorial(2*i+1)) + r*(-1)^i*(2.0)^(2*i-3)*(7*η^6-9*η^2+2)/(35.0*factorial(2*i+1))) end function psi_smallr_d2(i, η, r) return((-1.0)^(i+1)*(6*η)/(2.0*factorial(2*i+1)) + r*(-1)^i*(2.0)^(2*i-3)*(42*η^5-18*η)/(35.0*factorial(2*i+1))+ r^2.0*(3*(-197 - 1112*η^2 + 6930*η^4 - 2772*η^6 - 8085*η^8 + 5236*η^10)*(-1.0)^i + (1269 - 6120*η^2 - 6930*η^4 + 24948*η^6 - 10395*η^8 - 2772*η^10)*(-1)^i*(3)^(2*i+1)) /(62092800*factorial(2*i+1))) end function psi_smallr_d3(i, η, r) return((-1.0)^(i+1)*(6)/(2.0*factorial(2*i+1)) + r*(-1)^i*(2.0)^(2*i-3)*(210*η^4-18)/(35.0*factorial(2*i+1))) end function psi_bigr(i, η ,r) return( (-1)^i *sin(pi*η)/(pi*factorial(2*i+1))) end function psi_bigr_d1(i, η ,r) return( (-1)^i *cos(pi*η)/factorial(2*i+1)) end function psi_bigr_d3(i, η ,r) return( -(-1)^i*pi^2*cos(pi*η)/factorial(2*i+1)) end psi0_d2_stored = [2.91, 2.57, 2.18, 1.82, (1.5-0.3), 1.02, 0.847, 0.71, 0.599, 0.511, 0.442, 0.38, 0.33] psi0_d3_stored = [-2.32,0.22, 2.98, 5.2, (7.02+5.0), (9.29+2.7), 9.9598, (10.42+1), 10.72, 10.93, 11.05, 11.14, 11.19] Reynolds_values_stored = [1.0,5,10, 15, 20, 30, 35, 40, 45, 50, 55, 60, 65] itp_d2 = 
Interpolations.interpolate((Reynolds_values_stored,), psi0_d2_stored , Gridded(Linear())) itp_d3 = Interpolations.interpolate((Reynolds_values_stored,), psi0_d3_stored, Gridded(Linear())) function psi0_bc_initialguess(Reynolds_number) if (Reynolds_number <=65) return[0.0, -1.0, itp_d2(Reynolds_number), itp_d3(Reynolds_number)] #This contains psi, psi_eta, psi_eta_eta, psi_eta_eta_eta else #return([0.0, -1.0, 0.33, 12]) return[0.0, psi_bigr_d1(0, 0.0 ,Reynolds_number), 0.0, psi_bigr_d3(0, 0.0 ,Reynolds_number)] #This contains ψ, ψ_η, ψ_ηη, ψ_ηηη end end #The code below is for testing purposes only, it treats the psi problem as an IVP starting at eta=-1 #p=(Reynolds_number, 0) #ψ0= [0.0, -1.0, 0.02, 2.9836060306009644] #prob = ODEProblem(steady_diffeq_psi0!,ψ0,etaspan,p , jac = jacobian_steadypsi0) #sol = solve(prob,reltol=1e-6) #@benchmark solve(prob,reltol=1e-6,save_everystep=true) ``` psi0_bc_initialguess (generic function with 1 method) ```julia Reynolds_number = 10000.0 series_term = 0 p=(Reynolds_number, series_term) ψ0 = psi0_bc_initialguess(Reynolds_number) bvp_psi0_2point = TwoPointBVProblem(steady_diffeq_psi0!, bc!, ψ0, etaspan, p , jac = jacobian_steadypsi0) sol_psi0 = @time solve(bvp_psi0_2point, alg_hints = [:stiff], MIRK4(),dt=0.01) #Very accurate solver ``` 15.856637 seconds (220.40 M allocations: 20.656 GiB, 16.86% gc time, 0.20% compilation time) retcode: Success Interpolation: 1st order linear t: 201-element Vector{Float64}: -1.0 -0.99 -0.98 -0.97 -0.96 -0.95 -0.94 -0.93 -0.92 -0.91 -0.9 -0.89 -0.88 ⋮ 0.89 0.9 0.91 0.92 0.93 0.94 0.95 0.96 0.97 0.98 0.99 1.0 u: 201-element Vector{Vector{Float64}}: [-5.438454421878129e-37, -1.0, 0.00012300546589258518, 9.906643500646569] [-0.009998343261789869, -0.9995036509608582, 0.09912384398995155, 9.892868477000627] [-0.019986775490410332, -0.998018068540089, 0.19795957798109, 9.873031167561809] [-0.02995541373759932, -0.9955452515082693, 0.29655721166609045, 9.845061137444274] [-0.039894398957369744, -0.992088002421085, 0.39483122651067376, 9.808233581305148] [-0.049793904502366176, -0.9876500063959832, 0.49269133580438895, 9.762247958064613] [-0.059644145131804965, -0.9822358630244032, 0.5900452162596184, 9.706974606811162] [-0.06943538623439384, -0.9758511000072576, 0.6867997078941593, 9.642367041490468] [-0.07915795313930095, -0.9685021780843683, 0.7828614416842025, 9.568426041749166] [-0.08880224044997845, -0.9601964910890403, 0.8781372189316826, 9.485182971882717] [-0.09835872136191663, -0.9509423628802954, 0.9725342699164605, 9.392691231300525] [-0.1078179569380116, -0.9407490420413301, 1.0659604483402518, 9.291021509761542] [-0.11717060532189127, -0.9296266948334767, 1.1583243889991424, 9.180258971108556] ⋮ [0.10781795693803839, -0.9407490420420476, -1.065960448327419, 9.291021509653053] [0.09835872136193688, -0.9509423628808888, -0.9725342699047156, 9.392691231189698] [0.08880224044999327, -0.9601964910895239, -0.8781372189210467, 9.485182971771357] [0.0791579531393115, -0.9685021780847505, -0.7828614416746874, 9.568426041636064] [0.06943538623440214, -0.9758511000075507, -0.6867997078857929, 9.642367041375119] [0.05964414513181105, -0.9822358630246191, -0.5900452162524249, 9.706974606693553] [0.04979390450237011, -0.9876500063961322, -0.4926913357983836, 9.762247957946519] [0.0398943989573728, -0.9920880024211807, -0.3948312265058404, 9.80823358118756] [0.029955413737601678, -0.995545251508323, -0.29655721166244514, 9.8450611373168] [0.019986775490412237, -0.9980180685401135, -0.197959577978685, 9.873031167453409] 
[0.009998343261791635, -0.9995036509608635, -0.09912384398864327, 9.892868476887353] [-7.239780535325076e-36, -1.0, -0.0001230054660869085, 9.906643500568554] ```julia if(p[1] < 52.5) run(`cc -ansi steady_d4_julia.c -o steadyd4julia -lm `) io = open("Reynolds_number.txt", "w"); write(io, string(p[1])); close(io); sleep(1) run(`./steadyd4julia $p\[1\]`) MatrixPsis=readdlm("terms_c.data",Float64,); end ``` ```julia print(varinfo(r"sol_psi0")) pyplot() plot(sol_psi0,vars=(1), linecolor=:red, label = "Julia solution",xlabel=L"\eta", ylabel=L"\Psi^{(0)}") if(p[1] < 52.5) plot!(MatrixPsis[:,1], MatrixPsis[:, 2], linestyle=:dot, linecolor=:green, label = "C code solution") plot!( x-> psi_smallr(0, x, p[1]), -1,1,linestyle=:dot, linecolor=:orange, label = "Small R analytical",title="R = "*string(p[1])*L", \Psi^{(0)}", legend=:bottomright) end plot!( x-> psi_bigr(0, x, p[1]), -1,1,linestyle=:dash, linecolor=:blue, label = "Large R analytical",title="R = "*string(p[1])*L", \Psi^{(0)}", legend=:bottomright, yguidefontrotation=-90) ``` ```julia psi_plot = Array{Plots.Plot{Plots.PyPlotBackend},1}() u0 =[] max_terms = 5 for i = 1:max_terms series_term = i p=(Reynolds_number, series_term, sol_psi0, sol_psi) if (Reynolds_number > 20 ) push!(u0, [0.0, psi_smallr_d1(series_term, -1.0, p[1]), 0, pi^2*psi_smallr_d1(series_term, -1.0, p[1])]) else push!(u0, [0.0, psi_smallr_d1(series_term, -1.0, p[1]), psi_smallr_d2(series_term, -1.0, p[1]), psi_smallr_d3(series_term, -1.0, p[1])]) end bvp_sol= TwoPointBVProblem(steady_diffeq_series!, bc!, u0[i], etaspan, p) sol_psi[ i ] = @time solve(bvp_sol, alg_hints=[:stiff], MIRK4(), dt=0.04) plot(sol_psi[i],vars=(1), linecolor=:red, label = "Julia solution",xlabel=L"\eta", ylabel=L"\Psi^{(%$i)}", size=(800,400)) if(p[1] < 35) plot!(MatrixPsis[:,1],MatrixPsis[:,i+2], linecolor=:green, label = "C code solution") end if(p[1] < 35 ) plot!(x-> psi_smallr(i, x, p[1]),-1,1,linestyle=:dot, linecolor=:orange, label = "Small R analytical",title="R = "*string(p[1])*L", \Psi^{(%$i)}", legend=:outertopright) end push!(psi_plot,plot!(x-> psi_bigr(i, x, p[1]),-1,1,linestyle=:dash, linecolor=:blue, label = "Large R analytical",title="R = "*string(p[1])*L", \Psi^{(%$i)}", legend=:outertopright, yguidefontrotation=-90)) end ``` 0.489074 seconds (7.21 M allocations: 331.166 MiB, 10.30% gc time, 14.26% compilation time) 0.673325 seconds (12.80 M allocations: 601.316 MiB, 13.20% gc time) 0.974780 seconds (19.91 M allocations: 942.041 MiB, 11.86% gc time) 1.280404 seconds (27.03 M allocations: 1.253 GiB, 12.02% gc time) 4.370773 seconds (93.95 M allocations: 4.363 GiB, 12.27% gc time) ```julia for i in 1:max_terms display(psi_plot[i]) sleep(1) end ``` ## Exploring with Fourier Transforms ```julia alist= [sol_psi0.u[i][1] for i =1:length(sol_psi0.u)] x = range(-1,stop=1,length=length(sol_psi[5].u)) clist= [sol_psi0.u[i][3] for i =1:length(sol_psi0.u)] clist[25]/alist[25] ``` -11.30626039872988 ```julia N=length(sol_psi[5].u) Fy = fft(alist)[1:N÷2] ak = 2/N * real.(Fy) bk = -2/N * imag.(Fy) # fft sign convention ak[1] = ak[1]/2; ``` ```julia bk*10^5 ``` 25-element Vector{Float64}: -0.0 5.588135150784316 -1.4677891682918485 -1.7602036153999576 -0.8297559692080744 -0.016353052152772343 0.25189645884487194 0.21082570799079534 0.11893810212564905 0.06150421055572241 0.03369089940397449 0.02000129490045747 0.012565369887931823 0.008159797258145592 0.0053920674968345435 0.0035852750426532366 0.002376731460287785 0.0015572299338542958 0.0009991303702340953 0.0006209769328401041 0.0003687704513288578 
0.00020545369856001777 0.00010470660852719787 4.7114774678100995e-5 1.7690957441206614e-5 ```julia ``` ## Solving the for $\Psi^{0}$ using shooting method ```julia Reynolds_number =65.0 series_term = 0 ψ0= psi0_bc_initialguess(Reynolds_number) p=(Reynolds_number, series_term) bvp_shooting = BVProblem(steady_diffeq_psi0!, bc!, ψ0, etaspan,p , jac=jacobian_steadypsi0) sol_shooting = @time solve(bvp_shooting, alg_hints=[:stiff], reltol=1e-5, abstol=1e-5 , GeneralMIRK4(),dt=0.01) #Excellent solver #sol_shooting = @time solve(bvp_shooting, alg_hints=[:stiff], reltol=1e-8, abstol=1e-6 , Shooting(CVODE_BDF())) #Ok but need to have tolerances adjusted some errors on boundary #sol_shooting = @time solve(bvp_shooting, alg_hints=[:stiff], reltol=1e-8, abstol=1e-6 , Shooting(Rodas4P())) #Ok but need to have tolerances adjusted some errors on boundary #sol_shooting = @time solve(bvp_shooting, alg_hints=[:stiff], reltol=1e-9, abstol=1e-9 , Shooting(RadauIIA5())) #Good, FAST, STABLE but some errors on boundary #sol_shooting = @time solve(bvp_shooting, alg_hints=[:stiff], reltol=1e-8, abstol=1e-6 , Shooting(TRBDF2())) #sol_shooting = @time solve(bvp_shooting, alg_hints=[:stiff], reltol=1e-6, abstol=1e-6 , Shooting(Rosenbrock23())) ``` 5.704660 seconds (78.20 M allocations: 7.331 GiB, 18.45% gc time) retcode: Success Interpolation: 1st order linear t: 201-element Vector{Float64}: -1.0 -0.99 -0.98 -0.97 -0.96 -0.95 -0.94 -0.93 -0.92 -0.91 -0.9 -0.89 -0.88 ⋮ 0.89 0.9 0.91 0.92 0.93 0.94 0.95 0.96 0.97 0.98 0.99 1.0 u: 201-element Vector{Vector{Float64}}: [0.0, -1.0, 0.3381169611051619, 11.193291745248118] [-0.009981237755586674, -0.9960628262890472, 0.4489518580668842, 10.973737934551135] [-0.0199175986148863, -0.991028279732876, 0.5575913604840704, 10.754082323461496] [-0.02979821866776514, -0.9849183318338913, 0.6640307658781293, 10.53361026174277] [-0.039612454067938174, -0.9777550347096055, 0.7682588601435602, 10.311730960405413] [-0.049349882144788734, -0.9695605803143844, 0.8702590678114968, 10.087960306035388] [-0.05900030305268773, -0.9603573489715499, 0.9700104441267903, 9.861906327821275] [-0.06855374185831681, -0.9501679486729377, 1.067488533169761, 9.633256872063251] [-0.07800045098008439, -0.9390152463814293, 1.1626661122064241, 9.401769117992206] [-0.08733091290498911, -0.9269223923874316, 1.2555138391191516, 9.167260632923332] [-0.0965358431181567, -0.9139128386157033, 1.3460008170328064, 8.929601717338844] [-0.10560619318841406, -0.900010351650049, 1.434095087958115, 8.688708833356118] [-0.11453315396047871, -0.8852390211341102, 1.519764065416849, 8.44453894522796] ⋮ [0.1056061931885953, -0.9000103516534113, -1.4340950879367813, 8.68870883360754] [0.09653584311830656, -0.913912838618844, -1.3460008170090008, 8.929601717570556] [0.08733091290511165, -0.9269223923903188, -1.2555138390930447, 9.167260633133235] [0.0780004509801867, -0.939015246384057, -1.162666112178576, 9.401769118182791] [0.06855374185839476, -0.9501679486752653, -1.0674885331402812, 9.633256872233433] [0.059000303052743154, -0.9603573489735784, -0.9700104440955563, 9.861906327970678] [0.04934988214482538, -0.9695605803161094, -0.870259067778888, 10.087960306165272] [0.039612454067963106, -0.9777550347109999, -0.7682588601097937, 10.311730960512747] [0.0297982186677809, -0.9849183318349432, -0.6640307658433727, 10.533610261829677] [0.01991759861489311, -0.9910282797335839, -0.5575913604487551, 10.754082323524418] [0.009981237755591175, -0.996062826289411, -0.448951858031075, 10.973737934592188] [1.1692924532960299e-33, -1.0, 
-0.33811696106907635, 11.193291745265805] ```julia plot(sol_shooting, vars=(1)) ``` ```julia ```
046759a30361724648710f3e9a4a971cb97fba7d
276,423
ipynb
Jupyter Notebook
Steadystreaming_series_solution_BVP.ipynb
gsagoo/SolvingDifferentialEquations
63c2b231b45ad64206a11824198b3d5fe20c062b
[ "Unlicense" ]
null
null
null
Steadystreaming_series_solution_BVP.ipynb
gsagoo/SolvingDifferentialEquations
63c2b231b45ad64206a11824198b3d5fe20c062b
[ "Unlicense" ]
null
null
null
Steadystreaming_series_solution_BVP.ipynb
gsagoo/SolvingDifferentialEquations
63c2b231b45ad64206a11824198b3d5fe20c062b
[ "Unlicense" ]
null
null
null
340.84217
39,161
0.896423
true
7,602
Qwen/Qwen-72B
1. YES 2. YES
0.874077
0.822189
0.718657
__label__eng_Latn
0.173195
0.508012
```python import numpy as np from numpy.linalg import det, inv, matrix_rank, eig from sympy import Matrix, symbols a = np.array([[1, 2], [3, 4]]) a ``` array([[1, 2], [3, 4]]) # Common matrix operations ## Multiplication ***With a number*** (aka scalar, because it scales the matrix). ```python 5 * a ``` array([[ 5, 10], [15, 20]]) **With another matrix.** It can be viewed as a linear transformation of a coordinate system. Example of 90 degree rotation transform: ```python transform = np.array([[0,-1],[1,0]]) i = np.array([1,0]) np.matmul(transform,i) ``` array([0, 1]) Multiplying two square matrices can be viewed as composing two transforms. The one on the right is applied first. Example of rotation then shear transforms: ```python transform = np.matmul(np.array([[1,1],[0,1]]), np.array([[0,-1],[1,0]])) transform ``` array([[ 1, -1], [ 1, 0]]) **Identity matrix** multiplication preserves the original matrix ```python np.matmul(a,np.eye(2)) ``` array([[1., 2.], [3., 4.]]) ## Dot product Dot product between two vectors (1d matrices or tensors) gives you an idea about their orientation: - if dot product is zero -> perpendicular, negative ->opposite directions (angle > 90 degrees) It is also a way to map vectors into a different space (e.g. lower/higher dimensional). For example the projection of a 2d vector to a 1d line. Dot product is just a shorthand for multiply a and b transponse: $A \bullet B = AB^T$ ```python a * a ``` array([[ 1, 4], [ 9, 16]]) ## Transpose Rows become columns and vice versa. If $A=A^T$ then A is **symmetric** (implies that A is square). ```python a.T ``` array([[1, 3], [2, 4]]) ## Determinant If determinant is zero, the matrix is called **singular**. It means that the matrix vectors are linearly dependent and that dimensionality of space is reduced. Negative determinant means that the orientation of space is inverted (e.g. flipped). The absolute value of the determinant shows how a shape's size will change. ```python det(a) # a flips space and increases area twofold ``` ## Inverse matrices Non-singular matrices (det != 0) are invertible: $A^{-1}A=I$, where $I$ is the identity matrix (np.eye) ```python inv(a) ``` array([[-2. , 1. ], [ 1.5, -0.5]]) ```python np.matmul(a,inv(a)) #shall give the identity matrix, but there is some quantization error ``` array([[1.0000000e+00, 0.0000000e+00], [8.8817842e-16, 1.0000000e+00]]) ## Rank The max number of linearly independent rows or columns in the matrix ```python #matrix_rank(a) #2 #matrix_rank(np.array([[1,0],[1,0]])) #1 det(np.array([[15.000000000000001,1],[30,2]])) ``` ## Echelon form Numerical algebra is not very suitable for finding exact solutions to the reduced row echelon form. Symbolic algebra with sympy might be a better fit here. http://numpy-discussion.10968.n7.nabble.com/Reduced-row-echelon-form-td16486.html http://docs.sympy.org/0.7.5/tutorial/matrices.html ```python A = Matrix([[1, -1], [3, 4], [0, 2]]) A.rref() ``` ## Cross product In 3D space, cross product of two vectors, v1 and v2, yields another vector that is perpendicular to the two. The resulting vector is computed by constructing the determinant with the identity vector $(\hat{i}, \hat{j}, \hat{k})$ in the first column and v1 and v2 in the second and third respectively. 
$A \times B$ ```python v1 = Matrix([1,2,4]) v2 = Matrix([3,2,5]) i,j,k = symbols('i j k') result = Matrix([[i,v1[0],v2[0]],[j,v1[1],v2[1]],[k,v1[2],v2[2]]]).det() print(result) [result.subs([(i,1),(j,0),(k,0)]), result.subs([(i,0),(j,1),(k,0)]), result.subs([(i,0),(j,0),(k,1)])] ``` ## Eigenvalues & Eigenvectors Given a transformation matrix $A$ if there exists a vector $\vec{v}$ and a scalar $\lambda$, such that: $A\vec{v} = \lambda\vec{v} \Rightarrow A\vec{v} = \lambda I\vec{v} \Rightarrow (A - \lambda I)\vec{v} = 0 \Rightarrow $ $$det(A - \lambda I) = 0$$ Then all possible values of $\lambda$ that satisfy the equation are called eigenvalues. Each of these values corresponds to one or more eigenvectors $\vec{v}$, which won't change their span after the transformation with matrix $A$. **Eigenbasis** with respect to the transformation A is the set of new basis vectors (e.g. at least two in 2d) that are also eigenvectors. Transforming the original transformation matrix $A$ to $A'$ in the new basis guarantees that $A'$ will be diagonal. ```python #eig(np.array([[3,1],[0,2]])) m = Matrix([[3,1],[0,2]]) print(m.eigenvals()) # value:algebraic multiplicity m.eigenvects() # eigenvalue, algebraic multiplicity, eigenvector ``` ## Null space In non-full rank matrices, this is the set of vectors that will get reduced to a line or a point. In other words, this is the space of all possible solutions of the system of equations (no single solution exists because det=0 & rank< full rank) ## Frobenius normal form ## Matrix equivalence ## Matrix congruence ## Singular value decomposition https://www.youtube.com/watch?v=P5mlg91as1c ## PCA ## SVD ## Jacobian matrix Jacobian is the matrix of partial derivatives of a $ R^n \longrightarrow R^m $ function. It has $m$ columns - one for each output (dependent) variable and $n$ rows - one for each input (independent) variable. Thus each column is the gradient of the respective dependent variable. https://www.value-at-risk.net/functions/ ```python ``` ## Conjugate transponse Conjugate transponse (Hermitian transponse) of a matrix with complex entries is obtained by first taking the transponse of the matrix and then taking the complex conjugate of each entry: $ A^* = \overline{A^T}$, where complex conjugate is the element-wise operation: $a + ib \Rightarrow a - ib$. # Some interesting matrix properties ## Similar Two **square** matrices $A$ and $B$ are similar if $B=P^{-1}AP$ and $P$ is invertible. ## Diagonalizable ## Normal A complex square matrix is normal if: $A^*A=AA^*$. For real matrices this reduces to: $A^TA=AA^T$ # Unitary matrices A **complex square** matrix is unitary if its inverse is equal to its conjugate transponse: $А^{*} = A^{-1} \implies A^*A=AA^*=I$ For real valued matrices, the unitary is called orthogonal: $A^{T}=A^{-1} \implies A^TA=AA^T=I$ Unitary matrices preserve the euclidian norm (length) of a vector x during multiplication: $\lVert x \rVert = \lVert Ax \rVert$ ```python np.conj(a).T ``` array([[1, 3], [2, 4]], dtype=int32) ```python a = np.array([[2,-1],[0.5,-0.5]]) np.matmul(a,a.T) ``` array([[5. , 1.5], [1.5, 0.5]]) ```python x = np.array([[1,2],[3,4],[5,6]]) np.matmul(x,x.T) # symmetrize ``` array([[ 5, 11, 17], [11, 25, 39], [17, 39, 61]]) ```python # ``` ```python ``` ```python ```
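To close the unitary/orthogonal discussion above with a concrete check, here is a small sketch (added for illustration, not part of the original cheatsheet): a 2D rotation matrix satisfies $Q^TQ=I$ and leaves the Euclidean norm of a vector unchanged.

```python
# Illustrative sketch (not from the original cheatsheet): a rotation matrix is orthogonal
# and preserves the Euclidean norm of any vector it multiplies.
import numpy as np

theta = np.pi / 6
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

v = np.array([3.0, 4.0])
print(np.allclose(np.matmul(Q.T, Q), np.eye(2)))            # Q^T Q = I -> orthogonal
print(np.linalg.norm(v), np.linalg.norm(np.matmul(Q, v)))   # both 5.0 -> norm preserved
```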
355f5e77f6b5cb74118c6701446c86a6d05887be
21,881
ipynb
Jupyter Notebook
ml/Matrix cheatsheet.ipynb
pgenevski/notebooks
186b0e41d1424cb33bb1dea8905c4aec3a7b4cc5
[ "MIT" ]
null
null
null
ml/Matrix cheatsheet.ipynb
pgenevski/notebooks
186b0e41d1424cb33bb1dea8905c4aec3a7b4cc5
[ "MIT" ]
null
null
null
ml/Matrix cheatsheet.ipynb
pgenevski/notebooks
186b0e41d1424cb33bb1dea8905c4aec3a7b4cc5
[ "MIT" ]
null
null
null
33.714946
2,528
0.632329
true
2,042
Qwen/Qwen-72B
1. YES 2. YES
0.953966
0.927363
0.884673
__label__eng_Latn
0.978529
0.893726
# Separation of Variable, the fourier approach following Griffiths Introduction of electrodynamics, third edition Two infinite grounded metal plates lie parallel to the $xz$ plane, one at $y=0$, the other at $y=a$. The left end, at $x=0$, is closed off with an infinite strip insulated from the two plates and mantained at a specific potential $V_0(y)$. Find the potential inside this "slot". Since the solution is independent of $z$ is is actually a 2D problem and thus: $$\frac{\partial^2V}{\partial x^2}+\frac{\partial^2V}{\partial y^2}=0 \label{eq:1}\tag{1}$$ with the boundary conditions: 1. $V=0$ when $y=0$ 2. $V=0$ when $y=a$ 3. $V=V_0(y)$ when $x=0$ 4. $V\rightarrow0$ as $x\rightarrow \infty$ We want to find the solutions in the form of products: $$V(x,y)=X(x)Y(y)\tag{2}$$ Putting eq.2 into eq.1: $$Y\frac{d^2X}{dx^2}+X\frac{d^2Y}{dy^2}$$ _Separation of variables_ by divinding by $V$: $$\frac 1 X \frac{d^2X}{dx^2}+\frac 1 Y \frac{d^2Y}{dy^2}=0\tag{3}$$ which gives as an equation in the form: $$f(x)+g(y)=0$$ which is only possible if f and g are both _constant_. With that it follows for eq.3: $$\frac 1 X \frac{d^2X}{dx^2}=C_1$$ and $$\frac 1 Y \frac{d^2Y}{dy^2}=C_2$$ with $$C_1+C_2=0$$ So either $C_1$ or $C_2$ has to be negative (or both are zero). We want $C_1$ to be positive (and thus $C_2$ negative). So we converted a PDE into two ODEs: $$ \frac{d^2X}{dx^2} = k^2X $$ with $k^2$ always positive, and $$ \frac{d^2Y}{dy^2} = -k^2Y $$ with $-k^2$ always negative Either we know the solutions for these ODEs (or look them up) or we let the computer solve them analyticly: ```python from sympy.interactive import printing # use latex for printing printing.init_printing(use_latex=True) from sympy import Function, dsolve, Eq, Derivative, sin, cos, symbols, simplify, real_roots # import necessary methods from sympy.abc import x,y # import variables ``` ```python X = Function('X',real=True) k = symbols('k',positive=True) f1_ode = Eq(Derivative(X(x), x, 2) - k**2*X(x)) f1_ode ``` ```python dsolve(f1_ode, X(x)) ``` ```python Y = Function('Y',real=True) f2_ode = Eq(Derivative(Y(y), y, 2) + k**2*Y(y)) f2_ode ``` ```python dsolve(f2_ode, Y(y)) ``` Which gives us: $$V(x,y)=\left (A\textrm{e}^{kx}+B\textrm{e}^{-kx}\right)\left(C\sin{ky}+D\cos{ky}\right)$$ Determine constants out of the boundary conditions: 4. $V\rightarrow0$ as $x\rightarrow \infty$ $\rightarrow A=0$ $$V(x,y)=\textrm{e}^{-kx}\left(C/B\sin{ky}+D/B\cos{ky}\right)$$ 1. $V=0$ when $y=0$ $\rightarrow D/B=0$ $$V(x,y)=C/B\textrm{e}^{-kx}\sin{ky}$$ 2. $V=0$ when $y=a$ $\rightarrow \sin ka=0$ $$k=\frac{n\pi}{a},\ \ (n=1,2,3,\ldots)$$ Out of: 3. $V=V_0(y)$ when $x=0$ this gives us the solution for one specific $V_0(y)\propto\sin(n\pi y/a)$ Since Laplace's equation is linear and thus: $$\Delta V=\alpha_1\Delta V_1+\alpha_2\Delta V_2+\ldots=0\alpha_1+0\alpha_2+\ldots=0$$ We can use the the sum (which is the Fourier series) which gives us all possible solutions for an arbitrary $V_0(y)$ $$ V(x,y)=\sum^{\infty}_{n=1}C_n\textrm{e}^{-n\pi x/a}\sin(n\pi y/a)$$ and satisfies the boundary conditions. 
Now we use boundary condition (3) to find the coefficients $C_n$, by multplying $V(0,y)$ by $\sin(n'\pi y/a)$ with $n'$ a positige integer and integratin from 0 to $a$: $$ \sum^{\infty}_{n=1}C_n\int_0^a \sin(n\pi y/a) \sin(n'\pi y/a)\,dy = \int_0^a V_0(y) \sin(n'\pi y/a)\,dy $$ ```python from sympy import integrate, pi n, m = symbols('n m', positive = True, integer=True) a = symbols('a', constant = True) integrate (sin(n*pi*y/a)*sin(m*pi*y/a), (y, 0, a)) # var y, from 0 to a ``` and thus all terms drop out, but $n=n'$ so we get for the coefficients $C_n$: $$ C_n = \frac 2 a \int_0^a V_0 \sin(n\pi y/a)\,dy$$ ## Example $V_0$: For the strip at $x=0$ at a constant potential $V_0$: $$C_n=\frac{2V_0}{a}\int_0^a\sin(n\pi y/a)dy=\frac{2V_0}{n\pi}(1-\cos n\pi)= \begin{cases} 0, & \text{if $n$ is even}.\\ \frac{4V_0}{n\pi}, & \text{if $n$ is odd}. \end{cases}$$ Putting it all together: $$ V(x,y)=\frac{4V_0}{\pi}\sum_{n=1,3,5,\ldots}\frac 1 n \textrm{e}^{-n\pi x/a}\sin(n\pi y/a)$$ ### Plot for $n=1$, $a=1$, $V_0=1$ ```python import numpy as np import matplotlib.pyplot as plt ``` ```python a = 1 x = np.arange(0,1.01,0.05) # range for x y = np.arange(0,a+0.01,0.05) # range for y X,Y = np.meshgrid(x,y) # generate 2D mesh from x and y ``` ### Meshgrid plot ```python %matplotlib notebook from mpl_toolkits.mplot3d import Axes3D fig = plt.figure(figsize=(5,5)) ax = fig.gca(projection='3d') # Plot the meshgrid. colortuple = ('w', 'b') # generating checker pattern for meshgrid plot colors = np.empty(X.shape, dtype=str) for i in range(len(x)): for j in range(len(x)): colors[i, j] = colortuple[(i + j) % len(colortuple)] surf = ax.plot_surface(X, Y, np.zeros(np.shape(X)), rstride=1, cstride=1, facecolors=colors) # actual plotting plt.show() ``` <IPython.core.display.Javascript object> ### V(x,y) contour plot ```python n=1 a=1 V0 = 1 x = np.arange(0,1.01,0.01) # range for x y = np.arange(0,a+0.01,0.01) # range for y X,Y = np.meshgrid(x,y) # generate 2D mesh from x and y V=(4*V0/np.pi)*(1/n*np.exp(-n*np.pi*X/a)*np.sin(n*np.pi*Y/a)) # define V(x,y) on the mesh X,Y fig = plt.figure(figsize=(5.,5.)) ax = fig.gca() cf = ax.contourf(X,Y,V,64,cmap='Blues') ax.grid() plt.colorbar(cf) ``` <IPython.core.display.Javascript object> <matplotlib.colorbar.Colorbar at 0x10afa8c50> ### V(x,y) surface plot ```python from mpl_toolkits.mplot3d import Axes3D fig = plt.figure(figsize=(5.,5.)) ax = fig.gca(projection='3d') # Plot the surface. 
surf = ax.plot_surface(X, Y, V, cmap='Blues', alpha=0.8, linewidth=0, antialiased=False) cset = ax.contour(X, Y, V, zdir='y', offset=1, cmap='Blues',levels=1) # lineplot on the xz plane ann = ax.text(.7,.8,1, "n={}".format(n), color='k',fontsize=24) ``` <IPython.core.display.Javascript object> ### V(x,y) animation for various n ```python from matplotlib import animation from IPython.display import HTML import matplotlib.animation as animation def update_plot(frame_number, zarray, cf): cf[0].remove() # remove plots first cf[1].remove() cf[0] = ax.plot_surface(X, Y, zarray[:,:,frame_number], cmap="Blues") # and set them excplicit again with frame number as index cf[1] = ax.text(.7,.8,1, "n={}".format(n_range[frame_number]), color='k',fontsize=24) fig = plt.figure() ax = fig.add_subplot(111, projection='3d') n_range = np.arange(1,28,2) zarray = np.zeros((np.shape(X)[0], np.shape(Y)[1], len(n_range))) V_n = lambda X,Y,n : (4*V0/np.pi)*(1/n*np.exp(-n*np.pi*X/a)*np.sin(n*np.pi*Y/a)) for i, n in enumerate(n_range): zarray[:,:,i] = V_n(X,Y,n) + zarray[:,:,i-1] # summing up in adding to the last element cf = [] cf.append(ax.plot_surface(X, Y, zarray[:,:,0], color='0.75')) cf.append(ax.text(.7,.8,1, "n={}".format('1'), color='k',fontsize=24)) ax.set_zlim(0,1.5) anim = animation.FuncAnimation(fig, update_plot, len(n_range), fargs=(zarray, cf),interval=400) plt.close(anim._fig) # Call our new function to display the animation HTML(anim.to_html5_video()) ``` <IPython.core.display.Javascript object> ```python ```
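The general coefficient formula $C_n = \frac{2}{a}\int_0^a V_0(y)\sin(n\pi y/a)\,dy$ works for any boundary profile, not only the constant one used above. As a closing illustration (a sketch added here, not part of the original notebook; the profile $V_0(y)=y(a-y)$ is a hypothetical example), the coefficients can also be obtained by numerical quadrature:

```python
# Illustrative sketch (not from the original notebook): numerical Fourier coefficients
# C_n = (2/a) * integral_0^a V0(y) sin(n*pi*y/a) dy for an arbitrary boundary profile V0(y).
import numpy as np
from scipy.integrate import quad

a = 1.0
V0 = lambda y: y * (a - y)   # hypothetical boundary potential, chosen only for illustration

def C(n):
    integrand = lambda y: V0(y) * np.sin(n * np.pi * y / a)
    return 2.0 / a * quad(integrand, 0.0, a)[0]

print([round(C(n), 6) for n in range(1, 6)])  # even-n coefficients vanish for this symmetric profile
```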
603cd6e5047a4d884b9f2c90bb23c77c3d81ce06
545,616
ipynb
Jupyter Notebook
Separation of Variable, Fourier approach.ipynb
laserlab/403_SeparationOfVariable
2d14c20239559c9dfaa5ef9875d22ff0a1784e30
[ "CC0-1.0" ]
null
null
null
Separation of Variable, Fourier approach.ipynb
laserlab/403_SeparationOfVariable
2d14c20239559c9dfaa5ef9875d22ff0a1784e30
[ "CC0-1.0" ]
null
null
null
Separation of Variable, Fourier approach.ipynb
laserlab/403_SeparationOfVariable
2d14c20239559c9dfaa5ef9875d22ff0a1784e30
[ "CC0-1.0" ]
2
2020-12-26T03:02:13.000Z
2021-06-10T23:02:08.000Z
109.429603
119,275
0.799966
true
2,628
Qwen/Qwen-72B
1. YES 2. YES
0.855851
0.853913
0.730822
__label__eng_Latn
0.790124
0.536277
## The SIR model The SIR model of epidemiology partitions the population into three compartments: susceptibles, S, who can catch the disease; infectives, I, who have already caught the disease and infect susceptibles; and removed individuals, R. Since the disease is assumed not to be fatal, the sum $N=S+I+R$ remains constant. The rate at which each susceptible individual gets infected (the force of infection) is $$ \lambda(t) = \frac{\beta I}{N} $$ where the parameter $\beta$ is the probability of infection on contact. Infected individuals are removed (recover) at a rate $\gamma$. Then, the ordinary differential equations of the SIR model are \begin{align} \dot S &= -\lambda(t)S \\ \dot I &= \lambda(t)S - \gamma I \\ \dot R &= \gamma I \end{align} This example integrates the above equations to obtain what is called the **epidemic curve**: a plot of the number of susceptibles and infectives as a function of time. Below we use the class `Model` to simulate the SIR model with three age-groups. ```python # M=3, SIR with three age-groups import numpy as np import pyross import copy import matplotlib.pyplot as plt model_spec = { "classes" : ["S", "I"], "S" : {"infection" : [ ["I","S", "-beta"] ]}, ## the I class passes infection to S class "I" : { "linear" : [ ["I", "-gamma"] ], ## this is recovery process for I class "infection" : [ ["I", "S", "beta"]]} ## the recovered class R is internally determined by number conservation } parameters = {'beta' : 0.1, 'gamma' : 0.1, } M=3; Ni=1000*np.ones(M); N=np.sum(Ni) # Initial conditions as an array x0 = np.array([ 980, 980, 980, # S 20, 20, 20, # I ]) # Or initial conditions as a dictionary x0 = {'S': [n-20 for n in Ni], 'I': [20, 20, 20] } CM = np.array( [[1, 0.5, 0.1], [0.5, 1, 0.5], [0.1, 0.5, 1 ]], dtype=float) def contactMatrix(t): return CM # duration of simulation and data file Tf = 160; Nf=Tf+1; model = pyross.deterministic.Model(model_spec, parameters, M, Ni) # simulate model data = model.simulate(x0, contactMatrix, Tf, Nf) # plot the data and obtain the epidemic curve S = np.sum(model.model_class_data('S', data), axis=1) I = np.sum(model.model_class_data('I', data), axis=1) R = np.sum(model.model_class_data('R', data), axis=1) t = data['t'] fig = plt.figure(num=None, figsize=(10, 8), dpi=80, facecolor='w', edgecolor='k') plt.rcParams.update({'font.size': 22}) plt.fill_between(t, 0, S/N, color="#348ABD", alpha=0.3) plt.plot(t, S, '-', color="#348ABD", label='$S$', lw=4) plt.fill_between(t, 0, I/N, color='#A60628', alpha=0.3) plt.plot(t, I, '-', color='#A60628', label='$I$', lw=4) plt.fill_between(t, 0, R/N, color="dimgrey", alpha=0.3) plt.plot(t, R, '-', color="dimgrey", label='$R$', lw=4) plt.legend(fontsize=26); plt.grid() plt.autoscale(enable=True, axis='x', tight=True) plt.ylabel('Compartment value') plt.xlabel('Days'); ``` ## A toy model for vaccinations, derived from SEIR ```python model_spec = { "classes" : ["S", "E", "I", "R"], "S" : { "infection" : [ ["I", "S", "-beta"] ] }, "E" : { "linear" : [ ["E", "-gammaE"] ], "infection" : [ ["I", "S", "beta"] ] }, "I" : { "linear" : [ ["E", "gammaE"], ["I", "-gammaI"]] }, "R" : { "linear" : [ ["I", "gammaI"] ], } } ``` ```python parameters = { 'beta' : 0.05, 'gammaE' : 0.5, 'gammaI' : 0.1 } ``` ```python M = 3 Ni = 50000*np.ones(M) N = np.sum(Ni) # initial conditions as a dictionary E0 = [30, 30, 30] S0 = [n-30 for n in Ni] I0 = [0, 0, 0] R0 = [0, 0, 0] x0 = { 'S' : S0, 'E' : E0, 'I' : I0, 'R' : R0 } CM = np.array([ [3, 0.8, 0.1], # youngest cohort spreading most [0.8, 2, 0.5], [0.1, 0.5, 1 ] ], dtype=float) # the contact matrix
is time-dependent def contactMatrix(t): if t<40: xx = CM elif 40<=t<200: xx = 0.5*CM elif 200<=t<280: xx = 0.9*CM elif 280<=t<370: xx = 0.5*CM else: xx = CM return xx # duration of simulation and data file Tf = 1000; Nf=Tf+1; model = pyross.deterministic.Model(model_spec, parameters, M, Ni) # simulate model data = model.simulate(x0, contactMatrix, Tf, Nf) ``` ```python # plot the data and obtain the epidemic curve S = np.sum(model.model_class_data('S', data), axis=1) E = np.sum(model.model_class_data('E', data), axis=1) I = np.sum(model.model_class_data('I', data), axis=1) R = np.sum(model.model_class_data('R', data), axis=1) t = data['t'] fig = plt.figure(num=None, figsize=(10, 8), dpi=80, facecolor='w', edgecolor='k') plt.rcParams.update({'font.size': 22}) plt.semilogy(t, S, '-', label='$S$', lw=4) plt.semilogy(t, E, '-', label='$E$', lw=4) plt.semilogy(t, I, '-', label='$I$', lw=4) plt.semilogy(t, R, '-', label='$R$', lw=4) plt.legend(fontsize=26); plt.grid() plt.autoscale(enable=True, axis='x', tight=True) plt.ylabel('Compartment value') plt.xlabel('Days'); ``` ```python # age structure Ii = model.model_class_data('I', data) fig = plt.figure(num=None, figsize=(10, 8), dpi=80, facecolor='w', edgecolor='k') plt.rcParams.update({'font.size': 22}) plt.stackplot(t,np.transpose(Ii),labels=('cohort 1','cohort 2','cohort 3')) plt.legend(fontsize=26); plt.grid() plt.ylabel('Infections') plt.xlabel('Days'); ``` Without vaccinations, herd immunity is reached after the 3rd wave (no further lockdown). Total number of people who have been infected, by age group: ```python np.round(np.sum(Ii,axis=0)*parameters['gammaI']) ``` array([34921., 28938., 12600.]) ### The SEIR model with vaccinations This code takes the `Model` specification of SEIR, extends the set of classes with their vaccinated versions, and sets up all the transitions. There are additional infection terms; the linear terms are the same in the vaccinated compartments (though they might be changed later to account for a reduction in severity); and finite-resource terms move individuals from the unvaccinated to the vaccinated classes. This avoids typing the whole model specification by hand, and should also work for our more detailed, calibrated models. The vaccinated version of class X is labeled XV. The extension to different types/doses of vaccinations should be straightforward (XV1, XV2, etc.).
```python model_spec_vaccinations = copy.deepcopy(model_spec) for cl in model_spec["classes"]: clV = cl+'V' model_spec_vaccinations["classes"].append(clV) model_spec_vaccinations[clV] = {} if "infection" in model_spec[cl]: model_spec_vaccinations[clV]["infection"] = [] for term in model_spec[cl]["infection"]: termI = term.copy() # vaccinated infecting unvaccinated termI[0] += 'V' termI[2] += "*thetaI" model_spec_vaccinations[cl]["infection"].append(termI) termS = term.copy() # unvaccinated infecting vaccinated termS[1] += 'V' termS[2] += "*thetaS" model_spec_vaccinations[clV]["infection"].append(termS) termSI = term.copy() # vaccinated infecting vaccinated termSI[0] += 'V' termSI[1] += 'V' termSI[2] += "*thetaS*thetaI" model_spec_vaccinations[clV]["infection"].append(termSI) if "linear" in model_spec[cl]: model_spec_vaccinations[clV]["linear"] = [] for term in model_spec[cl]["linear"]: termV = term.copy() termV[0] += 'V' model_spec_vaccinations[clV]["linear"].append(termV) if "finite-resource" in model_spec[cl]: model_spec_vaccinations[clV]["finite-resource"] = [] for term in model_spec[cl]["finite-resource"]: termV = term.copy() termV[0] += 'V' model_spec_vaccinations[clV]["finite-resource"].append(termV) else: model_spec_vaccinations[cl]["finite-resource"] = [] model_spec_vaccinations[clV]["finite-resource"] = [] term = [cl, "vaccination_rate", "vaccination_priority", "one"] # one: probability that the vaccination is successful. Set to 1, because we already # account for the imperfectness of vaccinations through the theta parameters model_spec_vaccinations[clV]["finite-resource"].append(term.copy()) term[1] = "-"+term[1] model_spec_vaccinations[cl]["finite-resource"].append(term) model_spec_vaccinations ``` {'classes': ['S', 'E', 'I', 'R', 'SV', 'EV', 'IV', 'RV'], 'S': {'infection': [['I', 'S', '-beta'], ['IV', 'S', '-beta*thetaI']], 'finite-resource': [['S', '-vaccination_rate', 'vaccination_priority', 'one']]}, 'E': {'linear': [['E', '-gammaE']], 'infection': [['I', 'S', 'beta'], ['IV', 'S', 'beta*thetaI']], 'finite-resource': [['E', '-vaccination_rate', 'vaccination_priority', 'one']]}, 'I': {'linear': [['E', 'gammaE'], ['I', '-gammaI']], 'finite-resource': [['I', '-vaccination_rate', 'vaccination_priority', 'one']]}, 'R': {'linear': [['I', 'gammaI']], 'finite-resource': [['R', '-vaccination_rate', 'vaccination_priority', 'one']]}, 'SV': {'infection': [['I', 'SV', '-beta*thetaS'], ['IV', 'SV', '-beta*thetaS*thetaI']], 'finite-resource': [['S', 'vaccination_rate', 'vaccination_priority', 'one']]}, 'EV': {'infection': [['I', 'SV', 'beta*thetaS'], ['IV', 'SV', 'beta*thetaS*thetaI']], 'linear': [['EV', '-gammaE']], 'finite-resource': [['E', 'vaccination_rate', 'vaccination_priority', 'one']]}, 'IV': {'linear': [['EV', 'gammaE'], ['IV', '-gammaI']], 'finite-resource': [['I', 'vaccination_rate', 'vaccination_priority', 'one']]}, 'RV': {'linear': [['IV', 'gammaI']], 'finite-resource': [['R', 'vaccination_rate', 'vaccination_priority', 'one']]}} ```python parameters = { 'beta' : 0.05, 'gammaE' : 0.5, 'gammaI' : 0.1, 'thetaS' : 0.6, 'thetaI' : 0.6, } ``` The age-dependence of vaccinations is introduced explicitly through an age-dependent vaccination rate. Alternatively, one could have specified a scalar vaccination rate, and implicilty realised the age-dependence though age-dependent priorities. 
```python def vaccination_rate(t): rate=500*np.array([0.5*np.tanh((t-420)/10)-0.5*np.tanh((t-480)/10), 0.5*np.tanh((t-360)/10)-0.5*np.tanh((t-420)/10), 0.5*np.tanh((t-300)/20)-0.5*np.tanh((t-360)/10) ]) rate*=(rate>1) return rate def parameter_mapping(input_parameters, t): output_parameters = { 'beta' : input_parameters['beta'], 'gammaE' : input_parameters['gammaE'], 'gammaI' : input_parameters['gammaI'], 'beta*thetaS' : input_parameters['beta']*input_parameters['thetaS'], 'beta*thetaI' : input_parameters['beta']*input_parameters['thetaI'], 'beta*thetaS*thetaI' : input_parameters['beta']*input_parameters['thetaS']*input_parameters['thetaI'], 'one' : 1, 'vaccination_priority' : 1, 'vaccination_rate' : vaccination_rate(t) } return output_parameters ``` ```python t=np.linspace(200,600,1024) v=[vaccination_rate(tt) for tt in t] plt.plot(t,v) plt.xlabel('days') plt.ylabel('vaccination rate (by age)') plt.show() ``` ```python # initial conditions as a dictionary E0 = [30, 30, 30] S0 = [n-30 for n in Ni] I0 = [0, 0, 0] R0 = [0, 0, 0] x0 = { 'S' : S0, 'E' : E0, 'I' : I0, 'R' : R0 } for cl in model_spec["classes"]: x0[cl+'V'] = np.zeros(M) # duration of simulation and data file Tf = 1000; Nf=Tf+1; model = pyross.deterministic.Model(model_spec_vaccinations, parameters, M, Ni, time_dep_param_mapping=parameter_mapping) # simulate model data = model.simulate(x0, contactMatrix, Tf, Nf) ``` ```python # plot the data and obtain the epidemic curve S = np.sum(model.model_class_data('S', data), axis=1) E = np.sum(model.model_class_data('E', data), axis=1) I = np.sum(model.model_class_data('I', data), axis=1) R = np.sum(model.model_class_data('R', data), axis=1) SV = np.sum(model.model_class_data('SV', data), axis=1) EV = np.sum(model.model_class_data('EV', data), axis=1) IV = np.sum(model.model_class_data('IV', data), axis=1) RV = np.sum(model.model_class_data('RV', data), axis=1) V=SV+EV+IV+RV t = data['t'] fig = plt.figure(num=None, figsize=(10, 8), dpi=80, facecolor='w', edgecolor='k') plt.rcParams.update({'font.size': 22}) plt.semilogy(t, S+SV, '-', label='$S$', lw=4) plt.semilogy(t, E+EV, '-', label='$E$', lw=4) plt.semilogy(t, I+IV, '-', label='$I$', lw=4) plt.semilogy(t, R+RV, '-', label='$R$', lw=4) plt.semilogy(t, V, '-', label='$V$', lw=4) plt.legend(fontsize=26); plt.grid() plt.autoscale(enable=True, axis='x', tight=True) plt.ylabel('Compartment value') plt.xlabel('Days'); ``` ```python # age structure Ii = model.model_class_data('I', data) IVi = model.model_class_data('IV', data) fig = plt.figure(num=None, figsize=(10, 8), dpi=80, facecolor='w', edgecolor='k') plt.rcParams.update({'font.size': 22}) plt.stackplot(t,np.transpose(Ii+IVi),labels=('cohort 1','cohort 2','cohort 3')) plt.legend(fontsize=26); plt.grid() plt.ylabel('Infections') plt.xlabel('Days'); ``` With vaccinations, the 3rd wave is reduced, despite the lifting of lockdown. Total number of people that had been infected, by age group: ```python # unvaccinated np.round(np.sum(Ii,axis=0)*parameters['gammaI']) ``` array([13855., 7407., 2040.]) ```python # vaccinated np.round(np.sum(IVi,axis=0)*parameters['gammaI']) ``` array([2850., 2539., 852.])
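As a quick back-of-the-envelope comparison (an added sketch, not part of the original notebook), the totals printed above can be turned into the fraction of each cohort that was ever infected, with and without vaccination, using the cohort sizes `Ni = 50000` defined earlier.

```python
# Fraction of each cohort ever infected, using the totals reported above.
import numpy as np

Ni = 50000 * np.ones(3)

no_vacc   = np.array([34921., 28938., 12600.])                                   # without vaccinations
with_vacc = np.array([13855., 7407., 2040.]) + np.array([2850., 2539., 852.])    # unvaccinated + vaccinated

for label, totals in [("no vaccination  ", no_vacc), ("with vaccination", with_vacc)]:
    print(label, np.round(totals / Ni, 3))
```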
eff0c099106f089a792d705c539baecef9225e57
364,777
ipynb
Jupyter Notebook
examples/deterministic/ex01b_Model.ipynb
vishalbelsare/pyross
98dbdd7896661c790f7a9d13fda8595ddccadf04
[ "MIT" ]
null
null
null
examples/deterministic/ex01b_Model.ipynb
vishalbelsare/pyross
98dbdd7896661c790f7a9d13fda8595ddccadf04
[ "MIT" ]
null
null
null
examples/deterministic/ex01b_Model.ipynb
vishalbelsare/pyross
98dbdd7896661c790f7a9d13fda8595ddccadf04
[ "MIT" ]
null
null
null
527.89725
86,944
0.935802
true
4,198
Qwen/Qwen-72B
1. YES 2. YES
0.870597
0.793106
0.690476
__label__eng_Latn
0.597185
0.442538
# Complex plane ```{index} Complex plane ``` From the definition of complex numbers it is clear that there is a natural correspondence between the set of complex numbers $\mathbb{C}$ and points in a plane: to every complex number $z = x +iy$ we can *uniquely* assign a point $P(x, y)$ in the $\mathbb{R}^2$ plane. We call a plane in which every point has a corresponding complex number assigned to it a **complex plane** (or Argand plane). However, for simplicity we would normally simply say "in a point $z$" when we mean "in a point (of a complex plane) assigned a complex number $z$." ```python import matplotlib.pyplot as plt z = complex(3, 2) fig = plt.figure() ax = fig.add_subplot(111) plt.plot(z.real, z.imag, 'o', zorder=10) plt.plot([3, 3], [0, 2], '--k', alpha=0.8) plt.plot([0, 3], [2, 2], '--k', alpha=0.8) ax.text(2.7, -0.25, "$Re(z)$", fontsize=12) ax.text(-0.7, 1.95, "$Im(z)$", fontsize=12) ax.arrow(-1., 0, 4.5, 0, shape='full', head_width=0.1, head_length=0.2) ax.arrow(0, -1.5, 0, 4, shape='full', head_width=0.1, head_length=0.2) ax.axis('equal') ax.set_xlim(-2, 4) ax.set_ylim(-2, 4) ax.axis('off') plt.show() ``` Having represented complex numbers geometrically, let us revisit the terms we introduced in the first notebook. - Re $z$ is equal to the abscissa and Im $z$ to the ordinate of a point $P$. We therefore call the x-axis the *real axis* and the y-axis the *imaginary axis*. - The modulus of a complex number $|z|$ is equal to the distance between the corresponding point $P$ and the origin of the complex plane. In general, the distance between points $P_1$ and $P_2$ assigned to numbers $z_1$ and $z_2$ is equal to the modulus of the difference of those two numbers: $$ d(P_1, P_2) = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2} = | z_1 - z_2 |$$ - If a number $z$ is assigned a point $P(x, y)$, then the complex conjugate $z^*$ is assigned a point $P^*(x, -y)$, which is just a reflection of $P$ with respect to the real axis. - Every point $P(x, y)$ has a corresponding vector $\vec{OP}$ from the origin to $P$. The sum $z_1 + z_2$ therefore represents vector addition \\( \overrightarrow{OP_1} + \overrightarrow{OP_2} \\). Because of the vector analogy, we can also say that complex numbers satisfy the *triangle inequalities*: $$ | z_1| - |z_2| \leq |z_1 + z_2| \leq |z_1| + |z_2| $$ (comp_trig_form)= # Trigonometric form ```{sidebar} Complex plane - Argand diagram Source: [Wikipedia](https://en.wikipedia.org/wiki/Complex_plane) ``` We can also describe the complex plane in polar coordinates $(r, \varphi)$, which are related to the Cartesian coordinates $(x, y)$ through: $$ x = r \cos \varphi, \quad y = r \sin \varphi $$ This leads to the *trigonometric form* of a complex number $z$: $$ z = r(\cos \varphi + i \sin \varphi), $$ where $r$ is the **modulus** (or magnitude), equal to the absolute value of the complex number, a direct consequence of Pythagoras' theorem: $$ r = \sqrt{x^2 + y^2} = |z| $$ and the polar angle $\varphi$ is defined (up to a multiple of $2\pi$) by: $$ \tan (\varphi + 2n \pi) = \frac{y}{x}, \quad n \in \mathbb{Z}, x \neq 0. $$ The angle $\varphi$ is called the **argument** (or phase) of a complex number $z$ and we write $\text{Arg}(z) = \varphi$. A special case is the complex number $z=0$, for which $r = 0$ and the argument is not defined. Finding $\varphi$ requires some care because $\tan$ is periodic, so its inverse is multivalued. To avoid ambiguity, the simplest choice is $n = 0$, so that the interval is of length $2\pi$ and \\( - \pi < \text{arg}(z) \leq \pi \\). The value of $\text{Arg}(z)$ with $n=0$ is called the **principal value** of the argument.
With this: $$ \text{arg}(1) = 0, \quad \text{arg}(i) = \frac{\pi}{2}, \quad \text{arg}(-1) = \pi, \quad \text{arg}(-i) = -\frac{\pi}{2}, \quad \text{etc.} $$ The relationship between $\text{Arg}(z)$ and $\text{arg}(z)$ is therefore: $$ \text{Arg}(z) = \text{arg}(z) + 2n \pi, \quad n \in \mathbb{Z}.$$ Multiplying two complex numbers results in their absolute values being multiplied and their arguments being added: ```{margin} Angle addition formulae $$ \sin(\alpha + \beta) = \sin \alpha \cos \beta + \cos \alpha \sin \beta \\ \cos (\alpha + \beta) = \cos \alpha \cos \beta - \sin \alpha \sin \beta$$ ``` $$ \begin{align} z_1 \cdot z_2 & = r_1(\cos \varphi_1 + i \sin \varphi_1) \cdot r_2(\cos \varphi_2 + i \sin \varphi_2) \\ & = r_1 r_2 (\cos (\varphi_1 + \varphi_2) + i \sin(\varphi_1 + \varphi_2)) \end{align} $$ For the multiplicative inverse we have $$ \begin{align} z^{-1} & = \frac{1}{r(\cos \varphi + i \sin \varphi)} \cdot \frac{\cos \varphi - i \sin \varphi}{\cos \varphi - i \sin \varphi} = \frac{\cos \varphi - i \sin \varphi}{r( \cos^2 \varphi + \sin^2 \varphi)} \\ & = \frac{1}{r} (\cos \varphi - i \sin \varphi) \end{align} $$ ```{admonition} Properties of $|z|$ and arg($z$) Let us summarise our findings in the form of the following equalities: $$ | z_1 \cdot z_2 | = |z_1| \cdot |z_2| = r_1 r_2, \quad \text{arg}(z_1 \cdot z_2) = \text{arg}(z_1) + \text{arg}(z_2), $$ $$|z^{-1}| = |z|^{-1} = r^{-1}, \quad \text{arg}(z^{-1}) = -\text{arg}(z) $$ where $z_1, z_2 \neq 0$. By induction we get: $$ |z^n| = |z|^n, \quad \text{arg}(z^n) = n\,\text{arg}(z), \quad \forall n \in \mathbb{Z}. $$ ``` (comp_polar_form)= # Polar form ```{index} Polar form of complex numbers ``` The property \\( \text{arg}(z_1 \cdot z_2) = \text{arg}(z_1) + \text{arg}(z_2) \\) might remind us of logarithms, where $\log (a \cdot b) = \log a + \log b$. This is not a coincidence! The exponential function, which we will look at in the next chapter, allows us to write complex numbers in a *polar form*: $$ z = r e^{i \varphi} $$ ```{margin} [Euler's identity](https://en.wikipedia.org/wiki/Euler%27s_identity) For the special angle $\varphi = \pi$ the Euler formula gives Euler's identity: $$ e^{i \pi} + 1 = 0 $$ It is considered to be one of the most beautiful equations in mathematics. ``` where we have used **Euler's formula**: $$ e^{i \varphi} = \cos(\varphi) + i \sin (\varphi). $$ In this representation, certain operations become much easier. For example, $$ z_1 \cdot z_2 = r_1 r_2 e^{i(\varphi_1 + \varphi_2)}, \quad z^n = r^n e^{in \varphi} $$ ```{index} De Moivre's formula ``` for all powers $n \in \mathbb{Z}$. If we write this using a complex number with unit modulus we recover **de Moivre's formula**: $$ (\cos \varphi + i \sin \varphi)^n = (e^{i \varphi})^n = e^{in \varphi} = \cos (n \varphi) + i \sin (n \varphi). $$ <div class="admonition tip"> <p class="admonition-title">Tip: Trigonometric identities</p> Trigonometric identities are also much easier to recover this way. For example, let us think about angle addition $\theta + \varphi$. $$ e^{i(\theta + \varphi)} = e^{i \theta} e^{i \varphi} $$ We apply Euler's formula to each side.
$$\begin{align} \cos (\theta + \varphi) + i \sin(\theta + \varphi) & = (\cos \theta + i \sin \theta)(\cos \varphi + i \sin \varphi) \\ & = \cos \theta \cos \varphi + i \cos \theta \sin \varphi + i \sin \theta \cos \varphi + i^2 \sin \theta \sin \varphi \\ & = (\cos \theta \cos \varphi - \sin \theta \sin \varphi) + i(\cos \theta \sin \varphi + \sin \theta \cos \varphi) \end{align} $$ Equate real and imaginary parts on both sides: $$\cos (\theta + \varphi) = \cos \theta \cos \varphi - \sin \theta \sin \varphi \\ \sin (\theta + \varphi ) = \cos \theta \sin \varphi + \sin \theta \cos \varphi $$ The reader is encouraged to try to recover other trigonometric identities. </div>
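A short numerical sketch (added here, not part of the original notebook) checks de Moivre's formula and the polar form with Python's `math` and `cmath` modules; the angle, the power and the point $z = 3 + 2i$ are just illustrative values.

```python
# Numerical sanity checks of de Moivre's formula and the polar form.
import math
import cmath

phi, n = 0.7, 5
lhs = complex(math.cos(phi), math.sin(phi)) ** n          # (cos φ + i sin φ)^n
rhs = complex(math.cos(n * phi), math.sin(n * phi))       # cos(nφ) + i sin(nφ)
print(abs(lhs - rhs) < 1e-12)                             # de Moivre's formula

z = cmath.rect(1.0, phi)                                  # z = e^{iφ}, r = 1
print(abs(z ** n - cmath.exp(1j * n * phi)) < 1e-12)      # z^n = e^{inφ}

z = complex(3, 2)                                         # the point plotted at the start of this notebook
print(abs(z), cmath.phase(z))                             # modulus |z| and principal argument in (-π, π]
```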
abfc150704522b622e40bc24741894321bec0849
15,321
ipynb
Jupyter Notebook
notebooks/c_mathematics/complex_analysis/2_complex_plane.ipynb
primer-computational-mathematics/book
305941b4f1fc4f15d472fd11f2c6e90741fb8b64
[ "MIT" ]
3
2020-08-02T07:32:14.000Z
2021-11-16T16:40:43.000Z
notebooks/c_mathematics/complex_analysis/2_complex_plane.ipynb
primer-computational-mathematics/book
305941b4f1fc4f15d472fd11f2c6e90741fb8b64
[ "MIT" ]
5
2020-07-27T10:45:26.000Z
2020-08-12T15:09:14.000Z
notebooks/c_mathematics/complex_analysis/2_complex_plane.ipynb
primer-computational-mathematics/book
305941b4f1fc4f15d472fd11f2c6e90741fb8b64
[ "MIT" ]
4
2020-08-05T13:57:32.000Z
2022-02-02T19:03:57.000Z
61.53012
4,692
0.670713
true
2,559
Qwen/Qwen-72B
1. YES 2. YES
0.891811
0.909907
0.811465
__label__eng_Latn
0.959153
0.723638
# Gibbs sampling in 2D This is BONUS content related to Day 22, where we introduce Gibbs sampling ## Random variables (We'll use 0-indexing so we have close alignment between math and python code) * 2D random variable $z = [z_0, z_1]$ * each entry $z_d$ is a real scalar: $z_d \in \mathbb{R}$ ## Target distribution \begin{align} p^*(z_0, z_1) = \mathcal{N}\left( \left[ \begin{array}{c} 0 \\ 0 \end{array} \right], \left[ \begin{array}{c c} 1 & 0.8 \\ 0.8 & 2 \end{array} \right] \right) \end{align} ## Key takeaways * New concept: 'Gibbs sampling', which just iterates between two conditional sampling distributions: \begin{align} z^{t+1}_0 &\sim p^* (z_0 | z_1 = z^t_1) \\ z^{t+1}_1 &\sim p^* (z_1 | z_0 = z^{t+1}_0) \end{align} ## Things to remember This is a simple example to illustrate the idea of how Gibbs sampling works. There are other "better" ways of sampling from a 2d normal. # Setup ```python import numpy as np ``` ```python import matplotlib.pyplot as plt import seaborn as sns sns.set_style("whitegrid") sns.set_context("notebook", font_scale=2.0) ``` # Step 1: Prepare for Gibbs sampling ## Define functions to sample from target's conditionals ```python def draw_z0_given_z1(z1, random_state): ## First, use Bishop textbook formulas to compute the conditional mean/var mean_01 = 0.4 * z1 var_01 = 0.68 ## Then, use simple transform to obtain a sample from this conditional ## Remember, if u ~ Normal(0, 1), a "standard" normal with mean 0 variance 1, ## then using transform: x <- T(u), with T(u) = \mu + \sigma * u ## we can say x ~ Normal(\mu, \sigma^2) u_samp = random_state.randn() z0_samp = mean_01 + np.sqrt(var_01) * u_samp return z0_samp ``` ```python def draw_z1_given_z0(z0, random_state): ## First, use Bishop textbook formulas to compute conditional mean/var mean_10 = 0.8 * z0 var_10 = 1.36 ## Then, use simple transform to obtain a sample from this conditional ## Remember, if u ~ Normal(0, 1), a "standard" normal with mean 0 variance 1, ## then using transform: x <- T(u), with T(u) = \mu + \sigma * u ## we can say x ~ Normal(\mu, \sigma^2) u_samp = random_state.randn() z1_samp = mean_10 + np.sqrt(var_10) * u_samp return z1_samp ``` # Step 2: Execute the Gibbs sampling algorithm Perform 6000 iterations. Discard the first 1000 as "not yet burned in". ```python S = 6000 sample_list = list() z_D = np.zeros(2) random_state = np.random.RandomState(0) # reproducible random seeds for t in range(S): z_D[0] = draw_z0_given_z1(z_D[1], random_state) z_D[1] = draw_z1_given_z0(z_D[0], random_state) if t > 1000: sample_list.append(z_D.copy()) # save copies so we get different vectors ``` ```python z_samples_SD = np.vstack(sample_list) ``` ## Step 3: Compare to samples from built-in routines for 2D MVNormal sampling ```python Cov_22 = np.asarray([[1.0, 0.8], [0.8, 2.0]]) true_samples_SD = random_state.multivariate_normal(np.zeros(2), Cov_22, size=S-1000) ``` ```python fig, ax_grid = plt.subplots(nrows=1, ncols=2, sharex=True, sharey=True, figsize=(10,4)) ax_grid[0].plot(z_samples_SD[:,0], z_samples_SD[:,1], 'k.') ax_grid[0].set_title('Gibbs sampler') ax_grid[0].set_aspect('equal', 'box'); ax_grid[1].plot(true_samples_SD[:,0], true_samples_SD[:,1], 'k.') ax_grid[1].set_title('np.random.multivariate_normal') ax_grid[1].set_aspect('equal', 'box'); ax_grid[1].set_xlim([-6, 6]); ax_grid[1].set_ylim([-6, 6]); ``` ```python ```
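As an extra sanity check (an added sketch, not in the original notebook), we can also compare the empirical mean and covariance of the Gibbs draws with the target parameters; it only uses the arrays `z_samples_SD` and `Cov_22` defined above.

```python
# The sample mean and covariance of the Gibbs draws should be close to the
# target Normal(0, Cov_22).
print("sample mean:", np.round(np.mean(z_samples_SD, axis=0), 3))
print("sample cov:\n", np.round(np.cov(z_samples_SD.T), 3))
print("target cov:\n", Cov_22)
```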
1f7380ced705a12595b94e44213102fbd0e0788a
50,341
ipynb
Jupyter Notebook
notebooks/GibbsSampling.ipynb
tufts-ml-courses/comp136-spr-20s-assignments-
c53cce8e376862eeef395aa0b55eca8b284a0115
[ "MIT" ]
1
2020-04-18T21:03:04.000Z
2020-04-18T21:03:04.000Z
notebooks/GibbsSampling.ipynb
tufts-ml-courses/comp136-spr-20s-assignments-
c53cce8e376862eeef395aa0b55eca8b284a0115
[ "MIT" ]
null
null
null
notebooks/GibbsSampling.ipynb
tufts-ml-courses/comp136-spr-20s-assignments-
c53cce8e376862eeef395aa0b55eca8b284a0115
[ "MIT" ]
6
2020-01-28T22:47:07.000Z
2020-04-12T23:56:18.000Z
208.020661
43,948
0.908901
true
1,145
Qwen/Qwen-72B
1. YES 2. YES
0.928409
0.914901
0.849402
__label__eng_Latn
0.806461
0.811779
# OIS Sensitivity **Definition:** a zero-coupon curve is an ordered collection of tenors and interest rates, where each rate is the appropriate rate for discounting to present value a cash flow at the tenor corresponding to that rate. Consider a financial instrument whose value depends on a zero-coupon curve, for example an interest rate swap. Mathematically, this dependence is expressed as: $$ V = V\left(z_1,\ldots,z_n;\alpha\right) $$ where $z_1,\ldots,z_n$ are the values of the curve's rates and $\alpha$ represents a vector of additional parameters (such as the coupon rate of a bond or an exchange rate in a cross-currency swap). **Definition:** the delta $\Delta_i$ of $V$ with respect to $z_i$ is defined as: $$ \Delta_i=\frac{\partial V}{\partial z_i} $$ **Definition:** the sensitivity of $V$ to a change $\delta_i$ (positive or negative) in the rate $z_i$ is defined as: $$ S\left(V,z_i,\delta_i\right)=V\left(z_1,\ldots,z_i+\delta_i,\ldots,z_n\right)-V\left(z_1,\ldots,z_i,\ldots,z_n\right) $$ When one wants the sensitivity to a change $\delta_i$ in the rate $z_i$ where $\delta_i$ represents an unsigned magnitude, the following definition is used instead: **Definition:** the sensitivity of $V$ to a change of magnitude $\delta_i$ in the rate $z_i$ is defined as: $$ \overline{S}\left(V,z_i,\delta_i\right)=\frac{V\left(z_1,\ldots ,z_i+\delta_i,\ldots,z_n\right)-V\left(z_1,\ldots,z_i-\delta_i,\ldots,z_n\right)}{2} $$ Sometimes, when the sensitivity to a movement of small magnitude is required, it is convenient to use the following approximation: $$ \overline{S}\left(V,z_i,\delta_i\right)\approx\Delta_i \cdot\delta_i $$ This is the case, for instance, when the sensitivity has to be computed for a large number of trades. Note that this approximation can be guaranteed to be valid when the function $V$ satisfies the regularity requirements needed to apply [Taylor's theorem](https://en.wikipedia.org/wiki/Taylor%27s_theorem).
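The following minimal sketch (added here for illustration; it does not use the `Qcf` library) compares the exact bump-and-reprice sensitivity $S$ with the first-order approximation $\Delta_i\cdot\delta_i$ for a toy pricing function of a single zero rate. The cash flow, tenor and rate are arbitrary illustrative values.

```python
# Toy illustration: exact sensitivity vs. first-order approximation for a
# single cash flow of 100 paid in 2 years, discounted with a compounded rate z.
import numpy as np

def V(z, t=2.0, flow=100.0):
    """Present value of one cash flow discounted with a compounded zero rate z."""
    return flow / (1.0 + z) ** t

z0 = 0.03
delta = 0.0001  # 1 basis point

exact = V(z0 + delta) - V(z0)                       # S(V, z, delta)
Delta = (V(z0 + 1e-8) - V(z0 - 1e-8)) / 2e-8        # numerical dV/dz
approx = Delta * delta                              # first-order approximation

print(f"exact bump-and-reprice : {exact:.6f}")
print(f"Delta * delta          : {approx:.6f}")
```

For a 1 bp bump the two numbers agree to several decimal places, which is why the approximation is attractive when a portfolio contains many trades.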
## Configuración ### Librerías ```python from finrisk import QC_Financial_3 as Qcf import modules.auxiliary as aux from functools import partial from enum import Enum import pandas as pd ``` ### Variables Globales ```python class BusCal(Enum): NY = 1 SCL = 2 ``` ```python def get_cal(code: BusCal) -> Qcf.BusinessCalendar: """ """ if code == BusCal.NY: cal = Qcf.BusinessCalendar(Qcf.QCDate(1, 1, 2020), 20) for agno in range(2020, 2071): f = Qcf.QCDate(12, 10, agno) if f.week_day() == Qcf.WeekDay.SAT: cal.add_holiday(Qcf.QCDate(14, 10, agno)) elif f.week_day() == Qcf.WeekDay.SUN: cal.add_holiday(Qcf.QCDate(13, 10, agno)) elif f.week_day() == Qcf.WeekDay.MON: cal.add_holiday(Qcf.QCDate(12, 10, agno)) elif f.week_day() == Qcf.WeekDay.TUE: cal.add_holiday(Qcf.QCDate(11, 10, agno)) elif f.week_day() == Qcf.WeekDay.WED: cal.add_holiday(Qcf.QCDate(10, 10, agno)) elif f.week_day() == Qcf.WeekDay.THU: cal.add_holiday(Qcf.QCDate(9, 10, agno)) else: cal.add_holiday(Qcf.QCDate(8, 10, agno)) cal.add_holiday(Qcf.QCDate(15, 2, 2021)) return cal ``` ```python get_cal(BusCal.NY) ``` <finrisk.QC_Financial_3.BusinessCalendar at 0x7f63b7087c38> ```python frmt = { 'tasa': '{:.6%}', 'df': '{:.6%}', 'valor_tasa': '{:.4%}', 'spread': '{:.4%}', 'nominal': '{:,.0f}', 'interes': '{:,.0f}', 'amortizacion': '{:,.0f}', 'flujo': '{:,.4f}', } ``` ```python class TypeOis(Enum): SOFR = 1 ICP = 2 ``` ```python type_ois_template = { TypeOis.SOFR: { 'currency': Qcf.QCUSD(), 'periodicity': Qcf.Tenor('1Y'), 'stub_period': Qcf.StubPeriod.SHORTFRONT, 'settlement_lag': 0, 'calendar': BusCal.NY, 'bus_adj_rule': Qcf.BusyAdjRules.MODFOLLOW, 'amort_is_cashflow': True, 'fixed_rate': Qcf.QCInterestRate(0.0, Qcf.QCAct360(), Qcf.QCLinearWf()), }, TypeOis.ICP: { 'currency': Qcf.QCCLP(), 'periodicity': Qcf.Tenor('6M'), 'stub_period': Qcf.StubPeriod.SHORTFRONT, 'settlement_lag': 0, 'calendar': BusCal.SCL, 'bus_adj_rule': Qcf.BusyAdjRules.MODFOLLOW, 'amort_is_cashflow': True, 'fixed_rate': Qcf.QCInterestRate(0.0, Qcf.QCAct360(), Qcf.QCLinearWf()), } } ``` ## Construye Curva Cero Cupón Se importa la data de la curva cupón cero que fue construida en el notebook 5. ```python df_curva = pd.read_excel('data/20201012_built_sofr_zero.xlsx') ``` ```python def get_curve_from_dataframe(yf: Qcf.QCYearFraction, wf: Qcf.QCWealthFactor, df_curva: pd.DataFrame) -> Qcf.ZeroCouponCurve: """ Retorna un objeto Qcf.ZeroCouponCurve. Esta función requiere que `df_curva` tenga una columna de nombre 'plazo' y una columna de nombre 'tasa'. Se usa interpolación lineal en la curva que se retorna. 
""" plazos = Qcf.long_vec() tasas = Qcf.double_vec() for row in df_curva.itertuples(): plazos.append(row.plazo) tasas.append(row.tasa) curva = Qcf.QCCurve(plazos, tasas) curva = Qcf.QCLinearInterpolator(curva) tipo_tasa = Qcf.QCInterestRate(0.0, yf, wf) curva = Qcf.ZeroCouponCurve(curva, tipo_tasa) return curva ``` ```python df_curva.head().style.format(frmt) ``` <style type="text/css" > </style><table id="T_ce3765e4_1314_11eb_833d_02cba411ec9d" ><thead> <tr> <th class="blank level0" ></th> <th class="col_heading level0 col0" >plazo</th> <th class="col_heading level0 col1" >tasa</th> <th class="col_heading level0 col2" >df</th> </tr></thead><tbody> <tr> <th id="T_ce3765e4_1314_11eb_833d_02cba411ec9dlevel0_row0" class="row_heading level0 row0" >0</th> <td id="T_ce3765e4_1314_11eb_833d_02cba411ec9drow0_col0" class="data row0 col0" >1</td> <td id="T_ce3765e4_1314_11eb_833d_02cba411ec9drow0_col1" class="data row0 col1" >0.081111%</td> <td id="T_ce3765e4_1314_11eb_833d_02cba411ec9drow0_col2" class="data row0 col2" >99.999778%</td> </tr> <tr> <th id="T_ce3765e4_1314_11eb_833d_02cba411ec9dlevel0_row1" class="row_heading level0 row1" >1</th> <td id="T_ce3765e4_1314_11eb_833d_02cba411ec9drow1_col0" class="data row1 col0" >7</td> <td id="T_ce3765e4_1314_11eb_833d_02cba411ec9drow1_col1" class="data row1 col1" >0.084051%</td> <td id="T_ce3765e4_1314_11eb_833d_02cba411ec9drow1_col2" class="data row1 col2" >99.998388%</td> </tr> <tr> <th id="T_ce3765e4_1314_11eb_833d_02cba411ec9dlevel0_row2" class="row_heading level0 row2" >2</th> <td id="T_ce3765e4_1314_11eb_833d_02cba411ec9drow2_col0" class="data row2 col0" >14</td> <td id="T_ce3765e4_1314_11eb_833d_02cba411ec9drow2_col1" class="data row2 col1" >0.077967%</td> <td id="T_ce3765e4_1314_11eb_833d_02cba411ec9drow2_col2" class="data row2 col2" >99.997010%</td> </tr> <tr> <th id="T_ce3765e4_1314_11eb_833d_02cba411ec9dlevel0_row3" class="row_heading level0 row3" >3</th> <td id="T_ce3765e4_1314_11eb_833d_02cba411ec9drow3_col0" class="data row3 col0" >21</td> <td id="T_ce3765e4_1314_11eb_833d_02cba411ec9drow3_col1" class="data row3 col1" >0.077358%</td> <td id="T_ce3765e4_1314_11eb_833d_02cba411ec9drow3_col2" class="data row3 col2" >99.995549%</td> </tr> <tr> <th id="T_ce3765e4_1314_11eb_833d_02cba411ec9dlevel0_row4" class="row_heading level0 row4" >4</th> <td id="T_ce3765e4_1314_11eb_833d_02cba411ec9drow4_col0" class="data row4 col0" >33</td> <td id="T_ce3765e4_1314_11eb_833d_02cba411ec9drow4_col1" class="data row4 col1" >0.078067%</td> <td id="T_ce3765e4_1314_11eb_833d_02cba411ec9drow4_col2" class="data row4 col2" >99.992942%</td> </tr> </tbody></table> ```python zcc = get_curve_from_dataframe(Qcf.QCAct365(),Qcf.QCCompoundWf(), df_curva) ``` <finrisk.QC_Financial_3.ZeroCouponCurve at 0x7f63b5e4fbc8> Algunos métodos del objeto`zcc`. 
```python plazo = 900 print(f"Tasa a {plazo} días es igual a {zcc.get_rate_at(plazo):.4%}") print(f"Factor de descuento a {plazo} días es igual a {zcc.get_discount_factor_at(plazo):.6%}") ``` Tasa a 900 días es igual a 0.0652% Factor de descuento a 900 días es igual a 99.839384% ## Valorización ```python def get_ois_using_template(template, type_ois: TypeOis, rp: Qcf.RecPay, notional: float, start_date: Qcf.QCDate, tenor: Qcf.Tenor, fixed_rate_value: float, spread: float, gearing: float): """ """ template_dict = template[type_ois] meses = tenor.get_years() * 12 + tenor.get_months() end_date = start_date.add_months(meses) template_dict['fixed_rate'].set_value(fixed_rate_value) es_bono = False # Construye la pata fija fixed_rate_leg = Qcf.LegFactory.build_bullet_fixed_rate_leg( rp, start_date, end_date, template_dict['bus_adj_rule'], template_dict['periodicity'], template_dict['stub_period'], get_cal(template_dict['calendar']), template_dict['settlement_lag'], notional, template_dict['amort_is_cashflow'], template_dict['fixed_rate'], template_dict['currency'], es_bono) # Construye la pata ois rp = Qcf.RecPay.PAY if rp == Qcf.RecPay.RECEIVE else Qcf.RecPay.RECEIVE icp_clp_leg = Qcf.LegFactory.build_bullet_icp_clp2_leg( rp, start_date, end_date, template_dict['bus_adj_rule'], template_dict['periodicity'], template_dict['stub_period'], get_cal(template_dict['calendar']), template_dict['settlement_lag'], notional, template_dict['amort_is_cashflow'], spread, gearing, True ) for i in range(icp_clp_leg.size()): cshflw = icp_clp_leg.get_cashflow_at(i) cshflw.set_start_date_icp(1.0) cshflw.set_end_date_icp(1.0) return (fixed_rate_leg, icp_clp_leg) ``` ### Operación Ejemplo ```python op = get_ois_using_template( type_ois_template, TypeOis.SOFR, Qcf.RecPay.RECEIVE, 10000000, Qcf.QCDate(14, 10, 2020), Qcf.Tenor('2Y'), .01, 0.0, 1.0 ) op ``` (<finrisk.QC_Financial_3.Leg at 0x7f63b5e8b578>, <finrisk.QC_Financial_3.Leg at 0x7f63b5e8b1d0>) #### Digresión: `functools.partial` Supongamos que estoy en una situación en la que sólo quiero construir OIS de *SOFR*. Me gustaría no tener que repetir los argumentos `type_ois_template` y `TypeOis.SOFR` cada vez que llamo la función `get_ois_using_template`. Puedo definir una nueva función de la siguiente forma: ```python get_ois_sofr = partial(get_ois_using_template, type_ois_template, TypeOis.SOFR) ``` Con esta nueva función, `get_ois_sofr`, ahora puedo construir la operación `op` de la siguiente forma: ```python op = get_ois_sofr( Qcf.RecPay.RECEIVE, 10000000, Qcf.QCDate(14, 10, 2020), Qcf.Tenor('2Y'), .01, 0.0, 1.0 ) ``` #### Continuamos (fin digresión ...) 
```python aux.show_leg(op[0], 'FixedRateCashflow', '').style.format(frmt) ``` <style type="text/css" > </style><table id="T_2c89ea4a_1324_11eb_833d_02cba411ec9d" ><thead> <tr> <th class="blank level0" ></th> <th class="col_heading level0 col0" >fecha_inicial</th> <th class="col_heading level0 col1" >fecha_final</th> <th class="col_heading level0 col2" >fecha_pago</th> <th class="col_heading level0 col3" >nominal</th> <th class="col_heading level0 col4" >amortizacion</th> <th class="col_heading level0 col5" >interes</th> <th class="col_heading level0 col6" >amort_es_flujo</th> <th class="col_heading level0 col7" >flujo</th> <th class="col_heading level0 col8" >moneda</th> <th class="col_heading level0 col9" >valor_tasa</th> <th class="col_heading level0 col10" >tipo_tasa</th> </tr></thead><tbody> <tr> <th id="T_2c89ea4a_1324_11eb_833d_02cba411ec9dlevel0_row0" class="row_heading level0 row0" >0</th> <td id="T_2c89ea4a_1324_11eb_833d_02cba411ec9drow0_col0" class="data row0 col0" >2020-10-14</td> <td id="T_2c89ea4a_1324_11eb_833d_02cba411ec9drow0_col1" class="data row0 col1" >2021-10-14</td> <td id="T_2c89ea4a_1324_11eb_833d_02cba411ec9drow0_col2" class="data row0 col2" >2021-10-14</td> <td id="T_2c89ea4a_1324_11eb_833d_02cba411ec9drow0_col3" class="data row0 col3" >10,000,000</td> <td id="T_2c89ea4a_1324_11eb_833d_02cba411ec9drow0_col4" class="data row0 col4" >0</td> <td id="T_2c89ea4a_1324_11eb_833d_02cba411ec9drow0_col5" class="data row0 col5" >101,389</td> <td id="T_2c89ea4a_1324_11eb_833d_02cba411ec9drow0_col6" class="data row0 col6" >True</td> <td id="T_2c89ea4a_1324_11eb_833d_02cba411ec9drow0_col7" class="data row0 col7" >101,388.8889</td> <td id="T_2c89ea4a_1324_11eb_833d_02cba411ec9drow0_col8" class="data row0 col8" >USD</td> <td id="T_2c89ea4a_1324_11eb_833d_02cba411ec9drow0_col9" class="data row0 col9" >1.0000%</td> <td id="T_2c89ea4a_1324_11eb_833d_02cba411ec9drow0_col10" class="data row0 col10" >LinAct360</td> </tr> <tr> <th id="T_2c89ea4a_1324_11eb_833d_02cba411ec9dlevel0_row1" class="row_heading level0 row1" >1</th> <td id="T_2c89ea4a_1324_11eb_833d_02cba411ec9drow1_col0" class="data row1 col0" >2021-10-14</td> <td id="T_2c89ea4a_1324_11eb_833d_02cba411ec9drow1_col1" class="data row1 col1" >2022-10-14</td> <td id="T_2c89ea4a_1324_11eb_833d_02cba411ec9drow1_col2" class="data row1 col2" >2022-10-14</td> <td id="T_2c89ea4a_1324_11eb_833d_02cba411ec9drow1_col3" class="data row1 col3" >10,000,000</td> <td id="T_2c89ea4a_1324_11eb_833d_02cba411ec9drow1_col4" class="data row1 col4" >10,000,000</td> <td id="T_2c89ea4a_1324_11eb_833d_02cba411ec9drow1_col5" class="data row1 col5" >101,389</td> <td id="T_2c89ea4a_1324_11eb_833d_02cba411ec9drow1_col6" class="data row1 col6" >True</td> <td id="T_2c89ea4a_1324_11eb_833d_02cba411ec9drow1_col7" class="data row1 col7" >10,101,388.8889</td> <td id="T_2c89ea4a_1324_11eb_833d_02cba411ec9drow1_col8" class="data row1 col8" >USD</td> <td id="T_2c89ea4a_1324_11eb_833d_02cba411ec9drow1_col9" class="data row1 col9" >1.0000%</td> <td id="T_2c89ea4a_1324_11eb_833d_02cba411ec9drow1_col10" class="data row1 col10" >LinAct360</td> </tr> </tbody></table> ```python aux.show_leg(op[1], 'IcpClpCashflow', '').style.format(frmt) ``` <style type="text/css" > </style><table id="T_2c89ea4b_1324_11eb_833d_02cba411ec9d" ><thead> <tr> <th class="blank level0" ></th> <th class="col_heading level0 col0" >fecha_inicial</th> <th class="col_heading level0 col1" >fecha_final</th> <th class="col_heading level0 col2" >fecha_pago</th> <th class="col_heading level0 col3" 
>nominal</th> <th class="col_heading level0 col4" >amortizacion</th> <th class="col_heading level0 col5" >amort_es_flujo</th> <th class="col_heading level0 col6" >flujo</th> <th class="col_heading level0 col7" >moneda</th> <th class="col_heading level0 col8" >icp_inicial</th> <th class="col_heading level0 col9" >icp_final</th> <th class="col_heading level0 col10" >valor_tasa</th> <th class="col_heading level0 col11" >interes</th> <th class="col_heading level0 col12" >spread</th> <th class="col_heading level0 col13" >gearing</th> <th class="col_heading level0 col14" >tipo_tasa</th> </tr></thead><tbody> <tr> <th id="T_2c89ea4b_1324_11eb_833d_02cba411ec9dlevel0_row0" class="row_heading level0 row0" >0</th> <td id="T_2c89ea4b_1324_11eb_833d_02cba411ec9drow0_col0" class="data row0 col0" >2020-10-14</td> <td id="T_2c89ea4b_1324_11eb_833d_02cba411ec9drow0_col1" class="data row0 col1" >2021-10-14</td> <td id="T_2c89ea4b_1324_11eb_833d_02cba411ec9drow0_col2" class="data row0 col2" >2021-10-14</td> <td id="T_2c89ea4b_1324_11eb_833d_02cba411ec9drow0_col3" class="data row0 col3" >-10,000,000</td> <td id="T_2c89ea4b_1324_11eb_833d_02cba411ec9drow0_col4" class="data row0 col4" >0</td> <td id="T_2c89ea4b_1324_11eb_833d_02cba411ec9drow0_col5" class="data row0 col5" >True</td> <td id="T_2c89ea4b_1324_11eb_833d_02cba411ec9drow0_col6" class="data row0 col6" >0.0000</td> <td id="T_2c89ea4b_1324_11eb_833d_02cba411ec9drow0_col7" class="data row0 col7" >CLP</td> <td id="T_2c89ea4b_1324_11eb_833d_02cba411ec9drow0_col8" class="data row0 col8" >1.000000</td> <td id="T_2c89ea4b_1324_11eb_833d_02cba411ec9drow0_col9" class="data row0 col9" >1.000000</td> <td id="T_2c89ea4b_1324_11eb_833d_02cba411ec9drow0_col10" class="data row0 col10" >0.0000%</td> <td id="T_2c89ea4b_1324_11eb_833d_02cba411ec9drow0_col11" class="data row0 col11" >-0</td> <td id="T_2c89ea4b_1324_11eb_833d_02cba411ec9drow0_col12" class="data row0 col12" >0.0000%</td> <td id="T_2c89ea4b_1324_11eb_833d_02cba411ec9drow0_col13" class="data row0 col13" >1.000000</td> <td id="T_2c89ea4b_1324_11eb_833d_02cba411ec9drow0_col14" class="data row0 col14" >LinAct360</td> </tr> <tr> <th id="T_2c89ea4b_1324_11eb_833d_02cba411ec9dlevel0_row1" class="row_heading level0 row1" >1</th> <td id="T_2c89ea4b_1324_11eb_833d_02cba411ec9drow1_col0" class="data row1 col0" >2021-10-14</td> <td id="T_2c89ea4b_1324_11eb_833d_02cba411ec9drow1_col1" class="data row1 col1" >2022-10-14</td> <td id="T_2c89ea4b_1324_11eb_833d_02cba411ec9drow1_col2" class="data row1 col2" >2022-10-14</td> <td id="T_2c89ea4b_1324_11eb_833d_02cba411ec9drow1_col3" class="data row1 col3" >-10,000,000</td> <td id="T_2c89ea4b_1324_11eb_833d_02cba411ec9drow1_col4" class="data row1 col4" >-10,000,000</td> <td id="T_2c89ea4b_1324_11eb_833d_02cba411ec9drow1_col5" class="data row1 col5" >True</td> <td id="T_2c89ea4b_1324_11eb_833d_02cba411ec9drow1_col6" class="data row1 col6" >-10,000,000.0000</td> <td id="T_2c89ea4b_1324_11eb_833d_02cba411ec9drow1_col7" class="data row1 col7" >CLP</td> <td id="T_2c89ea4b_1324_11eb_833d_02cba411ec9drow1_col8" class="data row1 col8" >1.000000</td> <td id="T_2c89ea4b_1324_11eb_833d_02cba411ec9drow1_col9" class="data row1 col9" >1.000000</td> <td id="T_2c89ea4b_1324_11eb_833d_02cba411ec9drow1_col10" class="data row1 col10" >0.0000%</td> <td id="T_2c89ea4b_1324_11eb_833d_02cba411ec9drow1_col11" class="data row1 col11" >-0</td> <td id="T_2c89ea4b_1324_11eb_833d_02cba411ec9drow1_col12" class="data row1 col12" >0.0000%</td> <td id="T_2c89ea4b_1324_11eb_833d_02cba411ec9drow1_col13" 
class="data row1 col13" >1.000000</td> <td id="T_2c89ea4b_1324_11eb_833d_02cba411ec9drow1_col14" class="data row1 col14" >LinAct360</td> </tr> </tbody></table> #### Valor Presente Pata Fija ```python vp = Qcf.PresentValue() ``` ```python fecha_val = Qcf.QCDate(14, 10, 2020) ``` ```python vp_fija = vp.pv(fecha_val, op[0], zcc) print(f'El valor presente de la pata fija es: USD {vp_fija:,.8f}') ``` El valor presente de la pata fija es: USD 10,191,249.70919298 **Ejercicio:** Replique el valor de la pata fija utilizando los flujos y los factores de descuento que obtiene de la curva `zcc`. ```python # Se dan de alta las fechas finales de ambos cupones fecha1 = Qcf.QCDate(14, 10, 2021) fecha2 = Qcf.QCDate(14, 10, 2022) # O de forma más automática: fecha1 = op[0].get_cashflow_at(0).get_settlement_date() fecha2 = op[0].get_cashflow_at(1).get_settlement_date() # Se calcula el número de días entre la fecha de valorización (fecha_val) y las fechas # finales de ambos cupones. plazo1 = fecha_val.day_diff(fecha1) plazo2 = fecha_val.day_diff(fecha2) # Utilizando la curva zcc se calculan los df a esos plazos df1 = zcc.get_discount_factor_at(plazo1) df2 = zcc.get_discount_factor_at(plazo2) # Se obtienen los flujos totales (interés y amortización) de ambos cupones flujo1 = op[0].get_cashflow_at(0).amount() flujo2 = op[0].get_cashflow_at(1).amount() # Finalmente, se calcula el valor presente como el producto (escalar) entre los df y los flujos. check_vp = df1 * flujo1 + df2 * flujo2 # Se muestra el resultado. print(f'El valor presente a mano es: {check_vp:,.8f}') ``` El valor presente a mano es: 10,191,249.70919298 #### Valor Presente Pata Flotante ```python fwd = Qcf.ForwardRates() ``` ```python print(f'VP: {vp.pv(fecha_val, op[1], zcc):,.2f}') ``` VP: -9,988,658.09 ```python print(f'{df2 * 10000000:,.2f}') ``` 9,988,658.09 ```python fwd.set_rates_icp_clp_leg(fecha_val, 1.0, op[1], zcc) ``` ```python aux.show_leg(op[1], 'IcpClpCashflow', '').style.format(frmt) ``` <style type="text/css" > </style><table id="T_2c89ea4c_1324_11eb_833d_02cba411ec9d" ><thead> <tr> <th class="blank level0" ></th> <th class="col_heading level0 col0" >fecha_inicial</th> <th class="col_heading level0 col1" >fecha_final</th> <th class="col_heading level0 col2" >fecha_pago</th> <th class="col_heading level0 col3" >nominal</th> <th class="col_heading level0 col4" >amortizacion</th> <th class="col_heading level0 col5" >amort_es_flujo</th> <th class="col_heading level0 col6" >flujo</th> <th class="col_heading level0 col7" >moneda</th> <th class="col_heading level0 col8" >icp_inicial</th> <th class="col_heading level0 col9" >icp_final</th> <th class="col_heading level0 col10" >valor_tasa</th> <th class="col_heading level0 col11" >interes</th> <th class="col_heading level0 col12" >spread</th> <th class="col_heading level0 col13" >gearing</th> <th class="col_heading level0 col14" >tipo_tasa</th> </tr></thead><tbody> <tr> <th id="T_2c89ea4c_1324_11eb_833d_02cba411ec9dlevel0_row0" class="row_heading level0 row0" >0</th> <td id="T_2c89ea4c_1324_11eb_833d_02cba411ec9drow0_col0" class="data row0 col0" >2020-10-14</td> <td id="T_2c89ea4c_1324_11eb_833d_02cba411ec9drow0_col1" class="data row0 col1" >2021-10-14</td> <td id="T_2c89ea4c_1324_11eb_833d_02cba411ec9drow0_col2" class="data row0 col2" >2021-10-14</td> <td id="T_2c89ea4c_1324_11eb_833d_02cba411ec9drow0_col3" class="data row0 col3" >-10,000,000</td> <td id="T_2c89ea4c_1324_11eb_833d_02cba411ec9drow0_col4" class="data row0 col4" >0</td> <td 
id="T_2c89ea4c_1324_11eb_833d_02cba411ec9drow0_col5" class="data row0 col5" >True</td> <td id="T_2c89ea4c_1324_11eb_833d_02cba411ec9drow0_col6" class="data row0 col6" >-7,023.7778</td> <td id="T_2c89ea4c_1324_11eb_833d_02cba411ec9drow0_col7" class="data row0 col7" >CLP</td> <td id="T_2c89ea4c_1324_11eb_833d_02cba411ec9drow0_col8" class="data row0 col8" >1.000000</td> <td id="T_2c89ea4c_1324_11eb_833d_02cba411ec9drow0_col9" class="data row0 col9" >1.000702</td> <td id="T_2c89ea4c_1324_11eb_833d_02cba411ec9drow0_col10" class="data row0 col10" >0.0700%</td> <td id="T_2c89ea4c_1324_11eb_833d_02cba411ec9drow0_col11" class="data row0 col11" >-7,097</td> <td id="T_2c89ea4c_1324_11eb_833d_02cba411ec9drow0_col12" class="data row0 col12" >0.0000%</td> <td id="T_2c89ea4c_1324_11eb_833d_02cba411ec9drow0_col13" class="data row0 col13" >1.000000</td> <td id="T_2c89ea4c_1324_11eb_833d_02cba411ec9drow0_col14" class="data row0 col14" >LinAct360</td> </tr> <tr> <th id="T_2c89ea4c_1324_11eb_833d_02cba411ec9dlevel0_row1" class="row_heading level0 row1" >1</th> <td id="T_2c89ea4c_1324_11eb_833d_02cba411ec9drow1_col0" class="data row1 col0" >2021-10-14</td> <td id="T_2c89ea4c_1324_11eb_833d_02cba411ec9drow1_col1" class="data row1 col1" >2022-10-14</td> <td id="T_2c89ea4c_1324_11eb_833d_02cba411ec9drow1_col2" class="data row1 col2" >2022-10-14</td> <td id="T_2c89ea4c_1324_11eb_833d_02cba411ec9drow1_col3" class="data row1 col3" >-10,000,000</td> <td id="T_2c89ea4c_1324_11eb_833d_02cba411ec9drow1_col4" class="data row1 col4" >-10,000,000</td> <td id="T_2c89ea4c_1324_11eb_833d_02cba411ec9drow1_col5" class="data row1 col5" >True</td> <td id="T_2c89ea4c_1324_11eb_833d_02cba411ec9drow1_col6" class="data row1 col6" >-10,004,327.9717</td> <td id="T_2c89ea4c_1324_11eb_833d_02cba411ec9drow1_col7" class="data row1 col7" >CLP</td> <td id="T_2c89ea4c_1324_11eb_833d_02cba411ec9drow1_col8" class="data row1 col8" >1.000702</td> <td id="T_2c89ea4c_1324_11eb_833d_02cba411ec9drow1_col9" class="data row1 col9" >1.001135</td> <td id="T_2c89ea4c_1324_11eb_833d_02cba411ec9drow1_col10" class="data row1 col10" >0.0400%</td> <td id="T_2c89ea4c_1324_11eb_833d_02cba411ec9drow1_col11" class="data row1 col11" >-4,056</td> <td id="T_2c89ea4c_1324_11eb_833d_02cba411ec9drow1_col12" class="data row1 col12" >0.0000%</td> <td id="T_2c89ea4c_1324_11eb_833d_02cba411ec9drow1_col13" class="data row1 col13" >1.000000</td> <td id="T_2c89ea4c_1324_11eb_833d_02cba411ec9drow1_col14" class="data row1 col14" >LinAct360</td> </tr> </tbody></table> ```python vp_flot = vp.pv(fecha_val, op[1], zcc) print(f'El valor presente de la pata flotante es: USD {vp_flot:,.2f}') ``` El valor presente de la pata flotante es: USD -10,000,000.00 ¿Porqué sabíamos que tenía que dar 10,000,000.00? Hint: la respuesta está en la construcción de la curva cero OIS. ## Sensibilidad En `QC_Financial_3` al calcular el valor presente, también se calculan las derivadas del valor presente respecto a cada uno de los vértices de la curva. ### Pata Fija f(g(x)) = y --> dy/dx = f'(g(x)) . g'(x) ```python vp.pv(fecha_val, op[0], zcc) der = vp.get_derivatives() ``` Con esas derivadas, se puede calcular la sensibilidad a cada vértice de la curva cupón cero para un movimiento de 1 punto básico. 
```python print(type(der)) ``` <class 'finrisk.QC_Financial_3.double_vec'> ```python delta = .0001 total = 0 for i, d in enumerate(der): total += d * delta print(f"Sensibilidad en {i}: {d * delta: 0,.2f}") print(f"Sensibilidad total: {total:,.2f}") ``` Sensibilidad en 0: 0.00 Sensibilidad en 1: 0.00 Sensibilidad en 2: 0.00 Sensibilidad en 3: 0.00 Sensibilidad en 4: 0.00 Sensibilidad en 5: 0.00 Sensibilidad en 6: 0.00 Sensibilidad en 7: 0.00 Sensibilidad en 8: 0.00 Sensibilidad en 9: 0.00 Sensibilidad en 10: 0.00 Sensibilidad en 11: 0.00 Sensibilidad en 12: 0.00 Sensibilidad en 13: 0.00 Sensibilidad en 14: 0.00 Sensibilidad en 15: -10.12 Sensibilidad en 16: 0.00 Sensibilidad en 17: -2,016.84 Sensibilidad en 18: 0.00 Sensibilidad en 19: 0.00 Sensibilidad en 20: 0.00 Sensibilidad en 21: 0.00 Sensibilidad en 22: 0.00 Sensibilidad en 23: 0.00 Sensibilidad en 24: 0.00 Sensibilidad en 25: 0.00 Sensibilidad en 26: 0.00 Sensibilidad en 27: 0.00 Sensibilidad en 28: 0.00 Sensibilidad en 29: 0.00 Sensibilidad en 30: 0.00 Sensibilidad en 31: 0.00 Sensibilidad en 32: 0.00 Sensibilidad total: -2,026.97 **Ejercicio:** Verifique por diferencias finitas centrales la sensibilidad en el vértice 17. La aproximación de la derivada por diferencias finitas centrales es: $$ \begin{equation} f^{\prime}\left(x\right)\approx\frac{f\left(x+h\right)-f\left(x-h\right)}{2h} \end{equation} $$ ### Pata Flotante ```python vp.pv(fecha_val, op[1], zcc) der = vp.get_derivatives() ``` ```python total = 0 for i, d in enumerate(der): total += d * delta print(f"Sensibilidad en {i}: {d * delta: 0,.2f}") print(f"Sensibilidad total: {total:,.2f}") ``` Sensibilidad en 0: 0.00 Sensibilidad en 1: 0.00 Sensibilidad en 2: 0.00 Sensibilidad en 3: 0.00 Sensibilidad en 4: 0.00 Sensibilidad en 5: 0.00 Sensibilidad en 6: 0.00 Sensibilidad en 7: 0.00 Sensibilidad en 8: 0.00 Sensibilidad en 9: 0.00 Sensibilidad en 10: 0.00 Sensibilidad en 11: 0.00 Sensibilidad en 12: 0.00 Sensibilidad en 13: 0.00 Sensibilidad en 14: 0.00 Sensibilidad en 15: 0.70 Sensibilidad en 16: 0.00 Sensibilidad en 17: 1,997.46 Sensibilidad en 18: 0.00 Sensibilidad en 19: 0.00 Sensibilidad en 20: 0.00 Sensibilidad en 21: 0.00 Sensibilidad en 22: 0.00 Sensibilidad en 23: 0.00 Sensibilidad en 24: 0.00 Sensibilidad en 25: 0.00 Sensibilidad en 26: 0.00 Sensibilidad en 27: 0.00 Sensibilidad en 28: 0.00 Sensibilidad en 29: 0.00 Sensibilidad en 30: 0.00 Sensibilidad en 31: 0.00 Sensibilidad en 32: 0.00 Sensibilidad total: 1,998.16 La estructura es la misma que para una pata fija, lo que indica que se debe también incluir la sensibilidad a la curva de proyección. 
```python import numpy as np result = [] for i in range(op[1].size()): cshflw = op[1].get_cashflow_at(i) amt_der = cshflw.get_amount_derivatives() df = zcc.get_discount_factor_at(fecha_val.day_diff(cshflw.get_settlement_date())) amt_der = [a * delta * df for a in amt_der] if len(amt_der) > 0: result.append(np.array(amt_der)) total = result[0] * 0 for r in result: total += r for i in range(len(total)): print(f"Sensibilidad en {i}: {total[i]:0,.2f}") print(f"Sensibilidad de proyección: {sum(total):,.2f} USD") ``` Sensibilidad en 0: -0.00 Sensibilidad en 1: -0.00 Sensibilidad en 2: -0.00 Sensibilidad en 3: -0.00 Sensibilidad en 4: -0.00 Sensibilidad en 5: -0.00 Sensibilidad en 6: -0.00 Sensibilidad en 7: -0.00 Sensibilidad en 8: -0.00 Sensibilidad en 9: -0.00 Sensibilidad en 10: -0.00 Sensibilidad en 11: -0.00 Sensibilidad en 12: -0.00 Sensibilidad en 13: -0.00 Sensibilidad en 14: -0.00 Sensibilidad en 15: -0.70 Sensibilidad en 16: -0.00 Sensibilidad en 17: -1,997.46 Sensibilidad en 18: -0.00 Sensibilidad en 19: -0.00 Sensibilidad en 20: -0.00 Sensibilidad en 21: -0.00 Sensibilidad en 22: -0.00 Sensibilidad en 23: -0.00 Sensibilidad en 24: -0.00 Sensibilidad en 25: -0.00 Sensibilidad en 26: -0.00 Sensibilidad en 27: -0.00 Sensibilidad en 28: -0.00 Sensibilidad en 29: -0.00 Sensibilidad en 30: -0.00 Sensibilidad en 31: -0.00 Sensibilidad en 32: -0.00 Sensibilidad de proyección: -1,998.16 USD **Ejercicio:** Ambas sensibilidades se cancelan. ¿Porqué?
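The following cell is an added sketch of how the central-difference check suggested in the earlier exercise could be set up. It assumes that vertex 17 of the sensitivity vector corresponds to row 17 of `df_curva`, and it only re-uses objects built earlier in this notebook (`df_curva`, `get_curve_from_dataframe`, `vp`, `fecha_val`, `op`).

```python
# Central finite differences on vertex 17 of the discount curve, for the fixed leg.
delta = 0.0001   # 1 basis point
vertex = 17

pvs = []
for bump in (+delta, -delta):
    df_bumped = df_curva.copy()
    df_bumped.loc[df_bumped.index[vertex], 'tasa'] += bump          # bump only this vertex
    zcc_bumped = get_curve_from_dataframe(Qcf.QCAct365(), Qcf.QCCompoundWf(), df_bumped)
    pvs.append(vp.pv(fecha_val, op[0], zcc_bumped))                 # re-price the fixed leg

sens_fd = (pvs[0] - pvs[1]) / 2
print(f"Central-difference sensitivity at vertex {vertex}: {sens_fd:,.2f}")
```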
d3256fec80f12e1de94b9437def6fbcd0419799b
51,717
ipynb
Jupyter Notebook
07_sensibilidad_ois.ipynb
MagicalUndeadToast/mif-2020
862d5beae62edb889ed178a8953b0d30f58fe74b
[ "Unlicense" ]
null
null
null
07_sensibilidad_ois.ipynb
MagicalUndeadToast/mif-2020
862d5beae62edb889ed178a8953b0d30f58fe74b
[ "Unlicense" ]
null
null
null
07_sensibilidad_ois.ipynb
MagicalUndeadToast/mif-2020
862d5beae62edb889ed178a8953b0d30f58fe74b
[ "Unlicense" ]
null
null
null
37.749635
1,076
0.544289
true
12,039
Qwen/Qwen-72B
1. YES 2. YES
0.795658
0.817574
0.65051
__label__spa_Latn
0.318087
0.349683
## Mesh refinement study using 5PTI Next, we tested our code on a real biomolecule - bovine pancreatic trypsin inhibitor (PDB code 5PTI). We ran this experiment on a single CPU node of Pegasus, and the raw result files are located at `/repro-pack/runs/5PTI_convergence`. This notebook demonstrates how we generated all results presented in section 3.3. We computed its solvation energy using 5 meshes with the element density ranging from 1 to 16 (see table below). #### Table: Mesh size and element density of 5 meshes used in the grid refinement study on 5PTI . | number of elements | element density (# elem / sq Angstrom) | |:------------------:|:---------------:| | 3032 | 1 | | 6196 | 2 | | 12512 | 4 | | 25204 | 8 | | 50596 | 16 | Other parameters are the same as the previous mesh refinement study using a spherical molecule. ```python import pandas as pd import numpy as np from matplotlib import pyplot as plt from bempp_pbs.postprocess import PLOT_PARAMS, get_df ``` ```python plt.rcParams.update(PLOT_PARAMS) # update plot style ``` **load results** Similar to the previous study, we obtained three sets of results: - `direct`: direct formulation with a block-diagonal preconditioner - `derivative`: derivative formulation with a mass-lumping preconditioner - `derivative_mass_matrix`: derivative formulation with a mass-matrix preconditioner ```python direct_df = get_df('../runs/5PTI_convergence/direct/', formulation='direct', skip4=True) direct_df ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>t_total_assembly</th> <th>t_total_gmres</th> <th>num_iter</th> <th>e_solv [kcal/Mol]</th> <th>memory [GB]</th> <th>t_fmm_init</th> <th>t_singular_assembler</th> <th>t_assemble_sparse</th> <th>t_assembly_other</th> <th>t_singular_correction</th> <th>t_laplace</th> <th>t_helmholtz</th> <th>t_avg_laplace</th> <th>t_avg_helmholtz</th> <th>t_gmres_other</th> </tr> <tr> <th>num_elem</th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> </tr> </thead> <tbody> <tr> <th>3032</th> <td>9.255245</td> <td>41.141969</td> <td>44</td> <td>-405.298901</td> <td>1.245852</td> <td>1.994944</td> <td>2.517442</td> <td>1.489074</td> <td>3.253786</td> <td>2.509389</td> <td>4.010676</td> <td>34.101236</td> <td>0.021797</td> <td>0.185333</td> <td>0.520669</td> </tr> <tr> <th>6196</th> <td>6.873639</td> <td>55.456878</td> <td>48</td> <td>-352.498510</td> <td>1.298352</td> <td>1.984072</td> <td>0.604215</td> <td>1.515710</td> <td>2.769642</td> <td>4.833908</td> <td>6.902839</td> <td>42.929911</td> <td>0.034514</td> <td>0.214650</td> <td>0.790220</td> </tr> <tr> <th>12512</th> <td>8.224664</td> <td>87.453475</td> <td>53</td> <td>-331.053002</td> <td>1.497760</td> <td>2.529693</td> <td>0.797550</td> <td>1.513400</td> <td>3.384022</td> <td>8.860051</td> <td>12.914149</td> <td>64.242070</td> <td>0.058701</td> <td>0.292009</td> <td>1.437205</td> </tr> <tr> <th>25204</th> <td>11.171512</td> <td>171.576826</td> <td>72</td> <td>-321.430106</td> <td>1.694780</td> <td>2.761671</td> <td>1.381049</td> <td>1.528130</td> <td>5.500661</td> <td>24.296032</td> <td>29.508209</td> <td>114.710456</td> <td>0.098361</td> <td>0.382368</td> <td>3.062129</td> </tr> <tr> <th>50596</th> <td>17.939818</td> <td>301.352296</td> 
<td>81</td> <td>-317.397593</td> <td>2.596536</td> <td>3.672400</td> <td>2.966928</td> <td>1.650310</td> <td>9.650179</td> <td>35.566245</td> <td>64.952653</td> <td>194.572055</td> <td>0.193311</td> <td>0.579083</td> <td>6.261343</td> </tr> </tbody> </table> </div> ```python derivative_df = get_df('../runs/5PTI_convergence/derivative_ex/', formulation='derivative', skip4=True) derivative_df ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>t_total_assembly</th> <th>t_total_gmres</th> <th>num_iter</th> <th>e_solv [kcal/Mol]</th> <th>memory [GB]</th> <th>t_fmm_init</th> <th>t_singular_assembler</th> <th>t_assemble_sparse</th> <th>t_assembly_other</th> <th>t_singular_correction</th> <th>t_laplace</th> <th>t_helmholtz</th> <th>t_avg_laplace</th> <th>t_avg_helmholtz</th> <th>t_gmres_other</th> </tr> <tr> <th>num_elem</th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> </tr> </thead> <tbody> <tr> <th>3032</th> <td>11.467519</td> <td>53.837476</td> <td>20</td> <td>-389.044489</td> <td>1.268228</td> <td>3.063636</td> <td>1.123780</td> <td>2.73863</td> <td>4.541472</td> <td>3.047618</td> <td>4.048155</td> <td>46.063559</td> <td>0.023001</td> <td>0.190345</td> <td>0.678145</td> </tr> <tr> <th>6196</th> <td>12.328889</td> <td>65.472754</td> <td>19</td> <td>-349.148460</td> <td>1.360512</td> <td>2.073360</td> <td>1.063001</td> <td>2.88178</td> <td>6.310748</td> <td>4.657528</td> <td>6.035649</td> <td>53.222862</td> <td>0.035926</td> <td>0.230402</td> <td>1.556715</td> </tr> <tr> <th>12512</th> <td>16.094679</td> <td>92.094376</td> <td>19</td> <td>-330.440148</td> <td>1.611068</td> <td>3.515644</td> <td>1.685500</td> <td>2.79936</td> <td>8.094174</td> <td>8.467109</td> <td>10.176532</td> <td>71.267346</td> <td>0.060575</td> <td>0.308517</td> <td>2.183389</td> </tr> <tr> <th>25204</th> <td>20.313739</td> <td>121.308250</td> <td>18</td> <td>-321.330782</td> <td>1.881372</td> <td>3.067885</td> <td>2.120460</td> <td>2.79688</td> <td>12.328514</td> <td>13.368390</td> <td>16.634562</td> <td>87.766347</td> <td>0.103966</td> <td>0.398938</td> <td>3.538950</td> </tr> <tr> <th>50596</th> <td>32.908272</td> <td>195.402694</td> <td>18</td> <td>-317.373973</td> <td>2.913360</td> <td>3.941171</td> <td>4.261869</td> <td>3.05930</td> <td>21.645932</td> <td>22.829768</td> <td>31.914846</td> <td>133.534401</td> <td>0.199468</td> <td>0.606975</td> <td>7.123678</td> </tr> </tbody> </table> </div> ```python derivative_mass_matrix_df = get_df('../runs/5PTI_convergence/derivative_ex_mass_matrix/', formulation='derivative', skip4=True) derivative_mass_matrix_df ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>t_total_assembly</th> <th>t_total_gmres</th> <th>num_iter</th> <th>e_solv [kcal/Mol]</th> <th>memory [GB]</th> <th>t_fmm_init</th> <th>t_singular_assembler</th> <th>t_assemble_sparse</th> <th>t_assembly_other</th> <th>t_singular_correction</th> <th>t_laplace</th> <th>t_helmholtz</th> <th>t_avg_laplace</th> <th>t_avg_helmholtz</th> <th>t_gmres_other</th> </tr> <tr> 
<th>num_elem</th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> </tr> </thead> <tbody> <tr> <th>3032</th> <td>8.878015</td> <td>38.294068</td> <td>14</td> <td>-389.044497</td> <td>1.266776</td> <td>1.927307</td> <td>0.594318</td> <td>2.69806</td> <td>3.658330</td> <td>2.184614</td> <td>2.939802</td> <td>32.401477</td> <td>0.022967</td> <td>0.184099</td> <td>0.768174</td> </tr> <tr> <th>6196</th> <td>10.256916</td> <td>50.961094</td> <td>15</td> <td>-349.148450</td> <td>1.381168</td> <td>2.041324</td> <td>0.730325</td> <td>2.68155</td> <td>4.803717</td> <td>4.241707</td> <td>4.847207</td> <td>40.772995</td> <td>0.035641</td> <td>0.218037</td> <td>1.099185</td> </tr> <tr> <th>12512</th> <td>14.382855</td> <td>73.263648</td> <td>16</td> <td>-330.440136</td> <td>1.610184</td> <td>3.419241</td> <td>1.042230</td> <td>2.73290</td> <td>7.188484</td> <td>6.526180</td> <td>8.560723</td> <td>56.647088</td> <td>0.059449</td> <td>0.286096</td> <td>1.529658</td> </tr> <tr> <th>25204</th> <td>19.547795</td> <td>112.650256</td> <td>17</td> <td>-321.330778</td> <td>1.934784</td> <td>2.833449</td> <td>1.917480</td> <td>2.83897</td> <td>11.957896</td> <td>13.460070</td> <td>15.276854</td> <td>80.377198</td> <td>0.100506</td> <td>0.384580</td> <td>3.536134</td> </tr> <tr> <th>50596</th> <td>32.397936</td> <td>179.180266</td> <td>17</td> <td>-317.373969</td> <td>3.027228</td> <td>3.722781</td> <td>4.091687</td> <td>3.05140</td> <td>21.532068</td> <td>19.940254</td> <td>30.031492</td> <td>122.258945</td> <td>0.197576</td> <td>0.584971</td> <td>6.949574</td> </tr> </tbody> </table> </div> The numbers of iteration show that the mass-lumping preconditioner is an effective substitution for the mass-matrix preconditioner for our study, only adding a few more iterations. Therefore, we only reported the results from using the latter in our manuscript. 
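**Iteration counts at a glance**

Before moving on, it can help to see the GMRES iteration counts from the three runs side by side. The cell below is a small optional sketch (not part of the original run scripts); it only assumes the three dataframes loaded above and their `num_iter` column, with the labels taken from the descriptions in the "load results" cell.

```python
# Optional sketch: collect the GMRES iteration counts from the three runs loaded above,
# to make the preconditioner comparison explicit.
pd.DataFrame({
    'direct (block-diagonal)': direct_df['num_iter'],
    'derivative (mass lumping)': derivative_df['num_iter'],
    'derivative (mass matrix)': derivative_mass_matrix_df['num_iter'],
})
```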
**Compare with an approximate solution using Richardson extrapolation** Since an analytical solution is not available for this geometry, the reference values for error estimation come from Richardson extrapolation using the middle three values: $$ \begin{equation} \bar{f}=\frac{f_{1} f_{3}-f_{2}^{2}}{f_{1}-2 f_{2}+f_{3}} \end{equation} $$ ```python def richardson_extrapolation(f1, f2, f3): return (f1*f3 - f2**2) / (f3 - 2*f2 + f1) ``` ```python e_solv = direct_df['e_solv [kcal/Mol]'].values e_solv_exact = richardson_extrapolation(*e_solv[-2:0:-1]) print('extrapolated solution using direct formulation: ', e_solv_exact) rel_error_direct = np.abs((e_solv-e_solv_exact)/e_solv_exact) e_solv = derivative_df['e_solv [kcal/Mol]'].values e_solv_exact = richardson_extrapolation(*e_solv[-2:0:-1]) print('extrapolated solution using derivative formulation: ', e_solv_exact) rel_error_derivative = np.abs((e_solv-e_solv_exact)/e_solv_exact) ``` extrapolated solution using direct formulation: -313.59764676311323 extrapolated solution using derivative formulation: -312.68602853438085 **Relative error of the solvation energy using the direct formulation** ```python pd.DataFrame({'number of elements': direct_df.index, 'relative error (%)': rel_error_direct*100}).set_index('number of elements') ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>relative error (%)</th> </tr> <tr> <th>number of elements</th> <th></th> </tr> </thead> <tbody> <tr> <th>3032</th> <td>29.241691</td> </tr> <tr> <th>6196</th> <td>12.404705</td> </tr> <tr> <th>12512</th> <td>5.566163</td> </tr> <tr> <th>25204</th> <td>2.497614</td> </tr> <tr> <th>50596</th> <td>1.211727</td> </tr> </tbody> </table> </div> **Relative error of the solvation energy using the derivative formulation** ```python pd.DataFrame({'number of elements': derivative_df.index, 'relative error (%)': rel_error_derivative*100}).set_index('number of elements') ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>relative error (%)</th> </tr> <tr> <th>number of elements</th> <th></th> </tr> </thead> <tbody> <tr> <th>3032</th> <td>24.420170</td> </tr> <tr> <th>6196</th> <td>11.661036</td> </tr> <tr> <th>12512</th> <td>5.677938</td> </tr> <tr> <th>25204</th> <td>2.764675</td> </tr> <tr> <th>50596</th> <td>1.499250</td> </tr> </tbody> </table> </div> The figure below shows that the error of computing solvation energy of 5PTI converges linearly with respect to N, for both direct and derivative formulation. #### Figure: Mesh convergence study of the solvation energy of bovine pancreatic trypsin inhibitor (PDB code 5PTI), using both direct formulation and derivative formulation. The error is with respect to the extrapolated solution using Richardson extrapolation. 
```python
fig = plt.figure(figsize=(3, 2))
ax = fig.add_subplot(111)

N = direct_df.index.values
N_ = np.array((1e3, 1e5))
asymp = N[2] * rel_error_direct[2] / N_

ax.loglog(N, rel_error_direct, linestyle='', marker='o', fillstyle='none', label='direct', color='black')
ax.loglog(N, rel_error_derivative, linestyle='', marker='+', fillstyle='none', label='derivative', color='black')
ax.loglog(N_, asymp, linestyle='--', color='#7f7f7f')

ax.grid(which="both")
ax.set_xlabel('number of elements')
ax.set_ylabel('relative error')
ax.legend()

loc = (3*N[-2]+N[-1])/4
text_loc = np.array((loc, 1.2*N[-2]*rel_error_direct[-2]/loc))
ax.text(text_loc[0], text_loc[1], r'N$^{-1}$', fontsize=8, rotation=-30, rotation_mode='anchor')

plt.tight_layout()
# plt.savefig('../../tex/figs/5PTI_convergence.pdf', dpi=300);
```
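As a quick sanity check on the slope shown in the figure, we can estimate the observed order of convergence between consecutive meshes. The cell below is a sketch added for illustration (it is not in the original repository); it assumes `N`, `rel_error_direct` and `rel_error_derivative` from the cells above, and `observed_order` is a helper name introduced here.

```python
# Sketch: observed order of convergence between consecutive refinements,
#   p = log(e_i / e_{i+1}) / log(N_{i+1} / N_i).
# Values close to 1 are consistent with the O(1/N) reference line in the figure.
def observed_order(N, err):
    N = np.asarray(N, dtype=float)
    err = np.asarray(err, dtype=float)
    return np.log(err[:-1] / err[1:]) / np.log(N[1:] / N[:-1])

print('direct     :', observed_order(N, rel_error_direct))
print('derivative :', observed_order(N, rel_error_derivative))
```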
a504b468075f609fa6e79c169bb31aa82286377d
91,460
ipynb
Jupyter Notebook
repro-pack/notebooks/5PTI_convergence.ipynb
barbagroup/bempp_exafmm_paper
d628305aa7a7713d8d37234e80260e2a4160b9c8
[ "MIT", "BSD-3-Clause" ]
2
2021-06-21T04:11:28.000Z
2021-12-01T03:18:36.000Z
repro-pack/notebooks/5PTI_convergence.ipynb
barbagroup/bempp_exafmm_paper
d628305aa7a7713d8d37234e80260e2a4160b9c8
[ "MIT", "BSD-3-Clause" ]
17
2021-02-06T19:28:51.000Z
2022-02-25T20:09:48.000Z
repro-pack/notebooks/5PTI_convergence.ipynb
barbagroup/bempp_exafmm_paper
d628305aa7a7713d8d37234e80260e2a4160b9c8
[ "MIT", "BSD-3-Clause" ]
1
2021-12-01T03:24:03.000Z
2021-12-01T03:24:03.000Z
94.777202
57,148
0.75105
true
5,693
Qwen/Qwen-72B
1. YES 2. YES
0.787931
0.727975
0.573595
__label__eng_Latn
0.328303
0.170982
# Market Equilibrium under different market forms Import various packages ```python import numpy as np import scipy as sp from scipy import linalg from scipy import optimize from scipy import interpolate import sympy as sm from scipy import optimize,arange from numpy import array import ipywidgets as widgets # Import a package for interactive plots %matplotlib inline import matplotlib.pyplot as plt from matplotlib import cm from mpl_toolkits.mplot3d import Axes3D ``` # Model Description We consider the standard economic promlem for a Monopoly firm maximizing it's profits. The aggregate market demand is given by $$Q(p)=A-\alpha\cdot p$$ which corresponds to the inverse market demand function $$p(Q)=\frac{A}{\alpha}-\frac{1}{\alpha}\cdot Q$$ and the Monopoly profits are given $$\pi(q)=p\cdot q-c(q)=\left(\frac{A}{\alpha}-\frac{1}{\alpha}\cdot q\right)\cdot q-c\cdot q$$ where $q=Q$, $p(Q)$ is a linear market demand curve and $c(q)$ is the firms cost-function with constant cost $c$. # Market Equilibrium ## Analytical Solution Using Sympy, we seek to find an analytical expression for the market equilibrium when one firm has monopoly power, i.e. solve the monopoly firm's maximization problem \\[ \max_{q}\pi(q)=\max_{q} \left(\frac{A}{\alpha}-\frac{1}{\alpha}\cdot q\right)\cdot q-c\cdot q \\] Which has the standard solution given by: $$q^{M\ast}=\frac{A-\alpha\cdot c}{2}\wedge p^{\ast}=\frac{A+\alpha\cdot c}{2\cdot\alpha}$$ ```python sm.init_printing(use_unicode=True) # sets printing on # Defining variables; A = sm.symbols('A') q = sm.symbols('q') c = sm.symbols('c') alpha=sm.symbols('alpha') ``` ```python Pi = (A/alpha-q/alpha)*q-c*q # Define the firms profit function F = sm.diff(Pi,q) # Take the first order condition sm.solve(F,q)[0] ``` ```python Mq = sm.solve(F,q)[0] # Solves F for market quantity # And the market price is given by; Mp=(A-Mq)*1/alpha sm.Matrix([Mq,Mp]) # Prints the market quantity and price ``` ```python #For later use, We turn the above solution into a Python function Mq_func = sm.lambdify((A,alpha,c),Mq) Mp_func = sm.lambdify((A,alpha,c),Mp) ``` ## Numerical Solution As a brief introduction to solving the problem numerically, we use a solver like fsolve to solve the first-order condition given the following parameter values: Remember, the first-order condition is given by: $$\frac{A}{\alpha}-c-\frac{2q}{\alpha}=0$$ ```python A = 4 alpha = 2 c = 1 output = optimize.fsolve(lambda q: 2-q-1,0) print(f'analytical solution for market quantity is: {Mq_func(A,alpha,c):.2f}') print(f' Solution with fsolve for market quantity is: {output}') print(f'analytical solution for market price is: {Mp_func(A,alpha,c):.2f}') ``` analytical solution for market quantity is: 1.00 Solution with fsolve for market quantity is: [1.] analytical solution for market price is: 1.50 However for later use, It is perhaps more efficent to make Python maximize the firm's profits directly. However, as scipy only has minimization procedueres. We continue to minimize $-\pi(q)$, i.e. minimizing negative profits is the same as maximizing profits. 
Below we first define functions for market demand and costs in python ```python def demand(Q): return A/alpha-1/alpha*Q def cost(q,c): # c is constant marginal cost return c*q ``` ```python def minus_profits(q,*args): # we want to see profits as a function of q when we maximize profits or return -(demand(q)*q-cost(q,c)) # minimize minus_profits; hence c is specified as "*args", when calling fmin # we specify the c in the "args=(c,)" x0 = 0 # Initial guess c = 1.0 # Specify the value of the constant cost 'c' A=4.0 # Specify the value of the Constant in the market demand function Q(p) alpha=2.0 # Specify the slope coefficient in Q(p) output = optimize.fmin(minus_profits,x0,args=(c,)) # note the comma in "args(c,)"; it needs to be there! price=A/alpha-1/alpha*output print(output,price) ``` Optimization terminated successfully. Current function value: -0.500000 Iterations: 25 Function evaluations: 50 [1.] [1.5] Hence, the optimal output is 1, which yields the maximum profits of $-\cdot(-0.5)=0.5$ For the specified parameter values, we have plotted the monopoly firm's profit function below. ```python # Define the expression whose roots we want to find A = 4.0 # Specify the value of the Constant in the market demand function Q(p) alpha = 2.0 # Specify the slope coefficient in Q(p) c = 1.0 # Specify the value of the constant cost 'c' func = lambda q : (A/alpha-q/alpha)*q-c*q # Defines the profit function using a lambda function. # Plot the profit function q = np.linspace(0, 2, 200) # Return evenly spaced numbers over a specified interval from 0 to 2 . plt.plot(q, func(q)) # -minus_profits(q) could have been used instead of func(q). But we wanted to show the lambda function. plt.axhline(y=0.5,linestyle='dashed',color='k') # creates a horizontal line in the plot at func(q)=0.5 plt.axvline(x=1,linestyle='dashed',color='k') # creates a vertical line in the plot at q=0.5 plt.xlabel("Quantity produced q ") plt.ylabel("Firm Profits") plt.grid() plt.title('The Monopoly Firms profit function') plt.show() ``` And we can plot the market equilibrium price and output in a standard diagram as shown below. ```python # Define marginal Revenue: def MR(Q): return A/alpha-2/alpha*Q plt.plot(q, demand(q)) plt.plot(q, MR(q)) plt.axhline(y=c,color='k') # creates a horizontal line in the plot at func(q)=0.5 plt.axvline(x=output,ymin=0,ymax=0.73,linestyle='dashed',color='k') # creates a vertical line in the plot at q=0.5 plt.axhline(y=price,xmin=0, xmax=0.5,linestyle='dashed',color='k') plt.xlabel("Quantity produced q ") plt.ylabel("Price") plt.grid() plt.title('The Demand Function') plt.show() ``` Both plottet side by side. ```python f = plt.figure(figsize=(13,6)) ax = f.add_subplot(121) ax2 = f.add_subplot(122) ax.plot(q, func(q)) ax.set_title('The Monopoly Firms profit function') ax.set_xlabel('Quantity Produced q') ax.set_ylabel('Firm Profits') ax.axhline(y=0.5,linestyle='dashed',color='k') # creates a horizontal line in the plot at func(q)=0.5 ax.axvline(x=1,linestyle='dashed',color='k') # creates a vertical line in the plot at q=0.5 ax.set_ylim(0,) # set a lower limit for y-axis at zero. 
ax.grid() ax2.plot(q, demand(q),label='Demand') ax2.plot(q,MR(q), label='Marginal Revenue') ax2.legend(loc='upper right') # Place the graph descriptions in the upper right corner ax2.grid() ax2.axhline(y=c,color='k', label='Marginal Cost') # creates a horizontal line in the plot at func(q)=0.5 ax2.axvline(x=output,ymin=0,ymax=0.71,linestyle='dashed',color='k') # creates a vertical line in the plot at q=0.5 ax2.axhline(y=price,xmin=0, xmax=0.5,linestyle='dashed',color='k') ax2.set_xlabel("Quantity produced q ") ax2.set_ylabel("Market Price p") ax2.set_ylim(0,) ax2.set_title('The Market Equilibrium') ``` We see that when the monopoly firm is producing $q=1$, they get the maximum profits of 0.5. In the right figure, we see that the monopoly firm maximises profits in the point, where the marginal revenue curve intersect the marginal cost curve(black line). The two curves intersect at $q=1$, and the market price is $p=1.5$ as given by the demand curve. **Extention at the exam**: We have extended the above plot with fixed parameter values with an interactive feature such that you can change the two graphs by changing the parameters $(A,\alpha,c)$. ```python def interactive_figure(A,alpha,c): """This function makes a intertive figure of the profit function, demand function and marginal revenue function with A,alpha and c as free variables""" # a. Specify the functions. func = lambda q : (A/alpha-q/alpha)*q-c*q # Defines the profit function using a lambda function. def demand(Q): # defies demand function return A/alpha-1/alpha*Q def MR(Q): # defines the marginal revenue function return A/alpha-2/alpha*Q # b. Create values for quantity. q = np.linspace(0, A-alpha*c, 200) # Return evenly spaced numbers over a specified interval from 0 to the point, where profits equal zero . qM = np.linspace(0, A/2, 200) # Return evenly spaced numbers over the interval from 0 to where MR(Q) intersect the x-axis. qD = np.linspace(0, A, 200) # Return evenly spaced numbers over the interval from 0 to where DM(Q) intersect the x-axis. # c. plot the figures f = plt.figure(figsize=(13,6)) ax = f.add_subplot(121) ax2 = f.add_subplot(122) ax.plot(q, func(q)) ax.set_title('The Monopoly Firms profit function') ax.set_xlabel('Quantity Produced q') ax.set_ylabel('Firm Profits') ax.axhline(y=np.max(func(q)),linestyle='dashed',color='k') # creates a horizontal line in the plot at func(q)=0.5 ax.axvline(x=(A-alpha*c)/2,linestyle='dashed',color='k') # creates a vertical line in the plot at q=0.5 ax.set_xlim(0,) ax.set_ylim(0,np.max(func(q))*1.05) # set a lower limit for y-axis at zero and max-limit at max profit. ax.grid() ax2.plot(qD, demand(qD),label='Demand') ax2.plot(qM,MR(qM), label='Marginal Revenue') ax2.legend(loc='upper right') # Place the graph descriptions in the upper right corner ax2.grid() ax2.axhline(y=c,color='k', label='Marginal Cost') # creates a horizontal line in the plot at func(q)=0.5 ax2.axvline(x=(A-alpha*c)/2,linestyle='dashed',color='k') # creates a vertical line in the plot at q=0.5 ax2.axhline(y=(A+c*alpha)/(2*alpha),linestyle='dashed',color='k') ax2.set_xlabel("Quantity produced q ") ax2.set_ylabel("Market Price p") ax2.set_xlim(0,A/2) # set x axis to go from zero to where MR(Q) intersect the x-axis ax2.set_ylim(0,np.max(demand(q))*1.05) # set y axis to go from zero to max of demand(q). ax2.set_title('The Market Equilibrium') ``` And the interactive figure is illustrated below. 
```python widgets.interact(interactive_figure, A=widgets.FloatSlider(description="A",min=0.1,max=10,step=0.1,value=4), # alpha=widgets.FloatSlider(description="alpha",min=0.1,max=5,step=0.1,value=2), c=widgets.FloatSlider(description="c",min=0,max=1.999,step=0.1,value=1) ); ``` interactive(children=(FloatSlider(value=4.0, description='A', max=10.0, min=0.1), FloatSlider(value=2.0, descr… # Extentions: Solving for market equilibrium in a duopoly setting ## Market Equilibrium with Cournot Competition Consider the inverse demand funcion with identical goods $$p(Q)=\frac{A}{\alpha}-\frac{1}{\alpha}\cdot Q=\frac{A}{\alpha}-\frac{1}{\alpha}\cdot(q_1+q_2)$$ where $q_1$ is firm 1's output and $q_2$ is firm 2's output. $Q=q_1+q_2$. Both firms have identical cost-function $c(q_i)=c\cdot q_i$. So given cost and demand, each firm have the following profit function: $$\pi_{i}(q_{i},q_{j})=p_i(q_i,q_j)q_i-c(q_i)$$, $i,j\in\{0,1\},i\neq j$, which they seek to maximize. As this is the standard Cournot problem with two firms competing in quantities, we know that in equilibrium both firms produces the same Cournot output level given by: $$q_1^{C}=q_2^{C}=\frac{A-\alpha c}{3}$$ ### Analytical Solution We can use **sympy** to find an analytical expression for the market equilibrium/ the Cournot Nash Equilibrium, i.e. solving for the pair $(q_1^{C},q_2^{C})$ in which both firms play a best-response to the other firms equilibrium strategy. Hence $$\max_{q_{i}} \pi_{i}(q_i,q_j^{\ast})=\max \left(\frac{A}{\alpha}-\frac{1}{\alpha}\cdot(q_i+q_j^{\ast})-c\right)q_i $$ ```python # Defining variables; A = sm.symbols('A') # Constant in Q(p) q1 = sm.symbols('q1') # Firm 1's output q2 = sm.symbols('q2') # Firm 2's output c = sm.symbols('c') # Contant cost alpha=sm.symbols('alpha') # Slope coefficient in Q(p) Pi1 = (A/alpha-1/alpha*(q1+q2))*q1-c*q1 # Firm 1's profit function Pi2 = (A/alpha-1/alpha*(q1+q2))*q2-c*q2 # Frim 2's profit function F1 = sm.diff(Pi1,q1) # Take the first order condition for firm 1 F2 = sm.diff(Pi2,q2) # Take the first order condition for firm 2 sm.Matrix([F1,F2]) # Prints the first order conditions ``` ```python Cq2 = sm.solve(F2,q2)[0] # Solves Firm 2's FOC for q2. Cq2 ``` ```python Cq1 = sm.solve(F1,q2)[0] # Solves Firm 1's FOC for q2. Cq1 ``` ```python CON=sm.solve(Cq1-Cq2,q1)[0] # In Eq Cq1=Cq2, so solve Cq1=Cq2=0 for q1 CON ``` Given the standard symmetry argument, we know that both firms produce the same in equilibrium. Hence $$q_1^{C}=q_2^{C}=\frac{A-\alpha c}{3}$$ as given above. The total market quantity and price are found below. ```python MCQ = 2*CON # market quantiy # And the market price is given by; MCP=(A-MCQ)*1/alpha sm.Matrix([MCQ,MCP]) # Prints the market quantity and price ``` These can again by turned into python-functions to compare the analytical solution with the numerical solution ```python CON_func = sm.lambdify((A,alpha,c),CON) # Cournot quantity MCP_func = sm.lambdify((A,alpha,c),MCP) # Market price ``` ### Numerical Solution ```python def demand(q1,q2,b): # Define demand return A/alpha-1/alpha*(q1+b*q2) # b is in place to allow for potential heterogeneous goods. def cost(q,c): if q == 0: cost = 0 else: cost = c*q return cost ``` ```python def profit(q1,q2,c1,b): # Define a function for profits return demand(q1,q2,b)*q1-cost(q1,c1) ``` Define reaction functions. As we know scipy has various methods to optimize function. However as they are defined as minimization problems, maximizing a function $f(x)$ is the same as minimzing $-f(x)$. 
```python def reaction(q2,c1,b): q1 = optimize.brute(lambda q: -profit(q,q2,c1,b), ((0,1,),)) # brute minimizes the function; # when we minimize -profits, we maximize profits return q1[0] ``` A solution method which can be used to solve many economic problems to find the Nash equilibrium, is to solve for the equilibirum as fixed point. Hence we are looking for a fixed point in which the following is true. $$\left(\begin{matrix} q_{1}^{\ast} \\ q_{2}^{\ast} \end{matrix}\right)=\left(\begin{matrix} r_{1}(q_2^{\ast}) \\ r_{2}(q_1^{\ast}) \end{matrix}\right) $$ where $r_1(q_2)$ is firm 1's reaction-function to firm 2's production level and vice versa. Numerically this can be solved by defining a vector function: $$f(q)=\left(\begin{matrix} r_{1}(q_2^{\ast}) \\ r_{2}(q_1^{\ast}) \end{matrix}\right)$$ and solve for a point $q^{\ast}=(q_1^{\ast},q_2^{\ast})$ such that $f(q^{\ast})=q^{\ast}$ We then define a function defined as $x-f(x)$ and look for the solution $x^{\ast}-f(x^{\ast})=0$ ```python def vector_reaction(q,param): # vector parameters = (b,c1,c2) return array(q)-array([reaction(q[1],param[1],param[0]),reaction(q[0],param[2],param[0])]) ``` ```python param = [1.0,1.0,1.0] # Specify the parameters (b,c1,c2) q0 = [0.3, 0.3] # Initial guess for quantities alpha=2 A=4 ans = optimize.fsolve(vector_reaction, q0, args = (param)) print(ans) ``` [0.6666581 0.6666581] ```python A = 4 alpha = 2 c = 1 print(f'analytical solution for Cournot quantity is: {CON_func(A,alpha,c):.2f}') print(f'analytical solution for market price is: {MCP_func(A,alpha,c):.2f}') print(f' Solution with fsolve for market quantity is: {ans}') ``` analytical solution for Cournot quantity is: 0.67 analytical solution for market price is: 1.33 Solution with fsolve for market quantity is: [0.6666581 0.6666581] And we see that the numerical solution for the market quantity is fairly close to the analytical solution at $q_1^{C}=q_2^{C}=\frac{2}{3}$ Below we illustrate the equilibrium quantities visually by plotting the two firms reaction functions/best-response functions. The equilibrium quantities is found in the point in which they intersect. ```python # Define the expression whose roots we want to find A = 4.0 # Specify the value of the Constant in the market demand function Q(p) alpha = 2.0 # Specify the slope coefficient in Q(p) c = 1 # Specify the value of the constant cost 'c' func1 = lambda q : 1/2*(A-alpha*c-q) # Defines the best-response function for firm 1using a lambda function. func2 = lambda q : A-alpha*c-2*q # Plot the profit function q = np.linspace(0, 5, 200) # Return evenly spaced numbers over a specified interval from 0 to 2 . 
plt.clf() plt.plot(q, func1(q),'-', color = 'r', linewidth = 2) plt.plot(q,func2(q),'-', color = 'b', linewidth = 2) plt.title("Cournot Nash Equilibrium",fontsize = 15) plt.xlabel("$q_1$",fontsize = 15) plt.ylabel("$q_2$",fontsize = 15,rotation = 90) plt.axvline(x=CON_func(A,alpha,c),ymin=0,ymax=1/3,linestyle='dashed',color='k') # creates a vertical line in the plot at q=2/3 plt.axhline(y=CON_func(A,alpha,c),xmin=0,xmax=1/3,linestyle='dashed',color='k') # creates a horizontal line in the plot at q=2/3 plt.annotate('$R_2(q_1)$', xy=(1,0.5), xycoords='data', # here we define the labels and arrows in the graph xytext=(30, 50), textcoords='offset points', size = 20, arrowprops=dict(arrowstyle="->", linewidth = 2, connectionstyle="arc3,rad=.2"), ) plt.annotate('$R_1(q_2)$', xy=(0.5,1), xycoords='data', # here we define the labels and arrows in the graph xytext=(30, 50), textcoords='offset points', size = 20, arrowprops=dict(arrowstyle="->", linewidth = 2, connectionstyle="arc3,rad=.2"), ) plt.xlim(0,2) # sets the x-axis plt.ylim(0,2) # Sets the y-axis ``` We see that when both firms have symmetric cost $c_1=c_2=c$ and produce homogeneous goods, both firms produce the same in the Cournot Nash equilibrium. We see that both firms individually produce less than if they have monopoly power due to the small increase in competition in the market. Hence when no collusion is possible as this is a one period static problem, the total market output is larger than the monopoly outcome and the associated market price is lower. However assuming no externalities and the standard economic assumptions, the market outcome is still inefficient seen from a social planners perspective as it is not equal to the social optimum, where all firms produce in the point in which marginal costs equal the market price. ## Market Equilibrium with Betrand Competition with differentiated goods Lastly we will investiate the market outcome in the duopoly setting with two firms is competing in prices rather than quanties. This competition type is called bertrand competion, and we will consinder the Betrand model with differentiated products and the standard Betrand Model of Duopoly with identical firms producing homogeneous products with the same cost-functions with constant marginal costs c. The market demand function is the same given by $$Q(p)=A-\alpha\cdot p$$ However from the perspective of firm i, the consumers demand for firm i's good is: $$q_i(p_i,p_j)=A-p_i+b\cdot p_j$$, $i,j\in\{1,2\}, i\neq j$, where b indicates that the goods are imperfect substitutes. The profit of firm i when choosing the price $p_i$ and firm j chooses the price $p_j$ is given by: $$\pi_i(p_i,p_j)=q_i(p_i,p_j)[p_i-c]$$ And the price pair $(p_1^{\ast},p_2^{\ast})$ constitute a Nash equilibrium if, for each firm i, the choosen price $p_i^{\ast}$ solve the firms maximization problem, i.e. $$\max_{0\leq p_i<\infty}\pi_i(p_i,p_j^{\ast})=\max_{0\leq p_i<\infty}\left[A-p_i+b\cdot p_j^{\ast}\right][p_i-c]$$ ### Analytical Solution with differentiated goods We can use **sympy** to find an analytical expression for the market equilibrium/ the Bertrand Nash Equilibrium, i.e. solving for the pair $(p_1^{B},p_2^{B})$ for which both firms play a best-response to the other firms equilibrium strategy. As this is the betrand problem with differentiated goods, we know already that both firms will try to underbid eachother in prices. Hence in equilibrium, $p_1^{B}=p_2^{B}$ and it is assumed that each firm produce half of the market demand, i.e. 
$$p_1^{B}=p_2^B=\frac{A+c}{2-b}$$ ```python # Defining variables; A = sm.symbols('A') # Constant in Q(p) p1 = sm.symbols('p1') # Firm 1's price p2 = sm.symbols('p2') # Firm 2's price c = sm.symbols('c') # Contant cost b = sm.symbols('b') # constant reflecting that the goods are differentiated alpha=sm.symbols('alpha') # Slope coefficient in Q(p) Pi1 = (A-p1+b*p2)*(p1-c) # Firm 1's profit function Pi2 = (A-p2+b*p1)*(p2-c) # Firm2's profit function F1 = sm.diff(Pi1,p1) # Take the first order condition for firm 1 F2 = sm.diff(Pi2,p2) # Take the first order condition for firm 2 sm.Matrix([F1,F2]) # Prints the first order conditions ``` We can then use the first order conditions to find the best-response functions by using sympy's solve function to solve $$F1=0$$ for $p_1$. ```python BR1 = sm.solve(F1,p1)[0] # Solves Firm 1's FOC for p1. BR2 = sm.solve(F2,p2)[0] # Solves Firm 2's FOC for p2. sm.Matrix([BR1,BR2]) # Prints the best-response functions ``` However to solve the function in an easier way, we solve firm 2's FOC for $p_1$. Call this BR12. We know that both firm's FOC must hold in equilibrium. Hence the equilibrium price is found by solving $BR1=BR12\Leftrightarrow BR1-BR12=0$ which can be solved sympy's solve function: ```python BR12=sm.solve(F2,p1)[0] # Solves Firm 2's FOC for p1. MP=sm.solve(BR1-BR12,p2)[0] # In Eq p1=p2, so solve BR1-BR12=0 for p2 MP ``` Hence both firms charge the price $p_1^{B}=p_2^{B}=-\frac{A+c}{b-2}=\frac{A+c}{2-b}$ Turned into a function ```python MP_func = sm.lambdify((A,b,c),MP) # Market price A = 4 b = 0.5 # parameter different from 1 to indicate imperfect substitutes c = 1 print(f'analytical solution for Bertrand price is: {MP_func(A,b,c):.2f}') ``` analytical solution for Bertrand price is: 3.33 ### Numerical Solution ```python A = 4 b=0.5 c = 1 B1 = 1/2*(A+b*p2+c) B2 = 1/b*(2*p2-A-c) SOL = optimize.fsolve(lambda p: 1/2*(A+b*p+c)-1/b*(2*p-A-c),0) print(f' Solution with fsolve for Bertrand price is: {SOL:}') ``` Solution with fsolve for Bertrand price is: [3.33333333] Thus when the firms products are imperfect substitutes, the market price is still above the firms marginal cost. Hence the market outcome is pareto inefficient, which differs from the result from the standard Betrand model with homogeneous products. ## Market Equilibrium with Betrand Competition and homogeneous goods The market equilibrium with betrand competition, homogeneous goods and identical firms has some interesting, nice properties. Most importantly, the equilibrium is pareto efficient equal to perfect market outcome. The market demand function is the same given by $$Q(p)=A-\alpha\cdot p$$ Both firm compete in prices, and seek to maximize the profit function: $$\max_{p_i}\pi_i(p_i,p_j)= \begin{cases} Q(p)\cdot (p_i-c) &p_j>p_i\\ \frac{Q(p)\cdot (p_i-c)}{2} &p_i=p_j\\ 0 &p_i>p_j \end{cases} $$ It is a standard assumption that the market is divided evenly between the two firms if the set the same price, but there is no reason why it couldn't be different in practice. Both firms have the symmetric Best-Response functions: $$BR_i(p_j)= \begin{cases} p_i=p_m &p_j>p_m\\ p_i=p_j-\epsilon &p_i>p_j>c\\ p_i=c &p_j<c \end{cases}$$ where $p_m$ is the monopoly price found in the first problem. Then with simple economic reasoning it can be shown/proven that the only strategies/set of prices $\{p_1^{B},p_2^{B}\}$ that can constitute a Nash Equilibrium is $(p_1^{B},p_2^{B})=(c,c)$. Because in all other cases at least one firm has an incentive to deviate. 
We will not prove this, but below we show it numerically. So both firms will produce half of the total market output in equilibrium, i.e. both firms produce the monopoly output: $$q^{B\ast}=\frac{A-\alpha\cdot c}{2}$$ ### Numerical Solution ```python def total_demand(p): # total demand Q(p) return A-alpha*p ``` ```python def profit(p1,p2,c1): # Define profit function depending p_1 and p_2 with af if, elif else statement" if p1 > p2: profits = 0 # firm 2 takes all the market elif p1 == p2: profits = 0.5*total_demand(p1)*(p1-c1) # The firms split the market in two else: profits = total_demand(p1)*(p1-c1) # firm 1 takes all the market return profits def reaction(p2,c1): # Reaction function if p2 > c1: reaction = c1+0.8*(p2-c1) else: reaction = c1 return reaction ``` ```python def vector_reaction(p,param): # vector param = (c1,c2) return array(p)-array([reaction(p[1],param[0]),reaction(p[0],param[1])]) param = [2.0,2.0] # c1 = c2 =2 alpha=2 A=4 p0 = [0.5, 0.5] # initial guess: p1 = p2 = 0.5 Psol = optimize.fsolve(vector_reaction, p0, args = (param)) print(Psol) # Bertrand prices ``` [2. 2.] As should become clear from this little numerical demostration. The two firms price setting decision is in practical sense invariant to the shape of the demand curve - as long as it is downwards slopping. The two firms will regardless of the demand function try to underbid the other firm in terms of price as long as the market price is above the marginal cost c. Hence as long as there is positive profits to get in the market, both firm will try to get the entire market by setting a price just below the other. This proces will continue until the market price reach the firms marginal cost, which are assumed identical. Thus the betrand price solely depends on the value of the costs $c_1=c_2$, as the firms compete the market price $p^{B}$ down to $c_1=c_2$. Because only then, no firm has an incentive to deviate, i.e. the pair of prices ${(p_1^{B},p_2^{B}})={c}$ constitute a Nash equilibrium. This is also known as the bertrand paradox. # Conclusion We see that the assumption about the market structure have a critical impact on the market equilibrium. We have shown that under the standard assumptions, when there is only one firm in the market, which utilizes it's monopoly power, the market equilibrium output is inefficiently low and the equilibrium price is ineffciently high from a social welfare perspective. When the number of firms increases to two, we show that the market inefficiency decreases but at different degrees depending on competition type. If the firms compete in quantities (Cournot), the market output is still lower than the social optimum. However there is still some competition between the firms, which results in a lower market price and higher market output compared to the monopoly case. Lastly, we show that when the two firms compete in prices(Bertrand) the market equilibrium is pareto efficient. As both firms seek to undercut the other firm resulting in both firms asking a price equal to their marginal costs (assumed identical). Hence even though there are only two firms, the market equilibrium is efficient as it is identical to social optimum with a market price equal to the marginal costs. However when allowing for the two firms to produce different goods that are imperfect substitutes, both firms raise their prices above marginal cost. Thus they earn positive profit and the market outcome is once again inefficient.
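As a small numerical footnote to this comparison, the three equilibria can be placed side by side using the closed-form expressions derived in the sections above. This is only a sketch (not part of the analysis itself); it simply plugs in the parameter values $A=4$, $\alpha=2$ and $c=1$ used throughout the notebook.

```python
# Sketch: compare the three market forms with the closed-form results derived above,
# for A = 4, alpha = 2, c = 1.
# Monopoly:  Q = (A - alpha*c)/2,   p = (A + alpha*c)/(2*alpha)
# Cournot:   Q = 2*(A - alpha*c)/3, p = (A - Q)/alpha
# Bertrand (homogeneous goods): p = c, Q = A - alpha*p
A, alpha, c = 4.0, 2.0, 1.0

Q_monopoly = (A - alpha*c)/2
p_monopoly = (A + alpha*c)/(2*alpha)

Q_cournot = 2*(A - alpha*c)/3
p_cournot = (A - Q_cournot)/alpha

p_bertrand = c
Q_bertrand = A - alpha*p_bertrand

print(f'Monopoly : Q = {Q_monopoly:.2f}, p = {p_monopoly:.2f}')
print(f'Cournot  : Q = {Q_cournot:.2f}, p = {p_cournot:.2f}')
print(f'Bertrand : Q = {Q_bertrand:.2f}, p = {p_bertrand:.2f}')
```

The printed values reproduce the ordering discussed above: total output rises and the market price falls as competition intensifies, from monopoly to Cournot to Bertrand.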
3e275fd9c58ce69c4cc1bc5a687524a13dc4a7d7
186,654
ipynb
Jupyter Notebook
modelproject/modelproject.ipynb
NumEconCopenhagen/projects-2019-peter-larsens-kaffe-m-havre
5e20f4f592ec38e84e85dd6f5713dc575caa9341
[ "MIT" ]
null
null
null
modelproject/modelproject.ipynb
NumEconCopenhagen/projects-2019-peter-larsens-kaffe-m-havre
5e20f4f592ec38e84e85dd6f5713dc575caa9341
[ "MIT" ]
8
2019-04-15T06:56:14.000Z
2019-05-24T18:59:19.000Z
modelproject/modelproject.ipynb
NumEconCopenhagen/projects-2019-peter-larsens-kaffe-m-havre
5e20f4f592ec38e84e85dd6f5713dc575caa9341
[ "MIT" ]
null
null
null
134.38013
54,836
0.8667
true
7,864
Qwen/Qwen-72B
1. YES 2. YES
0.939025
0.890294
0.836008
__label__eng_Latn
0.9886
0.780661
# Tutorial on Active Inference with `pymdp` This set of 3 tutorial notebooks aims to be an accessible introduction to discrete-state-space active inference modelling with the `pymdp` package. We assume no prerequisites other than a good grasp of Python and some basic mathematical knowledge (specifically some familiarity with probability and linear algebra). We assume no prior knowledge of active inference. Hopefully, by the end of this series of notebooks, you will understand active inference well enough to understand the recent literature, as well as implement your own agents! These tutorials will walk you through the specification, and construction of an active inference agent which can solve a simple navigation task in a 2-dimensional grid-world environment. The goal here is to implement the agent 'from scratch'. Specifically, instead of just using magical library functions, we will show you an example of how these functions could be implemented from pure Python and numpy code. The goal at the end of these tutorials is that you understand at a fairly detailed level how active inference works in discrete state spaces and how to apply it, as well as how you could implement a simple agent without using the `pymdp` package. Once you understand what the `pymdp` package aims to abstract, we will go through the structure and functionality offered by the package, and show how you can construct complex agents in a simple and straightforward way. # What is Active Inference? Fundamentally, the core contention of active inference is that the brain (and agents in general) can be thought of as fundamentally performing (Bayesian) inference about the world. Specifically, an agent performs two functions. 1.) **Perception**. An agent does not necessarily know the true state of the world, but instead must infer it from a limited set of (potentially ambiguous) observations. 2.) **Action**. Typically, the agent can also perform the actions which change the state of the world. The agent can use these actions to drive the world towards a set of states that it desires. The theory of Active Inference argues that **both** perception and action can be represented and solved as Bayesian inference problems. # What is Bayesian Inference? Bayesian inference provides a recipe for performing *optimal* inference. That is, if you have some set of Hypotheses $H$ and some set of data $D$, then Bayesian inference allows you to compute the *best possible* update to your hypotheses given the data that you have. In other words, the best explanation $H$ that accounts for your actual data $D$. For instance, suppose you are a trader and you want to know whether some stock will go up tomorrow. And you have a set of information about that stock (for instance earnings reports, sales data, rumours a friend of a friend told you etc). Then Bayesian inference provides the *optimal way* to estimate the probability that the stock will go up. In this scenario, our "hypotheses" $H$ is that the stock will go up, or it will go down and our "data" $D$ is the various pieces of information we hold. The fundamental equation in Bayesian inference is **Bayes Rule**. Which is $$ \begin{align} p(H | D) = \frac{p(D | H)p(H)}{p(D)} \end{align} $$ Here $p(H)$ etc are *probability distributions*. All a probability distribution is is a function that assigns a probability value (between 0 and 1) to a specific outcome. A probability distribution then represents the probability of that outcome for every possible outcome. For instance, take p(H). 
This is the probability distribution over our *hypothesis space*, which you can think of as our baseline assumptions about whether the stock tends to go up or down i.e. $p(H) = [p(stock\_goes\_up), p(stock\_goes\_down)]$, before we've encountered any data that provides evidence for/against either hypothesis. If we assume we have no idea whether the stock will go up or down, we can say that the probability in each case is 0.5 so that $p(H) = [0.5, 0.5]$. When there is a discrete set of possible outcomes or states, probability distributions can be simply represented as vectors with one element for each outcome - where the element itself is simply the probability of seeing that outcome. The sum of all the elements of a probability distribution must equal 1. The vector's elements encode the probabilities of *all possible events*, so one of them *must* occur -- i.e. the probability of seeing *some* outcome is 100%. There are three fundamental quantities in Bayesian inference: the **posterior**, the **likelihood** and the **prior**. The posterior is ultimately the goal of inference, it is $p(H | D)$. What this represents is the probability of each hypothesis in the hypothesis space *given the data $D$*. You can think of as you best guesses at the truth of each hypothesis after optimally integrating the data. Next is the **prior** $p(H)$ which represents your assumptions about how likely each hypothesis is *prior to seeing the data*. Finally, there is the likelihood $p(D | H)$ which quantifies how likely each outcome you see is, given the different hypotheses. The likelihood distribution can also be thought of as a *model* for how the data relates to the hypotheses. The key insight of Bayes rule is simply that the posterior probability -- how likely the hypothesis is, given the data -- is simply the likelihood times the prior. This multiplication can be expressed as computing the following: how likely is the data you actually saw, given the different hypotheses ($P(D = d | H)$), multiplied by the prior probability you assign to each hypothesis ($P(H)$). So the full posterior is a distribution over the different hypotheses - in the case of discrete / Categorical distributions, your posterior $p(H |D)$ will also be a vector of probabilities, e.g. $p(H | D) = [0.75, 0.25]$. Effectively, hypotheses are more likely if they can predict the data well, but are also weighted by their a-priori probability. The marginal likelihood $p(D)$ is basically just there to normalize the posterior (i.e. make sure it equals 1). # Generative Models In Active Inference we typically talk about *generative models* as a core component. But what are generative models? Technically a generative model is simply the product of a likelihood and a prior, for all the possible data points $D$ and hypotheses $H$. This is also known as a *joint distribution* $p(H,D) = p(D | H)p(H)$. The generative model, then, is simply the numerator of Bayes rule, and if we normalize it we can compute posterior probabilities. It is called a generative model because it allows you to *generate* samples of the data. To do so, we follow the following steps: 1.) Sample a hypothesis $h_i$ from the prior -- i.e. $p(H)$ 2.) Sample a datum $d_i$ from the likelihood distribution, given the particular hypothesis $h_i$ that you sampled i.e. sample $d_i$ from $p(D | H = h_i)$. Another way to think about a generative model is that it is simply a model, or set of beliefs/assumptions, of how observed data are generated. 
This is often a very helpful way to think about inference problems, since it aligns with our notion of causality -- i.e. there are unknown processes in the world which generate data. If we can imagine a process and then imagine the data generated by this process, then we are imagining a generative model. Inference of the posterior, on the other hand, is more difficult because it goes in the *reverse* direction -- i.e. you have some set of observations and want to reconstruct the process that gave rise to them. Fundamentally, all of Bayesian statistics can be broken down into two steps: 1.) Make a mathematical model of the data-generating process - the sort of environmental / world structures you think could give rise to the sort of data you have (i.e. come up with a generative model). Generative models are classically written down using a set of unknown parameters (e.g. parameters that describe probability distributions, like sufficient statistics). You want to "fit" (read: infer the values of) these parameters, given some data. 2.) Given your generative model and some data, compute the posterior distribution over the unknown parameters using Bayes rule, or some approximation of Bayes rule. 3.) Be happy! All the methods in Bayesian statistics essentially fall into two classes. Coming up with more expressive and powerful generative models and then figuring out algorithms to perform inference on them. # Why is Bayesian Inference hard? At this point, you may be wondering: Bayesian inference seems pretty easy. We known Bayes rule. We can invent generative models easily enough. Computing the posterior is just taking the generative model and dividing it by $p(D)$. Why all this fuss? Why do we need to make a whole academic field out of this anyway? Luckily this has a straightforward answer. Bayesian inference is hard for essentially just one reason: that computing $p(D)$, the normalizing constant, is hard. Let's think about why. $p(D)$, which is known as the **marginal likelihood**, is fundamentally just the probability of the data. What does that mean? How is there just some free-floating probability of the data? Fundamentally there isn't. $p(D)$ is the probability of the data *averaged over all possible hypotheses*. We can write this as, $$ \begin{align} p(D) = \sum_h p(D | H)p(H) \end{align} $$ Effectively, $p(D)$ is the sum over all possible hypotheses of the probability of the data given that hypothesis, weighted by the prior probability of that hypothesis. This is challenging to compute in practice because you are often using really large (or indeed often infinite) hypothesis spaces. For instance, suppose your trader doesn't just want to know whether the stock will go up or down but *how much* it will go up or down. Now, with this simple change, you have an *infinite* amount of hypotheses: $p(H) = [p(stock\_goes\_up\_0.00001), p(stock\_goes\_up\_0.00002), p(stock\_goes\_up\_0.00003) ...]$. Then, if we want to compute the posterior in this case, we need to sum over every one of this infinite amount of hypotheses. You can see why this ends up being quite challenging in practice. Because of this intrinsic difficulty, there are a large number of special case algorithms which can solve (or usually approximate) the Bayesian posterior in various cases, and a large amount of work in statistics goes into inventing new ones or improving existing methods. Active Inference agents use one special class of approximate Bayesian inference methods called *variational methods* or *variational inference*. 
This will be discussed in much more detail in notebook 2. Beyond merely the difficulty of performing inference, another reason why Bayesian statistics is hard is that you often *don't know* the generative model. Or at least you are uncertain about some aspects of it. There is also a wide class of methods which let you perform inference and *learning* simultaneously, including for active inference, although they won't be covered in this tutorial. # A Generative Model for an Agent In Active Inference, we typically go a step beyond the simple cases of Bayesian inference described above, where we have a static set of hypotheses and some static data. We are instead interested in the case of an *agent* interacting with a *dynamic environment*. The key thing we need to add to the formalism to accomodate this is a notion of *time*. We consider an environment consisting of a set of *states* $[x_1, x_2, \dots x_t]$ evolving over time. Moreover, there is an agent which is in the environment over time. This agent receives a set of *observations* $[o_1, o_2 \dots o_t]$, which are a function of the state of the environment, and it can emit *actions* $[a_1, a_2 \dots a_t]$ which can change the state of the environment. The key step to get a handle on this situation mathematically is to define a *generative model* of it. To start, we must make some assumptions to simplify the problem. Specifically, we assume that the *state* of the environment $x_t$ only depends on the state at the previous timestep $x_{t-1}$ and the action emitted by the agent at the previous timestep $a_{t-1}$. Then, we assume that the observation $o_t$ at a given time-step is only a function of the environmental state at the current timestep $x_t$. Together, these assumptions are often called the **Markov Assumptions** and if the environment adheres to them it is often called a **Markov Decision Process**. The general computational "flow" of a Markov decision process can be thought of as following a sequence of steps. 1.) The state of the environment is $x_t$. 2.) The environment state $x_t$ generates an observation $o_t$. 3.) The agent receives an observation $o_t$, and based on it (or inferences derived thereof) decides to take some action $a_t$, which it executes in the environment. 4.) Given the current state $x_t$ and the agent's action $a_t$, the environment updates its own state to produce $x_{t+1}$ 5.) Go back to step 1. Now that we have this series of steps, we can try to define what it means mathematically. Specifically, we need to define two quantities. a.) We need to know how the state of the environment $x_t$ is reflected in the observation sent to the agent $o_t$. In Bayesian terms from the agent's perspective, the true state of the environment is unknown, and its various possible states can be thought of as "hypotheses", while the observations it receives are "data". The agent's generative model encodes some prior assumptions about each possible state $x_t$ (i.e. each hypothesis) relates to the probability of seeing each possible outcome $o_t$. This relationship (from the agent's perspective) is a function known as the **likelihood distribution** $p(D | H)$ or, in our new notation $p(o_t | x_t)$. b.) We need to know how the environment updates itself to form its new state $x_{t+1}$, given the old one $x_t$ and the action $a_t$. 
This can be thought of (from the perspective of the agent who does not observe it) as the **prior** since it can be thought of as the default expectation the agent can have about the state of the environment prior to receiving any observations. It can be written as $p(x_t | x_{t-1}, a_{t-1})$. This distribution is also known as the **transition distribution** since it specifies (the agent's assumptions about) how the environment transitions from one state to another. These two distributions $p(o_t | x_t)$ and $p(x_t | x_{t-1}, a_{t-1})$ are all that is needed to specify the evolution of the environment. At this point, it is necessary to make a distinction between the *actual* evolution of the environment -- known as the **generative process** -- and the *agent's model* of the evolution of the environment, which is known as the **generative model**. These are not necessarily the same, although in some sense the goal of the agent is to figure out a generative model that is as close to the true generative process as possible. In this example, we will consider a scenario in which the agent knows the true model, so the generative model and the generative process are the same, although this is not always the case. To make this concrete, all that is necessary to do is to specify precisely what the **likelihood** and **transition** distributions actually are. In the case of discrete state spaces, where it is possible to explicitly enumerate all states, a very generic way of representing these distributions is as *matrices*. Specifically, we can represent the likelihood distribution as a matrix denoted $\textbf{A}$, which is of shape $dimension\_of\_observation \times dimension\_of\_state$. The element $A_{i,j}$ of $\textbf{A}$ represents the probability that observation $i$ is generated by state $j$. Secondly, we can represent the transition distribution by a matrix $\textbf{B}$ of shape $state\_dim \times state\_dim \times action\_dim$ where element $\textbf{B}_{i,j,k}$ represents the probability of the environment moving to state $i$ given that it was previously in state $j$ and that action $k$ was taken. In the rest of this notebook, we will explicitly write down the generative process / generative model for a simple grid-world environment in code, to get a better handle on how the environment and the agent's model is specified. In the next notebook, we will turn to inference and action selection and discuss how active inference solves these two tricky problems. ```python import numpy as np import matplotlib.pyplot as plt import seaborn as sns ``` ## Constructing a Generative Model For the rest of this notebook, we will construct a simple generative model for an active inference agent navigating a 3x3 grid world environment. The agent can perform one of 5 movement actions at each time step: `LEFT, RIGHT, UP, DOWN, STAY`. The goal of the agent is to navigate to its preferred position. We will create matrices for both the environment as well as the agent itself. As we go up levels of abstraction, these environment and generative model matrices will be imported from classes - but this notebook is the lowest level representation of construction, to show how everything is built from the ground up. ## Understanding the state space The first thing to note is that we are in a 3x3 grid world which means we have 9 states in total. We can define the following mapping to better understand the space. 
```python state_mapping = {0: (0,0), 1: (1,0), 2: (2,0), 3: (0,1), 4: (1,1), 5:(2,1), 6: (0,2), 7:(1,2), 8:(2,2)} ``` All we're doing with this mapping dictionary is assigning a particular index (`0`, `1`, `2`, ..., `8`) to each grid position, which is defined as a pair of `(x, y)` coordinates ( `(0, 0), (1, 0)`, ..., `(2, 2)`). We will use the linear indices to refer to the grid positions in our probability distributions (e.g. `P(o_t | x_t = 5)`), so this `state_mapping` will allow us to easily move between these linear indices and the grid world indices. These kinds of mappings are very handy for intuition and visualization. And the following heatmap just represents how the coordinates map to the real grid space ```python grid = np.zeros((3,3)) for linear_index, xy_coordinates in state_mapping.items(): x, y = xy_coordinates grid[y,x] = linear_index # rows are the y-coordinate, columns are the x-coordinate -- so we index into the grid we'll be visualizing using '[y, x]' fig = plt.figure(figsize = (3,3)) sns.set(font_scale=1.5) sns.heatmap(grid, annot=True, cbar = False, fmt='.0f', cmap='crest') ``` ## Likelihood Matrix: A The likelihood matrix represents $P(o_t | x_t)$ , the probability of an observation given a state. In a grid world environment, the likelihood matrix of the agent is identical to that of the environment. It is simply the identity matrix over all states (in this case 9 states, for a 3x3 grid world) which represents the fact that the agent has probability 1 of observing that it is occupying any state x, given that it is in state x. This just means that the agent has full transparency over its own location in the grid. ```python A = np.eye(9) ``` ```python A ``` array([[1., 0., 0., 0., 0., 0., 0., 0., 0.], [0., 1., 0., 0., 0., 0., 0., 0., 0.], [0., 0., 1., 0., 0., 0., 0., 0., 0.], [0., 0., 0., 1., 0., 0., 0., 0., 0.], [0., 0., 0., 0., 1., 0., 0., 0., 0.], [0., 0., 0., 0., 0., 1., 0., 0., 0.], [0., 0., 0., 0., 0., 0., 1., 0., 0.], [0., 0., 0., 0., 0., 0., 0., 1., 0.], [0., 0., 0., 0., 0., 0., 0., 0., 1.]]) We can also plot the likelihood matrix as follows: ```python labels = [state_mapping[i] for i in range(A.shape[1])] def plot_likelihood(A): fig = plt.figure(figsize = (6,6)) ax = sns.heatmap(A, xticklabels = labels, yticklabels = labels, cbar = False) plt.title("Likelihood distribution (A)") plt.show() ``` ```python plot_likelihood(A) ``` ## Transition matrix: B The transition matrix determines how the agent can move around the gridworld given each of the 5 available actions (UP, DOWN, LEFT, RIGHT, STAY). So the transition matrix will be a 9x9x5 matrix, where each entry corresponds to an end state, a starting state, and the action that defines that specific transition. To construct this matrix, we have to understand that when the agent is at the edges of the grid, it cannot move outward, so trying to move right at the right wall will cause the agent to stay still. We will start by constructing a dictionary which we call P, which maps each state to its next state given an action ```python state_mapping ``` {0: (0, 0), 1: (1, 0), 2: (2, 0), 3: (0, 1), 4: (1, 1), 5: (2, 1), 6: (0, 2), 7: (1, 2), 8: (2, 2)} ```python P = {} dim = 3 actions = {'UP':0, 'RIGHT':1, 'DOWN':2, 'LEFT':3, 'STAY':4} for state_index, xy_coordinates in state_mapping.items(): P[state_index] = {a : [] for a in range(len(actions))} x, y = xy_coordinates '''if your y-coordinate is all the way at the top (i.e. 
y == 0), you stay in the same place -- otherwise you move one upwards (achieved by subtracting 3 from your linear state index)''' P[state_index][actions['UP']] = state_index if y == 0 else state_index - dim '''if your x-coordinate is all the way to the right (i.e. x == 2), you stay in the same place -- otherwise you move one to the right (achieved by adding 1 to your linear state index)''' P[state_index][actions["RIGHT"]] = state_index if x == (dim -1) else state_index+1 '''if your y-coordinate is all the way at the bottom (i.e. y == 2), you stay in the same place -- otherwise you move one down (achieved by adding 3 to your linear state index)''' P[state_index][actions['DOWN']] = state_index if y == (dim -1) else state_index + dim ''' if your x-coordinate is all the way at the left (i.e. x == 0), you stay in the same place -- otherwise, you move one to the left (achieved by subtracting 1 from your linear state index)''' P[state_index][actions['LEFT']] = state_index if x == 0 else state_index -1 ''' Stay in the same place (self explanatory) ''' P[state_index][actions['STAY']] = state_index ``` ```python P ``` {0: {0: 0, 1: 1, 2: 3, 3: 0, 4: 0}, 1: {0: 1, 1: 2, 2: 4, 3: 0, 4: 1}, 2: {0: 2, 1: 2, 2: 5, 3: 1, 4: 2}, 3: {0: 0, 1: 4, 2: 6, 3: 3, 4: 3}, 4: {0: 1, 1: 5, 2: 7, 3: 3, 4: 4}, 5: {0: 2, 1: 5, 2: 8, 3: 4, 4: 5}, 6: {0: 3, 1: 7, 2: 6, 3: 6, 4: 6}, 7: {0: 4, 1: 8, 2: 7, 3: 6, 4: 7}, 8: {0: 5, 1: 8, 2: 8, 3: 7, 4: 8}} From here, we can easily construct the transition matrix ```python num_states = 9 B = np.zeros([num_states, num_states, len(actions)]) for s in range(num_states): for a in range(len(actions)): ns = int(P[s][a]) B[ns, s, a] = 1 ``` B is a very large matrix; we can see its shape below, which is as expected: ```python B.shape ``` (9, 9, 5) We can also visualize B on the plots below. The x axis is the starting state, the y axis is the ending state, and each plot corresponds to an action given by its title. ```python fig, axes = plt.subplots(2,3, figsize = (15,8)) a = list(actions.keys()) count = 0 for i in range(dim-1): for j in range(dim): if count >= 5: break g = sns.heatmap(B[:,:,count], cmap = "OrRd", linewidth = 2.5, cbar = False, ax = axes[i,j], xticklabels=labels, yticklabels=labels) g.set_title(a[count]) count +=1 fig.delaxes(axes.flatten()[5]) plt.tight_layout() plt.show() ``` Now our generative model and environment are set up, and we can move on to Notebook 2, where we go through the core mechanics of how to perform inference and planning on this environment with this generative model.
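As an optional sanity check before moving on (a minimal sketch added for clarity, not part of the original tutorial), we can confirm that every column of `B` is a valid probability distribution over next states and propagate a one-hot state vector through a short, arbitrarily chosen action sequence, reusing the `B`, `actions`, `num_states` and `state_mapping` objects defined above.

```python
# Each column of B[:, :, a] should sum to 1: a categorical distribution over next states
assert np.allclose(B.sum(axis=0), 1.0)

# Start in the top-left corner (state 0) and roll the state forward under a fixed action sequence
state = np.zeros(num_states)
state[0] = 1.0
for a in [actions['RIGHT'], actions['RIGHT'], actions['DOWN'], actions['DOWN']]:
    state = B[:, :, a] @ state  # matrix-vector product applies the (here deterministic) transition

final_state = int(state.argmax())
print("Final state index:", final_state, "-> grid coordinates:", state_mapping[final_state])
```

With this particular action sequence the agent should end up in state 8, i.e. the bottom-right corner `(2, 2)`.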
fe69b70d6ab649aa88878be312bb7debf91a8982
310,005
ipynb
Jupyter Notebook
examples/gridworld_tutorial_1.ipynb
infer-actively/pymdp
4775054b0db53ec3f10b0b95eb6e78cc52f3588f
[ "MIT" ]
108
2020-12-08T06:45:28.000Z
2022-03-30T12:32:59.000Z
examples/gridworld_tutorial_1.ipynb
infer-actively/pymdp
4775054b0db53ec3f10b0b95eb6e78cc52f3588f
[ "MIT" ]
16
2021-01-17T14:32:17.000Z
2022-03-13T16:39:00.000Z
examples/gridworld_tutorial_1.ipynb
infer-actively/pymdp
4775054b0db53ec3f10b0b95eb6e78cc52f3588f
[ "MIT" ]
17
2021-01-01T15:02:47.000Z
2022-03-19T05:08:45.000Z
513.253311
158,820
0.711695
true
5,999
Qwen/Qwen-72B
1. YES 2. YES
0.737158
0.771843
0.568971
__label__eng_Latn
0.999484
0.160239
### University of Washington: Machine Learning and Statistics # Lecture 1: Regression (linear, errors on variables, outliers) Andrew Connolly and Stephen Portillo ##### Resources for this notebook include: - [Textbook](https://press.princeton.edu/books/hardcover/9780691198309/statistics-data-mining-and-machine-learning-in-astronomy) Chapter 8. - [astroML website](https://www.astroml.org/index.html) This notebook is developed based on material from A. Connolly, Z. Ivezic, M. Juric, S. Portillo, G. Richards, B. Sipocz, J. VanderPlas, D. Hogg, and many others. The notebook and associated material are available from [github](https://github.com/uw-astro/astr-598a-win22). Make sure you are using the latest version of Jupyterlab (>3.0) > pip install jupyterlab --upgrade ### Installing the latest (v1.0) of astroML from git > pip install --pre -U astroml <a id='toc'></a> ## This notebook includes: [Ordinary least squares method](#ordinaryLSQ) [Total least squares method](#totalLSQ) [Dealing with Outliers](#outliers) ### Fitting a Line using a Maximum Likelihood Estimator Assume the scatter in our measurements (the residuals) is generated by a Gaussian process. I.e.: >$ y_i = m x_i + b + r_i $ where $r_i$ is drawn from $N(0, \sigma)$. Here, $\sigma$ is the measurement error. Let us compute the likelihood. First, we ask ourselves what is the probability $p(y_i|x_i, M(m, b), \sigma)$ that a particular point $y_i$ would be measured (M is our model). It is just the normal distribution: >$ p(y_i|x_i, M(m, b), \sigma) = N(y_i - M(x)|\sigma) = \frac{1}{\sqrt{2 \pi \sigma^2}} \exp \left( - \frac{(y_i - M(x_i))^2}{2 \sigma^2} \right) $. We can write down the $\ln L$ >$ \ln L(m, b) = constant - \frac{1}{2 \sigma^2} \sum_{i=1}^N (y_i - M(x_i))^2 $ This is the expression that we now minimize with respect to $m$ and $b$ to find ML estimators for those parameters. This is equivalent to minimizing the sum of the squares, i.e. a _least-squares method_. ## Ordinary least squares method <a id='ordinaryLSQ'></a> [Go to top](#toc) ### NOTE: We suppress warnings for the packages (this is not recommended) ```python def warn(*args, **kwargs): pass import warnings warnings.warn = warn ``` ```python import numpy as np from matplotlib import pyplot as plt from matplotlib.patches import Ellipse import matplotlib matplotlib.rc('text', usetex=False) import seaborn as sns from scipy import optimize from astroML.linear_model import TLS_logL from astroML.plotting import setup_text_plots from astroML.plotting.mcmc import convert_to_stdev setup_text_plots(fontsize=8, usetex=True) # random seed np.random.seed(42) ``` WARNING (theano.tensor.blas): Using NumPy C-API based implementation for BLAS functions. ```python # We'll use the data from Table 1 of Hogg et al. 2010 from astroML.datasets import fetch_hogg2010test data = fetch_hogg2010test() data = data[5:] # no outliers (the first 5 points are outliers, discussed later) x = data['x'] y = data['y'] sigma_x = data['sigma_x'] sigma_y = data['sigma_y'] rho_xy = data['rho_xy'] y_obs = y ``` ```python # Plot the data with y error bars plt.errorbar(x, y, yerr=sigma_y, fmt=".k", capsize=0) plt.xlabel('x', fontsize=18) plt.ylabel('y', fontsize=18) plt.xlim(0, 300) plt.ylim(100, 600) ``` ### We have data $y(x)$ and we want to fit this model (i.e.
we want to obtain m and b): $$\mathbf{y} = m \, \mathbf{x} + b$$ For this problem the maximum likelihood and full posterior probability distribution (under infinitely broad priors) for the slope and intercept of the line are known analytically. The analytic result for the posterior probability distribution is a 2-d Gaussian with mean $$\mathbf{w} = \left(\begin{array}{c} m \\ b \end{array}\right) = (\mathbf{A}^\mathrm{T}\,C^{-1}\mathbf{A})^{-1} \, \mathbf{A}^\mathrm{T}\,C^{-1}\,\mathbf{y}$$ and covariance matrix $$\mathbf{V} = (\mathbf{A}^\mathrm{T}\,C^{-1}\mathbf{A})^{-1}$$ where $$\mathbf{y} = \left(\begin{array}{c} y_1 \\ y_2 \\ \vdots \\ y_N \end{array}\right) \quad , \quad \mathbf{A} = \left(\begin{array}{cc} x_1 & 1 \\ x_2 & 1 \\ \vdots & \vdots \\ x_N & 1 \end{array}\right) \quad,\, \mathrm{and} \quad \mathbf{C} = \left(\begin{array}{cccc} \sigma_1^2 & 0 & \cdots & 0 \\ 0 & \sigma_2^2 & \cdots & 0 \\ &&\ddots& \\ 0 & 0 & \cdots & \sigma_N^2 \end{array}\right)$$ Sometimes we call $A$ the design matrix. There are various functions in Python (and astroML/scikit-learn) for computing this but let's do it explicitly and step by step. With numpy, it only takes a few lines of code - here it is: ```python A = np.vander(x, 2) # Take a look at the documentation to see what this function does! # https://numpy.org/doc/stable/reference/generated/numpy.vander.html ATA = np.dot(A.T, A / sigma_y[:, None]**2) w = np.linalg.solve(ATA, np.dot(A.T, y / sigma_y**2)) V = np.linalg.inv(ATA) ``` Let's take a look and see what this prediction looks like. To do this, we'll sample 99 slopes and intercepts from this 2-D Gaussian and overplot them on the data. ```python plt.errorbar(x, y, yerr=sigma_y, fmt=".k", capsize=0) xGrid = np.linspace(40, 240) # note that we are drawing 99 lines here, with m and b randomly sampled from w and V for m, b in np.random.multivariate_normal(w, V, size=99): plt.plot(xGrid, m*xGrid + b, "g", alpha=0.02) plt.xlim(0, 250) plt.ylim(100, 600) plt.xlabel('x', fontsize=16) plt.ylabel('y', fontsize=16) ``` ```python # let's visualize the covariance between m and b a = np.random.multivariate_normal(w, V, size=2400) plt.scatter(a[:,0], a[:,1], alpha=0.1) plt.xlabel('m', fontsize=16) plt.ylabel('b', fontsize=16) ``` ### Another approach: the probabilistic model In order to perform posterior inference on a model and dataset, we need a function that computes the value of the posterior probability given a proposed setting of the parameters of the model. For reasons that will become clear below, we actually only need to return a value that is *proportional* to the probability. The posterior probability for parameters $\mathbf{w} = (m,\,b)$ conditioned on a dataset $\mathbf{y}$ is given by $$p(\mathbf{w} \,|\, \mathbf{y}) = \frac{p(\mathbf{y} \,|\, \mathbf{w}) \, p(\mathbf{w})}{p(\mathbf{y})}$$ where $p(\mathbf{y} \,|\, \mathbf{w})$ is the *likelihood* and $p(\mathbf{w})$ is the *prior*. For this example, we're modeling the likelihood by assuming that the datapoints are independent with known Gaussian uncertainties $\sigma_n$. This specifies a likelihood function: $$p(\mathbf{y} \,|\, \mathbf{w}) = \prod_{n=1}^N \frac{1}{\sqrt{2\,\pi\,\sigma_n^2}} \, \exp \left(-\frac{[y_n - f_\mathbf{w}(x_n)]^2}{2\,\sigma_n^2}\right)$$ where $f_\mathbf{w}(x) = m\,x + b$ is the linear model. For numerical reasons, we will acutally want to compute the logarithm of the likelihood. 
In this case, this becomes: $$\ln p(\mathbf{y} \,|\, \mathbf{w}) = -\frac{1}{2}\sum_{n=1}^N \frac{[y_n - f_\mathbf{w}(x_n)]^2}{\sigma_n^2} + \mathrm{constant} \quad.$$ By maxmizing $p(\mathbf{y} \,|\, \mathbf{w})$, we obtain posterior probability distributions for $m$ and $b$. ### Using astroML (or scikit-learn) to estimate the MLE for the parameters We use a standardized form for the regression - Define the regression model (clf = LinearRegression()) - Fit the model (clf.fit) - Predict the values given the model (clf.predict) ```python from astroML.linear_model import LinearRegression clf = LinearRegression() clf.fit(x[:, None], y, sigma_y) y_fit = clf.predict(x[:, None]) plt.errorbar(x, y, yerr=sigma_y, fmt=".k", capsize=0) plt.plot(x, y_fit, "g", alpha=1) plt.plot(xGrid, w[0]*xGrid + w[1], "r", alpha=0.5) plt.xlim(0, 250) plt.ylim(100, 600) plt.xlabel('x', fontsize=16) plt.ylabel('y', fontsize=16) ``` ## Total Least Squares regression <a id='totalLSQ'></a> [Go to top](#toc) What do we do if we have errors on the dependent and independent axes (or if the x axes errors are larger)? ### Exercise - using the approaches given above estimate the slope and intercept when the errors in x dominate ```python # read data and select training and cross validation sample cal_x,cal_dx,cal_y,cal_dy = np.loadtxt("data/X_Y.clean.txt", skiprows=1, unpack=True) cal_dxy=np.zeros(len(x)) # define classifiers n_constraints = [2,] # set plots fig = plt.figure(figsize=(10, 20)) ax = fig.add_subplot(211) plt.errorbar(cal_x, cal_y, yerr=cal_dy, xerr=cal_dx, fmt=".k", capsize=0) ax.scatter(cal_x, cal_y,c='black') ax.set_xlim(cal_x.min()/1.05,cal_x.max()*1.05) ax.set_ylim(cal_y.min()*1.05,cal_y.max()/1.05) ax.set_xlabel("$\log \Sigma_{H_2}\ (M_\odot\ pc^{-2})$") ax.set_ylabel("$\log \Sigma_{SFR}\ (M_\odot\ yr^{-1}\ kpc^{-2})$") ``` ### LSQ with uncertainties in both the dependent and independent axes In almost all real-world applications, the assumption that one variable (the independent variable) is essentially free from any uncertainty is not valid. Both the dependent and independent variables will have measurement uncertainties (e.g. Tully-Fisher relations). The impact of errors on the ``independent'' variables is a bias in the derived regression coefficients. This is straightforward to show if we consider a linear model with a dependent and independent variable, $y^*$ and $x^*$. We can write the objective function as before, \begin{equation} y^*_i=\theta_0 + \theta_1x^*_{i}. \end{equation} Now let us assume that we observe $y$ and $x$, which are noisy representations of $y^*$ and $x^*$, i.e., \begin{eqnarray} x_i&=&x^*_i + \delta_i,\\ y_i &=& y^* + \epsilon_i, \end{eqnarray} with $\delta$ and $\epsilon$ centered normal distributions. Solving for $y$ we get \begin{equation} y= \theta_0 + \theta_1 (x_i - \delta_i) +\epsilon_i. \end{equation} The uncertainty in $x$ is now part of the regression equation and scales with the regression coefficients (biasing the regression coefficient). This problem is known in the statistics literature as *total least squares* and belongs to the class of ``errors-in-variables'' problems. For a detailed discussion of the solution to this problem, which is essentially maximum likelihood estimation, please see Chapter 8 in the reference book. Two other recommended references are Hogg et al. (2010, astro-ph/1008.4686) and Kelly et al. (2011, astro-ph/1112.1745). How can we account for the measurement uncertainties in both the independent and dependent variables? 
Assuming they are Gaussian > $ \Sigma_i = \left[ \begin{array}{cc} \sigma_{x_i}^2 & \sigma_{xy_i} \\ \sigma_{xy_i} & \sigma_{y_i}^2 \end{array} \right] $ If we go back to the start of the lecture and write the equation for a line in terms of its normal vector > $ {\bf n} = \left [ \begin{array}{c} -\sin \alpha\\ \cos \alpha\\ \end{array} \right ] $ with $\theta_1 = \arctan(\alpha)$ and $\alpha$ is the angle between the line and the $x$-axis. The covariance matrix projects onto this space as >$ S_i^2 = {\bf n}^T \Sigma_i {\bf n} $ and the distance between a point and the line is >$\Delta_i = {\bf n}^T z_i - \theta_0\ \cos \alpha, $ where $z_i$ represents the data point >$(x_i,y_i)$. The log-likelihood is then >$ {\rm lnL} = - \sum_i \frac{\Delta_i^2}{2 S_i^2}$ and we can maximize the liklihood as a brute-force search or through MCMC <b> THINKING OF A PROJECT: IT WOULD BE A REALLY INTERESTING PROBLEM TO EXTEND AND GENERALIZE THIS - IT WOULD GET A LOT OF CITATIONS!</b> ```python # Define some convenience functions # translate between typical slope-intercept representation, # and the normal vector representation def get_m_b(beta): b = np.dot(beta, beta) / beta[1] m = -beta[0] / beta[1] return m, b def get_beta(m, b): denom = (1 + m * m) return np.array([-b * m / denom, b / denom]) # compute the ellipse principal axes and rotation from covariance def get_principal(sigma_x, sigma_y, rho_xy): sigma_xy2 = rho_xy * sigma_x * sigma_y alpha = 0.5 * np.arctan2(2 * sigma_xy2, (sigma_x ** 2 - sigma_y ** 2)) tmp1 = 0.5 * (sigma_x ** 2 + sigma_y ** 2) tmp2 = np.sqrt(0.25 * (sigma_x ** 2 - sigma_y ** 2) ** 2 + sigma_xy2 ** 2) return np.sqrt(tmp1 + tmp2), np.sqrt(tmp1 - tmp2), alpha # plot ellipses def plot_ellipses(x, y, sigma_x, sigma_y, rho_xy, factor=2, ax=None): if ax is None: ax = plt.gca() sigma1, sigma2, alpha = get_principal(sigma_x, sigma_y, rho_xy) for i in range(len(x)): ax.add_patch(Ellipse((x[i], y[i]), factor * sigma1[i], factor * sigma2[i], alpha[i] * 180. 
/ np.pi, fc='none', ec='k')) # Find best-fit parameters def get_best_fit(x, y, sigma_x, sigma_y, rho_xy): X = np.vstack((x, y)).T dX = np.zeros((len(x), 2, 2)) dX[:, 0, 0] = sigma_x ** 2 dX[:, 1, 1] = sigma_y ** 2 dX[:, 0, 1] = dX[:, 1, 0] = rho_xy * sigma_x * sigma_y # note: TLS_logL was imported from astroML.linear_model min_func = lambda beta: -TLS_logL(beta, X, dX) # this is optimization, not MCMC return optimize.fmin(min_func, x0=[-1, 1]) # plot results def plot_best_fit(x, y, sigma_x, sigma_y, rho_xy, beta_fit, mLSQ, bLSQ): fig = plt.figure(figsize=(8, 5)) fig.subplots_adjust(left=0.1, right=0.95, wspace=0.25, bottom=0.15, top=0.9) ax = fig.add_subplot(121) ax.scatter(x, y, c='k', s=9) plot_ellipses(x, y, sigma_x, sigma_y, rho_xy, ax=ax) # plot the best-fit line m_fit, b_fit = get_m_b(beta_fit) x_fit = np.linspace(0, 300, 10) ax.plot(x_fit, m_fit * x_fit + b_fit, '-k') ax.plot(x_fit, mLSQ * x_fit + bLSQ, '--', c='red') ax.set_xlim(40, 250) ax.set_ylim(100, 600) ax.set_xlabel('$x$') ax.set_ylabel('$y$') # plot the likelihood contour in m, b ax = fig.add_subplot(122) m = np.linspace(1.7, 2.8, 100) b = np.linspace(-60, 110, 100) logL = np.zeros((len(m), len(b))) X = np.vstack((x, y)).T dX = np.zeros((len(x), 2, 2)) dX[:, 0, 0] = sigma_x ** 2 dX[:, 1, 1] = sigma_y ** 2 dX[:, 0, 1] = dX[:, 1, 0] = rho_xy * sigma_x * sigma_y for i in range(len(m)): for j in range(len(b)): logL[i, j] = TLS_logL(get_beta(m[i], b[j]), X, dX) ax.contour(m, b, convert_to_stdev(logL.T), levels=(0.683, 0.955, 0.997), colors='k') ax.plot([-1000, 1000], [bLSQ, bLSQ], ':k', lw=1, c='red') ax.plot([mLSQ, mLSQ], [-1000, 1000], ':k', lw=1, c='red') ax.set_xlabel('slope') ax.set_ylabel('intercept') ax.set_xlim(1.7, 2.8) ax.set_ylim(-60, 110) plt.show() ``` ```python # for comparison, let's get the standard LSQ solution - no errors mux = np.mean(x) muy = np.mean(y) mLSQ = np.sum(x*y-mux*muy)/np.sum((x-mux)**2) bLSQ = muy - mLSQ*mux print('mLSQ=', mLSQ) print('bLSQ=', bLSQ) ``` mLSQ= 2.191027996426704 bLSQ= 32.00396939102313 ```python ## let's do only errors in y - this is standard LSQ # Find best-fit parameters err_x = 0*sigma_x err_y = sigma_y rho = 0*rho_xy best_fit1 = get_best_fit(x, y, err_x, err_y, rho) m_fit1, b_fit1 = get_m_b(best_fit1) print('m=', m_fit1) print('b=', b_fit1) # plot best fit plot_best_fit(x, y, err_x, err_y, rho, best_fit1, mLSQ, bLSQ) ``` ```python ## now only errors in x; note that we could switch the axes and use standard LSQ # Find best-fit parameters err_x = sigma_x err_y = 0*sigma_y rho = rho_xy best_fit2 = get_best_fit(x, y, err_x, err_y, rho) m_fit2, b_fit2 = get_m_b(best_fit2) print('m=', m_fit2) print('b=', b_fit2) # plot best fit plot_best_fit(x, y, err_x, err_y, rho, best_fit2, mLSQ, bLSQ) ``` ```python ## errors in x and y, but without covariance # Find best-fit parameters err_x = sigma_x err_y = sigma_y rho = 0*rho_xy best_fit3 = get_best_fit(x, y, err_x, err_y, rho) m_fit3, b_fit3 = get_m_b(best_fit3) print('m=', m_fit3) print('b=', b_fit3) # plot best fit plot_best_fit(x, y, err_x, err_y, rho, best_fit3, mLSQ, bLSQ) ``` ```python ## errors in x and y with covariance # Find best-fit parameters err_x = sigma_x err_y = sigma_y rho = rho_xy best_fit4 = get_best_fit(x, y, err_x, err_y, rho) m_fit4, b_fit4 = get_m_b(best_fit4) print('m=', m_fit4) print('b=', b_fit4) # plot best fit plot_best_fit(x, y, err_x, err_y, rho, best_fit4, mLSQ, bLSQ) ``` ```python # compare all 4 versions and LSQ print('m=', m_fit1, m_fit2, m_fit3, m_fit4, mLSQ) ``` m= 2.299303276704902 2.596107117541477 
2.3849344880407286 2.248785375229947 2.191027996426704 ### Using astroML ## Data sets used in the examples below Use simulation data from [Kelly 2007](https://iopscience.iop.org/article/10.1086/519947/pdf). This simulator, called `simulation_kelly` is available from `astroML.datasets`. The function returns the $\xi_i$, $\eta_i$, $x_i$, $y_i$, $\epsilon_{x,i}$, $\epsilon_{y,i}$ and the input regression coefficients $\alpha$ and $\beta$ and intrinsic scatter $\epsilon$. A total of ``size`` values generated, measurement errors are scaled by parameters ``scalex`` and ``scaley`` following section 7.1 in [Kelly 2007](https://iopscience.iop.org/article/10.1086/519947/pdf). ```python from astroML.datasets import simulation_kelly ksi, eta, xi, yi, xi_error, yi_error, alpha_in, beta_in = simulation_kelly(size=100, scalex=0.2, scaley=0.2, alpha=2, beta=1, epsilon=(0, 0.75)) ksi_0 = np.arange(np.min(xi[0]) - 0.5, np.max(xi[0]) + 0.5) eta_0 = alpha_in + ksi_0 * beta_in figure = plt.figure(figsize=(10, 8)) figure.subplots_adjust(left=0.1, right=0.95, bottom=0.1, top=0.95, hspace=0.1, wspace=0.15) ax = figure.add_subplot(111) ax.scatter(xi[0], yi, alpha=0.5) ax.errorbar(xi[0], yi, xerr=xi_error[0], yerr=yi_error, alpha=0.3, ls='') ax.set_xlabel(r'$\xi$') ax.set_ylabel(r'$\eta$') ax.plot(ksi_0, eta_0, color='orange') ``` ### Linear regression with uncertainties in both dependent and independent axes The class ``LinearRegressionwithErrors`` can be used to take into account measurement errors in both the dependent and independent variables. The implementation relies on the [PyMC3](https://docs.pymc.io/) and [Theano](http://deeplearning.net/software/theano/) packages. Note: The first initialization of the fitter is expected to take a couple of minutes, as ``Theano`` performs some code compilation for the underlying model. Sampling for consecutive runs is expected to start up significantly faster. ```python from astroML.linear_model import LinearRegressionwithErrors from astroML.plotting import plot_regressions, plot_regression_from_trace ``` ```python linreg_xy_err = LinearRegressionwithErrors() linreg_xy_err.fit(xi, yi, yi_error, xi_error) ``` ```python plot_regressions(ksi, eta, xi[0], yi, xi_error[0], yi_error, add_regression_lines=True, alpha_in=alpha_in, beta_in=beta_in) plot_regression_from_trace(linreg_xy_err, (xi, yi, xi_error, yi_error), ax=plt.gca(), chains=20) ``` ## Multivariate regression For multivariate data (where we fit a hyperplane rather than a straight line) we simply extend the description of the regression function to multiple dimensions. The formalism used in the previous example becomes: $$ \eta_i = \alpha + \beta^T \xi_i + \epsilon_i $$ where both $\beta^T$ and $\xi_i$ are now N-element vectors. ### Generate a dataset: We use the same function as above to generate 100 datapoints in 2 dimensions. Note that the size of the ``beta`` parameter needs to match the dimensions. ```python ksi2, eta2, xi2, yi2, xi_error2, yi_error2, alpha_in2, beta_in2 = simulation_kelly(size=100, scalex=0.2, scaley=0.2, alpha=2, beta=[0.5, 1], multidim=2) ``` The previously used ``LinearRegressionwithErrors`` class can be used with multidimensional data, thus the fitting is done the exact same way as before: ```python linreg_xy_err2 = LinearRegressionwithErrors() linreg_xy_err2.fit(xi2, yi2, yi_error2, xi_error2) ``` There are several ways to explore the fits, in the following we show a few ways to plot this dataset. 
Since in this example the fitted hyperplane was 2D, we can use a 3D plot to show both the fit and the underlying regression that we used to generate the data. In this 3D plot, the blue plane is the true regression, while the red plane is our fit, which takes into account the errors on the data points. Other plotting libraries can also be used to e.g. create pairplots of the parameters (e.g. ArviZ's ``plot_pair`` function, or Seaborn's ``jointplot``). ```python x0 = np.linspace(np.min(xi2[0])-0.5, np.max(xi2[0])+0.5, 50) x1 = np.linspace(np.min(xi2[1])-0.5, np.max(xi2[1])+0.5, 50) x0, x1 = np.meshgrid(x0, x1) y0 = alpha_in + x0 * beta_in2[0] + x1 * beta_in2[1] y_fitted = linreg_xy_err2.coef_[0] + x0 * linreg_xy_err2.coef_[1] + x1 * linreg_xy_err2.coef_[2] import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D fig = plt.figure() ax = fig.add_subplot(111, projection='3d') ax.set_xlabel('xi2[0] ') ax.set_ylabel('xi2[1] ') ax.set_zlabel('yi2 ') ax.scatter(xi2[0], xi2[1], yi2, s=20) ax.plot_surface(x0, x1, y0, alpha=0.2, facecolor='blue', label='True regression') ax.plot_surface(x0, x1, y_fitted, alpha=0.2, facecolor='red', label='Fitted regression') ``` ### Sampler statistics and traceplots The PyMC3 trace is available in the ``.trace`` attribute of the class instances (e.g. ``linreg_xy_err2.trace`` in the previous example), after we performed the fit. This can then be used to check for convergence and to generate statistics for the samples. We recommend using the tools provided by PyMC3, e.g. the ``traceplot()`` function that takes the trace object as its input. Note that in the multidimensional case, there will be multiple ``ksi`` and ``slope`` traces in agreement with the dimensionality of the input ``xi`` data; in the ``traceplot`` they are plotted with different colours. ```python import pymc3 as pm matplotlib.rc('text', usetex=False) pm.traceplot(linreg_xy_err2.trace) ``` ### CONCLUSION Beware of your measurement uncertainties, especially if both variables have them! ## Dealing with outliers <a id='outliers'></a> [Go to top](#toc) The $L_2$ ($\sum_{i=1}^N (y_i - M(x_i))^2$) norm is sensitive to outliers (i.e. it squares the residuals). A number of approaches exist for correcting for outliers. These include "sigma-clipping", using interquartile ranges, taking the median of solutions of subsets of the data, and least trimmed squares (which searches for the subset of points that minimizes $\sum_i^K (y_i - \theta_ix_i)^2$). We can also change the **loss function** or **likelihood** to reduce the weight of outliers.
An example of this is known as the _Huber loss function_ > $ \sum_{i=1}^N e(y_i|y), $ where >$ e(t) = \left\{ \begin{array}{ll} \frac{1}{2} t^2 & \mbox{if} \; |t| \leq c, \\ c|t| - \frac{1}{2} c^2 & \mbox{if} \; |t| \geq c, \end{array} \right ) $ this is continuous and differentiable and transitions to an $L_1$ norm ($\sum_{i=1}^N |y_i - M(x_i)|$) for large excursions ```python %matplotlib inline import numpy as np from matplotlib import pyplot as plt from scipy import optimize #------------------------------------------------------------ # Get data: this includes outliers data = fetch_hogg2010test() x = data['x'] y = data['y'] dy = data['sigma_y'] fig = plt.figure(figsize=(6, 6)) ax = fig.add_subplot(111) ax.errorbar(x[4:], y[4:], dy[4:], fmt='.k', lw=1, ecolor='gray') ax.errorbar(x[:4], y[:4], dy[:4], fmt='.k', lw=1, ecolor='red') ax.set_xlim(0, 350) ax.set_ylim(100, 700) ax.set_xlabel('$x$') ax.set_ylabel('$y$') ``` ```python # Define the standard squared-loss function def squared_loss(m, b, x, y, dy): y_fit = m * x + b return np.sum(((y - y_fit) / dy) ** 2, -1) # Define the log-likelihood via the Huber loss function def huber_loss(m, b, x, y, dy, c=2): y_fit = m * x + b t = abs((y - y_fit) / dy) flag = t > c return np.sum((~flag) * (0.5 * t ** 2) - (flag) * c * (0.5 * c - t), -1) f_squared = lambda beta: squared_loss(beta[0], beta[1], x=x[4:], y=y[4:], dy=dy[4:]) f_squared_outlier = lambda beta: squared_loss(beta[0], beta[1], x=x, y=y, dy=dy) f_huber = lambda beta: huber_loss(beta[0], beta[1], x=x, y=y, dy=dy, c=1) #------------------------------------------------------------ # compute the maximum likelihood using the huber loss beta0 = (2, 30) beta_squared = optimize.fmin(f_squared, beta0) beta_squared_outlier = optimize.fmin(f_squared_outlier, beta0) beta_huber = optimize.fmin(f_huber, beta0) #------------------------------------------------------------ # Plot the results fig = plt.figure(figsize=(6, 6)) ax = fig.add_subplot(111) x_fit = np.linspace(0, 350, 10) ax.plot(x_fit, beta_squared[0] * x_fit + beta_squared[1], '--k', label="squared loss:\n $y=%.2fx + %.1f$" % tuple(beta_squared)) ax.plot(x_fit, beta_squared_outlier[0] * x_fit + beta_squared_outlier[1], '--k', color='red', label="squared loss with outliers:\n $y=%.2fx + %.1f$" % tuple(beta_squared_outlier)) ax.plot(x_fit, beta_huber[0] * x_fit + beta_huber[1], '-k', color='blue', label="huber loss:\n $y=%.2fx + %.1f$" % tuple(beta_huber)) ax.legend(loc=4, prop=dict(size=14)) ax.errorbar(x[4:], y[4:], dy[4:], fmt='.k', lw=1, ecolor='gray') ax.errorbar(x[:4], y[:4], dy[:4], fmt='.k', lw=1, ecolor='red') ax.set_xlim(0, 350) ax.set_ylim(100, 700) ax.set_xlabel('$x$') ax.set_ylabel('$y$') ``` ### From a Bayesian Perspective We can assume the data are drawn from two Gaussians error distribution (one for the function and the other for the outliers) $\begin{eqnarray} & p(\{y_i\}|\{x_i\}, \{\sigma_i\}, \theta_0, \theta_1, \mu_b, V_b, p_b) \propto \nonumber\\ & \prod_{i=1}^{N} \bigg[ \frac{1-p_b}{\sqrt{2\pi\sigma_i^2}} \exp\left(-\frac{(y_i - \theta_1 x_i - \theta_0)^2} {2 \sigma_i^2}\right) + \frac{p_b}{\sqrt{2\pi(V_b + \sigma_i^2)}} \exp\left(-\frac{(y_i - \mu_b)^2}{2(V_b + \sigma_i^2)}\right) \bigg]. \end{eqnarray} $ $V_b$ is the variance of the outlier distribution. If we use MCMC we can marginalize over the nuisance parameters $p_b$, $V_b$, $\mu_b$. We could also calculate the probability that a point is drawn from the outlier or "model" Gaussian. 
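To make the last remark concrete, the posterior probability that point $i$ was generated by the outlier component follows directly from Bayes' rule applied to the mixture likelihood above (this short derivation is added here for clarity and is not in the original notebook):

$$
p(\mathrm{outlier}_i \,|\, y_i) =
\frac{p_b\, N\!\left(y_i \,;\, \mu_b,\ V_b + \sigma_i^2\right)}
{p_b\, N\!\left(y_i \,;\, \mu_b,\ V_b + \sigma_i^2\right) + (1 - p_b)\, N\!\left(y_i \,;\, \theta_1 x_i + \theta_0,\ \sigma_i^2\right)}.
$$

In the third model below, the posterior mean of the indicator `qi` plays this role: it estimates the complementary probability that point $i$ was drawn from the "model" Gaussian.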
```python import numpy as np import pymc3 as pm from matplotlib import pyplot as plt from theano import shared as tshared import theano.tensor as tt from astroML.datasets import fetch_hogg2010test from astroML.plotting.mcmc import convert_to_stdev # ---------------------------------------------------------------------- # This function adjusts matplotlib settings for a uniform feel in the textbook. # Note that with usetex=True, fonts are rendered with LaTeX. This may # result in an error if LaTeX is not installed on your system. In that case, # you can set usetex to False. if "setup_text_plots" not in globals(): from astroML.plotting import setup_text_plots setup_text_plots(fontsize=8, usetex=True) np.random.seed(0) # ------------------------------------------------------------ # Get data: this includes outliers. We need to convert them to Theano variables data = fetch_hogg2010test() xi = tshared(data['x']) yi = tshared(data['y']) dyi = tshared(data['sigma_y']) size = len(data) # ---------------------------------------------------------------------- # Define basic linear model def model(xi, theta, intercept): slope = np.tan(theta) return slope * xi + intercept # ---------------------------------------------------------------------- # First model: no outlier correction with pm.Model(): # set priors on model gradient and y-intercept inter = pm.Uniform('inter', -1000, 1000) theta = pm.Uniform('theta', -np.pi / 2, np.pi / 2) y = pm.Normal('y', mu=model(xi, theta, inter), sd=dyi, observed=yi) trace0 = pm.sample(draws=5000, tune=1000) # ---------------------------------------------------------------------- # Second model: nuisance variables correcting for outliers # This is the mixture model given in equation 17 in Hogg et al def mixture_likelihood(yi, xi): """Equation 17 of Hogg 2010""" sigmab = tt.exp(log_sigmab) mu = model(xi, theta, inter) Vi = dyi ** 2 Vb = sigmab ** 2 root2pi = np.sqrt(2 * np.pi) L_in = (1. / root2pi / dyi * np.exp(-0.5 * (yi - mu) ** 2 / Vi)) L_out = (1. / root2pi / np.sqrt(Vi + Vb) * np.exp(-0.5 * (yi - Yb) ** 2 / (Vi + Vb))) return tt.sum(tt.log((1 - Pb) * L_in + Pb * L_out)) with pm.Model(): # uniform prior on Pb, the fraction of bad points Pb = pm.Uniform('Pb', 0, 1.0, testval=0.1) # uniform prior on Yb, the centroid of the outlier distribution Yb = pm.Uniform('Yb', -10000, 10000, testval=0) # uniform prior on log(sigmab), the spread of the outlier distribution log_sigmab = pm.Uniform('log_sigmab', -10, 10, testval=5) inter = pm.Uniform('inter', -200, 400) theta = pm.Uniform('theta', -np.pi / 2, np.pi / 2, testval=np.pi / 4) y_mixture = pm.DensityDist('mixturenormal', logp=mixture_likelihood, observed={'yi': yi, 'xi': xi}) trace1 = pm.sample(draws=5000, tune=1000) # ---------------------------------------------------------------------- # Third model: marginalizes over the probability that each point is an outlier. 
# define priors on beta = (slope, intercept) def outlier_likelihood(yi, xi): """likelihood for full outlier posterior""" sigmab = tt.exp(log_sigmab) mu = model(xi, theta, inter) Vi = dyi ** 2 Vb = sigmab ** 2 logL_in = -0.5 * tt.sum(qi * (np.log(2 * np.pi * Vi) + (yi - mu) ** 2 / Vi)) logL_out = -0.5 * tt.sum((1 - qi) * (np.log(2 * np.pi * (Vi + Vb)) + (yi - Yb) ** 2 / (Vi + Vb))) return logL_out + logL_in with pm.Model(): # uniform prior on Pb, the fraction of bad points Pb = pm.Uniform('Pb', 0, 1.0, testval=0.1) # uniform prior on Yb, the centroid of the outlier distribution Yb = pm.Uniform('Yb', -10000, 10000, testval=0) # uniform prior on log(sigmab), the spread of the outlier distribution log_sigmab = pm.Uniform('log_sigmab', -10, 10, testval=5) inter = pm.Uniform('inter', -1000, 1000) theta = pm.Uniform('theta', -np.pi / 2, np.pi / 2) # qi is bernoulli distributed qi = pm.Bernoulli('qi', p=1 - Pb, shape=size) y_outlier = pm.DensityDist('outliernormal', logp=outlier_likelihood, observed={'yi': yi, 'xi': xi}) trace2 = pm.sample(draws=5000, tune=1000) # ------------------------------------------------------------ # plot the data fig = plt.figure(figsize=(5, 5)) fig.subplots_adjust(left=0.1, right=0.95, wspace=0.25, bottom=0.1, top=0.95, hspace=0.2) # first axes: plot the data ax1 = fig.add_subplot(221) ax1.errorbar(data['x'], data['y'], data['sigma_y'], fmt='.k', ecolor='gray', lw=1) ax1.set_xlabel('$x$') ax1.set_ylabel('$y$') #------------------------------------------------------------ # Go through models; compute and plot likelihoods linestyles = [':', '--', '-'] labels = ['no outlier correction\n(dotted fit)', 'mixture model\n(dashed fit)', 'outlier rejection\n(solid fit)'] x = np.linspace(0, 350, 10) bins = [(np.linspace(140, 300, 51), np.linspace(0.6, 1.6, 51)), (np.linspace(-40, 120, 51), np.linspace(1.8, 2.8, 51)), (np.linspace(-40, 120, 51), np.linspace(1.8, 2.8, 51))] for i, trace in enumerate([trace0, trace1, trace2]): H2D, bins1, bins2 = np.histogram2d(np.tan(trace['theta']), trace['inter'], bins=50) w = np.where(H2D == H2D.max()) # choose the maximum posterior slope and intercept slope_best = bins1[w[0][0]] intercept_best = bins2[w[1][0]] # plot the best-fit line ax1.plot(x, intercept_best + slope_best * x, linestyles[i], c='k') # For the model which identifies bad points, # plot circles around points identified as outliers. if i == 2: Pi = trace['qi'].mean(0) outlier_x = data['x'][Pi < 0.32] outlier_y = data['y'][Pi < 0.32] ax1.scatter(outlier_x, outlier_y, lw=1, s=400, alpha=0.5, facecolors='none', edgecolors='red') # plot the likelihood contours ax = plt.subplot(222 + i) H, xbins, ybins = np.histogram2d(trace['inter'], np.tan(trace['theta']), bins=bins[i]) H[H == 0] = 1E-16 Nsigma = convert_to_stdev(np.log(H)) ax.contour(0.5 * (xbins[1:] + xbins[:-1]), 0.5 * (ybins[1:] + ybins[:-1]), Nsigma.T, levels=[0.683, 0.955], colors='black') ax.set_xlabel('intercept') ax.set_ylabel('slope') ax.grid(color='gray') ax.xaxis.set_major_locator(plt.MultipleLocator(40)) ax.yaxis.set_major_locator(plt.MultipleLocator(0.2)) ax.text(0.96, 0.96, labels[i], ha='right', va='top', bbox=dict(fc='w', ec='none', alpha=0.5), transform=ax.transAxes) ax.set_xlim(bins[i][0][0], bins[i][0][-1]) ax.set_ylim(bins[i][1][0], bins[i][1][-1]) ax1.set_xlim(0, 350) ax1.set_ylim(100, 700) plt.show() ``` ### Exercise - using the approaches given above estimate the slope and intercept when the errors are in both axes
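A minimal sketch of one possible starting point for this exercise is given below. It simply reuses the total-least-squares machinery defined earlier (`get_best_fit` and `get_m_b`) on the full Hogg et al. (2010) table, passing the uncertainties on both axes and their correlation; the variable names are only illustrative and no attempt is made here to deal with the outliers.

```python
# Pull the columns directly from the full table (x and y were re-assigned in the plotting code above)
x_all, y_all = data['x'], data['y']
sx_all, sy_all, rho_all = data['sigma_x'], data['sigma_y'], data['rho_xy']

# Maximum-likelihood TLS fit with errors (and covariance) on both axes
beta_exercise = get_best_fit(x_all, y_all, sx_all, sy_all, rho_all)
m_ex, b_ex = get_m_b(beta_exercise)
print('slope =', m_ex, 'intercept =', b_ex)
```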
5b5e58a619c328da613e6158f2798a29453468b6
438,068
ipynb
Jupyter Notebook
lectures/Lecture2-regression-errors.ipynb
uw-astro/astr-598a-win22
65e0f366e164c276f1dfc06873741c6f6c94b300
[ "BSD-3-Clause" ]
null
null
null
lectures/Lecture2-regression-errors.ipynb
uw-astro/astr-598a-win22
65e0f366e164c276f1dfc06873741c6f6c94b300
[ "BSD-3-Clause" ]
null
null
null
lectures/Lecture2-regression-errors.ipynb
uw-astro/astr-598a-win22
65e0f366e164c276f1dfc06873741c6f6c94b300
[ "BSD-3-Clause" ]
1
2022-01-10T16:01:50.000Z
2022-01-10T16:01:50.000Z
270.078915
51,540
0.912543
true
10,296
Qwen/Qwen-72B
1. YES 2. YES
0.845942
0.826712
0.699351
__label__eng_Latn
0.85133
0.463157
# Implementing Lemke-Howson in Python **Daisuke Oyama** *Faculty of Economics, University of Tokyo* ```python import numpy as np ``` ```python np.set_printoptions(precision=5) # Reduce the number of digits printed ``` ```python A = np.array([[3, 3], [2, 5], [0 ,6]]) B_T = np.array([[3, 2, 3], [2, 6, 1]]) m, n = A.shape # Numbers of actions of the players ``` To be consistent with the 0-based indexing in Python, we call the players 0 and 1. ## Complementary pivoting Build the tableau for each player: ```python # Player 0 tableau0 = np.empty((n, m+n+1)) tableau0[:, :m] = B_T tableau0[:, m:m+n] = np.identity(n) tableau0[:, -1] = 1 ``` ```python # One-line commamd # tableau0 = np.hstack((B_T, np.identity(n), np.ones((n, 1)))) ``` ```python tableau0 ``` array([[ 3., 2., 3., 1., 0., 1.], [ 2., 6., 1., 0., 1., 1.]]) ```python # Player 1 tableau1 = np.empty((m, n+m+1)) tableau1[:, :m] = np.identity(m) tableau1[:, m:m+n] = A tableau1[:, -1] = 1 ``` ```python # One-line command # tableau1 = np.hstack((np.identity(m), A, np.ones((m, 1)))) ``` ```python tableau1 ``` array([[ 1., 0., 0., 3., 3., 1.], [ 0., 1., 0., 2., 5., 1.], [ 0., 0., 1., 0., 6., 1.]]) Denote the player 0's variables by $x_0, x_1, x_2$ and $s_3, s_4$, and the player 1's variables by $r_0, r_1, r_2$ and $y_3, y_4$. The initial basic variables are $s_3, s_4$ and $r_0, r_1, r_2$. ```python basic_vars0 = np.arange(m, m+n) basic_vars0 ``` array([3, 4]) ```python basic_vars1 = np.arange(0, m) basic_vars1 ``` array([0, 1, 2]) Let the initial pivot index be `1`, so that $x_1$ is to enter the basis: ```python init_pivot = 1 ``` ```python # Current pivot pivot = init_pivot ``` ### Step 1 Determine the basic variable to leave by the minimum ratio test: ```python ratios = tableau0[:, -1] / tableau0[:, pivot] ratios ``` array([ 0.5 , 0.16667]) ```python row_min = ratios.argmin() row_min ``` 1 ```python basic_vars0[row_min] ``` 4 $s_4$ is the basic variable that leaves the basis. Update the tableau: ```python tableau0[row_min, :] /= tableau0[row_min, pivot] ``` ```python tableau0 ``` array([[ 3. , 2. , 3. , 1. , 0. , 1. ], [ 0.33333, 1. , 0.16667, 0. , 0.16667, 0.16667]]) ```python for i in range(tableau0.shape[0]): if i != row_min: tableau0[i, :] -= tableau0[row_min, :] * tableau0[i, pivot] ``` ```python # Another approach by a NumPy trick # ind = np.ones(tableau0.shape[0], dtype=bool) # ind[row_min] = False # tableau0[ind, :] -= tableau0[row_min, :] * tableau0[ind, :][:, [pivot]] ``` ```python tableau0 ``` array([[ 2.33333, 0. , 2.66667, 1. , -0.33333, 0.66667], [ 0.33333, 1. , 0.16667, 0. , 0.16667, 0.16667]]) Update the basic variables and the pivot for the next step: ```python basic_vars0[row_min], pivot = pivot, basic_vars0[row_min] ``` ```python basic_vars0 ``` array([3, 1]) ```python basic_vars0[row_min] ``` 1 ```python pivot ``` 4 That is, $x_1$ has become a basic variable, while $s_4$ becomes a nonbasic variable (i.e., $s_4 = 0$). If the new pivot is equal to the initial pivot, we are done. ```python pivot == init_pivot ``` False But this is `False`, so we continue. In the next step, the variable $y_4$ which is *complementary* to $s_4$ (i.e., $y_4 s_4 = 0$) becomes a basic variable. ### Step 2 Repeat the same exercise as above for `tableau1`. 
```python tableau1 ``` array([[ 1., 0., 0., 3., 3., 1.], [ 0., 1., 0., 2., 5., 1.], [ 0., 0., 1., 0., 6., 1.]]) ```python ratios = tableau1[:, -1] / tableau1[:, pivot] row_min = ratios.argmin() row_min ``` 2 ```python tableau1[row_min, :] /= tableau1[row_min, pivot] ``` ```python tableau1 ``` array([[ 1. , 0. , 0. , 3. , 3. , 1. ], [ 0. , 1. , 0. , 2. , 5. , 1. ], [ 0. , 0. , 0.16667, 0. , 1. , 0.16667]]) ```python ind = np.ones(tableau1.shape[0], dtype=bool) ind[row_min] = False tableau1[ind, :] -= tableau1[row_min, :] * tableau1[ind, :][:, [pivot]] ``` ```python tableau1 ``` array([[ 1. , 0. , -0.5 , 3. , 0. , 0.5 ], [ 0. , 1. , -0.83333, 2. , 0. , 0.16667], [ 0. , 0. , 0.16667, 0. , 1. , 0.16667]]) ```python basic_vars1[row_min], pivot = pivot, basic_vars1[row_min] ``` ```python pivot ``` 2 ```python basic_vars1 ``` array([0, 1, 4]) ```python basic_vars1[row_min] ``` 4 That is, $y_4$ has become a basic variable, while $r_2$ becomes a nonbasic variable. If the new pivot is equal to the initial pivot, we are done. ```python pivot == init_pivot ``` False But this is `False`, so we continue. In the next step, the variable $x_2$ which is complementary to $r_2$ becomes a basic variable. ### Step 3 ```python tableau0 ``` array([[ 2.33333, 0. , 2.66667, 1. , -0.33333, 0.66667], [ 0.33333, 1. , 0.16667, 0. , 0.16667, 0.16667]]) ```python ratios = tableau0[:, -1] / tableau0[:, pivot] row_min = ratios.argmin() row_min ``` 0 ```python tableau0[row_min, :] /= tableau0[row_min, pivot] ind = np.ones(tableau0.shape[0], dtype=bool) ind[row_min] = False tableau0[ind, :] -= tableau0[row_min, :] * tableau0[ind, :][:, [pivot]] ``` ```python tableau0 ``` array([[ 0.875 , 0. , 1. , 0.375 , -0.125 , 0.25 ], [ 0.1875, 1. , 0. , -0.0625, 0.1875, 0.125 ]]) ```python basic_vars0[row_min], pivot = pivot, basic_vars0[row_min] ``` ```python pivot ``` 3 ```python pivot == init_pivot ``` False ### Step 4 ```python tableau1 ``` array([[ 1. , 0. , -0.5 , 3. , 0. , 0.5 ], [ 0. , 1. , -0.83333, 2. , 0. , 0.16667], [ 0. , 0. , 0.16667, 0. , 1. , 0.16667]]) ```python ratios = tableau1[:, -1] / tableau1[:, pivot] row_min = ratios.argmin() row_min ``` /Applications/anaconda/lib/python3.5/site-packages/ipykernel/__main__.py:1: RuntimeWarning: divide by zero encountered in true_divide if __name__ == '__main__': 1 Note on the warning: `tableau1[:, pivot]` has a zero entry, so we get a "divide by zero" warning. ```python tableau1[:, pivot] ``` array([ 3., 2., 0.]) We can just ignore it, but we can also suppress it by [`np.errstate`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.errstate.html) with a `with` clause: ```python with np.errstate(divide='ignore'): ratios = tableau1[:, -1] / tableau1[:, pivot] ``` ```python ratios ``` array([ 0.16667, 0.08333, inf]) ```python tableau1[row_min, :] /= tableau1[row_min, pivot] ind = np.ones(tableau1.shape[0], dtype=bool) ind[row_min] = False tableau1[ind, :] -= tableau1[row_min, :] * tableau1[ind, :][:, [pivot]] ``` ```python tableau1 ``` array([[ 1. , -1.5 , 0.75 , 0. , 0. , 0.25 ], [ 0. , 0.5 , -0.41667, 1. , 0. , 0.08333], [ 0. , 0. , 0.16667, 0. , 1. , 0.16667]]) ```python basic_vars1[row_min], pivot = pivot, basic_vars1[row_min] ``` ```python pivot ``` 1 ```python pivot == init_pivot ``` True Now we have complete labeling, so we are done. ### Obtaining the Nash equilibrium The basic variables are: ```python basic_vars0 ``` array([2, 1]) $x_2$ and $x_1$, and ```python basic_vars1 ``` array([0, 3, 4]) $r_0$, $y_3$, and $y_4$. 
The indices of the basic variables corresponding to $x$: ```python basic_vars0[basic_vars0 < m] ``` array([2, 1]) The indices of the basic variables corresponding to $y$: ```python basic_vars1[basic_vars1 >= m] ``` array([3, 4]) The values of the basic variables are stored in the last columns of the tableaux. The values of $x_2$ and $x_1$ are: ```python tableau0[basic_vars0 < m, -1] ``` array([ 0.25 , 0.125]) The values of $y_3$ and $y_4$ are: ```python tableau1[basic_vars1 >= m, -1] ``` array([ 0.08333, 0.16667]) We need to normalize these values so that $x$ and $y$ are probability distributions. ```python x = np.zeros(m) x[basic_vars0[basic_vars0 < m]] = tableau0[basic_vars0 < m, -1] x /= x.sum() ``` ```python x ``` array([ 0. , 0.33333, 0.66667]) ```python y = np.zeros(n) y[basic_vars1[basic_vars1 >= m] - m] = tableau1[basic_vars1 >= m, -1] y /= y.sum() ``` ```python y ``` array([ 0.33333, 0.66667]) The Nash equilibrium we have found is: ```python (x, y) ``` (array([ 0. , 0.33333, 0.66667]), array([ 0.33333, 0.66667])) ## Wrapping the procedure in functions ```python def min_ratio_test(tableau, pivot): ind_nonpositive = tableau[:, pivot] <= 0 with np.errstate(divide='ignore', invalid='ignore'): ratios = tableau[:, -1] / tableau[:, pivot] ratios[ind_nonpositive] = np.inf row_min = ratios.argmin() return row_min ``` ```python def pivoting(tableau, pivot, pivot_row): """ Perform a pivoting step. Modify `tableau` in place (and return its view). """ # Row indices except pivot_row ind = np.ones(tableau.shape[0], dtype=bool) ind[pivot_row] = False # Store the values in the pivot column, except for row_min # Made 2-dim by np.newaxis multipliers = tableau[ind, pivot, np.newaxis] # Update the tableau tableau[pivot_row, :] /= tableau[pivot_row, pivot] tableau[ind, :] -= tableau[pivot_row, :] * multipliers return tableau ``` ```python def lemke_howson_tbl(tableau0, tableau1, basic_vars0, basic_vars1, init_pivot): m, n = tableau1.shape[0], tableau0.shape[0] tableaux = (tableau0, tableau1) basic_vars = (basic_vars0, basic_vars1) init_player = int((basic_vars[0]==init_pivot).any()) players = [init_player, 1 - init_player] pivot = init_pivot while True: for i in players: # Determine the leaving variable row_min = min_ratio_test(tableaux[i], pivot) # Pivoting step: modify tableau in place pivoting(tableaux[i], pivot, row_min) # Update the basic variables and the pivot basic_vars[i][row_min], pivot = pivot, basic_vars[i][row_min] if pivot == init_pivot: break else: continue break out_dtype = np.result_type(*tableaux) out = np.zeros(m+n, dtype=out_dtype) for i, (start, num) in enumerate(zip((0, m), (m, n))): ind = basic_vars[i] < start + num if i == 0 else start <= basic_vars[i] out[basic_vars[i][ind]] = tableaux[i][ind, -1] return out ``` Note: There is no nested `break` in Python; see e.g., [break two for loops](http://stackoverflow.com/questions/9038160/break-two-for-loops). 
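For readers less familiar with the `for ... else` idiom used in `lemke_howson_tbl` above, an equivalent pattern is to wrap the nested loops in a small helper and exit with `return`; the sketch below is purely illustrative (the helper name is made up) and relies on the `min_ratio_test` and `pivoting` functions defined earlier.

```python
def pivot_until_done(tableaux, basic_vars, players, init_pivot):
    """Illustrative only: the same pivoting loop as above, exiting both loops with a single return."""
    pivot = init_pivot
    while True:
        for i in players:
            row_min = min_ratio_test(tableaux[i], pivot)
            pivoting(tableaux[i], pivot, row_min)
            basic_vars[i][row_min], pivot = pivot, basic_vars[i][row_min]
            if pivot == init_pivot:
                return  # exits the while and for loops at once
```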
```python def normalize(unnormalized, m, n): normalized = np.empty(m+n) for (start, num) in zip((0, m), (m, n)): s = unnormalized[start:start+num].sum() if s != 0: normalized[start:start+num] = unnormalized[start:start+num] / s else: normalized[start:start+num] = 0 return normalized[:m], normalized[m:] ``` ```python def lemke_howson(A, B_T, init_pivot=0, return_tableaux=False): m, n = A.shape tableaux = (np.hstack((B_T, np.identity(n), np.ones((n, 1)))), np.hstack((np.identity(m), A, np.ones((m, 1))))) basic_vars = (np.arange(m, m+n), np.arange(0, m)) unnormalized = lemke_howson_tbl(*tableaux, *basic_vars, init_pivot) normalized = normalize(unnormalized, m, n) if return_tableaux: return normalized, tableaux, basic_vars return normalized ``` ```python init_pivot = 1 x, y = lemke_howson(A, B_T, init_pivot) print("Nash equilibrium found\n", (x, y)) ``` Nash equilibrium found (array([ 0. , 0.33333, 0.66667]), array([ 0.33333, 0.66667])) ```python lemke_howson(A, B_T, init_pivot, return_tableaux=True) ``` ((array([ 0. , 0.33333, 0.66667]), array([ 0.33333, 0.66667])), (array([[ 0.875 , 0. , 1. , 0.375 , -0.125 , 0.25 ], [ 0.1875, 1. , 0. , -0.0625, 0.1875, 0.125 ]]), array([[ 1. , -1.5 , 0.75 , 0. , 0. , 0.25 ], [ 0. , 0.5 , -0.41667, 1. , 0. , 0.08333], [ 0. , 0. , 0.16667, 0. , 1. , 0.16667]])), (array([2, 1]), array([0, 3, 4]))) ## Enumerating all equilibria that are reached by Lemke-Howson paths ```python def lemke_howson_all(A, B_T): m, n = A.shape k = 0 NEs = [] basic_vars_list = [] player = (m <= n) init_pivot = k actions, tableaux, basic_vars = \ lemke_howson(A, B_T, init_pivot, return_tableaux=True) NEs.append(actions) basic_vars_list.append(np.sort(basic_vars[player])) for a in range(m+n): if a == k: continue init_pivot = a actions, tableaux, basic_vars = \ lemke_howson(A, B_T, init_pivot, return_tableaux=True) basic_vars_sorted = np.sort(basic_vars[player]) for arr in basic_vars_list: if np.array_equal(basic_vars_sorted, arr): break else: NEs.append(actions) basic_vars_list.append(basic_vars_sorted) unnormalized = \ lemke_howson_tbl(*tableaux, *basic_vars, init_pivot=k) NEs.append(normalize(unnormalized, m, n)) basic_vars_list.append(np.sort(basic_vars[player])) return NEs ``` ```python lemke_howson_all(A, B_T) ``` [(array([ 1., 0., 0.]), array([ 1., 0.])), (array([ 0. , 0.33333, 0.66667]), array([ 0.33333, 0.66667])), (array([ 0.8, 0.2, 0. ]), array([ 0.66667, 0.33333]))] There are games in which some of the Nash equilibria cannot be reached by Lemke-Howson path from the origin, as in the game in (3.7) in von Stengel (2007). ```python payoff_matrix_3_7 = np.array([[3, 3, 0], [4, 0, 1], [0, 4, 5]]) lemke_howson_all(payoff_matrix_3_7, payoff_matrix_3_7) ``` [(array([ 0., 0., 1.]), array([ 0., 0., 1.]))] ## Integer pivoting ```python def pivoting_int(tableau, pivot, pivot_row, prev_pivot_el): """ Perform a pivoting step with integer input data. Modify `tableau` in place (and return its view). 
""" # Row indices except pivot_row ind = np.ones(tableau.shape[0], dtype=bool) ind[pivot_row] = False # Store the values in the pivot column, except for row_min # Made 2-dim by np.newaxis multipliers = tableau[ind, pivot, np.newaxis] # Update the tableau tableau[ind, :] *= tableau[pivot_row, pivot] tableau[ind, :] -= tableau[pivot_row, :] * multipliers tableau[ind, :] //= prev_pivot_el # Floor division: return int return tableau ``` ```python def lemke_howson_tbl_int(tableau0, tableau1, basic_vars0, basic_vars1, init_pivot): m, n = tableau1.shape[0], tableau0.shape[0] tableaux = (tableau0, tableau1) basic_vars = (basic_vars0, basic_vars1) init_player = int((basic_vars[0]==init_pivot).any()) players = [init_player, 1 - init_player] pivot = init_pivot prev_pivot_els = np.ones(2, dtype=np.int_) while True: for i in players: # Determine the leaving variable row_min = min_ratio_test(tableaux[i], pivot) # Pivoting step: modify tableau in place pivoting_int(tableaux[i], pivot, row_min, prev_pivot_els[i]) # Backup the pivot element prev_pivot_els[i] = tableaux[i][row_min, pivot] # Update the basic variables and the pivot basic_vars[i][row_min], pivot = pivot, basic_vars[i][row_min] if pivot == init_pivot: break else: continue break out = np.zeros(m+n, dtype=np.int_) for i, (start, num) in enumerate(zip((0, m), (m, n))): ind = basic_vars[i] < start + num if i == 0 else start <= basic_vars[i] out[basic_vars[i][ind]] = tableaux[i][ind, -1] return out ``` Let us use the [Rational](http://docs.sympy.org/latest/modules/core.html#rational) class in [SymPy](http://www.sympy.org) to represent mixed actions in rational numbers. ```python import sympy ``` ```python def normalize_rational(unnormalized, m, n): """ Normalize the integer array `unnormalized` with ratioal numbers. 
""" normalized = np.empty(m+n, np.object_) for (start, num) in zip((0, m), (m, n)): s = unnormalized[start:start+num].sum() if s != 0: for k in range(start, start+num): normalized[k] = sympy.Rational(sympy.S(unnormalized[k]), sympy.S(s)) else: normalized[start:start+num] = sympy.Rational(0) return normalized[:m], normalized[m:] ``` ```python def lemke_howson_int(A, B_T, init_pivot=0, rational=False, return_tableaux=False): m, n = A.shape tableaux = (np.hstack((B_T, np.identity(n, dtype=np.int_), np.ones((n, 1), dtype=np.int_))), np.hstack((np.identity(m, dtype=np.int_), A, np.ones((m, 1), dtype=np.int_)))) basic_vars = (np.arange(m, m+n), np.arange(0, m)) unnormalized = lemke_howson_tbl_int(*tableaux, *basic_vars, init_pivot) if rational: normalized = normalize_rational(unnormalized, m, n) else: normalized = normalize(unnormalized, m, n) if return_tableaux: return normalized, tableaux, basic_vars return normalized ``` ```python def lemke_howson_all_int(A, B_T, rational=False): m, n = A.shape k = 0 NEs = [] basic_vars_list = [] player = (m <= n) init_pivot = k actions, tableaux, basic_vars = \ lemke_howson_int(A, B_T, init_pivot, rational=rational, return_tableaux=True) NEs.append(actions) basic_vars_list.append(np.sort(basic_vars[player])) for a in range(m+n): if a == k: continue init_pivot = a actions, tableaux, basic_vars = \ lemke_howson_int(A, B_T, init_pivot, rational=rational, return_tableaux=True) basic_vars_sorted = np.sort(basic_vars[player]) for arr in basic_vars_list: if np.array_equal(basic_vars_sorted, arr): break else: NEs.append(actions) basic_vars_list.append(basic_vars_sorted) unnormalized = \ lemke_howson_tbl_int(*tableaux, *basic_vars, init_pivot=k) if rational: normalized = normalize_rational(unnormalized, m, n) else: normalized = normalize(unnormalized, m, n) NEs.append(normalized) basic_vars_list.append(np.sort(basic_vars[player])) return NEs ``` ```python lemke_howson_int(A, B_T, init_pivot=1) ``` (array([ 0. , 0.33333, 0.66667]), array([ 0.33333, 0.66667])) ```python lemke_howson_int(A, B_T, init_pivot=1, return_tableaux=True) ``` ((array([ 0. , 0.33333, 0.66667]), array([ 0.33333, 0.66667])), (array([[14, 0, 16, 6, -2, 4], [ 3, 16, 0, -1, 3, 2]]), array([[ 12, -18, 9, 0, 0, 3], [ 0, 6, -5, 12, 0, 1], [ 0, 0, 2, 0, 12, 2]])), (array([2, 1]), array([0, 3, 4]))) ```python lemke_howson_int(A, B_T, init_pivot=1, rational=True) ``` (array([0, 1/3, 2/3], dtype=object), array([1/3, 2/3], dtype=object)) ```python lemke_howson_int(A, B_T, init_pivot=0, rational=True) ``` (array([1, 0, 0], dtype=object), array([1, 0], dtype=object)) ```python lemke_howson_all_int(A, B_T) ``` [(array([ 1., 0., 0.]), array([ 1., 0.])), (array([ 0. , 0.33333, 0.66667]), array([ 0.33333, 0.66667])), (array([ 0.2, 0.8, 0. 
]), array([ 0.66667, 0.33333]))] ```python lemke_howson_all_int(A, B_T, rational=True) ``` [(array([1, 0, 0], dtype=object), array([1, 0], dtype=object)), (array([0, 1/3, 2/3], dtype=object), array([1/3, 2/3], dtype=object)), (array([1/5, 4/5, 0], dtype=object), array([2/3, 1/3], dtype=object))] ## Lexico-minimum ratio test ```python def min_ratio_test_no_tie_breaking(tableau, pivot, test_col, argmins, num_argmins): idx = 0 i = argmins[idx] if tableau[i, pivot] > 0: min_ratio = tableau[i, test_col] / tableau[i, pivot] else: min_ratio = np.inf for k in range(1, num_argmins): i = argmins[k] if tableau[i, pivot] <= 0: continue ratio = tableau[i, test_col] / tableau[i, pivot] if ratio > min_ratio: continue elif ratio < min_ratio: min_ratio = ratio idx = 0 elif ratio == min_ratio: idx += 1 argmins[idx] = k return idx + 1 ``` ```python def lex_min_ratio_test(tableau, pivot, slack_start): num_rows = tableau.shape[0] argmins = np.arange(num_rows) num_argmins = num_rows num_argmins = min_ratio_test_no_tie_breaking(tableau, pivot, -1, argmins, num_argmins) if num_argmins == 1: return argmins[0] for j in range(slack_start, slack_start+num_rows): if j == pivot: continue num_argmins = min_ratio_test_no_tie_breaking(tableau, pivot, j, argmins, num_argmins) if num_argmins == 1: break return argmins[0] ``` Caveat: Because of rounding errors, one should not rely on equality between floating point numbers. For example: ```python 2/3 ``` 0.6666666666666666 ```python 1 - 1/3 ``` 0.6666666666666667 ```python 1 - 1/3 == 2/3 ``` False Note: In comparing $\frac{a}{b}$ and $\frac{a'}{b'}$, one may instead compare $a b'$ and $a' b$: when these are integers, the latter involves only integers. (This, of course, does not apply only for lexico-minimum test.) ```python def lemke_howson_tbl_int_lex_min(tableau0, tableau1, basic_vars0, basic_vars1, init_pivot): m, n = tableau1.shape[0], tableau0.shape[0] tableaux = (tableau0, tableau1) basic_vars = (basic_vars0, basic_vars1) init_player = int((basic_vars[0]==init_pivot).any()) players = [init_player, 1 - init_player] pivot = init_pivot prev_pivot_els = np.ones(2, dtype=np.int_) slack_starts = (m, 0) while True: for i in players: # Determine the leaving variable row_min = lex_min_ratio_test(tableaux[i], pivot, slack_starts[i]) # Pivoting step: modify tableau in place pivoting_int(tableaux[i], pivot, row_min, prev_pivot_els[i]) # Backup the pivot element prev_pivot_els[i] = tableaux[i][row_min, pivot] # Update the basic variables and the pivot basic_vars[i][row_min], pivot = pivot, basic_vars[i][row_min] if pivot == init_pivot: break else: continue break out = np.zeros(m+n, dtype=np.int_) for i, (start, num) in enumerate(zip((0, m), (m, n))): ind = basic_vars[i] < start + num if i == 0 else start <= basic_vars[i] out[basic_vars[i][ind]] = tableaux[i][ind, -1] return out ``` ```python def lemke_howson_int_lex_min(A, B_T, init_pivot=0, rational=False, return_tableaux=False): m, n = A.shape tableaux = (np.hstack((B_T, np.identity(n, dtype=np.int_), np.ones((n, 1), dtype=np.int_))), np.hstack((np.identity(m, dtype=np.int_), A, np.ones((m, 1), dtype=np.int_)))) basic_vars = (np.arange(m, m+n), np.arange(0, m)) unnormalized = lemke_howson_tbl_int_lex_min(*tableaux, *basic_vars, init_pivot) if rational: normalized = normalize_rational(unnormalized, m, n) else: normalized = normalize(unnormalized, m, n) if return_tableaux: return normalized, tableaux, basic_vars return normalized ``` ```python def lemke_howson_all_int_lex_min(A, B_T, rational=False): m, n = A.shape k = 0 NEs = [] 
basic_vars_list = [] player = (m <= n) init_pivot = k actions, tableaux, basic_vars = \ lemke_howson_int_lex_min(A, B_T, init_pivot, rational=rational, return_tableaux=True) NEs.append(actions) basic_vars_list.append(np.sort(basic_vars[player])) for a in range(m+n): if a == k: continue init_pivot = a actions, tableaux, basic_vars = \ lemke_howson_int_lex_min(A, B_T, init_pivot, rational=rational, return_tableaux=True) basic_vars_sorted = np.sort(basic_vars[player]) for arr in basic_vars_list: if np.array_equal(basic_vars_sorted, arr): break else: NEs.append(actions) basic_vars_list.append(basic_vars_sorted) unnormalized = \ lemke_howson_tbl_int_lex_min(*tableaux, *basic_vars, init_pivot=k) if rational: normalized = normalize_rational(unnormalized, m, n) else: normalized = normalize(unnormalized, m, n) NEs.append(normalized) basic_vars_list.append(np.sort(basic_vars[player])) return NEs ``` Consider the following degenerate game: ```python C = np.array([[3, 3], [2, 5], [0 ,6]]) D_T = np.array([[3, 2, 3], [3, 6, 1]]) ``` `lemke_howson_all` fails to work properly: ```python lemke_howson_all(C, D_T) ``` [(array([ 1., 0., 0.]), array([ 1., 0.])), (array([ 0. , 0.33333, 0.66667]), array([ 0.33333, 0.66667])), (array([ 0., 0., 0.]), array([ 0., 0.]))] With lexico-minimum test: ```python lemke_howson_all_int_lex_min(C, D_T) ``` [(array([ 0. , 0.33333, 0.66667]), array([ 0.33333, 0.66667]))] Due to the paricular fixed way of introducing the perturbations $(\varepsilon^1, \ldots, \varepsilon^k)$, the output does depend on the ordering of the actions: ```python # Change the order of the actions of player 1 E = np.array([[3, 3], [5, 2], [6 ,0]]) F_T = np.array([[3, 6, 1], [3, 2, 3]]) ``` ```python lemke_howson_all_int_lex_min(E, F_T) ``` [(array([ 1., 0., 0.]), array([ 0., 1.])), (array([ 0. , 0.33333, 0.66667]), array([ 0.66667, 0.33333])), (array([ 1., 0., 0.]), array([ 0.33333, 0.66667]))] Essentially, the exercise corresponds to considering $$ \begin{pmatrix} \dfrac{3}{1+\varepsilon_1} & \dfrac{2}{1+\varepsilon_1} & \dfrac{3}{1+\varepsilon_1} \\ \dfrac{3}{1+\varepsilon_2} & \dfrac{6}{1+\varepsilon_2} & \dfrac{1}{1+\varepsilon_2} \end{pmatrix}, $$ where $(\varepsilon_1, \varepsilon_2) =(\varepsilon^1, \varepsilon^2)$ or $(\varepsilon_1, \varepsilon_2) =(\varepsilon^2, \varepsilon^1)$. In fact: ```python eps = 0.01 D_T_eps = np.array([[3 / (1 + eps), 2 / (1 + eps), 3 / (1 + eps)], [3 / (1 + eps**2), 6 / (1 + eps**2), 1 / (1 + eps**2)]]) F_T_eps = np.array([[3 / (1 + eps**2), 2 / (1 + eps**2), 3 / (1 + eps**2)], [3 / (1 + eps), 6 / (1 + eps), 1 / (1 + eps)]]) ``` ```python lemke_howson_all(C, D_T_eps) ``` [(array([ 0. , 0.32897, 0.67103]), array([ 0.33333, 0.66667]))] ```python lemke_howson_all(C, F_T_eps) ``` [(array([ 1., 0., 0.]), array([ 1., 0.])), (array([ 0. , 0.33773, 0.66227]), array([ 0.33333, 0.66667])), (array([ 0.99259, 0.00741, 0. ]), array([ 0.66667, 0.33333]))] ```python ```
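As a final, optional check (not part of the original notebook), we can verify directly that a profile returned by `lemke_howson` is a Nash equilibrium of `(A, B_T)`: no pure action should give a strictly higher expected payoff than the mixed action itself. The tolerance `1e-8` below is an arbitrary choice to absorb floating-point rounding.

```python
x, y = lemke_howson(A, B_T, init_pivot=1)

payoffs_0 = A @ y     # player 0's payoffs to pure actions against y
payoffs_1 = B_T @ x   # player 1's payoffs to pure actions against x

is_nash = (payoffs_0.max() - x @ payoffs_0 < 1e-8) and (payoffs_1.max() - y @ payoffs_1 < 1e-8)
print("Best-response check passed:", is_nash)
```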
86f7df33145c8abdf768767844bda6ffd6776da7
60,443
ipynb
Jupyter Notebook
lemke_howson/lemke_howson_py.ipynb
oyamad/theory16
09d0d948b801f1e7d5005ddc1b5582235d5abe0c
[ "MIT" ]
2
2016-10-26T14:54:49.000Z
2017-02-19T19:15:25.000Z
lemke_howson/lemke_howson_py.ipynb
oyamad/theory16
09d0d948b801f1e7d5005ddc1b5582235d5abe0c
[ "MIT" ]
7
2016-09-30T05:42:44.000Z
2016-11-26T01:01:13.000Z
lemke_howson/lemke_howson_py.ipynb
oyamad/theory16
09d0d948b801f1e7d5005ddc1b5582235d5abe0c
[ "MIT" ]
4
2016-09-30T02:20:20.000Z
2020-08-13T20:26:33.000Z
22.544946
144
0.466307
true
9,315
Qwen/Qwen-72B
1. YES 2. YES
0.90599
0.839734
0.76079
__label__eng_Latn
0.394428
0.605903
# Inaugural Project > **Note the following:** > 1. This is an example of how to structure your **inaugural project**. > 1. Remember the general advice on structuring and commenting your code from [lecture 5](https://numeconcopenhagen.netlify.com/lectures/Workflow_and_debugging). > 1. Remember this [guide](https://www.markdownguide.org/basic-syntax/) on markdown and (a bit of) latex. > 1. Turn on automatic numbering by clicking on the small icon on top of the table of contents in the left sidebar. > 1. The `inauguralproject.py` file includes a function which can be used multiple times in this notebook. Imports and set magics: ```python import numpy as np # autoreload modules when code is run. Otherwise, python will not see recent changes. %load_ext autoreload %autoreload 2 # Now we use the library scipy to do the heavy lifting from scipy import optimize from sympy import * ``` # Question 1 **Explain how you solve the model** ```python # Parameters are defined y = 1 p = 0.2 theta = -2 ``` ```python import Insurancefunctions as IFC import numpy as np from scipy import optimize ``` ```python import numpy as np from scipy import optimize y = 1 p = 0.2 theta = -2 N = 1_000 x_vec = np.linspace(0.01,0.9,N) q_vec = np.zeros(N) def premium_policy(q): ''' Calculates premium policy for the insurance company Args: q (float): The coverage ammount. p (float): The proability that the loss is incurred. Meaning the probability that the coverage ammount has to be paid. Returns: (float): The premium policy the agent has to pay for her preferred coverage ''' return p*q def utility_of_assets(z): ''' Calculates the utility for the assets the agent is receiving after paying for insurance Args: z (float): The assets the agent is receiving. theta (float): The relative risk aversion. Return: (float): the utility for the assets the agent is receiving after paying for insurance ''' return (z**(1+theta))/(1+theta) def uninsured_expected_utility(): ''' Calculates the expected utility of the assets, if the agents is uninsured. Args: p (float): The proability that the loss is incurred. y (float): The assets the agent is holdning initially. x (float): The maximum coverage the agent can buy of insurance from loss. Returns: (float): The expected utility of the assets, if the agents is uninsured.''' return p*utility_of_assets(z=y-x) + (1-p)*utility_of_assets(z=y) def insured_expected_utility(q): ''' Calculates the expected utility of the assets, if the agents is insured for the coverage ammount q. Args: p (float): The proability that the loss is incurred. y (float): The assets the agent is holdning initially. x (float): The maximum coverage the agent can buy of insurance from loss. q (float): The coverage ammount. Returns: (float): The expected utility of the assets, if the agents is insured.''' return p*utility_of_assets(z=y-x+q-premium_policy(q)) + (1-p)*utility_of_assets(z=y-premium_policy(q)) def find_optimal_coverage_ammount(x): ''' Finds optimal coverage ammount Args: p (float): The proability that the loss is incurred. y (float): The assets the agent is holdning initially. x (float): The maximum coverage the agent can buy of insurance from loss. q (float): The coverage ammount. 
Returns: (float): Expected utility at optimal coverage ammount''' obj = lambda q: -insured_expected_utility(q) res = optimize.minimize_scalar(obj,bounds=(1e-8,0.9),method='bounded') return res.x N = 90 x_vec = np.linspace(0.01,0.9,N) q_vec = np.zeros(N) for i,x in enumerate(x_vec): q_vec[i] = find_optimal_coverage_ammount(x) print(f'x={x:.2f} --> q = {q_vec[i]:12.0f}') ``` x=0.01 --> q = 0 x=0.02 --> q = 0 x=0.03 --> q = 0 x=0.04 --> q = 0 x=0.05 --> q = 0 x=0.06 --> q = 0 x=0.07 --> q = 0 x=0.08 --> q = 0 x=0.09 --> q = 0 x=0.10 --> q = 0 x=0.11 --> q = 0 x=0.12 --> q = 0 x=0.13 --> q = 0 x=0.14 --> q = 0 x=0.15 --> q = 0 x=0.16 --> q = 0 x=0.17 --> q = 0 x=0.18 --> q = 0 x=0.19 --> q = 0 x=0.20 --> q = 0 x=0.21 --> q = 0 x=0.22 --> q = 0 x=0.23 --> q = 0 x=0.24 --> q = 0 x=0.25 --> q = 0 x=0.26 --> q = 0 x=0.27 --> q = 0 x=0.28 --> q = 0 x=0.29 --> q = 0 x=0.30 --> q = 0 x=0.31 --> q = 0 x=0.32 --> q = 0 x=0.33 --> q = 0 x=0.34 --> q = 0 x=0.35 --> q = 0 x=0.36 --> q = 0 x=0.37 --> q = 0 x=0.38 --> q = 0 x=0.39 --> q = 0 x=0.40 --> q = 0 x=0.41 --> q = 0 x=0.42 --> q = 0 x=0.43 --> q = 0 x=0.44 --> q = 0 x=0.45 --> q = 0 x=0.46 --> q = 0 x=0.47 --> q = 0 x=0.48 --> q = 0 x=0.49 --> q = 0 x=0.50 --> q = 1 x=0.51 --> q = 1 x=0.52 --> q = 1 x=0.53 --> q = 1 x=0.54 --> q = 1 x=0.55 --> q = 1 x=0.56 --> q = 1 x=0.57 --> q = 1 x=0.58 --> q = 1 x=0.59 --> q = 1 x=0.60 --> q = 1 x=0.61 --> q = 1 x=0.62 --> q = 1 x=0.63 --> q = 1 x=0.64 --> q = 1 x=0.65 --> q = 1 x=0.66 --> q = 1 x=0.67 --> q = 1 x=0.68 --> q = 1 x=0.69 --> q = 1 x=0.70 --> q = 1 x=0.71 --> q = 1 x=0.72 --> q = 1 x=0.73 --> q = 1 x=0.74 --> q = 1 x=0.75 --> q = 1 x=0.76 --> q = 1 x=0.77 --> q = 1 x=0.78 --> q = 1 x=0.79 --> q = 1 x=0.80 --> q = 1 x=0.81 --> q = 1 x=0.82 --> q = 1 x=0.83 --> q = 1 x=0.84 --> q = 1 x=0.85 --> q = 1 x=0.86 --> q = 1 x=0.87 --> q = 1 x=0.88 --> q = 1 x=0.89 --> q = 1 x=0.90 --> q = 1 # Question 2 Explain your code and procedure ```python # code ``` # Question 3 Explain your code and procedure ```python # code ``` ADD CONCISE CONLUSION.
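A possible concluding check for Question 1 (a sketch, assuming the functions and parameters from the Question 1 cell are still in scope): because the premium `p*q` is actuarially fair and the agent is strictly risk averse (`theta = -2`), the standard result is full insurance, i.e. the optimal coverage satisfies q* = x. The `{q_vec[i]:12.0f}` format in the Question 1 loop rounds q* to the nearest whole number, which is why the printed coverage jumps from 0 to 1 around x = 0.5; printing with more decimals makes q* ≈ x visible. Note also that `insured_expected_utility` reads `x` from the global scope rather than from the argument of `find_optimal_coverage_ammount`, so the loop variable below must keep the name `x`.

```python
# Sketch of a check (assumes the Question 1 cell above has been run, so that
# find_optimal_coverage_ammount, p, y and theta are defined).
# With an actuarially fair premium p*q and a risk-averse agent, full insurance
# q* = x is optimal, so the optimizer should return values close to x.
for x in [0.1, 0.3, 0.5, 0.7]:  # x is read as a global inside insured_expected_utility
    q_star = find_optimal_coverage_ammount(x)
    print(f'x = {x:.2f} --> q* = {q_star:.4f}')
```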
58334f79f5af25c63d9ed5d7007c41994da7e8c9
10,510
ipynb
Jupyter Notebook
Magnus/inauguralproject.ipynb
NumEconCopenhagen/projects-2022-git-good
df457732b3da0d52c481b0adcb18e1cef63a5089
[ "MIT" ]
null
null
null
Magnus/inauguralproject.ipynb
NumEconCopenhagen/projects-2022-git-good
df457732b3da0d52c481b0adcb18e1cef63a5089
[ "MIT" ]
null
null
null
Magnus/inauguralproject.ipynb
NumEconCopenhagen/projects-2022-git-good
df457732b3da0d52c481b0adcb18e1cef63a5089
[ "MIT" ]
null
null
null
29.773371
170
0.420362
true
2,207
Qwen/Qwen-72B
1. YES 2. YES
0.808067
0.72487
0.585744
__label__eng_Latn
0.945293
0.199209
## Datasets ```python # Visualization %pylab inline from IPython.display import display, Math, Latex import matplotlib.pyplot as plt # handling data import csv import json import pandas as pd # Math from random import random import scipy.stats as ss import numpy as np import itertools from collections import Counter ``` Populating the interactive namespace from numpy and matplotlib /usr/local/anaconda3/envs/Datascience/lib/python3.7/site-packages/IPython/core/magics/pylab.py:160: UserWarning: pylab import has clobbered these variables: ['random'] `%matplotlib` prevents importing * from pylab and numpy "\n`%matplotlib` prevents importing * from pylab and numpy" We will use the following dataset to define the universe of possible values throughout the notebook to test the paper, it contains the setting's universe values and the people the author's of the paper want to protect. To simplify things without the loss of generality, the authors just use 4 people. The 4 people belong to a high school. The high school would like to release a dataset to be queried; however, this new dataset will not include people that have been in probation, in this case, Terry. ```python # We define the actual dataset (conforming the universe) dict_school = {'name': ['Chris', 'Kelly', 'Pat', 'Terry'], 'school_year': [1, 2, 3, 4], 'absence_days': [1, 2, 3, 10]} ``` ```python df_school = pd.DataFrame(dict_school) df_school ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>name</th> <th>school_year</th> <th>absence_days</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>Chris</td> <td>1</td> <td>1</td> </tr> <tr> <th>1</th> <td>Kelly</td> <td>2</td> <td>2</td> </tr> <tr> <th>2</th> <td>Pat</td> <td>3</td> <td>3</td> </tr> <tr> <th>3</th> <td>Terry</td> <td>4</td> <td>10</td> </tr> </tbody> </table> </div> The attacker's ultimate goal is to find out which student was placed in probation, Terry. However, the attacker will only be able to query this other dataset (Without Terry, because the release dataset only contains students who were not in probation): ```python # We define the the dataset that we will release df_school_release = df_school.drop([3], axis=0) df_school_release ``` <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>name</th> <th>school_year</th> <th>absence_days</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>Chris</td> <td>1</td> <td>1</td> </tr> <tr> <th>1</th> <td>Kelly</td> <td>2</td> <td>2</td> </tr> <tr> <th>2</th> <td>Pat</td> <td>3</td> <td>3</td> </tr> </tbody> </table> </div> To accomplish this, the attacker would perform analytics queries on df_school_release such as the mean. With the results, the adversary will try to improve his/her guess about which dataset is the true one to single out and discover the student who was placed on probation (Terry). The adversary model adopted in the paper is the worst-case scenario. It will be the one I adopted in this notebook as well: An attacker has infinite computation power. 
Because DP is supposed to provide privacy given adversaries with arbitrary background knowledge, it is okay to assume that the adversary has full access to all the records (Knows all the universe - df_school). However, there is a dataset made from the universe without an individual (df_school_release), and he does not know who is and who is not in this dataset (This is the only thing he does not know). However, he knows this dataset contains people with a certain quality (The students who have not been on probation). With the initial dataset, the attacker will reconstruct the dataset he does not know, querying the new dataset (df_school_release) without having access to it. ## Functions ### Auxiliary function ```python # With this funciton, we can make easier to call the mean, median... functions # REF: https://stackoverflow.com/questions/34794634/how-to-use-a-variable-as-function-name-in-python # It is not clean to have the var percentile input each function, but it is less verbose than having a function # For each percentile. We could however limit the maount of percentiles offer to 25 and 75. class Query_class: """ A class used to represent a query. You instantiate an object that will perform a particlar query on an array Attributes ---------- fDic - (dict) containing the possible queries the class can be transformed into fActive - (function) it contins the function we created the class to have Methods ------- run_query - I will run the query for which we instantiated the class The other methods implement the different possible queries """ def __init__(self, fCase): # mapping: string --> variable = function name fDic = {'mean':self._mean, 'median':self._median, 'count': self._count, 'sum': self._sum, 'std': self._std, 'var': self._var, 'percentile': self._percentile} self.fActive = fDic[fCase] # Calculate the mean of an array def _mean(self, array, percentile): return np.mean(array) # Calculate the median of an array def _median(self, array, percentile): return np.median(array) # Calculate the number of elements in the array def _count(self, array, percentile): return len(array) # Calculate the sum of an array def _sum(self, array, percentile): return np.sum(array) # Calculate the std of an array def _std(self, array, percentile): return np.std(array) # Calculate the variance of an array def _var(self, array, percentile): return np.var(array) def _percentile(self, array, percentile): return np.percentile(array, percentile) # It will run the given query def run_query(self, array, percentile=50): return self.fActive(array, percentile) ``` ```python # Set of checks on the input values def verify_sensitivity_inputs(universe_cardinality, universe_subset_cardinality, hamming_distance): """ INPUT: universe - (df) contains all possible values of the dataset universe_subset_cardinality - (df) cardinality of the universe subset hamming_distance - (int) hamming distance between neighboring datasets OUTPUT: ValueError - (str) error message due to the value of the inputs Description: It performs multiple checks to verify the validity of the inputs for the calculation of senstitivity """ # Check on unverse cardinality (1). 
# The cardinality of the subset of the universe cannot be larger than the universe if universe_cardinality < universe_subset_cardinality: raise ValueError("Your universe dataset cannot be smaller than your release dataset.") # Checks on the validity of the chosen hamming_distance (3) if hamming_distance >= (universe_subset_cardinality): raise ValueError("Hamming distance chosen is larger than the cardinality of the release dataset.") if (hamming_distance > np.abs(universe_cardinality - universe_subset_cardinality)): raise ValueError("Hamming distance chosen is larger than the cardinality difference between the \ universe and the release dataset, i.e., \ there are not enough values in your universe to create such a large neighboring dataset (Re-sampling records).") # The hamming distance cannot be 0, then your neighbor dataset is equal to the original dataset if hamming_distance == 0: raise ValueError("Hamming distance cannot be 0.") ``` ```python # Used by unbounded unbounded_empirical_global_L1_sensitivity_a def L1_norm_max(release_dataset_query_value, neighbor_datasets, query, percentile): """ INPUT: release_dataset_query_value - (float) query value of a particular possible release dataset neighbor_datasets - (list) contains the possible neighbors of the specific release dataset query - (object) instance of class Query_class percentile - (int) percentile value for the percentile query OUTPUT: L1_norm_maximum - (float) maximum L1 norm calcuated from the differences between the query results of the neighbor datasets and the specific release dataset Description: It claculates the maximum L1 norm between the query results of the neighbor datasets and the specific release dataset """ neighbor_dataset_query_values = [] for neighbor_dataset in neighbor_datasets: neighbor_dataset_query_value = query.run_query(neighbor_dataset, percentile) neighbor_dataset_query_values.append(neighbor_dataset_query_value) # We select the maximum and minimum values of the queries, as the intermediate values will not # yield a larger L1 norm (ultimately, we are interested in the maximum L1 norm) neighbor_dataset_query_value_min, neighbor_dataset_query_value_max = \ min(neighbor_dataset_query_values), max(neighbor_dataset_query_values) # We calculate the L1 norm for these two values and pick the maximum L1_norm_i = np.abs(release_dataset_query_value - neighbor_dataset_query_value_min) L1_norm_ii = np.abs(release_dataset_query_value - neighbor_dataset_query_value_max) L1_norm_maximum = max(L1_norm_i, L1_norm_ii) return L1_norm_maximum ``` ```python def calculate_unbounded_sensitivities(universe, universe_subset_cardinality, columns, hamming_distance, unbounded_sensitivities): """ INPUT: universe - (df or dict) contains all possible values of the dataset universe_subset_cardinality - (int) contains the length of the subset chosen for the release dataset columns - (array) contains the names of the columns we would like to obtain the sensitivity from hamming_distance - (int) hamming distance between neighboring datasets unbounded_sensitivities - (dict) stores sensitivities per hamming distance and query type OUTPUT unbounded_sensitivities - (dict) stores sensitivities per hamming distance and query type Description: It calculates the sensitivities for a set of queries given a universe and a release dataset. 
""" # Calculate the sensitivity of different queries for the unbounded DP query_type = 'mean' mean_unbounded_global_sensitivities = unbounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance) query_type = 'median' median_unbounded_global_sensitivities = unbounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance) query_type = 'count' count_unbounded_global_sensitivities = unbounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance) query_type = 'sum' sum_unbounded_global_sensitivities = unbounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance) query_type = 'std' std_unbounded_global_sensitivities = unbounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance) query_type = 'var' var_unbounded_global_sensitivities = unbounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance) query_type = 'percentile' percentile = 25 percentile_25_unbounded_global_sensitivities = unbounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance, percentile) percentile = 50 percentile_50_unbounded_global_sensitivities = unbounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance, percentile) percentile = 75 percentile_75_unbounded_global_sensitivities = unbounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance, percentile) percentile = 90 percentile_90_unbounded_global_sensitivities = unbounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance, percentile) print('Unbounded sensitivities for mean', mean_unbounded_global_sensitivities) print('Unbounded sensitivities for median', median_unbounded_global_sensitivities) print('Unbounded sensitivities for count', count_unbounded_global_sensitivities) print('Unbounded sensitivities for sum', sum_unbounded_global_sensitivities) print('Unbounded sensitivities for std', std_unbounded_global_sensitivities) print('Unbounded sensitivities for var', var_unbounded_global_sensitivities) print('Unbounded sensitivities for percentile 25', percentile_25_unbounded_global_sensitivities) print('Unbounded sensitivities for percentile 50', percentile_50_unbounded_global_sensitivities) print('Unbounded sensitivities for percentile 75', percentile_75_unbounded_global_sensitivities) print('Unbounded sensitivities for percentile 90', percentile_90_unbounded_global_sensitivities) unbounded_sensitivities = build_sensitivity_dict(unbounded_sensitivities, hamming_distance,\ mean_unbounded_global_sensitivities, median_unbounded_global_sensitivities, count_unbounded_global_sensitivities, \ sum_unbounded_global_sensitivities, std_unbounded_global_sensitivities, var_unbounded_global_sensitivities, \ percentile_25_unbounded_global_sensitivities, percentile_50_unbounded_global_sensitivities, \ percentile_75_unbounded_global_sensitivities, percentile_90_unbounded_global_sensitivities) return unbounded_sensitivities ``` ```python def calculate_bounded_sensitivities(universe, universe_subset_cardinality, columns, hamming_distance, bounded_sensitivities): """ INPUT: universe - (df or dict) contains all possible values of 
the dataset universe_subset_cardinality - (int) contains the length of the subset chosen for the release dataset columns - (array) contains the names of the columns we would like to obtain the sensitivity from hamming_distance - (int) hamming distance between neighboring datasets unbounded_sensitivities - (dict) stores sensitivities per hamming distance and query type OUTPUT bounded_sensitivities - (dict) stores sensitivities per hamming distance and query type Description: It calculates the sensitivities for a set of queries given a universe and a release dataset. """ # Calculate the sensitivity of different queries for the unbounded DP query_type = 'mean' mean_bounded_global_sensitivities = bounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance) query_type = 'median' median_bounded_global_sensitivities = bounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance) query_type = 'count' count_bounded_global_sensitivities = bounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance) query_type = 'sum' sum_bounded_global_sensitivities = bounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance) query_type = 'std' std_bounded_global_sensitivities = bounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance) query_type = 'var' var_bounded_global_sensitivities = bounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance) query_type = 'percentile' percentile = 25 percentile_25_bounded_global_sensitivities = bounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance, percentile) percentile = 50 percentile_50_bounded_global_sensitivities = bounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance, percentile) percentile = 75 percentile_75_bounded_global_sensitivities = bounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance, percentile) percentile = 90 percentile_90_bounded_global_sensitivities = bounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance, percentile) print('Bounded sensitivities for mean', mean_bounded_global_sensitivities) print('Bounded sensitivities for median', median_bounded_global_sensitivities) print('Bounded sensitivities for count', count_bounded_global_sensitivities) print('Bounded sensitivities for sum', sum_bounded_global_sensitivities) print('Bounded sensitivities for std', std_bounded_global_sensitivities) print('Bounded sensitivities for var', var_bounded_global_sensitivities) print('Bounded sensitivities for percentile 25', percentile_25_bounded_global_sensitivities) print('Bounded sensitivities for percentile 50', percentile_50_bounded_global_sensitivities) print('Bounded sensitivities for percentile 75', percentile_75_bounded_global_sensitivities) print('Bounded sensitivities for percentile 90', percentile_90_bounded_global_sensitivities) bounded_sensitivities = build_sensitivity_dict(bounded_sensitivities, hamming_distance,\ mean_bounded_global_sensitivities, median_bounded_global_sensitivities, count_bounded_global_sensitivities, \ sum_bounded_global_sensitivities, 
std_bounded_global_sensitivities, var_bounded_global_sensitivities, \ percentile_25_bounded_global_sensitivities, percentile_50_bounded_global_sensitivities, \ percentile_75_bounded_global_sensitivities, percentile_90_bounded_global_sensitivities) return bounded_sensitivities ``` ```python # We save the values in a dictionary def build_sensitivity_dict(unbounded_sensitivities, hamming_distance, mean_sensitivity, median_sensitivity, count_sensitivity, _sum_sensitivity, _std_sensitivity, _var_sensitivity, percentile_25_sensitivity, percentile_50_sensitivity, percentile_75_sensitivity, percentile_90_sensitivity): """ INPUT unbounded_sensitivities - (dict) stores sensitivities per hamming distance and query type hamming_distance - (int) hamming distance of the neighboring datasets mean_sensitivity - (float) sensitivity of the mean query median_sensitivity - (float) sensitivity of the media query count_sensitivity - (float) sensitivity of the count query _sum_sensitivity - (float) sensitivity of the sum query _std_sensitivity - (float) sensitivity of the std query _var - (float) sensitivity of the var query percentile_25_sensitivity - (float) sensitivity of the percentile 25 query percentile_50_sensitivity - (float) sensitivity of the percentile 50 query percentile_75_sensitivity - (float) sensitivity of the percentile 75query percentile_90_sensitivity - (float) sensitivity of the percentile 90 query OUTPUT unbounded_sensitivities - (dict) stores sensitivities per hamming distance and query type """ unbounded_sensitivities[hamming_distance] = {} unbounded_sensitivities[hamming_distance]['mean'] = mean_sensitivity unbounded_sensitivities[hamming_distance]['median'] = median_sensitivity unbounded_sensitivities[hamming_distance]['count'] = count_sensitivity unbounded_sensitivities[hamming_distance]['sum'] = _sum_sensitivity unbounded_sensitivities[hamming_distance]['std'] = _std_sensitivity unbounded_sensitivities[hamming_distance]['var'] = _var_sensitivity unbounded_sensitivities[hamming_distance]['percentile_25'] = percentile_25_sensitivity unbounded_sensitivities[hamming_distance]['percentile_50'] = percentile_50_sensitivity unbounded_sensitivities[hamming_distance]['percentile_75'] = percentile_75_sensitivity unbounded_sensitivities[hamming_distance]['percentile_90'] = percentile_90_sensitivity return unbounded_sensitivities ``` ### Main Functions ##### Equation in 4.1 after its first paragraph - Definition of sensitivity ```latex %%latex \begin{align} \ell_{1, \mbox{sensitivity}}: \Delta f=\max_{\substack{ {x, y \in \mathbb{N}^{(\mathcal{X})}} \\ \|x-y\|_{1} = h }} \|f(x)-f(y)\|_{1} \end{align} ``` \begin{align} \ell_{1, \mbox{sensitivity}}: \Delta f=\max_{\substack{ {x, y \in \mathbb{N}^{(\mathcal{X})}} \\ \|x-y\|_{1} = h }} \|f(x)-f(y)\|_{1} \end{align} \begin{align} \ell_{1, \mbox{sensitivity}}: \Delta f=\max_{\substack{ {x, y \in \mathbb{N}^{(\mathcal{X})}} \\ \|x-y\|_{1} = h }} \|f(x)-f(y)\|_{1} \end{align} ```python def unbounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance, percentile=50): """ INPUT: universe - (df) contains all possible values of the dataset universe_subset_cardinality - (int) contains the length of the subset chosen for the release dataset columns - (array) contains the names of the columns we would like to obtain the sensitivity from query_type - (str) contain the category declaring the type of query to be later on executed hamming_distance - (int) hamming distance between neighboring datasets 
percentile - (int) percentile value for the percentile query OUTPUT: unbounded_global_sensitivity - (float) the unbounded global sensitivity of the input universe Description: It claculates the global sensitivity of an array based on the knowledge of the entire universe of the dataset and query_type. """ # We initialie the type of query for which we would like calculate the sensitivity query = Query_class(query_type) # We will store the sensitivity of each column of the dataset containing universe in a dictionary unbounded_global_sensitivity_per_colum = {} for column in columns: # Check if the values for the hamming distance and universe sizes comply with the basic constraints verify_sensitivity_inputs(len(universe[column]), universe_subset_cardinality, hamming_distance) # 1) RELEASE DATASET # We calculate all the possible release datasets formed by the combination of values sampled from the universe release_datasets = itertools.combinations(universe[column], universe_subset_cardinality) release_datasets = list(release_datasets) # 2) |NEIGHBORING DATASET| < |RELEASE DATASET| //// cardinalities # The neighboring datasets are subsets of a smaller dimension of the possible release datasets (smaller by the hamming_distance) # The neighboring release datasets are used to calculate the max sensitivity, stemming from the DP definition neighbor_with_less_records_datasets = [] for release_dataset in release_datasets: # These yields the smaller possible neighboring datasets neighbor_with_less_records_dataset = itertools.combinations(release_dataset, \ universe_subset_cardinality - hamming_distance) neighbor_with_less_records_dataset = list(neighbor_with_less_records_dataset) neighbor_with_less_records_datasets.append(neighbor_with_less_records_dataset) # 3) |NEIGHBORING DATASET| > |RELEASE DATASET| //// cardinalities # similar process but adding records neighbor_with_more_records_datasets = [] for release_dataset in release_datasets: # We obtain combinations of values from the univsere and these will be appended to the release datasets. 
# The size of each combination is equal to the hamming distance, as the neighboring dataset will be that much larger # However, in case your universe is a dataset and not just a range of values, then the neighboring # dataset could contain the same record twice, which is NOT desirable (1 person appearing twice) # Therefore, the values must be sampled from the symmetric difference between the release dataset and the universe dataset # REF: https://www.geeksforgeeks.org/python-difference-of-two-lists-including-duplicates/ symmetric_difference = list((Counter(universe[column]) - Counter(release_dataset)).elements()) neighbor_possible_value_combinations = itertools.combinations(symmetric_difference, hamming_distance) neighbor_possible_value_combinations = list(neighbor_possible_value_combinations) temp_neighbor_with_more_records_datasets = [] for neighbor_possible_value_combination in neighbor_possible_value_combinations: # We create neighboring datasets by concatenating the neighbor_possible_value_combination with the release dataset neighbor_with_more_records_dataset = list(release_dataset + neighbor_possible_value_combination) temp_neighbor_with_more_records_datasets.append(neighbor_with_more_records_dataset) # We append in this manner to cluster the neighboring datasets with their respective release dataset neighbor_with_more_records_datasets.append(temp_neighbor_with_more_records_datasets) # 4) For each possible release datase, there is a set of neighboring datasets # We will iterate through each possible release dataset and calculate the L1 norm with # each of its repspective neighboring datasets L1_norms = [] for i, release_dataset in enumerate(release_datasets): release_dataset_query_value = query.run_query(release_dataset, percentile) L1_norm = L1_norm_max(release_dataset_query_value, neighbor_with_less_records_datasets[i], query, percentile) L1_norms.append(L1_norm) L1_norm = L1_norm_max(release_dataset_query_value, neighbor_with_more_records_datasets[i], query, percentile) L1_norms.append(L1_norm) # We pick the maximum out of all the maximum L1_norms calculated from each possible release dataset unbounded_global_sensitivity_per_colum[column] = max(L1_norms) return unbounded_global_sensitivity_per_colum ``` ##### You can find this definition after equation (5) of 5.1 ```latex %%latex \begin{align} \Delta v=\max_{\substack{ {1 \leq i, j \leq n} \\ \\ i \neq j }} \|f(w_i)-f(w_j)\|_{1} \end{align} ``` \begin{align} \Delta v=\max_{\substack{ {1 \leq i, j \leq n} \\ \\ i \neq j }} \|f(w_i)-f(w_j)\|_{1} \end{align} \begin{align} \Delta v=\max_{\substack{ {1 \leq i, j \leq n} \\ \\ i \neq j }} \|f(w_i)-f(w_j)\|_{1} \end{align} ```python def bounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance, percentile=50): """ INPUT: universe - (df) contains all possible values of the dataset universe_subset_cardinality - (int) contains the length of the subset chosen for the release dataset columns - (array) contains the names of the columns we would like to obtain the sensitivity from query_type - (str) contain the category declaring the type of query to be later on executed hamming_distance - (int) hamming distance between neighboring datasets percentile - (int) percentile value for the percentile query OUTPUT: bounded_global_sensitivity - (float) the bounded global sensitivity of the input universe Description: It claculates the global sensitivity of an array based on the knowledge of the entire universe of the dataset and query_type. 
""" # We initialie the type of query for which we would like calculate the sensitivity query = Query_class(query_type) # We will store the sensitivity of each column of the dataset containing universe in a dictionary bounded_global_sensitivity_per_column = {} for column in columns: # Check if the values for the hamming distance and universe sizes comply with the basic constraints verify_sensitivity_inputs(len(universe[column]), universe_subset_cardinality, hamming_distance) # We calculate all the possible release datasets # First we obtain the combinations within the release dataset. The size of this combinations is not the original size # but the original size minus the hamming_distance release_i_datasets = itertools.combinations(universe[column], universe_subset_cardinality - hamming_distance) release_i_datasets = list(release_i_datasets) # it will contain sets of neighboring datasets. The L1 norm will be calculated between these sets. The maximum will be chosen # The datasets from different groups do not necesarilly need to be neighbors, thus we separate them in groups neighbor_datasets = [] for release_i_dataset in release_i_datasets: # second we calculate the combinations of the items in the universe that are not in the release dataset # the size of a combination is equal to the hamming distance symmetric_difference = list((Counter(universe[column]) - Counter(release_i_dataset)).elements()) release_ii_datasets = itertools.combinations(symmetric_difference, hamming_distance) release_ii_datasets = list(release_ii_datasets) # We create neighboring datasets by concatenating i with ii temp_neighbors = [] for release_ii_dataset in release_ii_datasets: temp_neighbor = list(release_i_dataset + release_ii_dataset) temp_neighbors.append(temp_neighbor) neighbor_datasets.append(temp_neighbors) # We calculate the L1_norm for the different combinations with the aim to find the max # We can loop in this manner because we are obtaining the absolute values L1_norms = [] for m in range(0, len(neighbor_datasets)): for i in range(0, len(neighbor_datasets[m])-1): for j in range(i+1, len(neighbor_datasets[m])): L1_norm = np.abs(query.run_query(neighbor_datasets[m][i], percentile) - query.run_query(neighbor_datasets[m][j], percentile)) L1_norms.append(L1_norm) bounded_global_sensitivity_per_column[column] = max(L1_norms) return bounded_global_sensitivity_per_column ``` ```python def prior_belief(universe, universe_subset, columns): """ INPUT: universe - (df) contains all possible values of the dataset universe_subset - (df) contains the values of the dataset to be released columns - (array) contains the names of the columns we would like to obtain the sensitivity from OUTPUT: prior_per_column - (dict) maps each column of all the possible release datasets to the prior knowledge of an adversary Description: It calculates the prior knowledge of an adversary assuming uniform distribution """ # Initialize the dictionary to store all the posteriors with the column names as keys prior_per_column = {} for column in columns: # We calculate all the possible release datasets release_datasets = itertools.combinations(universe[column], len(universe_subset[column])) release_datasets = list(release_datasets) number_possible_release_datasets = len(release_datasets) # We assume a uniform prior prior_per_column[column] = [1 / number_possible_release_datasets] * number_possible_release_datasets return prior_per_column ``` #### Definition 3, also in equation 2 of 5.1 but with another form / right before the triangle inequality 
```python def posterior_belief(universe, universe_subset, columns, query_type, query_result, sensitivity, epsilon): """ INPUT: universe - (df) contains all possible values of the dataset universe_subset - (df) contains the values of the dataset to be released columns - (array) contains the names of the columns we would like to obtain the sensitivity from query_type - (str) contain the category declaring the type of query to be later on executed query_result - (float) the result of the query the attacker received sensitivity - (dict) it maps the columns to their sensitivities, they can be based on bounded or unbounded DP epsilon - (float) it is the parameter that tunes noise, based on DP OUTPUT: posterior_per_column - (dict) maps each column of all the possible release datasets to the posterior knowledge of an adversary Description: It calculates the posterior knowledge of an adversary """ # Initialize the dictionary to store all the posteriors with the column names as keys posterior_per_column = {} # We initialie the type of query for which we would like calculate the sensitivity query = Query_class(query_type) for column in columns: # We calculate all the possible release datasets release_datasets = itertools.combinations(universe[column], len(universe_subset[column])) release_datasets = list(release_datasets) # According to the definiton of DP, the sacle factor of a Laplacian distribution: scale_parameter = sensitivity[column]/epsilon posterior_probability = [] for release_dataset in release_datasets: probability = (1 / (2 * scale_parameter)) * np.exp(- np.abs(query_result - np.mean(release_dataset)) / scale_parameter) posterior_probability.append(probability) print(probability) posterior_per_column[column] = posterior_probability / np.sum(posterior_probability) return posterior_per_column ``` #### Result from 4.1 probability ratio of 3.2933 ```python scale_parameter = (17/12) ``` Using the cumulative distribution functions. 
REF: https://en.wikipedia.org/wiki/Laplace_distribution But we want a P(x > value) and not P(x < value), the latter is given by the REF ```python # probability that output is greater than 1.1677 > 0=mu 0.5*(np.exp(- 1.1677 / scale_parameter)) ``` 0.21927996116513163 ```python # prob that he output is greater than -0.832 < 0=mu 1 - 0.5*(np.exp(- 0.832 / scale_parameter)) ``` 0.7220853698398951 ```python 0.7221/0.2192 ``` 3.294251824817518 ```python 0.5 + 0.5 * np.sign(1.1677)*(1 - np.exp(- 1.1677 / scale_parameter)) ``` 0.7807200388348684 ##### Definition 4 ```python def confidence(priors, posteriors, columns): """ INPUT: priors - (dict) maps each column of all the possivle release datasets to the prior knowledge of an adversary posteriors - (dict) maps each column of all the possible release datasets to the posterior knowledge of an adversary columns - (array) contains the names of the columns we would like to obtain the sensitivity from OUTPUT: confidence_per_column - (dict) maps each column to the confidence of the adversary Description: It calculates the confidence of an adversary after seeing the prior """ confidence_per_column = {} for column in columns: confidence_per_column[column] = np.max(posteriors[column] - priors[column]) return confidence_per_column ``` ##### Definition 5 ```python def risk_disclosure(posteriors, columns): """ INPUT: posteriors - (dict) maps each column of all the possible release datasets to the posterior knowledge of an adversary columns - (array) contains the names of the columns we would like to obtain the sensitivity from OUTPUT: posterior_max_per_column - (dict) maps each column to the risk of disclosure Description: It calculates the risk of disclosure per column, which is the max posterior """ posterior_max_per_column = {} for column in columns: posterior_max_per_column[column] = np.max(posteriors[column]) return posterior_max_per_column ``` #### 5.1 equation 4-5 #### This definition is used for equation. 
(4-5) of point 5.1 ```python def upper_bound_posterior(universe, columns, bounded_sensitivity, unbounded_sensitivity, epsilon): """ INPUT: universe - (df) contains all possible values of the dataset columns - (array) contains the names of the columns we would like to obtain the sensitivity from bounded_sensitivity - (dict) it maps the columns to their bounded sensitivities unbounded_sensitivity - (dict) it maps the columns to their unbounded sensitivities epsilon - (float) it is the parameter that tunes noise, based on DP OUTPUT: upper_bound_per_column - (dict) dictionary that maps the value that bounds the posterior (also the risk) per column Description: It calculates the upper bound of the posterior """ upper_bound_posterior_per_column = {} for column in columns: upper_bound_posterior = 1 / (1 + (universe.shape[0] - 1) * \ np.exp(-epsilon * bounded_sensitivity[column] / unbounded_sensitivity[column])) upper_bound_posterior_per_column[column] = upper_bound_posterior return upper_bound_posterior_per_column ``` ##### inequality 3 of 5.1 ```python def tighter_upper_bound_posterior(universe, universe_subset, columns, query_type, sensitivity, epsilon, percentile=50): """ INPUT: universe - (df) contains all possible values of the dataset universe_subset - (df) contains the values of the dataset to be released columns - (array) contains the names of the columns we would like to obtain the sensitivity from query_type - (str) contain the category declaring the type of query to be later on executed sensitivity - (dict) it maps the columns to their sensitivities, they can be based on bounded or unbounded DP epsilon - (float) it is the parameter that tunes noise, based on DP OUTPUT: tighter_upper_bound_posteriorper_column - (dict) maps each column of all the possible release datasets to the tighter upper bound of the posterior knowledge of an adversary Description: It calculates a tighter bound of the posterior knowledge of an adversary """ # Initialize the dictionary to store all the posteriors with the column names as keys tighter_upper_bound_posterior_per_column = {} # We initialie the type of query for which we would like calculate the sensitivity query = Query_class(query_type) for column in columns: # We calculate all the possible release datasets release_datasets = itertools.combinations(universe[column], len(universe_subset[column])) release_datasets = list(release_datasets) # We calculate the L1_norm for the different combinations of the query result from different data releass # We have to complete all loops becuase we need to calculate different values of the posterior # Then we select the max after calculating all posteriors posterior_probability = [] for i in range(0, len(release_datasets) - 1): L1_norms = [] for j in range(0, len(release_datasets)): if release_datasets[i] == release_datasets[j]: continue else: L1_norms.append(np.abs(query.run_query(release_datasets[i], percentile) - query.run_query(release_datasets[j], percentile))) denominator_posterior = 1 for L1_norm in L1_norms: denominator_posterior += np.exp(-epsilon * L1_norm / sensitivity[column]) beta = 1 / denominator_posterior posterior_probability.append(beta) tighter_upper_bound_posterior_per_column[column] = max(posterior_probability) return tighter_upper_bound_posterior_per_column ``` #### 5.2 - inequality 7 ```python def upper_bound_epsilon(universe, columns, bounded_sensitivity, unbounded_sensitivity, risk): """ INPUT: universe - (df) contains all possible values of the dataset columns - (array) contains the names of 
the columns we would like to obtain the sensitivity from bounded_sensitivity - (dict) it maps the columns to their bounded sensitivities unbounded_sensitivity - (dict) it maps the columns to their unbounded sensitivities risk - (float) it is a parameter that sets the privacy requirement. It is the probability that the attacker succeeds in his/her attack OUTPUT: epsilon_upper_bound_per_column - (dict) dictionary that maps the upper bound of epsilon per column Description: It calculates epsilon given a risk willing to take """ epsilon_upper_bound_per_column = {} for column in columns: epsilon_upper_bound = unbounded_sensitivity[column] / bounded_sensitivity[column] * \ np.log(((universe.shape[0] - 1) * risk) / (1 - risk)) epsilon_upper_bound_per_column[column] = epsilon_upper_bound return epsilon_upper_bound_per_column ``` #### Binary search - 5.2 ```python def binary_search_epsilon(universe, universe_subset, columns, query_type, bounded_global_sensitivities, unbounded_global_sensitivities, privacy_requirement, posterior_bound_type='tight'): """ INPUT: universe - (df) contains all possible values of the dataset universe_subset - (df) contains the values of the dataset to be released columns - (array) contains the names of the columns we would like to obtain the sensitivity from query_type - (str) contain the category declaring the type of query to be later on executed bounded_global_sensitivities - (dict) it maps the columns to their bounded sensitivities unbounded_global_sensitivities - (dict) it maps the columns to their unbounded sensitivities privacy_requirement - (float) the highest probability admisable of disclosure posterior_bound_type - (str) the type of bound we use to calculate the new values to decide upon which epsilon to take next OUTPUT: optimal_epsilon - (dict) dictionary that maps the optimal epsilons per column Description: It performs binary search to find the optimal epsilon """ optimal_epsilon = {} for column in columns: max_risk = 0.9999 epsilon_upper_bound = upper_bound_epsilon(universe, columns, bounded_global_sensitivities, unbounded_global_sensitivities, max_risk) epsilon_f = epsilon_upper_bound epsilon_s = 0 for i in range(0,25): epsilon = (epsilon_f[column] + epsilon_s)/2 # Check which type of bound if posterior_bound_type != 'upper': posterior_upper_bound_new = tighter_upper_bound_posterior(universe, universe_subset, columns, query_type, unbounded_global_sensitivities, epsilon) else: posterior_upper_bound_new = upper_bound_posterior(universe, columns, bounded_global_sensitivities, unbounded_global_sensitivities, epsilon) if posterior_upper_bound_new[column] < privacy_requirement: epsilon_s = epsilon elif posterior_upper_bound_new[column] > privacy_requirement: epsilon_f[column] = epsilon optimal_epsilon[column] = epsilon return optimal_epsilon ``` ## MAIN ### True results for different query types - just a warm up ```python # Finding true values of different queries mean_year = df_school['school_year'].mean() mean_absence_days = df_school['absence_days'].mean() median_year = df_school['school_year'].median() median_absence_days = df_school['absence_days'].median() count_year = df_school['school_year'].count() count_absence_days = df_school['absence_days'].count() sum_year = df_school['school_year'].sum() sum_absence_days = df_school['absence_days'].sum() std_year = df_school['school_year'].std() std_absence_days = df_school['absence_days'].std() var_year = df_school['school_year'].var() var_absence_days = df_school['absence_days'].var() var_year = 
df_school['school_year'].var() var_absence_days = df_school['absence_days'].var() percentile_25_year = np.percentile(df_school['school_year'], 25) percentile_25_absence_days = np.percentile(df_school['absence_days'], 25) percentile_50_year = np.percentile(df_school['school_year'], 50) percentile_50_absence_days = np.percentile(df_school['absence_days'], 50) percentile_75_year = np.percentile(df_school['school_year'], 75) percentile_75_absence_days = np.percentile(df_school['absence_days'], 75) print('School year: mean =', mean_year, 'Absence days: mean =', mean_absence_days) print('School year: median =', median_year, 'Absence days: median =', median_absence_days) print('School year: count =', count_year, 'Absence days: count =', count_absence_days) print('School year: sum =', sum_year, 'Absence days: sum =', sum_absence_days) print('School year: std =', std_year, 'Absence days: std =', std_absence_days) print('School year: var =', var_year, 'Absence days: var =', var_absence_days) print('School year: 25th percentile =', percentile_25_year, 'Absence days: 25th percentile =', percentile_25_absence_days) print('School year: 50th percentile =', percentile_50_year, 'Absence days: 50th percentile =', percentile_50_absence_days) print('School year: 75th percentile =', percentile_75_year, 'Absence days: 75th percentile =', percentile_75_absence_days) ``` School year: mean = 2.5 Absence days: mean = 4.0 School year: median = 2.5 Absence days: median = 2.5 School year: count = 4 Absence days: count = 4 School year: sum = 10 Absence days: sum = 16 School year: std = 1.2909944487358056 Absence days: std = 4.08248290463863 School year: var = 1.6666666666666667 Absence days: var = 16.666666666666668 School year: 25th percentile = 1.75 Absence days: 25th percentile = 1.75 School year: 50th percentile = 2.5 Absence days: 50th percentile = 2.5 School year: 75th percentile = 3.25 Absence days: 75th percentile = 4.75 ## All the cross checks of the paper are done with the Mean, as the paper utilizes the mean as the use case ### Unbounded global sensitivity for different query types - 4.1 - 2.8333 of point 4.1 ```python # Calculate the sensitivity of different queries for the unbounded DP columns = ['school_year', 'absence_days'] hamming_distance = 1 unbounded_sensitivities = {} unbounded_sensitivities = calculate_unbounded_sensitivities(df_school, df_school_release.shape[0], columns, hamming_distance, unbounded_sensitivities) ; ``` Unbounded sensitivities for mean {'school_year': 0.8333333333333335, 'absence_days': 2.833333333333333} Unbounded sensitivities for median {'school_year': 1.0, 'absence_days': 4.0} Unbounded sensitivities for count {'school_year': 1, 'absence_days': 1} Unbounded sensitivities for sum {'school_year': 4, 'absence_days': 10} Unbounded sensitivities for std {'school_year': 0.747219128924647, 'absence_days': 3.5276819911981905} Unbounded sensitivities for var {'school_year': 1.3055555555555554, 'absence_days': 15.972222222222221} Unbounded sensitivities for percentile 25 {'school_year': 1.25, 'absence_days': 2.75} Unbounded sensitivities for percentile 50 {'school_year': 1.0, 'absence_days': 4.0} Unbounded sensitivities for percentile 75 {'school_year': 1.25, 'absence_days': 4.25} Unbounded sensitivities for percentile 90 {'school_year': 1.7000000000000002, 'absence_days': 6.5} '' Notice the obvious, the sensitivity for the median is the same as the sensitivity for the 50th percentile. #### 4.3 & 4.4 ```python # Calculating prior knowledge. 
We assume a uniform prior priors = prior_belief(df_school, df_school_release, columns) priors ``` {'school_year': [0.25, 0.25, 0.25, 0.25], 'absence_days': [0.25, 0.25, 0.25, 0.25]} ###### Posterior 0.618 from the paper replicated - there is a typo on the expression below Table 3, the numerator should be "0.3062" ```python # Let us calculate the posteriors query_result = 2.20131 epsilon = 2 query_type = 'mean' mean_unbounded_global_sensitivities = unbounded_empirical_global_L1_sensitivity(df_school, df_school_release.shape[0], columns, query_type, hamming_distance) posteriors = posterior_belief(df_school, df_school_release, columns, query_type, query_result, mean_unbounded_global_sensitivities, epsilon) posteriors ``` {'school_year': array([0.33898835, 0.4003158 , 0.17987348, 0.08082237]), 'absence_days': array([0.61802372, 0.15816999, 0.12500781, 0.09879847])} ```python # Let us calculate the risk of disclosure risk_disclosure(posteriors, columns) ``` {'school_year': 0.40031580079087553, 'absence_days': 0.6180237199612966} ```python # Let us calculate the confidence of the attacker confidence_adversary = confidence(posteriors, priors, columns) confidence_adversary ``` {'school_year': 0.16917763372208006, 'absence_days': 0.15120152863171882} ### Bounded global sensitivity for different query types - prep for 5.1 (right above equation 8, delta_v = 3 for absence days and 1 for school year, for the mean query) ```python # Calculate the sensitivity of different queries for the unbounded DP columns = ['school_year', 'absence_days'] hamming_distance = 1 bounded_sensitivities = {} calculate_bounded_sensitivities(df_school, df_school_release.shape[0], columns, hamming_distance, bounded_sensitivities) ; ``` Bounded sensitivities for mean {'school_year': 1.0, 'absence_days': 3.0} Bounded sensitivities for median {'school_year': 1.0, 'absence_days': 1.0} Bounded sensitivities for count {'school_year': 0, 'absence_days': 0} Bounded sensitivities for sum {'school_year': 3, 'absence_days': 9} Bounded sensitivities for std {'school_year': 0.430722547996921, 'absence_days': 3.2111854102704642} Bounded sensitivities for var {'school_year': 0.8888888888888887, 'absence_days': 15.555555555555555} Bounded sensitivities for percentile 25 {'school_year': 1.0, 'absence_days': 1.0} Bounded sensitivities for percentile 50 {'school_year': 1.0, 'absence_days': 1.0} Bounded sensitivities for percentile 75 {'school_year': 1.0, 'absence_days': 4.0} Bounded sensitivities for percentile 90 {'school_year': 0.9999999999999996, 'absence_days': 5.799999999999999} '' ##### These calculations are not in the paper, but it is interesting to see what values we would get if we assume a bounded DP definition from the beginning. ```python # Calculating prior knowledge. 
We assume a uniform prior priors = prior_belief(df_school, df_school_release, columns) priors ``` {'school_year': [0.25, 0.25, 0.25, 0.25], 'absence_days': [0.25, 0.25, 0.25, 0.25]} ```python # Let us calculate the posteriors query_result = 2.20131 epsilon = 2 query_type = 'mean' mean_bounded_global_sensitivities = bounded_empirical_global_L1_sensitivity(df_school, df_school_release.shape[0], columns, query_type, hamming_distance) posteriors = posterior_belief(df_school, df_school_release, columns, query_type, query_result, mean_bounded_global_sensitivities, epsilon) posteriors ``` {'school_year': array([0.32882419, 0.37769861, 0.19391693, 0.09956027]), 'absence_days': array([0.59733148, 0.16489847, 0.13204038, 0.10572967])} ```python # Let us calculate the risk of disclosure risk_disclosure(posteriors, columns) ``` {'school_year': 0.3776986100000907, 'absence_days': 0.5973314804471918} ```python # Let us calculate the confidence of the attacker confidence_adversary = confidence(posteriors, priors, columns) confidence_adversary ``` {'school_year': 0.15043972733368804, 'absence_days': 0.14427033183087148} ### Calculate upper bounds for posterior - 5.1 ```python # Experiment with an epsilon of 2 epsilon = 2 posterior_upper_bound = upper_bound_posterior(df_school, columns, mean_bounded_global_sensitivities, mean_unbounded_global_sensitivities, epsilon) print('For epsilon {} the posterio is {}'.format(epsilon, posterior_upper_bound)) ``` For epsilon 2 the posterio is {'school_year': 0.7860684399476446, 'absence_days': 0.7347845417564229} ```python # Epsilon of 0 should provie the highest privacy, but also zero utility, hence, the adversary has not updated his/her prior # It has not learned anaything new. But if this individual querying is not malicious, then the utility of the query is 0 epsilon = 0 posterior_upper_bound = upper_bound_posterior(df_school, columns, mean_bounded_global_sensitivities, mean_unbounded_global_sensitivities, epsilon) print('For epsilon {} the posterio is {}'.format(epsilon, posterior_upper_bound)) ``` For epsilon 0 the posterio is {'school_year': 0.25, 'absence_days': 0.25} ### Calculate upper bounds for epsilon given risk willing to take - 5.2 (9) - 0.3829 ```python # Let us calculate the upper bound with epsilon with a risk of 0.33 (there is a chance of 1/3 of letting # the adversary know the true value) risk = 1/3 epsilon_upper_bound = upper_bound_epsilon(df_school, columns, mean_bounded_global_sensitivities, mean_unbounded_global_sensitivities, risk) epsilon_upper_bound ``` {'school_year': 0.3378875900901369, 'absence_days': 0.38293926876882173} ### Calcualte a tighter risk bound - with 5.1 equation 3 (result 12) - 0.3292 ```python epsilon = 0.5 query_type = 'mean' posterior_tighter_upper_bound = tighter_upper_bound_posterior(df_school, df_school_release, columns, query_type, mean_unbounded_global_sensitivities, epsilon) print('For epsilon {} the posterior is {}'.format(epsilon, posterior_tighter_upper_bound)) ``` For epsilon 0.5 the posterior is {'school_year': 0.3291788293012836, 'absence_days': 0.3476971459619019} ### Plotting Fig 2 If you are wondering why are we using these bounds and not plotting the real value of the posterior: This is because, in order to plot the real value of the posterior, you already needed to have a query result. For someone to output the query result, he/she had to already decide on a value of epsilon, and that is actually what they are aiming for - hence the name of the paper. 
The purpose of finding these upper and tighter bounds of the posterior, is to find an optimal value for epsilon, i.e. by trying out with many values of epsilon and without knowing which query result you will get, you can plot the posterior upper and tight bounds and decide which epsilon to pick based on the risk you are willing to take. ###### Side note: In the paper they refer to the acceptable risk as the greek letters delta and as rho. Delta is never placed in an equation, that might somewhat confusing if you are reading the paper. They use 1/3 as this value. First we get the values for the bounds for plotting: ```python precision = 0.01 limit_x = 5 epsilons = np.linspace(0, 5, num=int(limit_x/precision)) # Setting parameters columns = ['school_year', 'absence_days'] query_type = 'mean' # Initialize dicts with correspondng keys: https://stackoverflow.com/questions/11509721/how-do-i-initialize-a-dictionary-of-empty-lists-in-python posterior_upper_bound = {k: [] for k in columns} posterior_tighter_upper_bound = {k: [] for k in columns} # Obtaining the values for the bounds for epsilon in epsilons: temp_posterior_upper_bound = upper_bound_posterior(df_school, columns, mean_bounded_global_sensitivities, mean_unbounded_global_sensitivities, epsilon) temp_posterior_tighter_upper_bound = tighter_upper_bound_posterior(df_school, df_school_release, columns, query_type, mean_unbounded_global_sensitivities, epsilon) for column in columns: posterior_upper_bound[column].append(temp_posterior_upper_bound[column]) posterior_tighter_upper_bound[column].append(temp_posterior_tighter_upper_bound[column]) ``` ```python plt.figure(figsize=(15, 7)) risk = 1/3 # Calculate upper bound epsilon_upper_bound = upper_bound_epsilon(df_school, columns, mean_bounded_global_sensitivities, mean_unbounded_global_sensitivities, risk) for index, column in enumerate(columns): # Start the plot plot_index = int(str(1) + str(len(columns)) + str(index+1)) plt.subplot(plot_index) # plot the upper bounds upper_bound, = plt.plot(epsilons, posterior_upper_bound[column], 'r', label="Risk upper bound") tighter_bound, = plt.plot(epsilons, posterior_tighter_upper_bound[column], 'b', label="Risk tighter bound") # Legends legend = plt.legend(handles=[upper_bound, tighter_bound], loc='lower right') ax = plt.gca().add_artist(legend) # axis labels and titles plt.xlabel('Epsilon') plt.ylabel('Risk disclosure probability') plt.ylim(0.2,1) plt.xlim(0,5) plt.title('{}) Domain {} = {}'.format(index+1, column, df_school[column].values)) plt.suptitle('Upper bounds of the risk disclosure (posterior probability) by varying domains') plt.show() ``` Let us zoom in and check some values for risk and epsilon: ```python plt.figure(figsize=(15, 7)) risk = 1/3 # Calculate upper bound epsilon_upper_bound = upper_bound_epsilon(df_school, columns, mean_bounded_global_sensitivities, mean_unbounded_global_sensitivities, risk) for index, column in enumerate(columns): # Start the plot plot_index = int(str(1) + str(len(columns)) + str(index+1)) plt.subplot(plot_index) # plot the upper bounds upper_bound, = plt.plot(epsilons, posterior_upper_bound[column], 'r', label="Risk upper bound") tighter_bound, = plt.plot(epsilons, posterior_tighter_upper_bound[column], 'b', label="Risk tighter bound") # Plot the risk privacy_requirement, = plt.plot(epsilons, np.full(shape=len(epsilons), fill_value=risk), color='black', label="Privacy requirement") # Plot the epsilon upper bound y_axis_points = np.linspace(0,1,2) plt.plot(np.full(shape=len(y_axis_points), 
fill_value=epsilon_upper_bound[column]), y_axis_points, 'r--') plt.plot(np.full(shape=len(y_axis_points), fill_value=0.5), y_axis_points, 'r--') # Legends legend = plt.legend(handles=[upper_bound, tighter_bound, privacy_requirement], loc='lower right') # axis labels and titles plt.xlabel('Epsilon') plt.ylabel('Risk disclosure probability') plt.title('{}) Domain {} = {}'.format(index+1, column, df_school[column].values)) # Zoom # plt.ylim(0, 0.4) # plt.xlim(0, 0.6) # Additonal info print('Epsilon upper bound for risk {} in universe {} = {}'.format(round(risk, 2), column, epsilon_upper_bound[column])) plt.suptitle('Upper bounds of the risk disclosure (posterior probability) by varying domains') plt.show() ``` More specifically: (Looking at plot 1), left) The authors want us to notice that even though we have an epsilon bound on 0.3379 (left red dotted vertical line), we can find a value of epsilon (e.g., 0.5, right red dotted line) that still fulfills the privacy requirement (<1/3). (But note than an epsilon of 0.5 on the absence days would make the risk go above the set threshold of 1/3) You cannot free epsilon on one side of the inequality from the tighter bound formula due to arithmetic constraints. Thus, you cannot find epsilon directly on the tighter bound curve formula, unlike with the upper bound curve. Therefore, they propose to use binary search starting at the max of the domain; in this case, it is with an epsilon of 5, which would approximately yield a 100% probability of the adversary being successful. To make it more optimal, if you already have visualized the curve, you can choose your start and end of the binary search with high precision. ### Binary Search - 5.2 We are going to perform binary search with the upper bound (not with the tight one9 to show that the binary search converges into the expected values: ```python privacy_requirement = 1/3 query_type = 'mean' posterior_bound_type = 'upper' optimal_epsilons = binary_search_epsilon(df_school, df_school_release, columns, query_type, mean_bounded_global_sensitivities, mean_unbounded_global_sensitivities, privacy_requirement, posterior_bound_type) optimal_epsilons ``` {'school_year': 0.3378877996541445, 'absence_days': 0.3829395062746971} These are the sub-optimal values (as they are calculated with the upper bound) for epsilon to comply with a privacy requirement (disclosure probability) of 1/3 We thus show that our binary search works. See below the exact calculations with the upper bound (they are equal). ```python risk = 1/3 epsilon_upper_bound = upper_bound_epsilon(df_school, columns, mean_bounded_global_sensitivities, mean_unbounded_global_sensitivities, risk) epsilon_upper_bound ``` {'school_year': 0.3378875900901369, 'absence_days': 0.38293926876882173} ```python privacy_requirement = 1/3 query_type = 'mean' posterior_bound_type = 'tight' columns = ['school_year', 'absence_days'] optimal_epsilons = binary_search_epsilon(df_school, df_school_release, columns, query_type, mean_bounded_global_sensitivities, mean_unbounded_global_sensitivities, privacy_requirement, posterior_bound_type) print('Optimal epsilons:') optimal_epsilons ``` Optimal epsilons: {'school_year': 0.525149770057615, 'absence_days': 0.43171996782769506} ```python tight_upper = {} for column in columns: epsilon = optimal_epsilons[column] tight_upper[column] = tighter_upper_bound_posterior(df_school, df_school_release, columns, query_type, mean_unbounded_global_sensitivities, epsilon) print('Posterior with optimal epsilons. 
Very close to the threshold of 1/3. So OK.') print('You can also see, of course, how the optimal epsilon for one attribute does not yield a tight upper bound of 1/3 on the other') tight_upper ``` Posterior with optimal epsilons. Very close to the threshold of 1/3. So OK. You can also see, of course, how the optimal epsilon for one attribute does not yield a tight upper bound of 1/3 on the other {'school_year': {'school_year': 0.33333334702817974, 'absence_days': 0.3530576956680186}, 'absence_days': {'school_year': 0.31796230321506674, 'absence_days': 0.3333333017312349}} This is computationally taxing due to how the algorithm is written. We could cache part of the maximum-posterior operations so we do not need to run them again in every iteration. However, the purpose of this notebook is to get a deeper understanding of the intricacies of the paper. Nonetheless, for the dataset sizes used in these demonstrations, the algorithm runs smoothly. Let us plot the output to verify that these optimal epsilons indeed correspond to the tighter upper bound curves. ```python plt.figure(figsize=(15, 7)) risk = 1/3 # Calculate upper bound epsilon_upper_bound = upper_bound_epsilon(df_school, columns, mean_bounded_global_sensitivities, mean_unbounded_global_sensitivities, risk) for index, column in enumerate(columns): # Start the plot plot_index = int(str(1) + str(len(columns)) + str(index+1)) plt.subplot(plot_index) # plot the upper bounds upper_bound, = plt.plot(epsilons, posterior_upper_bound[column], 'r', label="Risk upper bound") tighter_bound, = plt.plot(epsilons, posterior_tighter_upper_bound[column], 'b', label="Risk tighter bound") # Plot the risk privacy_requirement, = plt.plot(epsilons, np.full(shape=len(epsilons), fill_value=risk), color='black', label="Privacy requirement") # Plot the epsilon upper bound y_axis_points = np.linspace(0,1,2) plt.plot(np.full(shape=len(y_axis_points), fill_value=epsilon_upper_bound[column]), y_axis_points, 'r--') plt.plot(np.full(shape=len(y_axis_points), fill_value=optimal_epsilons[column]), y_axis_points, 'b--') # Legends legend = plt.legend(handles=[upper_bound, tighter_bound, privacy_requirement], loc='lower left') # axis labels and titles plt.xlabel('Epsilon') plt.ylabel('Risk disclosure probability') plt.title('{}) Domain {} = {}'.format(index+1, column, df_school[column].values)) # Zoom plt.ylim(0, 0.4) plt.xlim(0, 0.6) # Additional info print('Plot:', index, column) print('Epsilon upper bound for risk {} in universe {} = {} in dash red'.format(round(risk, 2), column, epsilon_upper_bound[column])) print('Epsilon tight upper bound for risk {} in universe {} = {} in dash blue'.format(round(risk, 2), column, optimal_epsilons[column])) print('\n') plt.suptitle('Upper bounds of the risk disclosure (posterior probability) by varying domains') plt.show() ``` ##### The optimal epsilons (in dash blue) fit the privacy requirement well: they maximize utility while preserving privacy.
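Before moving on to the median query, here is a minimal, self-contained sketch of the search idea itself (the helper and the toy bound below are illustrative assumptions, not the notebook's `binary_search_epsilon`): bisect on epsilon until the chosen monotone risk bound meets the privacy requirement.

```python
# Illustrative sketch only: bisect on epsilon until a monotonically increasing
# risk bound meets the privacy requirement. `risk_bound` stands in for either
# the upper bound or the tighter bound used above.
import math

def search_epsilon_sketch(risk_bound, requirement, eps_max=5.0, tol=1e-6):
    lo, hi = 0.0, eps_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if risk_bound(mid) <= requirement:
            lo = mid   # still within the requirement -> try a larger epsilon
        else:
            hi = mid   # requirement violated -> shrink epsilon
    return lo

# Toy bound with a plausible shape for a uniform prior over m candidate values.
toy_bound = lambda eps, m=4: math.exp(eps) / (math.exp(eps) + m - 1)
print(search_epsilon_sketch(toy_bound, 1/3))   # ~0.405 (= ln 1.5) for this toy bound
```

Because both bounds grow monotonically with epsilon, the bisection converges to the crossing point with the requirement regardless of which bound is supplied.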
### MEDIAN - the last part of the paper (6), exemplifies this process with the median ```python query_type = 'median' median_unbounded_global_sensitivities = unbounded_empirical_global_L1_sensitivity(df_school, df_school_release.shape[0], columns, query_type, hamming_distance) median_bounded_global_sensitivities = bounded_empirical_global_L1_sensitivity(df_school, df_school_release.shape[0], columns, query_type, hamming_distance) print('unbounded sensitivity', median_unbounded_global_sensitivities) print('bounded sensitivity', median_bounded_global_sensitivities) ``` unbounded sensitivity {'school_year': 1.0, 'absence_days': 4.0} bounded sensitivity {'school_year': 1.0, 'absence_days': 1.0} ```python # Calculating prior knowledge. We assume a uniform prior priors = prior_belief(df_school, df_school_release, columns) priors ``` {'school_year': [0.25, 0.25, 0.25, 0.25], 'absence_days': [0.25, 0.25, 0.25, 0.25]} ```python # Let us calculate the posteriors # NOTE: mean_unbounded_global_sensitivities is passed here; this may be intentional or a leftover from the mean section query_result = 2.20131 epsilon = 2 query_type = 'median' posteriors = posterior_belief(df_school, df_school_release, columns, query_type, query_result, mean_unbounded_global_sensitivities, epsilon) posteriors ``` {'school_year': array([0.33898835, 0.4003158 , 0.17987348, 0.08082237]), 'absence_days': array([0.61802372, 0.15816999, 0.12500781, 0.09879847])} ```python # Let us calculate the risk of disclosure risk_disclosure(posteriors, columns) ``` {'school_year': 0.40031580079087553, 'absence_days': 0.6180237199612966} ```python # Let us calculate the confidence of the attacker confidence_adversary = confidence(posteriors, priors, columns) confidence_adversary ``` {'school_year': 0.16917763372208006, 'absence_days': 0.15120152863171882} ##### Calculate the upper bounds of the posterior ```python epsilon = 2 posterior_upper_bound = upper_bound_posterior(df_school, columns, median_bounded_global_sensitivities, median_unbounded_global_sensitivities, epsilon) print('For epsilon {} the posterior is {}'.format(epsilon, posterior_upper_bound)) ``` For epsilon 2 the posterior is {'school_year': 0.7112345942275939, 'absence_days': 0.35466124439244334} ```python epsilon = 0 posterior_upper_bound = upper_bound_posterior(df_school, columns, median_bounded_global_sensitivities, median_unbounded_global_sensitivities, epsilon) print('For epsilon {} the posterior is {}'.format(epsilon, posterior_upper_bound)) ``` For epsilon 0 the posterior is {'school_year': 0.25, 'absence_days': 0.25} ##### Calculate upper bounds for epsilon given risk willing to take - 6 result 18 = 1.6219 ```python # Let us calculate the upper bound on epsilon with a risk of 0.33 (there is a chance of 1/3 of letting # the adversary know the true value) risk = 1/3 epsilon_upper_bound = upper_bound_epsilon(df_school, columns, median_bounded_global_sensitivities, median_unbounded_global_sensitivities, risk) epsilon_upper_bound ``` {'school_year': 0.4054651081081642, 'absence_days': 1.6218604324326569} ##### Calculate a tighter risk bound with a given epsilon ```python epsilon = 0.5 query_type = 'median' posterior_tighter_upper_bound = tighter_upper_bound_posterior(df_school, df_school_release, columns, query_type, median_unbounded_global_sensitivities, epsilon) print('For epsilon {} the posterior is {}'.format(epsilon, posterior_tighter_upper_bound)) ``` For epsilon 0.5 the posterior is {'school_year': 0.31122966560092735, 'absence_days': 0.26560468668687814} ##### Plotting ```python precision = 0.01 limit_x = 5 epsilons = np.linspace(0, 5, num=int(limit_x/precision)) # Setting parameters
columns = ['school_year', 'absence_days'] query_type = 'median' # Initialize dicts with correspondng keys: https://stackoverflow.com/questions/11509721/how-do-i-initialize-a-dictionary-of-empty-lists-in-python posterior_upper_bound = {k: [] for k in columns} posterior_tighter_upper_bound = {k: [] for k in columns} # Obtaining the values for the bounds for epsilon in epsilons: temp_posterior_upper_bound = upper_bound_posterior(df_school, columns, median_bounded_global_sensitivities, median_unbounded_global_sensitivities, epsilon) temp_posterior_tighter_upper_bound = tighter_upper_bound_posterior(df_school, df_school_release, columns, query_type, median_unbounded_global_sensitivities, epsilon) for column in columns: posterior_upper_bound[column].append(temp_posterior_upper_bound[column]) posterior_tighter_upper_bound[column].append(temp_posterior_tighter_upper_bound[column]) ``` ```python plt.figure(figsize=(15, 7)) risk = 1/3 # Calculate upper bound epsilon_upper_bound = upper_bound_epsilon(df_school, columns, median_bounded_global_sensitivities, median_unbounded_global_sensitivities, risk) for index, column in enumerate(columns): # Start the plot plot_index = int(str(1) + str(len(columns)) + str(index+1)) plt.subplot(plot_index) # plot the upper bounds upper_bound, = plt.plot(epsilons, posterior_upper_bound[column], 'r', label="Risk upper bound") tighter_bound, = plt.plot(epsilons, posterior_tighter_upper_bound[column], 'b', label="Risk tighter bound") # Legends legend = plt.legend(handles=[upper_bound, tighter_bound], loc='lower right') ax = plt.gca().add_artist(legend) # axis labels and titles plt.xlabel('Epsilon') plt.ylabel('Risk disclosure probability') plt.ylim(0.2,1) plt.xlim(0,5) plt.title('{}) Domain {} = {}'.format(index+1, column, df_school[column].values)) plt.suptitle('Upper bounds of the risk disclosure (posterior probability) by varying domains') plt.show() ``` ```python plt.figure(figsize=(15, 7)) risk = 1/3 # Calculate upper bound epsilon_upper_bound = upper_bound_epsilon(df_school, columns, median_bounded_global_sensitivities, median_unbounded_global_sensitivities, risk) for index, column in enumerate(columns): # Start the plot plot_index = int(str(1) + str(len(columns)) + str(index+1)) plt.subplot(plot_index) # plot the upper bounds upper_bound, = plt.plot(epsilons, posterior_upper_bound[column], 'r', label="Risk upper bound") tighter_bound, = plt.plot(epsilons, posterior_tighter_upper_bound[column], 'b', label="Risk tighter bound") # Plot the risk privacy_requirement, = plt.plot(epsilons, np.full(shape=len(epsilons), fill_value=risk), color='black', label="Privacy requirement") # Plot the epsilon upper bound y_axis_points = np.linspace(0,1,2) plt.plot(np.full(shape=len(y_axis_points), fill_value=epsilon_upper_bound[column]), y_axis_points, 'r--') # Legends legend = plt.legend(handles=[upper_bound, tighter_bound, privacy_requirement], loc='lower right') # axis labels and titles plt.xlabel('Epsilon') plt.ylabel('Risk disclosure probability') plt.title('{}) Domain {} = {}'.format(index+1, column, df_school[column].values)) # Zoom # plt.ylim(0, 0.4) # plt.xlim(0, 0.6) # Additonal info print('Epsilon upper bound for risk {} in universe {} = {}'.format(round(risk, 2), column, epsilon_upper_bound[column])) plt.suptitle('Upper bounds of the risk disclosure (posterior probability) by varying domains') plt.show() ``` ### Binary Search - 5.2 We are going to perform binary search with the upper bound (not with the tight one9 to show that the binary search converges into the 
expected values: ```python privacy_requirement = 1/3 query_type = 'median' posterior_bound_type = 'upper' optimal_epsilons = binary_search_epsilon(df_school, df_school_release, columns, query_type, median_bounded_global_sensitivities, median_unbounded_global_sensitivities, privacy_requirement, posterior_bound_type) optimal_epsilons ``` {'school_year': 0.40546535958497326, 'absence_days': 1.621861438339893} These are the sub-optimal values (as they are calculated with the upper bound) for epsilon to comply with a privacy requirement (disclosure probability) of 1/3. We thus show that our binary search works. See below the exact calculations with the upper bound (they are equal). ```python risk = 1/3 epsilon_upper_bound = upper_bound_epsilon(df_school, columns, median_bounded_global_sensitivities, median_unbounded_global_sensitivities, risk) epsilon_upper_bound ``` {'school_year': 0.4054651081081642, 'absence_days': 1.6218604324326569} ##### Result 6 after equation 21 - 2.773 ```python privacy_requirement = 1/3 query_type = 'median' posterior_bound_type = 'tight' columns = ['school_year', 'absence_days'] optimal_epsilons = binary_search_epsilon(df_school, df_school_release, columns, query_type, median_bounded_global_sensitivities, median_unbounded_global_sensitivities, privacy_requirement, posterior_bound_type) print('Optimal epsilons:') optimal_epsilons ``` Optimal epsilons: {'school_year': 0.693147280402229, 'absence_days': 2.772589121608916} ```python query_type = 'median' tight_upper = {} for column in columns: epsilon = optimal_epsilons[column] tight_upper[column] = tighter_upper_bound_posterior(df_school, df_school_release, columns, query_type, median_unbounded_global_sensitivities, epsilon) print('Posterior with optimal epsilons. Very close to the threshold of 1/3. So OK.') print('You can also see how the optimal epsilon for one attribute does not yield a tight upper bound of 1/3 on the other') tight_upper ``` Posterior with optimal epsilons. Very close to the threshold of 1/3. So OK. You can also see how the optimal epsilon for one attribute does not yield a tight upper bound of 1/3 on the other {'school_year': {'school_year': 0.3333333444269202, 'absence_days': 0.27160681152823796}, 'absence_days': {'school_year': 0.47058824634931673, 'absence_days': 0.3333333444269202}} This is computationally taxing due to how the algorithm is written. We could cache part of the maximum-posterior operations so we do not need to run them again in every iteration. However, the purpose of this notebook is to get a deeper understanding of the intricacies of the paper. Nonetheless, for the dataset sizes used in these demonstrations, the algorithm runs smoothly. Let us plot the output to verify that these optimal epsilons indeed correspond to the tighter upper bound curves.
```python plt.figure(figsize=(15, 7)) risk = 1/3 # Calculate upper bound epsilon_upper_bound = upper_bound_epsilon(df_school, columns, median_bounded_global_sensitivities, median_unbounded_global_sensitivities, risk) for index, column in enumerate(columns): # Start the plot plot_index = int(str(1) + str(len(columns)) + str(index+1)) plt.subplot(plot_index) # plot the upper bounds upper_bound, = plt.plot(epsilons, posterior_upper_bound[column], 'r', label="Risk upper bound") tighter_bound, = plt.plot(epsilons, posterior_tighter_upper_bound[column], 'b', label="Risk tighter bound") # Plot the risk privacy_requirement, = plt.plot(epsilons, np.full(shape=len(epsilons), fill_value=risk), color='black', label="Privacy requirement") # Plot the epsilon upper bound y_axis_points = np.linspace(0,1,2) plt.plot(np.full(shape=len(y_axis_points), fill_value=epsilon_upper_bound[column]), y_axis_points, 'r--') plt.plot(np.full(shape=len(y_axis_points), fill_value=optimal_epsilons[column]), y_axis_points, 'b--') # Legends legend = plt.legend(handles=[upper_bound, tighter_bound, privacy_requirement], loc='lower right') # axis labels and titles plt.xlabel('Epsilon') plt.ylabel('Risk disclosure probability') plt.title('{}) Domain {} = {}'.format(index+1, column, df_school[column].values)) # Additonal info print('Plot:', index, column) print('Epsilon upper bound for risk {} in universe {} = {} in dash red'.format(round(risk, 2), column, epsilon_upper_bound[column])) print('Epsilon tight upper bound for risk {} in universe {} = {} in dash blue'.format(round(risk, 2), column, optimal_epsilons[column])) print('\n') plt.suptitle('Upper bounds of the risk disclosure (posterior probability) by varying domains') plt.show() ``` The optimal epsilons (in dash blue) fit pretty well, they maximize utility while preserving privacy.
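For readers who want to see the core Bayesian update behind all of the posteriors above in isolation, here is a generic, self-contained illustration (a hypothetical helper with made-up candidate values, not the notebook's `posterior_belief` implementation): with the Laplace mechanism, the adversary's posterior over candidate values is proportional to the Laplace likelihood of the released answer times the prior.

```python
# Hypothetical helper for illustration: Bayes update of the adversary's belief,
# P(v | r) proportional to exp(-eps * |r - q(v)| / sensitivity) * P(v).
import numpy as np

def posterior_over_candidates(query_result, candidate_query_values, sensitivity, eps, prior=None):
    candidates = np.asarray(candidate_query_values, dtype=float)
    likelihood = np.exp(-eps * np.abs(query_result - candidates) / sensitivity)
    prior = np.ones_like(likelihood) if prior is None else np.asarray(prior, dtype=float)
    weights = likelihood * prior
    return weights / weights.sum()

# Four made-up candidate query answers; a uniform prior is assumed.
print(posterior_over_candidates(2.20131, [2.1, 2.3, 2.6, 2.9], sensitivity=1.0, eps=2))
```

The larger epsilon is, the more sharply the likelihood concentrates around the candidate closest to the released answer, which is exactly why the disclosure risk grows with epsilon in the plots above.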
23aa166b39415141efc0fa44b4ee673d7d8d38ea
417,457
ipynb
Jupyter Notebook
Extant_Papers_Implementations/A_method_to_choose_epsilon/How_much_is_enough_Calculating_An_Optimal_Epsilon_Last_version.ipynb
gonzalo-munillag/Differential_Privacy
965016e7fb15201d74ba415f2117697b39bc056a
[ "MIT" ]
6
2020-10-26T09:03:41.000Z
2021-06-02T12:57:22.000Z
Extant_Papers_Implementations/A_method_to_choose_epsilon/How_much_is_enough_Calculating_An_Optimal_Epsilon_Last_version.ipynb
gonzalo-munillag/Blog
965016e7fb15201d74ba415f2117697b39bc056a
[ "MIT" ]
null
null
null
Extant_Papers_Implementations/A_method_to_choose_epsilon/How_much_is_enough_Calculating_An_Optimal_Epsilon_Last_version.ipynb
gonzalo-munillag/Blog
965016e7fb15201d74ba415f2117697b39bc056a
[ "MIT" ]
null
null
null
139.524398
55,996
0.864951
true
19,088
Qwen/Qwen-72B
1. YES 2. YES
0.810479
0.857768
0.695203
__label__eng_Latn
0.944187
0.453521
<h1 style="border: 1.5px solid #ccc; padding: 8px 12px; color:#56BFCB;" > <center> <br/> Lista de Exercícios 4a <br/> <span style="font-size:18px;"> Guilherme Esdras </span> </center> </h1> --- <b> <center> Imports </center> </b> ```python import warnings warnings.filterwarnings('ignore') import numpy as np import pandas as pd from scipy import optimize import matplotlib as mpl import matplotlib.pyplot as plt %matplotlib inline plt.style.use('seaborn-poster') import sympy as sp sp.init_printing() sp.var('x, y') ``` --- <div class="alert alert-block alert-info" style="color:#20484d;"> <b>Exercicio 1:</b> Implemente o método da Bisseção descrito pelo algoritmo 1.1 no texto. Para testar o algoritmo, use os exemplos do material textual. </div> ```python # Implementando função do Método da Bisseção # Versão 1 (antes da aula de quinta-feira) def bissecao_old(f, a, b, n_its, tol, verbose=False, r_tb=False): ''' Método da Bisseção v1 [by guilherme esdras] Parâmetros: ---------- f (function) : A função na qual busca-se encontrar uma solução aproximada para f(x) = 0. a (int), b (int): Limites inferior e superior, respectivamente, do intervalo. n_its (int): Número máximo de iterações. tol (float): Tolerância mínima ou Erro aceitável (critério de parada). verbose (boolean): True = exibe as tabelas e prints; False (padrão) = não exibe. r_tb (boolean): True = retorna uma tabela de dados para uso com Pandas; False (padrão) = não retorna. Retornos: ------- xi (int): O ponto médio do intervalo computado na i-ésima iteração. i (int): O número máximo de iterações que foi preciso para encontrar a solução. ''' # Inicializa as variaveis de entrada i = 0 ai = a bi = b ''' Opcional... ''' if verbose or r_tb: tb = [] # Verifica a existencia da raiz, # primeiro testando: if np.sign(f(a)) * np.sign(f(b)) != -1: if verbose: print('Não existe raiz no intervalo. Encerrando!') return # Caso não contrário... else: # Enquanto a tolerância for aceitável e o número de iterações não tiver sido atingido... while ( ((bi - ai) / 2.0) > tol ) and ( i < n_its ): ''' Opcional... ''' if verbose or r_tb: tb.append([]); tb[i].append(ai); tb[i].append(bi); # calcula o ponto médio do intervalo... xi = (ai + bi) / 2.0 ''' Opcional... ''' if verbose or r_tb: tb[i].append(xi); tb[i].append(f(xi)); # verifica se este ponto médio é a raiz... if f(xi) == 0: ''' Opcional... ''' if verbose: print('A raiz é o valor atual de x!') return ( xi, i ) # caso contrário, verificar em que subintervalo está a raiz... else: # se o primeiro subintervalo tem a raiz... if np.sign(f(ai)) * np.sign(f(xi)) == -1: # atualiza o limite superior: bi = xi # caso contrário (se o segundo subintervalo tem a raiz)... else: # atualiza o limite inferior: ai = xi # atualiza o iterador, incrementado-o i += 1 ''' Opcional... ''' if verbose: msg = f'Raiz encontrada: {xi} | Em {i} iterações' print(pd.DataFrame(tb, columns=["a", "b", "x", "f(x)"])) print('-'*len(msg)) print(msg) print('-'*len(msg)) ''' ----------- ''' # ao fim, retorna o valor exato ou aproximado da raiz if not r_tb: return ( xi, i ) else: return ( xi, i, tb ) ``` ```python def bissecao_do_professor(f, a, b, tol=1e-10): i = 0; erro, x_ant = 1, a while(erro > tol): inf = np.sign(f(a)) sup = np.sign(f(b)) if inf*sup != -1: print("Não há raiz nesse intervalo") return else: x = (a + b)/2. 
if f(x) == 0: print("A raíz é", x) return elif inf*np.sign(f(x)) == -1: b = x else: a = x erro = np.abs((x - x_ant)/np.abs(x)) x_ant = x return x ``` ```python # Implementando função do Método da Bisseção def bissecao(f, a, b, n_its=9999, tol=1e-10, verbose=False, r_tb=False, p_err=False): ''' Método da Bisseção [by guilherme esdras] Parâmetros: ---------- f (function): A função na qual busca-se encontrar uma solução aproximada para f(x) = 0. a (int), b (int): Limites inferior e superior, respectivamente, do intervalo. n_its (int) [Opcional | Default: 9999]: Número máximo de iterações. tol (float) [Opcional | Default: 1e-10]: Tolerância mínima ou Erro aceitável (critério de parada). Se nenhum valor for passado o padrão será 10^-10. verbose (boolean): True: Exibe as tabelas e prints; False (default): Não exibe. r_tb (boolean): True: Retorna uma tabela de dados para uso com Pandas; False (default): Não retorna. p_err (boolean): True: Imprime o erro em cada iteração; False (default): Não imprime. Retornos: ------- xi (int): O ponto médio do intervalo computado na i-ésima iteração. i (int): O número máximo de iterações que foi preciso para encontrar a solução. ''' # Inicializa as variaveis de entrada i = 0 ai = a bi = b ''' Opcional... ''' if verbose or r_tb: tb = [] # Verifica a existencia da raiz, # primeiro testando: if np.sign(f(a)) * np.sign(f(b)) != -1: ''' Opcional... ''' if verbose: print('Não existe raiz no intervalo. Encerrando!') return # Caso não contrário... else: erro, x_ant = 1, a # Enquanto a tolerância for aceitável e o número de iterações não tiver sido atingido... while ( erro > tol ) and ( i < n_its ): ''' Opcional... ''' if verbose or r_tb: tb.append([]); tb[i].append(ai); tb[i].append(bi); # calcula o ponto médio do intervalo... x = ai + 0.5 * (bi - ai) # (ai + bi) / 2.0 ''' Opcional... ''' if verbose or r_tb: tb[i].append(x); tb[i].append(f(x)); # verifica se este ponto médio é a raiz... if f(x) == 0: ''' Opcional... ''' if verbose: print('A raiz é o valor atual de x!') return ( x, i ) # caso contrário, verificar em que subintervalo está a raiz... else: # se o primeiro subintervalo tem a raiz... if np.sign(f(ai)) * np.sign(f(x)) == -1: # atualiza o limite superior: bi = x # caso contrário (se o segundo subintervalo tem a raiz)... else: # atualiza o limite inferior: ai = x # atualiza o iterador, incrementado-o i += 1 # calcula o erro erro = np.abs((x - x_ant) / np.abs(x)) x_ant = x if p_err: print(f'Erro {i}: {erro:.5f}') ''' Opcional... ''' if verbose: msg = f'Raiz encontrada: {x} | Em {i} iterações' print('-'*len(msg)) print(pd.DataFrame(tb, columns=["a", "b", "x", "f(x)"])) print('-'*len(msg)) print(msg) print('-'*len(msg)) ''' ----------- ''' # ao fim, retorna o valor exato ou aproximado da raiz if not r_tb: return ( x, i ) else: return ( x, i, tb ) ``` --- **Realizando Teste 01** ```python f = lambda x: ( x**5+2*x**3-5*x-2 ) sp.var('x') f_sp = sp.Lambda(x, x**5+2*x**3-5*x-2) f_sp ``` ```python x = np.linspace(-5, 5, 100) plt.plot(x, f(x)) plt.grid() ``` ```python bissecao(f, -2, 2, verbose=True) ``` ```python bissecao_do_professor(f, -2, 2) ``` --- **Realizando Teste 02** ```python f = lambda x: ( x * np.exp(x) - np.sin(8 * x) - 1) sp.var('x') f_sp = sp.Lambda(x, x * sp.exp(x) - sp.sin(8 * x) - 1) f_sp ``` ```python x, i, tb = bissecao(f, -0.5, 1, r_tb=True) print('\nRaiz {} encontrada em {} iterações.\nComo mostrado na tabela:\n'.format(x, i)) pd.DataFrame(tb, columns=["a", "b", "x", "f(x)"]) ``` Raiz 0.43458593652030686 encontrada em 36 iterações. 
Como mostrado na tabela: <div> <style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } </style> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>a</th> <th>b</th> <th>x</th> <th>f(x)</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>-0.500000</td> <td>1.000000</td> <td>0.250000</td> <td>-1.588291e+00</td> </tr> <tr> <th>1</th> <td>0.250000</td> <td>1.000000</td> <td>0.625000</td> <td>1.126578e+00</td> </tr> <tr> <th>2</th> <td>0.250000</td> <td>0.625000</td> <td>0.437500</td> <td>2.839648e-02</td> </tr> <tr> <th>3</th> <td>0.250000</td> <td>0.437500</td> <td>0.343750</td> <td>-8.968958e-01</td> </tr> <tr> <th>4</th> <td>0.343750</td> <td>0.437500</td> <td>0.390625</td> <td>-4.392856e-01</td> </tr> <tr> <th>5</th> <td>0.390625</td> <td>0.437500</td> <td>0.414062</td> <td>-2.034669e-01</td> </tr> <tr> <th>6</th> <td>0.414062</td> <td>0.437500</td> <td>0.425781</td> <td>-8.664151e-02</td> </tr> <tr> <th>7</th> <td>0.425781</td> <td>0.437500</td> <td>0.431641</td> <td>-2.885010e-02</td> </tr> <tr> <th>8</th> <td>0.431641</td> <td>0.437500</td> <td>0.434570</td> <td>-1.526564e-04</td> </tr> <tr> <th>9</th> <td>0.434570</td> <td>0.437500</td> <td>0.436035</td> <td>1.414120e-02</td> </tr> <tr> <th>10</th> <td>0.434570</td> <td>0.436035</td> <td>0.435303</td> <td>6.999002e-03</td> </tr> <tr> <th>11</th> <td>0.434570</td> <td>0.435303</td> <td>0.434937</td> <td>3.424343e-03</td> </tr> <tr> <th>12</th> <td>0.434570</td> <td>0.434937</td> <td>0.434753</td> <td>1.636134e-03</td> </tr> <tr> <th>13</th> <td>0.434570</td> <td>0.434753</td> <td>0.434662</td> <td>7.418116e-04</td> </tr> <tr> <th>14</th> <td>0.434570</td> <td>0.434662</td> <td>0.434616</td> <td>2.945957e-04</td> </tr> <tr> <th>15</th> <td>0.434570</td> <td>0.434616</td> <td>0.434593</td> <td>7.097419e-05</td> </tr> <tr> <th>16</th> <td>0.434570</td> <td>0.434593</td> <td>0.434582</td> <td>-4.083998e-05</td> </tr> <tr> <th>17</th> <td>0.434582</td> <td>0.434593</td> <td>0.434587</td> <td>1.506739e-05</td> </tr> <tr> <th>18</th> <td>0.434582</td> <td>0.434587</td> <td>0.434585</td> <td>-1.288622e-05</td> </tr> <tr> <th>19</th> <td>0.434585</td> <td>0.434587</td> <td>0.434586</td> <td>1.090601e-06</td> </tr> <tr> <th>20</th> <td>0.434585</td> <td>0.434586</td> <td>0.434585</td> <td>-5.897806e-06</td> </tr> <tr> <th>21</th> <td>0.434585</td> <td>0.434586</td> <td>0.434586</td> <td>-2.403602e-06</td> </tr> <tr> <th>22</th> <td>0.434586</td> <td>0.434586</td> <td>0.434586</td> <td>-6.564999e-07</td> </tr> <tr> <th>23</th> <td>0.434586</td> <td>0.434586</td> <td>0.434586</td> <td>2.170507e-07</td> </tr> <tr> <th>24</th> <td>0.434586</td> <td>0.434586</td> <td>0.434586</td> <td>-2.197246e-07</td> </tr> <tr> <th>25</th> <td>0.434586</td> <td>0.434586</td> <td>0.434586</td> <td>-1.336925e-09</td> </tr> <tr> <th>26</th> <td>0.434586</td> <td>0.434586</td> <td>0.434586</td> <td>1.078569e-07</td> </tr> <tr> <th>27</th> <td>0.434586</td> <td>0.434586</td> <td>0.434586</td> <td>5.325999e-08</td> </tr> <tr> <th>28</th> <td>0.434586</td> <td>0.434586</td> <td>0.434586</td> <td>2.596153e-08</td> </tr> <tr> <th>29</th> <td>0.434586</td> <td>0.434586</td> <td>0.434586</td> <td>1.231230e-08</td> </tr> <tr> <th>30</th> <td>0.434586</td> <td>0.434586</td> <td>0.434586</td> <td>5.487689e-09</td> </tr> <tr> <th>31</th> <td>0.434586</td> <td>0.434586</td> <td>0.434586</td> <td>2.075382e-09</td> </tr> <tr> 
<th>32</th> <td>0.434586</td> <td>0.434586</td> <td>0.434586</td> <td>3.692286e-10</td> </tr> <tr> <th>33</th> <td>0.434586</td> <td>0.434586</td> <td>0.434586</td> <td>-4.838481e-10</td> </tr> <tr> <th>34</th> <td>0.434586</td> <td>0.434586</td> <td>0.434586</td> <td>-5.730971e-11</td> </tr> <tr> <th>35</th> <td>0.434586</td> <td>0.434586</td> <td>0.434586</td> <td>1.559597e-10</td> </tr> </tbody> </table> </div> --- <div class="alert alert-block alert-info" style="color:#20484d;"> <b>Exercicio 2</b> - Determine as raízes reais de $f(x) = −0.5x^2 + 2.5x + 4.5$ </div> ```python # Exibindo f = lambda x: ( ((-0.5)*(x**2)) + (2.5*x) + (4.5) ) sp.var('x') f_sp = sp.Lambda(x, ((-0.5)*(x**2)) + (2.5*x) + (4.5)) f_sp ``` ```python a = (-0.5) b = 2.5 c = 4.5 coefs = np.array([a, b, c]) coefs ``` array([-0.5, 2.5, 4.5]) <ul class="alert alert-block alert-info" style="color:#20484d;"> <b>(a)</b> Graficamente </ul> ```python x = np.linspace(-6, 10, 500) y = f(x) raiz1 = optimize.root(f, -2) raiz2 = optimize.root(f, 7) raizes = np.array([raiz1.x, raiz2.x]) fig = plt.figure(figsize=(10, 8)) ax = fig.add_subplot(111) plt.vlines(x=raiz1.x, ymin=f(-6), ymax=0, colors='gray', ls=':', lw=2) plt.vlines(x=raiz2.x, ymin=f(-6), ymax=0, colors='gray', ls=':', lw=2) ax.axhline(0, color='b') ax.plot(x, f(x), 'r', label="$f(x)$") ax.plot(raizes, f(raizes), 'kv', label="$Raizes$") ax.legend(loc='best') ax.set_xlabel('$x$') ax.set_ylabel('$f(x)$') plt.show() ``` <ul class="alert alert-block alert-info" style="color:#20484d;"> <b>(b)</b> Usando a fórmula quadrática </ul> **Forma 1: "na raça" com Bhaskara** ```python def formula_quadratica(coefs, verbose=False): a = coefs[0] b = coefs[1] c = coefs[2] if a == 0: return bhask = (b**2) - 4*a*c delta = np.sqrt(np.abs(bhask)) if bhask > 0: if verbose: print(" raizes reais e diferentes ") x1 = (-b + delta) / (2 * a) x2 = (-b - delta) / (2 * a) return x1, x2 elif bhask == 0: if verbose: print(" raizes reais e iguais ") x = ( -b / (2 * a) ) return x else: if verbose: print(" raizes complexas "); print(-b / (2 * a), " + i", delta); print(-b / (2 * a), " - i", delta); x = -b / (2 * a) return x return ``` ```python formula_quadratica(coefs, True) ``` **Forma 2: Usando Numpy** ```python coefs = np.array([a, b, c]) x1, x2 = np.roots(coefs) x2, x1 ``` **Forma 3: Usando SymPy (apenas simbólico)** ```python sp.var('x') sp.solve(((-0.5)*(x**2)) + (2.5*x) + (4.5), x) ``` <ul class="alert alert-block alert-info" style="color:#20484d;"> <b>(c)</b> Usando três iterações do método da bisseção para determinar a maior raiz. Use as aproximações iniciais $x_l = 5$ e $x_u = 10$. Calcule o erro relativo obtido entre cada iteração, e o erro entre os valores verdadeiros encontrados no item b e o valor de cada iteração. </ul> ```python bissecao(f, 5, 10, 3, verbose=True, p_err=True) ``` --- <div class="alert alert-block alert-info" style="color:#20484d;"> <b>Exercicio 3:</b> Localize a primeira raiz não-trivial de $sin \ x = x^3$, onde x está em radianos. Use uma técnica gráfica e a bisseção com o intervalo inicial de 0,5 a 1. Faça os cálculos até que o erro seja inferior a 2%. 
</div> ```python f = lambda x: np.sin(x) - x**3 ``` ```python # gráfico x = np.linspace(-np.pi, np.pi, 500) y = f(x) raiz1 = optimize.root(f, -2) raiz2 = optimize.root(f, 7) raizes = np.array([raiz1.x, raiz2.x]) fig = plt.figure(figsize=(10, 8)) ax = fig.add_subplot(111) ax.axhline(0, color='b') plt.vlines(x=raiz1.x, ymin=f(3), ymax=0, colors='gray', ls=':', lw=2) plt.vlines(x=raiz2.x, ymin=f(3), ymax=0, colors='gray', ls=':', lw=2) ax.plot(x, f(x), 'r', label="$f(x)$") ax.plot(raizes, f(raizes), 'kv', label="$Raizes$") ax.legend(loc='best') ax.set_xlabel('$x$') ax.set_ylabel('$f(x)$') plt.show() ``` ```python # bisseção bissecao(f, 0.5, 1, tol=0.02, verbose=True) ``` --- <div class="alert alert-block alert-info" style="color:#20484d;"> <b>Exercicio 4:</b> Dada $f(x) = −2x^6 − 1.5x^4 + 10x + 20$, encontre o máximo dessa função ($f'(x) = 0$) usando o método da bisseção, considerando o intervalo $[0, 1]$ e um erro limite de 5%. </div> ```python f = lambda x: ( (-2*x**6) - (1.5*x**4) + (10*x) + 20 ) sp.var('x') f_sp = sp.Lambda(x, (-2*x**6) - (1.5*x**4) + (10*x) + 20) f_sp ``` ```python # derivada sp.diff(f_sp(x), x) ``` ```python f_ = lambda x: ( (-12*x**5) - (6*x**3) + 10 ) ``` ```python x, i = bissecao(f_, 0, 1, tol=0.05) f'O máximo é {x} encontrado em {i} iterações' ``` 'O máximo é 0.84375 encontrado em 5 iterações' --- <div class="alert alert-block alert-info" style="color:#20484d;"> <b>Exercicio 5</b> </div> ```python def regula_falsi(f, xl, xu, tol=1e-10): if (f(xl) * f(xu) >= 0): return -1 i = 0 x = xl erro, x_ant = 1, x while erro > tol: x = xu - ( ( f(xu)*(xl-xu) ) / (f(xl)-f(xu)) ) if f(x) * f(xl) < 0: xu = x else: xl = x erro = np.abs((x - x_ant) / np.abs(x)) x_ant = x i += 1 return ( x, i ) ``` ```python f = lambda x: ( x**5+2*x**3-5*x-2 ) ``` ```python # Comparativo de métodos print(f'Método da Bisseção: \t {bissecao(f, -2, 2)}') print(f'Método Regula Falsi: \t {regula_falsi(f, -2, 2)}') ``` Método da Bisseção: (1.319641167181544, 35) Método Regula Falsi: (1.3196411670137347, 51) Como podemos notar, o método **Regula Falsi** conseguiu ainda mais casas decimais, utilizando mais iterações e chegando a um valor ainda mais preciso que o método da **Bisseção**.
d69dda4f9082667c54dc4e80859862c517075b15
168,419
ipynb
Jupyter Notebook
Class 04/Lista 4a - Guilherme Esdras.ipynb
GuilhermeEsdras/number-methods
e92a1e12d71ba688d01407982cbde5160f849498
[ "MIT" ]
null
null
null
Class 04/Lista 4a - Guilherme Esdras.ipynb
GuilhermeEsdras/number-methods
e92a1e12d71ba688d01407982cbde5160f849498
[ "MIT" ]
null
null
null
Class 04/Lista 4a - Guilherme Esdras.ipynb
GuilhermeEsdras/number-methods
e92a1e12d71ba688d01407982cbde5160f849498
[ "MIT" ]
null
null
null
102.882712
33,896
0.81511
true
6,975
Qwen/Qwen-72B
1. YES 2. YES
0.800692
0.863392
0.691311
__label__por_Latn
0.474466
0.444478
```python from ipywidgets import interactive, interact import matplotlib.pyplot as plt import numpy as np import ipywidgets as widgets import sympy as sym import seaborn as sns import plotly.graph_objects as go import plotly.express as px from plotly.offline import init_notebook_mode, iplot from numba import jit init_notebook_mode(connected=True) jit(nopython=True, parallel=True) sns.set() ``` ```python class plot(): def __init__(self, preWidgetN): self.N = preWidgetN n = np.linspace(0, self.N.value, self.N.value+1) a = (3**(n+1)+2**n)/(3**n+2.) a = (1/np.sqrt(5))*(((1+np.sqrt(5))/2)**n-((1-np.sqrt(5))/2)**n) b = (1/np.sqrt(5))*(((1+np.sqrt(5))/2)**(n+1)-((1-np.sqrt(5))/2)**(n+1)) a = a/b self.trace = go.Scatter(x=n, y=a, mode='markers', name=r'$a$', showlegend=True, ) layout = go.Layout(template='plotly_dark') self.fig = go.FigureWidget(data=[self.trace], layout = layout, #layout_yaxis_range=[-3 , 3] ) def series(self, change): n = np.linspace(0, self.N.value, self.N.value+1) a = (3**(n+1)+2**n)/(3**n+2.) a = (1/np.sqrt(5))*(((1+np.sqrt(5))/2)**n-((1-np.sqrt(5))/2)**n) a = (1/np.sqrt(5))*(((1+np.sqrt(5))/2)**n-((1-np.sqrt(5))/2)**n) b = (1/np.sqrt(5))*(((1+np.sqrt(5))/2)**(n+1)-((1-np.sqrt(5))/2)**(n+1)) a = a/b with self.fig.batch_update(): self.fig.data[0].x = n self.fig.data[0].y = a self.fig.data[0].name = r'$n\in (0, {%s})$' %(self.N.value) return def show(self): self.N.observe(self.series, names='value') display(self.N, self.fig) return ``` ```python N = widgets.IntSlider(min=0, max=50, step=1, value=0, description='n') p = plot(N) p.show() ``` IntSlider(value=0, description='n', max=50) FigureWidget({ 'data': [{'mode': 'markers', 'name': '$a$', 'showlegend': True,… ```python ```
f2bfb9583e3868f6fac7e949090dbacbba2ded2e
5,415
ipynb
Jupyter Notebook
folgen.ipynb
zolabar/Interactive-Calculus
5b4b01124eba7a981e4e9df7afcb6ab33cd7341f
[ "MIT" ]
1
2022-03-11T01:26:50.000Z
2022-03-11T01:26:50.000Z
folgen.ipynb
zolabar/Interactive-Calculus
5b4b01124eba7a981e4e9df7afcb6ab33cd7341f
[ "MIT" ]
null
null
null
folgen.ipynb
zolabar/Interactive-Calculus
5b4b01124eba7a981e4e9df7afcb6ab33cd7341f
[ "MIT" ]
null
null
null
27.912371
90
0.406279
true
676
Qwen/Qwen-72B
1. YES 2. YES
0.887205
0.782662
0.694382
__label__eng_Latn
0.316561
0.451613
# Announcements - Problem Set 6 will be on finite difference methods, due Nov 2 (to be posted on D2L). - __Outlook__: Next week, we'll move on to Markov Chain Monte Carlo methods. *Note: The presentation below largely follows part I in "Finite Difference Methods for Ordinary and Partial Differential Equations" by LeVeque (SIAM, 2007).* # The Heat Equation Numerical solutions to partial differential equations (PDEs) are a large subfield of applied math, and we won't be able to cover a full survey in this course. Today, we'll solve the heat equation $$ \partial_{t}u(\vec x,t) = \alpha \vec \nabla^2 u(\vec x,t)\,, $$ which describes how a quantity $u$ (such as heat) diffuses through a medium, with $\alpha$ a positive constant called the _diffusivity_ of the medium. The heat equation is among the most widely studied PDEs, and a prototype for parabolic PDEs. Today we will numerically solve an example in 1D, applying the finite difference method from Lectures 23 and 24 for the spatial solution, combined with a forward Euler step for the time solution. ## Example Problem Let us consider the 1D heat equation on a wire with extent $x\in[0,1]$, \begin{equation} \partial_{t}u = \partial^2_{x}u , \quad 0 < x < 1, \quad t > 0 \\ \end{equation} with boundary conditions \begin{equation} u(0,t) = 0, \quad {u}(1,t) = 0\,,\\ \end{equation} which correspond to endpoints held at constant temperature, and initial temperature distribution \begin{equation} u(x, 0) = 10\sin(\pi x)\,. \end{equation} # Discretization Lecture 23 introduced the _Finite Difference_ method and we used it to solve a boundary value problem with Dirichlet conditions as a linear system. Let's use the subscript $i$ to indicate the spatial discretization, and subscript $J$ for the time discretization, so that $U_{i,J} = u(x_i,t_J)$. Today we will incorporate the Dirichlet conditions as ghost cells, $U_{0,J}$ and $U_{N+1,J}$, corresponding to the boundary points. Then the spatial discretization (at one slice in time) becomes $$ A = \frac{1}{\Delta x^2} \begin{bmatrix} \Delta x^2 & 0 \\ 1 & -2 & 1 \\ & 1 & -2 & 1 \\ & & \ddots & \ddots & \ddots \\ & & & 1 & -2 & 1 \\ & & & & 1 & -2 & 1 \\ & & & & & 0 & \Delta x^2 \end{bmatrix} \quad \quad U_J = \begin{bmatrix} U_{0,J} \\ U_{1,J} \\ \vdots \\ U_{N,J} \\ U_{N+1,J} \end{bmatrix} \quad \quad b_J = \begin{bmatrix} u(0,t_J)=0 \\ f(x_1,t_J) \\ \vdots \\ f(x_{N},t_J) \\ u(1,t_J) =0\,. \end{bmatrix} $$ _Note that $A$ is time independent!_ The next step is to discretize in time $$ \frac{U_{i,J+1} -U_{i,J}}{\Delta t} = U^{\prime\prime}_{i,J} = \frac{U_{i-1,J}-2U_{i,J}+U_{i+1,J}}{\Delta x^2}\,, $$ <span style="color:blue"> **Solve today's Example Problem via the construction of a linear system of equations and time stepping.**</span> <span style="color:blue"> Your expression for $U_{i,J+1}$ in terms of elements of $U_J$: </span> _The code below is adapted from Lecture 23.
The missing parts need to be adapted to today's problem, but you may find looking back at Lecture 23 helpful._ ```python import numpy import matplotlib.pyplot as plt # Problem setup f_t0 = lambda x: 10.*numpy.sin(numpy.pi*x) #spatial discretization a = 0.0 b = 1.0 u_a = 0.0 u_b = 0.0 # Spatial Discretization N_x = 10 #keep boundary points x = numpy.linspace(a, b, N_x + 2) delta_x = (b - a) / (N_x + 1) #time discretization t_0 = 0 t_final = 10 N_t = 100 delta_t = (t_final-t_0)/(N_t+1) # Construct matrix A A = numpy.zeros((N_x + 2, N_x + 2)) diagonal = numpy.ones(N_x + 2) / delta_x**2 A += numpy.diag(diagonal * -2.0, 0) A += numpy.diag(diagonal[:-1], 1) A += numpy.diag(diagonal[:-1], -1) # Now, add boundary conditions for ghost cells to A #left boundary #right boundary # Construct RHS - without boundary conditions b = f_t0(x) # add boundary conditions to b #left boundary #right boundary #U[J,:] will contain solution at time J U = numpy.empty((N_t+2,N_x + 2)) #spatial solution U[0,:] = numpy.linalg.solve(A, b) #now do time iteration for J in range(0,N_t+2): #boundary conditions for i in range (1,N_x+1): ``` ```python ```
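For reference, here is a minimal, self-contained sketch of one possible completion (an illustration, not the official solution; the number of time steps and the final time are chosen arbitrarily). It fills in the forward-Euler update $U_{i,J+1} = U_{i,J} + \frac{\Delta t}{\Delta x^2}\left(U_{i-1,J} - 2U_{i,J} + U_{i+1,J}\right)$ together with the Dirichlet boundary values, and checks the result against the exact solution $u(x,t) = 10\,e^{-\pi^2 t}\sin(\pi x)$.

```python
# Illustrative completion (not the official solution): forward Euler in time,
# central differences in space, Dirichlet boundaries held at zero.
import numpy

N_x, N_t, t_final = 10, 1000, 0.5
x = numpy.linspace(0.0, 1.0, N_x + 2)
delta_x = x[1] - x[0]
delta_t = t_final / N_t            # stability requires delta_t <= delta_x**2 / 2

U = numpy.zeros((N_t + 1, N_x + 2))
U[0, :] = 10.0 * numpy.sin(numpy.pi * x)          # initial condition u(x, 0)
for J in range(N_t):
    U[J + 1, 0] = 0.0                             # boundary condition u(0, t) = 0
    U[J + 1, -1] = 0.0                            # boundary condition u(1, t) = 0
    for i in range(1, N_x + 1):
        U[J + 1, i] = U[J, i] + delta_t / delta_x**2 * (
            U[J, i - 1] - 2.0 * U[J, i] + U[J, i + 1])

# Compare against the exact solution u(x, t) = 10 exp(-pi^2 t) sin(pi x)
exact = 10.0 * numpy.exp(-numpy.pi**2 * t_final) * numpy.sin(numpy.pi * x)
print(numpy.max(numpy.abs(U[-1] - exact)))        # should be a small discretization error
```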
47e8c2eee33c2fba0922b050f8a8b2c9460886a0
6,280
ipynb
Jupyter Notebook
Lectures/Lecture 25/Lecture25_HeatEquation.ipynb
astroarshn2000/PHYS305S20
18f4ebf0a51ba62fba34672cf76bd119d1db6f1e
[ "MIT" ]
3
2020-09-10T06:45:46.000Z
2020-10-20T13:50:11.000Z
Lectures/Lecture 25/Lecture25_HeatEquation.ipynb
astroarshn2000/PHYS305S20
18f4ebf0a51ba62fba34672cf76bd119d1db6f1e
[ "MIT" ]
null
null
null
Lectures/Lecture 25/Lecture25_HeatEquation.ipynb
astroarshn2000/PHYS305S20
18f4ebf0a51ba62fba34672cf76bd119d1db6f1e
[ "MIT" ]
null
null
null
35.885714
204
0.539013
true
1,361
Qwen/Qwen-72B
1. YES 2. YES
0.877477
0.798187
0.70039
__label__eng_Latn
0.973587
0.465573
# ***Introduction to Radar Using Python and MATLAB*** ## Andy Harrison - Copyright (C) 2019 Artech House <br/> # Barker Ambiguity Function *** Referring to Section 8.7.1, a common set of binary phase codes for radar applications is Barker codes, which have several unique properties. The time sidelobe level of the normalized ambiguity function is $1/N$, where $N$ is the length of the code. The ambiguity function has a length of $2N\tau_n$, the main lobe has a width of $2\tau_n$, and there are $(N-1)/2$ sidelobes on either side of the main lobe. There are only seven known Barker codes, and these are given in Table 8.1. While these are the only known Barker codes, much research has been performed to find longer binary phase codes with low time sidelobe levels. Although the time sidelobe levels are not $1/N$ as with the Barker codes, some codes have been found to have quite small sidelobe levels. A Barker code of length $4$ is illustrated in Figure 8.20. Another approach to obtain longer codes is to combine or embed one code within another. For example, a Barker code of length $5$ may be used with a Barker code of length $3$ to give (Equation 8.71) \begin{equation} B_{MN} = [\; \; 0\;\; 0\;\; 0\;\; \pi\;\; 0\,, \; 0\;\; 0\;\; 0\;\; \pi\;\; 0\,, \; \pi\;\; \pi\;\; \pi\;\; 0\;\; \pi\;]. \end{equation} The compression ratio of the combined Barker code is $MN$. However, the sidelobe level of the combined code is not $1/MN$. When using combined Barker codes, a number of sidelobes may be reduced to zero if the matched filter is followed by a linear transversal filter. *** Begin by getting the library path ```python import lib_path ``` Set the chip width (s) and the code length (2, 3, 4, 5, 7, 11, 13) ```python chip_width = 0.1 code_length = 5 ``` Set the Barker code ```python if code_length == 2: code = [1, -1] elif code_length == 3: code = [1, 1, -1] elif code_length == 4: code = [1, 1, -1, 1] elif code_length == 5: code = [1, 1, 1, -1, 1] elif code_length == 7: code = [1, 1, 1, -1, -1, 1, -1] elif code_length == 11: code = [1, 1, 1, -1, -1, -1, 1, -1, -1, 1, -1] elif code_length == 13: code = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1] ``` Calculate the ambiguity function for the Barker code ```python from Libs.ambiguity.ambiguity_function import phase_coded_wf ambiguity, time_delay, doppler_frequency = phase_coded_wf(code, chip_width) ``` Display the zero-Doppler cut, the zero-range cut, and the 2D contour plot using the `matplotlib` routines ```python from matplotlib import pyplot as plt # Set the figure size plt.rcParams["figure.figsize"] = (15, 10) # Plot the ambiguity function plt.plot(time_delay, ambiguity[round(len(doppler_frequency) / 2)], '') # Set the time axis limits plt.xlim(-len(code) * chip_width, len(code) * chip_width) # Set the x and y axis labels plt.xlabel("Time (s)", size=12) plt.ylabel("Relative Amplitude", size=12) # Turn on the grid plt.grid(linestyle=':', linewidth=0.5) # Set the plot title and labels plt.title('Barker Code Ambiguity Function', size=14) # Set the tick label size plt.tick_params(labelsize=12) ``` Create the zero-range cut ```python # Plot the ambiguity function plt.plot(doppler_frequency, ambiguity[:, round(len(time_delay) / 2)], '') # Set the x and y axis labels plt.xlabel("Doppler (Hz)", size=12) plt.ylabel("Relative Amplitude", size=12) # Turn on the grid plt.grid(linestyle=':', linewidth=0.5) # Set the plot title and labels plt.title('Barker Code Ambiguity Function', size=14) # Set the tick label size plt.tick_params(labelsize=12) ``` Create the 
two-dimensional contour plot ```python from numpy import meshgrid # Create the grid t, f = meshgrid(time_delay, doppler_frequency) # Plot the ambiguity function plt.contour(t, f, ambiguity, 30, cmap='jet', vmin=-0.2, vmax=1.0) # Set the time axis limits plt.xlim(-len(code) * chip_width, len(code) * chip_width) # Set the x and y axis labels plt.xlabel("Time (s)", size=12) plt.ylabel("Doppler (Hz)", size=12) # Turn on the grid plt.grid(linestyle=':', linewidth=0.5) # Set the plot title and labels plt.title('Barker Code Ambiguity Function', size=14) # Set the tick label size plt.tick_params(labelsize=12) ``` ```python ```
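As a quick numerical check of the sidelobe property quoted above (added for illustration, not part of the `Libs.ambiguity` package), the aperiodic autocorrelation of a length-$N$ Barker code peaks at $N$ while every sidelobe has magnitude at most $1$, i.e. a relative level of $1/N$:

```python
from numpy import array, correlate

barker_13 = array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])
acf = correlate(barker_13, barker_13, mode='full')
# Peak of N at zero lag; all sidelobes have magnitude <= 1 (a 1/N relative level)
print(acf.max(), abs(acf[:len(barker_13) - 1]).max())   # expect 13 and 1
```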
91c6c547db46d3e4480f4aa4d36ac4e44299ee9c
300,610
ipynb
Jupyter Notebook
jupyter/Chapter08/barker_ambiguity.ipynb
mberkanbicer/software
89f8004f567129216b92c156bbed658a9c03745a
[ "Apache-2.0" ]
null
null
null
jupyter/Chapter08/barker_ambiguity.ipynb
mberkanbicer/software
89f8004f567129216b92c156bbed658a9c03745a
[ "Apache-2.0" ]
null
null
null
jupyter/Chapter08/barker_ambiguity.ipynb
mberkanbicer/software
89f8004f567129216b92c156bbed658a9c03745a
[ "Apache-2.0" ]
null
null
null
846.788732
188,628
0.949815
true
1,304
Qwen/Qwen-72B
1. YES 2. YES
0.868827
0.828939
0.720204
__label__eng_Latn
0.982213
0.511607
```python %matplotlib notebook import numpy as np import matplotlib.pyplot as plt import sympy as sym #sym.init_printing(use_unicode=False, wrap_line=True) ``` ```python s = sym.Symbol('alpha') k = sym.Symbol('self%k') asym = (1-s)**2/(1+(k-1)*(1-1-s)**2) print('a: ',asym) dasym = sym.diff(asym, s) print("a': ",dasym) d2asym = sym.diff(dasym, s) print('a": ',d2asym) def a(s,k): return (1 - s)**2/(s**2*(k - 1) + 1) def da(s,k): return -2*s*(1 - s)**2*(k - 1)/(s**2*(k - 1) + 1)**2 + (2*s - 2)/(s**2*(k - 1) + 1) def d2a(s,k): return 8*s**2*(1 - s)**2*(k - 1)**2/(s**2*(k - 1) + 1)**3 - 4*s*(k - 1)*(2*s - 2)/(s**2*(k - 1) + 1)**2 - 2*(1 - s)**2*(k - 1)/(s**2*(k - 1) + 1)**2 + 2/(s**2*(k - 1) + 1) def w(s): return 1-(1-s)**2 ``` a: (1 - alpha)**2/(alpha**2*(self.k - 1) + 1) a': -2*alpha*(1 - alpha)**2*(self.k - 1)/(alpha**2*(self.k - 1) + 1)**2 + (2*alpha - 2)/(alpha**2*(self.k - 1) + 1) a": 8*alpha**2*(1 - alpha)**2*(self.k - 1)**2/(alpha**2*(self.k - 1) + 1)**3 - 4*alpha*(2*alpha - 2)*(self.k - 1)/(alpha**2*(self.k - 1) + 1)**2 - 2*(1 - alpha)**2*(self.k - 1)/(alpha**2*(self.k - 1) + 1)**2 + 2/(alpha**2*(self.k - 1) + 1) cw: Integral(Piecewise((I*(1 - alpha)*(alpha - 1)**3/(2*((alpha - 1)**2 - 1)**(3/2)) - I*(1 - alpha)*(alpha - 1)/(2*((alpha - 1)**2 - 1)**(3/2)) + 3*I*(alpha - 1)**2/(2*sqrt((alpha - 1)**2 - 1)) - I/sqrt((alpha - 1)**2 - 1), (alpha - 1)**2 > 1), ((1 - alpha)*(alpha - 1)/(2*sqrt(1 - (alpha - 1)**2)) + sqrt(1 - (alpha - 1)**2)/2 + 1/(2*sqrt(1 - (alpha - 1)**2)), True)), (alpha, 0, 1)) ```python alpha = np.linspace(0,1,101)[1:-1] fig, ax = plt.subplots(figsize=(8,6)) for k in (0,0.25,0.5,0.75,1.): ax.plot(alpha,a(alpha,k),label=r'$s=${0}'.format(k)) ax.set_xlabel(r'$\alpha$') ax.legend(loc=0) ax.set_title(r'$a(\alpha)$') ax.plot() ``` <IPython.core.display.Javascript object> [] ```python alpha = np.linspace(0,1,101)[1:-1] fig, ax = plt.subplots(figsize=(8,6)) for k in (0,0.25,0.5,0.75,1.): ax.plot(alpha,da(alpha,k),label=r'$s=${0}'.format(k)) ax.set_xlabel(r'$\alpha$') ax.legend(loc=0) ax.set_title(r"$a'(\alpha)$") ax.plot() ``` <IPython.core.display.Javascript object> [] ```python alpha = np.linspace(0,1,101)[1:-1] fig, ax = plt.subplots(figsize=(8,6)) for k in (0,0.25,0.5,0.75,1.): ax.plot(alpha,d2a(alpha,k),label=r'$s=${0}'.format(k)) ax.set_xlabel(r'$\alpha$') ax.legend(loc=0) ax.set_title(r'$a"(\alpha)$') ax.plot() ``` <IPython.core.display.Javascript object> [] ```python alpha = np.linspace(0,1,101)[1:-1] fig, ax = plt.subplots(figsize=(8,6)) ax.plot(alpha,w(alpha),label=r'$w$') ax.set_xlabel(r'$\alpha$') #ax.legend(loc=0) ax.set_title(r"$w(\alpha)$") ax.plot() ``` <IPython.core.display.Javascript object> [] ```python ```
98c4d35335a1b14f2683a9ab6e6bf04f3713056d
894,272
ipynb
Jupyter Notebook
m_DefMech/LinSoft model.ipynb
jeanmichelscherer/mef90
48b9b7d8bdaccb846a76833853f6ea81ce6fc9b1
[ "BSD-2-Clause" ]
9
2019-12-04T01:38:56.000Z
2022-02-13T17:35:06.000Z
m_DefMech/LinSoft model.ipynb
jeanmichelscherer/mef90
48b9b7d8bdaccb846a76833853f6ea81ce6fc9b1
[ "BSD-2-Clause" ]
1
2022-02-19T21:38:52.000Z
2022-02-19T21:38:52.000Z
m_DefMech/LinSoft model.ipynb
jeanmichelscherer/mef90
48b9b7d8bdaccb846a76833853f6ea81ce6fc9b1
[ "BSD-2-Clause" ]
7
2021-01-20T01:57:25.000Z
2022-02-17T18:11:38.000Z
218.916034
231,367
0.868236
true
1,180
Qwen/Qwen-72B
1. YES 2. YES
0.909907
0.839734
0.76408
__label__yue_Hant
0.164467
0.613546
This notebook is part of https://github.com/AudioSceneDescriptionFormat/splines, see also https://splines.readthedocs.io/. [back to overview](catmull-rom.ipynb) # Barry--Goldman Algorithm The *Barry--Goldman algorithm* (named after *Phillip Barry* and *Ronald Goldman*) can be used to calculate values of [non-uniform Catmull--Rom splines](catmull-rom-non-uniform.ipynb). We have also applied this algorithm to [rotation splines](../rotation/barry-goldman.ipynb). <cite data-cite="catmull1974splines">(Catmull and Rom, 1974)</cite> describes "a class of local interpolating splines" and <cite data-cite="barry1988recursive">(Barry and Goldman, 1988)</cite> describes "a recursive evaluation algorithm for a class of Catmull–Rom splines", by which they mean a sub-class of the original class, which only contains splines generated from a combination of [Lagrange interpolation](lagrange.ipynb) and B-spline blending: > In particular, they observed that certain choices led to interpolatory curves. Although Catmull and Rom discussed a more general case, we will restrict our attention to an important class of Catmull--Rom splines obtained by combining B-spline basis functions and Lagrange interpolating polynomials. > [...] > They are piecewise polynomial, have local support, are invariant under affine transformations, and have certain differentiability and interpolatory properties. > > ---<cite data-cite="barry1988recursive">Barry and Goldman (1988)</cite>, section 1: "Introduction" The algorithm can be set up to construct curves of arbitrary degree (given enough vertices and their parameter values), but here we only take a look at the cubic case (using four vertices), which seems to be what most people mean by the term *Catmull--Rom splines*. The algorithm is a combination of two sub-algorithms: > The Catmull--Rom evaluation algorithm is constructed by combining the de Boor algorithm for evaluating B-spline curves with Neville's algorithm for evaluating Lagrange polynomials. > > ---<cite data-cite="barry1988recursive">Barry and Goldman (1988)</cite>, abstract Combining the two will lead to a multi-stage algorithm, where each stage consists of only linear interpolations (and *extra*polations). We will use the algorithm here to derive an expression for the [tangent vectors](#Tangent-Vectors), which will show that the algorithm indeed generates [non-uniform Catmull--Rom splines](catmull-rom-non-uniform.ipynb#Tangent-Vectors). ## Triangular Schemes In <cite data-cite="barry1988recursive">(Barry and Goldman, 1988)</cite>, the presented algorithms are illustrated using triangular evaluation patterns, which we will use here in a very similar form. As an example, let's look at the most basic building block: linear interpolation between two given points (in this case $\boldsymbol{x}_4$ and $\boldsymbol{x}_5$ with corresponding parameter values $t_4$ and $t_5$, respectively): \begin{equation*} \begin{array}{ccccc} && \boldsymbol{p}_{4,5} && \\ & \frac{t_5 - t}{t_5 - t_4} && \frac{t - t_4}{t_5 - t_4} & \\ \boldsymbol{x}_4 &&&& \boldsymbol{x}_5 \end{array} \end{equation*} The values at the base of the triangle are known, and the triangular scheme shows how the value at the apex can be calculated from them. In this example, to obtain the *linear* polynomial $\boldsymbol{p}_{4,5}$ one has to add $\boldsymbol{x}_4$, weighted by the factor shown next to it ($\frac{t_5 - t}{t_5 - t_4}$), and $\boldsymbol{x}_5$, weighted by the factor next to it ($\frac{t - t_4}{t_5 - t_4}$). 
The parameter $t$ can be chosen arbitrarily, but in this example we are mostly interested in the range $t_4 \le t \le t_5$. If the parameter value is outside this range, the process is more appropriately called *extra*polation instead of *inter*polation.

Since we will need linear interpolation (and extrapolation) quite a few times, let's define a helper function:

```python
def lerp(xs, ts, t):
    """Linear interpolation.

    Returns the interpolated value at time *t*,
    given the two values *xs* at times *ts*.

    """
    x_begin, x_end = xs
    t_begin, t_end = ts
    return (x_begin * (t_end - t) + x_end * (t - t_begin)) / (t_end - t_begin)
```

## Neville's Algorithm

We have already seen this algorithm in our [notebook about Lagrange interpolation](lagrange.ipynb). In the *quadratic* case, it looks like this:

\begin{equation*}
\begin{array}{ccccccccc}
&&&& \boldsymbol{p}_{3,4,5} &&&& \\
&&& \frac{t_5 - t}{t_5 - t_3} && \frac{t - t_3}{t_5 - t_3} &&& \\
&& \boldsymbol{p}_{3,4} &&&& \boldsymbol{p}_{4,5} && \\
& \frac{t_4 - t}{t_4 - t_3} && \frac{t - t_3}{t_4 - t_3} & & \frac{t_5 - t}{t_5 - t_4} && \frac{t - t_4}{t_5 - t_4} & \\
\boldsymbol{x}_{3} &&&& \boldsymbol{x}_{4} &&&& \boldsymbol{x}_{5}
\end{array}
\end{equation*}

The *cubic* case is shown in figure 2 of <cite data-cite="barry1988recursive">(Barry and Goldman, 1988)</cite>.

```python
import matplotlib.pyplot as plt
import numpy as np
```

Let's try to plot this for three points:

```python
points = np.array([
    (0, 0),
    (0.5, 2),
    (3, 0),
])
```

In the following example plots we show the *uniform* case (with $t_3=3$, $t_4=4$ and $t_5=5$), but don't worry, the algorithm works just as well for arbitrary non-uniform time values.

```python
plot_times = np.linspace(4, 5, 30)
```

```python
plt.scatter(*np.array([
    lerp(
        [lerp(points[:2], [3, 4], t), lerp(points[1:], [4, 5], t)],
        [3, 5], t)
    for t in plot_times]).T)
plt.plot(*points.T, 'x:g')
plt.axis('equal');
```

Note that the quadratic curve is defined by three points but we are only evaluating it between two of them (for $4 \le t \le 5$).

## De Boor's Algorithm

This algorithm (named after [Carl de Boor](https://en.wikipedia.org/wiki/Carl_R._de_Boor), see <cite data-cite="de_boor1972calculating">(de Boor, 1972)</cite>) can be used to calculate B-spline basis functions. The quadratic case looks like this:

\begin{equation*}
\begin{array}{ccccccccc}
&&&& \boldsymbol{p}_{3,4,5} &&&& \\
&&& \frac{t_5 - t}{t_5 - t_4} && \frac{t - t_4}{t_5 - t_4} &&& \\
&& \boldsymbol{p}_{3,4} &&&& \boldsymbol{p}_{4,5} && \\
& \frac{t_5 - t}{t_5 - t_3} && \frac{t - t_3}{t_5 - t_3} & & \frac{t_6 - t}{t_6 - t_4} && \frac{t - t_4}{t_6 - t_4} & \\
\boldsymbol{x}_{3} &&&& \boldsymbol{x}_{4} &&&& \boldsymbol{x}_{5}
\end{array}
\end{equation*}

The *cubic* case is shown in figure 1 of <cite data-cite="barry1988recursive">(Barry and Goldman, 1988)</cite>.

```python
plt.scatter(*np.array([
    lerp(
        [lerp(points[:2], [3, 5], t), lerp(points[1:], [4, 6], t)],
        [4, 5], t)
    for t in plot_times]).T)
plt.plot(*points.T, 'x:g')
plt.axis('equal');
```

## Combining Both Algorithms

Figure 5 of <cite data-cite="catmull1974splines">(Catmull and Rom, 1974)</cite> shows an example where linear interpolation is followed by quadratic B-spline blending to create a cubic curve.

We can re-create this example with the building blocks from above:

* At the base of the triangle, we put four known vertices.
* Consecutive pairs of these vertices form three linear interpolations (and *extra*polations), resulting in three interpolated (and *extra*polated) values.
* On top of these three values, we arrange a quadratic instance of de Boor's algorithm (as shown above).

This culminates in the final value of the spline (given an appropriate parameter value $t$) at the apex of the triangle, which looks like this:

\begin{equation*}
\def\negspace{\!\!\!\!\!\!}
\begin{array}{ccccccccccccc}
&&&&&& \boldsymbol{p}_{3,4,5,6} &&&&&& \\
&&&&& \negspace \frac{t_5 - t}{t_5 - t_4} \negspace && \negspace \frac{t - t_4}{t_5 - t_4} \negspace &&&&& \\
&&&& \boldsymbol{p}_{3,4,5} &&&& \boldsymbol{p}_{4,5,6} &&&& \\
&& & \negspace \frac{t_5 - t}{t_5 - t_3} \negspace && \negspace \frac{t - t_3}{t_5 - t_3} \negspace & & \negspace \frac{t_6 - t}{t_6 - t_4} \negspace && \negspace \frac{t - t_4}{t_6 - t_4} \negspace & && \\
&& \boldsymbol{p}_{3,4} &&&& \boldsymbol{p}_{4,5} &&&& \boldsymbol{p}_{5,6} && \\
& \negspace \frac{t_4 - t}{t_4 - t_3} \negspace && \negspace \frac{t - t_3}{t_4 - t_3} \negspace & & \negspace \frac{t_5 - t}{t_5 - t_4} \negspace && \negspace \frac{t - t_4}{t_5 - t_4} \negspace & & \negspace \frac{t_6 - t}{t_6 - t_5} \negspace && \negspace \frac{t - t_5}{t_6 - t_5} \negspace & \\
\boldsymbol{x}_3 &&&& \boldsymbol{x}_4 &&&& \boldsymbol{x}_5 &&&& \boldsymbol{x}_6
\end{array}
\end{equation*}

Here we are considering the fifth spline segment $\boldsymbol{p}_{3,4,5,6}(t)$ (represented at the apex of the triangle) from $\boldsymbol{x}_4$ to $\boldsymbol{x}_5$ (to be found at the base of the triangle), which corresponds to the parameter range $t_4 \le t \le t_5$. To calculate the values in this segment, we also need to know the preceding control point $\boldsymbol{x}_3$ (at the bottom left) and the following control point $\boldsymbol{x}_6$ (at the bottom right). Not only their positions are relevant; we also need the corresponding parameter values $t_3$ and $t_6$, respectively.

This same triangular scheme is also shown in figure 3 of <cite data-cite="yuksel2011parameterization">(Yuksel et al., 2011)</cite>, except that here we shifted the indices by $+3$. (A small numerical sketch of this scheme is included at the end of this notebook.)

Another way to construct a cubic curve with this algorithm would be to flip the degrees of interpolation and blending, in other words:

* Instead of three linear interpolations (and extrapolations), apply two overlapping quadratic Lagrange interpolations using Neville's algorithm (as shown above) to $\boldsymbol{x}_3$, $\boldsymbol{x}_4$, $\boldsymbol{x}_5$ and $\boldsymbol{x}_4$, $\boldsymbol{x}_5$, $\boldsymbol{x}_6$, respectively. Note that the interpolation of $\boldsymbol{x}_4$ and $\boldsymbol{x}_5$ appears in both triangles but has to be calculated only once (see also figures 3 and 4 in <cite data-cite="barry1988recursive">(Barry and Goldman, 1988)</cite>).
* This will occupy the lower two stages of the triangle, yielding two interpolated values.
* Those two values are then linearly blended in the final stage.

Readers of the [notebook about uniform Catmull--Rom splines](catmull-rom-uniform.ipynb) may already suspect it, but for others it might be a revelation: both ways lead to exactly the same triangular scheme and therefore they are equivalent!

The same scheme, but only for the *uniform* case, is also shown in figure 7 of <cite data-cite="barry1988recursive">(Barry and Goldman, 1988)</cite>, which casually mentions the equivalent cases (with $m$ being the degree of Lagrange interpolation and $n$ being the degree of the B-spline basis functions):

> Note too from Figure 7 that the case $n=1$, $m=2$ [...] is identical to the case $n=2$, $m=1$ [...]
> > ---<cite data-cite="barry1988recursive">Barry and Goldman (1988)</cite>, section 3: "Examples" <div class="alert alert-warning"> Not an Overhauser Spline Equally casually, they mention: > Finally, the particular case here is also an Overhauser spline > <cite data-cite="overhauser1968parabolic">(Overhauser, 1968)</cite>. > > ---<cite data-cite="barry1988recursive">Barry and Goldman (1988)</cite>, section 3: "Examples" This is not true. Overhauser splines -- as described in <cite data-cite="overhauser1968parabolic">(Overhauser, 1968)</cite> -- don't provide a choice of parameter values. The parameter values are determined by the Euclidean distances between control points, similar, but not quite identical to [chordal parameterization](catmull-rom-properties.ipynb#Chordal-Parameterization). Calculating a value of a Catmull--Rom spline doesn't involve calculating any distances. </div> For completeness' sake, there are two more combinations that lead to cubic splines, but they have their limitations: * Cubic Lagrange interpolation, followed by no blending at all, which leads to a cubic spline that's not $C^1$ continuous (only $C^0$), as shown in figure 8 of <cite data-cite="barry1988recursive">(Barry and Goldman, 1988)</cite>. * No interpolation at all, followed by cubic B-spline blending, which leads to an approximating spline (instead of an interpolating spline), as shown in figure 5 of <cite data-cite="barry1988recursive">(Barry and Goldman, 1988)</cite>. <div class="alert alert-info"> Note Here we are using the time instances of the Lagrange interpolation also as B-spline knots. Equation (9) of <cite data-cite="barry1988recursive">(Barry and Goldman, 1988)</cite> shows a more generic formulation of the algorithm with separate parameters $s_i$ and $t_i$. </div> ## Step by Step The triangular figure above looks more complicated than it really is. It's just a bunch of linear *inter*polations and *extra*polations. Let's go through the figure above, piece by piece. ```python import sympy as sp ``` ```python t = sp.symbols('t') ``` ```python x3, x4, x5, x6 = sp.symbols('xbm3:7') ``` ```python t3, t4, t5, t6 = sp.symbols('t3:7') ``` We use some custom SymPy-based tools from [utility.py](utility.py): ```python from utility import NamedExpression, NamedMatrix ``` ### First Stage In the center of the bottom row, there is a straightforward linear interpolation from $\boldsymbol{x}_4$ to $\boldsymbol{x}_5$ within the interval from $t_4$ to $t_5$. ```python p45 = NamedExpression('pbm_4,5', lerp([x4, x5], [t4, t5], t)) p45 ``` Obviously, this starts at: ```python p45.evaluated_at(t, t4) ``` ... and ends at: ```python p45.evaluated_at(t, t5) ``` The bottom left of the triangle looks very similar, with a linear interpolation from $\boldsymbol{x}_3$ to $\boldsymbol{x}_4$ within the interval from $t_3$ to $t_4$. ```python p34 = NamedExpression('pbm_3,4', lerp([x3, x4], [t3, t4], t)) p34 ``` However, that's not the parameter range we are interested in! We are interested in the range from $t_4$ to $t_5$. Therefore, this is not actually an *inter*polation between $\boldsymbol{x}_3$ and $\boldsymbol{x}_4$, but rather a linear *extra*polation starting at $\boldsymbol{x}_4$ ... ```python p34.evaluated_at(t, t4) ``` ... 
and ending at some extrapolated point beyond $\boldsymbol{x}_4$: ```python p34.evaluated_at(t, t5) ``` Similarly, at the bottom right of the triangle there isn't a linear *inter*polation from $\boldsymbol{x}_5$ to $\boldsymbol{x}_6$, but rather a linear *extra*polation that just reaches $\boldsymbol{x}_5$ at the end of the parameter interval (i.e. at $t=t_5$). ```python p56 = NamedExpression('pbm_5,6', lerp([x5, x6], [t5, t6], t)) p56 ``` ```python p56.evaluated_at(t, t4) ``` ```python p56.evaluated_at(t, t5) ``` ### Second Stage The second stage of the algorithm involves linear interpolations of the results of the previous stage. ```python p345 = NamedExpression('pbm_3,4,5', lerp([p34.name, p45.name], [t3, t5], t)) p345 ``` ```python p456 = NamedExpression('pbm_4,5,6', lerp([p45.name, p56.name], [t4, t6], t)) p456 ``` Those interpolations are defined over a parameter range from $t_3$ to $t_5$ and from $t_4$ to $t_6$, respectively. In each case, we are only interested in a sub-range, namely from $t_4$ to $t_5$. These are the start and end points at $t_4$ and $t_5$: ```python p345.evaluated_at(t, t4, symbols=[p34, p45]) ``` ```python p345.evaluated_at(t, t5, symbols=[p34, p45]) ``` ```python p456.evaluated_at(t, t4, symbols=[p45, p56]) ``` ```python p456.evaluated_at(t, t5, symbols=[p45, p56]) ``` ### Third Stage The last step is quite simple: ```python p3456 = NamedExpression( 'pbm_3,4,5,6', lerp([p345.name, p456.name], [t4, t5], t)) p3456 ``` This time, the interpolation interval is exactly the one we care about. To get the final result, we just have to combine all the above expressions: ```python p3456 = p3456.subs_symbols(p345, p456, p34, p45, p56).simplify() p3456 ``` We can make this marginally shorter if we rewrite the segment durations as $\Delta_i = t_{i+1} - t_i$: ```python delta3, delta4, delta5 = sp.symbols('Delta3:6') deltas = { t4 - t3: delta3, t5 - t4: delta4, t6 - t5: delta5, t5 - t3: delta3 + delta4, t6 - t4: delta4 + delta5, t6 - t3: delta3 + delta4 + delta5, # A few special cases that SymPy has a hard time resolving: t4 + t4 - t3: t4 + delta3, t6 + t6 - t3: t6 + delta3 + delta4 + delta5, } ``` ```python p3456.subs(deltas) ``` Apart from checking if it's really cubic ... ```python sp.degree(p3456.expr, t) ``` ... and if it's really interpolating ... ```python p3456.evaluated_at(t, t4).simplify() ``` ```python p3456.evaluated_at(t, t5).simplify() ``` ... the only thing left to do is to check its ... ## Tangent Vectors To get the tangent vectors at the control points, we just have to take the first derivative ... ```python pd3456 = p3456.diff(t) ``` ... and evaluate it at $t_4$ and $t_5$: ```python pd3456.evaluated_at(t, t4).simplify().simplify() ``` ```python pd3456.evaluated_at(t, t5).simplify() ``` If all went well, this should be identical to the result in [the notebook about non-uniform Catmull--Rom splines](catmull-rom-non-uniform.ipynb#Tangent-Vectors). ## Animation The linear interpolations (and *extra*polations) of this algorithm can be shown graphically. By means of the file [barry_goldman.py](barry_goldman.py), we can generate an animation of the algorithm: ```python from barry_goldman import animation ``` ```python from IPython.display import HTML ``` ```python vertices = [ (1, 0), (0.5, 1), (6, 2), (5, 0), ] ``` ```python times = [ 0, 1, 6, 8, ] ``` ```python ani = animation(vertices, times) ``` ```python HTML(ani.to_jshtml(default_mode='reflect')) ``` If this doesn't look very intuitive to you, you are not alone. 
For a different (and probably more straightforward) point of view, have a look at the [notebook about non-uniform Catmull--Rom splines](catmull-rom-non-uniform.ipynb#Animation).
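## Numerical Sketch

To tie the triangular scheme from the section "Combining Both Algorithms" together numerically, here is a small sketch that evaluates one cubic segment with plain floating-point arithmetic, re-using the `lerp()` helper, the `vertices` and the `times` defined above. The function name `barry_goldman_cubic` is made up for this illustration only.

```python
def barry_goldman_cubic(xs, ts, t):
    """Sketch: evaluate one cubic Barry--Goldman segment at parameter *t*.

    *xs*: the four relevant vertices (x3, x4, x5, x6 in the notation above)
    *ts*: the four corresponding parameter values (t3, t4, t5, t6)

    The segment goes from xs[1] to xs[2] for ts[1] <= t <= ts[2].

    """
    x3, x4, x5, x6 = xs
    t3, t4, t5, t6 = ts
    # first stage: linear inter-/extrapolations between neighboring vertices
    p34 = lerp([x3, x4], [t3, t4], t)
    p45 = lerp([x4, x5], [t4, t5], t)
    p56 = lerp([x5, x6], [t5, t6], t)
    # second stage
    p345 = lerp([p34, p45], [t3, t5], t)
    p456 = lerp([p45, p56], [t4, t6], t)
    # third stage: the apex of the triangle
    return lerp([p345, p456], [t4, t5], t)
```

At the start of the segment (i.e. at $t = t_4$) this should reproduce the second of the four vertices:

```python
barry_goldman_cubic(np.asarray(vertices, dtype=float), times, times[1])
```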
b15ff3b2e2b9a0d3e03ef74176c4494e0cfb1f7b
30,699
ipynb
Jupyter Notebook
doc/euclidean/catmull-rom-barry-goldman.ipynb
AudioSceneDescriptionFormat/splines
0faf87ffddf3be01087959f80d9c43a4da2ae862
[ "MIT" ]
6
2021-06-21T11:41:02.000Z
2022-02-15T19:18:47.000Z
doc/euclidean/catmull-rom-barry-goldman.ipynb
AudioSceneDescriptionFormat/splines
0faf87ffddf3be01087959f80d9c43a4da2ae862
[ "MIT" ]
3
2018-07-09T17:47:29.000Z
2022-01-28T19:55:55.000Z
doc/euclidean/catmull-rom-barry-goldman.ipynb
AudioSceneDescriptionFormat/splines
0faf87ffddf3be01087959f80d9c43a4da2ae862
[ "MIT" ]
3
2020-05-15T18:34:42.000Z
2021-12-28T06:29:54.000Z
26.694783
178
0.545197
true
5,532
Qwen/Qwen-72B
1. YES 2. YES
0.835484
0.817574
0.68307
__label__eng_Latn
0.972033
0.425332
```python %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt ``` ```python import numpy as np import numpy.linalg as la ``` # Dimension Reduction ```python np.random.seed(123) np.set_printoptions(3) ``` ### PCA from scratch Principal Components Analysis (PCA) basically means to find and rank all the eigenvalues and eigenvectors of a covariance matrix. This is useful because high-dimensional data (with $p$ features) may have nearly all their variation in a small number of dimensions $k$, i.e. in the subspace spanned by the eigenvectors of the covariance matrix that have the $k$ largest eigenvalues. If we project the original data into this subspace, we can have a dimension reduction (from $p$ to $k$) with hopefully little loss of information. For zero-centered vectors, \begin{align} \text{Cov}(X, Y) &= \frac{\sum_{i=1}^n(X_i - \bar{X})(Y_i - \bar{Y})}{n-1} \\ &= \frac{\sum_{i=1}^nX_iY_i}{n-1} \\ &= \frac{XY^T}{n-1} \end{align} and so the covariance matrix for a data set X that has zero mean in each feature vector is just $XX^T/(n-1)$. In other words, we can also get the eigendecomposition of the covariance matrix from the positive semi-definite matrix $XX^T$. We will take advantage of this when we cover the SVD later in the course. Note: Here we are using a matrix of **row** vectors ```python n = 100 x1, x2, x3 = np.random.normal(0, 10, (3, n)) x4 = x1 + np.random.normal(size=x1.shape) x5 = (x1 + x2)/2 + np.random.normal(size=x1.shape) x6 = (x1 + x2 + x3)/3 + np.random.normal(size=x1.shape) ``` #### For PCA calculations, each column is an observation ```python xs = np.c_[x1, x2, x3, x4, x5, x6].T ``` ```python xs[:, :10] ``` array([[-10.856, 9.973, 2.83 , -15.063, -5.786, 16.514, -24.267, -4.289, 12.659, -8.667], [ 6.421, -19.779, 7.123, 25.983, -0.246, 0.341, 1.795, -18.62 , 4.261, -16.054], [ 7.033, -5.981, 22.007, 6.883, -0.063, -2.067, -0.865, -9.153, -0.952, 2.787], [-10.091, 9.144, 2.171, -14.452, -5.93 , 17.831, -24.971, -3.539, 13.002, -8.794], [ -0.684, -5.433, 4.485, 4.151, -3.025, 9.405, -12.987, -12.12 , 8.496, -11.511], [ 1.618, -5.193, 10.388, 6.864, -0.771, 6.267, -8.769, -11.222, 3.621, -7.55 ]]) #### Center each observation ```python xc = xs - np.mean(xs, 1)[:, np.newaxis] ``` ```python xc[:, :10] ``` array([[-1.113e+01, 9.702e+00, 2.559e+00, -1.533e+01, -6.057e+00, 1.624e+01, -2.454e+01, -4.560e+00, 1.239e+01, -8.938e+00], [ 6.616e+00, -1.958e+01, 7.318e+00, 2.618e+01, -5.090e-02, 5.368e-01, 1.991e+00, -1.842e+01, 4.457e+00, -1.586e+01], [ 7.984e+00, -5.030e+00, 2.296e+01, 7.834e+00, 8.882e-01, -1.115e+00, 8.609e-02, -8.202e+00, -7.123e-04, 3.738e+00], [-1.023e+01, 9.007e+00, 2.033e+00, -1.459e+01, -6.068e+00, 1.769e+01, -2.511e+01, -3.676e+00, 1.286e+01, -8.931e+00], [-7.496e-01, -5.498e+00, 4.419e+00, 4.085e+00, -3.091e+00, 9.339e+00, -1.305e+01, -1.219e+01, 8.431e+00, -1.158e+01], [ 1.801e+00, -5.009e+00, 1.057e+01, 7.048e+00, -5.873e-01, 6.451e+00, -8.585e+00, -1.104e+01, 3.805e+00, -7.366e+00]]) #### Covariance Remember the formula for covariance $$ \text{Cov}(X, Y) = \frac{\sum_{i=1}^n(X_i - \bar{X})(Y_i - \bar{Y})}{n-1} $$ where $\text{Cov}(X, X)$ is the sample variance of $X$. 
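Before computing the covariance matrix explicitly below, here is a quick sketch of the SVD connection mentioned above: the squared singular values of the centered data, divided by $n-1$, are exactly the eigenvalues of $XX^T/(n-1)$. The names `U`, `s`, `Vt` are local to this check and not used elsewhere.

```python
# sketch: the squared singular values of the centered data (divided by n-1)
# equal the eigenvalues of the covariance matrix XX^T/(n-1)
U, s, Vt = la.svd(xc, full_matrices=False)
np.allclose(np.sort(s**2 / (n - 1)),
            np.sort(la.eigvalsh((xc @ xc.T) / (n - 1))))
```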
```python
cov = (xc @ xc.T)/(n-1)
```

```python
cov
```

array([[128.578, -2.133, -6.661, 127.395, 63.27 , 42.369], [ -2.133, 95.051, 2.269, -1.349, 44.885, 31.637], [ -6.661, 2.269, 94.934, -5.048, -1.141, 30.275], [127.395, -1.349, -5.048, 126.993, 63.196, 42.723], [ 63.27 , 44.885, -1.141, 63.196, 54.408, 36.838], [ 42.369, 31.637, 30.275, 42.723, 36.838, 36.563]])

#### Check

```python
np.cov(xs)
```

array([[128.578, -2.133, -6.661, 127.395, 63.27 , 42.369], [ -2.133, 95.051, 2.269, -1.349, 44.885, 31.637], [ -6.661, 2.269, 94.934, -5.048, -1.141, 30.275], [127.395, -1.349, -5.048, 126.993, 63.196, 42.723], [ 63.27 , 44.885, -1.141, 63.196, 54.408, 36.838], [ 42.369, 31.637, 30.275, 42.723, 36.838, 36.563]])

#### Eigendecomposition

```python
e, v = la.eigh(cov)
```

```python
idx = np.argsort(e)[::-1]
```

```python
e = e[idx]
v = v[:, idx]
```

#### Explain the magnitude of the eigenvalues

Note that $x4, x5, x6$ are linear combinations of $x1, x2, x3$ with some added noise, and hence the last 3 eigenvalues are small.

```python
plt.stem(e)
pass
```

#### The eigenvalues and eigenvectors give a factorization of the covariance matrix

```python
v @ np.diag(e) @ v.T
```

array([[128.578, -2.133, -6.661, 127.395, 63.27 , 42.369], [ -2.133, 95.051, 2.269, -1.349, 44.885, 31.637], [ -6.661, 2.269, 94.934, -5.048, -1.141, 30.275], [127.395, -1.349, -5.048, 126.993, 63.196, 42.723], [ 63.27 , 44.885, -1.141, 63.196, 54.408, 36.838], [ 42.369, 31.637, 30.275, 42.723, 36.838, 36.563]])

### Geometry of PCA

### Algebra of PCA

Note that $Q^T X$, where $Q$ is the matrix whose columns are the eigenvectors of the covariance matrix, results in a new data set that is uncorrelated.

```python
m = np.zeros(2)
s = np.array([[1, 0.8], [0.8, 1]])
x = np.random.multivariate_normal(m, s, n).T
```

```python
x.shape
```

(2, 100)

#### Calculate covariance matrix from centered observations

```python
xc = (x - x.mean(1)[:, np.newaxis])
```

```python
cov = (xc @ xc.T)/(n-1)
```

```python
cov
```

array([[1.014, 0.881], [0.881, 1.163]])

#### Find eigendecomposition

```python
e, v = la.eigh(cov)
idx = np.argsort(e)[::-1]
e = e[idx]
v = v[:, idx]
```

#### In original coordinates

```python
plt.scatter(x[0], x[1], alpha=0.5)
for e_, v_ in zip(e, v.T):
    plt.plot([0, e_*v_[0]], [0, e_*v_[1]], 'r-', lw=2)
plt.xlabel('x', fontsize=14)
plt.ylabel('y', fontsize=14)
plt.axis('square')
plt.axis([-3,3,-3,3])
pass
```

#### After change of basis

```python
yc = v.T @ xc
```

```python
plt.scatter(yc[0,:], yc[1,:], alpha=0.5)
for e_, v_ in zip(e, np.eye(2)):
    plt.plot([0, e_*v_[0]], [0, e_*v_[1]], 'r-', lw=2)
plt.xlabel('PC1', fontsize=14)
plt.ylabel('PC2', fontsize=14)
plt.axis('square')
plt.axis([-3,3,-3,3])
pass
```

#### Eigenvectors from PCA

(Here `pca` refers to the scikit-learn `PCA` object that is fitted in the "Check" section below, so when running the notebook from top to bottom that cell has to be run first.)

```python
pca.components_
```

array([[ 0.677, 0.736], [-0.736, 0.677]])

#### Eigenvalues from PCA

```python
pca.explained_variance_
```

array([1.404, 0.452])

#### Explained variance

This is just a consequence of the invariance of the trace under change of basis. Since the diagonal entries of the covariance matrix are the variances of the features, the sum of the eigenvalues must equal the sum of the original variances. In other words, the cumulative proportion of the top $n$ eigenvalues is the "explained variance" of the first $n$ principal components.
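As a quick numerical check of the trace argument, here is a small sketch using the `e` and `cov` just computed for the two-dimensional example; it should return `True`:

```python
# the sum of the eigenvalues equals the total variance, i.e. the trace of cov
np.allclose(e.sum(), np.trace(cov))
```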
```python
e/e.sum()
```

array([0.906, 0.094])

#### Check

```python
from sklearn.decomposition import PCA
```

```python
pca = PCA()
```

#### Note that scikit-learn's PCA expects data as (n_samples, n_features), i.e. observations in rows, so we pass the transposed data

```python
z = pca.fit_transform(x.T)
```

```python
pca.explained_variance_ratio_
```

array([0.906, 0.094])

#### The principal components are identical to our home-brew version, up to a flip in direction of eigenvectors

```python
plt.scatter(z[:, 0], z[:, 1], alpha=0.5)
for e_, v_ in zip(e, np.eye(2)):
    plt.plot([0, e_*v_[0]], [0, e_*v_[1]], 'r-', lw=2)
plt.xlabel('PC1', fontsize=14)
plt.ylabel('PC2', fontsize=14)
plt.axis('square')
plt.axis([-3,3,-3,3])
pass
```

```python
plt.subplot(121)
plt.scatter(-z[:, 0], -z[:, 1], alpha=0.5)
for e_, v_ in zip(e, np.eye(2)):
    plt.plot([0, e_*v_[0]], [0, e_*v_[1]], 'r-', lw=2)
plt.xlabel('PC1', fontsize=14)
plt.ylabel('PC2', fontsize=14)
plt.axis('square')
plt.axis([-3,3,-3,3])
plt.title('Scikit-learn PCA (flipped)')
plt.subplot(122)
plt.scatter(yc[0,:], yc[1,:], alpha=0.5)
for e_, v_ in zip(e, np.eye(2)):
    plt.plot([0, e_*v_[0]], [0, e_*v_[1]], 'r-', lw=2)
plt.xlabel('PC1', fontsize=14)
plt.ylabel('PC2', fontsize=14)
plt.axis('square')
plt.axis([-3,3,-3,3])
plt.title('Homebrew PCA')
plt.tight_layout()
pass
```
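The introduction promised an actual dimension reduction: projecting the data onto the subspace spanned by the top $k$ eigenvectors. The notebook above stops at the change of basis, so here is a small sketch of that last step for the six-feature data set from the beginning. The variable names `cov6`, `W`, `z6`, etc. are made up for this illustration, and the eigendecomposition is recomputed so that the two-dimensional variables above are not clobbered.

```python
# recompute the eigendecomposition for the original 6-feature data set
xc6 = xs - xs.mean(1)[:, np.newaxis]   # center each feature
cov6 = (xc6 @ xc6.T) / (n - 1)
e6, v6 = la.eigh(cov6)
idx6 = np.argsort(e6)[::-1]
e6, v6 = e6[idx6], v6[:, idx6]

k = 3                                  # keep the top 3 components
W = v6[:, :k]                          # 6 x 3 matrix of top eigenvectors
z6 = W.T @ xc6                         # reduced data: 3 x n
x_hat = W @ z6                         # reconstruction in the original 6-D space

print("variance retained:", e6[:k].sum() / e6.sum())
print("mean squared reconstruction error:", np.mean((xc6 - x_hat)**2))
```

Since $x4, x5, x6$ are noisy linear combinations of $x1, x2, x3$, the three retained components should capture almost all of the variance.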
a19077c2b6e81923e8e985d1122b00b6a0eb93bb
82,591
ipynb
Jupyter Notebook
notebooks/B03A_PCA.ipynb
jenniesun/bios-823-2021
276654785a03443d15851634c5e88952404cf684
[ "MIT" ]
13
2020-08-17T20:59:59.000Z
2021-09-27T16:30:59.000Z
notebooks/B03A_PCA.ipynb
jenniesun/bios-823-2021
276654785a03443d15851634c5e88952404cf684
[ "MIT" ]
null
null
null
notebooks/B03A_PCA.ipynb
jenniesun/bios-823-2021
276654785a03443d15851634c5e88952404cf684
[ "MIT" ]
11
2020-08-17T21:35:22.000Z
2021-09-19T16:05:45.000Z
105.75032
20,992
0.854524
true
3,416
Qwen/Qwen-72B
1. YES 2. YES
0.868827
0.705785
0.613205
__label__eng_Latn
0.526887
0.263011