# Introduction to zfit
In this notebook, we will walk through the main components of zfit and their features. The extensive model-building part, in particular, will be discussed in a separate chapter.
zfit consists of 5 mostly independent parts. Other libraries can rely on these parts to do plotting or statistical inference, as hepstats does. Therefore we will discuss two libraries in this tutorial: zfit, to build models, data and a loss, minimize it and get a fit result; and hepstats, to use the loss we built here and do inference.
## Data
This component in general plays a minor role in zfit: it is mostly to provide a unified interface for data.
Preprocessing is therefore not part of zfit and should be done beforehand. Python offers many great possibilities to do so (e.g. Pandas).
zfit `Data` can load data from various sources, most notably from Numpy, Pandas DataFrames, TensorFlow Tensors and ROOT (using uproot). For convenience, it is also possible to convert it directly to a Pandas DataFrame with `to_pandas`. The constructors are named `from_numpy`, `from_root` etc.
```python
import zfit
from zfit import z
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
```
A `Data` needs not only the data itself but also the observables: the human readable string identifiers of the axes (corresponding to "columns" of a Pandas DataFrame). It is convenient to define the `Space` not only with the observable but also with a limit: this can directly be re-used as the normalization range in the PDF.
First, let's define our observables
```python
obs = zfit.Space('obs1', (-5, 10))
```
This `Space` has limits. Besides handling the observables, we can also play with the limits: multiple `Spaces` can be added to provide disconnected ranges. More importantly, `Space` offers functionality (a short sketch of these methods follows below):
- limit1d: return the lower and upper limit in the 1-dimensional case (raises an error otherwise)
- rect_limits: return the n-dimensional limits
- area(): calculate the area (e.g. the distance between upper and lower)
- inside(): return a boolean Tensor corresponding to whether the values are _inside_ the `Space`
- filter(): filter the input values and only return the ones inside
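As a quick, hedged illustration of these methods (return types and exact signatures may differ slightly between zfit versions; the values in the comments assume the `obs` Space defined above):
```python
# Small sketch of the Space functionality listed above; values assume obs = zfit.Space('obs1', (-5, 10)).
lower, upper = obs.limit1d              # lower/upper limit in the 1-dimensional case
print(obs.rect_limits)                  # the n-dimensional rectangular limits
print(obs.area())                       # 15, the distance between upper and lower
print(obs.inside([[-7.], [3.]]))        # boolean Tensor: [False, True]
print(obs.filter([[-7.], [3.]]))        # keeps only the values inside: [[3.]]
```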
```python
size_normal = 10000
data_normal_np = np.random.normal(size=size_normal, scale=2)
data_normal = zfit.Data.from_numpy(obs=obs, array=data_normal_np)
```
The main functionality is:
- nevents: attribute that returns the number of events in the object
- data_range: a `Space` that defines the limits of the data; values outside will be cut
- n_obs: the number of dimensions in the dataset
- with_obs: returns a subset of the dataset with only the given obs
- weights: event-based weights
Furthermore, `value` returns a Tensor with shape `(nevents, n_obs)`.
To retrieve values, in general `z.unstack_x(data)` should be used; this returns a single Tensor with shape (nevents) or a list of tensors if `n_obs` is larger than 1. A short sketch of the less-used pieces follows after the next cell.
```python
print(f"We have {data_normal.nevents} events in our dataset with the minimum of {np.min(data_normal.unstack_x())}") # remember! The obs cut out some of the data
```
We have 9950 events in our dataset with the minimum of -4.979805501079585
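The remaining pieces from the list above can be sketched as follows (illustrative only; in particular, the `weights` argument of `from_numpy` is an assumption here and its exact signature may differ between zfit versions):
```python
print(data_normal.value().shape)            # (nevents, n_obs)
data_subset = data_normal.with_obs('obs1')  # subset with only the given obs
# event-based weights can be attached when creating the Data (assumed signature)
weighted = zfit.Data.from_numpy(obs=obs, array=data_normal_np,
                                weights=np.ones_like(data_normal_np))
print(weighted.weights)
```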
```python
data_normal.n_obs
```
1
## Model
Building models is by far the largest part of zfit. We will therefore cover an essential part, the possibility to build custom models, in an extra chapter. Let's start out with the idea that you define your parameters and your observable space; the latter is the expected input data.
There are two types of models in zfit:
- functions, which are rather simple and "underdeveloped"; their usage is often not required.
- PDFs, which are functions that are normalized (over a specified range); this is the main model type and the one we are going to use throughout the tutorials.
A PDF is defined by
\begin{align}
\mathrm{PDF}_{f(x)}(x; \theta) = \frac{f(x; \theta)}{\int_{a}^{b} f(x; \theta)}
\end{align}
where a and b define the normalization range (`norm_range`), over which (by inserting into the above definition) the integral of the PDF is unity.
zfit has a modular approach to things and this is also true for models. While the normalization itself (e.g. what are parameters, what is normalized data) will already be pre-defined in the model, models are composed of functions that are transparently called inside. For example, a Gaussian would usually be implemented by writing a Python function `def gauss(x, mu, sigma)`, which does not care about the normalization, and which is then wrapped in a PDF, where the normalization and what counts as a parameter are defined.
In principle, we can go quite far by simply using functions (e.g. [TensorFlowAnalysis/AmpliTF](https://github.com/apoluekt/AmpliTF) by Anton Poluektov uses this approach quite successfully for amplitude analysis), but this design has limitations for a more general fitting library such as zfit (or even [TensorWaves](https://github.com/ComPWA/tensorwaves), being built on top of AmpliTF).
The main thing is to keep track of the different ordering of the data and parameters, especially the dependencies.
Let's create a simple Gaussian PDF. We already defined the `Space` for the data before, now we only need the parameters. These are different objects than a `Space`.
### Parameter
A `Parameter` (there are different kinds actually, more on that later) takes the following arguments as input:
`Parameter(human readable name, initial value[, lower limit, upper limit])` where the limits are recommended but not mandatory. Furthermore, a `step_size` can be given (it is useful to set it roughly to the expected uncertainty; e.g. for large yields or small values this can help a lot). Also, a `floating` argument is supported, indicating whether the parameter is allowed to float in the fit or not (just omitting the limits does _not_ make a parameter constant).
Parameters have a unique name. This serves as the identifier for e.g. fit results. However, a parameter _cannot_ be retrieved by its string identifier (its name); the object itself should be used. In places where a parameter maps to something, the object itself is needed, not its name.
```python
mu = zfit.Parameter('mu', 1, -3, 3, step_size=0.2)
sigma_num = zfit.Parameter('sigma42', 1, 0.1, 10, floating=False)
```
These attributes can be changed:
```python
print(f"sigma is float: {sigma_num.floating}")
sigma_num.floating = True
print(f"sigma is float: {sigma_num.floating}")
```
sigma is float: False
sigma is float: True
*NOTEBOOK PITFALL: since parameters have a unique name, a second parameter with the same name cannot be created; the behavior would be undefined and therefore an error is raised.
While this does not pose a problem in a normal Python script, it does in a Jupyter-like notebook, since it is common practice to "rerun" a cell as an attempt to "reset" things. Bear in mind that this does not make sense from a logical point of view: the parameter already exists. Best practice: write a small wrapper (sketched below), do not rerun the parameter creation cell, or simply rerun the notebook (restart kernel & run all). For further details, have a look at the discussion and arguments [here](https://github.com/zfit/zfit/issues/186)*
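A minimal sketch of such a wrapper (a hypothetical helper, not part of zfit) could look like this:
```python
# Hypothetical helper: cache parameters by name so that re-running a creation
# cell reuses the existing object instead of recreating it (which would raise).
_param_cache = {}

def get_param(name, *args, **kwargs):
    if name not in _param_cache:
        _param_cache[name] = zfit.Parameter(name, *args, **kwargs)
    return _param_cache[name]
```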
Now we have everything to create a Gaussian PDF:
```python
gauss = zfit.pdf.Gauss(obs=obs, mu=mu, sigma=sigma_num)
```
Since this holds all the parameters and the observables are well defined, we can retrieve them
```python
gauss.n_obs # dimensions
```
1
```python
gauss.obs
```
('obs1',)
```python
gauss.space
```
<zfit Space obs=('obs1',), axes=(0,), limits=(array([[-5.]]), array([[10.]]))>
```python
gauss.norm_range
```
<zfit Space obs=('obs1',), axes=(0,), limits=(array([[-5.]]), array([[10.]]))>
As we've seen, the `obs` we defined is the `space` of Gauss: it acts as the default limits whenever needed (e.g. for sampling). `gauss` also has a `norm_range`, which by default also equals the `obs` given; however, we can explicitly change that with `set_norm_range`.
We can also access the parameters of the PDF in two ways, depending on our intention:
either by _name_ (the parameterization name, e.g. `mu` and `sigma`, as defined in the `Gauss`), which is useful if we are interested in the parameter that _describes_ the shape
```python
gauss.params
```
OrderedDict([('mu', <zfit.Parameter 'mu' floating=True value=1>),
('sigma', <zfit.Parameter 'sigma42' floating=True value=1>)])
or to retrieve all the parameters that the PDF depends on. While this may sound trivial now, we will see later that models can depend on other models (e.g. sums) and parameters on other parameters. There is one function that automatically retrieves _all_ dependencies, `get_params`. It takes three arguments to filter:
- floating: whether to filter only floating parameters, only non-floating or don't discriminate
- is_yield: if it is a yield, or not a yield, or both
- extract_independent: whether to recursively collect all parameters. This, and the explanation for why independent, can be found later on in the `Simultaneous` tutorial.
Usually, the default is exactly what we want if we look for _all free parameters that this PDF depends on_.
```python
gauss.get_params()
```
OrderedSet([<zfit.Parameter 'mu' floating=True value=1>, <zfit.Parameter 'sigma42' floating=True value=1>])
The difference will also be clear if we e.g. use the same parameter twice:
```python
gauss_only_mu = zfit.pdf.Gauss(obs=obs, mu=mu, sigma=mu)
print(f"params={gauss_only_mu.params}")
print(f"get_params={gauss_only_mu.get_params()}")
```
params=OrderedDict([('mu', <zfit.Parameter 'mu' floating=True value=1>), ('sigma', <zfit.Parameter 'mu' floating=True value=1>)])
get_params=OrderedSet([<zfit.Parameter 'mu' floating=True value=1>])
## Functionality
PDFs provide a few useful methods. The main features of a zfit PDF are:
- `pdf`: the normalized value of the PDF. It takes an argument `norm_range` that can be set to `False`, in which case we retrieve the unnormalized value
- `integrate`: given a certain range, the PDF is integrated. As `pdf`, it takes a `norm_range` argument that integrates over the unnormalized `pdf` if set to `False`
- `sample`: samples from the pdf and returns a `Data` object
```python
integral = gauss.integrate(limits=(-1, 3)) # corresponds to 2 sigma integral
integral
```
<tf.Tensor: shape=(1,), dtype=float64, numpy=array([0.95449974])>
### Tensors
As we see, many zfit functions return Tensors. This is however no magical thing! If we're outside of models, then we can always safely convert them to a numpy array by calling `zfit.run(...)` on them (or on any structure containing potentially multiple Tensors). However, this is often not even required! They can be added just like numpy arrays and interact well with Python and Numpy:
```python
np.sqrt(integral)
```
array([0.97698502])
They also have shapes and dtypes, can be sliced, etc. So do not convert them unless you need to. More on this can be seen in the later talk about zfit and TensorFlow 2.0.
```python
sample = gauss.sample(n=1000) # default space taken as limits
sample
```
<zfit.core.data.SampleData at 0x7f1790089d00>
```python
sample.unstack_x()[:10]
```
<tf.Tensor: shape=(10,), dtype=float64, numpy=
array([-0.09952247, 1.49016828, 2.12148953, 0.39491123, 1.08061772,
0.49297195, 0.57305784, 1.98737622, 1.46084697, 0.30197322])>
```python
sample.n_obs
```
1
```python
sample.obs
```
('obs1',)
We see that `sample` also returns a zfit `Data` object with the same space it was sampled in. This can directly be used, e.g.:
```python
probs = gauss.pdf(sample)
probs[:10]
```
<tf.Tensor: shape=(10,), dtype=float64, numpy=
array([0.21796662, 0.35378319, 0.21271375, 0.33220443, 0.39764798,
0.35082167, 0.36419045, 0.24502515, 0.35875036, 0.31268492])>
**NOTE**: In case you want to do this repeatedly (e.g. for toy studies), there is a much more efficient way (see later on); a rough sketch follows below.
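A rough sketch of that approach, assuming the `create_sampler`/`resample` API (the method names are an assumption and may differ between zfit versions):
```python
# Sketch only: efficient repeated sampling for toy studies.
# `create_sampler` and `resample` are assumed here; check your zfit version.
sampler = gauss.create_sampler(n=1000)
for _ in range(3):
    sampler.resample()             # draw a new toy sample in-place
    print(gauss.pdf(sampler)[:3])  # the sampler can be used like a Data object
```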
## Plotting
So far, we have a dataset and a PDF. Before we go for fitting, we can make a plot. This functionality is not _directly_ provided in zfit (but can be added to [zfit-physics](https://github.com/zfit/zfit-physics)). It is, however, simple enough to do ourselves:
```python
def plot_model(model, data, scale=1, plot_data=True):  # we will use scale later on
    nbins = 50
    lower, upper = data.data_range.limit1d
    x = tf.linspace(lower, upper, num=1000)  # np.linspace also works
    y = model.pdf(x) * size_normal / nbins * data.data_range.area()
    y *= scale
    plt.plot(x, y)
    data_plot = zfit.run(z.unstack_x(data))  # we could also use the `to_pandas` method
    if plot_data:
        plt.hist(data_plot, bins=nbins)
```
```python
plot_model(gauss, data_normal)
```
We can of course do better (and will continuously improve the plots later on), but this is quite simple and gives us the full power of matplotlib.
### Different models
zfit offers a selection of predefined models (and is extended by zfit-physics, which contains physics-specific models such as ARGUS-shaped ones).
```python
print(zfit.pdf.__all__)
```
['BasePDF', 'BaseFunctor', 'Exponential', 'CrystalBall', 'DoubleCB', 'Gauss', 'Uniform', 'TruncatedGauss', 'WrapDistribution', 'Cauchy', 'Chebyshev', 'Legendre', 'Chebyshev2', 'Hermite', 'Laguerre', 'RecursivePolynomial', 'ProductPDF', 'SumPDF', 'GaussianKDE1DimV1', 'ZPDF', 'SimplePDF', 'SimpleFunctorPDF']
To create a more realistic model, we can build some components for a mass fit with a
- signal component: CrystalBall
- combinatorial background: Exponential
- partially reconstructed background on the left: Kernel Density Estimation
```python
mass_obs = zfit.Space('mass', (0, 1000))
```
```python
# Signal component
mu_sig = zfit.Parameter('mu_sig', 400, 100, 600)
sigma_sig = zfit.Parameter('sigma_sig', 50, 1, 100)
alpha_sig = zfit.Parameter('alpha_sig', 300, 100, 400)
n_sig = zfit.Parameter('n sig', 4, 0.1, 30)
signal = zfit.pdf.CrystalBall(obs=mass_obs, mu=mu_sig, sigma=sigma_sig, alpha=alpha_sig, n=n_sig)
```
```python
# combinatorial background
lam = zfit.Parameter('lambda', -0.01, -0.05, -0.001)
comb_bkg = zfit.pdf.Exponential(lam, obs=mass_obs)
```
```python
part_reco_data = np.random.normal(loc=200, scale=150, size=700)
part_reco_data = zfit.Data.from_numpy(obs=mass_obs, array=part_reco_data) # we don't need to do this but now we're sure it's inside the limits
part_reco = zfit.pdf.GaussianKDE1DimV1(obs=mass_obs, data=part_reco_data, bandwidth='adaptive')
```
## Composing models
We can also compose multiple models together. Here we'll stick to one-dimensional models; the extension to multiple dimensions is explained in the "custom models tutorial".
Here we will use a `SumPDF`. This takes pdfs and fractions. If we provide n pdfs and:
- n - 1 fracs: the nth fraction will be 1 - sum(fracs)
- n fracs: no normalization attempt is done by `SumPDF`. If the fracs are not implicitly normalized, this can lead to bad fitting behavior if there is one degree of freedom too many
```python
sig_frac = zfit.Parameter('sig_frac', 0.3, 0, 1)
comb_bkg_frac = zfit.Parameter('comb_bkg_frac', 0.25, 0, 1)
model = zfit.pdf.SumPDF([signal, comb_bkg, part_reco], [sig_frac, comb_bkg_frac])
```
In order to have a corresponding data sample, we can just create one. Since we want to fit to this dataset later on, we will create it with slightly different values. For that, we can use the ability of a parameter to be set temporarily to a certain value with a context manager:
```python
print(f"before: {sig_frac}")
with sig_frac.set_value(0.25):
    print(f"new value: {sig_frac}")
print(f"after 'with': {sig_frac}")
```
before: <zfit.Parameter 'sig_frac' floating=True value=0.3>
new value: <zfit.Parameter 'sig_frac' floating=True value=0.25>
after 'with': <zfit.Parameter 'sig_frac' floating=True value=0.3>
While this is useful, it does not fully scale up. For that, we can use the `zfit.param.set_values` helper.
(_Sidenote: instead of a list of values, we can also use a `FitResult`, the given parameters then take the value from the result_)
```python
with zfit.param.set_values([mu_sig, sigma_sig, sig_frac, comb_bkg_frac, lam], [370, 34, 0.18, 0.15, -0.006]):
    data = model.sample(n=10000)
```
```python
plot_model(model, data);
```
Plotting the components is not difficult now: we can either just plot the pdfs separately (as we still can access them) or in a generalized manner by accessing the `pdfs` attribute:
```python
def plot_comp_model(model, data):
    for mod, frac in zip(model.pdfs, model.params.values()):
        plot_model(mod, data, scale=frac, plot_data=False)
    plot_model(model, data)
```
```python
plot_comp_model(model, data)
```
Now we can add legends etc. By the way, did you notice that the `frac` params are actually zfit `Parameters`? Yet we just used them as if they were Python scalars, and it works.
```python
print(model.params)
```
OrderedDict([('frac_0', <zfit.Parameter 'sig_frac' floating=True value=0.3>), ('frac_1', <zfit.Parameter 'comb_bkg_frac' floating=True value=0.25>), ('frac_2', <zfit.ComposedParameter 'Composed_autoparam_2' params=OrderedDict([('param_0', <zfit.Parameter 'sig_frac' floating=True value=0.3>), ('param_1', <zfit.Parameter 'comb_bkg_frac' floating=True value=0.25>)]) value=0.45>)])
### Extended PDFs
So far, we have only looked at normalized PDFs that do contain information about the shape but not about the _absolute_ scale. We can make a PDF extended by adding a yield to it.
The behavior of the new, extended PDF does **NOT change**: any methods we called before will act the same. The only exception is that some may require one argument _less_ now. All the methods we used so far will return the same values. What changes is that the flag `model.is_extended` now returns `True`. Furthermore, we now have a few more methods that we can use, which would have raised an error before:
- `get_yield`: return the yield parameter (notice that the yield is _not_ added to the shape parameters `params`)
- `ext_{pdf,integrate}`: these methods return the same as the versions used before, however, multiplied by the yield
- `sample` is still the same, but does not _require_ the argument `n` anymore. By default, this will now equal a _Poisson-sampled_ n around the yield.
The `SumPDF` now does not strictly need `fracs` anymore: if _all_ input PDFs are extended, the sum will be as well and use the (normalized) yields as fracs.
The preferred way to create an extended PDF is to use `PDF.create_extended(yield)`. However, since this relies on copying the PDF (which may not work for various reasons), there is also a `set_yield(yield)` method that sets the yield in-place. This won't lead to ambiguities, as everything is supposed to work the same.
```python
yield_model = zfit.Parameter('yield_model', 10000, 0, 20000, step_size=10)
model_ext = model.create_extended(yield_model)
```
Alternatively, we can create the models as extended and sum them up:
```python
sig_yield = zfit.Parameter('sig_yield', 2000, 0, 10000, step_size=1)
sig_ext = signal.create_extended(sig_yield)
comb_bkg_yield = zfit.Parameter('comb_bkg_yield', 6000, 0, 10000, step_size=1)
comb_bkg_ext = comb_bkg.create_extended(comb_bkg_yield)
part_reco_yield = zfit.Parameter('part_reco_yield', 2000, 0, 10000, step_size=1)
part_reco.set_yield(part_reco_yield)  # unfortunately, `create_extended` does not work here. But no problem, it won't change anything.
part_reco_ext = part_reco
```
```python
model_ext_sum = zfit.pdf.SumPDF([sig_ext, comb_bkg_ext, part_reco_ext])
```
# Loss
A loss combines the model and the data, for example to build a likelihood. Furthermore, it can contain constraints, i.e. additions to the likelihood. Currently, if the `Data` has weights, these are automatically taken into account.
```python
nll_gauss = zfit.loss.UnbinnedNLL(gauss, data_normal)
```
The loss has several attributes in order to be transparent to higher-level libraries. We can calculate its value using `value`.
```python
nll_gauss.value()
```
<tf.Tensor: shape=(), dtype=float64, numpy=32625.929833399066>
Notice that, due to graph building, this will take significantly longer on the first run. Rerun the cell above and it will be much faster.
Furthermore, the loss also provides a possibility to calculate the gradients or, often used, the value and the gradients.
We can access the data and models (and possible constraints)
```python
nll_gauss.model
```
[<zfit.Gauss params=[mu, sigma42] dtype=float64>0]
```python
nll_gauss.data
```
[<zfit.core.data.Data at 0x7f17900ed820>]
```python
nll_gauss.constraints
```
[]
Similar to the models, we can also get the parameters via `get_params`.
```python
nll_gauss.get_params()
```
OrderedSet([<zfit.Parameter 'mu' floating=True value=1>, <zfit.Parameter 'sigma42' floating=True value=1>])
### Extended loss
More interestingly, we can now build a loss for our composite sum model using the sampled data. Since we created an extended model, we can now also create an extended likelihood, taking into account a Poisson term to match the yield to the number of events.
```python
nll = zfit.loss.ExtendedUnbinnedNLL(model_ext_sum, data)
```
```python
nll.get_params()
```
OrderedSet([<zfit.Parameter 'sig_yield' floating=True value=2000>, <zfit.Parameter 'comb_bkg_yield' floating=True value=6000>, <zfit.Parameter 'part_reco_yield' floating=True value=2000>, <zfit.Parameter 'alpha_sig' floating=True value=300>, <zfit.Parameter 'mu_sig' floating=True value=400>, <zfit.Parameter 'n sig' floating=True value=4>, <zfit.Parameter 'sigma_sig' floating=True value=50>, <zfit.Parameter 'lambda' floating=True value=-0.01>])
# Minimization
While a loss is interesting, we usually want to minimize it. Therefore we can use the minimizers in zfit, most notably `Minuit`, a wrapper around the [iminuit minimizer](https://github.com/scikit-hep/iminuit).
The philosophy is to create a minimizer instance that is mostly _stateless_, e.g. does not remember the position (there are considerations to make it possible to have a state, in case you feel interested, [contact us](https://github.com/zfit/zfit#contact))
Given that iminuit provides us with a very reliable and stable minimizer, it is usually recommended to use this. Others are implemented as well and could easily be wrapped, however, the convergence is usually not as stable.
Minuit has a few options (a brief illustrative construction follows below):
- `tolerance`: the Estimated Distance to Minimum (EDM) criterion for convergence (default 1e-3)
- `verbosity`: between 0 and 10, 5 is normal, 7 is verbose, 10 is maximum
- `use_minuit_grad`: if True, uses the Minuit numerical gradient instead of the TensorFlow gradient. This is usually more stable for smaller fits; furthermore, the TensorFlow gradient _can_ (based on experience) sometimes be wrong.
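For illustration, the same constructor with all three options set explicitly might look like this (the values are arbitrary choices, not recommendations):
```python
# Illustrative only: the three options listed above, set explicitly.
minimizer_verbose = zfit.minimize.Minuit(tolerance=1e-4, verbosity=5, use_minuit_grad=True)
```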
```python
minimizer = zfit.minimize.Minuit(use_minuit_grad=True)
```
For the minimization, we can call `minimize`, which takes a
- loss as we created above
- optionally: the parameters to minimize
By default, `minimize` uses all the free floating parameters (obtained with `get_params`). We can also explicitly specify which ones to use by giving them (or better, objects that depend on them) to `minimize`, as sketched below; note however that non-floating parameters, even if given explicitly to `minimize`, won't be minimized.
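As a small, hypothetical sketch, restricting the minimization of the Gaussian loss from before to the single parameter `mu` could look like this:
```python
# Sketch: minimize only with respect to `mu`; the other parameters stay at their current values.
result_mu_only = minimizer.minimize(nll_gauss, params=[mu])
print(result_mu_only.params[mu])
```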
## Pre-fit parts of the PDF
Before fitting the whole PDF, however, it can be useful to pre-fit parts of it. One way is to fix the combinatorial background by fitting the exponential to the right tail.
Therefore we create a new data object with an additional cut and furthermore, set the normalization range of the background pdf to the range we are interested in.
```python
values = z.unstack_x(data)
obs_right_tail = zfit.Space('mass', (700, 1000))
data_tail = zfit.Data.from_tensor(obs=obs_right_tail, tensor=values)
with comb_bkg.set_norm_range(obs_right_tail):
    nll_tail = zfit.loss.UnbinnedNLL(comb_bkg, data_tail)
    minimizer.minimize(nll_tail)
```
------------------------------------------------------------------
| FCN = 328 | Ncalls=19 (19 total) |
| EDM = 9.18e-10 (Goal: 0.001) | up = 0.5 |
------------------------------------------------------------------
| Valid Min. | Valid Param. | Above EDM | Reached call limit |
------------------------------------------------------------------
| True | True | False | False |
------------------------------------------------------------------
| Hesse failed | Has cov. | Accurate | Pos. def. | Forced |
------------------------------------------------------------------
| False | True | True | True | False |
------------------------------------------------------------------
Now that we have fitted the lambda parameter of the exponential, we can fix it.
```python
lam.floating = False
lam
```
<zfit.Parameter 'lambda' floating=False value=-0.008587>
```python
result = minimizer.minimize(nll)
```
------------------------------------------------------------------
| FCN = -1.93e+04 | Ncalls=185 (185 total) |
| EDM = 6.35e-06 (Goal: 0.001) | up = 0.5 |
------------------------------------------------------------------
| Valid Min. | Valid Param. | Above EDM | Reached call limit |
------------------------------------------------------------------
| True | True | False | False |
------------------------------------------------------------------
| Hesse failed | Has cov. | Accurate | Pos. def. | Forced |
------------------------------------------------------------------
| False | True | True | True | False |
------------------------------------------------------------------
```python
plot_comp_model(model_ext_sum, data)
```
# Fit result
The result of every minimization is stored in a `FitResult`. This is the last stage of the zfit workflow and serves as the interface to other libraries. Its main purpose is to store the values of the fit, to reference the objects that have been used and to perform (simple) uncertainty estimation.
```python
print(result)
```
FitResult of
<ExtendedUnbinnedNLL model=[<zfit.SumPDF params=[Composed_autoparam_5, Composed_autoparam_6, Composed_autoparam_7] dtype=float64>0] data=[<zfit.core.data.SampleData object at 0x7f176002fdc0>] constraints=[]>
with
<Minuit strategy=PushbackStrategy tolerance=0.001>
╒═════════╤═════════════╤══════════════════╤═════════╤═════════════╕
│ valid │ converged │ param at limit │ edm │ min value │
╞═════════╪═════════════╪══════════════════╪═════════╪═════════════╡
│ True │ True │ False │ 6.4e-06 │ -1.93e+04 │
╘═════════╧═════════════╧══════════════════╧═════════╧═════════════╛
Parameters
name value at limit
--------------- ------- ----------
sig_yield 1804 False
comb_bkg_yield 1095 False
part_reco_yield 7101 False
alpha_sig 300 False
mu_sig 370.8 False
n sig 4 False
sigma_sig 33.87 False
This gives an overview of the whole result. Often we're mostly interested in the parameters and their values, which we can access with the `params` attribute.
```python
print(result.params)
```
name value at limit
--------------- ------- ----------
sig_yield 1804 False
comb_bkg_yield 1095 False
part_reco_yield 7101 False
alpha_sig 300 False
mu_sig 370.8 False
n sig 4 False
sigma_sig 33.87 False
This is a `dict` which stores any knowledge about the parameters and can be indexed by the parameter (object) itself:
```python
result.params[mu_sig]
```
{'value': 370.7878667059073}
'value' is the value at the minimum. To obtain other information about the minimization process, `result` contains more attributes:
- fmin: the function minimum
- edm: estimated distance to minimum
- info: contains a lot of information, especially the original information returned by a specific minimizer
- converged: if the fit converged
```python
result.fmin
```
-19300.779346305906
## Estimating uncertainties
The `FitResult` has mainly two methods to estimate the uncertainty:
- a profile likelihood method (like MINOS)
- Hessian approximation of the likelihood (like HESSE)
When using `Minuit`, this currently uses its own implementation. However, zfit has its own implementation as well, which is likely to become the standard and can be invoked by changing the method name.
Hesse is also [on the way to implementing](https://github.com/zfit/zfit/pull/244) the [corrections for weights](https://inspirehep.net/literature/1762842).
We can explicitly specify which parameters to calculate; by default it is done for all.
```python
result.hesse()
```
OrderedDict([(<zfit.Parameter 'sig_yield' floating=True value=1804>,
{'error': 70.15087267737408}),
(<zfit.Parameter 'comb_bkg_yield' floating=True value=1095>,
{'error': 70.35114369167526}),
(<zfit.Parameter 'part_reco_yield' floating=True value=7101>,
{'error': 131.80924912505543}),
(<zfit.Parameter 'alpha_sig' floating=True value=300>,
{'error': 141.4213562373095}),
(<zfit.Parameter 'mu_sig' floating=True value=370.8>,
{'error': 1.3661545484142485}),
(<zfit.Parameter 'n sig' floating=True value=4>,
{'error': 10.069756698215553}),
(<zfit.Parameter 'sigma_sig' floating=True value=33.87>,
{'error': 1.2650183734125646})])
```python
# result.hesse(method='hesse_np')
```
The result is returned directly. It is also added to `result.params` for each parameter and is nicely displayed with an added column:
```python
print(result.params)
```
name value minuit_hesse at limit
--------------- ------- -------------- ----------
sig_yield 1804 +/- 70 False
comb_bkg_yield 1095 +/- 70 False
part_reco_yield 7101 +/- 1.3e+02 False
alpha_sig 300 +/- 1.4e+02 False
mu_sig 370.8 +/- 1.4 False
n sig 4 +/- 10 False
sigma_sig 33.87 +/- 1.3 False
```python
errors, new_result = result.errors(params=[sig_yield, part_reco_yield, mu_sig]) # just using three for speed reasons
```
/home/jonas/Documents/physics/software/zfit_project/zfit_repo/zfit/minimizers/fitresult.py:360: FutureWarning: 'minuit_minos' will be changed as the default errors method to a custom implementationwith the same functionality. If you want to make sure that 'minuit_minos' will be used in the future, add it explicitly as in `errors(method='minuit_minos')`
warnings.warn("'minuit_minos' will be changed as the default errors method to a custom implementation"
```python
# errors, new_result = result.errors(params=[yield_model, sig_frac, mu_sig], method='zfit_error')
```
```python
print(errors)
```
OrderedDict([(<zfit.Parameter 'sig_yield' floating=True value=1804>, MError(name='sig_yield', is_valid=True, lower=-69.66325485797651, upper=70.75759128186598, lower_valid=True, upper_valid=True, at_lower_limit=False, at_upper_limit=False, at_lower_max_fcn=False, at_upper_max_fcn=False, lower_new_min=False, upper_new_min=False, nfcn=138, min=1803.8532804234746)), (<zfit.Parameter 'part_reco_yield' floating=True value=7101>, MError(name='part_reco_yield', is_valid=True, lower=-131.88637854089905, upper=132.34447403753458, lower_valid=True, upper_valid=True, at_lower_limit=False, at_upper_limit=False, at_lower_max_fcn=False, at_upper_max_fcn=False, lower_new_min=False, upper_new_min=False, nfcn=60, min=7101.093509366213)), (<zfit.Parameter 'mu_sig' floating=True value=370.8>, MError(name='mu_sig', is_valid=True, lower=-1.36717243612375, upper=1.356060293846917, lower_valid=True, upper_valid=True, at_lower_limit=False, at_upper_limit=False, at_lower_max_fcn=False, at_upper_max_fcn=False, lower_new_min=False, upper_new_min=False, nfcn=106, min=370.7878667059073))])
```python
print(result.params)
```
name value minuit_hesse minuit_minos at limit
--------------- ------- -------------- ------------------- ----------
sig_yield 1804 +/- 70 - 70 + 71 False
comb_bkg_yield 1095 +/- 70 False
part_reco_yield 7101 +/- 1.3e+02 -1.3e+02 +1.3e+02 False
alpha_sig 300 +/- 1.4e+02 False
mu_sig 370.8 +/- 1.4 - 1.4 + 1.4 False
n sig 4 +/- 10 False
sigma_sig 33.87 +/- 1.3 False
#### What is 'new_result'?
When profiling a likelihood, as is done in the algorithm used by `errors`, a new minimum can be found. If this is the case, the new minimum will be returned, otherwise `new_result` is `None`. Furthermore, the current `result` would be rendered invalid by setting the flag `valid` to `False`. _Note_: this behavior only applies to the zfit internal error estimator.
### A simple profile
There is no default function (yet) for a simple profiling plot. However, again, we're in Python and it's simple enough to do it for a parameter. Let's do it for `sig_yield`:
```python
x = np.linspace(1600, 2000, num=50)
y = []
sig_yield.floating = False
for val in x:
    sig_yield.set_value(val)
    y.append(nll.value())
sig_yield.floating = True
zfit.param.set_values(nll.get_params(), result)
```
<zfit.util.temporary.TemporarilySet at 0x7f16cc8d7550>
```python
plt.plot(x, y)
```
We can also access the covariance matrix of the parameters
```python
result.covariance()
```
array([[ 4.92114494e+03, 1.14332473e+03, -4.33133015e+03,
0.00000000e+00, -1.99905281e+01, 0.00000000e+00,
4.04511667e+01],
[ 1.14332473e+03, 4.94928342e+03, -5.67290390e+03,
0.00000000e+00, -6.88067541e+00, 0.00000000e+00,
1.48550756e+01],
[-4.33133015e+03, -5.67290390e+03, 1.73736782e+04,
0.00000000e+00, 2.77291911e+01, 0.00000000e+00,
-5.58205907e+01],
[ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
2.00000000e+04, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00],
[-1.99905281e+01, -6.88067541e+00, 2.77291911e+01,
0.00000000e+00, 1.86637825e+00, 0.00000000e+00,
-3.46142640e-01],
[ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 1.01400000e+02,
0.00000000e+00],
[ 4.04511667e+01, 1.48550756e+01, -5.58205907e+01,
0.00000000e+00, -3.46142640e-01, 0.00000000e+00,
1.60027149e+00]])
# End of zfit
This is where zfit finishes and other libraries take over.
# Beginning of hepstats
`hepstats` is a library containing statistical tools and utilities for high energy physics. In particular, it lets you do statistical inference using the models and likelihood functions constructed in `zfit`.
Short example: let's compute a confidence interval at 68 % confidence level on the mean of the Gaussian defined above.
```python
from hepstats.hypotests.parameters import POIarray
from hepstats.hypotests.calculators import AsymptoticCalculator
from hepstats.hypotests import ConfidenceInterval
```
```python
calculator = AsymptoticCalculator(input=result, minimizer=minimizer)
```
```python
value = result.params[mu_sig]["value"]
error = result.params[mu_sig]["minuit_hesse"]["error"]
mean_scan = POIarray(mu_sig, np.linspace(value - 1.5*error, value + 1.5*error, 10))
```
```python
ci = ConfidenceInterval(calculator, mean_scan)
```
```python
ci.interval()
```
Confidence interval on mu_sig:
369.42424650773955 < mu_sig < 372.1404588905455 at 68.0% C.L.
{'observed': 370.7878667059073,
'upper': 372.1404588905455,
'lower': 369.42424650773955}
```python
from utils import one_minus_cl_plot
ax = one_minus_cl_plot(ci)
ax.set_xlabel("mean")
```
There will be more of `hepstats` later.
# Computing and classifying critical points
With all the tools we have already reviewed in the previous labs, computing critical points and classifying them with the criterion involving the Hessian matrix of differentiable functions of two variables is very easy using the **Sympy** module. For the critical points it is enough to compute the gradient of the function and solve a system of two (usually nonlinear) equations, and to classify the critical points one has to inspect the eigenvalues of the Hessian matrix (a computation that is also available in **Sympy**).
As an application of computing and identifying relative maxima and minima, we will review how to interpret the least-squares polynomial fit of a one-dimensional set of points as an optimization problem.
### Objectives:
- Computing critical points
- Classifying critical points: the Hessian matrix
- Optimization problem: polynomial fitting by least squares
## Computing critical points
In this lab we will use the **Sympy** module as well as **Numpy** and **Matplotlib**, so we must import them for the rest of the lab script:
```python
import sympy as sp
import numpy as np
import matplotlib.pyplot as plt
```
As in previous labs, we must write our own implementation to compute the gradient of a scalar function $f$. For that we use the relation we already know between the Jacobian matrix $Df$ of a scalar function and the (column) gradient vector $\nabla f$, that is, $\nabla f=Df^{t}$:
```python
gradient = lambda f, v: sp.transpose(sp.Matrix([f]).jacobian(v))
```
As we studied in the lectures, if we assume that the function of two variables $f$ is differentiable, critical points are computed by taking into account that the tangent plane to the surface defined by the function is horizontal at the relative extrema of the function, that is, at those points where the partial derivatives of $f$ vanish. Let us see this with an example where $f(x,y)=-x^3 +4xy-2y^2+1$:
```python
x, y = sp.symbols('x y', real=True) # define the symbolic variables x and y
f = sp.Lambda((x,y), -x**3 +4*x*y-2*y**2+1)
# Compute the critical points
grad_f = gradient(f(x,y),(x,y))
sol = sp.solve((sp.Eq(grad_f[0],0),sp.Eq(grad_f[1],0)),(x,y))
display('Critical points for x and y:', sol)
```
'Critical points for x and y:'
[(0, 0), (4/3, 4/3)]
To visually check the type of critical points this function has, we can plot it:
```python
p = sp.plotting.plot3d(f(x,y), (x, -2, 2), (y, -2, 2), show=False)
p.xlabel='x'
p.ylabel='y'
p.zlabel='z'
p.show()
```
### **Exercise 8.1**
Compute the critical points and plot the function:
$$
f(x,y) = \left(\frac12-x^2+y^2\right)e^{1-x^2-y^2}
$$
on the region $(x,y)\in[-4,4]\times[-4,4]$.
```python
# YOUR CODE HERE
```
## Classifying critical points: the Hessian matrix
Computing the Hessian matrix with the **Sympy** module is immediate, since it is enough to use the `sp.hessian` command. Once this is done, to compute the eigenvalues of this matrix and decide, based on their values, whether the critical points are relative maxima, relative minima or saddle points, we only need to use the `eigenvals` method available in the `sp.Matrix` class of objects:
```python
H = sp.Lambda((x,y), sp.hessian(f(x,y), (x,y)))
display('Hessian matrix', H(x,y))
# Classification of the first critical point: (0,0)
eigs = H(*sol[0]).eigenvals()
display('Eigenvalues for point (0,0)', np.double([*eigs]))
# Classification of the second critical point: (4/3,4/3)
eigs = H(*sol[1]).eigenvals()
display('Eigenvalues for point (4/3,4/3)', np.double([*eigs]))
```
'Hessian matrix'
$\displaystyle \left[\begin{matrix}- 6 x & 4\\4 & -4\end{matrix}\right]$
'Eigenvalues for point (0,0)'
array([-6.47213595, 2.47213595])
'Eigenvalues for point (4/3,4/3)'
array([-10.47213595, -1.52786405])
### **Exercise 8.2**
Classify the critical points obtained in exercise 8.1, which corresponded to the function:
$$
f(x,y) = \left(\frac12-x^2+y^2\right)e^{1-x^2-y^2}
$$
on the region $(x,y)\in[-4,4]\times[-4,4]$.
```python
# YOUR CODE HERE
```
## Polynomial fitting by least squares
Given a set of points in the plane $(x_1,y_1), (x_2,y_2),\ldots, (x_{m},y_{m})$, one usually looks for the best polynomial of degree $N$ that minimizes the mean squared error between the given data and the values of the fitted polynomial. In the case of a polynomial of degree $1$, the polynomial is written as $p(x)=ax+b$ and the above problem reduces to finding $(a^*,b^*)$ such that the error function is minimized:
$$
\mathrm{error}(a^*,b^*)=\min_{(a,b)\in\mathbb{R}^{2}}\mathrm{error}(a,b)
$$
where
$$
\mathrm{error}(a,b)=\sum_{i=1}^{m}(ax_i+b-y_i)^2.
$$
As in any other unconstrained minimization problem, to solve it we must compute the critical points of the error function and then check that they correspond to a relative minimum (which will be absolute, since the error function tends to infinity as $a$ or $b$ tend to $\pm\infty$). Let us see this computation with a concrete example. First we enter the data:
```python
# Data
xdata = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
ydata = np.array([0.0, 0.8, 0.9, 0.1, -0.8, -1.0])
```
Next, we define the error function, compute the critical points and, using the Hessian matrix, determine that they correspond to a relative minimum.
```python
# Error to be minimized in the least-squares method
a, b = sp.symbols('a b', real=True)
error = sp.Lambda((a,b),sum((a*xi + b - yi)**2 for xi,yi in zip(xdata, ydata)))
display('Error function', error(a,b))
# Compute the critical points
grad_error = gradient(error(a,b),(a,b))
sol = sp.solve((sp.Eq(grad_error[0],0),sp.Eq(grad_error[1],0)),(a,b))
display('Critical points for a and b:', sol)
# Classify the critical points: eigenvalues of the Hessian matrix
H = sp.hessian(error(a,b), (a,b))
display('Hessian matrix', H)
eigs = H.eigenvals()
display('Eigenvalues', [*eigs])
display('Eigenvalues', np.double([*eigs]))
```
'Error function'
$\displaystyle b^{2} + \left(1.0 a + b - 0.8\right)^{2} + \left(2.0 a + b - 0.9\right)^{2} + \left(3.0 a + b - 0.1\right)^{2} + \left(4.0 a + b + 0.8\right)^{2} + \left(5.0 a + b + 1.0\right)^{2}$
'Critical points for a and b:'
{a: -0.302857142857143, b: 0.757142857142857}
'Hessian matrix'
$\displaystyle \left[\begin{matrix}110.0 & 30.0\\30.0 & 12\end{matrix}\right]$
'Eigenvalues'
[61 - sqrt(3301), sqrt(3301) + 61]
'Eigenvalues'
array([ 3.54567031, 118.45432969])
Since polynomial data fitting is a very common task, there are numerical methods dedicated to it (both in one and in several dimensions). In particular, the **Numpy** module also has a direct tool to perform this fit, the `np.polyfit` command. Let us check that the coefficients it computes are the same as those obtained with the **Sympy** calculations:
```python
# Least-squares fit of a polynomial of order 1
z = np.polyfit(xdata, ydata, 1)
display('Values for a and b', z)
```
'Values for a and b'
array([-0.30285714, 0.75714286])
Additionally, the `np.polyfit` command allows the fit to be done with a polynomial of any order. In what follows, the polynomial fit of order $3$ is computed and plotted:
```python
# Least-squares fit of a polynomial of order 3
z = np.polyfit(xdata, ydata, 3)
# Define a Sympy polynomial from its coefficients
x = sp.symbols('x', real=True)
P = sp.Lambda(x,sum((a*x**i for i,a in enumerate(z[::-1]))))
# Plot
pol = sp.lambdify(x,P(x),"numpy")
plt.plot(xdata, ydata, '.', label='Data')
xp = np.linspace(-1.,6.,100)
plt.plot(xp, pol(xp), '-', label='Fitting')
plt.xlim(-1,6)
plt.ylim(-2,2)
plt.legend()
plt.show()
```
### **Exercise 8.3**
For the data used above, it is known that at the point $x=4.5$ the value is $-1.01$. Use polynomials of different orders to fit the data, for example $N=3, 5, 10, 20, 30$. Compute the error made by this fit at the point $x=4.5$:
- Does increasing the polynomial order improve the error?
- For which value of $N$ is the fitting error at the point $x=4.5$ smallest?
- When the new data point at $x=4.5$ is added and $N<4$ is used: do you notice any difference in the fitted curve?
```python
# YOUR CODE HERE
```
# Constrained optimization
Now we will move on to studying constrained optimization problems, i.e., the full problem
$$
\begin{align} \
\min \quad &f(x)\\
\text{s.t.} \quad & g_j(x) \geq 0\text{ for all }j=1,\ldots,J\\
& h_k(x) = 0\text{ for all }k=1,\ldots,K\\
&a_i\leq x_i\leq b_i\text{ for all } i=1,\ldots,n\\
&x\in \mathbb R^n,
\end{align}
$$
where for all $i=1,\ldots,n$ it holds that $a_i,b_i\in \mathbb R$, or they may also be $-\infty$ or $\infty$.
For example, we can have an optimization problem
$$
\begin{align} \
\min \quad &x_1^2+x_2^2\\
\text{s.t.} \quad & x_1+x_2-1\geq 0\\
&-1\leq x_1\leq 1, x_2\leq 3.\\
\end{align}
$$
In order to optimize that problem, we can define the following Python function:
```python
import numpy as np
def f_constrained(x):
    return np.linalg.norm(x)**2,[x[0]+x[1]-1],[]
```
Now, we can call the function:
```python
(f_val,ieq,eq) = f_constrained([1,0])
print("Value of f is "+str(f_val))
if len(ieq)>0:
    print("The values of inequality constraints are:")
    for ieq_j in ieq:
        print(str(ieq_j)+", ")
if len(eq)>0:
    print("The values of the equality constraints are:")
    for eq_k in eq:
        print(str(eq_k)+", ")
```
Value of f is 1.0
The values of inequality constraints are:
0,
Is this solution feasible?
```python
if all([ieq_j>=0 for ieq_j in ieq]) and all([eq_k==0 for eq_k in eq]):
    print("Solution is feasible")
else:
    print("Solution is infeasible")
```
Solution is feasible
# Indirect and direct methods for constrained optimization
There are two categories of methods for constrained optimization: Indirect and direct methods. The main difference is that
1. Indirect methods convert the constrained optimization problem into a single unconstrained optimization problem, or a sequence of them, which are then solved. Often, the intermediate solutions do not need to be feasible; the sequence of solutions converges to a solution that is optimal (and, thus, feasible).
2. Direct methods deal with the constrained optimization problem directly. In this case, the intermediate solutions are feasible.
# Indirect methods
## Penalty function methods
**IDEA:** Include constraints into the objective function with the help of penalty functions that penalize constraint violations.
Let $\alpha(x):\mathbb R^n\to\mathbb R$ be a function so that
* $\alpha(x)=0$ for all feasible $x$
* $\alpha(x)>0$ for all infeasible $x$.
Define optimization problems
$$
\begin{align} \
\min \qquad &f(x)+r\alpha(x)\\
\text{s.t.} \qquad &x\in \mathbb R^n
\end{align}
$$
for $r>0$, and let $x_r$ be the optimal solutions of these problems.
In this case, the optimal solutions $x_r$ converge to the optimal solution of the constrained problem, when $r\to\infty$, if such a solution exists.
For example, good ideas for penalty functions are
* $h_k(x)^2$ for equality constraints,
* $\left(\min\{0,g_j(x)\}\right)^2$ for inequality constraints.
```python
def alpha(x,f):
    (_,ieq,eq) = f(x)
    return sum([min([0,ieq_j])**2 for ieq_j in ieq])+sum([eq_k**2 for eq_k in eq])
```
```python
alpha([1,0],f_constrained)
```
0
```python
def penalized_function(x,f,r):
    return f(x)[0] + r*alpha(x,f)
```
```python
penalized_function([-1,0],f_constrained,10000)
```
40001.0
```python
from scipy.optimize import minimize
res = minimize(lambda x:penalized_function(x,f_constrained,100000),
               [0,0],method='Nelder-Mead',
               options={'disp': True})
print(res.x)
```
Optimization terminated successfully.
Current function value: 0.499998
Iterations: 57
Function evaluations: 96
[ 0.49994305 0.50005243]
```python
(f_val,ieq,eq) = f_constrained(res.x)
print("Value of f is "+str(f_val))
if len(ieq)>0:
    print("The values of inequality constraints are:")
    for ieq_j in ieq:
        print(str(ieq_j)+", ")
if len(eq)>0:
    print("The values of the equality constraints are:")
    for eq_k in eq:
        print(str(eq_k)+", ")
if all([ieq_j>=0 for ieq_j in ieq]) and all([eq_k==0 for eq_k in eq]):
    print("Solution is feasible")
else:
    print("Solution is infeasible")
```
Value of f is 0.49999548939
The values of inequality constraints are:
-4.51660156242e-06,
Solution is infeasible
### How to set the penalty term $r$?
The penalty term should
* be large enough in order for the solutions to be close enough to the feasible region, but
* not be too large to
  * cause numerical problems, or
  * cause premature convergence to non-optimal solutions because of relative tolerances.
Usually, the penalty term is either
* set as big as possible without causing problems (hard to know), or
* updated iteratively (a small sketch of this follows below).
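A small sketch of the iterative variant, using only the functions defined above (the starting value, multiplication factor and number of rounds are ad-hoc assumptions):
```python
# Increase the penalty term iteratively, warm-starting each solve from the
# previous solution; the remaining constraint violation is measured by alpha.
r = 1.0
x0 = [0, 0]
for _ in range(8):                      # r takes the values 1, 10, ..., 1e7
    res = minimize(lambda x: penalized_function(x, f_constrained, r),
                   x0, method='Nelder-Mead')
    x0 = res.x                          # warm start for the next round
    r *= 10
print(x0, alpha(x0, f_constrained))     # solution and its constraint violation
```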
# Barrier function methods
**IDEA:** Prevent leaving the feasible region so that the value of the objective is $\infty$ outside the feasible set.
This method is only applicable to problems with inequality constraints and for which the set
$$\{x\in \mathbb R^n: g_j(x)>0\text{ for all }j=1,\ldots,J\}$$
is non-empty.
Let $\beta:\{x\in \mathbb R^n: g_j(x)>0\text{ for all }j=1,\ldots,J\}\to \mathbb R$ be a function so that $\beta(x)\to \infty$ when $x\to\partial\{x\in \mathbb R^n: g_j(x)>0\text{ for all }j=1,\ldots,J\}$, where $\partial A$ is the boundary of the set $A$. Now, define the optimization problem
$$
\begin{align}
\min \qquad & f(x) + r\beta(x)\\
\text{s.t. } \qquad & x\in \{x\in \mathbb R^n: g_j(x)>0\text{ for all }j=1,\ldots,J\}.
\end{align}
$$
and let $x_r$ be the optimal solution of this problem (which we assume to exist for all $r>0$).
In this case, $x_r$ converges to the optimal solution of the problem (if it exists), when $r\to 0^+$ (i.e., $r$ converges to zero from the right).
A good choice for the barrier function is $\frac1{g_j(x)}$.
```python
def beta(x,f):
    _,ieq,_ = f(x)
    try:
        value = sum([1/max([0,ieq_j]) for ieq_j in ieq])
    except ZeroDivisionError:
        value = float("inf")
    return value
```
```python
def function_with_barrier(x,f,r):
    return f(x)[0]+r*beta(x,f)
```
```python
from scipy.optimize import minimize
res = minimize(lambda x:function_with_barrier(x,f_constrained,0.00000000000001),
               [1,1],method='Nelder-Mead', options={'disp': True})
print(res.x)
```
Optimization terminated successfully.
Current function value: 0.500000
Iterations: 78
Function evaluations: 136
[ 0.49998927 0.50001085]
```python
(f_val,ieq,eq) = f_constrained(res.x)
print("Value of f is "+str(f_val))
if len(ieq)>0:
    print("The values of inequality constraints are:")
    for ieq_j in ieq:
        print(str(ieq_j)+", ")
if len(eq)>0:
    print("The values of the equality constraints are:")
    for eq_k in eq:
        print(str(eq_k)+", ")
if all([ieq_j>=0 for ieq_j in ieq]) and all([eq_k==0 for eq_k in eq]):
    print("Solution is feasible")
else:
    print("Solution is infeasible")
```
Value of f is 0.500000122097
The values of inequality constraints are:
1.21864303093e-07,
Solution is feasible
## Other notes about using penalty and barrier function methods
* It is worthwhile to consider whether feasibility can be compromised. If the constraints do not have any tolerances, then the barrier function method should be considered.
* Also, the barrier method's parameter can be set iteratively.
* Penalty and barrier functions should be chosen so that they are differentiable (thus $x^2$ above).
* In both methods, the minimum is attained at the limit.
* Different penalty and barrier parameters can be used for different constraints, even for the same problem.
# Physics lab with Python
## Syllabus
* Simulation of (first-order) ODEs [higher-order ones coming soon!]
* Data analysis
  - Data transformation, filtering
  - Model fitting
  - Integration
  - Differentiation
* Data acquisition
* Plots
We import the libraries: _numpy_ for numerical analysis, _scipy_ for integration and fitting functions, and _matplotlib_ for plotting. While we are at it, we define the plotting settings.
```python
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
from matplotlib import animation
import scipy.optimize as opt
%matplotlib inline
# Matplotlib plot styles.
plt.rcParams["figure.figsize"] = (5 * (1 + np.sqrt(5)) / 2, 5)
plt.rcParams["lines.linewidth"] = 2.5
plt.rcParams["ytick.labelsize"] = 12
plt.rcParams["xtick.labelsize"] = 12
plt.rcParams["axes.labelsize"] = 20
plt.rcParams["axes.grid"] = True
```
## Circuit description
In this workshop we will analyze the following circuit, which corresponds to an RC circuit.
To find the voltage across the capacitor we use Kirchhoff's second law, which tells us that
$v_{in} = v_{R} + v_{c}$
$ v_{in}(t) = R\;i(t) + \frac{1}{C} \int_{0}^{t} i(\tau) d\tau $
which, if we differentiate with respect to time and multiply by $C$, finally leads to the differential equation
$$ RC \; \frac{d i(t)}{dt} + i(t) = C \frac{d v_{in}(t)}{dt} $$
For a step signal, which is simulated with a square wave, it can be shown (see https://en.wikipedia.org/wiki/Heaviside_step_function) that the derivative is zero for $t>0$, which is the time we care about, so we will finally solve the following equation
$$ RC \; \frac{d i(t)}{dt} + i(t) = 0 $$
$$\frac{d i(t)}{dt} = - \frac{1}{RC} i(t)$$
Let us move on to the simulation itself, for which we define the parameter
$$\tau = RC$$
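A minimal sketch of that simulation (the values of R, C and the initial current below are illustrative assumptions; only the qualitative exponential decay matters):
```python
# Simulate di/dt = -i/tau with scipy and plot the exponential decay.
from scipy.integrate import solve_ivp

R, C = 10e3, 1e-6                     # assumed values: 10 kOhm and 1 uF -> tau = 10 ms
tau = R * C
sol = solve_ivp(lambda t, i: -i / tau, t_span=(0, 5 * tau), y0=[1.0],
                t_eval=np.linspace(0, 5 * tau, 200))
plt.plot(sol.t, sol.y[0])
plt.xlabel("t [s]")
plt.ylabel("i(t) [a.u.]")
plt.show()
```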
## Data analysis
Whether the data were just acquired or had already been acquired, let us now analyze them.
```python
data = np.loadtxt("RC.csv")  # load previously acquired data
```
```python
print(type(data))
#print(dir(data))  # uncomment to list the ndarray's attributes and methods
print(data)
```
<class 'numpy.ndarray'>
[[ 8.00000000e+00 2.70000000e+01]
[ 1.28000000e+02 9.40000000e+01]
[ 2.48000000e+02 1.58000000e+02]
[ 3.68000000e+02 2.16000000e+02]
[ 4.88000000e+02 2.71000000e+02]
[ 6.08000000e+02 3.22000000e+02]
[ 7.28000000e+02 3.69000000e+02]
[ 8.48000000e+02 4.14000000e+02]
[ 9.68000000e+02 4.55000000e+02]
[ 1.08800000e+03 4.93000000e+02]
[ 1.20800000e+03 5.29000000e+02]
[ 1.32800000e+03 5.63000000e+02]
[ 1.44800000e+03 5.94000000e+02]
[ 1.56800000e+03 6.23000000e+02]
[ 1.68800000e+03 6.50000000e+02]
[ 1.81600000e+03 6.77000000e+02]
[ 1.93600000e+03 7.00000000e+02]
[ 2.05600000e+03 7.22000000e+02]
[ 2.17600000e+03 7.43000000e+02]
[ 2.29600000e+03 7.62000000e+02]
[ 2.41600000e+03 7.79000000e+02]
[ 2.53600000e+03 7.96000000e+02]
[ 2.65600000e+03 8.11000000e+02]
[ 2.77600000e+03 8.25000000e+02]
[ 2.89600000e+03 8.39000000e+02]
[ 3.01600000e+03 8.51000000e+02]
[ 3.13600000e+03 8.63000000e+02]
[ 3.25600000e+03 8.74000000e+02]
[ 3.37600000e+03 8.84000000e+02]
[ 3.49600000e+03 8.93000000e+02]
[ 3.61600000e+03 9.02000000e+02]
[ 3.73600000e+03 9.11000000e+02]
[ 3.85600000e+03 9.18000000e+02]
[ 3.97600000e+03 9.25000000e+02]
[ 4.09600000e+03 9.32000000e+02]
[ 4.21600000e+03 9.38000000e+02]
[ 4.33600000e+03 9.44000000e+02]
[ 4.45600000e+03 9.49000000e+02]
[ 4.57600000e+03 9.54000000e+02]
[ 4.69600000e+03 9.59000000e+02]
[ 4.81600000e+03 9.63000000e+02]
[ 4.93600000e+03 9.67000000e+02]
[ 5.05600000e+03 9.71000000e+02]
[ 5.17600000e+03 9.74000000e+02]
[ 5.29600000e+03 9.78000000e+02]
[ 5.41600000e+03 9.81000000e+02]
[ 5.53600000e+03 9.84000000e+02]
[ 5.65600000e+03 9.86000000e+02]
[ 5.77600000e+03 9.89000000e+02]
[ 5.89600000e+03 9.91000000e+02]
[ 6.01600000e+03 9.93000000e+02]
[ 6.13600000e+03 9.95000000e+02]
[ 6.25600000e+03 9.97000000e+02]
[ 6.37600000e+03 9.99000000e+02]
[ 6.49600000e+03 1.00100000e+03]
[ 6.61600000e+03 1.00200000e+03]
[ 6.73600000e+03 1.00400000e+03]
[ 6.85600000e+03 1.00500000e+03]
[ 6.97600000e+03 1.00600000e+03]
[ 7.09600000e+03 1.00700000e+03]
[ 7.21600000e+03 1.00800000e+03]
[ 7.33600000e+03 1.00900000e+03]
[ 7.45600000e+03 1.01000000e+03]
[ 7.57600000e+03 1.01100000e+03]
[ 7.70400000e+03 9.94000000e+02]
[ 7.82400000e+03 9.26000000e+02]
[ 7.94400000e+03 8.63000000e+02]
[ 8.06400000e+03 8.04000000e+02]
[ 8.18400000e+03 7.49000000e+02]
[ 8.30400000e+03 6.98000000e+02]
[ 8.42400000e+03 6.51000000e+02]
[ 8.54400000e+03 6.07000000e+02]
[ 8.66400000e+03 5.66000000e+02]
[ 8.78400000e+03 5.27000000e+02]
[ 8.90400000e+03 4.92000000e+02]
[ 9.02400000e+03 4.58000000e+02]
[ 9.14400000e+03 4.27000000e+02]
[ 9.26400000e+03 3.98000000e+02]
[ 9.38400000e+03 3.71000000e+02]
[ 9.50400000e+03 3.46000000e+02]
[ 9.62400000e+03 3.22000000e+02]
[ 9.74400000e+03 3.00000000e+02]
[ 9.86400000e+03 2.80000000e+02]
[ 9.98400000e+03 2.61000000e+02]
[ 1.01040000e+04 2.43000000e+02]
[ 1.02240000e+04 2.26000000e+02]
[ 1.03440000e+04 2.11000000e+02]
[ 1.04640000e+04 1.97000000e+02]
[ 1.05840000e+04 1.83000000e+02]
[ 1.07040000e+04 1.71000000e+02]
[ 1.08240000e+04 1.54000000e+02]
[ 1.10000000e+04 1.44000000e+02]
[ 1.11400000e+04 1.33000000e+02]
[ 1.12560000e+04 1.24000000e+02]
[ 1.13760000e+04 1.16000000e+02]
[ 1.14960000e+04 1.08000000e+02]
[ 1.16160000e+04 1.00000000e+02]
[ 1.17360000e+04 9.30000000e+01]
[ 1.18560000e+04 8.70000000e+01]
[ 1.19760000e+04 8.10000000e+01]
[ 1.21120000e+04 7.50000000e+01]
[ 1.22320000e+04 7.00000000e+01]
[ 1.23520000e+04 6.50000000e+01]
[ 1.24720000e+04 6.00000000e+01]
[ 1.25920000e+04 5.60000000e+01]
[ 1.27120000e+04 5.20000000e+01]
[ 1.28320000e+04 4.90000000e+01]
[ 1.29520000e+04 4.50000000e+01]
[ 1.30720000e+04 4.20000000e+01]
[ 1.31920000e+04 3.90000000e+01]
[ 1.33120000e+04 3.60000000e+01]
[ 1.34320000e+04 3.40000000e+01]
[ 1.35520000e+04 3.10000000e+01]
[ 1.36720000e+04 2.90000000e+01]
[ 1.37920000e+04 2.70000000e+01]
[ 1.39120000e+04 2.50000000e+01]
[ 1.40320000e+04 2.30000000e+01]
[ 1.41520000e+04 2.20000000e+01]
[ 1.42720000e+04 2.00000000e+01]
[ 1.43920000e+04 1.90000000e+01]
[ 1.45120000e+04 1.70000000e+01]
[ 1.46320000e+04 1.60000000e+01]
[ 1.47520000e+04 1.50000000e+01]
[ 1.48720000e+04 1.40000000e+01]
[ 1.49920000e+04 1.30000000e+01]
[ 1.51120000e+04 1.20000000e+01]
[ 1.52320000e+04 1.10000000e+01]
[ 1.53520000e+04 1.00000000e+01]]
Ahora imprimimos los datos. Como son dos tiras de datos, debemos imprimir varios
```python
plt.plot(data[:,0], data[:,1], "ro-");
```
Vamos a hacer un poco de análisis de datos, para eso, tomemos solamente una parte de la curva verde. Index slicing al rescate! Graficamos para revistar los resultados
```python
#Hay varias opciones para sacar elementos
#fitData = data[0:64]
fitData = data[data[:, 0] < 7700]
#Grafiquemos los resultados
plt.plot(fitData[:,0], fitData[:,1], 'bo-')
```
Para ajustar el modelo, que sabemos que es
$ V = V_0 ( 1 - e^{-B t}) $
primero construimos la función
```python
f = lambda x, A, B, C: A * np.exp(- B * x) + C
```
Y luego usamos la función curve_fit de scipy
```python
T = (fitData[:,0] - fitData[:,0].min()) * 1e-6
V = fitData[:,1]
ErrV = 20 #Este error corresponde solamente al instrumental
p0 = (-1023, 1000, 1024)
p, cov = opt.curve_fit(f, T, V, p0)
#Construyo una variable auxiliar del T
t = np.linspace(T.min(), T.max(), 1000)
plt.errorbar(T, V, yerr = ErrV, fmt = 'go')
#plt.plot(T, V, 'go')
plt.plot(t, f(t, *p), 'r-') #Grafico el ajuste
#Presento de una manera "linda" el ajuste
sigma = np.sqrt(np.diag(cov)) #la diagonal de la covarianza corresponde a la varianza, el "cuadrado del error"
print("f = A exp(-B t) + C")
print("A = {:.2f} +- {:.2f}".format(p[0], sigma[0])) #Le digo que ponga solo dos digitos después de la coma
print("B = {:.2f} +- {:.2f}".format(p[1], sigma[1]))
print("C = {:.2f} +- {:.2f}".format(p[2], sigma[2]))
```
La componentes corresponde a $R = (18,0 \pm 0,2)\text{k}\Omega$, $C = (0,10 \pm 0,01)\mu\text{F}$, con lo que la constante del circuito nos queda
$\tau = \dfrac{1}{RC} = (555 \pm 62) $
que nos permite corresponder el modelo propuesto con lo experimental.
También podemos ver la "bondad del ajuste", para lo que necesitamos el $\chi$-cuadrado. La función curve_fit no lo devuelve, pero podemos calcularlo rápidamente
```python
chi2Red = (np.power((V - f(T, *p)) / np.sqrt(ErrV), 2)).sum()/(len(V) - 2)
chi2Red
```
0.0039539953544748853
Acá tenemos el $\chi$-cuadrado reducido, que debe ser cercano a 1. Si es mucho menor que 1, los errores están sobre dimensionados; si es mucho mayor a 1, el ajuste puede ser rechazado. Para cuantificar la palabra "mucho" podemos usar el p-valor con el test de $\chi$-cuadrado (https://en.wikipedia.org/wiki/Goodness_of_fit)
## Derivación e integración
Ahora vamos a tomar datos para un integrador y un derivador. Para eso ejecuten la parte de adquisición de datos, si tienen el dispositivo, y obtengan los datos. Si no importenlos como ya hicimos
```python
data = np.loadtxt("RC_int.csv")
plt.plot(data[:,0], data[:,1], "ro-");
```
Derivemos los datos para ver que resultado tenemos. Para eso tenemos la función de _numpy.diff_. Notar que al diferenciar el resultado tiene un dato menos, por lo que se debe eliminar del "vector tiempo" data[:,0]
```python
plt.plot(data[:-1,0], np.diff(data[:,1]), 'go');
```
Se "parece" a una cuadrada, creemos datos que representen a una cuadrada e integremoslos y comparandolos. Para eso, debemos usar la función _scipy.integrate.cumtrapz_. Luego reescalamos la señal integrada para compararla con la cuadrada
```python
from scipy.integrate import cumtrapz
T = data[:,0]
N = int(data[:,1].shape[0] / 2)
V = np.concatenate((np.full(N, 1), np.zeros(N) - 1))
plt.plot(T,V,'go')
V_int = sp.integrate.cumtrapz(V, initial = 0)
V_int /= V_int.max()
plt.plot(T, V_int, 'ro')
```
Veamos ahora comparado los datos adquiridos respecto a la integración numérica, escalado para que sea comparable
```python
plt.plot(data[:,0], data[:,1] / data[:,1].max(), 'bo', T, V_int, 'go')
```
Se nota que el circuito RC se acerca a la integración numérica, pero debido a lo que se conoce como ganancia del filtro RC y la constante de tiempo $\tau$ no se llega a obtener un integrador real. Para mejorar esto, en general, se debe usar circuitos activos (ver https://en.wikipedia.org/wiki/Active_filter), pero requieren más experiencia de diseño
## Simulación del circuito
### Simulación numérica
Conociendo el comportamiento del circuito y la forma funcional de su derivada podemos usar las funciones que ya conocemos para simularlo. Recordemos que la ecuación diferencial había quedado como
$ \dfrac{d i(t)}{dt} = - \dfrac{1}{\tau} i(t)$
Esto lo podemos escribir como
```python
from scipy.integrate import odeint
tau = 10 #Varien el parámetro y vean el resultado
def f(y, t, tau):
return (-y * tau)
t = np.linspace(0, 1, 1000)
y0 = [1]
y = odeint(f, y0, t, args = (tau,))
plt.plot(t, y, 'r-');
```
Recordemos que la ecuación diferencial que estamos resolviendo corresponde a la corriente del circuito. Si en vez de eso queremos la tensión del capacitor, debemos integrar este resultado, ya que
$$ v_{c}(t) = \frac{1}{C} \int_{0}^{t} i(\tau) d\tau $$
Con la librería _scipy.integrate_, y la función _cumtrapz_, podemos integrar, ya que permite aplicar acumulativamente la regla del trapesoide.
```python
from scipy.integrate import cumtrapz
y_int = cumtrapz(y.ravel(), t)
plt.plot(t[:-1], y_int.ravel(),'g-')
```
### Simulación analítica
Para completar, vamos a encontrar la expresión analítica de la corriente y la tensión del capacitor, que nos va a permitir determinar el modelo y ajustar los datos
```python
#Usamos sympy, que tiene un tutorial muy completo en
#http://docs.sympy.org/latest/tutorial/
#Nostros no hacemos más que repetir
from sympy import symbols, Function, Eq, dsolve, integrate, collect_const
#Creo los simbolos, es decir variables con significado simbolico
t, tau = symbols('t, tau')
i = symbols("i", cls=Function)
diffEq = Eq(i(t).diff(t), -i(t)/tau)
sol = dsolve(diffEq)
sol
```
i(t) == C1*exp(-t/tau)
Esta expresión es la corriente, para la tensión del capacitor, debemos integrar este resultado
```python
C1, u, A = symbols('C1 u A') #Creo una variable de integración, la constante de integración y una variable A
g = sol.rhs.subs(t, u) #Tomo la parte izquierda de la igualdad, y elimino t por u para integrar
I = integrate(g, (u, 0, t))
collect_const(I.subs(tau*C1,A), A) #Esta función solo reodena el resultado, y remplazo tau*C1 -> A
```
A*(1 - exp(-t/tau))
Por lo que el modelo utilizado previamente es coherente con la solución de la ecuación diferencial. Con esto queda un pantallazo de toda la simulación, analítica y numérica, que se puede efectuar en Python. Nada mal
## Adquisición de datos
Acá están las funciones para adquisición de datos. **No es necesario ejecutarlas**, pero si tenés un dispositivo serie que devuelva una lista de datos ASCII separada por tabulaciones (como ser algunos osciloscopios, por ejemplo), esto te va a permitir guardarlo!. Mientras, está pensado para utilizarlo con un [Arduino](http://www.arduino.cc), que fue programado por nosotros (y [acá está el código]() en Processing para ver que se hace). La librería _pandas_ tiene herramientas muy poderosas de análisis y filtrado de datos, pero es más de lo que necesitamos en general y usamos solamente _numpy_
```python
import io
import time
import serial
import pandas as pd
from serial.tools import list_ports
def inputPort():
'''Obtiene la lista de puertos, la presenta en pantalla y da
a elegir un puerto, devolviendo el string para conectarse
'''
ports = list(list_ports.comports())
for i,p in enumerate(ports):
print("[{}]: Puerto {}".format(i + 1, p[1]))
port = ""
if len(ports) > 0:
port = ports[int(input("Ingrese el puerto serie: ")) - 1][0]
return port
def updateData(s):
'''Obtiene los datos que manda el Arduino/Teensy por UART USB'''
A = []
s.flushInput()
s.write(b"1")
time.sleep(2)
while True:
if ser.inWaiting() == 0:
break
A.append(s.read().decode())
A = "".join(A)
data = pd.read_csv(io.StringIO(A),sep="\t",names=["t","v"]).dropna(axis=0)
return data
port = inputPort()
ser = serial.Serial(port, "9600")
data = updateData(ser).values
ser.close()
np.savetxt("data.csv", data)
```
Grafiquemos los datos para estar seguros de que obtenimos el resultado!
```python
plt.plot(data[:,0], data[:,1], "ro-");
```
Para complentar, reimplementamos la adquisición de datos, pero la hacemos un bucle y presentamos en tiempo real la adquisición, con un intervalo de refresco de 2s, aproximadamente
```python
from IPython import display #Permite borrar la salida y volverla a cargar. Sirve para toda instancia de Ipython
inputPort()
ser = serial.Serial(port, "9600")
data = updateData(ser).values
ser.close()
for i in range(50):
plt.clf()
data = update()
plt.plot(data.t, data.V, "ro")
display.clear_output(wait=True)
display.display(plt.gcf())
#time.sleep(0.5) #Este delay no es necesario, ya está implementado en el update
plt.close()
```
| 28dcb52ac12618a2eec3535582e5e3b44dd9023c | 203,947 | ipynb | Jupyter Notebook | python/Extras/Arduino/laboratorio.ipynb | LTGiardino/talleresfifabsas | a711b4425b0811478f21e6c405eeb4a52e889844 | [
"MIT"
] | 17 | 2015-10-23T17:14:34.000Z | 2021-12-31T02:18:29.000Z | python/Extras/Arduino/laboratorio.ipynb | LTGiardino/talleresfifabsas | a711b4425b0811478f21e6c405eeb4a52e889844 | [
"MIT"
] | 5 | 2016-04-03T23:39:11.000Z | 2020-04-03T02:09:02.000Z | python/Extras/Arduino/laboratorio.ipynb | LTGiardino/talleresfifabsas | a711b4425b0811478f21e6c405eeb4a52e889844 | [
"MIT"
] | 29 | 2015-10-16T04:16:01.000Z | 2021-09-18T16:55:48.000Z | 225.355801 | 26,954 | 0.890388 | true | 6,268 | Qwen/Qwen-72B | 1. YES
2. YES | 0.894789 | 0.774583 | 0.693089 | __label__spa_Latn | 0.761206 | 0.448609 |
# Binet's Formula
## Formula
Explicit formula to find the nth term of the Fibonacci sequence.
$\displaystyle F_n = \frac{1}{\sqrt{5}} \Bigg(\Bigg( \frac{1 + \sqrt{5}}{2} \Bigg)^n - \Bigg( \frac{1 - \sqrt{5}}{2} \Bigg)^n \Bigg)$
*Derived by Jacques Philippe Marie Binet, alreday known by Abraham de Moivre*
----
## Fibonacci Sequence
The Fibonacci sequence iterates with the next value being the sum of the previous two:
$F_{n+1} = F_n + F_{n-1}$
```python
def fib(n):
a = b = 1
for _ in range(n):
yield a
a, b = b, a + b
", ".join([str(x) for x in fib(20)])
```
'1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765'
----
## Proof
### Fibonacci Ratios
The ratios of the Fibanacci sequence
$\displaystyle \lim_{n \rightarrow \infty} \frac{F_n}{F_{n-1}} = \varphi$ <br/>
$\displaystyle \frac{F_n}{F_{n-1}}$ converges to the limit $\phi$
```python
def fib_ratio(n):
a = b = 1
for _ in range(n):
yield a/b
a, b = b, a + b
", ".join(["{0:.6f}".format(x) for x in fib_ratio(20)])
```
'1.000000, 0.500000, 0.666667, 0.600000, 0.625000, 0.615385, 0.619048, 0.617647, 0.618182, 0.617978, 0.618056, 0.618026, 0.618037, 0.618033, 0.618034, 0.618034, 0.618034, 0.618034, 0.618034, 0.618034'
### Compose as a Geometric Sequence
This sequence resembles a geometric sequence. Geometric sequences have terms in the form of $G_n = a \cdot r^n$ .
Therefore $F_{n+1} = F_n + F_{n-1} \implies a \cdot r^{n+1} = a \cdot r^n + a \cdot r^{n-1} \implies r^2 = r + 1$.
### Resolve Quadratic
Using the Quadratic formula we find r as $1+\varphi$, or $\displaystyle\frac{1 \pm \sqrt{5}}{2}$
Let's declare $G_n = \displaystyle\frac{1 + \sqrt{5}}{2}$, and $H_n = \displaystyle\frac{1 - \sqrt{5}}{2}$
```python
# x^2−x−1=0
from sympy import *
from sympy.plotting import plot
from sympy.solvers import solve
init_printing()
x = symbols('x')
exp = x**2 - x -1
plot(exp, (x, -2, 2))
answers = solve(x**2 -x -1, x)
[ratsimp(a) for a in answers]
```
### Conclusion
Although neither $G_n$ and $H_n$ conform to the Fibonacci sequence, through induction, $G_n - H_n$ does.
To find $a$, we can see that $F_0 = G_0 - H_0 = 0, and F_1 = G_1 - H_1 = 1 \implies a = \frac{1}{\sqrt{5}}$
----
## References
- [Art of Problem Solving - Binet's Formula][3]
- [Art of Problem Solving - Geometric Sequence][2]
- [Art of Problem Solving - Fibonacci Sequence][1]
[1]: https://artofproblemsolving.com/wiki/index.php?title=Fibonacci_sequence
[2]: https://artofproblemsolving.com/wiki/index.php?title=Geometric_sequence
[3]: https://artofproblemsolving.com/wiki/index.php?title=Binet%27s_Formula
| 6e6e90986c00390ebd9217509a085ab7b81c477e | 21,750 | ipynb | Jupyter Notebook | notebooks/math/number_theory/binets_formula.ipynb | sparkboom/my_jupyter_notes | 9255e4236b27f0419cdd2c8a2159738d8fc383be | [
"MIT"
] | null | null | null | notebooks/math/number_theory/binets_formula.ipynb | sparkboom/my_jupyter_notes | 9255e4236b27f0419cdd2c8a2159738d8fc383be | [
"MIT"
] | null | null | null | notebooks/math/number_theory/binets_formula.ipynb | sparkboom/my_jupyter_notes | 9255e4236b27f0419cdd2c8a2159738d8fc383be | [
"MIT"
] | null | null | null | 106.097561 | 14,436 | 0.849011 | true | 996 | Qwen/Qwen-72B | 1. YES
2. YES | 0.913677 | 0.855851 | 0.781971 | __label__eng_Latn | 0.594059 | 0.655113 |
# Exercise 1
## JIT the pressure poisson equation
The equation we need to unroll is given by
\begin{equation}
p_{i,j}^{n} = \frac{1}{4}\left(p_{i+1,j}^{n}+p_{i-1,j}^{n}+p_{i,j+1}^{n}+p_{i,j-1}^{n}\right) - b
\end{equation}
and recall that `b` is already computed, so no need to worry about unrolling that. We've also filled in the boundary conditions, so don't worry about those. (don't forget to decorate your function!)
```python
import numpy
from numba import jit
```
```python
def pressure_poisson(p, b, l2_target=1e-4):
I, J = b.shape
iter_diff = l2_target + 1
n = 0
while iter_diff > l2_target and n <= 500:
pn = p.copy()
#Your code here
#boundary conditions
for i in range(I):
p[i, 0] = p[i, 1]
p[i, -1] = 0
for j in range(J):
p[0, j] = p[1, j]
p[-1, j] = p[-2, j]
if n % 10 == 0:
iter_diff = numpy.sqrt(numpy.sum((p - pn)**2)/numpy.sum(pn**2))
n += 1
return p
```
```python
import pickle
from snippets.ns_helper import cavity_flow, velocity_term, quiver_plot
```
```python
def run_cavity():
nx = 41
with open('../IC.pickle', 'rb') as f:
u, v, p, b = pickle.load(f)
dx = 2 / (nx - 1)
dt = .005
nt = 1000
u, v, p = cavity_flow(u, v, p, nt, dt, dx,
velocity_term,
pressure_poisson,
rtol=1e-4)
return u, v, p
```
```python
un, vn, pn = run_cavity()
```
```python
%timeit run_cavity()
```
```python
with open('../numpy_ans.pickle', 'rb') as f:
u, v, p = pickle.load(f)
```
```python
assert numpy.allclose(u, un)
assert numpy.allclose(v, vn)
assert numpy.allclose(p, pn)
```
# Exercise 2 (optional)
Finish early? Just want to try more stuff?
This line is not super efficient:
```python
iter_diff = numpy.sqrt(numpy.sum((p - pn)**2)/numpy.sum(pn**2))
```
Try rewriting it using a jitted function and see what kind of performance gain you can get.
```python
```
| 0e5936f7cf5cec621e0e31c2a188ada3e097a3e4 | 4,223 | ipynb | Jupyter Notebook | notebooks/exercises/05.Cavity.Flow.Exercises.ipynb | gforsyth/numba_tutorial_scipy2017 | 01befd25218783f6d3fb803f55dd9e52f6072ff7 | [
"CC-BY-4.0"
] | 131 | 2017-06-23T10:18:26.000Z | 2022-03-27T21:16:56.000Z | notebooks/exercises/05.Cavity.Flow.Exercises.ipynb | gforsyth/numba_tutorial_scipy2017 | 01befd25218783f6d3fb803f55dd9e52f6072ff7 | [
"CC-BY-4.0"
] | 9 | 2017-06-11T21:20:59.000Z | 2018-10-18T13:57:30.000Z | notebooks/exercises/05.Cavity.Flow.Exercises.ipynb | gforsyth/numba_tutorial_scipy2017 | 01befd25218783f6d3fb803f55dd9e52f6072ff7 | [
"CC-BY-4.0"
] | 64 | 2017-06-26T13:04:48.000Z | 2022-01-11T20:36:31.000Z | 23.461111 | 206 | 0.460336 | true | 653 | Qwen/Qwen-72B | 1. YES
2. YES | 0.880797 | 0.888759 | 0.782816 | __label__eng_Latn | 0.89803 | 0.657077 |
```python
from sympy import *
from IPython.display import display, Latex, HTML, Markdown
init_printing()
from eqn_manip import *
from codegen_extras import *
import codegen_extras
from importlib import reload
from sympy.codegen.ast import Assignment, For, CodeBlock, real, Variable, Pointer, Declaration
from sympy.codegen.cnodes import void
```
## Cubic Spline solver - derivation and code generation
### Tridiagonal Solver
From Wikipedia: https://en.wikipedia.org/wiki/Tridiagonal_matrix_algorithm
In the future it would be good to derive these equations from Gaussian elimintation (as on the Wikipedia page), but for now they are simply given.
```python
n = Symbol('n', integer=True)
i = Symbol('i', integer=True)
x = IndexedBase('x',shape=(n,))
dp = IndexedBase("d'",shape=(n,))
cp = IndexedBase("c'",shape=(n,))
a = IndexedBase("a",shape=(n,))
b = IndexedBase("b",shape=(n,))
c = IndexedBase("c",shape=(n,))
d = IndexedBase("d",shape=(n,))
```
```python
# forward sweep
# start/end using the natural range for math notation
#start = 1
#end = n
# Use the C++ range 0,n-1
start = 0
end = n-1
teq1 = Eq(cp[start], c[start]/b[start])
display(teq1)
teq2 = Eq(dp[start], d[start]/b[start])
display(teq2)
teq3 = Eq(dp[i],(d[i] - dp[i-1]*a[i])/ (b[i] - cp[i-1]*a[i]))
display(teq3)
teq4 = Eq(cp[i],c[i]/(b[i] - cp[i-1]*a[i]))
display(teq4)
```
$${c'}_{0} = \frac{{c}_{0}}{{b}_{0}}$$
$${d'}_{0} = \frac{{d}_{0}}{{b}_{0}}$$
$${d'}_{i} = \frac{- {a}_{i} {d'}_{i - 1} + {d}_{i}}{- {a}_{i} {c'}_{i - 1} + {b}_{i}}$$
$${c'}_{i} = \frac{{c}_{i}}{- {a}_{i} {c'}_{i - 1} + {b}_{i}}$$
```python
# backward sweep
teq5 = Eq(x[end],dp[end])
display(teq5)
teq6 = Eq(x[i],dp[i] - cp[i]*x[i+1])
display(teq6)
```
$${x}_{n - 1} = {d'}_{n - 1}$$
$${x}_{i} = - {c'}_{i} {x}_{i + 1} + {d'}_{i}$$
### Cubic Spline equations
Start with uniform knot spacing. The derivation is easier to see than in the case with general knot spacing.
```python
# Distance from the previous knot, for the case of uniform knot spacing
t = Symbol('t')
# Number of knots
n = Symbol('n', integer=True)
i = Symbol('i', integer=True)
# Function values to intepolated at the knots
y = IndexedBase('y',shape=(n,))
# Coefficients of the spline function
a,b,c,d = [IndexedBase(s, shape=(n,)) for s in 'a b c d'.split()]
# Cubic spline function
s = a + b*t + c*t*t + d*t**3
display(Eq(y,s))
# With indexed variables
si = a[i] + b[i]*t + c[i]*t*t + d[i]*t**3
display(Eq(y[i],si))
```
$$y = t^{3} d + t^{2} c + t b + a$$
$${y}_{i} = t^{3} {d}_{i} + t^{2} {c}_{i} + t {b}_{i} + {a}_{i}$$
### Strategy
To eventually reduce the equations to a tridiagonal form, express the equations in terms of the second derivative ($E$).
See the MathWorld page for cubic splines, which derives the equations in terms of the first derivative ($D$).
http://mathworld.wolfram.com/CubicSpline.html
```python
# Value at knots (t=0)
sp1 = Eq(si.subs(t,0), y[i])
sp1
```
$${a}_{i} = {y}_{i}$$
```python
# Value at knots (t=1)
sp2 = Eq(si.subs(t,1), y[i+1])
sp2
```
$${a}_{i} + {b}_{i} + {c}_{i} + {d}_{i} = {y}_{i + 1}$$
```python
# Express the second derivative at the beginning of the interval in terms of E
E = IndexedBase('E',shape=(n,))
sp3 = Eq(E[i], diff(si,t,2).subs(t,0))
sp3
```
$${E}_{i} = 2 {c}_{i}$$
```python
# Express the second derivative at the end of the interval in terms of E
sp4 = Eq(E[i+1], diff(si,t,2).subs(t,1))
sp4
```
$${E}_{i + 1} = 2 {c}_{i} + 6 {d}_{i}$$
```python
# Continuity of the first derivative
sp5 = Eq(diff(si,t).subs(t,1), diff(si,t).subs(t,0).subs(i,i+1))
sp5
```
$${b}_{i} + 2 {c}_{i} + 3 {d}_{i} = {b}_{i + 1}$$
### For general spacing of the knots
```python
L = IndexedBase('L',shape=(n,)) # L[i] = x[i+1] - x[i]
t = Symbol('t')
x = IndexedBase('x',shape=(n,))
si = a[i] + b[i]*t + c[i]*t*t + d[i]*t**3
```
```python
# Value at knots (t=0)
sp1 = Eq(si.subs(t,0), y[i])
sp1
```
$${a}_{i} = {y}_{i}$$
```python
# Value at next knot
sp2 = Eq(si.subs(t,L[i]), y[i+1])
sp2
```
$${L}_{i}^{3} {d}_{i} + {L}_{i}^{2} {c}_{i} + {L}_{i} {b}_{i} + {a}_{i} = {y}_{i + 1}$$
```python
# Express the second derivative at the beginning of the interval in terms of E
E = IndexedBase('E',shape=(n,))
sp3 = Eq(E[i], diff(si,t,2).subs(t,0))
sp3
```
$${E}_{i} = 2 {c}_{i}$$
```python
# Express the second derivative at the end of the interval in terms of E
sp4 = Eq(E[i+1], diff(si,t,2).subs(t,L[i]))
sp4
```
$${E}_{i + 1} = 6 {L}_{i} {d}_{i} + 2 {c}_{i}$$
```python
# Solve for spline coefficients in terms of E's
sln = solve([sp1,sp2,sp3,sp4], [a[i],b[i],c[i],d[i]])
sln
```
$$\left \{ {a}_{i} : {y}_{i}, \quad {b}_{i} : \frac{- \frac{\left({E}_{i + 1} + 2 {E}_{i}\right) {L}_{i}^{2}}{6} + {y}_{i + 1} - {y}_{i}}{{L}_{i}}, \quad {c}_{i} : \frac{{E}_{i}}{2}, \quad {d}_{i} : \frac{{E}_{i + 1} - {E}_{i}}{6 {L}_{i}}\right \}$$
```python
# also for i+1
sln1 = {k.subs(i,i+1):v.subs(i,i+1) for k,v in sln.items()}
sln1
```
$$\left \{ {a}_{i + 1} : {y}_{i + 1}, \quad {b}_{i + 1} : \frac{- \frac{\left(2 {E}_{i + 1} + {E}_{i + 2}\right) {L}_{i + 1}^{2}}{6} - {y}_{i + 1} + {y}_{i + 2}}{{L}_{i + 1}}, \quad {c}_{i + 1} : \frac{{E}_{i + 1}}{2}, \quad {d}_{i + 1} : \frac{- {E}_{i + 1} + {E}_{i + 2}}{6 {L}_{i + 1}}\right \}$$
```python
# Continuity of first derivatives at knots
# This will define the tridiagonal system to be solved
sp5 = Eq(diff(si,t).subs(t,L[i]), diff(si,t).subs(i, i+1).subs(t,0))
sp5
```
$$3 {L}_{i}^{2} {d}_{i} + 2 {L}_{i} {c}_{i} + {b}_{i} = {b}_{i + 1}$$
```python
sp6 = sp5.subs(sln).subs(sln1)
sp7 = expand(sp6)
sp7
```
$$\frac{{E}_{i + 1} {L}_{i}}{3} + \frac{{E}_{i} {L}_{i}}{6} + \frac{{y}_{i + 1}}{{L}_{i}} - \frac{{y}_{i}}{{L}_{i}} = - \frac{{E}_{i + 1} {L}_{i + 1}}{3} - \frac{{E}_{i + 2} {L}_{i + 1}}{6} - \frac{{y}_{i + 1}}{{L}_{i + 1}} + \frac{{y}_{i + 2}}{{L}_{i + 1}}$$
```python
sp8 = divide_terms(sp7, [E[i],E[i+1],E[i+2]], [y[i],y[i+1],y[i+2]])
display(sp8)
sp9 = mult_eqn(sp8,6)
display(sp9)
# The index 'i' used in the cubic spline equations is not the same 'i' used
# in the tridigonal solver. Here we need to make them match.
# The first foundary condition will the equation at index at 0.
# Adjust the indexing on this equation so i=1 is the index of the first continuity interval match
sp9 = sp9.subs(i,i-1)
```
$$\frac{{E}_{i + 1} {L}_{i + 1}}{3} + \frac{{E}_{i + 1} {L}_{i}}{3} + \frac{{E}_{i + 2} {L}_{i + 1}}{6} + \frac{{E}_{i} {L}_{i}}{6} = - \frac{{y}_{i + 1}}{{L}_{i}} + \frac{{y}_{i}}{{L}_{i}} - \frac{{y}_{i + 1}}{{L}_{i + 1}} + \frac{{y}_{i + 2}}{{L}_{i + 1}}$$
$$2 {E}_{i + 1} {L}_{i + 1} + 2 {E}_{i + 1} {L}_{i} + {E}_{i + 2} {L}_{i + 1} + {E}_{i} {L}_{i} = - \frac{6 {y}_{i + 1}}{{L}_{i}} + \frac{6 {y}_{i}}{{L}_{i}} - \frac{6 {y}_{i + 1}}{{L}_{i + 1}} + \frac{6 {y}_{i + 2}}{{L}_{i + 1}}$$
```python
# Extract the three coefficients in each row for the general case
symlist = [E[i-1],E[i],E[i+1],E[i+2]]
coeff1 = get_coeff_for(sp9.lhs, E[i-1], symlist)
display(coeff1)
coeff2 = get_coeff_for(sp9.lhs, E[i], symlist)
display(coeff2)
coeff3 = get_coeff_for(sp9.lhs, E[i+1], symlist)
display(coeff3)
```
$${L}_{i - 1}$$
$$2 {L}_{i - 1} + 2 {L}_{i}$$
$${L}_{i}$$
```python
# Now get the coefficients for the boundary conditions (first row and last row)
# Natural BC
bc_natural_start = Eq(E[i].subs(i,0),0)
display(bc_natural_start)
bc_natural_end = Eq(E[i].subs(i,end),0)
display(bc_natural_end)
# The coefficients and RHS for this BC are pretty simple. but we will follow
# a deterministic path for derivation anyway.
bc_natural_start_coeff1 = get_coeff_for(bc_natural_start.lhs, E[start],[E[start]])
display(bc_natural_start_coeff1)
bc_natural_start_coeff2 = get_coeff_for(bc_natural_start.lhs, E[start+1],[E[start],E[start+1]])
display(bc_natural_start_coeff2)
bc_natural_end_coeff1 = get_coeff_for(bc_natural_end.lhs, E[end-1],[E[end]])
display(bc_natural_end_coeff1)
bc_natural_end_coeff2 = get_coeff_for(bc_natural_end.lhs, E[end],[E[end]])
bc_natural_end_coeff2
```
$${E}_{0} = 0$$
$${E}_{n - 1} = 0$$
$$1$$
$$0$$
$$0$$
$$1$$
```python
# BC - first derivative specified at the beginning of the range
yp0 = Symbol('yp0')
eqbc1=Eq(diff(si,t).subs(t,0).subs(sln).subs(i,0), yp0)
display(eqbc1)
eqbc1b = divide_terms(expand(eqbc1),[E[0],E[1]],[y[0],y[1],yp0])
eqbc1c = mult_eqn(eqbc1b, 6)
display(eqbc1c)
bc_firstd_start_coeff1 = get_coeff_for(eqbc1c.lhs, E[0], [E[0],E[1]])
display(bc_firstd_start_coeff1)
bc_firstd_start_coeff2 = get_coeff_for(eqbc1c.lhs, E[1], [E[0],E[1]])
display(bc_firstd_start_coeff2)
```
$$\frac{- \frac{\left(2 {E}_{0} + {E}_{1}\right) {L}_{0}^{2}}{6} - {y}_{0} + {y}_{1}}{{L}_{0}} = yp_{0}$$
$$- 2 {E}_{0} {L}_{0} - {E}_{1} {L}_{0} = 6 yp_{0} + \frac{6 {y}_{0}}{{L}_{0}} - \frac{6 {y}_{1}}{{L}_{0}}$$
$$- 2 {L}_{0}$$
$$- {L}_{0}$$
```python
# For the general algorithm, the input parameters for the boundary conditions are
# - first derivative, if value is less than cutoff
# - second derivative is zero, if vlaue is greater than cutoff
bc_cutoff = 0.99e30
tbc_start_coeff1 = Piecewise((bc_firstd_start_coeff1, yp0 < bc_cutoff),(bc_natural_start_coeff1,True))
display(tbc_start_coeff1)
tbc_start_coeff2 = Piecewise((bc_firstd_start_coeff2, yp0 < bc_cutoff),(bc_natural_start_coeff2,True))
display(tbc_start_coeff2)
sym_bc_start_coeff1 = Symbol('bc_start1')
sym_bc_start_coeff2 = Symbol('bc_start2')
bc_eqs = [Eq(sym_bc_start_coeff1, tbc_start_coeff1)]
bc_eqs.append(Eq(sym_bc_start_coeff2, tbc_start_coeff2))
```
$$\begin{cases} - 2 {L}_{0} & \text{for}\: yp_{0} < 9.9 \cdot 10^{29} \\1 & \text{otherwise} \end{cases}$$
$$\begin{cases} - {L}_{0} & \text{for}\: yp_{0} < 9.9 \cdot 10^{29} \\0 & \text{otherwise} \end{cases}$$
```python
# BC - first derivative specified at the end of the range
ypn = Symbol('ypn')
eqbc2=Eq(diff(si,t).subs(t,L[end-1]).subs(sln).subs(i,end-1),ypn)
display(eqbc2)
eqbc2b = divide_terms(expand(eqbc2),[E[end-1],E[end]],[y[end-1],y[end],ypn])
display(eqbc2b)
eqbc2c = mult_eqn(eqbc2b, 6)
display(eqbc2c)
bc_firstd_end_coeff1 = get_coeff_for(eqbc2c.lhs, E[end-1],[E[end-1],E[end]])
display(bc_firstd_end_coeff1)
bc_firstd_end_coeff2 = get_coeff_for(eqbc2c.lhs, E[end],[E[end-1],E[end]])
display(bc_firstd_end_coeff2)
```
$$\frac{\left({E}_{n - 1} - {E}_{n - 2}\right) {L}_{n - 2}}{2} + \frac{- \frac{\left({E}_{n - 1} + 2 {E}_{n - 2}\right) {L}_{n - 2}^{2}}{6} + {y}_{n - 1} - {y}_{n - 2}}{{L}_{n - 2}} + {E}_{n - 2} {L}_{n - 2} = ypn$$
$$\frac{{E}_{n - 1} {L}_{n - 2}}{3} + \frac{{E}_{n - 2} {L}_{n - 2}}{6} = ypn - \frac{{y}_{n - 1}}{{L}_{n - 2}} + \frac{{y}_{n - 2}}{{L}_{n - 2}}$$
$$2 {E}_{n - 1} {L}_{n - 2} + {E}_{n - 2} {L}_{n - 2} = 6 ypn - \frac{6 {y}_{n - 1}}{{L}_{n - 2}} + \frac{6 {y}_{n - 2}}{{L}_{n - 2}}$$
$${L}_{n - 2}$$
$$2 {L}_{n - 2}$$
```python
# Create the conditional expression for the end BC
tbc_end_coeff1 = Piecewise((bc_firstd_end_coeff1, ypn < bc_cutoff),(bc_natural_end_coeff1, True))
display(tbc_end_coeff1)
sym_bc_end_coeff1 = Symbol('bc_end1')
bc_eqs.append(Eq(sym_bc_end_coeff1, tbc_end_coeff1))
tbc_end_coeff2 = Piecewise((bc_firstd_end_coeff2, ypn < bc_cutoff),(bc_natural_end_coeff2, True))
tbc_end_coeff2
display(tbc_end_coeff2)
sym_bc_end_coeff2 = Symbol('bc_end2')
bc_eqs.append(Eq(sym_bc_end_coeff2, tbc_end_coeff2))
```
$$\begin{cases} {L}_{n - 2} & \text{for}\: ypn < 9.9 \cdot 10^{29} \\0 & \text{otherwise} \end{cases}$$
$$\begin{cases} 2 {L}_{n - 2} & \text{for}\: ypn < 9.9 \cdot 10^{29} \\1 & \text{otherwise} \end{cases}$$
```python
# conditional expressions for RHS for boundary conditions
rhs_start = Piecewise((eqbc1c.rhs,yp0 < bc_cutoff),(bc_natural_start.rhs,True))
display(rhs_start)
rhs_end = Piecewise((eqbc2c.rhs, ypn < bc_cutoff), (bc_natural_end.rhs, True))
display(rhs_end)
sym_rhs_start = Symbol('rhs_start')
sym_rhs_end = Symbol('rhs_end')
bc_eqs.append(Eq(sym_rhs_start, rhs_start))
bc_eqs.append(Eq(sym_rhs_end, rhs_end))
bc_eqs
```
$$\begin{cases} 6 yp_{0} + \frac{6 {y}_{0}}{{L}_{0}} - \frac{6 {y}_{1}}{{L}_{0}} & \text{for}\: yp_{0} < 9.9 \cdot 10^{29} \\0 & \text{otherwise} \end{cases}$$
$$\begin{cases} 6 ypn - \frac{6 {y}_{n - 1}}{{L}_{n - 2}} + \frac{6 {y}_{n - 2}}{{L}_{n - 2}} & \text{for}\: ypn < 9.9 \cdot 10^{29} \\0 & \text{otherwise} \end{cases}$$
$$\left [ bc_{start1} = \begin{cases} - 2 {L}_{0} & \text{for}\: yp_{0} < 9.9 \cdot 10^{29} \\1 & \text{otherwise} \end{cases}, \quad bc_{start2} = \begin{cases} - {L}_{0} & \text{for}\: yp_{0} < 9.9 \cdot 10^{29} \\0 & \text{otherwise} \end{cases}, \quad bc_{end1} = \begin{cases} {L}_{n - 2} & \text{for}\: ypn < 9.9 \cdot 10^{29} \\0 & \text{otherwise} \end{cases}, \quad bc_{end2} = \begin{cases} 2 {L}_{n - 2} & \text{for}\: ypn < 9.9 \cdot 10^{29} \\1 & \text{otherwise} \end{cases}, \quad rhs_{start} = \begin{cases} 6 yp_{0} + \frac{6 {y}_{0}}{{L}_{0}} - \frac{6 {y}_{1}}{{L}_{0}} & \text{for}\: yp_{0} < 9.9 \cdot 10^{29} \\0 & \text{otherwise} \end{cases}, \quad rhs_{end} = \begin{cases} 6 ypn - \frac{6 {y}_{n - 1}}{{L}_{n - 2}} + \frac{6 {y}_{n - 2}}{{L}_{n - 2}} & \text{for}\: ypn < 9.9 \cdot 10^{29} \\0 & \text{otherwise} \end{cases}\right ]$$
### Substitute cubic spline equations into tridiagonal solver
```python
subslist = {
a[start] : 0,
a[i] : coeff1,
a[end] : sym_bc_end_coeff1,
b[start] : sym_bc_start_coeff1,
b[i] : coeff2,
b[end] : sym_bc_end_coeff2,
c[start] : sym_bc_start_coeff2,
c[i] : coeff3,
c[end] : 0,
d[start] : sym_rhs_start,
d[i] : sp9.rhs,
d[end] : sym_rhs_end,
}
# Replace knot spacing with differences bewteen knot locations
subsL = {
L[i] : x[i+1] - x[i],
L[i+1] : x[i+2] - x[i+1],
L[i-1] : x[i] - x[i-1],
L[start] : x[start+1]-x[start],
L[start+1] : x[start+2]-x[start+1],
L[end-1] : x[end] - x[end-1],
}
subslist
```
$$\left \{ {a}_{0} : 0, \quad {a}_{i} : {L}_{i - 1}, \quad {a}_{n - 1} : bc_{end1}, \quad {b}_{0} : bc_{start1}, \quad {b}_{i} : 2 {L}_{i - 1} + 2 {L}_{i}, \quad {b}_{n - 1} : bc_{end2}, \quad {c}_{0} : bc_{start2}, \quad {c}_{i} : {L}_{i}, \quad {c}_{n - 1} : 0, \quad {d}_{0} : rhs_{start}, \quad {d}_{i} : \frac{6 {y}_{i + 1}}{{L}_{i}} - \frac{6 {y}_{i}}{{L}_{i}} + \frac{6 {y}_{i - 1}}{{L}_{i - 1}} - \frac{6 {y}_{i}}{{L}_{i - 1}}, \quad {d}_{n - 1} : rhs_{end}\right \}$$
```python
# Substitute into the tridiagonal solver
display(teq1.subs(subslist))
teq2b = teq2.subs(subslist).subs(subsL)
display(teq2b)
teq3b = simplify(teq3.subs(subslist).subs(subsL))
display(teq3b)
teq4b = teq4.subs(subslist).subs(subsL)
display(teq4b)
teq5b = Eq(teq5.lhs,teq5.rhs.subs(dp[end],teq3.rhs).subs(i,end).subs(subslist))
display(teq5b)
display(teq6.subs(subslist))
```
$${c'}_{0} = \frac{bc_{start2}}{bc_{start1}}$$
$${d'}_{0} = \frac{rhs_{start}}{bc_{start1}}$$
$${d'}_{i} = \frac{- \left({x}_{i + 1} - {x}_{i}\right) \left({x}_{i - 1} - {x}_{i}\right)^{2} {d'}_{i - 1} + 6 \left({x}_{i + 1} - {x}_{i}\right) \left({y}_{i - 1} - {y}_{i}\right) + 6 \left({x}_{i - 1} - {x}_{i}\right) \left(- {y}_{i + 1} + {y}_{i}\right)}{\left({x}_{i + 1} - {x}_{i}\right) \left({x}_{i - 1} - {x}_{i}\right) \left(- \left({x}_{i - 1} - {x}_{i}\right) {c'}_{i - 1} - 2 {x}_{i + 1} + 2 {x}_{i - 1}\right)}$$
$${c'}_{i} = \frac{{x}_{i + 1} - {x}_{i}}{- \left(- {x}_{i - 1} + {x}_{i}\right) {c'}_{i - 1} + 2 {x}_{i + 1} - 2 {x}_{i - 1}}$$
$${x}_{n - 1} = \frac{- bc_{end1} {d'}_{n - 2} + rhs_{end}}{- bc_{end1} {c'}_{n - 2} + bc_{end2}}$$
$${x}_{i} = - {c'}_{i} {x}_{i + 1} + {d'}_{i}$$
```python
# Extract sub-expressions
subexpr, final_expr = cse([simplify(teq3b),simplify(teq4b)],symbols=numbered_symbols('z'))
display(subexpr)
display(final_expr)
```
$$\left [ \left ( z_{0}, \quad - {x}_{i}\right ), \quad \left ( z_{1}, \quad z_{0} + {x}_{i + 1}\right ), \quad \left ( z_{2}, \quad z_{0} + {x}_{i - 1}\right ), \quad \left ( z_{3}, \quad 2 {x}_{i + 1}\right ), \quad \left ( z_{4}, \quad 2 {x}_{i - 1}\right ), \quad \left ( z_{5}, \quad z_{2} {c'}_{i - 1}\right ), \quad \left ( z_{6}, \quad - {y}_{i}\right )\right ]$$
$$\left [ {d'}_{i} = \frac{z_{1} z_{2}^{2} {d'}_{i - 1} - 6 z_{1} \left(z_{6} + {y}_{i - 1}\right) + 6 z_{2} \left(z_{6} + {y}_{i + 1}\right)}{z_{1} z_{2} \left(z_{3} - z_{4} + z_{5}\right)}, \quad {c'}_{i} = \frac{- {x}_{i + 1} + {x}_{i}}{- z_{3} + z_{4} - z_{5}}\right ]$$
```python
# Substitute knot spacing into the boundary conditions
bc_eqs2 = [eq.subs(subsL) for eq in bc_eqs]
bc_eqs2
```
$$\left [ bc_{start1} = \begin{cases} 2 {x}_{0} - 2 {x}_{1} & \text{for}\: yp_{0} < 9.9 \cdot 10^{29} \\1 & \text{otherwise} \end{cases}, \quad bc_{start2} = \begin{cases} {x}_{0} - {x}_{1} & \text{for}\: yp_{0} < 9.9 \cdot 10^{29} \\0 & \text{otherwise} \end{cases}, \quad bc_{end1} = \begin{cases} {x}_{n - 1} - {x}_{n - 2} & \text{for}\: ypn < 9.9 \cdot 10^{29} \\0 & \text{otherwise} \end{cases}, \quad bc_{end2} = \begin{cases} 2 {x}_{n - 1} - 2 {x}_{n - 2} & \text{for}\: ypn < 9.9 \cdot 10^{29} \\1 & \text{otherwise} \end{cases}, \quad rhs_{start} = \begin{cases} 6 yp_{0} + \frac{6 {y}_{0}}{- {x}_{0} + {x}_{1}} - \frac{6 {y}_{1}}{- {x}_{0} + {x}_{1}} & \text{for}\: yp_{0} < 9.9 \cdot 10^{29} \\0 & \text{otherwise} \end{cases}, \quad rhs_{end} = \begin{cases} 6 ypn - \frac{6 {y}_{n - 1}}{{x}_{n - 1} - {x}_{n - 2}} + \frac{6 {y}_{n - 2}}{{x}_{n - 1} - {x}_{n - 2}} & \text{for}\: ypn < 9.9 \cdot 10^{29} \\0 & \text{otherwise} \end{cases}\right ]$$
```python
# Use temporary storage for cp, and reuse output vector for dp
# In the future there should be some dependency analysis to verify this is a legal transformation
tmp = IndexedBase('u',shape=(n,))
y2 = IndexedBase('y2',shape=(n,))
storage_subs = {cp:y2, dp:tmp}
#storage_subs = {}
teq1c = teq1.subs(subslist).subs(storage_subs)
display(teq1c)
teq2c = teq2b.subs(subslist).subs(storage_subs)
display(teq2c)
teq3c = final_expr[0].subs(storage_subs)
display(teq3c)
teq4c = final_expr[1].subs(storage_subs)
display(teq4c)
teq5c = teq5b.subs(storage_subs).subs(x,y2)
display(teq5c)
teq6c = teq6.subs(storage_subs).subs(x,y2)
display(teq6c)
```
$${y_{2}}_{0} = \frac{bc_{start2}}{bc_{start1}}$$
$${u}_{0} = \frac{rhs_{start}}{bc_{start1}}$$
$${u}_{i} = \frac{z_{1} z_{2}^{2} {u}_{i - 1} - 6 z_{1} \left(z_{6} + {y}_{i - 1}\right) + 6 z_{2} \left(z_{6} + {y}_{i + 1}\right)}{z_{1} z_{2} \left(z_{3} - z_{4} + z_{5}\right)}$$
$${y_{2}}_{i} = \frac{- {x}_{i + 1} + {x}_{i}}{- z_{3} + z_{4} - z_{5}}$$
$${y_{2}}_{n - 1} = \frac{- bc_{end1} {u}_{n - 2} + rhs_{end}}{- bc_{end1} {y_{2}}_{n - 2} + bc_{end2}}$$
$${y_{2}}_{i} = {u}_{i} - {y_{2}}_{i + 1} {y_{2}}_{i}$$
```python
# Now for some code generation
#reload(codegen_more)
#from codegen_more import *
```
```python
templateT = Type('T')
```
```python
# forward sweep
fr = ARange(start+1,end,1)
body = []
for e in subexpr:
body.append(Variable(e[0],type=templateT).as_Declaration(value=e[1].subs(storage_subs)))
body.append(convert_eq_to_assignment(teq3c))
body.append(convert_eq_to_assignment(teq4c))
loop1 = For(i,fr,body)
```
```python
# backward sweep
br = ARangeClosedEnd(end-1,start,-1)
loop2 = For(i,br,[convert_eq_to_assignment(teq6c)])
```
```python
tmp_init = VariableWithInit("n",tmp,type=Type("std::vector<T>")).as_Declaration()
bc_tmps = []
for e in bc_eqs2:
bc_tmps.append(Variable(e.lhs, type=templateT).as_Declaration(value=e.rhs))
algo = CodeBlock(tmp_init,
*bc_tmps,
convert_eq_to_assignment(teq1c),
convert_eq_to_assignment(teq2c),
loop1,
convert_eq_to_assignment(teq5c),
loop2)
```
```python
# Generate the inner part of the algorithm to check it
ACP = ACodePrinter()
s = ACP.doprint(algo)
print(s)
```
// Not supported in C++:
// IndexedBase
std::vector<T> u(n);
T bc_start1 = ((yp0 < 9.9000000000000002e+29) ? (
2*x[0] - 2*x[1]
)
: (
1
));
T bc_start2 = ((yp0 < 9.9000000000000002e+29) ? (
x[0] - x[1]
)
: (
0
));
T bc_end1 = ((ypn < 9.9000000000000002e+29) ? (
x[n - 1] - x[n - 2]
)
: (
0
));
T bc_end2 = ((ypn < 9.9000000000000002e+29) ? (
2*x[n - 1] - 2*x[n - 2]
)
: (
1
));
T rhs_start = ((yp0 < 9.9000000000000002e+29) ? (
6*yp0 + 6*y[0]/(-x[0] + x[1]) - 6*y[1]/(-x[0] + x[1])
)
: (
0
));
T rhs_end = ((ypn < 9.9000000000000002e+29) ? (
6*ypn - 6*y[n - 1]/(x[n - 1] - x[n - 2]) + 6*y[n - 2]/(x[n - 1] - x[n - 2])
)
: (
0
));
y2[0] = bc_start2/bc_start1;
u[0] = rhs_start/bc_start1;
for (auto i = 1; i < n - 1; i += 1) {
T z0 = -x[i];
T z1 = z0 + x[i + 1];
T z2 = z0 + x[i - 1];
T z3 = 2*x[i + 1];
T z4 = 2*x[i - 1];
T z5 = z2*y2[i - 1];
T z6 = -y[i];
u[i] = (z1*z2*z2*u[i - 1] - 6*z1*(z6 + y[i - 1]) + 6*z2*(z6 + y[i + 1]))/(z1*z2*(z3 - z4 + z5));
y2[i] = (-x[i + 1] + x[i])/(-z3 + z4 - z5);
};
y2[n - 1] = (-bc_end1*u[n - 2] + rhs_end)/(-bc_end1*y2[n - 2] + bc_end2);
for (auto i = n - 2; i >= 0; i += -1) {
y2[i] = u[i] - y2[i + 1]*y2[i];
};
```python
# Set up to create a template function
tx = Pointer(x,type=templateT)
ty = Pointer(y,type=templateT)
ty2 = Pointer(y2,type=templateT)
yp0_var = Variable('yp0',type=templateT)
ypn_var = Variable('ypn',type=templateT)
tf = TemplateFunctionDefinition(void, "cubic_spline_solve",[tx,ty,n,yp0_var,ypn_var,ty2],[templateT],algo)
```
```python
ACP = ACodePrinter()
s = ACP.doprint(tf)
print(s)
```
// Not supported in C++:
// IndexedBase
// IndexedBase
// IndexedBase
// IndexedBase
template<typename T>
void cubic_spline_solve(T * x, T * y, int n, T yp0, T ypn, T * y2){
std::vector<T> u(n);
T bc_start1 = ((yp0 < 9.9000000000000002e+29) ? (
2*x[0] - 2*x[1]
)
: (
1
));
T bc_start2 = ((yp0 < 9.9000000000000002e+29) ? (
x[0] - x[1]
)
: (
0
));
T bc_end1 = ((ypn < 9.9000000000000002e+29) ? (
x[n - 1] - x[n - 2]
)
: (
0
));
T bc_end2 = ((ypn < 9.9000000000000002e+29) ? (
2*x[n - 1] - 2*x[n - 2]
)
: (
1
));
T rhs_start = ((yp0 < 9.9000000000000002e+29) ? (
6*yp0 + 6*y[0]/(-x[0] + x[1]) - 6*y[1]/(-x[0] + x[1])
)
: (
0
));
T rhs_end = ((ypn < 9.9000000000000002e+29) ? (
6*ypn - 6*y[n - 1]/(x[n - 1] - x[n - 2]) + 6*y[n - 2]/(x[n - 1] - x[n - 2])
)
: (
0
));
y2[0] = bc_start2/bc_start1;
u[0] = rhs_start/bc_start1;
for (auto i = 1; i < n - 1; i += 1) {
T z0 = -x[i];
T z1 = z0 + x[i + 1];
T z2 = z0 + x[i - 1];
T z3 = 2*x[i + 1];
T z4 = 2*x[i - 1];
T z5 = z2*y2[i - 1];
T z6 = -y[i];
u[i] = (z1*z2*z2*u[i - 1] - 6*z1*(z6 + y[i - 1]) + 6*z2*(z6 + y[i + 1]))/(z1*z2*(z3 - z4 + z5));
y2[i] = (-x[i + 1] + x[i])/(-z3 + z4 - z5);
};
y2[n - 1] = (-bc_end1*u[n - 2] + rhs_end)/(-bc_end1*y2[n - 2] + bc_end2);
for (auto i = n - 2; i >= 0; i += -1) {
y2[i] = u[i] - y2[i + 1]*y2[i];
};
}
```python
```
```python
```
| 7b01ff500c6cfa45f4fff37c44f8da2857c39ab1 | 58,317 | ipynb | Jupyter Notebook | Wavefunctions/CubicSplineSolver.ipynb | QMCPACK/qmc_algorithms | 015fd1973e94f98662149418adc6b06dcd78946d | [
"MIT"
] | 3 | 2018-02-06T06:15:19.000Z | 2019-11-26T23:54:53.000Z | Wavefunctions/CubicSplineSolver.ipynb | chrinide/qmc_algorithms | 015fd1973e94f98662149418adc6b06dcd78946d | [
"MIT"
] | null | null | null | Wavefunctions/CubicSplineSolver.ipynb | chrinide/qmc_algorithms | 015fd1973e94f98662149418adc6b06dcd78946d | [
"MIT"
] | 4 | 2017-11-14T20:25:00.000Z | 2022-02-28T06:02:01.000Z | 31.403877 | 1,028 | 0.365434 | true | 9,653 | Qwen/Qwen-72B | 1. YES
2. YES | 0.859664 | 0.817574 | 0.702839 | __label__eng_Latn | 0.199713 | 0.471262 |
```python
from decodes.core import *
from decodes.io.jupyter_out import JupyterOut
import math
out = JupyterOut.unit_square( )
```
# Transformation Mathematics
We are familiar with a set of operations in CAD designated by verbs, such as “Move”, “Mirror”, “Rotate”, and “Scale”, and that ***act upon a geometric object to produce the same kind of object, only transformed***.
Operations such as these are termed ***transformations or transforms***. After an object has undergone a transformation, we can observe that certain properties of the object are altered while others are preserved.
Mathematicians employ a number of terms (such as ***congruency, isometry, similarity, and affinity***) to classify transformations by the features they preserve and those they distort.
Consider, for example, the axonometric projection transformation, which projects geometry onto a plane in such a way that parallel lines are mapped onto parallel lines, thereby maintaining parallelism.
Mathematically speaking, we say that ***a transformation of a space onto itself is a rule which assigns to every point $P$ in the space another point $P^*$ in the space***.
The simple reflection that was constructed by compass and straightedge is an example of a transformation of the plane (or any point on the plane) onto itself (in that all points end up somewhere else on the plane).
Notice that, in the case of the mirror reflection, ***the kinds of things that come out after being reflected are the same kinds of things that go in***; namely, a reflected line is still a line with the same length, a reflected circle is still a circle with the same radius, and a reflected curve is still the same curve with all of the same geometric properties.
There are other transformations that do not preserve geometric features in the same way. Consider the ***circle inversion transformation***, which can be expressed as the function below.
\begin{align}
T(x,y) = (\frac{R^2x}{x^2+y^2},\frac{R^2y}{x^2+y^2})
\end{align}
The geometric properties that are preserved here may be more difficult to discern.
Knowing that the graphic shows the inversion of points on a hexagonal grid, we can understand that while the linearity of lines are not preserved, the angle between two lines or curves is preserved.
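As a concrete check, here is a minimal sketch of this inversion rule in plain Python, independent of the decodes types; the radius $R$ of the circle of inversion is an assumed parameter, set to 1.0 by default.

```python
# a minimal sketch of the circle inversion T(x,y) given above (plain Python);
# R, the radius of the circle of inversion, is assumed to be 1.0 by default
def circle_inversion(x, y, R=1.0):
    d2 = x**2 + y**2  # squared distance from the origin
    return (R**2 * x / d2, R**2 * y / d2)

# points outside the circle of inversion map inside it, and vice versa
print(circle_inversion(2.0, 0.0))   # (0.5, 0.0)
print(circle_inversion(0.5, 0.5))   # (1.0, 1.0)
```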
How and under what circumstances different classes of geometric features are preserved is an important and distinguishing property of transformations.
Even though a transformation is formally defined as any function that takes a point and gives a point back in return, we will find it beneficial to narrow this definition to include only those transformations that may be represented by a particularly useful mathematical construct: ***the matrix***.
A matrix is ***a structure for organizing sets of values in rows and columns, such that
these values may be operated upon by a set of algebraic rules***.
Matrix algebra underlies much of geometric computation. The expenditure of just a bit of effort in mastering the fundamentals of this potentially imposing mathematical construct will yield a wealth of insight in return.
## Matrix Fundamentals
A brief account of the matrix will serve to ground our understanding of how transformations work in computer graphics in general, and will offer a basis for the implementation of transformations in the Decod.es library in particular.
For this we will need a grasp of the basic notation for writing matrices, and a working understanding of how they are used to perform operations.
In this section, we:
* Detail the relevant notational conventions
* Present the algebra of matrices
* Demonstrate their basic operation on a simple example of transforming 2d vectors
### Matrix Notation
A mathematical matrix is much like its namesake in code: a two-dimensional array that organizes values into regular rows and columns.
An ***m x n matrix*** (read as “*m by n*”), denoted throughout this chapter as $(m \times n)$, is an arrangement of elements into ***m rows*** and ***n columns***. Any matrix for which the number of rows and the number of columns are the same may be termed a square matrix.
By convention, the notation for a generic element contained within a matrix is $c_{ij}$, with the subscript index $i$ indicating the containing row, and the index $j$ the containing column.
Note that the conventional ordering of the indices of a matrix is the reverse of the `(x,y)` convention that we are accustomed to in describing horizontal and vertical positions.
Also, positions are numbered starting at the top left, and the indexing starts with `(1,1)`, not with `(0,0)` as we have become accustomed to in code.
\begin{bmatrix}
c_{11} & c_{12} & c_{13} & c_{14} \\
c_{21} & c_{22} & c_{23} & c_{24} \\
c_{31} & c_{32} & c_{33} & c_{34}
\end{bmatrix}
### Matrix Algebra
With a grasp of the notation conventionally used to describe matrices, we are ready to review the rules by which they may be combined and manipulated.
Three of the basic operations we are able to perform on matrices - addition, subtraction, and scalar multiplication - work exactly the same for matrices as they do for vectors, proceeding by operating on one set of matching components at a time.
#### Matrix Addition
Matrix addition and subtraction works ***component-wise***, matching components at the same indices of each matrix. This procedure requires that each matrix exhibits the same number of rows and columns.
\begin{align}
\begin{bmatrix} 2 & -1 \\ 3 & 0 \end{bmatrix} +
\begin{bmatrix} -1 & 5 \\ 0 & 10 \end{bmatrix} =
\begin{bmatrix} 1 & 4 \\ 3 & 10 \end{bmatrix}
\end{align}
#### Matrix-Scalar Multiplication
Scalar multiplication matches the given scalar to each of the components of the matrix.
\begin{align}
3
\begin{bmatrix} 1 & -1 \\ -2 & 1 \\ 0 & 2 \end{bmatrix} =
\begin{bmatrix} 3 & -3 \\ -6 & 3 \\ 0 & 6 \end{bmatrix}
\end{align}
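The two component-wise operations above are simple enough to sketch directly in plain Python; the list-of-rows representation used here is an illustrative choice, not a decodes structure.

```python
# component-wise matrix addition and scalar multiplication (plain Python sketch);
# a matrix is represented here as a list of rows
def mat_add(A, B):
    # A and B must share the same number of rows and columns
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_scale(s, A):
    return [[s * a for a in row] for row in A]

print(mat_add([[2, -1], [3, 0]], [[-1, 5], [0, 10]]))  # [[1, 4], [3, 10]]
print(mat_scale(3, [[1, -1], [-2, 1], [0, 2]]))        # [[3, -3], [-6, 3], [0, 6]]
```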
#### Matrix-Matrix Multiplication
Matrices may be multiplied together to form another matrix, but the convention for doing so is more involved.
Here, the components are formed by pairing rows of the first matrix with columns of the second and performing a “dot product” of the components.
This convention imposes a rule on the shapes of the two matrices being multiplied: the number of columns in the first must match the number of rows of the second, such that a $(m \times p)$ matrix can only multiply a $(p \times n)$ matrix. The result of this multiplication is a $(m \times n)$ matrix that takes its number of rows from the first matrix, and its number of columns from the second.
In summary:
\begin{align}
(m \times p)(p \times n) = (m \times n)
\end{align}
Each entry ***matches a row from the first matrix with a column from the second***, and is calculated by a dot product operation, as shown below.
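A short sketch in plain Python makes the row-by-column convention concrete; the matrices `A` and `B` below are arbitrary examples chosen for illustration.

```python
# matrix-matrix multiplication by the row-by-column rule (plain Python sketch):
# an (m x p) matrix may only multiply a (p x n) matrix, yielding an (m x n) result
def mat_mult(A, B):
    m, p, n = len(A), len(B), len(B[0])
    assert all(len(row) == p for row in A), "columns of A must match rows of B"
    # each entry pairs a row of A with a column of B via a dot product
    return [[sum(A[i][k] * B[k][j] for k in range(p)) for j in range(n)] for i in range(m)]

A = [[1, 0, 2],
     [0, 1, -1]]   # a (2 x 3) matrix
B = [[1, 2],
     [0, 1],
     [3, 0]]       # a (3 x 2) matrix
print(mat_mult(A, B))  # [[7, 2], [-3, 1]], a (2 x 2) matrix
```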
### Matrices and Vectors
Points and Vecs may be represented by matrices, such that a two-dimensional vector may be expressed as a $(1 \times 2)$ or, more often ***a $(2 \times 1)$ matrix***. Seen in this way, we can multiply a matrix by a vector only so long as the dimensions are compatible. A square matrix $M$, can then multiply a vector $\vec{x} = (x,y)$ in the following way:
\begin{align}
M\vec{x} =
\begin{bmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{bmatrix}
\begin{bmatrix} x \\ y \end{bmatrix} =
\begin{bmatrix} c_{11}x + c_{12}y \\ c_{21}x + c_{22}y \end{bmatrix}
\end{align}
A square matrix multiplied by a vector in $\mathbb{R}^2$ yields another vector in $\mathbb{R}^2$.
We can legitimately say that ***$M$ maps one set of points onto a corresponding set of points***. This is the very definition of a transformation. It is no exaggeration to say that the implications of this are profound.
Consider that we have demonstrated that ***a compact and versatile mathematical form is capable of describing a high-level operation***. So armed, we need not think of any geometric operation, such as the rotation of a set of objects about an axis, merely as a command in software. Instead, we now have ***a mathematical instrument that captures this action precisely, compactly, and in a format that is completely independent*** from any software platform.
The ramifications of this discovery are indeed far-reaching, and extend well beyond the two-dimensional planar transformations captured by the square matrix demonstrated above.
In summary:
When a matrix $M$ multiplies a vector $\vec{x}$, it has the effect of transforming this vector into a new vector $M\vec{x}$.
Substituting points for vectors, any $(2 \times 2)$ matrix can then be seen as a ***planar transformation*** that maps any point in the plane to another point in the plane.
Similarly, a $(3 \times 3)$ matrix specifies a ***spatial transformation*** which maps a point from one location in space to another.
We require two more insights before we are in good position for implementation.
* We need a deeper understanding of the nature of ***a special class of transformations*** that represents the basic building blocks critical to many operations relevant to visual design.
* To aggregate these basic elements into more complex operations requires ***a method for expressing transformations into coherent sequences***.
#### Examples of Matrix-Vector Multiplication
Before moving on, it will be worth our time to consider the specific cases outlined in a nearby table that demonstrate what happens to a generic vector when multiplied by a variety of fixed square matrices. These examples will help us to associate some familiar actions with matrices that produce them.
##### Scaling Matrix
$ M = \begin{bmatrix} s & 0 \\ 0 & s \end{bmatrix} $
This matrix scales vectors by a uniform scaling factor as can be seen by multiplying the matrix by a vector, expanded out below. The vector is stretched for values of $s$ greater than one, and contracts for values less than one. For negative values of $s$, the transformed vector is both scaled and flipped across the origin.
\begin{align}
\begin{bmatrix} s & 0 \\ 0 & s \end{bmatrix}
\begin{bmatrix} x \\ y \end{bmatrix} =
\begin{bmatrix} sx \\ sy \end{bmatrix} =
s\begin{bmatrix} x \\ y \end{bmatrix}
\end{align}
##### Rotation Matrix
$ M = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} $
We can see what transformation this matrix represents by looking at how it acts on specific vectors. Multiplying this matrix by a vector rotates the vector by ninety degrees counterclockwise about the origin:
* $(1,0)$ is transformed to $(0,1)$
* $(0,1)$ is transformed to $(-1,0)$
* $(- 1,0)$ is transformed to $(0, -1)$.
\begin{align}
\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}
\begin{bmatrix} x \\ y \end{bmatrix} =
\begin{bmatrix} (0x) + (-1y) \\ (1x) + (0y) \end{bmatrix} =
\begin{bmatrix} -y \\ x \end{bmatrix}
\end{align}
##### Mirror Matrix
$ M = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} $
This matrix transforms the vector $(x,y)$ to $(y,x)$ which is the vector mirrored across the line $y = x$.
\begin{align}
\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}
\begin{bmatrix} x \\ y \end{bmatrix} =
\begin{bmatrix} (0x) + (1y) \\ (1x) + (0y) \end{bmatrix} =
\begin{bmatrix} y \\ x \end{bmatrix}
\end{align}
##### Projection Matrix
$ M = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} $
This matrix maps $(x,y)$ to $(x,0)$, its projection onto the x-axis.
\begin{align}
\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}
\begin{bmatrix} x \\ y \end{bmatrix} =
\begin{bmatrix} (1x) + (0y) \\ (0x) + (0y) \end{bmatrix} =
\begin{bmatrix} x \\ 0 \end{bmatrix}
\end{align}
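To see each of these four matrices act on a concrete vector, the sketch below applies them to the arbitrary sample vector $(3,1)$ using a small matrix-vector helper written in plain Python.

```python
# applying the four example matrices above to a sample vector (plain Python sketch)
def mat_vec(M, v):
    # dot each row of M with the vector v
    return tuple(sum(m * x for m, x in zip(row, v)) for row in M)

v = (3, 1)
examples = [
    ("scale by 2",        [[2, 0], [0, 2]]),
    ("rotate 90 degrees", [[0, -1], [1, 0]]),
    ("mirror about y=x",  [[0, 1], [1, 0]]),
    ("project onto x",    [[1, 0], [0, 0]]),
]
for name, M in examples:
    print(name, mat_vec(M, v))
# scale by 2 (6, 2) | rotate 90 degrees (-1, 3) | mirror about y=x (1, 3) | project onto x (3, 0)
```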
## Matrix Transformations
The kind of transformations that can be described by matrices are very special, and just three categories of matrix transformations are prevalent in computer graphics: ***linear, affine, and projective transformations***.
***Linear transformations*** represent the most constrained category, and include elemental transforms such as rotation, scaling, shearing, and reflection. A discussion of these will comprise the bulk of this section.
A closely related category are the ***affine transformations***. These are ***"almost" linear***, as they can be expressed as the combination of a linear transformation and a translation vector. While pairing a matrix with a vector can be useful, an even more compact representation of an affine transformation as a matrix can be achieved by ***elevating the dimension*** of the matrix.
Finally, we have the ***projective transformations*** that include orthographic projection and perspectival projection.
### Linear Transformations
To discuss the unique features of linear transformations, we will first establish the relationship between linear transformations and matrix transformations.
To do so, we denote transformations that act on a vector by multiplication of a matrix as $T(\vec{x}) = M\vec{x}$.
Matrices such as this share a number of properties in common for any choice of matrix $M$. Crucially, the following two properties hold true:
* The transformation of the sum of any two vectors is equal to the sum of their individual transformations. In other words, $T(\vec{x} + \vec{y}) = T(\vec{x}) + T(\vec{y})$ for any vectors $\vec{x}$ and $\vec{y}$.
* The transformation of the product of a scalar and a vector is equal to the product of the scalar and the transformation of the vector. In other words, $T(c\vec{x}) = cT(\vec{x})$ for any vector $\vec{x}$ and scalar $c$.
Any transformation that satisfies these two properties is called a ***linear transformation***.
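A quick numerical check of these two conditions, using an arbitrary $2 \times 2$ matrix and arbitrarily chosen vectors and scalar, is sketched below in plain Python.

```python
# verifying the two linearity conditions for T(x) = Mx (plain Python sketch)
def mat_vec(M, v):
    return tuple(sum(m * x for m, x in zip(row, v)) for row in M)

def vec_add(a, b): return tuple(ai + bi for ai, bi in zip(a, b))
def vec_scale(c, a): return tuple(c * ai for ai in a)

M = [[1, 1], [0, 2]]          # an arbitrary square matrix
x, y, c = (2, -1), (0.5, 3), 4

print(mat_vec(M, vec_add(x, y)) == vec_add(mat_vec(M, x), mat_vec(M, y)))  # True: T(x+y) = T(x)+T(y)
print(mat_vec(M, vec_scale(c, x)) == vec_scale(c, mat_vec(M, x)))          # True: T(cx) = cT(x)
```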
Linearity yields a remarkable number of useful consequences. Among these, three are particularly relevant for our purposes: two that concern the preservation of geometric features, and one that allows us to predict the action of a transformation simply by examining the values held by particular components of it.
* Linear transformations map straight lines to straight lines.
* Linear transformations preserve parallelism
* If we know how a linear transformation acts for ***each vector in a basis***, then we can predict how it will transform ***every point and vector in that space***.
\begin{align}
T(\vec{x}) = T(x,y) = xT(1,0) + yT(0,1) = xT(\vec{e_{1}}) + yT(\vec{e_{2}})
\end{align}
This last property of linear transformations allows us to quickly read off the action of any given matrix, and enables us to write matrices with properties that we can easily control.
Take, for example, the following matrices. An examination of the components of each reveals how the standard basis vectors are transformed, and from this, we are able to extrapolate a pattern of behavior that can be applied more generally.
\begin{align}
\begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix}
\end{align}
The basis vector $\vec{e_{1}} = (1,0)$ is unchanged by the transformation, while $\vec{e_{2}} = (0,1)$ is stretched to twice its length. This is a one-dimensional scaling.
\begin{align}
\begin{bmatrix} 1 & 1 \\ 0 & 2 \end{bmatrix}
\end{align}
The vector $\vec{e_{1}} = (1,0)$ is again fixed, so the x-axis remains unchanged, but $\vec{e_{2}} = (0,1)$ is shifted to the line $y = 2x$. This is a shear.
\begin{align}
\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}
\end{align}
This does precisely nothing.
Mathematicians gave this one a compelling name: the identity transformation or $I$.
\begin{align}
T(\vec{x}) =
xT(\vec{e_{1}}) + yT(\vec{e_{2}}) =
\begin{bmatrix} T(\vec{e_{1}}) & T(\vec{e_{2}}) \end{bmatrix} \vec{x}
\end{align}
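This observation can be put to work directly: if we know where a transformation sends the basis vectors, we can write the matrix down by placing those images as columns. The helper below is an illustrative sketch in plain Python, not part of the decodes library.

```python
# building a 2x2 matrix from the images of the standard basis vectors (plain Python sketch)
def matrix_from_basis_images(Te1, Te2):
    # Te1 and Te2, the transformed basis vectors, become the columns of the matrix
    return [[Te1[0], Te2[0]],
            [Te1[1], Te2[1]]]

# a transformation that fixes the x-axis and stretches the y-axis to twice its length
print(matrix_from_basis_images((1, 0), (0, 2)))  # [[1, 0], [0, 2]]
```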
Not only is every matrix transformation a linear one, but every linear transformation can be represented by a matrix. With this in mind, we can now assemble a library of useful linear transformations.
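As a quick numerical sketch of this idea (using plain NumPy here rather than any particular geometry library), the columns of a matrix are exactly the images of the standard basis vectors, and those two images determine the transformation everywhere:
```python
import numpy as np

M = np.array([[1, 1],
              [0, 2]])   # the shear example from above

e1 = np.array([1, 0])
e2 = np.array([0, 1])

# the transformed basis vectors are simply the columns of M
print(M @ e1)   # [1 0] -> first column
print(M @ e2)   # [1 2] -> second column, a point on the line y = 2x

# linearity: knowing T(e1) and T(e2) determines T(x) for every x = (x1, x2)
x = np.array([3, -2])
print(M @ x)                        # [ 1 -4]
print(3*(M @ e1) + (-2)*(M @ e2))   # [ 1 -4], the same result
```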
#### Selected Linear Transformations in the Plane
##### Rotation
$ \begin{bmatrix} cos\theta & -sin\theta \\ sin\theta & cos\theta \end{bmatrix} $
Building upon the earlier example representing a rotation by ninety degrees, the above matrix shows a transformation that rotates a vector by an arbitrary angle counter-clockwise about the origin. We’ve seen that all we need in order to construct this matrix is to understand how basis vectors are transformed.
Working with the standard basis, we can show that rotating $\vec{e_{1}} = (1,0)$ by $\theta$ counterclockwise will result in the vector $(cos\theta, sin\theta)$. Similarly, $\vec{e_{2}} = (0,1)$ transforms to $(-sin\theta, cos\theta)$. Putting these transformed basis vectors as columns, we arrive at the nearby matrix.
##### Orthogonal Projection
$ \begin{bmatrix}
cos^2\theta & cos\theta \ sin\theta \\
cos\theta \ sin\theta & sin^2\theta
\end{bmatrix} $
Given a line through the origin rotated at an angle $\theta$ counterclockwise from the horizontal, we may construct a matrix representing the transformation of the normal projection onto this line. The orthogonal projection of a point onto this line is equivalent to the nearest point on the line.
To see how the standard basis vectors are transformed, we will make use of the formula for the projected vector derived using the dot product. Since a unit vector along the projection line is given by $\vec{u} = (cos\theta, sin\theta)$, the projected vector for $\vec{e_{1}}$ onto this line is given by
\begin{align}
(\vec{e_{1}} \bullet \vec{u}) \ \vec{u} =
cos\theta \ (cos\theta, sin\theta) =
(\ cos^2\theta, \ cos\theta sin\theta \ )
\end{align}
Similarly, the projected vector for $\vec{e_{2}} = (0,1)$ is $(\vec{e_{2}} \bullet \vec{u}) \ \vec{u} = sin\theta \ (cos\theta, sin\theta) = ( \ cos\theta \ sin\theta, \ sin^2\theta \ )$.
##### Mirror
$ \begin{bmatrix}
2 \ cos^2\theta-1 & 2 \ cos\theta \ sin\theta \\
2 \ cos\theta \ sin\theta & 2 \ sin^2\theta-1
\end{bmatrix} $
Given a line as constructed above, we may express a general mirror transformation across this line in terms of the projection vectors by simple vector subtraction, as given by
\begin{align}
\vec{p_{mirror}} = \vec{p_{near}} + ( \vec{p_{near}} - \vec{p}) =
2\vec{p_{near}} - \vec{p}
\end{align}
The reflection across this line of $\vec{e_{1}} = (1,0)$ is thus given by $2(cos^2 \theta, cos\theta sin\theta) - (1,0)$ and the mirror of $\vec{e_{2}} = (0,1)$ is given by $2(cos\theta sin\theta, sin^2 \theta) - (0,1)$. From these, we arrive at the general mirror transformation above.
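As a sanity check on these two matrices, we can build them for an arbitrary angle and confirm the properties we expect geometrically: projecting twice is the same as projecting once, and mirroring twice returns every vector to where it started. (This sketch uses NumPy for brevity; any matrix library would do.)
```python
import numpy as np

theta = np.radians(30)            # any angle will do
c, s = np.cos(theta), np.sin(theta)

P = np.array([[c*c, c*s],         # orthogonal projection onto the line
              [c*s, s*s]])
M = 2*P - np.eye(2)               # mirror = 2 * projection - identity

print(np.allclose(P @ P, P))              # True: projecting twice changes nothing more
print(np.allclose(M @ M, np.eye(2)))      # True: mirroring twice is the identity
print(np.allclose(M, [[2*c*c - 1, 2*c*s],
                      [2*c*s, 2*s*s - 1]]))   # True: matches the mirror matrix above
```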
***What's missing?***
Examining what we have covered thus far, we may note the conspicuous absence of what is perhaps the most basic of all transformations: translation. Although basic, this transformation is actually not a linear transformation.
Expressing translation as displacement by a fixed vector, $T(\vec{x}) = \vec{x} + \vec{b}$, we see that the first condition of linearity is violated by any nonzero translation vector $\vec{b}$.
\begin{align}
T(\vec{x} + \vec{y}) =
\vec{x}+\vec{y}+\vec{b} \neq
T(\vec{x}) + T(\vec{y})
\end{align}
It appears, then, that ***translation is not able to be represented using a square matrix***. To account for a wider range of transformations that include translations using matrices requires elevating the size of the matrices employed. We'll get to that in a bit.
### The Algebra of Transformations in Sequence
Some transformations are better described as a sequence of operations, broken down into an ordered list of more basic transformations.
The order of operations at work here matters.
One great advantage of the matrix form is that ***the cumulative effect of the application of a sequence of transformations is equivalent to the application of the ordered product of this sequence***. In other words, we can capture a series of transformations in a single matrix.
Of critical importance here is the order in which this multiplication is done:
Successive application of transformations represented by matrices translates to multiplying matrices in ***right-to-left order***.
\begin{align}
\begin{bmatrix} \frac{1}{\sqrt{2}} & \frac{-1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{bmatrix}
\begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix} =
\begin{bmatrix} \frac{-1}{\sqrt{2}} & \frac{-1}{\sqrt{2}} \\ \frac{-1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{bmatrix}
\end{align}
\begin{align}
\begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} \frac{1}{\sqrt{2}} & \frac{-1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{bmatrix} =
\begin{bmatrix} \frac{-1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{bmatrix}
\end{align}
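The two products above can be checked directly in code. A small NumPy sketch (the two matrices are the 45-degree counter-clockwise rotation and the mirror across the y-axis):
```python
import numpy as np

R = np.array([[1, -1],
              [1,  1]]) / np.sqrt(2)   # rotation by 45 degrees counter-clockwise
F = np.array([[-1, 0],
              [ 0, 1]])                # mirror across the y-axis

print(R @ F)    # mirror first, then rotate  (rightmost matrix acts first)
print(F @ R)    # rotate first, then mirror -- a different matrix
print(np.allclose(R @ F, F @ R))       # False: the order of application matters
```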
## From Math to Code
The matrices used to express transformations in three-dimensional space are not $(3 \times 3)$, as we would expect from the discussion so far. Rather, they are $(4 \times 4)$. In deconstructing the rationale behind this, we address the question of how to represent the translation transformation.
The easiest way to include translation into the mix of linear transformations is simply to combine a linear transformation with a translation vector: $T(\vec{x}) = M\vec{x} + \vec{b}$. In fact, this broader set of transformations makes up the class of ***affine transformations***.
Every linear transformation is affine but not the other way around.
Tacking a vector on a matrix to deal with affine transformations is fantastic, but it ruins the purity of the matrix format. This is no good when considering how to move from math to code. There is an alternative.
We can employ a system of coordinates called ***homogeneous coordinates***. By
employing these, it is possible to use a $(4 \times 4)$ matrix to describe not
only affine transformations in a three-dimensional space, but also many other useful transforms.
### The Elevated Matrix
The dominant technique in computer graphics is to elevate the square matrix to have an added dimension on each side.
Therefore, $(3 \times 3)$ matrices are used for transformations in two dimensions and $(4 \times 4)$ are used for transformations in three dimensions.
This unified representation both accounts for ***translation***, and accommodates the larger class of ***projective transformations*** which includes perspective projection.
Since matrix multiplication only works if the two matrices involved have compatible shapes, this technique also requires vectors and points that exhibit a modified structure. To be compatible with elevated matrices, our points and vectors must be granted an extra coordinate.
Points in homogeneous coordinates are interchangeable with Cartesian points so long as $w = 1$, while vectors in homogeneous coordinates maintain a $w = 0$.
We can learn a lot by simply familiarizing ourselves with the relationship between certain patterns of component values in a $(4 \times 4)$ matrix and the spatial transformations that result.
First, the long awaited ***translation transformation***.
\begin{align}
\begin{bmatrix}
1 & 0 & 0 & b_{x} \\
0 & 1 & 0 & b_{y} \\
0 & 0 & 1 & b_{z} \\
0 & 0 & 0 & 1 \\
\end{bmatrix}
\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} =
\begin{bmatrix} x + b_{x} \\ y + b_{y} \\ z + b_{z} \\ 1 \end{bmatrix}
\end{align}
This translation matrix applied to points moves them, but applied to a vector in homogeneous coordinates $(x,y,z,0)$, leaves the vector unchanged.
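A short sketch of that behavior (NumPy is used for illustration, and the translation components $b_x$, $b_y$, $b_z$ are arbitrary):
```python
import numpy as np

bx, by, bz = 2.0, -1.0, 5.0

T = np.array([[1, 0, 0, bx],
              [0, 1, 0, by],
              [0, 0, 1, bz],
              [0, 0, 0, 1 ]])

point  = np.array([1.0, 2.0, 3.0, 1.0])   # w = 1 -> a point
vector = np.array([1.0, 2.0, 3.0, 0.0])   # w = 0 -> a vector

print(T @ point)    # [ 3.  1.  8.  1.] : the point is translated
print(T @ vector)   # [ 1.  2.  3.  0.] : the vector is unchanged
```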
```python
```
| 3c9cc74527c223690a3d4d7509cc2912e12c259c | 35,016 | ipynb | Jupyter Notebook | 107 - Transformations and Intersections/242 - Transformation Mathematics.ipynb | ksteinfe/decodes_ipynb | 2e4bb6b398472fc61ef8b88dad7babbdeb2a5754 | [
"MIT"
] | 1 | 2018-05-15T14:31:23.000Z | 2018-05-15T14:31:23.000Z | 107 - Transformations and Intersections/242 - Transformation Mathematics.ipynb | ksteinfe/decodes_ipynb | 2e4bb6b398472fc61ef8b88dad7babbdeb2a5754 | [
"MIT"
] | null | null | null | 107 - Transformations and Intersections/242 - Transformation Mathematics.ipynb | ksteinfe/decodes_ipynb | 2e4bb6b398472fc61ef8b88dad7babbdeb2a5754 | [
"MIT"
] | 2 | 2020-05-19T05:40:18.000Z | 2020-06-28T02:18:08.000Z | 41.439053 | 467 | 0.617061 | true | 5,839 | Qwen/Qwen-72B | 1. YES
2. YES | 0.695958 | 0.847968 | 0.59015 | __label__eng_Latn | 0.998084 | 0.209447 |
# Announcements
- No Problem Set this week, Problem Set 4 will be posted on 9/28.
- Stay on at the end of lecture if you want to ask questions about Problem Set 3.
# Ordinary Differential Equations - higher order methods
<section class="post-meta">
Based on notes and notebooks by Niels Henrik Aase, Thorvald Ballestad, Vasilis Paschalidis and Jon Andreas Støvneng
</section>
## Algorithms for initial value problem ODEs
Assume we have a first-order differential equation which can be expressed in the form
$$ \frac{dy}{dt} = g(y,t) $$
We will solve this on a constant-interval mesh of the independent variable $t$ defined by
$$ t_n = t_0 + n h $$
### Forward-Euler method
In Lecture 10 we derived Euler's method, which simply solves the first-order forward difference approximation to $dy/dt$
$$ \frac{y_{i+1}-y_i}{h} = g(y_i,t_i)$$
as
$$ y_{i+1} = y_i + h g(y_i,t_i) \label{Euler_fwd}\tag{3}$$
```python
# Importing the necessary libraries
import numpy as np # NumPy is used to generate arrays and to perform some mathematical operations
import matplotlib.pyplot as plt # Used for plotting results
```
```python
def forwardEuler_step(t, y, h, g, *P):
"""
Implements a single step of the forward-Euler finite-difference scheme
Parameters:
t: time t
y: Numerical approximation of y at time t
h: Step size
g: RHS of our ODE (RHS = Right hand side). Can be any function with signature g(t,y,*P).
*P: tuple of parameters, arguments to g
Returns:
next_y: Numerical approximation of y at time t+h
"""
next_y = y + h*g(t, y, *P)
return next_y
```
We now need some sort of framework which will take this function and do the integration for us. Let's rewrite `full_Euler` from Lecture 10 to be more general:
```python
def odeSolve(t0, y0, tmax, h, g, method, *P):
""" A full numerical aproximation of an ODE in a set time interval. Performs consecutive steps of `method`
with step size h from start time until the end time. Also takes into account the initial values of the ODE
Parameters:
t0: start time
y0 : Initial condition for y at t = t0
tmax: The end of the interval where the `method` is integrated, t_N
h: Step size
g: RHS of our ODE (RHS = Right hand side). Can be any function with signature g(t,y,*P).
*P: tuple of parameters, arguments to g
Returns:
t_list: Evenly spaced discrete list of time with spacing h.
                Starting time = t0, and end time = tmax
y_list: Numerical approximation of y at times t_list
"""
# make the t-mesh; guarantees we stop precisely at tmax
t_list = np.arange(t0,tmax+h,h)
# allocate space for the solution
y_list = np.zeros_like(t_list)
# set the initial condition
y_list[0] = y0
# find out the size of the t-mesh, and then integrate forward one meshpoint per iteration of the loop
n, = t_list.shape
for i in range(0,n-1):
y_list[i+1] = method(t_list[i], y_list[i], h, g, *P)
# return the solution
return t_list,y_list
```
Armed with this machinery, let's set up another simple problem and try it out.
Last time, we looked at exponential growth, let's solve exponential decay this time:
$$ \frac{dy}{dt} = - c y, \quad y[0] = 1 $$
First, we provide a function to implement the RHS:
```python
def expRHS(t, y, c):
"""
Implements the RHS (y'(x)) of the DE
"""
return -c*y
```
Now we set up the problem to compute and plot the result, along with a plot of the magnitude of the fractional error
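A minimal version of such a cell, assuming $c=1$ and $h=0.5$ (the same values used in the comparison further below), might look like this:
```python
# set up the problem
c = 1.0
h = 0.5
t0 = 0.0
y0 = 1.0
tmax = 5.0

# integrate with forward Euler
t, y = odeSolve(t0, y0, tmax, h, expRHS, forwardEuler_step, c)

# compare against the exact solution y(t) = exp(-c t)
ans = np.exp(-c*t)
err = np.abs((ans-y)/ans)

fig, ax = plt.subplots(1,2)
ax[0].plot(t, ans, 'r', label='exact')
ax[0].plot(t, y, 'o', label='Euler')
ax[0].set_xlabel('t')
ax[0].set_ylabel('y')
ax[0].legend()
ax[1].plot(t, err, 'o')
ax[1].set_xlabel('t')
ax[1].set_ylabel('fractional error')
plt.tight_layout()
plt.show()
```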
### Runge-Kutta Schemes
The idea of the Runge-Kutta schemes is to take advantage of derivative information at the times between $t_i$ and $t_{i+1}$ to increase the order of accuracy.
For example, in the midpoint method, the derivative at the initial time is used to approximate the derivative at the midpoint of the interval, $f(y_i+\frac{1}{2}hf(y_i,t_i), t_i+\frac{1}{2}h)$. The derivative at the midpoint is then used to advance the solution to the next step. The method can be written in two stages $k_i$,
$$ \begin{aligned} \begin{array}{l} k_1 = h f(y_i,t_i)\\ k_2 = h f(y_i+\frac{1}{2}k_1, t_n+\frac{1}{2}h)\\ y_{i+1} = y_i + k_2 \end{array} \end{aligned}\label{RK2}\tag{4} $$
The midpoint method is known as a __2nd-order Runge-Kutta__ formula.
In general, an explicit 2-stage Runge-Kutta method can be written as,
$$ \begin{array}{l} k_1 = h f(y_n,t_n)\\ k_2 = h f(y_n+b_{21}k_1, t_n+a_2h)\ \\ y_{n+1} = y_n + c_1k_1 +c_2k_2 \label{explicitrk2}\tag{5}\end{array} $$
The scheme is said to be *explicit* since a given stage does not depend *implicitly* on itself, as in the backward Euler method, or on a later stage.
Other explicit second-order schemes can be derived by comparing Eq.(\ref{explicitrk2}) to other second-order expansions and matching terms to determine the coefficients $a_2$, $b_{21}$, $c_1$ and $c_2$.
### Explicit Fourth-Order Runge-Kutta Method
Explicit Runge-Kutta methods are popular as each stage can be calculated with one function evaluation. In contrast, implicit Runge-Kutta methods usually involves solving a non-linear system of equations in order to evaluate the stages. As a result, explicit schemes are much less expensive to implement than implicit schemes.
The higher-order Runge-Kutta methods can be derived in a manner similar to the midpoint formula. An s-stage method is compared to a Taylor method and the terms are matched up to the desired order.
As it happens to be, <strong>The Fourth Order Runge-Kutta Method</strong> uses three such test-points and is the most widely used Runge-Kutta Method. You might ask why we don't use five, ten or even more test-points, and the answer is quite simple: It is not computationally free to calculate all these test-points, and the gain in accuracy rapidly decreases beyond the fourth order of the method. That is, if high precision is of such importance that you would require a tenth-order Runge-Kutta, then you're better off reducing the step size $h$, than increasing the order of the method.
Also, there exists other more sophisticated methods which can be both faster and more accurate for equivalent choices of $h$, but obviously, may be a lot more complicated to implement. See for instance <i>Richardson Extrapolation</i>, <i>the Bulirsch-Stoer method</i>, <i>Multistep methods, Multivalue methods</i> and <i>Predictor-Corrector methods</i>.
The classic fourth-order Runge-Kutta formula is:
$$ \begin{array}{l} k_1 = h f(y_n,t_n)\\ k_2 = h f(y_n+\frac{k_1}{2}, t_n+\frac{h}{2})\\ k_3 = h f(y_n+\frac{k_2}{2}, t_n+\frac{h}{2})\\ k_4 = h f(y_n+k_3, t_n+h)\\ y_{n+1} = y_n + \frac{k_1}{6}+ \frac{k_2}{3}+ \frac{k_3}{3} + \frac{k_4}{6} \label{RK4}\tag{6}\end{array} $$
```python
def RK2_step(t, y, h, g, *P):
"""
Implements a single step of the second-order, explicit midpoint method
"""
thalf = t + 0.5*h
k1 = h * g(t, y, *P)
k2 = h * g(thalf, y + 0.5*k1, *P)
return y +k2
```
```python
def RK4_step(t, y, h, g, *P):
"""
Implements a single step of a fourth-order, explicit Runge-Kutta scheme
"""
thalf = t + 0.5*h
k1 = h * g(t, y, *P)
k2 = h * g(thalf, y + 0.5*k1, *P)
k3 = h * g(thalf, y + 0.5*k2, *P)
k4 = h * g(t + h, y + k3, *P)
return y + (k1 + 2*k2 + 2*k3 + k4)/6
```
```python
# set up problem
c = 1.0
h = 0.5
t0 = 0.0
y0 = 1.0
tmax = 5.0
# call the solver for RK2
t, y = odeSolve(t0, y0, tmax, h, expRHS, RK2_step, c)
# plot the result
fig,ax = plt.subplots(1,2)
ans = np.exp(-c*t)
ax[0].plot(t,ans,'r')
ax[0].set_xlabel('t')
ax[0].set_ylabel('y')
ax[0].plot(t,y,'o',label='RK2')
err_RK2 = np.abs((ans-y)/ans)
# call the solver for forward Euler
t, y = odeSolve(t0, y0, tmax, h, expRHS, forwardEuler_step, c)
ax[0].plot(t,y,'o',label='Euler')
err = np.abs((ans-y)/ans)
# call the solver for RK4
t, y4 = odeSolve(t0, y0, tmax, h, expRHS, RK4_step, c)
ax[0].plot(t,y4,'o',label='RK4')
err_RK4 = np.abs((ans-y4)/ans)
ax[0].legend()
# plot the fractional errors for all three methods
ax[1].plot(t, err_RK2, 'o',label = "RK2")
ax[1].plot(t, err_RK4, 'o',label = "RK4")
ax[1].plot(t, err, 'o',label = "Euler")
ax[1].set_xlabel('t')
ax[1].set_ylabel('fractional error')
ax[1].legend()
# this gives better spacing between axes
plt.tight_layout()
plt.show()
```
### Systems of First-Order ODEs
Next, we turn to systems of ODE's. We'll take as our example the Lotke-Volterra equations, a simple model of population dynamics in an ecosystem (with many other uses as well).
Imagine a population of rabbits and of foxes on a small island. The rabbits eat a plentiful supply of grass and
would breed like, well, rabbits, with their population increasing exponentially with time in the absence of predators. The foxes eat the rabbits, and would die out exponentially in time with no food supply. The rate at which foxes eat rabbits depends upon the product of the fox and rabbit populations.
The equations for the population of the rabbits $R$ and foxes $F$ in this simple model is then
\begin{eqnarray*}
\frac{dR}{dt} &= \alpha R - \beta R F \\
\frac{dF}{dt} &= \delta R F - \gamma F
\end{eqnarray*}
Without the cross terms in $RF$, these are just two decay equations of the form we have used as an example above.
A random set of parameters (I am not a biologist!) might be that a rabbit lives four years, so $\alpha=1/4$ and
a fox lives 10 years, so $\gamma=1/10$. Let's pick the other parameters as $\beta = 1$ and $\delta = 1/4$.
We can express the unknown populations as a vector of length two: $y = (R, F)$. The rate of change of the populations can then also be expressed as a vector $dy/dt = (dR/dt, dF/dt)$. With such a definition, we can write the RHS function of our system as
```python
def lvRHS(t, y, *P):
# Lotke-Volterra system RHS
# unpack the parameters from the array P
alpha, beta, gamma, delta = P
# make temporary variables with rabbit and fox populations
R = y[0]
F = y[1]
# LV system
dRdt = alpha * R - beta * R * F
dFdt = delta * R * F - gamma * F
# return an array of derivatives with same order as input vector
return np.array([ dRdt, dFdt ])
```
We now have to generalize our odeSolve function to allow more than one equation
```python
def odeSolve(t0, y0, tmax, h, RHS, method, *P):
"""
ODE driver with constant step-size, allowing systems of ODE's
"""
# make array of times and find length of array
t = np.arange(t0,tmax+h,h)
ntimes, = t.shape
# find out if we are solving a scalar ODE or a system of ODEs, and allocate space accordingly
if type(y0) in [int, float]: # check if primitive type -- means only one eqn
neqn = 1
y = np.zeros( ntimes )
else: # otherwise assume a numpy array -- a system of more than one eqn
neqn, = y0.shape
y = np.zeros( (ntimes, neqn) )
# set first element of solution to initial conditions (possibly a vector)
y[0] = y0
# march on...
for i in range(0,ntimes-1):
y[i+1] = method(t[i], y[i], h, RHS, *P)
return t,y
```
Now we can solve our system of two coupled ODEs. Note that the solution is now a vector of 2D vectors... the first index is the solution time, the second the variable:
```python
alpha = 1.0
beta = 0.025
gamma = 0.4
delta = 0.01
h = 0.2
t0 = 0.0
y0 = np.array([ 30, 10 ])
tmax = 50
# call the solver
t, y = odeSolve(t0, y0, tmax, h, lvRHS, RK4_step, alpha, beta, gamma, delta)
fig,ax = plt.subplots()
ax.plot(t,y[:,0],'b', label='prey')
ax.plot(t,y[:,1],'r', label='predator')
ax.set_xlabel('time')
ax.set_ylabel('population')
ax.legend()
plt.tight_layout()
plt.show()
```
### Higher Order Derivatives and Sets of 1st order ODEs
The trick to solving ODEs with higher derivatives is turning them into systems of first-order ODEs.
As a simple example, consider the second-order differential equation describing the van der Pol oscillator
$$ \frac{d^2 x}{dt^2} - a (1-x^2) \frac{dx}{dt} + x = 0 $$
We turn this into a pair of first-order ODEs by defining an auxiliary function $v(t) = dx/dt$ and writing the system as
\begin{align}
\begin{split}
\frac{dv}{dt} &= a (1-x^2) v - x\\
\frac{dx}{dt} &= v
\end{split}
\end{align}
Note that there are only functions (and the independent variable) on the RHS; all "differentials" are on the LHS.
Now that we have a system of first-order equations ,we can proceed as above. A function describing
the RHS of this system is
```python
def vdpRHS(t, y, a):
# we store our function as the array [x, x']
return np.array([
y[1], # dx/dt = v
a*(1-y[0]**2)*y[1] - y[0] # dv/dt = a*(1-x**2)*v - x
])
```
```python
a = 15 # parameter
h = 0.01
t0 = 0.0
y0 = np.array([ 0, 1])
tmax = 50
# call the solver
t, y = odeSolve(t0, y0, tmax, h, vdpRHS, RK4_step, a)
fig,ax = plt.subplots()
ax.plot(t,y[:,0],'b', label='x')
ax.plot(t,y[:,1],'r--', label='v')
ax.set_xlabel('time')
ax.legend()
ax.set_title(f"van der Pol Oscillator for a={a}")
plt.tight_layout()
plt.show()
```
A somewhat more complex example is the Lane-Emden equation, which is really just Poisson's equation in spherical symmetry for the gravitational potential of a self-gravitating fluid whose pressure is related to its density as $P\propto\rho^\gamma$. Such a system is called a _polytrope_, and is often used in astrophysics as a simple model for the structure of systems such as a star in which outward pressure and inward gravity are in equilibrium.
Let $\xi$ be the dimensionless radius of the system, and let $\theta$ be related to the density as
$\rho = \rho_c \theta^n$, where $\rho_c$ is the density at the origin and $n = 1/(\gamma-1)$. We then have the dimensionless second-order differential equation
$$ \frac{1}{\xi^2}\frac{d}{d\xi}\left(\xi^2\frac{d\theta}{d\xi}\right) + \theta^n = 0 $$
Note that the first term is just the Laplacian $\nabla^2\theta$ written out in spherical symmetry.
If we expand out the first term, we have
$$ \frac{d^2\theta}{d\xi^2} + \frac{2}{\xi}\frac{d\theta}{d\xi} + \theta^n = 0 $$
Defining an auxiliary function $v(\xi) = d\theta/d\xi$, we can then convert this into a system of two first-order ODEs:
\begin{align}
\begin{split}
\frac{dv}{d\xi} &= -\frac{2}{\xi} v - \theta^n \\
\frac{d\theta}{d\xi} &= v
\end{split}
\end{align}
Again, we have "derivatives" only on the LHS and no derivatives on the RHS of our system.
Looking at this expression, one can see right away that at the origin $\xi=0$ we will have a numerical problem; we are dividing by zero.
Analytically, this is not a problem, since $v/\xi\rightarrow0$ as $\xi\rightarrow0$, but here we need to address this numerically.
The first approach is to take care of the problem in our RHS function:
```python
def leRHS(x, y, n):
dthetadx = y[1]
if x==0:
dvdx = -y[0]**n
else:
dvdx = -2/x*y[1] - y[0]**n
return np.array([ dthetadx, dvdx ])
```
This is somewhat clunky, however, and you would first have to convince yourself that in fact $v(\xi)\rightarrow0$ faster than $\xi$ (don't just take my word for it!).
Instead, we could use a more direct RHS function
```python
def leRHS(x, y, n):
dthetadx = y[1]
dvdx = -2/x*y[1] - y[0]**n
return np.array([ dthetadx, dvdx ])
```
and expand the solution in a Taylor series about the origin to get a starting value for our numerical integration at a small distance away from the origin. To do this, write
$$\theta(\xi) = a_0 + a_1 \xi + a_2 \xi^2 + \dots$$
The first thing to notice is that, by symmetry, only even powers of $\xi$ will appear in the solution.
Thus we will have
$$ \theta(\xi) = a_0 + a_2 \xi^2 + a_4 \xi^4 + \dots$$
By the boundary condition $\theta(0) = 1$, we have immediately that $a_0 = 1$.
Next, substitute $\theta(\xi) = 1 + a_2 \xi^2 + a_4 \xi ^4 + O(\xi^6)$ into the Lane-Emden equation. $\theta$ and its first two derivatives are
\begin{align}
\begin{split}
\theta(\xi) &= 1 + a_2 \xi^2 + a_4 \xi^4 + O(\xi^6)\\
\theta'(\xi) &= 2 a_2 \xi + 4 a_4 \xi^3 + O(\xi^5) \\
\theta''(\xi) &= 2 a_2 + 12 a_4 \xi^2 + O(\xi^4)
\end{split}
\end{align}
Putting these into the Lane-Emden equation, we have
\begin{align}
\begin{split}
2 a_2 + 12 a_4 \xi^2 + O(\xi^4) + \frac{2}{\xi} (2 a_2 \xi + 4 a_4 \xi^3 + O(\xi^5)) &= -\theta^n \\
6 a_2 + 20 a_4 \xi^2 + O(\xi^4) &= -\theta^n
\end{split}
\end{align}
At $\xi=0$ the boundary condition $\theta(0)=1$ makes the right-hand side equal to $-1$, and thus we have $a_2 = -1/6$. Away from zero, then, we have
\begin{align}
\begin{split}
-1 + 20 a_4 \xi^2 + O(\xi^4) &= -\left(1 - 1/6 \xi^2 + a_4 \xi^4 + O(\xi^6)\right)^n
\end{split}
\end{align}
The term on the RHS is $ 1 - n \xi^2/6 + O(\xi^4)$, and so we must have $a_4 = n/120$.
Thus, the series expansion of the solution around the origin is
$$ \theta(\xi) = 1 - \frac{1}{6}\xi^2 + \frac{n}{120} \xi^4 + \dots $$
We can now use this expansion to take a first step slightly away from the origin before beginning our
numerical integration, thus avoiding the divide by zero. Note that this series solution near the origin is $O(h^5)$ and so is a good match for RK4 if we take the same (or smaller) step-size.
```python
n = 3
xi0 = 0.01 # starting value of xi for our numerical integration
theta0 = 1 - xi0**2/6 + n*xi0**4/120 # Taylor series solution to the DE near zero derived above
theta0p = -xi0/3 + n*xi0**3/30
y0 = np.array([ theta0, theta0p]) # set IC's for numerical integration
print(f"IC at {xi0:10.5e}: {y0[0]:10.5e}, {y0[1]:10.5e}")
h = 0.1
tmax = 8
# call the solver
t, y = odeSolve(xi0, y0, tmax, h, leRHS, RK4_step, n)
fig,ax = plt.subplots()
ax.plot(t,y[:,0],'b', label=r'$\theta(\xi)$')
ax.plot(t,y[:,1],'r--', label=r'$\frac{d\theta}{d\xi}$')
ax.plot([0,tmax],[0,0],'k')
ax.set_xlabel(r'$\xi$')
ax.set_title(f"Lane Emden Equation for n={n}")
ax.legend()
plt.tight_layout()
plt.show()
```
For values of $n\le5$, the solutions of the Lane Emden equation (the so-called Lane-Emden functions of index $n$) decrease to zero at finite $\xi$. Since this is the radius at which the density goes to zero, we can interpret it as the surface of the self-gravitating body (for example, the radius of the star). Knowing this value $\xi_1$ is thus interesting... Let us see how to determine it numerically.
Cleary, we are looking for the solution to $\theta(\xi_1)=0$; this is just root-finding, which we already know how to do. Instead of using some closed-form function, however, the value of the function $\theta(\xi)$ must in this case be determined numerically. But we have just figured out how to do this!
Let's use the bisection method for our root-finding algorithm; here is a quick version (no error checking!)
```python
def bisection(func, low, high, eps, *P):
flow = func(low, *P)
fhigh = func(high, *P)
mid = 0.5*(low+high)
fmid = func(mid,*P)
while (high-low)> eps:
if fmid*flow < 0:
high = mid
fhigh = fmid
else:
low = mid
            flow = fmid
mid = 0.5*(low+high)
fmid = func(mid,*P)
return low
```
Now let us make a function which returns $\theta(\xi)$, the solution to the Lane-Emden equation at $\xi$
```python
def theta(xi, n):
h = 1e-4
xi0 = 1e-4
theta0 = 1 - xi0**2/6 + n*xi0**4/120
theta0p = -xi0/3 + n*xi0**3/30
y0 = np.array([ theta0, theta0p])
t, y = odeSolve(xi0, y0, xi, h, leRHS, RK4_step, n)
return y[-1,0]
```
Using these, we can compute the surface radius of the polytrope
```python
n = 3
xi1 = bisection(theta, 6, 8, 1e-5, n)
print(f"xi_1 = {xi1:7.5f}")
```
A more careful treatment gives a value $\xi_1 = 6.89685...$, so we are doing pretty well...
```python
```
| 30c00abbaaa3111abafe96512375232710a15b33 | 28,444 | ipynb | Jupyter Notebook | Lectures/Lecture 12/Lecture12_ODE_part3.ipynb | astroarshn2000/PHYS305S20 | 18f4ebf0a51ba62fba34672cf76bd119d1db6f1e | [
"MIT"
] | 3 | 2020-09-10T06:45:46.000Z | 2020-10-20T13:50:11.000Z | Lectures/Lecture 12/Lecture12_ODE_part3.ipynb | astroarshn2000/PHYS305S20 | 18f4ebf0a51ba62fba34672cf76bd119d1db6f1e | [
"MIT"
] | null | null | null | Lectures/Lecture 12/Lecture12_ODE_part3.ipynb | astroarshn2000/PHYS305S20 | 18f4ebf0a51ba62fba34672cf76bd119d1db6f1e | [
"MIT"
] | null | null | null | 36.84456 | 598 | 0.544825 | true | 6,292 | Qwen/Qwen-72B | 1. YES
2. YES | 0.880797 | 0.91848 | 0.808995 | __label__eng_Latn | 0.987972 | 0.717899 |
# definition
Numerical definition. In an N-bit two's complement number system, the highest (N-th) bit is the sign bit: 0 means positive,
1 means negative. For any non-negative integer, its negation is its complement with respect to $2^N$.
# properties
- A number's two's complement can be obtained by:
1. taking its ones' complement and adding one.
This works because the sum of a number and its ones' complement is -0, i.e. all ‘1’ bits, or $2^N-1$;
and by definition, the sum of a number and its two's complement is $2^N$.
2. computing it directly from the definition using unsigned binary arithmetic.
- The value of an N-bit binary number in the two's complement numeral system can be computed by
$$w = -a_{N-1} 2^{N-1} + \sum_{i=0}^{N-2} a_i 2^i$$
Derivation: for a pair of numbers $A$ and $B$ that are negations of each other, we know:
$$
\begin{align}
A + B &= 0 \\
A &= \sim B + 1
\end{align}
$$
The second equation is the relation between the two's complement and the ones' complement.
From the second equation we get
$$
A=1+\sum_{i=0}^{N-1}(1-b_i)2^i
$$
In an N-bit number system, $A+B=0$ means
$$(A+B) \operatorname{mod} 2^N = 0$$
There are two ways to design a $B$ that satisfies this equation:
$$
\begin{align}
B &= b_{N-1}2^{N-1} + \sum_{i=0}^{N-2}b_i 2^i \\
B &= -b_{N-1}2^{N-1} + \sum_{i=0}^{N-2}b_i 2^i
\end{align}
$$
These correspond to the value formulas for unsigned numbers and signed numbers, respectively.
- The most negative number, `INT_MIN`, has no negation representable within the system, because the positive and negative ranges are asymmetric.
Applying the two's complement definition to `INT_MIN` yields `INT_MIN` itself, which is clearly not a valid negation.
- Addition and subtraction. Ordinary binary addition and subtraction work as-is; no special handling is needed.
- Overflow detection for addition and subtraction. An XOR operation on the leftmost two carry/borrow bits can quickly determine
whether an overflow condition exists.
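A small Python sketch of these properties, assuming an 8-bit word for concreteness:
```python
N = 8                     # word size in bits
MASK = (1 << N) - 1       # 2**N - 1, i.e. all '1' bits

def twos_complement(x):
    """Negate an N-bit value: ones' complement plus one, modulo 2**N."""
    return (~x + 1) & MASK

def to_signed(x):
    """Interpret an N-bit pattern using w = -a_{N-1}*2^(N-1) + sum(a_i * 2^i)."""
    return x - (1 << N) if x & (1 << (N - 1)) else x

def add_with_overflow(a, b):
    """Add two N-bit values; signed overflow occurs exactly when the carry into
    the sign bit and the carry out of the sign bit differ (their XOR is 1)."""
    carry_in = ((a & (MASK >> 1)) + (b & (MASK >> 1))) >> (N - 1)
    carry_out = (a + b) >> N
    return (a + b) & MASK, bool(carry_in ^ carry_out)

print(to_signed(twos_complement(5)))       # -5
print(to_signed(twos_complement(0x80)))    # -128 : INT_MIN is its own "negation"
print(add_with_overflow(0x7F, 0x01))       # (128, True) : 127 + 1 overflows
print(add_with_overflow(0xFF, 0xFF))       # (254, False): (-1) + (-1) = -2, no overflow
```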
```python
```
| 57e58ee67b325cd5a78841b53ba007ba4ce08912 | 2,461 | ipynb | Jupyter Notebook | math/arithmetic/binary-arithmetic/two-s-complement.ipynb | Naitreey/notes-and-knowledge | 48603b2ad11c16d9430eb0293d845364ed40321c | [
"BSD-3-Clause"
] | 5 | 2018-05-16T06:06:45.000Z | 2021-05-12T08:46:18.000Z | math/arithmetic/binary-arithmetic/two-s-complement.ipynb | Naitreey/notes-and-knowledge | 48603b2ad11c16d9430eb0293d845364ed40321c | [
"BSD-3-Clause"
] | 2 | 2018-04-06T01:46:22.000Z | 2019-02-13T03:11:33.000Z | math/arithmetic/binary-arithmetic/two-s-complement.ipynb | Naitreey/notes-and-knowledge | 48603b2ad11c16d9430eb0293d845364ed40321c | [
"BSD-3-Clause"
] | 2 | 2019-04-11T11:02:32.000Z | 2020-06-27T11:59:09.000Z | 30.7625 | 102 | 0.502641 | true | 675 | Qwen/Qwen-72B | 1. YES
2. YES | 0.872347 | 0.875787 | 0.76399 | __label__eng_Latn | 0.872233 | 0.613338 |
```python
from sympy import *
from sympy.abc import m,M,l,b,c,g,t
from sympy.physics.mechanics import dynamicsymbols, init_vprinting
th = dynamicsymbols('theta')
x = dynamicsymbols('x')
dth = diff(th)
dx = diff(x)
ddth = diff(dth)
ddx = diff(dx)
init_vprinting()
```
```python
```
```python
# symbolic expressions for the angular and linear accelerations of the cart-pendulum
ddth = (-(1/2)*m*l*cos(th)*ddth - b*dx + (1/2)*m*l*sin(th)*dth*dx)/((m/12)*(3*l + l**2))
ddx = (-(1/2)*m*l*cos(th)*ddth - b*dx + (1/2)*m*l*sin(th)*dth**2)/(M + m)
```
| de761e7fe343e53c15b1cbb441c4f622da1a09df | 1,294 | ipynb | Jupyter Notebook | notebook.ipynb | dnlrbns/pendcart | 696c5d2c5fc7b787f3ab074e3ec3949a94dfc5ed | [
"MIT"
] | null | null | null | notebook.ipynb | dnlrbns/pendcart | 696c5d2c5fc7b787f3ab074e3ec3949a94dfc5ed | [
"MIT"
] | null | null | null | notebook.ipynb | dnlrbns/pendcart | 696c5d2c5fc7b787f3ab074e3ec3949a94dfc5ed | [
"MIT"
] | null | null | null | 21.213115 | 90 | 0.51391 | true | 174 | Qwen/Qwen-72B | 1. YES
2. YES | 0.90599 | 0.61878 | 0.560609 | __label__yue_Hant | 0.234876 | 0.140812 |
# The Harmonic Oscillator Strikes Back
*Note:* Much of this is adapted/copied from https://flothesof.github.io/harmonic-oscillator-three-methods-solution.html
This week we continue our adventures with the harmonic oscillator.
The harmonic oscillator is a system that, when displaced from its equilibrium position, experiences a restoring force F proportional to the displacement x:
$$F=-kx$$
The potential energy of this system is
$$V = {1 \over 2}k{x^2}$$
These are sometime rewritten as
$$ F=- \omega_0^2 m x, \text{ } V(x) = {1 \over 2} m \omega_0^2 {x^2}$$
Where $\omega_0 = \sqrt {{k \over m}} $
If the equilibrium value of the harmonic oscillator is not zero, then
$$ F=- \omega_0^2 m (x-x_{eq}), \text{ } V(x) = {1 \over 2} m \omega_0^2 (x-x_{eq})^2$$
## 1. Harmonic oscillator from last time (with some better defined conditions)
Applying the harmonic oscillator force to Newton's second law leads to the following second order differential equation
$$ F = m a $$
$$ F= -m \omega_0^2 (x-x_{eq}) $$
$$ a = - \omega_0^2 (x-x_{eq}) $$
$$ x(t)'' = - \omega_0^2 (x-x_{eq}) $$
The final expression can be rearranged into a second order homogenous differential equation, and can be solved using the methods we used above
This is already solved to remind you how we found these values
```python
import sympy as sym
sym.init_printing()
```
**Note** that this time we define some of the properties of the symbols. Namely, that the frequency is always positive and real and that the positions are always real
```python
omega0,t=sym.symbols("omega_0,t",positive=True,nonnegative=True,real=True)
xeq=sym.symbols("x_{eq}",real=True)
x=sym.Function("x",real=True)
x(t),omega0
```
```python
dfeq=sym.Derivative(x(t),t,2)+omega0**2*(x(t)-xeq)
dfeq
```
```python
sol = sym.dsolve(dfeq)
sol
```
```python
sol,sol.args[0],sol.args[1]
```
**Note** this time we define the initial positions and velocities as real
```python
x0,v0=sym.symbols("x_0,v_0",real=True)
ics=[sym.Eq(sol.args[1].subs(t, 0), x0),
sym.Eq(sol.args[1].diff(t).subs(t, 0), v0)]
ics
```
```python
solved_ics=sym.solve(ics)
solved_ics
```
### 1.1 Equation of motion for $x(t)$
```python
full_sol = sol.subs(solved_ics[0])
full_sol
```
### 1.2 Equation of motion for $p(t)$
```python
m=sym.symbols("m",positive=True,nonnegative=True,real=True)
p=sym.Function("p")
sym.Eq(p(t),m*sol.args[1].subs(solved_ics[0]).diff(t))
```
## 2. Time average values for a harmonic oscillator
If we want to understand the average value of a time dependent observable, we need to solve the following integral
$${\left\langle {A(t)} \right\rangle}_t = \lim_{\tau \to \infty} \frac{1}{\tau }\int\limits_0^\tau {A(t)\,dt} $$
### 2.1 Average position ${\left\langle {x} \right\rangle}_t$ for a harmonic oscillator
```python
tau=sym.symbols("tau",nonnegative=True,real=True)
xfunc=full_sol.args[1]
xavet=(xfunc.integrate((t,0,tau))/tau).limit(tau,sym.oo)
xavet
```
The computer does not always make the best choices the first time. If you treat each sum individually this is not a hard limit to do by hand. The computer is not smart. We can help it by inserting an `expand()` function in the statement
```python
xavet=(xfunc.integrate((t,0,tau))/tau).expand().limit(tau,sym.oo)
xavet
```
### 2.2 Exercise: Calculate the average momenta ${\left\langle {p} \right\rangle}_t$ for a harmonic oscillator
```python
# Your code here
m=sym.symbols("m",positive=True,nonnegative=True,real=True)
p=sym.Function("p")
sym.Eq(p(t),m*sol.args[1].subs(solved_ics[0]).diff(t))
tau=sym.symbols("tau",nonnegative=True,real=True)
pfunc=sym.Eq(p(t),m*sol.args[1].subs(solved_ics[0]).diff(t)).args[1]
pavet=(pfunc.integrate((t,0,tau))/tau).limit(tau,sym.oo)
pavet
```
### 2.3 Exercise: Calculate the average kinetic energy of a harmonic oscillator
```python
# Your code here
m=sym.symbols("m",positive=True,nonnegative=True,real=True)
KE=sym.Function("KE")
sym.Eq(KE(t),(m*sol.args[1].subs(solved_ics[0]).diff(t))**2/(2*m))
```
```python
tau=sym.symbols("tau",nonnegative=True,real=True)
KEfunc=sym.Eq(KE(t),(m*sol.args[1].subs(solved_ics[0]).diff(t))**2/(2*m)).args[1]
KEavet=(KEfunc.integrate((t,0,tau))/tau).limit(tau,sym.oo)
KEavet
```
```python
KEavet=(KEfunc.integrate((t,0,tau))/tau).expand().limit(tau,sym.oo)
KEavet
```
## 3. Ensemble (Thermodynamic) Average values for a harmonic oscillator
If we want to understand the thermodynamic ensemble average value of an observable, we need to solve the following integral.
$${\left\langle {A(t)} \right\rangle}_{T} = \frac{\int{A e^{-\beta H}dqdp}}{\int{e^{-\beta H}dqdp} } $$
You can think of this as a Temperature average instead of a time average.
Here $\beta=\frac{1}{k_B T}$ and the classical Hamiltonian, $H$ is
$$ H = \frac{p^2}{2 m} + V(q)$$
**Note** that the factors of $1/h$ found in the classical partition function cancel out when calculating average values
### 3.1 Average position ${\left\langle {x} \right\rangle}_t$ for a harmonic oscillator
For a harmonic oscillator with equilibrium value $x_{eq}$, the Hamiltonian is
$$ H = \frac{p^2}{2 m} + \frac{1}{2} m \omega_0^2 (x-x_{eq})^2 $$
First we will calculate the partition function $\int{e^{-\beta H}dqdp}$
```python
k,T=sym.symbols("k,T",positive=True,nonnegative=True,real=True)
xT,pT=sym.symbols("x_T,p_T",real=True)
ham=sym.Rational(1,2)*(pT)**2/m + sym.Rational(1,2)*m*omega0**2*(xT-xeq)**2
beta=1/(k*T)
bolz=sym.exp(-beta*ham)
z=sym.integrate(bolz,(xT,-sym.oo,sym.oo),(pT,-sym.oo,sym.oo))
z
```
Then we can calculate the numerator $\int{A e^{-\beta H}dqdp}$
```python
numx=sym.integrate(xT*bolz,(xT,-sym.oo,sym.oo),(pT,-sym.oo,sym.oo))
numx
```
And now the average value
```python
xaveT=numx/z
xaveT
```
### 3.2 Exercise: Calculate the average momenta ${\left\langle {p} \right\rangle}_t$ for a harmonic oscillator
After calculating the value, explain why you think you got this number
```python
# your code here
nump=sym.integrate(pT*bolz,(xT,-sym.oo,sym.oo),(pT,-sym.oo,sym.oo))
nump
```
```python
paveT=nump/z
paveT
```
### 3.3 Exercise: Calculate the average kinetic energy
The answer you get here is a well known result related to the energy equipartition theorem
```python
# Your code here
numKE=sym.integrate((pT**2)/(2*m)*bolz,(xT,-sym.oo,sym.oo),(pT,-sym.oo,sym.oo))
numKE
```
```python
KEaveT=numKE/z
KEaveT
```
# Back to the lecture
## 4. Exercise Verlet integrators
In this exercise we will write a routine to solve for the equations of motion for a harmonic oscillator.
Plot the positions and momenta (separate plots) of the harmonic oscillator as functions of time.
Calculate trajectories using the following methods (a sketch of the velocity Verlet scheme appears after the code cell below as one possible starting point):
1. Exact solution
2. Simple Taylor series expansion
3. Predictor-corrector method
4. Verlet algorithm
5. Leapfrog algorithm
6. Velocity Verlet algorithm
```python
# Your code here
```
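One possible starting point is sketched below — only the velocity Verlet scheme (method 6), compared against the exact solution from section 1.1; the remaining integrators follow the same pattern. All parameter values here are arbitrary choices, not ones given in the exercise.
```python
import numpy as np
import matplotlib.pyplot as plt

# assumed parameters (not specified in the exercise)
m, omega0, xeq = 1.0, 2.0, 0.0
x0, v0 = 1.0, 0.0
h, tmax = 0.01, 10.0

def accel(x):
    return -omega0**2 * (x - xeq)

t = np.arange(0.0, tmax, h)
x = np.zeros_like(t)
v = np.zeros_like(t)
x[0], v[0] = x0, v0

# velocity Verlet
for i in range(len(t) - 1):
    a = accel(x[i])
    x[i+1] = x[i] + v[i]*h + 0.5*a*h**2
    v[i+1] = v[i] + 0.5*(a + accel(x[i+1]))*h

# exact solution from section 1.1
x_exact = xeq + (x0 - xeq)*np.cos(omega0*t) + (v0/omega0)*np.sin(omega0*t)

fig, ax = plt.subplots(2, 1, sharex=True)
ax[0].plot(t, x, label='velocity Verlet')
ax[0].plot(t, x_exact, '--', label='exact')
ax[0].set_ylabel('x(t)')
ax[0].legend()
ax[1].plot(t, m*v)
ax[1].set_ylabel('p(t)')
ax[1].set_xlabel('t')
plt.tight_layout()
plt.show()
```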
| 5da4b5239749aaa9408ca4110a87007295b39726 | 87,460 | ipynb | Jupyter Notebook | harmonic_student.ipynb | sju-chem264-2019/new-10-14-10-m-jacobo | a80b342b8366f5203d08b8d572468b519067752c | [
"MIT"
] | null | null | null | harmonic_student.ipynb | sju-chem264-2019/new-10-14-10-m-jacobo | a80b342b8366f5203d08b8d572468b519067752c | [
"MIT"
] | null | null | null | harmonic_student.ipynb | sju-chem264-2019/new-10-14-10-m-jacobo | a80b342b8366f5203d08b8d572468b519067752c | [
"MIT"
] | null | null | null | 93.540107 | 11,464 | 0.804574 | true | 2,225 | Qwen/Qwen-72B | 1. YES
2. YES | 0.867036 | 0.746139 | 0.646929 | __label__eng_Latn | 0.880557 | 0.341364 |
# Sizing a mosfet using gm/Id method
This is an example you can use to calculate mosfet size in Sky130 for given design parameters. You can change the parameters below and recalculate.
```python
%pylab inline
import numpy as np
from scipy.interpolate import interp1d
import pint
ureg = pint.UnitRegistry() # convenient unit conversions
```
Populating the interactive namespace from numpy and matplotlib
First we'll setup the design parameters. The mosfet length and width will need to be one of the bin values for the selected mosfet model.
```python
A_v = np.abs(-2) # voltage gain at DC
I_d = 0.5 * ureg.mA # maximum drain current
f_c = 500 * ureg.MHz # corner (3dB) frequency
C_L = 1 * ureg.pF # Load capacitance
# simulation parameters
sim_L = 0.15 * ureg.um # target mosfet length
sim_W = 1 * ureg.um # calculations are independent of width but we need to have a matching bin value for the initial simulations
sim_Vdd = 1.8 * ureg.V
```
First we calculate the load resistance.
\begin{align}
R_L &= \frac{1}{2 * \pi * f_c * C_L}
\end{align}
```python
R_L = 1 / (2 * 3.1415 * f_c * C_L)
R_L = R_L.to(ureg.ohms)
print(R_L)
```
318.3192742320548 ohm
Next we calculate transconductance.
\begin{align}
g_m &= \frac{A_v}{R_L}
\end{align}
```python
g_m = A_v / R_L
g_m = g_m.to(ureg.mS)
print(f'gm={g_m}')
```
gm=6.2829999999999995 millisiemens
Now we need to generate the gm/Id graphs we'll need to determine the remaining values. These can be pre-generated and loaded or calculate here. We'll load them from an hdf5 file generated with _gen_gm_id_plots.py_.
```python
import h5py
f = h5py.File('gm_id_01v8/sky130_fd_pr__nfet_01v8__data.h5', 'r')
bin_idx = 4
assert(f['bins'][bin_idx][1] - sim_L.magnitude < 0.00001) # index of the W=1 L=0.15 bin in the repo data.
vsweep=f['vsweep'][bin_idx] * ureg.V
gm_id = (f['gm'][bin_idx] * ureg.mS) / (f['id'][bin_idx] * ureg.A)
id_W = (f['id'][bin_idx] * ureg.A / sim_W)
```
We could just look for the $\frac{I_d}{W}$ on the graph, but we've got the data and data interpolation tools, so we can calculate exactly. We'll figure out the value and plot it on the graph as a visual validation.
```python
i_id_w__gm_id = interp1d(gm_id.magnitude, id_W.magnitude)
id_interp = i_id_w__gm_id(g_m.magnitude) * id_W.units
print(f'Id={id_interp.to(ureg.uA / ureg.um)}')
```
Id=63.30221617728767 microampere / micrometer
```python
fig = figure()
id_w__gm_id = fig.subplots(1, 1)
id_w__gm_id.plot(gm_id.magnitude, id_W.magnitude)
id_w__gm_id.axes.set_xlabel(f'gm/Id ({gm_id.units})')
id_w__gm_id.axes.set_ylabel(f'Id/W ({id_W.units})')
id_w__gm_id.plot(g_m.magnitude, id_interp.magnitude, 'o', markersize=8)
fig.tight_layout()
```
This allows us to calculate the transistor width.
\begin{align}
W &= \frac{I_d}{\frac{I_d}{W}}
\end{align}
```python
W = I_d / id_interp
W = W.to(ureg.um)
print(f'W={W}')
```
W=7.898617618689249 micrometer
Next we determine the gate bias using the same interpolation technique as above.
```python
i_vgg__gm_id = interp1d(gm_id.magnitude, vsweep.magnitude)
vbias_interp = i_vgg__gm_id(g_m.magnitude) * vsweep.units
print(f'Vbias={vbias_interp}')
fig = figure()
gm_id__vgg = fig.subplots(1, 1)
gm_id__vgg.plot(vsweep.magnitude, gm_id.magnitude)
gm_id__vgg.axes.set_xlabel(f'Vgg ({vsweep.units})')
gm_id__vgg.axes.set_ylabel(f'gm/Id ({gm_id.units})')
gm_id__vgg.plot(vbias_interp.magnitude, g_m.magnitude, 'o', markersize=8)
fig.tight_layout()
```
| 14f0905464a6d9cee459617aaf0502b60725b1bf | 39,374 | ipynb | Jupyter Notebook | utils/gm_id_example.ipynb | tclarke/sky130radio | 4eca853b7e4fd6bc0d69998f65c04f97e73bee84 | [
"Apache-2.0"
] | 14 | 2020-09-28T19:41:26.000Z | 2021-10-05T01:40:00.000Z | utils/gm_id_example.ipynb | tclarke/sky130radio | 4eca853b7e4fd6bc0d69998f65c04f97e73bee84 | [
"Apache-2.0"
] | null | null | null | utils/gm_id_example.ipynb | tclarke/sky130radio | 4eca853b7e4fd6bc0d69998f65c04f97e73bee84 | [
"Apache-2.0"
] | 6 | 2020-07-30T21:54:19.000Z | 2021-02-07T07:58:12.000Z | 133.471186 | 16,484 | 0.893254 | true | 1,108 | Qwen/Qwen-72B | 1. YES
2. YES | 0.875787 | 0.826712 | 0.724023 | __label__eng_Latn | 0.736974 | 0.520481 |
Math Notebooks
This repository contains mathematically informative IPython notebooks that were collated from OpenWebMath, RedPajama, and the Algebraic Stack in the AutoMathText effort. Zhang et al. used Qwen 72B to score text with the following prompt:
<system>
You are ChatGPT, equipped with extensive expertise in mathematics and coding, and skilled
in complex reasoning and problem-solving. In the following task, I will present a text excerpt
from a website. Your role is to evaluate whether this text exhibits mathematical intelligence
and if it is suitable for educational purposes in mathematics. Please respond with only YES
or NO
</system>
User: {
“url”: “{url}”,
“text”: “{text}”
}
1. Does the text exhibit elements of mathematical intelligence? Respond with YES or NO
2. Is the text suitable for educational purposes for YOURSELF in the field of mathematics? Respond with YES or NO
The responses to these questions were each scored with the function:
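In the AutoMathText paper, each response is turned into a score by taking the probability the model assigns to answering "YES", computed from the logits of the YES and NO tokens:

$$
\mathrm{LM\text{-}Score}(\cdot) = \frac{\exp(z_{\mathrm{YES}})}{\exp(z_{\mathrm{YES}}) + \exp(z_{\mathrm{NO}})}
$$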
These scores are found in the `meta.lm_q1_score` and `meta.lm_q2_score` columns. A total score (`meta.lm_q1q2_score`) is achieved by taking the product of the two scores.