Generally speaking, the answer to both questions is linked to some number becoming increasingly large: for atoms you have a large density of highly excited states (think of Rydberg atoms as an example), and for the electromagnetic field one has such a large number of photons that a coherent state is a good description of it and an average field can safely be taken. Quantum fluctuations are then negligibly small as these numbers increase.
In order to make all the argument quantitative, let me consider a standard Hamiltonian for radiation-matter interaction for hydrogen-like atoms, in a non-relativistic limit,
$$H=H_a+H_f+H_i=H_f-\frac{\hbar^2}{2m}\Delta_2-\frac{Ze^2}{r}-{\bf d}\cdot{\bf E}$$
where I have used an equivalent form for the interaction. Now, we can always rewrite this through a complete set of atomic states and this will give (the continuous part of the spectrum is implicit in the summation)
$$H=H_f+\sum_nE_n|n\rangle\langle n|+\sum_{m,n}|m\rangle\langle n|{\bf d}_{mn}\cdot{\bf E}$$
but we can do the same also for the field. Assuming it monochromatic and using coherent states $|\alpha\rangle$, which we know are overcomplete with overlap $\langle\alpha|\beta\rangle=e^{\alpha^*\beta-\frac{1}{2}|\alpha|^2-\frac{1}{2}|\beta|^2}$, we can use the resolution of identity (e.g. see here)
$$I=\int\frac{d^2\alpha}{\pi}|\alpha\rangle\langle\alpha|$$
that will produce
$$H=H_f+\sum_nE_n|n\rangle\langle n|+\sum_{m,n}|m\rangle\langle n|{\bf d}_{mn}\cdot\int \frac{d^2\alpha}{\pi}\frac{d^2\beta}{\pi}\langle\alpha|{\bf E}|\beta\rangle|\alpha\rangle\langle\beta|.$$
Now, we are a step away from our conclusion. Indeed, we should note that $|\alpha|^2=N$, where $N$ is the number of photons. So, the matrix element in the interaction part of the Hamiltonian can be promptly evaluated as
$$\langle\alpha|{\bf E}|\beta\rangle=\tilde{\bf E}(\alpha,\beta)e^{-\frac{1}{2}|\alpha-\beta|^2}$$
and, for a very large $N$, the integral will have a dominant contribution and the coherent states can be assumed practically orthogonal. This justifies the use of a classical approximation through the averaged field.
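As a numerical aside (a sketch, not part of the original argument): building coherent states in a truncated Fock basis shows the overlap $|\langle\alpha|\beta\rangle|^2=e^{-|\alpha-\beta|^2}$ collapsing as the photon number grows, which is exactly the approximate orthogonality invoked here.

```python
import numpy as np

def coherent(alpha, cutoff=300):
    """Fock-basis coefficients c_n = e^{-|a|^2/2} a^n / sqrt(n!) of |alpha>."""
    n = np.arange(cutoff)
    log_fact = np.concatenate(([0.0], np.cumsum(np.log(np.arange(1, cutoff)))))
    # work with logarithms to avoid overflow at large photon numbers
    log_c = -0.5 * abs(alpha)**2 + n * np.log(abs(alpha)) - 0.5 * log_fact
    return np.exp(log_c + 1j * n * np.angle(alpha))

eps = 0.3                                 # fixed relative separation of the amplitudes
for N in [1, 10, 100]:                    # mean photon number |alpha|^2
    a = np.sqrt(N)
    b = a * (1 + eps)                     # |a - b| = eps * sqrt(N) grows with N
    overlap = abs(np.vdot(coherent(a), coherent(b)))**2
    print(N, overlap, np.exp(-abs(a - b)**2))   # numeric value vs e^{-|a-b|^2}
```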
This argument can be repeated for the atomic states, if we introduce the operators (see here) $\sigma_{nm}=|n\rangle\langle m|$, $\sigma_{nm}^\dagger=|m\rangle\langle n|$ and $\sigma_{nm}^3=(1/2)(|n\rangle\langle n|-|m\rangle\langle m|)$, forming an su(2) algebra. We can now use atomic coherent states and arrive at the same conclusion as above, provided the relevant quantum numbers are large enough. This is the rationale behind this kind of approximation, normally used in quantum optics. |
The stated weight of potato chips in a medium-size bag is 10 ounces. The amount that the packaging machine puts in these bags is believed to follow a normal model with a mean of ___ ounces and a standard deviation of ___ ounces. (Round to four decimal places as needed.)
a) What fraction of all bags sold are underweight?
b) Some of the chips are sold in "bargain packs" of 33 bags. What is the probability that none of the 33 is underweight?
c) What is the probability that the mean weight of the 33 bags is below the stated weight?
d) What is the probability that the mean weight of a 20-bag case of potato chips is below 10 oz.?

Normal Distribution:
A random variable $X$ is said to follow a normal distribution with mean $\mu$ and variance $\sigma^2$ if its density is given by $$f(x\mid\mu,\sigma^2)=\frac{1}{\sqrt{2\pi\sigma^2}}\,e^{-\frac{(x-\mu)^2}{2\sigma^2}}, \qquad -\infty \leq x \leq \infty,$$ and the sample mean then satisfies $\bar x \sim N\!\left(\mu,\frac{\sigma^2}{n}\right)$.
The normal distribution is due to Gauss, and originally it was used for modeling errors. Many natural phenomena are believed to follow the normal distribution.

Answer and Explanation:
The stated weight of a medium-size bag of chips is 10 ounces. The amount the packaging machine puts in follows a normal distribution with mean and standard deviation $$\mu = \_\_\_, \qquad \sigma = \_\_\_.$$
a) What fraction of bags are underweight (below 10 oz)? We need to find the z-score, after which, using the NORMSDIST(z) function of MS Excel, we can get the probability: $$P(X <10)=P(Z< \ldots)$$
b) P(none is underweight) = ? None underweight means all 33 bags are not below 10 oz. Let $Y$ denote the number of underweight bags, so that $Y \sim \mathrm{Binom}(33, p)$ with $p$ the underweight probability from part (a). Then $$P(\text{none is underweight})=P(Y=0)= \ldots$$
c) Here $n=33$. We know that $\bar x \sim N\!\left(\mu,\frac{\sigma^2}{n}\right)$, so $\bar x$ follows a normal distribution with mean $\mu$ and standard deviation $\sigma/\sqrt{n}$: $$P(\bar x <10)=P(Z< \ldots)$$
d) Similarly, here $n=20$, so $\bar x$ follows a normal distribution with mean $\mu$ and standard deviation $\sigma/\sqrt{n}$: $$P(\bar x <10)=P(Z< \ldots)$$ |
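For reference, the whole calculation in Python; a sketch in which $\mu = 10.2$ and $\sigma = 0.12$ are placeholder values, since the problem statement above omits the actual numbers:

```python
from scipy.stats import norm, binom

mu, sigma = 10.2, 0.12        # hypothetical mean and sd; the source omits them
stated = 10.0

# a) fraction of single bags underweight
p_under = norm.cdf(stated, loc=mu, scale=sigma)

# b) P(none of the 33 bags is underweight)
p_none = binom.pmf(0, 33, p_under)          # equivalently (1 - p_under)**33

# c), d) the mean of n bags has standard deviation sigma / sqrt(n)
for n in (33, 20):
    p_mean_under = norm.cdf(stated, loc=mu, scale=sigma / n**0.5)
    print(f"P(mean of {n} bags < 10 oz) = {p_mean_under:.4f}")

print(f"a) {p_under:.4f}   b) {p_none:.4f}")
```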
The Annals of Probability Ann. Probab. Volume 21, Number 1 (1993), 248-289. The Continuum Random Tree III Abstract
Let $(\mathscr{R}(k), k \geq 1)$ be random trees with $k$ leaves, satisfying a consistency condition: Removing a random leaf from $\mathscr{R}(k)$ gives $\mathscr{R}(k - 1)$. Then under an extra condition, this family determines a random continuum tree $\mathscr{L}$, which it is convenient to represent as a random subset of $l_1$. This leads to an abstract notion of convergence in distribution, as $n \rightarrow \infty$, of (rescaled) random trees $\mathscr{J}_n$ on $n$ vertices to a limit continuum random tree $\mathscr{L}$. The notion is based upon the assumption that, for fixed $k$, the subtrees of $\mathscr{J}_n$ determined by $k$ randomly chosen vertices converge to $\mathscr{R}(k)$. As our main example, under mild conditions on the offspring distribution, the family tree of a Galton-Watson branching process, conditioned on total population size equal to $n$, can be rescaled to converge to a limit continuum random tree which can be constructed from Brownian excursion.
Article information Source Ann. Probab., Volume 21, Number 1 (1993), 248-289. Dates First available in Project Euclid: 19 April 2007 Permanent link to this document https://projecteuclid.org/euclid.aop/1176989404 Digital Object Identifier doi:10.1214/aop/1176989404 Mathematical Reviews number (MathSciNet) MR1207226 Zentralblatt MATH identifier 0791.60009 JSTOR links.jstor.org Citation
Aldous, David. The Continuum Random Tree III. Ann. Probab. 21 (1993), no. 1, 248--289. doi:10.1214/aop/1176989404. https://projecteuclid.org/euclid.aop/1176989404 |
$\def\abs#1{|#1|}\def\i{\mathbf {i}}\def\ket#1{|{#1}\rangle}\def\bra#1{\langle{#1}|}\def\braket#1#2{\langle{#1}|{#2}\rangle}\def\tr{\mathord{\mbox{tr}}}\mathbf{Exercise\ 7.7}$
Fast Fourier Transform Decomposition
a) For $k < l$, write the entries $F_{ij}^{(k)}$ of the $2^k\times 2^k$ matrix for the Fourier transform ${U_F}^{(k)}$ in terms of $\omega_{(l)}$.
b) Find $m$ in terms of $k$ such that $-\omega_{(k)}^i = \omega_{(k)}^{m+i}$ for all $i\in {\bf Z}$.
c) Compute the product (1), ultimately writing each entry as a power of $\omega_{(k)}$.
d) Let $A$ be any $2^k\times 2^k$ matrix with columns $A_j$. The product matrix $A R^{(k)}$ is just a permutation of the columns. Where does column $A_j$ end up in the product $A R^{(k)}$?

e) Verify that (2). |
Another day in LaTeX wonderland … Today I was writing an equation in an aligned environment using \sum and those fancy things. Unfortunately, aligned is a display-math environment, so the limits of \sum are displayed above and below, which was really not suitable in my case. So how do I get inline-math style in a display-math environment?
Let’s say we have an equation environment with an equation
\begin{equation*}
\sum_{i=0}^{n} x^i
\end{equation*}
To display the \sum as inline math we can simply use \textstyle.
\begin{equation*}
{\textstyle \sum_{i=0}^{n} 2^i} \prod_{i=0}^{n} i^2
\end{equation*}
Note that the \prod is display math again. |
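As a side note (standard LaTeX, though not mentioned in the original post): \nolimits moves the limits of a single operator to the side without switching the whole formula to \textstyle:

\begin{equation*}
\sum\nolimits_{i=0}^{n} 2^i \prod_{i=0}^{n} i^2
\end{equation*}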
Mountain range (options)
Mountain ranges are exotic options originally marketed by Société Générale in 1998. The options combine the characteristics of basket options and range options by basing the value of the option on several underlying assets, and by setting a time frame for the option.
The mountain range options are further subdivided into further types, depending on the specific terms of the options. Examples include:

- Altiplano: a vanilla option is combined with a compensatory coupon payment if the underlying security never reaches its strike price during a given period.
- Annapurna: the option holder is rewarded if all securities in the basket never fall below a certain price during the relevant time period.
- Atlas: the best and worst-performing securities are removed from the basket prior to execution of the option.
- Everest: a long-term option in which the option holder gets a payoff based on the worst-performing securities in the basket.
- Himalayan: based on the performance of the best asset in the portfolio.
Most mountain ranges cannot be priced using closed form formulae, and are instead valued through the use of Monte Carlo simulation methods.
Everest Options
Although Mount Everest is the highest point on earth, the Everest option payoff is on the worst performer in a basket of 10-25 stocks, with 10-15 year maturity (Richard Quessette 2002). Given $n$ stocks $S_1, S_2,\ldots, S_n$ in a basket, the payoff for an Everest option is: $$\min_{i=1\ldots n}\left(\frac{S_i^T}{S_i^0}\right).$$

Atlas Options
Atlas was a Titan who supported the Earth on his back. The Atlas option is a call on the mean (or average) of a basket of stocks, with some of the best and worst performers removed (Quessette 2002). Given $n$ stocks $S_1, S_2,\ldots, S_n$ in a basket, define: $$R_{(1)}^t=\min\left\{\frac{S_1^t}{S_1^0},\frac{S_2^t}{S_2^0},\ldots,\frac{S_n^t}{S_n^0}\right\}, \qquad R_{(n)}^t=\max\left\{\frac{S_1^t}{S_1^0},\frac{S_2^t}{S_2^0},\ldots,\frac{S_n^t}{S_n^0}\right\},$$
where $R_{(i)}^t$ is the $i$-th smallest return, so that: $$R_{(1)}^t \leq R_{(2)}^t \leq \dots \leq R_{(i)}^t \leq \dots \leq R_{(n)}^t.$$
The Atlas removes a fixed number $n_1$ of stocks from the minimum ordering of the basket and a fixed number $n_2$ of stocks from the maximum ordering of the basket. In a basket of $n$ stocks, note that $n_1+n_2 < n$, to leave at least one stock in the basket on which to compute the option payoff. With a strike price $K$, the payoff for the Atlas option is: $$\left(\sum_{j=1+n_1}^{n-n_2}\frac{R_{(j)}^T}{n-(n_1+n_2)}-K\right)^{+}.$$

Himalayan Options
A Himalayan option with notional $N$ and maturity $T$ starts with a basket of $m$ equities. The terms of the contract will specify $m$ payoff times: $t_0 = 0 < t_1 < t_2 < \dots < t_m = T$. At payoff time $t_i,\ i=1:m$, the percentage returns since inception of all equities currently in the basket are computed, and the equity with the largest return is noted; denote this equity by $S_{k_i},\ 1\leq k_i \leq m$. The derivative then makes the payoff $$N \max \left(\frac{S_{k_i, t_i} - S_{k_i, t_0}}{S_{k_i, t_0}},\ 0\right),$$ and $S_{k_i}$ is removed from the basket. The procedure is repeated until maturity, at which time the final payoff occurs and the basket is emptied. |
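Consistent with the Monte Carlo remark above, a minimal pricing sketch for the Everest payoff is given below. The geometric Brownian motion dynamics, flat volatility, single pairwise correlation, and all numerical parameters are illustrative assumptions, not contract data from this article:

```python
import numpy as np

rng = np.random.default_rng(0)
n_assets, n_paths = 10, 200_000           # hypothetical basket size and path count
T, r, sigma, rho = 10.0, 0.03, 0.25, 0.5  # hypothetical maturity, rate, vol, correlation

# constant pairwise correlation matrix and its Cholesky factor
C = np.full((n_assets, n_assets), rho) + (1 - rho) * np.eye(n_assets)
L = np.linalg.cholesky(C)

# terminal ratios S_i^T / S_i^0 under risk-neutral GBM, simulated in one step
Z = rng.standard_normal((n_paths, n_assets)) @ L.T
ratios = np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)

payoff = ratios.min(axis=1)               # Everest: worst performer in the basket
price = np.exp(-r * T) * payoff.mean()
print(f"Monte Carlo Everest price: {price:.4f}")
```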
Suppose $A=6z\hat i+(2x+y)\hat j-x\hat k$. Evaluate $$\iint A\cdot dS$$ over the entire surface $S$ of the region bounded by the cylinder $x^2+z^2=9$, $x=0$, $y=0$, $z=0$ and $y=8$.

I split it into three surfaces: 1. the upper circle part $S_1$; 2. the lower circle part $S_2$; 3. the cylindrical part $S_3$. I couldn't do the surface integral for $S_3$: I am familiar with the parametrization of a cylinder and with cylindrical coordinates, but failed to arrive at the answer. The solution provided by the book is $18\pi$. Can anyone explain how to get $\iint A\cdot n\, dS$, please? Thanks in advance.
If you don't want to use the divergence theorem.
Your volume has 5 surfaces.
$S_1$ is the quarter disk in the plane $y = 0$
$S_2$ is the quarter disk in the plane $y = 8$
$S_3$ is the rectangle in the plane $x = 0$
$S_4$ is the rectangle in the plane $z = 0$
$S_5$ is the surface of the cylinder.
The normal vectors to these respective surfaces are $(0,-1,0), (0,1,0), (-1,0,0),(0,0,-1), (\cos\theta, 0, \sin \theta) $ respectively.
I am going to evaluate the surfaces $S_1, S_2$ together for reasons that will hopefully become apparent.
$\iint -2x\ dS_1 + \iint 2x + 8 \ dS_2\\ \int_0^3\int_0^\sqrt{9-x^2} -2x\ dz\ dx + \int_0^3\int_0^\sqrt{9-x^2} 2x + 8 \ dz\ dx\\ \int_0^3\int_0^\sqrt{9-x^2} 8\ dz\ dx\\ 18\pi$
$\iint -6z\ dS_3\\ \int_0^3\int_0^8 -6z\ dy\ dz\\ -216$
$\iint x\ dS_4\\ \int_0^3\int_0^8 x\ dy\ dx\\ 36$
$S_5$ We will parameterize the surface.
$x = 3\cos \theta\\ y = y\\ z = 3\sin\theta$
$dS = (3\cos\theta, 0,3\sin\theta)\ dy\ d\theta\\ F(y,\theta)\cdot dS = 45\sin\theta\cos\theta \ dy\ d\theta$
$\int_0^{\frac{\pi}{2}}\int_0^8 45\sin\theta\cos\theta \ dy\ d\theta\\ \int_0^{\frac{\pi}{2}} 180\sin 2\theta d\theta\\ -90\cos 2\theta|_0^{\frac {\pi}2}\\ 180$
Adding them together, we get $18\pi$.
There are some reasons we might have deduced in advance that the fluxes over $S_3, S_4$ and $S_5$ would cancel each other out.
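A quick symbolic check of the four pieces above (a sketch using sympy, with the integrands exactly as derived):

```python
import sympy as sp

x, z, y, t = sp.symbols('x z y theta')

# S1 and S2 combined: integrand 8 over the quarter disk x^2 + z^2 <= 9, x, z >= 0
S12 = sp.integrate(8, (z, 0, sp.sqrt(9 - x**2)), (x, 0, 3))            # 18*pi

# S3 (x = 0, normal (-1,0,0)) and S4 (z = 0, normal (0,0,-1))
S3 = sp.integrate(-6 * z, (y, 0, 8), (z, 0, 3))                        # -216
S4 = sp.integrate(x, (y, 0, 8), (x, 0, 3))                             # 36

# S5: parametrized cylinder, F . dS = 45 sin(theta) cos(theta) dy dtheta
S5 = sp.integrate(45 * sp.sin(t) * sp.cos(t), (y, 0, 8), (t, 0, sp.pi / 2))  # 180

print(sp.simplify(S12 + S3 + S4 + S5))    # 18*pi
```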
I don't know if you want to do it this way, but in case you are allowed to use the divergence theorem, here it goes (I'm a little bit lazy to do it by surface integrals).
$$\int_{\partial\Omega}{\vec{A}\cdot d\vec{S}}=\int_{\Omega}{\textrm{div}(\vec{A})\,dV}$$ Since $\textrm{div}(\vec{A})=1$ in your case, you must calculate the volume of your body. In this case it is a quarter of the volume of a cylinder of radius $R=3$ and height $h=8$, giving $$V = \int_{\Omega}{\,dV}=\frac{1}{4}\pi R^2h = 18\pi $$
For the cylindrical part, make a change of coordinates: $$x = R\sin{\theta}\qquad y=y\qquad z = R\cos{\theta}\qquad $$
On the cylindrical surface, $\vec{A}=(6R\cos{\theta},2R\sin{\theta}+y,-R\sin{\theta})$ and $d\vec{S}=(\sin{\theta},0,\cos{\theta})R\,d\theta\, dy$. Multiply and integrate, with $\theta\in[0,\pi/2]$ and $y\in[0,8]$! |
High Speed AD Converter: AD 9467
The AD converter is the heart of every direct sampling receiver. Its properties are crucial for the overall receiver performance, because once the signal has been digitized, virtually any receiver performance can be obtained by digital signal processing in an FPGA or in software. Important ADC figures are SNR, SFDR (spurious-free dynamic range), analog bandwidth and full scale voltage. It should be noted that the raw number of bits is of minor importance, because usually some of the least significant bits carry only noise. The figure ENOB (effective number of bits) accounts for this issue and describes the number of bits that are "noise-free".
The Panoradio relies on the AD9467-250 from Analog Devices, which is a state-of-the-art 16 bit, 250 Msps AD converter with excellent properties:
- SNR: 76 dBFS / 12.3 ENOB
- SFDR: 100 dBFS
- analog bandwidth: 0-900 MHz
- full scale voltage: 2.5 Vpp
The Panoradio uses the AD evaluation board with FMC connector, which can be easily attached to the Zedboard signal processing platform. The board is equipped with an optional clock generator, a preamplifier (not used) and a clock conditioner (not used) for external clocks. The on-board clock generator (Vectron VCC 6) is of very good performance and is therefore used as the clock source here. Some modifications were made on the eval board to enable the clock source and route the signals correctly to the AD converter chip.
Implementation Details and Analysis Analog ADC Frontend
The AD converter AD9467 has differential input signals with an equivalent input impedance of 530 Ohms and 3.5 pF capacitance. Although it is possible to feed single ended signals to the AD9467, this would decrease SFDR performance, so differential signaling is recommended. The purpose of the analog ADC frontend is to convert RF signals from the antenna to differential signals and provide a proper impedance matching from 530 Ohms || 3.5 pF to 50 Ohms at the SMA input jack. The analog ADC frontend is part of the evaluation board. Since the board allows for very different configurations, the design implemented for the Panoradio is shown below.
R5, C6 and R6 form a filter to reduce high frequency noise. R3 and R4 reduce kick-back currents from the ADC sampling process. R1 and R2 are the main parts for the resistive impedance matching, L1 finalizes impedance matching to real 50 Ohms. The transformers convert the single ended signal to differential signals. The reason for using two transformers is because a two transformer configuration reduces the second order distortion in comparison to a single transformer. This is because of parasitic phase and magnitude imbalances in non-ideal (real world) transformers. Especially phase imbalance causes second order harmonics. Since the imbalances quickly rise with increasing frequencies, this becomes an issue for higher speed analog signals (e.g. >100 MHz). A two-transformer configuration reduces the second order harmonics resulting in an improved SFDR for higher frequency signals. See AD’s Wideband A/D Converter Front-End Design Considerations for more information.
Full Scale Analysis
The full scale of the AD converter is given by 2.5 Vpp for the 530 Ohms input. In general the signal power of a sine waveform is $$P = \frac{U_{pp}^2}{8R}.$$ With the peak-to-peak voltage $U_{pp} = 2.5\,\mathrm{V}$, the full scale power $P_{FS}$ at the AD input is $$P_{FS} = \frac{(2.5\,\mathrm{V})^2}{8 \cdot 530\,\Omega} = 1.5\,\mathrm{mW},$$ and in dBm $$P_{FS} = 10\log_{10}\left(\frac{1.5\,\mathrm{mW}}{1\,\mathrm{mW}}\right) = 1.7\,\mathrm{dBm}.$$
With the SNR of 76 dB and the thermal noise for 125 MHz bandwidth of -93 dBm the ADC’s noise figure is around 19 dB.
However, the situation is different looking at the inputs of the complete circuit with the analog ADC frontend and impedance matching in between. The impedance matching circuit on the board uses mainly resistive matching. Resistive matching has the advantage of being independent of frequency; thus the match stretches over a large frequency band up to many hundreds of MHz. However, resistive matches are not lossless, as LC matching networks mostly are. The loss introduced by the AD matching circuit is approximately 9 dB, so the full scale power of the complete circuit is approximately $$P_{FS} \approx 11\,\mathrm{dBm}.$$
It should be noted that the noise figure of the ADC is also worsened by another 9 dB, and sensitivity decreases to a final noise figure of approximately 28 dB.
A Comment on Preamps and Transformers
The full scale power of the AD converter circuitry of +11 dBm is quite large for the application of a direct sampling SDR. A preamp or a transformer may be considered to raise the input level in order to get improved practical dynamic range. The Panoradio uses neither of them for several reasons:
- Large bandwidth: Since the Panoradio supports sampling the complete spectrum from 0 to 100 MHz including medium wave, short wave and VHF radio, very high signal levels may occur at the antenna output. So overloading of the ADC has to be avoided at all costs. If the full scale voltage is exceeded, massive distortions occur that block the receiver completely.
- Linearity: Using an active preamplifier always introduces additional non-linear behaviour resulting in increased distortion. Many very good preamps can be found on the market today, and indeed many SDRs provide preamplification. The problem of non-linear distortion increases with the number of signals a system processes at the same time. This is problematic for a wideband SDR with large reception bandwidth. Using no preamp immediately excludes possible distortions.
- Transformer: Transformers offer a nice possibility to amplify signals noiselessly. Voltage amplification can be calculated from the turns ratio N2/N1. The problem with transformers is that they have parasitic impedances, which limits their use at higher frequencies. At higher frequencies insertion loss increases, as does the imbalance between differential lines, which introduces additional distortion (see above). This effect gets worse for larger turns ratios, i.e. higher voltage gain. For 1:1 transformers (with no gain) this problem can be handled by the two-transformer configuration. For larger turns ratios like 1:4 (+6 dB) or 1:9 (+9.5 dB) the maximum usable signal frequency is limited, which heavily impairs performance at 70 cm. More information at Analog Devices here and here.

The Clock Jitter Issue
It is important to consider the clock jitter of the sampling clock, because it may degrade the SNR of the AD converter. More information on clock jitter and SNR: Basics of AD conversion
Clock Jitter and SNR for HF and VHF
The Vectron VCC 6 clock generator on the AD9467 eval board has a jitter of 133 fs. The figure below shows the theoretical achievable SNR under different clock jitter values. For input frequencies from 0-100 MHz (the Panoradio’s frequency range) the theoretical SNR is always more than 80 dB, which is clearly above the AD Converter’s SNR of 76 dB.
So the clock jitter does not affect the noise performance of the receiver.

Clock Jitter and SNR for UHF
Clock jitter is a major issue when directly sampling high frequencies and should not be taken lightly, because severe degradations in SNR may occur. When using the Panoradio as a direct sampling receiver for 70 cm signals (430 MHz), the impact of clock jitter is not negligible. The following figure shows that for signal frequencies above 240 MHz, SNR is affected by clock jitter.
For the 70 cm band the overall SNR is approximately reduced by 7 dB. Therefore the 70 cm front end uses an amplifier to make up for this additional loss. It reduces the overall NF for the 70 cm band from 34 dB to 13 dB. |
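These numbers can be reproduced from the standard jitter relation $\mathrm{SNR}_{\mathrm{jitter}} = -20\log_{10}(2\pi f_{\mathrm{in}} t_j)$; below is a minimal sketch that also combines the jitter noise with the ADC's 76 dB SNR, assuming the two noise sources are uncorrelated and add in power:

```python
import numpy as np

t_j = 133e-15        # clock jitter of the on-board Vectron clock, seconds
snr_adc = 76.0       # AD9467 SNR, dBFS

for f_in in [30e6, 100e6, 240e6, 430e6]:             # input frequency, Hz
    snr_jitter = -20 * np.log10(2 * np.pi * f_in * t_j)
    combined = -10 * np.log10(10**(-snr_adc / 10) + 10**(-snr_jitter / 10))
    print(f"{f_in/1e6:4.0f} MHz: jitter-limited SNR {snr_jitter:5.1f} dB, "
          f"combined {combined:4.1f} dB")
```

At 430 MHz this gives a combined SNR around 68 dB, consistent with the roughly 7 dB reduction quoted above for the 70 cm band.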
Integration over manifolds is commonly defined with objects called chains. What if I want to integrate the exterior derivative of a $k$-form over the $n$-sphere and use Stokes' theorem:
\begin{eqnarray} \int_{\sigma} d\omega &=& \int_{\partial \sigma} \omega \end{eqnarray}
I found in several books that this integral is zero; they argue it is because the sphere is compact, so its boundary is zero.

My question is: how can I relate this to chains? I mean, what is the chain for an $n$-sphere, and why is its boundary zero... |
If $f(x) = 0.5 e^{-|x|}$ for $-\infty < x < \infty$, how would you find the moment generating function for this? Also how would you find the distribution of $Y = |X|$?
Attempt:
$$E(e^{tX}) = \int_{-\infty}^\infty f(x) e^{tx} \; dx.$$
As Didier Piau stated: $$ \begin{align} E(e^{tX}) & = \int_{-\infty}^\infty f(x) e^{tx} \ dx=\int_{-\infty}^0 f(x) e^{tx}\ dx+\int_0^\infty f(x) e^{tx}\ dx \\ \\ & =\int_{-\infty}^0 0.5e^x e^{tx}\ dx + \int_0^\infty 0.5 e^{-x} e^{tx}\ dx \\ \\ & = 0.5\left(\int_{-\infty}^0 e^{(t+1)x}\ dx+\int_{0}^\infty e^{(t-1)x} \ dx\right) = \frac{0.5}{t+1}-\frac{0.5}{t-1}=\frac{1}{1-t^2},\quad t\in (-1,1) \end{align} $$
For the distribution of $Y=|X|$ we have the general rule for a monotone transformation $y=h(x)$: $g(y)=f(h^{-1}(y))\left|\frac{d(h^{-1}(y))}{dy}\right|$. Here $h(x)=|x|$ is two-to-one, so the contributions of both branches $x=\pm y$ must be summed, giving $g(y)=f(y)+f(-y)=e^{-y}$ for $y\in[0,+\infty)$; that is, $Y$ is exponentially distributed. |
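Both results can be cross-checked numerically; a sketch:

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: 0.5 * np.exp(-abs(x))       # the given density (Laplace)

for t in (-0.5, 0.0, 0.7):
    mgf, _ = quad(lambda x: f(x) * np.exp(t * x), -np.inf, np.inf)
    print(t, mgf, 1 / (1 - t**2))         # matches the closed form for |t| < 1

# Y = |X| is exponential: P(Y > 1) should equal e^{-1}
xs = np.random.default_rng(1).laplace(size=200_000)
print(np.mean(np.abs(xs) > 1.0), np.exp(-1.0))
```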
1 ) When B travels towards A, wouldn't each twin observe the other's clock moving ["running", "ticking"] slower?
No, quite the opposite:
If two participants (such as A and B) are moving towards each other (at constant mutual speed $v := c\,\beta$, $\beta > 0$) and one emits signal indications at constant frequency $f$ (the "ticks" of a good clock), then the other observes these signal "ticks" arriving at the larger frequency $f \sqrt{\frac{1 + \beta}{1 - \beta}}$; in other words, quicker than the signal indications were emitted; cmp. blueshift.
In the described setup this applies to both clocks, mutually.
let's say the twins start out some distance apart, in the same reference frame.
... more precisely: the twins are initially at rest with respect to each other, some distance apart.
2) The initial positions of A and B contain synchronized clocks
Let's also suppose
- that both clocks had each been (initially) "ticking" at particular frequencies,
- that the synchronization established that the frequency of one had been equal to the frequency of the other, and
- that both clocks had kept (ticking at) their constant equal frequency throughout the experiment; i.e., the clocks were and remained good throughout the experiment.
3) At some pre agreed upon time B accelerates towards A, reaches some velocity and travels all the way to A at that velocity. [...]
when they meet and before A decelerates what would each one observe regarding the other's clock?
At their meeting
A's clock indicates $n$ ticks after the synchronized "pre agreed upon start time", and B's clock indicates $n \sqrt{ 1 - \beta^2 }$ ticks after the synchronized "pre agreed upon start time"; at least approximately, insofar as B's "acceleration phase" is of negligible duration compared to the "drift phase".
The important point is:
B's duration from indicating the " pre agreed upon start time" (and starting to accelerate towards A) until indicating the meeting and passing of A is shorter by a factor of $\sqrt{ 1 - \beta^2 }$ than A's duration from indicating the " pre agreed upon start time" until indicating the meeting and passing of B.
The following sketch illustrates the case of $\beta = 0.6$, therefore with mutual blueshift factor $\sqrt{ \frac{1 + \beta}{1 - \beta} } = 2$:
When B decelerates and enters A's reference frame would he observe A spontaneously age?
When B decelerates and has come to rest with respect to A, they can again determine which indication of one has been simultaneous to any indication of the other. And both clocks would again tick at equal frequency, since both clocks were supposed to have been and to have remained good.
Insofar as B's "deceleration phase", too, is of negligible duration compared to the "drift phase", they would still find the relation between their good clocks as it was observed at their meeting: clock A is and remains $n (1 - \sqrt{ 1 - \beta^2 })$ ticks ahead of clock B. |
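Plugging in the numbers from the sketch ($\beta = 0.6$); a quick check of the factors used above:

```python
from math import sqrt

beta = 0.6
doppler = sqrt((1 + beta) / (1 - beta))   # mutual blueshift factor: 2.0
dilation = sqrt(1 - beta**2)              # ratio of B's ticks to A's ticks: 0.8

n = 1000                                  # A's ticks from start to meeting (example value)
print(doppler, dilation)                  # 2.0 0.8
print(n * dilation)                       # B's ticks at the meeting: 800.0
print(n * (1 - dilation))                 # A's lead over B: 200.0 ticks
```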
Deformation and strain storage mechanisms during high-temperature compression of a powder metallurgy nickel-base superalloy. Metallurgical and Materials Transactions A. 41:2002–2009. 2010.
Single-crystal solidification of new Co–Al–W-base alloys. Intermetallics. 19:636–643. 2011.
Hybrid intermetallic Ru/Pt-modified bond coatings for thermal barrier systems. Surface and Coatings Technology. 202:349–361. 2007.
Multilayered ruthenium-modified bond coats for thermal barrier coatings. Metallurgical and Materials Transactions A. 37:3347–3358. 2006.
Intermetallic phases formed by ruthenium–nickel alloy interdiffusion. Intermetallics. 12:957–962. 2004.
Thermal expansion behavior of ruthenium aluminides. Scripta Materialia. 50:845–848. 2004.
Experimental assessment of the Ru–Al–Ni ternary phase diagram at 1000 and 1100 °C. Materials Science and Engineering: A. 430:266–276. 2006.
The Critical Role of Shock Melting in Ultrafast Laser Machining. Minerals, Metals and Materials Society/AIME. Feb. 2011.
Sub-nanometer Resolution Chemi-STEM EDS Mapping of Superlattice Intrinsic Stacking Faults in Co-based Superalloys. Microscopy and Microanalysis. 20:1028–1029. 2014.
Dislocation injection in strontium titanate by femtosecond laser pulses. Journal of Applied Physics. 118:075901. 2015.
High resolution energy dispersive spectroscopy mapping of planar defects in L12-containing Co-base superalloys. Acta Materialia. 89:423–437. 2015.
Creep and directional coarsening in single crystals of new $\gamma$–$\gamma'$ cobalt-base alloys. Scripta Materialia. 66:574–577. 2012.
Creep-induced planar defects in L12-containing Co- and CoNi-base single-crystal superalloys. Acta Materialia. 82:530–539. 2015.
High Temperature Creep of New L12-Containing Cobalt-Base Superalloys. Superalloys 2012. 823–832. 2012.
Predicting freckle formation in single crystal Ni-base superalloys. Journal of Materials Science. 39:7199–7205. 2004.
Stabilization of thermosolutal convective instabilities in Ni-based single-crystal superalloys: Carbide precipitation and Rayleigh numbers. Metallurgical and Materials Transactions A. 34:1953–1967. 2003.
Carbides and grain defect formation in directionally solidified nickel-base superalloys. Advanced Technologies for Superalloy Affordability, 2000 TMS Annual Meeting. 2000.
Stabilization of thermosolutal convective instabilities in Ni-based single-crystal superalloys: Carbon additions and freckle formation. Metallurgical and Materials Transactions A. 32:1743–1753. 2001.
Phase instabilities and carbon additions in single-crystal nickel-base superalloys. Materials Science and Engineering: A. 348:111–121. 2003.
Creep of $\alpha_2+\beta$ Titanium Aluminide Alloys. ISIJ International. 31:1139–1146. 1991.
A sharp computational method for the simulation of the solidification of binary alloys. Journal of Scientific Computing. 63:330–354. 2015.
Microstructural features controlling the variability in low-cycle fatigue properties of alloy Inconel 718DA at intermediate temperature. Metallurgical and Materials Transactions A. 47:1096–1109. 2016.
A comparative examination of aging and creep behavior of die-cast MRI230D and AXJ530. Symposium on Magnesium Technology 2008 (TMS, 9–13 March 2008). 117–122. 2008.
Elemental partitioning and microstructure of Mg-Al-Ca-Sn quaternary alloys. TMS 2010: Aluminium and Magnesium (TMS, 14–18 February 2010). 1–5. 2010. |
Black ring
Roberto Emparan and Harvey Reall (2010), Scholarpedia, 5(9):8786. doi:10.4249/scholarpedia.8786 revision #135545 [link to/cite this article]
A black ring is a black hole with an event horizon of topology \( S^1 \times S^p \). Black rings can exist only in spacetimes with five or more dimensions. Exact black ring solutions of general relativity are known only in five dimensions, but approximate solutions for thin black rings (with the radius of \( S^1 \) much larger than the radius of \( S^p \)) have been constructed in spacetimes with more than five dimensions. The existence of black ring solutions shows that higher-dimensional black holes can have non-spherical topology and are not uniquely specified by their conserved charges.
Background
In four-dimensional spacetime, the black hole uniqueness theorem asserts that the Kerr solution is the unique black hole solution of the vacuum Einstein equation that is time-independent and asymptotically flat (i.e. approaches Minkowski spacetime far from the hole). This solution has an event horizon that is topologically spherical and is uniquely labelled by its mass \(M\) and angular momentum \(J\). This result proves that all multipole moments of the gravitational field of a time-independent black hole are uniquely determined by the lowest two, namely \(M\) and \(J\).
In 2001, the discovery of an exact black ring solution of the five-dimensional vacuum Einstein equations [1] ([2]) proved that these simple topological and uniqueness properties of 4d black holes do not extend to higher dimensions.
Heuristic construction of black rings
A heuristic argument that suggests the possible existence of black rings is the following. Take a neutral black string in \(D\geq 5\) dimensions, constructed as the direct product of the \((D-1)\)-dimensional Schwarzschild-Tangherlini solution and a line, so the geometry of the horizon is \( \mathbf{R}\times S^{D-3} \). Imagine bending this string to form a circle, so the topology is now \(S^1\times S^{D-3}\). In principle this circular string tends to contract, decreasing the radius of the \(S^1\), due to its tension and gravitational self-attraction. However, if the string can be made to rotate along the \(S^1\), then Newtonian arguments suggest that these forces could be balanced by centrifugal repulsion. The result is a rotating black ring: a black hole with an event horizon of topology \( S^1 \times S^{D-3} \).
Ref. [3] ([4]) presented an explicit solution of five-dimensional vacuum general relativity describing a black ring that rotates along its circle. It provided the first example of non-spherical horizon topology and of black hole non-uniqueness in vacuum gravity. Ref. [5] found a generalization of this solution in which the black ring rotates also along the \( S^2 \), i.e., a doubly-spinning black ring.

Non-uniqueness

The five-dimensional black ring with a single angular momentum (along its circle) illustrates the main novelties of the solution more clearly than the doubly-spinning ring. The absence of uniqueness is illustrated in a plot of the area of the horizon as a function of angular momentum for fixed mass (see Figure 1).
In this plot the horizon area \(A_H\) and the spin \(J\) have been conveniently normalized to dimensionless magnitudes \(a_H=\sqrt{\frac{27}{256\pi}}\frac{A_H}{(GM)^{3/2}}\) and \(j=\sqrt{\frac{27\pi}{32G}}\frac{J}{M^{3/2}}\) (for ease of visualization the horizontal axis plots \(j^2\) instead of \(j\)). Contrary to what happens for rotating black holes in four dimensions, and for the MP black hole in five dimensions, the angular momentum of the black ring (for fixed mass) is bounded below, but not above. Observe also that above this minimum angular momentum there exist two branches of black rings: the one with higher area is referred to as the branch of thin black rings, and the other as fat black rings. (See Figure 2.)
Non-uniqueness is reflected in the fact that there is a narrow range of angular momenta, \(\sqrt{27/32}\leq j<1\ ,\) for which there exist one MP black hole and two black rings (a fat and a thin one) all with the same values of the mass and the spin. Since the latter are the only conserved quantities carried by these objects, there is an explicit violation of black hole uniqueness.
References
- Emparan R and Reall H S (2002) Phys. Rev. Lett. 88:101101 (http://arxiv.org/abs/hep-th/0110260)
- Emparan R and Reall H S (2006) Class. Quant. Grav. 23:R169 (http://arxiv.org/abs/hep-th/0608012)
- Emparan R and Reall H S (2008) Living Rev. Rel. 11:6 (http://arxiv.org/abs/0801.3471)
- Pomeransky A A and Sen'kov R A (2006) hep-th/0612005 (http://arxiv.org/abs/hep-th/0612005) |
As a student of mathematics, I'm often interested in how fascinating math works its way into other subjects. In particular, I recently became curious about why computer scientists are talking about complicated categorical machinery, and this post is a quasi-answer to this question. As a disclaimer, I'm neither a computer scientist nor a category theorist, so what ensues will be a layman's (or lay-mathematician's rather) approach to understanding their connection. In the following, we will not actually need any theory but this post is mostly about developing the requisite language to understand how all of this strangeness came to be. I'm going to routinely defer main definitions to wikipedia, and instead pass to the level of heuristics throughout this post, so that it does not become bloated and technical. Additionally,…
Here is a pretty wild theorem that generalizes Liouville's theorem in complex analysis: Picard's Little Theorem: If a holomorphic, entire function $f:\mathbb C \to \hat{\mathbb C}$ misses three points, it is constant. This theorem is remarkable, and I had never really heard of it, since its proof relies on a certain covering. The following argument can be made elementary (by describing a certain covering) but instead we will spend some time talking about some implications of the Uniformization Theorem, and finish with an "easy" lifting argument. Note that Picard's Little theorem works just as well for $f: \mathbb C \to \mathbb C$, since we can remove a point, and rotate to stereographically project, so we replace three points with two points. Uniformization Theorem: every simply connected Riemann surface is the…
the main goal of this quick note is to simultaneously misrepresent the work done by Harmonic Analysts and by working under some dubious assumptions, shed some light on how we can define a Fourier transform on finite abelian groups. Suppose we are given a function $f:\mathbb R \to \mathbb R$ that is $2 \pi$ periodic, and suppose further that it is a function that deserves to be integrated. Then, we know from the grapevine that we can rewrite $f$ in terms of its "Fourier series:" $$f(x)=a_0+\sum_{n=1}^{\infty}[a_n \cos(nx)+b_n \sin(nx)]$$ and lest anyone is nauseated by trigonometric functions, let's quickly generalize out of this to functions $f: \mathbb R \to \mathbb C$ by rewriting $\cos(x)$ and $\sin(x)$ in terms of $e^{inx}$. In other words, we are now thinking about $$L^2(\mathbb R):=\{f:\mathbb…
I've recently been doing some research for my senior thesis in applications of characteristic classes to plane equipartition problems. These types of problems follow a general procedure known as the Configuration Space-Test Map paradigm. I'll try to demonstrate how these basic arguments go, using what is probably the most famous theorem in equivariant topology: The Borsuk-Ulam Theorem: Every $\mathbb Z_2$-equivariant map $f:S^n \to \mathbb R^n$ vanishes. The theorem can be proven in a more general language that I will try to flesh out in future blog posts, but I will prove the two dimensional case using just basic covering space theory here and apply it to the two-dimensional ham sandwich theorem: The Ham-Sandwich Theorem: Any two masses in $\mathbb R^2$ can be simultaneously equipartitioned by a single line. The idea will be…
These are a bunch of topological proofs for facts in algebra. Have you ever had the experience of insisting to yourself that the fundamental theorem of algebra is really a topological result, only to forget the argument? Have you ever proven Cayley-Hamilton by muttering something about cofactors? Shown Bezout's identity by stating the Euclidean algorithm (whose proof eludes lazy undergraduates everywhere)? I have. I am therefore writing this blogpost to once again weasel out of actually doing any math. Theorem (Bezout's Identity): For any coprime integers $a,b \in \mathbb Z$, there exist integers $c,d$ so that $ac-bd=1$. Proof: recall from my previous blogpost "a cute application of mapping class groups" that a member of $\pi_1(\mathbb T)$, or $\mathbb Z \oplus \mathbb Z$, can be realized as a simple closed curve, if…
Let's start with a fairly innocuous question: Question: What elements of $\pi_1(\mathbb T^2)$ can be represented by a (homotopy class of a) simple closed curve? A naive guess might be that with some effort, every element can be obtained by some simple closed curve, but further inspection shows that this cannot be the case. For example, first drawings of the element $(2,0)$ will show that the curve crosses itself. On the other hand, there are elements that surely can be represented: $(1,0), (2,1), (3,2)$ and so on. In fact, given any $(a,b)$ with $\mathrm{gcd}(a,b)=1$, we have the following procedure: which fails when $(a,b)$ have a common factor, since it produces multiple curves! However, we claim that indeed these are the only elements of $\pi_1(\mathbb T)$ that can be represented by a…
This is a quick note on an idea that I think is worth pursuing, but I've mentioned in a previous post (although I was sort of blasé about it.) There is nothing really to prove here, but I feel that Nakayama's Lemma is hard to conceptualize, and the Jacobson radical generalization makes an intuitive result obscure. All modules in this post are finitely generated. Nakayama's Lemma: If $M$ is an $R$ module with $(R,\mathfrak{m})$ local, then $M=N+\mathfrak{m}M$ implies that $N=M$. The idea here is that finitely generated modules are closely related to vector spaces in a sense that can be made precise: Corollary: $a_1, \dots,a_n$ generate $M$ as a module if and only if their images in the quotient generate $M/\mathfrak{m}M$ as an $R/\mathfrak{m}$-vector space. The forward direction is easy…
The title is a quote attributed to the mathematician Arthur Cayley. The goal of this post is to merely organize and make sense of the fact that all three of the principal continuous geometries are contained in projective geometry. A priori, this is really remarkable, since the definition of projective space (and its symmetries) via linear subspaces of $\mathbb R^n$ has very little to do with the other geometries we consider. For me, the most interesting fact here is not that the underlying spaces (Euclidean, spherical etc.) can be embedded in projective space, but rather that there exist embeddings that preserve the symmetries of each respective geometry, a statement that will be made precise immediately: Definition 1: A pair $(Y:H)$ is said to be a geometry (in the sense of…
Today, I was in class and was given a question of the form "show that $h:G \to H$ cannot be surjective," and I asked my professor, "can we assume some basic functional analysis? The open mapping theorem implies that this is open, and hence a homeomorphism, which it is not." A fellow student remarked that manifolds aren't linear, which I accidentally misheard as "please quit mathematics immediately." Luckily, I am very bull-headed, and I've spent the day reformulating the proof of the open mapping theorem in Banach spaces for Lie groups. I think it works-- I was unable to find any such result in my books on Lie theory, and I didn't want to google it initially (we live with our contradictions), but after proving the lemma, I looked it up…
I've heard many times "a vector space is naturally isomorphic to its double dual," with reference to the "canonical isomorphism" (evaluation). However, in the same breath, there is no "easy" way to construct an isomorphism from a vector space to its usual dual (in a sense that can be made formal). This is really pretty sad, since in finite dimensions they are always isomorphic. But, on the other hand, if one works in the category of inner product spaces, a real vector space is canonically isomorphic to its dual space. In fact, the isomorphism is exactly what we would expect: Proposition 1: Given a finite dimensional real vector space $V$ equipped with inner product $(\cdot,\cdot)$, there is an isomorphism $\phi: V \to V^*$ given by $\phi(x):= y \mapsto (x,y)$. First note that this makes… |
I am interested in the complexity of a problem involving spanning hyperforests (a union of hypertrees which covers all of the vertices) of a $k$-hypergraph. I describe the relevant definitions for hypergraphs below, but the following is the problem:
SPANNING HYPERFOREST ROOT SET. For a directed hypergraph $D$ and an integer $k \geqslant 1$, determine whether there exists a spanning hyperforest for $D$ which has a root-set of size at most $k$. Remarks. It is not difficult to show that SPANNING HYPERFOREST ROOT SET is in NP: in particular, if a root-set of the suitable size is provided, then a spanning hyperforest with that root-set can be found in polynomial time. It is also trivial to find a value of $k$ for which $(D,k)$ is a YES instance: for instance $k = \lvert V(D) \rvert$ (in which case the empty hypergraph is a spanning hyperforest with a root-set of size $k$). Considered as an optimisation problem, it is usually easy to find values $k < \lvert V(D) \rvert$ for which $(D,k)$ remains a YES instance, though it is not clear how easily one can find the optimum. Question.
Is SPANNING HYPERFOREST ROOT SET also NP-hard? Is this true in the special case where the input hypergraph is "symmetric", in the sense that for any edge $e = (t(e), h(e))$ and for any $v \in s(e) := t(e) \cup h(e)$, there is also an edge $e' = (s(e) {\,\smallsetminus\,} \{v\}, v)$?

Relevant definitions. A hypergraph is a pair $G = (V,E)$, where $E \subseteq \mathcal P(V)$. If each $e\in E$ has the same cardinality $k$, we call $G$ a $k$-uniform hypergraph (or $k$-hypergraph). A directed hypergraph is a pair $D = (V,E)$, where in our setting we let $E \subseteq \mathcal P(V) \times V$ be the set of hyper-edges. For each edge $e \in E$, we let $t(e) = \pi_1(e) \subseteq V$ be the "tail" of the edge, and $h(e) = \pi_2(e) \in V \smallsetminus t(e)$ be the "head" of the edge. Thus we consider hypergraphs where each edge has exactly one head (more general definitions are common). We may associate a "symmetric" directed hypergraph $D_G = (V,E')$ of this sort to any hypergraph $G = (V,E)$, by replacing each undirected edge $e \in E$ with a collection of directed variants $E'_e = \{ (e{\,\smallsetminus\,} v, v) \,\vert\, v \in e \}$ and letting $E' = \bigcup_{e \in E} E'_e$.
A directed cycle in a directed hypergraph $D = (V,E)$ is just a vertex sequence $(v_0,v_1,\ldots,v_\ell)$ for which $v_{i+1} \ne v_i$ for $0 \leqslant i < \ell$, $v_0 = v_\ell$, and for which for each $0 \leqslant i < \ell$ there is an edge $e_i$ for which $v_i \in t(e_i)$ and $v_{i+1} = h(e_i)$.
A hyperforest is a directed hypergraph in which all vertices have in-degree either zero or one, and which has no cycles in the above sense. (N.B. I am diverging here from the terminology in my reference above, which does not explicitly consider whether the hypergraph is connected; but this cannot be taken for granted in my setting.) The set of nodes of in-degree zero we call the "root set" of the hyperforest.
A spanning hyperforest $T$ for a directed hypergraph $D = (V,E)$ is simply a subgraph $T \subseteq D$ of the hypergraph which contains all of the vertices $V$ and which is a hyperforest. |
This is no longer really a hint: the induction step ended up messy enough that I went ahead and wrote it out, though the internal induction has only been indicated, not actually carried out properly. For the main induction step:
$$\begin{align*}\sum_{k\ge 0}2k\binom{n+1}{2k}&=\sum_{k\ge 0}2k\left(\binom{n}{2k}+\binom{n}{2k-1}\right)\\\\&=n2^{n-2}+\sum_{k\ge 0}2k\binom{n}{2k-1}\\\\&=n2^{n-2}+\sum_{k\ge 0}2k\left(\binom{n-1}{2k-1}+\binom{n-1}{2k-2}\right)\\\\&=n2^{n-2}+\sum_{k\ge 0}2k\binom{n-1}{2k-2}+\sum_{k\ge 0}2k\binom{n-1}{2k-1}\\\\&=n2^{n-2}+\sum_{k\ge 0}(2k+2)\binom{n-1}{2k}+\sum_{k\ge 0}2k\binom{n-1}{2k-1}\\\\&=n2^{n-2}+\sum_{k\ge 0}2k\binom{n-1}{2k}+2\sum_{k\ge 0}\binom{n-1}{2k}+\sum_{k\ge 0}2k\binom{n-1}{2k-1}\\\\&=n2^{n-2}+(n-1)2^{n-3}+2^{n-1}+\sum_{k\ge 0}2k\binom{n-1}{2k-1}\;.\end{align*}$$
Notice that the last term of the last line is just like the last term of the second line, but with the upper number in the binomial coefficients reduced by $1$. This suggests that we should look at both
$$\sum_{k\ge 0}2k\binom{n}{2k}\quad\text{and}\quad\sum_{k\ge 0}2k\binom{n}{2k-1}$$
simultaneously. They’re a bit awkward to write, so I’ll introduce a couple of abbreviations: let
$$f(n)=\sum_{k\ge 0}2k\binom{n}{2k}\quad\text{and}\quad g(n)=\sum_{k\ge 0}2k\binom{n}{2k-1}\;.$$
What’s been done already can be summed up as $$f(n+1)=n2^{n-2}+(n-1)2^{n-3}+2^{n-1}+g(n-1)\;,$$ and along the way we found that $$g(n)=(n-1)2^{n-3}+2^{n-1}+g(n-1)\;.$$
A straightforward induction now shows that
$$\begin{align*}f(n+1)&=n2^{n-2}+(n-1)2^{n-3}+2^{n-1}+g(n-1)\\&=n2^{n-2}+(n-1)2^{n-3}+(n-2)2^{n-4}+2^{n-1}+2^{n-2}+g(n-2)\\&\;\vdots\\&=\sum_{k=2}^nk2^{k-2}+\sum_{k=2}^{n-1}2^k+g(2)\\\\&=\sum_{k=0}^{n-2}(k+2)2^k+4\sum_{k=0}^{n-3}2^k+4\\\\&=n2^{n-2}+\sum_{k=0}^{n-3}(k+6)2^k+4\\\\&=n2^{n-2}+\sum_{k=1}^{n-3}k2^k+6\sum_{k=0}^{n-3}2^k+4\\\\&=n2^{n-2}+\sum_{k=1}^{n-3}\sum_{i=1}^k2^k+6\left(2^{n-2}-1\right)+4\\\\&=(n+6)2^{n-2}+\sum_{i=1}^{n-3}\sum_{k=i}^{n-3}2^k-2\\\\&=(n+6)2^{n-2}+\sum_{i=1}^{n-3}\left(\left(2^{n-2}-1\right)-\left(2^i-1\right)\right)-2\\\\&=(n+6)2^{n-2}+\sum_{i=1}^{n-3}\left(2^{n-2}-2^i\right)-2\\\\&=(n+6)2^{n-2}+(n-3)2^{n-2}-\sum_{i=1}^{n-3}2^i-2\\\\&=(2n+3)2^{n-2}-\left(2^{n-2}-2\right)-2\\\\&=(2n+2)2^{n-2}\\\\&=(n+1)2^{n-1}\;.\end{align*}$$
Added: Note that a combinatorial proof is also possible. A term $2k\dbinom{n}{2k}$ on the lefthand side is the total number of elements in $2k$-element subsets of $[n]$ if each is counted once for each $2k$-element set in which it appears. Thus, the lefthand side counts each element of $[n]$ once for each even-sized subset of $[n]$ in which it appears.
Fix an element $a\in[n]$. There is an obvious bijection between the even-sized subsets of $[n]$ containing $a$ and the odd-sized subsets of $[n]\setminus\{a\}$. $[n]\setminus\{a\}$ has $2^{n-1}$ subsets, so it has $2^{n-2}$ odd-sized subsets. Thus, $a$ appears in $2^{n-2}$ even-sized subsets of $[n]$. The righthand side of the identity sums this figure over all $n$ elements of $[n]$. |
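A brute-force check of the identity $\sum_{k\ge 0}2k\binom{n}{2k}=n2^{n-2}$ that both proofs establish (a sketch):

```python
from math import comb

def lhs(n):
    # left-hand side: sum over even subset sizes 2k of 2k * C(n, 2k)
    return sum(2 * k * comb(n, 2 * k) for k in range(n // 2 + 1))

for n in range(2, 16):
    assert lhs(n) == n * 2**(n - 2), n
print("identity verified for n = 2, ..., 15")
```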
Can somebody explain to me (or give a proof) why the field extension $\mathbb{R/Q}$, that is, the field of real numbers as an extension of the field of rational numbers, is transcendental and not algebraic? Algebraic would mean that each element of $\mathbb R$ is a root of some polynomial with rational coefficients.
You are confusing the quantifiers when you negated the statement.
$K$ is an algebraic extension of $F$ if every element of $K$ is the root of a polynomial with coefficients in $F$. So $\forall k\exists p\in F[x]: p(k)=0$.
$K$ is a transcendental extension of $F$ if it is not algebraic. So $\lnot(\forall k\exists p\in F[x]:p(k)=0)$.
Parsing this negation, we have $\exists k\forall p\in F[x]:p(k)\neq 0$. So $K$ is transcendental over $F$ if there is at least one element in $K$ which is not the root of any polynomial with coefficients in $F$.
Now a simple counting argument shows that $\Bbb Q[x]$ is a countable set, and every polynomial has finitely many roots. So there are only countably many algebraic numbers in any given extension of $\Bbb Q$. Since $\Bbb R$ is uncountable it means that "most" of its elements are indeed transcendental.
Any algebraic extension of $\mathbb{Q}$ is denumerable, since its elements are roots of polynomials with coefficients in $\mathbb{Q}$, but $\mathbb{R}$ is not denumerable, so it contains elements that are not algebraic, and these elements are said to be transcendental. |
If $\triangle$ is the diagonal of $X \times X$, show that its tangent space $T_{(x,x)}(\triangle)$ is the diagonal of $T_x(X) \times T_x(X).$
I don't have the slightest idea on how to do this.
By definition, the tangent space of $X$ at $x$ is the image of the map $d\phi_0: \mathbb{R}^k \rightarrow \mathbb{R}^N$, where $x + d\phi_0(v)$ is the best linear approximation to $\phi: V \rightarrow X$ at 0. $\phi: V \rightarrow X$ is a local parametrization around $x$, $X$ sits in $\mathbb{R}^N$ and $V$ is an open set in $\mathbb{R}^k$, and $\phi(0) = x$.
Thanks. |
Let's consider a call on min option on two underlying arithmetic Brownian motions $V_t$ and $H_t$ (no drift). Let $P_t$ denote the price process of the option, $r_f$ the risk-free rate, $\tau$ the time to maturity. Then, following the notation and the procedure in [Stulz, 1982] (eq (3) - (7) in particular), we obtain a similar PDE
$$-P_\tau = r_f(P-P_VV-P_HH)-\frac12(P_{VV}\sigma_V^2+P_{HH}\sigma_H^2+2P_{VH}\rho_{VH}\sigma_V\sigma_H)$$
and the boundary condition is given by $$P_T:=\left(\min(V,H)-F\right)^+$$
It looks like the PDE is a heat PDE. Is there any available literature that has already dealt with this PDE, or even better, that already gives the explicit pricing formula for the arithmetic rainbow option? (Assume constant vol, constant correlation, etc.) |
Calculates the p-value of the statistical test for the population mean.
Syntax

TEST_MEAN(x, mean, Return_type, Alpha)

- x is the input data sample (one/two dimensional array of cells (e.g. rows or columns)).
- mean is the assumed population mean. If missing, the default value of zero is assumed.
- Return_type is a switch to select the return output: 1 = P-Value (default), 2 = Test Statistics (e.g. Z-score), 3 = Critical Value.
- Alpha is the statistical significance of the test (i.e. alpha). If missing or omitted, an alpha value of 5% is assumed.

Remarks

- The sample data may include missing values (e.g. #N/A).
- The test hypothesis for the population mean is: $$H_{o}: \mu=\mu_o$$ $$H_{1}: \mu\neq \mu_o$$ where $H_{o}$ is the null hypothesis, $H_{1}$ is the alternate hypothesis, $\mu_o$ is the assumed population mean and $\mu$ is the actual population mean.
- For the case in which the underlying population distribution is normal, the sample mean/average has a Student's t sampling distribution with $T-1$ degrees of freedom: $$\bar x \sim t_{\nu=T-1}\left(\mu,\frac{S^2}{T}\right),$$ where $\bar x$ is the sample average, $\mu$ is the population mean/average, $S$ is the sample standard deviation, $$S^2 = \frac{\sum_{i=1}^T(x_i-\bar x)^2}{T-1},$$ $T$ is the number of non-missing values in the data sample, and $t_{\nu}()$ is the Student's t-distribution with $\nu$ degrees of freedom.
- The Student's t-test for the population mean can be used for small and for large data samples.
- This is a two-sided (i.e. two-tails) test, so the computed p-value should be compared with half of the significance level ($\alpha/2$).
- The underlying population distribution is assumed normal (Gaussian).

Examples

Example 1:
Examples

Example 1:

Formula: =AVERAGE($B$2:$B$11) returns the sample mean (-0.0256); =TEST_MEAN($B$2:$B$11,0) returns the p-value of the test (0.472). |
$\def\abs#1{|#1|}\def\i{\mathbf {i}}\def\ket#1{|{#1}\rangle}\def\bra#1{\langle{#1}|}\def\braket#1#2{\langle{#1}|{#2}\rangle}\def\tr{\mathord{\mbox{tr}}}\mathbf{Exercise\ 4.20}$
Let $O_{\theta_1}$ be the single-qubit observable with $+1$-eigenvector
$\ket{v_1} = \cos\theta_1\ket{0} + \sin\theta_1\ket{1}$ and $-1$-eigenvector $\ket{v_1^\perp} = -\sin\theta_1\ket{0} + \cos\theta_1\ket{1}.$ Similarly let $O_{\theta_2}$ be the single-qubit observable with $+1$-eigenvector $\ket{v_2} = \cos\theta_2\ket{0} + \sin\theta_2\ket{1}$ and $-1$-eigenvector $\ket{v_2^\perp} = -\sin\theta_2\ket{0} + \cos\theta_2\ket{1}.$ Let $O$ be the two-qubit observable $O_{\theta_1}\otimes O_{\theta_2}$. We consider various measurements on the EPR state $\ket{\psi} = \frac{1}{\sqrt{2}}(\ket{00} + \ket{11})$. We are interested in the probability that the measurements $O_{\theta_1}\otimes I$ and $I\otimes O_{\theta_2}$, if they were performed on the state $\ket\psi$, would {\it agree} on the two qubits in that either both qubits are measured in the $1$-eigenspace or both are measured in $-1$-eigenspace of their respective single-qubit observables. As in Example \ref{bit-equality}, we are not interested in the specific outcome of the two measurements, just whether or not they would agree. The observable $O = O_{\theta_1}\otimes O_{\theta_2}$ gives exactly this information.
a) Find the probability that the measurements $O_{\theta_1}\otimes I$ and $I \otimes O_{\theta_2}$, when performed on $\ket\psi$, would agree in the sense of both resulting in a $+1$ eigenvector or both resulting in a $-1$ eigenvector. (Hint: Use the trigonometric identities $\cos(\theta_1 - \theta_2) = \cos(\theta_1)\cos(\theta_2) + \sin(\theta_1)\sin(\theta_2)$ and $\sin(\theta_1 - \theta_2) = \sin(\theta_1)\cos(\theta_2) - \cos(\theta_1)\sin(\theta_2)$ to obtain a simple form for your answer.)
b) For what values of $\theta_1$ and $\theta_2$ do the results always agree?
c) For what values of $\theta_1$ and $\theta_2$ do the results never agree?
d) For what values of $\theta_1$ and $\theta_2$ do the results agree half the time?
e) Show that whenever $\theta_1 \ne \theta_2$ and $\theta_1$ and $\theta_2$ are chosen from $\{-60^\circ, 0^\circ, 60^\circ \}$, then the results agree $1/4$ of the time and disagree $3/4$ of the time. |
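For part (a), a hedged sketch of the standard computation (not taken from the book's solutions): writing each observable in the computational basis as a rotated Pauli operator, $$O_\theta = \begin{pmatrix} \cos 2\theta & \sin 2\theta \\ \sin 2\theta & -\cos 2\theta \end{pmatrix},$$ one finds $$\langle\psi|\,O_{\theta_1}\otimes O_{\theta_2}\,|\psi\rangle = \cos 2\theta_1\cos 2\theta_2 + \sin 2\theta_1\sin 2\theta_2 = \cos\bigl(2(\theta_1-\theta_2)\bigr),$$ so the agreement probability is $$P(\text{agree}) = \frac{1+\langle\psi|O|\psi\rangle}{2} = \cos^2(\theta_1-\theta_2),$$ consistent with part (e): $|\theta_1-\theta_2| \in \{60^\circ, 120^\circ\}$ gives $\cos^2(\theta_1-\theta_2) = 1/4$.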
If the inradius of a right-angled triangle with integer sides is $2013$, find the number of possible right-angled triangles that can be formed from the above information. I have tried $r(a+b+c)=ab$ and $a^2+b^2=c^2$, but couldn't get further. Thanks in anticipation.
One thing we can get is
$CF=CD=b-r$ and $BE=BD=c-r$. So, $BC=a=b+c-2r\quad (1)$.
By Euclid's formula we can look for a solution such that $\gcd(a,b,c)=1$ and $b=m^2-n^2$, $c=2mn$ and $a=m^2+n^2$ with $\gcd(m,n)=1$ and $m,n$ not both odd.
We can then go back to $(1)$ and get
$$a+2\cdot2013=b+c\\ m^2+n^2+2\cdot2013=m^2-n^2+2mn\\ n^2+2013=mn\to n(m-n)=2013$$
so, $n|2013$ and once $2013=3\cdot11\cdot61$ then we have the possibilities:
$$n\in\{1,\,3,\,11,\,61,\,3\cdot11,\,3\cdot61,\,11\cdot61,\,3\cdot11\cdot61\}$$
So you just have to try all possibilities for $n$ and find a suitable $m$.
For example, for $n=3$ you will find $m=674$ and then you have one solution which is $(a,b,c)=(674^2+3^2,674^2-3^2,2\cdot3\cdot674)$.
Now just find the others.
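A short Python sketch that carries out this enumeration (illustrative only; variable names are mine):

```python
from math import gcd

R = 2013  # the given inradius

triples = []
for n in range(1, R + 1):
    if R % n:
        continue
    m = n + R // n                               # from n(m - n) = 2013
    if gcd(m, n) == 1 and (m - n) % 2 == 1:      # primitive: coprime, not both odd
        a, b, c = m*m + n*n, m*m - n*n, 2*m*n    # hypotenuse a, legs b, c
        assert (b + c - a) // 2 == R             # inradius check: r = (b + c - a)/2
        triples.append((a, b, c))

print(len(triples))   # 8 triples
for t in triples:
    print(t)
```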
The answer to the question is easy: there are 8 primitive Pythagorean triples with inradius 2013, and no other (non-primitive) triples.

My extended answer: 2013 has 3 distinct prime factors, since $2013 = 3\cdot11\cdot61$; thus the number of Pythagorean triples with this inradius is $2^3 = 8$. Therefore, there are 8 Pythagorean triples with the given inradius 2013. Notably, all of them are primitive; no non-primitive Pythagorean triples occur for the inradius 2013.
Reference: Neville Robbins, On the number of primitive Pythagorean triangles with a given inradius, Fibonacci Quarterly 2006, 44(4), pp. 368–369.
For total Pythagorean triples, read: Tron Omland, How many Pythagorean triples with a given inradius?, Journal of Number Theory 2017, 170(1), 1–2. |
Saddle-node bifurcation
Yuri A. Kuznetsov (2006), Scholarpedia, 1(10):1859. doi:10.4249/scholarpedia.1859, revision #151865
A saddle-node bifurcation is a collision and disappearance of two equilibria in dynamical systems. In systems generated by autonomous ODEs, this occurs when the critical equilibrium has one zero eigenvalue. This phenomenon is also called fold or limit point bifurcation. A discrete version of this bifurcation is considered in the article "Saddle-node bifurcation for maps".
Definition
Consider an autonomous system of ordinary differential equations (ODEs) \[ \dot{x}=f(x,\alpha),\ \ \ x \in {\mathbb R}^n \] depending on a parameter \(\alpha \in {\mathbb R}\ ,\) where \(f\) is smooth.
Suppose that at \(\alpha=0\) the system has an equilibrium \(x^0=0\ .\) Further assume that its Jacobian matrix \(A_0=f_x(0,0)\) has a simple eigenvalue \(\lambda_{1}=0 \ .\)
Then, generically, as \(\alpha\) passes through \(\alpha=0\ ,\) two equilibria collide, form a critical saddle-node equilibrium (case \(\beta=0\) in Figure 1), and disappear. This bifurcation is characterized by a single bifurcation condition \(\lambda_1=0\) (has codimension one) and appears generically in one-parameter families of smooth ODEs. The critical equilibrium \( x^0 \) is a multiple (double) root of the equation \( f(x,0)=0 \ .\)
One-dimensional Case
Consider the system above in the scalar case: \[\dot{x} = f(x,\alpha), \ \ \ x \in {\mathbb R}\ .\] If the following
nondegeneracy conditions hold:

(SN.1) \(a(0)=\frac{1}{2}f_{xx}(0,0) \neq 0\ ,\)
(SN.2) \(f_{\alpha}(0,0) \neq 0\ ,\)
then this system is locally topologically equivalent near the origin to the normal form \[ \dot{y} = \beta + \sigma y^2 \ ,\] where \(y \in {\mathbb R},\ \beta \in {\mathbb R}\ ,\) and \(\sigma= {\rm sign}\ a(0) = \pm 1\ .\)
The normal form has two equilibria (one stable and one unstable) \(y^{1,2}=\pm \sqrt{-\sigma \beta}\) for \(\sigma \beta<0\) and no equilibria for \(\sigma \beta > 0\ .\) At \(\beta=0\ ,\) there is one critical equilibrium \(y^0=0\) with zero eigenvalue.
Multidimensional Case
In the \(n\)-dimensional case with \(n \geq 2\ ,\) the Jacobian matrix \(A_0\) at the saddle-node bifurcation has
a simple zero eigenvalue \(\lambda_{1}=0\ ,\) as well as \(n_s\) eigenvalues with \({\rm Re}\ \lambda_j < 0\ ,\) and \(n_u\) eigenvalues with \({\rm Re}\ \lambda_j > 0\ ,\)
with \(n_s+n_u+1=n\ .\) According to the Center Manifold Theorem, there is a family of smooth one-dimensional invariant manifolds \(W^c_{\alpha}\) near the origin. The \(n\)-dimensional system restricted on \(W^c_{\alpha}\) is one-dimensional, hence has the normal form above.
Moreover, under the non-degeneracy conditions (SN.1) and (SN.2), the \(n\)-dimensional system is locally topologically equivalent near the origin to the suspension of the normal form by the
standard saddle, i.e. \[\dot{y} = \beta + \sigma y^2\ ,\] \[\dot{y}^s = -y^s\ ,\] \[\dot{y}^u = +y^u\ ,\] where \(y \in {\mathbb R}\ ,\) \(y^s \in {\mathbb R}^{n_s}, \ y^u \in {\mathbb R}^{n_u}\ .\) Figure 1 shows the phase portraits of the normal form suspension when \(n=2\ ,\) \(n_s=1\ ,\) \(n_u=0\ ,\) and \(\sigma=+1\ .\)

Quadratic Coefficient
The quadratic coefficient \(a(0)\ ,\) which is involved in the nondegeneracy condition (SN.1), can be computed for \(n \geq 1\) as follows. Write the Taylor expansion of \(f(x,0)\) at \(x=0\) as \[ f(x,0)=A_0x + \frac{1}{2}B(x,x) + O(\|x\|^3) \ ,\] where \(B(x,y)\) is the bilinear function with components \[ \ \ B_j(x,y) =\sum_{k,l=1}^n \left. \frac{\partial^2 f_j(\xi,0)}{\partial \xi_k \partial \xi_l}\right|_{\xi=0} x_k y_l \ ,\] where \(j=1,2,\ldots,n\ .\) Let \(q\in {\mathbb R}^n\) be a null-vector of \(A_0\ :\) \(A_0q=0, \ \langle q, q \rangle =1\ ,\) where \(\langle p, q \rangle = p^Tq\) is the standard inner product in \({\mathbb R}^n\ .\) Introduce also the adjoint null-vector \(p \in {\mathbb R}^n\ :\) \(A_0^T p = 0, \ \langle p, q \rangle =1\ .\) Then (see, for example, Kuznetsov (2004)) \[ a(0)= \frac{1}{2} \langle p, B(q,q)\rangle = \left.\frac{1}{2} \frac{d^2}{d\tau^2} \langle p, f(\tau q,0) \rangle \right|_{\tau=0} \ .\] Standard bifurcation software (e.g. MATCONT) computes \(a(0)\) automatically.
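As an illustration, a small numerical sketch of this recipe (the example system is my own, not from the article): for \(f(x,0) = (x_1^2,\,-x_2)\) we have \(q = p = (1,0)^T\), and the formula should give \(a(0) = 1\).

```python
import numpy as np

def quadratic_coefficient(f, q, p, h=1e-4):
    """a(0) = (1/2) d^2/dtau^2 <p, f(tau*q)> at tau = 0, by central differences."""
    g = lambda tau: p @ f(tau * q)
    return 0.5 * (g(h) - 2.0 * g(0.0) + g(-h)) / h**2

# Example: f(x) = (x1^2, -x2); A0 has eigenvalues {0, -1}, null vectors q = p = e1.
f = lambda x: np.array([x[0]**2, -x[1]])
q = np.array([1.0, 0.0])
p = np.array([1.0, 0.0])

print(quadratic_coefficient(f, q, p))   # approximately 1.0
```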
Other Cases
Saddle-node bifurcation occurs also in infinite-dimensional ODEs generated by PDEs and DDEs, to which the Center Manifold Theorem applies. Saddle-node bifurcations also occur for dynamical systems with discrete time (iterated maps).
An important case of saddle-node bifurcation in planar ODEs is when the center manifold makes a homoclinic loop, as in Figure 3. Such a saddle-node homoclinic bifurcation results in the birth of a limit cycle when the saddle-node disappears. The period of this cycle tends to infinity as the parameter approaches its bifurcation value. In ODEs with \( n \geq 3 \ ,\) a saddle-node with \( n_sn_u >0 \) can have more than one homoclinic orbit simultaneously. Disappearance of such a saddle-node, called a saddle-saddle or a Shilnikov saddle-node, generates an infinite number of saddle periodic orbits.
References

A.A. Andronov, E.A. Leontovich, I.I. Gordon, and A.G. Maier (1971) Theory of Bifurcations of Dynamical Systems on a Plane. Israel Program Sci. Transl.
L.P. Shilnikov (1969) On a new type of bifurcation in multidimensional dynamical systems. Sov. Math. Dokl. 10, 1368-1371.
V.I. Arnold (1983) Geometrical Methods in the Theory of Ordinary Differential Equations. Grundlehren Math. Wiss., 250, Springer.
J. Guckenheimer and P. Holmes (1983) Nonlinear Oscillations, Dynamical Systems and Bifurcations of Vector Fields. Springer.
Yu.A. Kuznetsov (2004) Elements of Applied Bifurcation Theory, Springer, 3rd edition.
S. Newhouse, J. Palis and F. Takens (1983) Bifurcations and stability of families of diffeomorphisms. Inst. Hautes Études Sci. Publ. Math. 57, 5-71.
L.P. Shilnikov, A.L. Shilnikov, D.V. Turaev, and L.O. Chua (2001) Methods of Qualitative Theory in Nonlinear Dynamics. Part II, World Scientific.

Internal references

Yuri A. Kuznetsov (2006) Andronov-Hopf bifurcation. Scholarpedia, 1(10):1858.
Jack Carr (2006) Center manifold. Scholarpedia, 1(12):1826.
Willy Govaerts, Yuri A. Kuznetsov, Bart Sautois (2006) MATCONT. Scholarpedia, 1(9):1375.
James Murdock (2006) Normal forms. Scholarpedia, 1(10):1902.
Jeff Moehlis, Kresimir Josic, Eric T. Shea-Brown (2006) Periodic orbit. Scholarpedia, 1(7):1358.
Philip Holmes and Eric T. Shea-Brown (2006) Stability. Scholarpedia, 1(10):1838.

See Also
Andronov-Hopf bifurcation, Bifurcations, Center manifold theorem, Dynamical systems, Equilibria, MATCONT, Ordinary differential equations, Saddle-node bifurcation for maps, Saddle-node homoclinic bifurcation, XPPAUT |
\({ \sqrt{5x^2+11}} = x + 5\)
How would I solve this equation algebraically? \(\large{ \sqrt{5x^2+11}} = x + 5\)
\(\begin{array}{|rcll|} \hline \sqrt{5x^2+11} &=& x + 5 \quad & | \quad \text{square both sides} \\ 5x^2+11 &=& (x + 5)^2 \\ 5x^2+11 &=& x^2+10x+25 \\ 4x^2-10x-14 &=& 0 \quad & | \quad : 2 \\ 2x^2-5x-7 &=& 0 \\ \\ x &=& \dfrac{ 5\pm\sqrt{25-4\cdot 2 \cdot (-7) } } {2\cdot 2 } \\\\ x &=& \dfrac{ 5\pm\sqrt{25+56 } } { 4 } \\\\ x &=& \dfrac{ 5\pm\sqrt{81} } { 4 } \\\\ x &=& \dfrac{ 5\pm 9 } { 4 } \\\\ x_1 &=& \dfrac{ 5 + 9 } { 4 } \\\\ x_1 &=& \dfrac{ 14 } { 4 } \\\\ \mathbf{ x_1 } &\mathbf{ =}& \mathbf{ \dfrac{ 7 } { 2 }} \\\\ x_2 &=& \dfrac{ 5 - 9 } { 4 } \\\\ x_2 &=& -\dfrac{ 4 } { 4 } \\\\ \mathbf{ x_2} &\mathbf{ =}& \mathbf{ -1} \\ \hline \end{array}\)
\({ \sqrt{5x^2+11}} = x + 5\)... you have this equation.
Square both sides to give \(5x^2 + 11 = (x+5)^2\).
Expanding the right-hand side, we get \(5x^2 + 11 = x^2 + 10x + 25\).
This simplifies to \(4x^2 - 10x - 14 = 0\).
Factorizing this equation, we get \((2x-7)(2x+2) = 0\). (You can test that out to see if it matches the previous equation of \(4x^2 - 10x - 14 = 0\).)
Since \((2x-7)(2x+2)\) has to equal zero, at least one of the brackets needs to equal zero. Therefore, we end up with two solutions.
\(x\) is either \(\boxed{\frac{7}{2}}\), or \(x\) is \(\boxed{-1}\); both check out in the original equation, as verified below. |
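Since both sides were squared, each root should be checked in the original equation (squaring can introduce extraneous solutions); here neither is extraneous: $$x=\tfrac{7}{2}:\ \sqrt{5\cdot\tfrac{49}{4}+11}=\sqrt{\tfrac{289}{4}}=\tfrac{17}{2}=\tfrac{7}{2}+5, \qquad x=-1:\ \sqrt{5+11}=4=-1+5.$$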
Epidemics
This article will briefly cover: observed patterns of epidemics of infectious diseases and the underlying processes thought to generate those patterns.
Macroparasites
Macroparasites are, as the name suggests, much larger parasitic organisms such as helminths, flukes or other worms. The mathematical modelling of this type of infection requires several extra components to be included in the standard model. Firstly, worm burden (the number of worms within a host) must be modelled, as it is important for both transmission and severity of the disease. Secondly, transmission is more complex: many macroparasites have free-living stages outside the host, and hosts can be infected multiple times, which will increase the burden. Finally, worm burden shows a highly over-dispersed distribution, with a few individuals having incredibly high numbers of parasites. We will not discuss these elements in any more detail, but note that all of these factors have parallels in the study of microparasites.
Multi-strain Infections

Stochastic Dynamics
Stochastic dynamics refers to the situation where noise is incorporated into the basic dynamics. Two main approaches have been explored, one is to add suitably scaled noise to the standard differential equations, the other is to adopt an individual-level, event-based approach. We discuss both of these below:
Stochastic Differential Equations
Consider the following modification to the standard differential equation for the level of infection within the population: \[ \frac{dI}{dt} \; = \; \beta S I \, - \, \gamma I \, + \, f(S,I)\xi \] where \(\xi(t)\) is a source of Gaussian noise (see Stochastic Dynamical Systems for more information about this type of system). The function \(f\) determines how the noise scales with the population dynamics, several common forms are:
\(f\) is a constant. This is the simplest form of noise that can be investigated, but it can lead to unrealistic negative population values.
\(f(S,I) = \sqrt{\beta S I + \gamma I}\ .\) This form of noise mimics event-driven (demographic) stochasticity, and is derived from the fact that events form a Poisson process. The variance of the noise for each event (infection and recovery) is equal to the mean rate, and the variances of the two noise terms add together.
\(f(S,I) = \sqrt{ v_{\beta} (S I)^2 + v_{\gamma} (I)^2} \ .\) This form of noise corresponds to external parameter noise, due to factors external to the model such as temperature or humidity. Here, \(v_{\beta}\) and \(v_{\gamma}\) measure the variance in the transmission rate (\(\beta\)) and recovery rate (\(\gamma\)).
\(f(S,I) = \sqrt{k_1 (\beta S)^2 I + k_2 (\beta I)^2 S + k_3 (\gamma)^2 I}\ .\) This final formulation is due to heterogeneities in the parameters associated with individuals, and the variation in the mean values. The parameters \(k_1\ ,\) \(k_2\) and \(k_3\) measure the heterogeneity in infectivity, susceptibility and recovery rates.
Which of these forms of noise is used (or even a combination of them is possible) is largely dependent on the problem being modelled, and the expected source of noise.
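A minimal Euler-Maruyama sketch of the demographic-noise variant above (parameter values are my own illustrative choices, and \(S\) is held constant as an early-epidemic simplification purely to keep the example short):

```python
import numpy as np

# Euler-Maruyama for dI = (beta*S*I - gamma*I) dt + sqrt(beta*S*I + gamma*I) dW,
# with S held constant (early-epidemic assumption) purely for brevity.
rng = np.random.default_rng(1)
beta, gamma = 0.3, 0.1         # illustrative rates
S, I = 0.99, 0.01              # fractions of the population
dt, steps = 0.01, 5000
scale = 1.0 / np.sqrt(10_000)  # demographic noise shrinks with population size N

path = np.empty(steps)
for k in range(steps):
    drift = beta * S * I - gamma * I
    diffusion = np.sqrt(max(beta * S * I + gamma * I, 0.0)) * scale
    I = max(I + drift * dt + diffusion * np.sqrt(dt) * rng.standard_normal(), 0.0)
    path[k] = I
print(path[-1])
```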
Event-based Stochasticity
In recent years event-based (demographic) stochasticity has been used increasingly by applied researchers. This dominance of event-based stochasticity over stochastic differential equations can be attributed to one main factor: event-based models respect the individual nature of the population, such that the population is composed of an integer number of susceptible and infected individuals. For the simple SIR model two events can occur:
Infection: this occurs with a probabilistic rate \(\beta X Y/N\) and leads to integer changes in the population variables \(X \to X - 1\) and \(Y \to Y + 1\ .\)
Recovery: this occurs with a probabilistic rate \(\gamma Y\) and again leads to integer changes in the population variables \(Y \to Y - 1\) and \(Z \to Z + 1\ .\)
We can therefore calculate the rate that any event occurs as simply the sum of all the individual rates: \[\beta X Y / N + \gamma Y\ .\] Making the standard assumption that events are Poisson, the time to the next event (whatever it might be) is then: \[ \delta t = \frac{ - \log(RAND_1) }{\beta X Y / N + \gamma Y} \ ,\] where \(RAND_1\) is a random number, uniformly distributed between 0 and 1. This calculation tells us the time to the next event, but not which event it is; for the simple SIR model with two events this can be decided with relative ease. Picking a second random number, \(RAND_2\) (again uniformly distributed between 0 and 1), the event is infection if \[RAND_2 < \frac{\beta X Y / N}{\beta X Y / N + \gamma Y}\ ,\] in which case \(Y\) is increased by one and \(X\) is decreased by one. Otherwise the event is recovery, and we decrease \(Y\) by one and increase \(Z\) by one. This process can now be repeated event after event, advancing the time at each step. As such this provides a fast and robust means of simulating the dynamics of infection in a population, and accounts for the chance nature of transmission and recovery.
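This is exactly the Gillespie (direct-method) algorithm; a compact Python sketch under the notation above (parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
beta, gamma = 0.5, 0.1          # illustrative transmission and recovery rates
X, Y, Z = 990, 10, 0            # susceptible, infected, recovered
N = X + Y + Z
t, t_end = 0.0, 100.0

while Y > 0 and t < t_end:
    rate_inf = beta * X * Y / N          # infection rate
    rate_rec = gamma * Y                 # recovery rate
    total = rate_inf + rate_rec
    t += -np.log(rng.random()) / total   # time to next event (exponential)
    if rng.random() < rate_inf / total:  # which event?
        X, Y = X - 1, Y + 1              # infection
    else:
        Y, Z = Y - 1, Z + 1              # recovery
print(t, X, Y, Z)
```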
Implications of Stochasticity

Including stochasticity in models has several effects not evident in deterministic models:
1) The most obvious is that variability enters the simulations. Therefore, where the deterministic models may predict an equilibrium prevalence of infection, stochastic models will display variation in both the number of susceptible and infected individuals -- what is more, there is generally a negative covariance between infected and susceptible populations. This variability means that multiple stochastic simulations are generally required, and their results must be treated statistically.
2) The second element is that population size is important; we can no longer simply rescale the parameters. It is generally found that large populations experience relatively less stochasticity and are therefore closer to the deterministic ideal.
3) Given that the deterministic approach to equilibrium is oscillatory, stochasticity can often excite oscillations. These oscillations, which can be substantial in small populations, can be distinguished from seasonally forced oscillations by the fact that they are not locked to any regular (multi-)annual cycle.
4) Finally, given that transmission and recovery are stochastic processes, there is always the chance that the infection will die out -- even though it is deterministically stable. This phenomenon has been well documented for measles, where populations below 300,000 suffer regular stochastic extinctions (followed by re-introduction of infection from elsewhere), while in populations above 500,000 measles appears to persist.
It is still an open challenge to understand how the stochastic nature of epidemics can be utilised in the more efficient control of infections. |
I'm trying to make a "triplot" to illustrate Bayesian inference (so I'd like to have prior, likelihood and posterior in the same picture). For the likelihood I'm using \begin{equation}\label{eq:lik}f(y|\tau) = \prod_{i=1}^{n}\sqrt{\frac{\tau}{2\pi}}\exp\left(-\frac{\tau(y_{i}-\mu)^{2}}{2}\right) = \left(\frac{\tau}{2\pi}\right)^{n/2}\exp\left(-\frac{\tau\sum_{i=1}^{n}(y_{i}-\mu)^{2}}{2}\right),\end{equation} i.e., the Gaussian distribution with known mean $\mu$ and unknown precision $\tau=\frac{1}{\sigma^{2}}$, where ${\sigma}^{2}$ is the unknown variance.
If we choose the prior on $\tau$ to be a gamma distribution \begin{equation} p(\tau)=\Gamma(\alpha,\beta), \end{equation} with shape $\alpha$ and rate $\beta$, we can use conjugacy to find the form of the posterior. The posterior distribution in our example is the following gamma distribution \begin{equation} p(\tau|y)=\Gamma\left(\alpha+\frac{n}{2},\beta+\frac{\sum_{i=1}^{n}(y_{i}-\mu)^{2}}{2}\right). \end{equation} I plotted it with $\alpha=1.5$, $\beta=10.0$, $\mu=0.0$, $n=5$, and the random sample $(y_i)_{i=1,\dots,5}$ was generated assuming the "true" value $\tau=0.25$. As you can see, the likelihood is not visible. I tried different configurations of parameters but nothing helped. I'd be very grateful for any ideas on how to choose $\alpha$ and $\beta$ so that I get a nice illustration, something like this.
The method is very easy: I'll rescale the likelihood, which is fine because it doesn't have to integrate to 1. |
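A minimal matplotlib sketch of such a triplot, rescaling the likelihood to the height of the posterior (parameter values as in the question; the rescaling constant is arbitrary):

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

alpha, beta, mu, n = 1.5, 10.0, 0.0, 5
rng = np.random.default_rng(42)
y = rng.normal(mu, 1.0 / np.sqrt(0.25), size=n)   # true precision tau = 0.25
ss = np.sum((y - mu) ** 2)

tau = np.linspace(1e-3, 2.0, 500)
prior = stats.gamma.pdf(tau, a=alpha, scale=1.0 / beta)
lik = tau ** (n / 2) * np.exp(-tau * ss / 2.0)     # unnormalized likelihood
post = stats.gamma.pdf(tau, a=alpha + n / 2, scale=1.0 / (beta + ss / 2.0))
lik_scaled = lik * post.max() / lik.max()          # rescale for visibility

plt.plot(tau, prior, label="prior")
plt.plot(tau, lik_scaled, label="likelihood (rescaled)")
plt.plot(tau, post, label="posterior")
plt.xlabel(r"$\tau$"); plt.legend(); plt.show()
```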
Let $X_n(\Bbb{Z})$ be the simplicial complex whose vertex set is $\Bbb{Z}$ and such that the vertices $v_0,...,v_k$ span a $k$-simplex if and only if $|v_i-v_j| \le n$ for every $i,j$. Prove that $X_n(\Bbb{Z})$ is $n$-dimensional...
no kidding, my maths is foundations (basic logic but not pedantic), calc 1 which I'm pretty used to working with, analytic geometry and basic linear algebra (by basic I mean matrices and systems of equations only)
Anyway, I would assume if removing $ fixes it, then you probably have an open math expression somewhere before it, meaning you didn't close it with $ earlier. What's the full expression you're trying to get? If it's just the frac, then your code should be fine
This is my first time chatting here in Math Stack Exchange. So I am not sure if this is frowned upon but just a quick question: I am trying to prove that a proper subgroup of $\mathbb{Z}^n$ is isomorphic to $\mathbb{Z}^k$, where $k \le n$. So we must have $rank(A) = rank(\mathbb{Z}^k)$ , right?
For four proper fractions $a, b, c, d$ X writes $a+ b + c >3(abc)^{1/3}$. Y also added that $a + b + c> 3(abcd)^{1/3}$. Z says that the above inequalities hold only if a, b,c are positive. (a) Both X and Y are right but not Z. (b) Only Z is right (c) Only X is right (d) Neither of them is absolutely right.
Yes, @TedShifrin the order of $GL(2,p)$ is $p(p+1)(p-1)^2$. But I found this in a classification of groups of order $p^2qr$. There the order of $H$ should be $qr$ and it is presented as $G = C_{p}^2 \rtimes H$. I want to know whether we can determine the structure of $H$.
Like can we think $H=C_q \times C_r$ or something like that from the given data?
When we say it embeds into $GL(2,p)$ does that mean we can say $H=C_q \times C_r$? or $H=C_q \rtimes C_r$? or should we consider all possibilities?
When considering finite groups $G$ of order, $|G|=p^2qr$, where $p,q,r$ are distinct primes, let $F$ be a Fitting subgroup of $G$. Then $F$ and $G/F$ are both non-trivial and $G/F$ acts faithfully on $\bar{F}:=F/ \phi(F)$ so that no non-trivial normal subgroup of $G/F$ stabilizes a series through $\bar{F}$.
And when $|F|=pr$. In this case $\phi(F)=1$ and $Aut(F)=C_{p-1} \times C_{r-1}$. Thus $G/F$ is abelian and $G/F \cong C_{p} \times C_{q}$.
In this case how can I write G using notations/symbols?
Is it like $G \cong (C_{p} \times C_{r}) \rtimes (C_{p} \times C_{q})$?
First question: Then it is, $G= F \rtimes (C_p \times C_q)$. But how do we write $F$ ? Do we have to think of all the possibilities of $F$ of order $pr$ and write as $G= (C_p \times C_r) \rtimes (C_p \times C_q)$ or $G= (C_p \rtimes C_r) \rtimes (C_p \times C_q)$ etc.?
As a second case we can consider the case where $C_q$ acts trivially on $C_p$. So then how to write $G$ using notations?
There it is also mentioned that we can distinguish between 2 cases. First, suppose that the Sylow $q$-subgroup of $G/F$ acts non-trivially on the Sylow $p$-subgroup of $F$. Then $q\mid(p-1)$ and $G$ splits over $F$. Thus the group has the form $F \rtimes G/F$.
A presentation $\langle S\mid R\rangle$ is a Dehn presentation if for some $n\in\Bbb N$ there are words $u_1,\cdots,u_n$ and $v_1,\cdots, v_n$ such that $R=\{u_iv_i^{-1}\}$, $|u_i|>|v_i|$ and for all words $w$ in $(S\cup S^{-1})^\ast$ representing the trivial element of the group one of the $u_i$ is a subword of $w$
If you have such a presentation there's a trivial algorithm to solve the word problem: Take a word $w$, check if it has $u_i$ as a subword, in that case replace it by $v_i$, keep doing so until you hit the trivial word or find no $u_i$ as a subword
There is good motivation for such a definition here
So I don't know how to do it precisely for hyperbolic groups, but if $S$ is a surface of genus $g \geq 2$, to get a geodesic representative for a class $[\alpha] \in \pi_1(S)$ where $\alpha$ is an embedded loop, one lifts it to $\widetilde{\alpha}$ in $\Bbb H^2$ by the locally isometric universal covering, and then the deck transformation corresponding to $[\alpha]$ is an isometry of $\Bbb H^2$ which preserves the embedded arc $\widetilde{\alpha}$
It has to be an isometry fixing a geodesic $\gamma$ with endpoints at the boundary being the same as the endpoints of $\widetilde{\alpha}$.
Consider the homotopy of $\widetilde{\alpha}$ to $\gamma$ by straightline homotopy, but straightlines being the hyperbolic geodesics. This is $\pi_1(S)$-equivariant, so projects to a homotopy of $\alpha$ and the image of $\gamma$ (which is a geodesic in $S$) downstairs, and you have your desired representative
I don't know how to interpret this coarsely in $\pi_1(S)$
@anakhro Well, they print in bulk, and on really cheap paper, almost transparent and very thin, and offset machine is really cheaper per page than a printer, you know, but you should be printing in bulk, its all economy of scale.
@ParasKhosla Yes, I am Indian, and trying to get into some good masters program in math.
Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants. The first branch of algebraic graph theory involves the study of graphs in connection with linear algebra. Especially, it studies the spectrum of the adjacency matrix, or the Lap...
I can probably guess that they are using symmetries and permutation groups on graphs in this course.
For example, orbits and studying the automorphism groups of graphs.
@anakhro I have heard really good things about Palka. Also, if you do not mind a little sacrifice of rigor (e.g. counterclockwise orientation based on your intuition rather than on winding numbers, etc.), Howie's Complex Analysis is good. It is teeming with typos here and there, but you will be fine, I think. Also, this book contains all the solutions in an appendix!
Got a simple question: I gotta find kernel of linear transformation $F(P)=xP^{''}(x) + (x+1)P^{'''}(x)$ where $F: \mathbb{R}_3[x] \to \mathbb{R}_3[x]$, so I think it would be just $\ker (F) = \{ ax+b : a,b \in \mathbb{R} \}$ since only polynomials of degree at most 1 would give zero polynomial in this case
@chandx you're looking for all the $G = P''$ such that $xG + (x+1)G' = 0$; if $G \neq 0$ you can solve the DE to get $G'/G = -x/(x+1) = -1 + 1/(x+1) \implies \ln G = -x + \ln(1+x) + C \implies G = C'(1+x)e^{-x}$, which is obviously not a polynomial, so $G = 0$ and thus $P = ax + b$
could you suppose that $\operatorname{deg} P \geq 2$ and show that you wouldn't have nonzero polynomials? Sure. |
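A quick sympy sanity check of that kernel computation (illustrative; it builds the matrix of $F$ on the basis $\{1, x, x^2, x^3\}$ of $\mathbb{R}_3[x]$ and reads off its null space):

```python
import sympy as sp

x = sp.symbols('x')
basis = [sp.Integer(1), x, x**2, x**3]
F = lambda P: x * sp.diff(P, x, 2) + (x + 1) * sp.diff(P, x, 3)

# Matrix of F in the monomial basis of R_3[x]: column j = coefficients of F(basis[j])
cols = []
for b in basis:
    p = sp.Poly(F(b), x)
    cols.append([p.coeff_monomial(x**k) for k in range(4)])
M = sp.Matrix(cols).T

print(M.nullspace())   # spans (1,0,0,0) and (0,1,0,0): ker F = {a*x + b}
```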
I have a set of $n\times n$ matrices with entries on $\mathbb{F}_2$, given by $$\mathcal{A}=\left\{A\in\mathcal{M}_{n\times n}(\mathbb{F}_2):A= \left( \begin{array}{ccc} I_{k_1} & 0 & 0 \\ 0 & J & 0 \\ 0 & 0 & I_{k_3} \end{array} \right) \right\} $$ where $J=\left( \begin{array}{cccc} 0 & \ldots & 0 & 1 \\ 0 & \ldots & 1 & 0\\ \vdots & \vdots & \vdots & \vdots\\ 1 & \ldots & 0 & 0\end{array} \right)_{b\times b}$ is a square matrix of size $b$ and $I_{k_1},I_{k_3}$ are identity matrices such that $k_1+k_3+b=n$
This set acts as a set of block flips on the column vectors $X$ such that $X^T=(x_1,x_2,\ldots,x_n)\in\mathbb{F}_2^n$.
I want to break the set of those vectors into some kind of equivalence classes, such that every reversed vector $Y$ can be identified with exactly one of those equivalence classes.
For example, in the case of $n=5$ and $b=3$ the vectors with Hamming weight $1$ are divided in $2$ sets: $\left\{(1,0,0,0,0),(0,0,1,0,0),(0,0,0,0,1)\right\}$ and $\left\{(0,1,0,0,0),(0,0,0,1,0)\right\}$
Because no matter which $A\in\mathcal{A}$ you multiply any of these vectors by, they will stay in their sets.
I want to get this for any case but I don't know how to express it. |
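A small brute-force sketch that computes these classes for the $n=5$, $b=3$ example (my own code; it takes the orbits of $\mathbb{F}_2^5$ under the whole family $\mathcal{A}$, i.e. reversal of the length-$b$ middle block at every valid offset $k_1$, assuming $k_1$ ranges over all values with $k_1+b+k_3=n$):

```python
from itertools import product

n, b = 5, 3

def flips(v):
    """All images of v under A in the family: reverse a length-b window."""
    out = []
    for k1 in range(n - b + 1):               # k1 + b + k3 = n
        w = list(v)
        w[k1:k1 + b] = reversed(w[k1:k1 + b])
        out.append(tuple(w))
    return out

seen, orbits = set(), []
for v in product((0, 1), repeat=n):
    if v in seen:
        continue
    orbit, stack = {v}, [v]
    while stack:                              # BFS over repeated flips
        for w in flips(stack.pop()):
            if w not in orbit:
                orbit.add(w); stack.append(w)
    seen |= orbit
    orbits.append(sorted(orbit))

for o in orbits:                              # the weight-1 orbits match the example
    print(o)
```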
My questions are about worldline path integrals from the book Gauge Fields and Strings of Polyakov.
On page 153, chapter 9, he says
>Let us begin with the following path integral
\begin{align} &\mathscr{H}(x,y)[h(\tau)]=\int_{x}^{y}\mathscr{D}x(\tau)\delta(\dot{x}^{2}(\tau)-h(\tau)) \nonumber\\ &=\int\mathscr{D}\lambda(\tau)\exp\left(i\int_{0}^{1}d\tau\lambda(\tau)h(\tau)\right)\int_{x}^{y}\mathscr{D}x(\tau)\exp\left(-i\int_{0}^{1}d\tau\lambda(\tau)\dot{x}^{2}(\tau)\right) \tag{9.8}\label{9.8} \end{align} where $h(\tau)$ is the worldline metric tensor.
>The action in (9.8) is invariant under reparametrizations, if we transform:
\begin{align} x(\tau)&\rightarrow x(f(\tau)) \nonumber\\ h(\tau)&\rightarrow\left(\frac{df}{d\tau}\right)^{2}h(f(\tau)) \tag{9.9}\label{9.9}\\ \lambda(\tau)&\rightarrow\left(\frac{df}{d\tau}\right)^{-1}\lambda(f(\tau)) \end{align}
Polyakov continued with the following statement.
>It is convenient to introduce instead of the worldline vector $\lambda(\tau)$, the worldline scalar Lagrange multiplier $\alpha(\tau)$:
\begin{align} \lambda(\tau)&\equiv\alpha(\tau)h(\tau)^{-1/2} \nonumber\\ \alpha(\tau)&\rightarrow\alpha(f(\tau)) \tag{9.11} \end{align} So that: \begin{align} &\mathscr{H}(x,y)[h(\tau)] \nonumber\\ &=\int\mathscr{D}\alpha(\tau)e^{i\int_{0}^{1}d\tau\alpha(\tau)\sqrt{h(\tau)}}\int_{x}^{y}\mathscr{D}x(\tau)\exp\left(-i\int_{0}^{1}d\tau\frac{\alpha(\tau)\dot{x}^{2}(\tau)}{\sqrt{h(\tau)}}\right) \tag{9.12} \end{align}
My first question is about equation (9.12). What Polyakov did there is boldly replace the integral measure $\mathscr{D}\lambda$ by $\mathscr{D}\alpha$. Didn't he miss the Jacobian factor?
$$\mathscr{D}\lambda=\mathscr{D}\alpha\det\left(\frac{\delta\lambda}{\delta\alpha}\right)$$
My second question is the following.
He introduced another parameter $t$, called proper time, defined as
>\begin{align}
t\equiv\int_{0}^{\tau}\sqrt{h(s)}ds;\quad T\equiv t(1) \tag{9.13} \end{align} and so \begin{align} &\mathscr{H}(x,y)[h(\tau)]\equiv\mathscr{H}(x,y;T) \nonumber\\ &=\int\mathscr{D}\alpha\exp i\int_{0}^{T}\alpha(t)dt\int_{x}^{y}\mathscr{D}x\exp-i\int_{0}^{T}\alpha(t)\dot{x}^{2}(t)dt \tag{9.14} \end{align}
Can anybody tell me how he derived the equation (9.14) via using the "proper time" parameter $t$?
I also posted my question here. |
People are always fascinated by intelligent devices, and today these are software "chatbots", which are becoming more and more human-like and automated. The combination of an immediate response and a permanent connection makes them an attractive way to extend or replace web applications. The high-level diagram of a chatbot architecture is shown in Figure 1. The Natural Language Understanding component tries to determine the purpose of the user's input (intent) and useful data (entities). The chatbot also has a Dialog Manager to define the flow of conversation and the paths that the system will take. Finally, a Language Generation component generates the user output.
The main function of a chatbot is to generate an answer to the user's request. It can be a direct answer to a question, a request for missing slots, a standard fallback response or anything else. The Natural Language Understanding module detects intents (which represent the purpose of a user's input) and entities (objects that provide specific information for an intent). There are many useful NLU tools, like Dialogflow, wit.ai, LUIS or IBM Watson, that can help you classify intents and parse entities using their custom deep learning and rule-based models. They have convenient web interfaces and integrate easily with different messaging platforms.
There are four main types of dialog management systems, discussed in turn below:

Switch statements
Finite state machine
Frame-based system
Partially Observable Markov Decision Processes
Switch statements are the simplest way to implement a dialog manager. Every intent triggers a different response. The initiative of the conversation always comes from the user, and this approach is easy to design and implement. But it has significant disadvantages: it doesn't keep conversation context and can't take the initiative in the dialogue.
Finite state machine helps your bot to navigate through the conversation flow. The nodes of the graph represent system questions, the transitions between the nodes represent answers to questions, and the graph specifies all valid dialogues. FSMs are useful if you have a limited number of conversational scenarios and structured dialogue. Many commercial bots have this model under its hood, but it lacks flexibility and naturalness. So it’s problematic to develop more complex flow, adding new states and transitions between them. Finally, you can get an unmanageable diagram with a bunch of circles and elements.
In a frame (or template) based system, the user is asked questions that enable the system to fill slots in a template in order to perform a task. This model helps to provide more flexibility to the dialog control because it allows a user to input slots in a different order and combination. Google’s Dialogflow chatbot framework uses a frame-based system with the context control mechanism. Contexts represent the current state of a user’s request and allow your agent to carry information from one intent to another. So you create intents with some entities and design context chains. Dialogflow provides a simple and user-friendly interface for creating and managing your bot agents.
Partially Observable Markov Decision Processes (POMDPs) help us to combine the idea of a statistical dialogue management approach with the handling of unexpected results. A POMDP is characterized by:
a set of states $S$ the agent can be in
a set of actions $A$ the agent can take
a set of observations $O$
a matrix of transition probabilities $T$
a reward $R(a, s)$ that the agent receives for taking an action in a state
a dialogue state reward $J(s)$
a discount factor $\gamma \in \left[0,1\right]$
The classical MDP model defines rewards as a function of belief states, but a POMDP defines them as a function of the environment state $s$, because the performance of the model should be measured by what the user really wants. A distribution over possible states, called a belief state $b_t$, is maintained, where $b(s_t)$ indicates the probability of being in a particular state $s_t$. The aim of the system is then to optimize the expected dialogue reward using this adjusted reward function. The reward can be obtained using the following formula:
$$J(s_{i}) = \max_{a}\Bigg(R(s_{i}, a)+\gamma\sum_{j=1}^{N}p(s_{j}\mid s_{i}, a)\, J(s_{j})\Bigg)$$
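This is the Bellman optimality recursion, which can be solved by value iteration; a tiny illustrative sketch (the two-state, two-action numbers are my own toy inputs, not from any dialog system):

```python
import numpy as np

# Toy dialog MDP: 2 states, 2 actions.
# P[a, i, j] = p(s_j | s_i, a); R[i, a] = immediate reward.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.9

J = np.zeros(2)
for _ in range(200):                      # value iteration to a fixed point
    # Q[i, a] = R[i, a] + gamma * sum_j P[a, i, j] * J[j]
    Q = R + gamma * np.einsum('aij,j->ia', P, J)
    J_new = Q.max(axis=1)                 # J(s_i) = max_a Q[i, a]
    if np.abs(J_new - J).max() < 1e-10:
        break
    J = J_new
print(J)
```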
So, first of all, you should understand the potential users and the nature of the tasks through user surveys, research on similar systems and study of related human dialogues. Then you need to build a prototype of your conversation flow and figure out which dialogue management system is most appropriate for the task. |
You are absolutely right that the limit in which this approximation holds is
$$\beta(\epsilon - \mu) \gg 1 \,,$$
which is not trivially the 'high-temperature limit', and indeed
looks rather like the low-temperature limit. However, it also looks like the limit of large negative $\mu$. If we want to know how temperature will affect the exponent, we need to know how temperature will affect the chemical potential. To proceed, suppose we're dealing with a gas of non-interacting particles. The grand potential is, in this limit,
$$ \Phi = -k_B T \ln \mathcal{Z} = -k_B T \int_0^\infty \ln \mathcal{Z}_\epsilon \,g(\epsilon)\,\mathrm{d}\epsilon \simeq -k_B T \int_0^\infty \ln \bigg(1 + \exp(-\beta(\epsilon - \mu))\bigg)\,g(\epsilon) \,\mathrm{d}\epsilon \,,$$
where $\mathcal{Z}_\epsilon$ is the grand partition function associated with the energy level $\epsilon$ and $g(\epsilon)$ is the density of states. The integral is essentially just a sum of the partition functions due to each energy level. To get to the final expression we have assumed that we can approximate the grand partition function like so:
$$ \mathcal{Z}_\epsilon = \sum_{n} \bigg(\exp(-\beta(\epsilon - \mu))\bigg)^n \simeq 1 + \exp(-\beta(\epsilon - \mu)) \,,$$
which corresponds to the limit stated at the top. As a brief detour, if we want to find the average occupancy of the energy level $\epsilon$, we can use
$$ \langle N_\epsilon \rangle = -\left(\frac{\partial \Phi_\epsilon}{\partial \mu}\right)_{T,V} \simeq \exp(-\beta(\epsilon - \mu))\qquad \mathrm{where} \qquad \Phi_\epsilon = -k_B T \ln \mathcal{Z}_\epsilon\,,$$
which is the Maxwell-Boltzmann distribution we were expecting (in the second equality we have Taylor expanded the logarithm in accordance with $\beta(\epsilon - \mu) \gg 1$). Now the density of states for a three-dimensional gas in a box can be obtained by standard means --- I won't bother going through it here, but the end result is:
$$ \Phi = -k_B TV\left(\frac{mk_B T}{2 \pi \hbar^2}\right)^{3/2} \exp(\beta \mu) \equiv -\frac{k_B T V}{\lambda^3} \exp(\beta \mu) \,,$$
where the thermal wavelength $\lambda$ has been defined appropriately. From here we can write
$$N_\mathrm{tot} \equiv N = -\left(\frac{\partial \Phi}{\partial \mu}\right)_{T,V} = \frac{V}{\lambda^3} \exp(\beta \mu) \,,$$
and hence
$$ \boxed{\mu = k_B T \ln \left(\frac{N \lambda^3}{V}\right) \,.}$$
Now to answer your question. The condition at the top can be considered the limit of $\beta \mu$ being large and negative. We see from the above that
$$ \beta \mu = \ln\left(\frac{N \lambda^3}{V}\right) \qquad \mathrm{where} \qquad \lambda = \left( \frac{2 \pi \hbar^2}{mk_B T}\right)^{1/2} \,.$$
This quantity will be large and negative when the argument of the logarithm is small. This will be the case for a) low densities $N/V$, b) high temperatures $T$ and/or c) high-mass particles.
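As a concrete check of these trends, here is a short numerical sketch evaluating $\lambda$ and the degeneracy parameter $N\lambda^3/V$ for a nitrogen-like gas at roughly ambient conditions (the inputs are my own illustrative numbers, not from the text above):

```python
import numpy as np

hbar = 1.054571817e-34   # J s
kB = 1.380649e-23        # J / K
m = 4.65e-26             # kg, roughly an N2 molecule
T = 300.0                # K
n = 2.45e25              # m^-3, ideal-gas number density at ~1 atm, 300 K

lam = np.sqrt(2.0 * np.pi * hbar**2 / (m * kB * T))  # thermal wavelength
degeneracy = n * lam**3                              # N * lambda^3 / V

print(f"lambda = {lam:.3e} m")
print(f"N lambda^3 / V = {degeneracy:.3e}  (<< 1, so the classical limit holds)")
```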
You should think of the underlying situation in which the classical limit holds as when the number of thermally accessible states vastly exceeds the number of particles. This is because under such circumstances we can ignore multiple occupation of energy levels, which means we can ignore the fine details of particle indistinguishability. In the
canonical distribution, when the number of states vastly exceeds the number of particles we can account for indistinguishability with a simple (but approximate) correction of division of the partition function by $N!$ --- we must do this even in the classical case, otherwise we run into all sorts of problems like the Gibbs paradox. However, when states start to become multiply occupied, this simple prescription fails, and we need to be more sophisticated in our consideration of particle indistinguishability.
If you imagine our gas particles as being wavepackets with a width of $\lambda$ as defined above, then you can think of each particle as occupying a volume $\lambda^3$. This has a nice interpretation --- the quantity $N \lambda^3/V$ that appears in the expression for the chemical potential can be thought of as the fraction of space occupied by the particles. The classical limit corresponds to this quantity being small, so that it's very unlikely for two particles to be in the same place --- i.e., be in the same state (here I'm essentially considering the states of our system to be position eigenstates rather than the usual energy eigenstates). If this quantity becomes larger, we start to get 'multiple-occupation', and so we imagine our classical approximation will break down. This is consistent: when $N \lambda^3 /V \sim 1$, the argument of the logarithm in the chemical potential is no longer large and negative, and so indeed the condition at the very top of this page breaks down.
Hope this helps! |
Some implementations of MFCC apply sinusoidal liftering as the final step in calculations of MFCC. It is claimed that speech recognition can be significantly improved. For instance, if $\text{MFCC}_i$ is a cepstral coefficient, and $w$ is a lifter, then $$\widehat{\text{MFCC}_i}=w_i\text{MFCC}_i$$
is a liftered cepstral coefficient, where $w_i$ for sinusoidal liftering is defined as:
$$w_i=1+\frac{D}{2}\sin\Big(\frac{\pi i}{D}\Big)$$
When I look at the equation, I understand that the sinusoidal term has its maximum in the middle and approaches zero at the edges (where $w_i \approx 1$). Therefore the cepstral vector's first and last coefficients are left essentially unchanged, while the middle coefficients are amplified.
Why is liftering applied and how does it improve the speech recognition? |
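A minimal sketch of applying the sinusoidal lifter to a frame of MFCCs (illustrative code of mine; $D = 22$ is a common choice in HTK-style pipelines):

```python
import numpy as np

def sin_lifter(mfcc, D=22):
    """Apply the sinusoidal lifter w_i = 1 + (D/2) * sin(pi * i / D)."""
    i = np.arange(mfcc.shape[-1])
    w = 1.0 + (D / 2.0) * np.sin(np.pi * i / D)
    return mfcc * w

frame = np.random.default_rng(0).normal(size=13)   # a fake 13-coefficient frame
print(sin_lifter(frame))
```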
Well consider an orbit - I'm trying to calculate the exact time spent in the shadow of the body you orbit around. An explanation of "shadow" (sun is positioned to the far left):
For a circular orbit this is quite easy: one just calculates the orbit radius and solves for the angle using a simple sine ($T$ is the orbital period):
$$R_{earth} = r \cdot\sin(\theta)$$ $$t = T \cdot \frac{\theta}{\pi}$$
(notice division by $\pi$ since $\theta$ represents half the time in shadow.)
However, this is for the specific case of a circular orbit. I'm wondering how to do it for a (highly) eccentric orbit.
The simple equation above becomes slightly more complicated, since $r$ is no longer constant: $$r = a \frac{1-e^2}{1 + e \cdot \cos(\theta)}$$ For a point around the periapsis, filling that into the equation above results (Wolfram Alpha) in something I do not particularly like. Solving it at an arbitrary point, it becomes even worse: $$r = a \frac{1-e^2}{1 + e \cdot \cos(\theta_{avg} \pm \theta)}$$
Once I have the true anomalies ($\theta$) I could use some straight forward solution: Eccentric anomaly -> mean anomaly -> time
Before I start on this, a question pops up: does the time in shadow actually depend on the position (true anomaly/radius)? When the orbit passes close to the planet, the planet's shadow covers a larger angle of the orbit; however, due to Kepler's second law, an object also moves faster at this point.
Specifically does Kepler's second law of motion prove that the time in shadow is independent on the mean true anomaly of the shadow? Kepler's law:
$$\frac{dA}{dt} = \tfrac{1}{2} r^2 \frac{d\theta}{dt}$$
I have a feeling that, through Kepler's law above, the problem could be simplified a lot.
Now, once I know the position of the longest shadow time, I could solve the above equations (wondering whether I should try to fill in the equations or solve them numerically).
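A sketch of how one might set this up numerically (my own, under simplifying assumptions: a cylindrical shadow of radius equal to the planet's radius, the shadow axis along +x, and periapsis on the anti-sun side; all names and numbers are mine):

import numpy as np
from scipy.optimize import brentq

a, e = 10_000e3, 0.3                 # semi-major axis [m] and eccentricity (example values)
R = 6_371e3                          # planet radius [m]
mu = 3.986e14                        # Earth's gravitational parameter [m^3/s^2]
T = 2 * np.pi * np.sqrt(a**3 / mu)   # orbital period

def r(theta):                        # conic equation for the orbit radius
    return a * (1 - e**2) / (1 + e * np.cos(theta))

def f(theta):                        # zero where the orbit pierces the shadow cylinder
    return r(theta) * np.sin(theta) - R

# With periapsis on the anti-sun side the shadow is symmetric about theta = 0,
# so the exit anomaly lies in (0, pi/2) and the entry anomaly is its negative.
th_exit = brentq(f, 1e-9, np.pi / 2)

def time_from_periapsis(theta):      # true anomaly -> time via Kepler's equation
    E = 2 * np.arctan2(np.sqrt(1 - e) * np.sin(theta / 2),
                       np.sqrt(1 + e) * np.cos(theta / 2))
    M = E - e * np.sin(E)
    return M / (2 * np.pi) * T

print(2 * time_from_periapsis(th_exit))  # time in shadow [s]
|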
Digital barrier options pricing: an improved Monte Carlo algorithm

Abstract
A new Monte Carlo method is presented to compute the prices of digital barrier options on stocks. The main idea of the new approach is to use an exceedance probability and uniformly distributed random numbers in order to efficiently estimate the first hitting time of barriers. It is numerically shown that the answer of this method is closer to the exact value and the first hitting time error of the modified Monte Carlo method decreases much faster than of the standard Monte Carlo methods.
Keywords: Digital option, Double barrier, Monte Carlo simulation, Uniform distribution

Introduction
Derivative securities have witnessed incredible innovation over the past years. In particular, path-dependent options have been successful, and many of them include barrier features to reduce the cost of hedging [4, 8, 22]. For these derivatives, exact valuation expressions are seldom available, so one resorts to simulation. In this manuscript a new Monte Carlo method is proposed to compute the prices of digital barrier options efficiently, based on an exceedance probability.
i-th binary or nothing and then obtain the pricing formulae. In addition, Ballestra [3] considered the problem of pricing vanilla and digital options under the Black–Scholes model, and showed that, if the payoff functions are dealt with properly, then errors close to the machine precision are obtained in only some hundredths of a second.
Barrier options are similar to vanilla options except that the option is knocked out or in if the underlying asset price hits the barrier price B before the expiration date. Barrier options have been traded in the OTC market since 1967, and nowadays they are the most popular class of exotic options. A step further along the option evolution path is to combine barrier and binary options to obtain binary barrier options and binary double-barrier options. Accordingly, it is quite important to develop accurate and efficient methods to evaluate digital barrier option prices in financial derivative markets.
Most research done to date has focused on option pricing with various methods. For example, Mehrdoust [17] proposed an efficient algorithm for pricing arithmetic Asian options based on the AV and MCV procedures; Jerbi et al. [13] calculated the conditional expectation using the Malliavin approach and showed that, with this formula, the American option under the J-process can be priced using Monte Carlo simulation. In addition, Zhang et al. [23] presented the total least squares quasi-Monte Carlo approach for valuing American barrier options; Jasra and Del Moral [12] provided a review and development of sequential Monte Carlo (SMC) methods for option pricing; and Kim et al. [15] considered Heston's stochastic volatility model and derived exact analytic expressions for the prices of fixed-strike and floating-strike geometric Asian options with continuously sampled averages.
The Monte Carlo method is a very popular and robust numerical method, since it is not only easily extended to multiple underlying assets but also straightforward to code. On the other hand, one of the main drawbacks of the Monte Carlo method is slow convergence: the statistical error is of order \(O(\frac{1}{\sqrt{M}})\) with M simulations. In particular, for continuously monitored barrier options, the hitting time error is of order \(O(\frac{1}{\sqrt{N}})\) with N time steps, see [7], while European vanilla options have no time discretization error. In this study, to efficiently reduce the hitting time error near the barrier price, inspired by [16], we suggest using, at each finite time step, a uniformly distributed random variable and a conditional exceedance probability to correctly check whether the continuous underlying asset price hits the barrier or not. Numerical results show that the new Monte Carlo method converges much faster than the standard Monte Carlo method [18]. The idea of using an exceedance probability for stopped diffusions is well known in the physics community [11, 16].
The outline of the paper is as follows: in the "Digital options" section, we introduce digital options and their pricing formulas and estimate them using standard Monte Carlo. In the "Modified Monte Carlo algorithm" section, we propose the new Monte Carlo method based on a uniformly distributed random variable and the conditional exceedance probability. In the "Digital barrier options" section, we present numerical results for digital barrier options with one underlying asset and compare the accuracy and efficiency of the standard and new Monte Carlo methods. In the "Double-barrier digital options" section, we present numerical results for pricing double-barrier digital options and examine the efficiency of the new Monte Carlo method. Finally, we summarize our conclusions and give some directions for future work.
Digital options
The purpose of this section is to introduce two main types of digital options and express their pricing formula.
Cash-or-nothing options

These pay a fixed cash amount x at expiration if the option is in-the-money. The payoff from a call is 0 if \(S_{\text {T}}\le K\) and x if \(S_{\text {T}}>K,\) and the payoff from a put is 0 if \(S_{\text {T}}\ge K\) and x if \(S_{\text {T}}<K,\) where \(S_{\text {T}}\) and K are the stock price at maturity and the strike price, respectively. Cash-or-nothing call and put options can be valued using the formula described by Rubinstein and Reiner [21], where S is the price of the underlying asset, r is a risk-free interest rate, \(\sigma\) is the volatility, T is the exercise date and \(N(\cdot )\) denotes the cumulative distribution function of the standard normal distribution. For example, the value of a cash-or-nothing put option with 9 months to expiration, futures price 100, strike price 80, cash payout 10, risk-free interest rate 6 % per year, and volatility 35 % per year is \(p=10e^{-0.06\times 0.75}N(-0.5846)=2.6710.\) A standard Monte Carlo simulation of this example in Matlab gives 2.23.

Asset-or-nothing options

These pay the value of the underlying asset instead of a fixed cash amount.

Modified Monte Carlo algorithm

We estimate the price Q by a sample average of M simulations. Partition [0, T] into N uniform subintervals \(0 = t_0 < t_1 <\cdots < t_N = T,\) and compute \(S_{n+1} := S_{\text {t}_{n+1}}\) at each time step for \(n = 0,\ldots,N-1.\) Each simulated path must be checked against the barrier B; the idea is to use an exceedance probability at each time step. Let \(p_n\) denote the probability that a diffusion process X exits the domain D at some \(t\in [t_n, t_{n+1}],\) given the values \(X_n\) and \(X_{n+1}.\) In the one-dimensional half-interval case, \(D=(-\infty , B)\) for a constant B, the probability \(p_n\) has a simple expression using the law of the Brownian bridge, see [14]. Here R denotes a prescribed cash rebate. As an approximation of the first hitting time \(\tau,\) we may choose the midpoint \(\widetilde{\tau }=(t_n+t_{n+1})/2.\)
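To make the idea concrete, here is a minimal sketch of an exceedance-probability Monte Carlo step (my own code, assuming geometric Brownian motion, an up-and-out cash-or-nothing payoff, and the standard Brownian-bridge crossing probability applied to log-prices; it is not the authors' implementation):

import numpy as np

def digital_up_out_mc(S0=100.0, K=100.0, B=120.0, x=10.0, r=0.05,
                      sigma=0.2, T=0.25, N=50, M=100_000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / N
    logS = np.full(M, np.log(S0))
    alive = np.ones(M, dtype=bool)            # paths that have not hit the barrier
    b = np.log(B)
    for _ in range(N):
        z = rng.standard_normal(M)
        logS_new = logS + (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
        # Brownian-bridge probability that the continuous path crossed b in-step
        p = np.exp(-2.0 * (b - logS) * (b - logS_new) / (sigma**2 * dt))
        p = np.where((logS < b) & (logS_new < b), p, 1.0)
        alive &= rng.uniform(size=M) >= p     # U < p means the barrier was hit
        logS = logS_new
    payoff = x * alive * (np.exp(logS) > K)
    return np.exp(-r * T) * payoff.mean()

print(digital_up_out_mc())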
Digital barrier options

1. Cash-or-nothing barrier options. These pay out either a prespecified cash amount or nothing, depending on whether the asset price has hit the barrier.
2. Asset-or-nothing barrier options. These pay out the value of the asset or nothing, depending on whether the asset price has hit the barrier.
Example 1

Double-barrier digital options

A knock-in option pays a predefined cash amount x at maturity if the asset price touches the lower barrier L or the upper barrier U before expiration; it pays off zero if neither barrier is hit during the lifetime of the option. Similarly, a knock-out pays a predefined cash amount x at maturity if neither the lower nor the upper barrier is hit during the lifetime of the option; if the underlying asset price touches either barrier during the option's life, the option vanishes. Using a Fourier sine series, one can derive a closed-form expression for the risk-neutral value of a double-barrier cash-or-nothing knock-out option.

Example 2
Table 1 gives values of knock-out double-barrier binary options for different choices of barriers and volatilities, together with their values simulated with \(M=10{,}000\) paths using the new Monte Carlo method in Matlab. Fig. 4 compares the exact values and the new Monte Carlo values for this example with \(\sigma =0.1,\) and Fig. 5 compares the standard MC and improved MC errors.
Table 1: Comparison of numerical approximations using the improved MC for Example 2. Double-barrier binary option parameters: \(S=100,\,T=0.25,\,r=0.05,\,x=10.\)

| L | U | Exact (\(\sigma=0.1\)) | New MC (\(\sigma=0.1\)) | Exact (\(\sigma=0.2\)) | New MC (\(\sigma=0.2\)) |
|----|-----|-------|-------|-------|-------|
| 80 | 120 | 9.873 | 9.864 | 8.977 | 8.898 |
| 85 | 115 | 9.815 | 9.770 | 7.268 | 7.250 |
| 90 | 110 | 8.977 | 8.825 | 3.685 | 3.622 |
| 95 | 105 | 3.667 | 3.598 | 0.091 | 0.081 |
Conclusion
In this paper, we have proposed a new efficient Monte Carlo approach for estimating the values of digital barrier and double-barrier options by correctly computing the first hitting time of the barrier price by the underlying asset. The approximation error of the new method converges much faster than that of the standard Monte Carlo method. Future work will be devoted to extending this idea to more general diffusion problems, theoretically studying the rate of convergence of the approximation errors, and also pricing digital barrier options by other methods such as SMC and comparing the results.
Acknowledgments
The authors are grateful to the referees for their careful reading, insightful comments and helpful suggestions which have led to improvement of the paper.
References

1. Appolloni, E., Ligori, A.: Efficient tree methods for pricing digital barrier options (2014). arxiv.org/pdf/1401.2900
6. Cox, J.C., Rubinstein, M.: Options Markets. Prentice Hall, New Jersey (1985)
8. Haug, E.G.: Option Pricing Formulas. McGraw-Hill Companies, New York (2007)
21. Rubinstein, M., Reiner, E.: Unscrambling the binary code. Risk Mag. 4, 75–83 (1991)
22. Wilmott, P.: Derivatives: The Theory and Practice of Financial Engineering. Wiley, New York (1998) |
According to Ortin, Gravity & Strings (Chapter 2), we define
$$\dfrac{\delta S}{\delta \phi} \equiv \dfrac{\partial \mathcal{L}}{\partial \phi} - \partial_\mu \bigg(\dfrac{\partial \mathcal{L}}{\partial(\partial_\mu\phi)}\bigg)$$
Now, as you can easily see,
$$\delta S = \displaystyle\int d^4x \bigg(\dfrac{\partial \mathcal{L}}{\partial \phi} \delta \phi + \dfrac{\partial \mathcal{L}}{\partial(\partial_\mu\phi)} \delta (\partial_\mu \phi)\bigg) $$
which, after integrating the second term by parts and discarding the boundary term, becomes
$$= \displaystyle\int d^4x \bigg(\dfrac{\partial \mathcal{L}}{\partial \phi} - \partial_\mu \bigg(\dfrac{\partial \mathcal{L}}{\partial(\partial_\mu\phi)}\bigg)\bigg) \delta \phi $$
$$=\displaystyle\int d^4x \dfrac{\delta S}{\delta \phi} \delta \phi$$
Edit From this answer, I realize that we don't really need to define an explicit $\dfrac{\delta S}{\delta \phi}$. Rather, we can derive it from the definition of $S$ as it should be the case. Based on the answer linked above, I derive here $\dfrac{\delta S}{\delta \phi}$ in the simplest manner apparent to me. The key is to realize that there is an implicit label associated with $\phi$ which decides what $\phi$ we are talking about - the label is the coordinates $x$.
$$S=\displaystyle\int d^4x \mathcal{L}(\phi(x), \partial_\mu \phi(x)) $$$$\delta S = \displaystyle\int d^4x \bigg(\dfrac{\partial \mathcal{L}}{\partial \phi (x)} \delta \phi(x) + \dfrac{\partial \mathcal{L}}{\partial(\partial_\mu\phi(x))} \delta (\partial_\mu \phi(x))\bigg) $$
$$= \displaystyle\int d^4x \bigg(\dfrac{\partial \mathcal{L}}{\partial \phi(x)} - \partial_\mu \bigg(\dfrac{\partial \mathcal{L}}{\partial(\partial_\mu\phi(x))}\bigg)\bigg) \delta \phi(x) $$
Now, since $x$ is already used in the integration and it runs over all of spacetime, we should use a different variable for coordinate label when we want to define $\dfrac{\delta S}{\delta \phi}$ which is the variation of action with respect to variation in the field at any point in spacetime. Let's use $y$ to denote coordinates of this point where we vary the field and notice the variation in action.
$$\dfrac{\delta S}{\delta \phi} = \dfrac{\delta S}{\delta \phi (y)}$$$$= \displaystyle\int d^4x \bigg(\dfrac{\partial \mathcal{L}}{\partial \phi(x)} - \partial_\mu \bigg(\dfrac{\partial \mathcal{L}}{\partial(\partial_\mu\phi(x))}\bigg)\bigg) \dfrac{\delta \phi(x)}{\delta \phi(y)} $$$$= \displaystyle\int d^4x \bigg(\dfrac{\partial \mathcal{L}}{\partial \phi(x)} - \partial_\mu \bigg(\dfrac{\partial \mathcal{L}}{\partial(\partial_\mu\phi(x))}\bigg)\bigg) \delta (x -y) $$$$=\dfrac{\partial \mathcal{L}}{\partial \phi(y)} - \partial_\mu \bigg(\dfrac{\partial \mathcal{L}}{\partial(\partial_\mu\phi(y))}\bigg)$$
So, $$\dfrac{\delta S}{\delta \phi(y)} = \dfrac{\partial \mathcal{L}}{\partial \phi} - \partial_\mu \bigg(\dfrac{\partial \mathcal{L}}{\partial(\partial_\mu\phi)}\bigg)$$And, thus, $$\delta S = \displaystyle\int d^4x \bigg(\dfrac{\partial \mathcal{L}}{\partial \phi(x)} - \partial_\mu \bigg(\dfrac{\partial \mathcal{L}}{\partial(\partial_\mu\phi(x))}\bigg)\bigg) \delta \phi(x) $$$$=\displaystyle\int d^4x \dfrac{\delta S}{\delta \phi(x)} \delta \phi(x)$$
Or, concisely, ${\delta S} = \displaystyle \int d^4x \dfrac{\delta S}{\delta \phi} \delta \phi$.
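As a quick sanity check of this formula (a standard textbook example, not taken from Ortin), consider the free scalar field:
$$\mathcal{L} = \tfrac{1}{2}\,\partial_\mu\phi\,\partial^\mu\phi - \tfrac{1}{2}m^2\phi^2 \quad\Longrightarrow\quad \dfrac{\partial \mathcal{L}}{\partial \phi} = -m^2\phi,\qquad \dfrac{\partial \mathcal{L}}{\partial(\partial_\mu\phi)} = \partial^\mu\phi,$$
so that
$$\dfrac{\delta S}{\delta \phi} = -m^2\phi - \partial_\mu\partial^\mu\phi = -(\Box + m^2)\phi,$$
and $\delta S/\delta\phi = 0$ is precisely the Klein–Gordon equation. |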
It is possible to approximate a solution to this problem for most parametric trajectories. The idea is the following: if you zoom deep enough on a curve, you cannot tell the curve itself from its tangent at that point.
By making this assumption, there is no need to precompute anything more than two vectors (three for cubic Bezier curves,
etc.).
So for a curve \$M(t)\$ we compute its tangent vector \$\frac{dM}{dt}\$ at point \$t\$. The norm of this vector is \$\lVert \frac{dM}{dt} \rVert\$ and thus the distance traveled over a duration \$\Delta t\$ can be approximated as \$\lVert \frac{dM}{dt} \rVert \Delta t \$. It follows that a distance \$L\$ is traveled over a duration \$L \div \lVert \frac{dM}{dt} \rVert\$.
Application: quadratic Bezier curve
If the control points of the Bezier curve are \$A\$, \$B\$ and \$C\$, the trajectory can be expressed as:
$$\begin{align}M(t) &= (1-t)^2A + 2t(1-t)B + t^2C \\ &= t^2(A - 2B + C) + t(-2A + 2B) + A\end{align}$$
So the derivative is:
$$\frac{dM}{dt} = t(2A - 4B + 2C) + (-2A + 2B)$$
You just need to store vectors \$\vec v_1 = 2A - 4B + 2C\$ and \$\vec v_2 = -2A + 2B\$ somewhere. Then, for a given \$t\$, if you want to advance of a length \$L\$, you do:
$$t = t + {L \over length(t \cdot \vec v_1 + \vec v_2)}$$
Cubic Bezier curves
The same reasoning applies to a curve with four control points \$A\$, \$B\$, \$C\$ and \$D\$:
$$\begin{align}M(t) &= (1-t)^3A + 3t(1-t)^2B + 3t^2(1-t)C + t^3D \\ &= t^3(-A + 3B - 3C + D) + t^2(3A - 6B + 3C) + t(-3A + 3B) + A\end{align}$$
The derivative is:
$$\frac{dM}{dt} = t^2(-3A + 9B - 9C + 3D) + t(6A - 12B + 6C) + (-3A + 3B)$$
We precompute the three vectors:
$$\begin{align}\vec v_1 &= -3A + 9B - 9C + 3D \\\vec v_2 &= 6A - 12B + 6C \\\vec v_3 &= -3A + 3B\end{align}$$
and the final formula is:
$$t = t + {L \over length(t^2 \cdot \vec v_1 + t \cdot \vec v_2 + \vec v_3)}$$
Accuracy issues
If you are running at a reasonable framerate, \$L\$ (which should be computed according to the frame duration) will be sufficiently small for the approximation to work.
However, you may experience inaccuracies in extreme cases. If \$L\$ is too large, you can do the computation piecewise, for instance using 10 parts:
for (int i = 0; i < 10; i++)
t = t + (L / 10) / length(t * v1 + v2);
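Here is the same scheme as a small self-contained program (my own translation of the pseudocode into Python; the control points are arbitrary examples):

import numpy as np

def advance(t, L, v1, v2, parts=10):
    """Advance t so the curve point moves roughly L units of arc length."""
    for _ in range(parts):                    # piecewise stepping for accuracy
        speed = np.linalg.norm(t * v1 + v2)   # |dM/dt| at the current t
        t += (L / parts) / speed
    return t

A, B, C = np.array([0.0, 0.0]), np.array([1.0, 2.0]), np.array([3.0, 0.0])
v1 = 2*A - 4*B + 2*C                          # precomputed for the quadratic case
v2 = -2*A + 2*B
print(advance(0.0, 0.5, v1, v2))
|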
When dividing by any quantity, or when canceling out two quantities in a ratio (for example, canceling $x$ and $x$ to find that $\frac xx=1$), you need to be aware of what assumptions you have to make so that the division or canceling makes sense, and remember that those assumptions apply to any results you get.
For example, given $$x^2 = x,$$ if $x \neq 0$ then you can divide by $x$. One way to keep track of your assumptions is to work out different "cases" of the solution:
Case $x = 0$: Then the equation becomes $0^2 = 0$, which is true,so $x = 0$ is a solution.
Case $x \neq 0$: Then since $x \neq 0$, you can divide by $x$, so $x^2 = x$ implies $x = 1$. This is consistent with the assumptions (namely that $x\neq 0$), so $x = 1$ is a solution.
If you look at all possible cases (and here, since $x$ either is or isn't zero, we have already covered all possible cases), the complete solution set consists of all solutions you find in all those cases; that is, for this problem the solution set is $\{0,1\}$ (in other words, $x=0$ or $x=1$).
Now consider the example from the book:$$\frac {x+2}{x-2} - \frac1x = \frac{2}{x(x-2)}.$$
Two equal quantities multiplied by the same quantity are two equal quantities (even if we multiply both by zero!), so we know that$$x(x-2)\frac {x+2}{x-2} - x(x-2)\frac 1x = x(x-2)\frac{2}{x(x-2)}.$$
The tricky part is what comes next. It looks like the multiplier $x-2$ on the right-hand side should cancel the divisor $x-2$, that is, $\frac {x-2}{x-2} = 1.$ But this is true only if $x-2 \neq 0$; if $x - 2 = 0$ then $\frac {x-2}{x-2} = \frac 00,$ which is undefined. Similarly, we can only cancel $x$ and $x$ on the right-hand side if $x \neq 0$. So again we have two cases:
Case $x - 2 = 0$ or $x = 0$: In this case, the term on the right-hand side of the equation evaluates to $\frac 20$, which is undefined, so there are no solutions in this case.
Case $x - 2 \neq 0$ and $x \neq 0$: In this case we can cancel $x-2$ with $x-2$ and cancel $x$ with $x$, so we have
$$\begin{eqnarray}x(x+2) - (x-2) &=& 2.\\x^2 + x &=& 0.\\x(x+1) &=& 0.\end{eqnarray}$$
Now, remembering that we are still working out the case where $x - 2 \neq 0$ and $x \neq 0$, we can divide both sides by $x$:$$\begin{eqnarray}x+1 &=& 0. \\x &=& -1.\end{eqnarray}$$
So $x = -1$ is the only solution in this case. (Alternatively, if we used $x(x+1)=0$ to conclude that "$x=0$ or $x+1=0$", we would still be working under the assumption that $x\neq 0$, and from those facts we could conclude simply that $x+1=0$.)
Since there were no solutions from the other case, $x = -1$ is altogether the only solution of the equation.
Note that this method did not identify $x=0$ as a solution. That's because I do not accept that it is a solution: if you set $x=0$ in $\frac {x+2}{x-2} - \frac1x = \frac{2}{x(x-2)},$ you get a term of $\frac 10$ on the left and you get $\frac 20$ on the right, and both of those are undefined. I hope that (eventually) the book agrees with this. |
I want to use the integral test to show that $ \sum_{n=3}^{\infty} {1 \over {n\cdot \log{n} \cdot \log{\log {n}}}} $ diverges.
First, I let $ f(x) = {1 \over {x\cdot \log{x} \cdot \log{\log {x}}}} $
I have learned that in order to use the Integral test, $f(x)$ must be continuous, positive, and decreasing on the interval $ [1,\infty) $
However, when I drew the graph of $f(x)$ using an online graphing calculator, the graph only seemed to satisfy the three conditions of the Integral test when $ x \geq 10 $, i.e. on the interval $ [10, \infty) $
Doesn't that mean that I cannot use the Integral test? Other than that, I was also confused about why the series starts from $n = 3$ when the term is also defined at $n = 2$ (although it is negative there, unlike at $n = 3$). |
How many different 5-digit number combinations can be created using five 7s and three 2s?
Available digits: 77777 (five 7s) and 222 (three 2s).
So my first thought was to use this formula:
$\displaystyle n_{1} * n_{2} * ... * n_{n}$
So that I get:
$\displaystyle 5! = 5 * 4 * 3 * 2 * 1$
but this is probably just calculating how many ways I can arrange all the 7s, and since they are all 7s it does not tell me much.
Then I would do the same thing for the three 2s.
And that can't work. I thought that adding the two results might give me the answer, but I don't think it does, since that would only tell me how many arrangements are possible for the five 7s and the three 2s separately.
So I thought the problem might want me to use this formula:
$\displaystyle \frac{n!}{(n - k)!}$
So how do you actually begin to solve this?
September 17th, 2018, 09:07 AM
# 2
Senior Member
Joined: Feb 2010
Posts: 711
Thanks: 147
You have five 7's and three 2's and wish to construct a five-digit number.
One way is to directly count cases:
five 7's and zero 2's for a total of $\displaystyle \dfrac{5!}{5!0!}=1$ way
four 7's and one 2 for a total of $\displaystyle \dfrac{5!}{4!1!}=5$ ways
three 7's and two 2's for a total of $\displaystyle \dfrac{5!}{3!2!}=10$ ways
two 7's and three 2's for a total of $\displaystyle \dfrac{5!}{2!3!}=10$ ways
Adding gives $\displaystyle 1+5+10+10=26$ ways.
The only other way I know is to use an exponential generating function. Go look it up.
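A brute-force check of the count (my own addition):

from itertools import permutations

digits = '7' * 5 + '2' * 3                  # five 7s and three 2s
print(len(set(permutations(digits, 5))))    # -> 26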
|
I was wondering if it is important in Quantum Mechanics to deal with operators that have an orthonormal basis of eigenstates? Imagine that we would have an operator (finite-dimensional) acting on a spin system that has real eigenvalues, but its eigenvectors are not perpendicular to each other. Is there any reason why such an operator cannot describe an actual physical quantity?
If two states are orthogonal, this means that $\langle \psi | \phi \rangle = 0$. Physically this means that if a system is in state $|\psi\rangle$ then there is no possibility that we will find the system in state $|\phi\rangle$ on measurement, and vice versa. In other words, the two states are in some sense mutually exclusive. This is an important property for operators because it means that the results of a measurement are unambiguous: a state with a well defined momentum $p_1$, i.e. an eigenstate of the momentum operator, cannot also have a momentum $p_2 \ne p_1$. Observables having an orthogonal (and complete) set of eigenstates is therefore a requirement in order for the theory to make physical sense (or at least for repeated measurements to give consistent results, as is experimentally observed).
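To see that real eigenvalues alone are not enough, here is a small numerical illustration (the example matrix is mine): a non-symmetric real matrix with real eigenvalues whose eigenvectors are not orthogonal.

import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 2.0]])         # not symmetric, eigenvalues 1 and 2 (both real)
vals, vecs = np.linalg.eig(A)
print(vals)                        # [1. 2.]
print(vecs[:, 0] @ vecs[:, 1])     # ~0.707, not 0: eigenvectors are not orthogonal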
If the eigenstates of your operator do not form an orthogonal set, then your operator is not a Hermitian operator or, in other words, is not an observable.
Actually, non-Hermitian operators "appear" all the time, but if you investigate the decoherence mechanism you may note that these operators don't directly affect the classical realm. This is because you can't construct consistent histories upon questions about these non-Hermitian operators. More concretely, if you have the coherent state $|\alpha \rangle$, i.e.
$$ a|\alpha \rangle=\alpha |\alpha \rangle $$
where $a$ is the annihilation operator, then asking for the probability that the system has some value of $\alpha$ doesn't make sense, because we can always represent one $\alpha$ state as a superposition of other $\alpha$'s.
A measurement extracts information from a quantum system that can be copied. Not all of the information in a quantum system can be copied as a result of the no-cloning theorem. The information that can be copied has to be described by an operator that remains unchanged under the unitary operator representing the measurement. And the only operators that respect this requirement are normal operators.
A similar argument can be given in terms of state vectors, showing that only outcomes in orthogonal state vectors can be recorded. |
Help me, please.
Radicals: Simplify or Reduce
√3/8
Just to make sure.... Is this the expression in your question:
\(\sqrt\frac38\)
??
Okay....
\(\ \quad\sqrt{\frac38}\\ =\\ \quad\sqrt{\frac38\cdot\frac88}\\ =\\ \quad\sqrt{\frac{24}{8^2}}\\ =\\ \quad\frac{\sqrt{24}}{\sqrt{8^2}}\\ =\\ \quad\frac{\sqrt{24}}{8}\\ =\\ \quad\frac{\sqrt{4\cdot6}}{8}\\ =\\ \quad\frac{\sqrt4\cdot\sqrt6}{8}\\ =\\ \quad\frac{2\cdot\sqrt6}{8}\\ =\\ \quad\frac{\sqrt6}{4}\)
BTW...
Whenever you use a radical, that is, this symbol: √
it is a good idea to always include parentheses after it, like this: √( )
and then put all of the numbers that go under the radical in the parentheses.
So for this expression it would be √( 3/8 ) .
That way, there is no confusion.
Notice that the expression √3/8 can also be interpreted as \(\frac{\sqrt3}{8}\)
(which would actually be the correct interpretation in this case). |
Here are a couple of "proofs" of Stirling's formula. They are quite elegant (in my opinion), but not rigorous. On could write down a real proof from these, but as they rely on some hidden machinery, the result would be quite heavy.
1) A probabilistic non-proof
We start from the expression $e^{-n} n^n/n!$, of which we want to find an equivalent. Let us fix $n$, and let $Y$ be a random variable with a Poisson distribution of parameter $n$. By definition, for any integer $k$, we have $\mathbb{P} (Y=k) = e^{-n} n^k/k!$. If we take $k=n$, we get $\mathbb{P} (Y=n) = e^{-n} n^n/n!$. The sum of $n$ independent random variables with a Poisson distribution of parameter $1$ has a Poisson distribution of parameter $n$; so let us take a sequence $(X_k)$ of i.i.d. random variables with a Poisson distribution of parameter $1$. Note that $\mathbb{E} (X_0) = 1$. We have:
$$\mathbb{P} \left( \sum_{k=0}^{n-1} (X_k - \mathbb{E} (X_k)) = 0 \right) = \frac{e^{-n} n^n}{n!}.$$
In other words, $e^{-n} n^n/n!$ is the probability that a centered random walk with Poissonian steps of parameter $1$ is at $0$ at time $n$. We have tools to estimate such quantities, namely local central limit theorems. They assert that:
$$\frac{e^{-n} n^n}{n!} = \mathbb{P} \left( \sum_{k=0}^{n-1} (X_k - \mathbb{E} (X_k)) = 0 \right) \sim \frac{1}{\sqrt{2 \pi n \text{Var} (X_0)}},$$
a formula which is closely linked with the Gauss integral and diffusion processes. Since the variance of $X_0$ is $1$, we get:
$$n! \sim \sqrt{2 \pi n} n^n e^{-n}.$$
The catch is of course that the local central limit theorems are in no way elementary results (except for the simple random walks, and that is if you already know Stirling's formula...). The methods I know to prove such results involve Tauberian theorems and residue analysis. In some way, this probabilistic stuff is a way to disguise more classical approaches (in my defense, if all you have is a hammer, everything looks like a nail).
I think one could get higher order terms for Stirling's formula by computing more precise asymptotics for Green's function in $0$, which requires the knowledge of higher moments for the Poisson distribution. Note that the generating function for a Poisson distribution of parameter $1$ is:
$$\mathbb{E} (e^{t X_0}) = e^{e^t-1},$$
and this "exponential of exponential" will appear again in a moment.
2) A generating functions non-proof
If you want to apply analytic methods to problems related to sequences, generating functions are a very useful tool. Alas, the series $\sum_{n \geq 0} n! z^n$ is not convergent for non-zero values of $z$. Instead, we shall work with:
$$e^z = \sum_{n \geq 0} \frac{z^n}{n!};$$
we are lucky, as this generating function is well-known. Let $\gamma$ be a simple loop around $0$ in the complex plane, oriented counter-clockwise. Let us fix some non-negative integer $n$. By Cauchy's formula,
$$\frac{1}{n!} = \frac{1}{n!} \frac{\text{d} e^z}{\text{d} z}_{|_0} = \frac{1}{2 \pi i} \oint_\gamma \frac{e^z}{z^{n+1}} \text{d} z.$$
We choose for $\gamma$ the circle of radius $n$ around $0$, with its natural parametrization $z (t) = n e^{it}$:
$$\frac{1}{n!} = \frac{1}{2 \pi i} \int_{- \pi}^{\pi} \frac{e^{n e^{it}}}{n^{n+1} e^{(n+1)it}} nie^{it} \text{d} t = \frac{e^n}{2 \pi n^n} \int_{- \pi}^{\pi} e^{n (e^{it}-it-1)} \text{d} t = \frac{e^n}{2 \pi \sqrt{n} n^n} \int_{- \pi \sqrt{n}}^{\pi \sqrt{n}} e^{n \left(e^{\frac{i\theta}{\sqrt{n}}}-\frac{i\theta}{\sqrt{n}}-1\right)} \text{d} \theta,$$
where $\theta =t \sqrt{n}$. Hitherto, we have an exact formula; note that we meet again the "exponential of exponential". Now comes the leap of faith. For $x$ close to $0$, the value of $e^x-x-1$ is roughly $x^2/2$. Moreover, the bounds of the integral get close to $- \infty$ and $ \infty$. Hence, for large $n$, we have:
$$\frac{1}{n!} \sim \frac{e^n}{2 \pi \sqrt{n} n^n} \int_{- \infty}^{+ \infty} e^{\frac{n}{2} \left(\frac{i\theta}{\sqrt{n}}\right)^2} \text{d} \theta = \frac{e^n}{2 \pi \sqrt{n} n^n} \int_{- \infty}^{+ \infty} e^{-\frac{\theta^2}{2}} \text{d} \theta = \frac{e^n}{\sqrt{2 \pi n} n^n}.$$
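As a quick numerical sanity check of the final equivalence (my own addition, not part of either derivation):

import math

for n in (10, 100, 1000):
    log_stirling = 0.5 * math.log(2 * math.pi * n) + n * math.log(n) - n
    log_fact = math.lgamma(n + 1)                 # log(n!)
    print(n, math.exp(log_fact - log_stirling))   # ratio tends to 1 as n grows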
Of course, it is not at all trivial to prove that the equivalents we took are rigorous. Indeed, if one applies this method to bad generating functions (e.g. $(1-z)^{-1}$), one can get false results. However, this can be done for some admissible functions, and the exponential is one of them.
I have learnt this method thanks to Don Zagier. It is also explained in Generatingfunctionology, Chapter $5$ (III), where the author credits Hayman. The original reference seems to be A generalisation of Stirling's formula (Hayman, 1956), but I can't read it now.
One of the advantages of this method is that it becomes very easy to get the next terms in the asymptotic expansion of $n!$: you just have to expand the function $e^x-x-1$ further at $0$. Another advantage is that it is quite general, as it can be applied to many other sequences. |
EDIT: Reread it some hours later and found my error. I figured I was doing something wrong. I was applying operations out of order when calculating the conditional probability. It is 1/2 in each case. I'll leave the commentary untouched.
I think the answer is Yes, or at least I'm not entirely convinced the answer is no.
I will provide an example below, but I don't find it very convincing since I just ad-hoc approached it, and don't have a nice "overarching" principle to take away from this. Basically, consider this more as a comment to get discussion going, than a full fledged answer.
The other answers show that the expectation value of measuring the system to be in a particular state is the same. Basically the density matrix of the ensemble is the same, but the first machine has only two possible outputs while the second has an infinite number. Focusing immediately on the ensemble average seems to be throwing away any possibility we have of distinguishing them.
Here's an attempt at distinguishing them:
Machine 1 possible output, only pure states
$|0\rangle$ $|1\rangle$
Machine 2 possible output, any state
$\frac{1}{\sqrt{2}}(|0\rangle + p |1\rangle)$ where $p = e^{i\theta}$ with $0 \le \theta < 2\pi$
Now take some other qubit B (it doesn't matter here physically what it is) of prepared state $\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)$ to get the product states:
machine 1
$\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)|0\rangle = \frac{1}{\sqrt{2}}(|00\rangle +|10\rangle)$ $\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)|1\rangle = \frac{1}{\sqrt{2}}(|01\rangle +|11\rangle)$
machine 2
$\frac{1}{2}(|0\rangle+|1\rangle)(|0\rangle + p |1\rangle) = \frac{1}{2}(|00\rangle+p|01\rangle + |10\rangle + p |11\rangle)$
Now let's introduce an interaction which can cause some interference:
$|00\rangle \rightarrow |00\rangle$ $|01\rangle \rightarrow \frac{1}{\sqrt{2}}(|01\rangle + |10\rangle)$ $|10\rangle \rightarrow \frac{1}{\sqrt{2}}(|01\rangle - |10\rangle)$ $|11\rangle \rightarrow |11\rangle$
now we have
machine 1 $\frac{1}{\sqrt{2}}(|00\rangle +|10\rangle) \rightarrow \frac{1}{\sqrt{2}}|00\rangle + \frac{1}{2}(|01\rangle-|10\rangle)$ $\frac{1}{\sqrt{2}}(|01\rangle +|11\rangle) \rightarrow \frac{1}{\sqrt{2}}|11\rangle + \frac{1}{2}(|01\rangle+|10\rangle)$ machine 2 $\frac{1}{2}(|00\rangle+p|01\rangle + |10\rangle + p |11\rangle) \rightarrow \frac{1}{2}(|00\rangle + p\frac{1}{\sqrt{2}}(|01\rangle+|10\rangle) + \frac{1}{\sqrt{2}}(|01\rangle-|10\rangle) + p |11\rangle)$ $ \ \ \ = \frac{1}{2}(|00\rangle + (p+1)\frac{1}{\sqrt{2}}|01\rangle+(p-1)\frac{1}{\sqrt{2}}|10\rangle + p |11\rangle)$
Now let's do two measurements: first measure the state of B to be 0 or 1, then measure the state of the atom to be 0 or 1.
Conditional probability on the ensemble:
Given that we find B in state 1, what is the probability of finding the atom in state 0? machine 1 (1/2) x 1 + (1/2) x (1/3) = 4/6
machine 2
$\frac{\frac{1}{2}(p-1)^2}{\frac{1}{2}(p-1)^2 + p^2} = \frac{\frac{1}{2}(2 - 2\cos\theta)}{\frac{1}{2}(2 - 2\cos\theta) + 1} = \frac{1 - \cos\theta}{2 - \cos\theta}$
Now averaging over $\theta$
$\mathrm{Prob} = \frac{1}{2\pi}\int_0^{2\pi}\frac{1 - \cos\theta}{2 - \cos\theta} d\theta = 1 - \frac{1}{\sqrt{3}}$
Now, it is quite possible I've made a mistake here. But my main point is that the other answers seem to be throwing away the useful information to obtain solely an average of the initial output states. As the answers stand now, they do not mathematically convince me that we can never obtain an effect by adding interactions and multiple measurements with conditional probability or maybe 'weak' measurements, since individually the states have much different density matrices. Hopefully I didn't make a mistake above, but even if I did, I'd still very much like to hear more in the other answers beyond what is currently written. This is a fascinating question, so I'm quite interested in discussing this further. |
In deriving the half-angle formulas, my textbook first says: "Let's take the following identities:"
$$\cos^2\left(\frac a2\right)+\sin^2\left(\frac a2\right)=1;$$
$$\cos^2\left(\frac a2\right)-\sin^2\left(\frac a2\right)=\cos(a);$$
These identities I know. But then the textbook says "through addition and subtraction, we respectively arrive at:"
$$2\cos^2\left(\frac a2\right)=1+\cos(a)$$ $$2\sin^2\left(\frac a2\right)=1-\cos(a)$$
I failed to catch what exactly is added and what is substracted to arrive from the first two formulas to the second pair. Give me a hint, please. |
Update, trying to explain this in a better way:
I mean how to find the result without a calculator.
Base 2 Log 16 = 4: simple to figure out: 2 . 2 . 2 . 2
what about
Base 2 Log 18 = ??
First, you extract out the integer part of the answer; that is, for $\log_2 10$, we have $$ \log_2 10 = 3+\log_2 \frac{10}8 = 3+\log_2 \left(1+\frac14\right) $$ Now, we need a good approximation for the remaining logarithm. A straight-forward method to calculate this is to use the Taylor expansion of $\log(1+x)$, but we can do somewhat better using a "Pade Approximant". A neat approximant that's fairly accurate is $$ \log(1+x) \approx \frac{x(6+x)}{6+4x} $$ We still have to adjust for base, because the logarithm I've just approximated is the natural logarithm, base $e$... so we end up with $$ \log_2(1+x)\approx\frac{x(6+x)}{(6+4x)\log(2)} $$ where $\log(2)\approx 0.693$ So we work out that $$\begin{align} \log_2 10 &\approx 3+\frac{\frac14(6+\frac14)}{(6+4\frac14)\times 0.693}\\ &=3+\frac{\frac{25}{16}}{7\times 0.693}\\ &=3+\frac{25}{77.616}\\ &\approx 3+0.322098536 \approx 3.3221 \end{align}$$ As you can see, it's quite close to the right answer.
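The recipe above is easy to automate; here is a small sketch (my own transcription, not part of the answer):

def log2_approx(y):
    """Integer part by range reduction, fractional part by the Pade approximant."""
    n = 0
    while y >= 2: y /= 2; n += 1
    while y < 1:  y *= 2; n -= 1
    x = y - 1                         # now 0 <= x < 1
    ln1px = x * (6 + x) / (6 + 4*x)   # Pade approximant for ln(1+x)
    return n + ln1px / 0.693          # change of base with ln 2 ~ 0.693

print(log2_approx(10.0))              # ~3.3221 (true value 3.32193)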
Let’s calculate by hand $L = \log_2(18)$.
Let $L_0 = c_0 \in \mathbb{N}$, the biggest integer that does not exceed $L$, be our first guess. It is obvious that: $$c_0 = \lfloor \log_2(18) \rfloor = 4$$
Let $L_1 = L_0 + 2^{-1}c_{-1}$, where $c_{-1} \in \mathbb{N}$ is our second guess: we’re splitting the interval of interest $[L_0, L_0 + 2^0)$ in half and trying to guess where the answer lies (we know for sure that it must be between $4$ and $5$). The previous relation implies that:$$c_{-1} = 2(L_1 - L_0)$$Recalling that any $c_{i}$ should be an integer, we should not expect that letting $L_1 = L$ will be enough. If we are lucky then $c_{-1}$ resolves to an integer, and we’re done. If not, then we maximize $c_{-1}$ by assigning to it the greatest integer not exceeding the actual real value of that computation, which is:$$c_{-1} = \lfloor 2(\log_2(18) - c_0) \rfloor = \left\lfloor 2\log_2\left(\frac{18}{16}\right) \right\rfloor = \left\lfloor \log_2\left(\frac{81}{64} \right) \right\rfloor$$Since $1 < 81\,/\,64 < 2$ then $0 < \log_2\left(81\,/\,64\right) < 1$, giving $c_{-1} = 0$ and $L_{1} = L_0 = 4$.
Let $L_2 = L_1 + 2^{-2}c_{-2}$; then: $$c_{-2} = 4(L_2 - L_1)$$
Going on as before: $$c_{-2}= \lfloor 4(\log_2(18) - 4) \rfloor = \left\lfloor 4\log_2\left(\frac{18}{16}\right) \right\rfloor = \left\lfloor \log_2\left(\frac{6561}{4096} \right) \right\rfloor = 0$$
Evaluate $c_{-3}$ as: $$c_{-3}= \lfloor 8(\log_2(18) - 4) \rfloor = \left\lfloor 8\log_2\left(\frac{9}{8}\right) \right\rfloor = \left\lfloor \log_2\left(\frac{9^8}{8^8}\right) \right\rfloor$$
If $9^8 > 2\cdot 8^8$ then $c_{-3}$ won’t be $0$. Now: $$9^8={9^2}^4={81^2}^2 = \cdots = 43046721$$ $$2\cdot 8^8=2 \cdot {8^2}^4=2 \cdot {64^2}^2 = \cdots = 33554432$$ We were right: $9^8 > 2\cdot 8^8$ is true; it is also true that $9^8 < 4\cdot 8^8$, just by looking at the numbers. This means: $$c_{-3} = 1$$
Finally $L_3$ is: $$L_3 = c_0 + 2^{-1}c_{-1} + 2^{-2}c_{-2} + 2^{-3}c_{-3} = 4 + 0.125 = 4.125$$
This approach guarantees that $L_3 < L < L_3 + 2^{-3}$, giving an absolute error of $2^{-4} = 0.0625$ and a percent error of $1.47\,\%$ (in the worst case).
Our best guess $L^\star$ is $$L_3 + 2^{-4} = 4.15625$$ while the actual value $L$ was $\simeq 4.16992$.
The actual percent error is:$$\frac{L - L^\star}{L} \simeq 0.33\,\%$$
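The digit-by-digit scheme above can be automated by repeated squaring, since squaring the argument doubles its logarithm (my own sketch, for $x \geq 1$):

def log2_digits(x, bits=10):
    n = 0
    while x >= 2: x /= 2; n += 1   # integer part c_0
    result, step = float(n), 0.5
    for _ in range(bits):
        x *= x                      # squaring doubles the log
        if x >= 2:
            x /= 2
            result += step          # this binary digit of the log is 1
        step /= 2
    return result

print(log2_digits(18.0))            # 4.169921875 vs. the true 4.169925...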
Here is one way for your specific problem. As you noted, $5^2=25$ and now keep squaring, instead of just multiplying by $5$. There is also a known trick to quickly square numbers ending in $5$, i.e. if $x = a5$ (where $a$ is any positive integer) then $x^2 = (a*(a+1))25$, for example if $5^2=25$ then $a=2$ and $a(a+1)=2\cdot 3=6$ so $$5^4=25^2 = 625$$ and $62\cdot 63 = 3906$ so $$5^8 = 625^2 = 390625,$$ so the answer will be between $8$ and $9$.
Your mistake was that $5^3=125 \ne 75$...
Generally people don't calculate logs "by hand." People use a computer or a calculator or a table or (gasp) a slide rule.
If you are programming the computers or calculator there are infinite series that converge to these functions.
If you want a reasonable approximation, you play some games.
$10^3 = 1000\\ 2^{10} = 1024\\ \frac {\log 10}{\log 2} \approx \frac {10}3$
Your other one:
$\log_2 18 = 2\log_2 3 + 1\\ 3^4 = 81 \approx 80 = 8\cdot 10\\ 4 \log_2 3 \approx 3+\log_2 10 \approx 3 + \frac{10}3 = \frac{19}3\\ 2 \log_2 3 + 1 \approx \frac{19}6 + 1 = 4 \frac 16$
Which compares to $4.17$ from my calculator.
One way you can get a good approximation is to use the Taylor series expansion of the $\log$ function: $$\log(1+x)=\sum\limits_{n=1}^\infty(-1)^{n+1}\frac{x^n}{n}$$ For an alternating series like this one, you can use the fact that the truncation error satisfies $|s-s_n|<|a_{n+1}|$ to get an idea of how accurate your answer is.
You might want to memorize the following logarithms, which can then allow you to approximate most others quite well:
(In base $10$)
$\log2=0.30$
$\log3=0.48$
$\log5=0.70$
$\log7=0.85$
Now, just remember the following logarithm properties: $\log{ab}=\log{a}+\log{b}, $ $\log{\frac{a}{b}}=\log{a}-\log{b}$, and $\log_ab=\frac{\log_{10}{b}}{\log_{10}{a}}$. Using these identities, you can approximate most logarithms relatively precisely.
Example 1: $\log_2{36}=\log_2{3}+\log_2{3}+\log_2{2}+\log_2{2}=\frac{\log3}{\log2}+\frac{\log3}{\log2}+1+1=\frac{0.48}{0.30}+\frac{0.48}{0.30}+1+1=1.60+1.60+1+1=5.20$
The actual answer is around $5.17$, but this is just an approximation.
Example 2: $\log_5{100}=\log_5{5}+\log_5{5}+\log_5{2}+\log_5{2}=1+1+\frac{\log2}{\log5}+\frac{\log2}{\log5}=1+1+\frac{0.30}{0.70}+\frac{0.30}{0.70}=1+1+0.43+0.43=2.86$
The actual answer is, when rounded to $2$ decimal places, $2.86.$
Hopefully you can see, because of the logarithm properties, why $\log_{10}4$, $\log_{10}6$, $\log_{10}8$, and $\log_{10}9$ do not need to be memorized. |
The formula is $(A_r\land b)\cdot C_s=A_r\cdot (b\cdot C_s)$, where $0<r<s$. Hestenes suggests to expand $(A_rb)C_s=A_r(bC_s)$ and extract the $s-r-1$-vector part. But this method requires one to know the formula for the Clifford product of two arbitrary blades, and as far as I can see, the book only gives formulas for the Clifford product of a vector and a blade. One can prove this formula using formulas for the exterior and interior product in Grassmann algebra. But this is not the method Hestenes suggests. If somebody knows the proof that Hestenes means, please let me know too.
Here's a proof from the appendix of Geometric Algebra for Electrical Engineers, with the variable names changed to match your question. The order of the variables is reverse of your question, but you can apply the reverse operator to each side and make a change of variables to prove the formula as stated in NFCM.
Theorem (distribution of inner products). Given two blades $C_s, A_r$ with grades subject to $s > r > 0$, and a vector $\mathbf{b}$, the inner product distributes according to$$C_s \cdot \left( { \mathbf{b} \wedge A_r } \right) = \left( { C_s \cdot \mathbf{b} } \right) \cdot A_r.$$ Proof:
The proof is straightforward, relying primarily on grade selection, but also mechanical. Start by expanding the wedge and dot products within a grade selection operator $$C_s \cdot \left( { \mathbf{b} \wedge A_r } \right)={\left\langle{{C_s (\mathbf{b} \wedge A_r)}}\right\rangle}_{{s - (r + 1)}}=\frac{1}{{2}} {\left\langle{{C_s \left( {\mathbf{b} A_r + (-1)^{r} A_r \mathbf{b}} \right) }}\right\rangle}_{{s - (r + 1)}}.$$
Solving for $(-1)^{r} A_r \mathbf{b}$ in $$2 \mathbf{b} \cdot A_r = \mathbf{b} A_r - (-1)^{r} A_r \mathbf{b},$$ we have $$\begin{aligned}C_s \cdot \left( \mathbf{b} \wedge A_r \right) &= \frac{1}{2} {\left\langle C_s \mathbf{b} A_r + C_s \left( \mathbf{b} A_r - 2 \mathbf{b} \cdot A_r \right) \right\rangle}_{s - (r + 1)} \\ &= {\left\langle C_s \mathbf{b} A_r \right\rangle}_{s - (r + 1)} - {\left\langle C_s \left( \mathbf{b} \cdot A_r \right) \right\rangle}_{s - (r + 1)} \\ &= {\left\langle C_s \mathbf{b} A_r \right\rangle}_{s - (r + 1)}.\end{aligned}$$
The last term in the second step is zero since we are selecting the $s - r - 1$ grade element of a multivector with grades $s - r + 1$ and $s + r - 1$, which has no terms for $r > 0$. Now we can expand the $C_s \mathbf{b}$ multivector product, for $$C_s \cdot \left( { \mathbf{b} \wedge A_r } \right)={\left\langle{{ \left( { C_s \cdot \mathbf{b} + C_s \wedge \mathbf{b}} \right) A_r }}\right\rangle}_{{s - (r + 1)}}.$$
The latter multivector (with the wedge product factor) above has grades $s + 1 - r$ and $s + 1 + r$, so this selection operator finds nothing. This leaves $$C_s \cdot \left( { \mathbf{b} \wedge A_r } \right)={\left\langle{{\left( { C_s \cdot \mathbf{b} } \right) \cdot A_r+ \left( { C_s \cdot \mathbf{b} } \right) \wedge A_r}}\right\rangle}_{{s - (r + 1)}}.$$
The first term, with the dot product, has grade $s - 1 - r$ and is selected, whereas the wedge term has grade $s - 1 + r \ne s - r - 1$ (for $r > 0$), which completes the proof.
This is a special case of the identity $(A\wedge B)\,\lrcorner\,C=A\,\lrcorner\,(B\,\lrcorner\,C)$. It can be proven first for blades, then extended to all multivectors by linearity.
Suppose $A$ has grade $r$, $B$ has grade $k$ (in your case $k=1$), and $C$ has grade $s$. Then
$$AB=\langle AB\rangle_{r+k}$$
$$+\langle AB\rangle_{r+k-2}$$
$$+\langle AB\rangle_{r+k-4}$$
$$+\cdots$$
$$+\langle AB\rangle_{-r+k+2}$$
$$+\langle AB\rangle_{-r+k}$$
The first term is $A\wedge B$, and the last term is $A\,\lrcorner\,B$. We could also write this as $$\langle AB\rangle_{r+k}+\cdots+\langle AB\rangle_{|r-k|}=A\wedge B+\cdots+A\cdot B$$ because the last term is the same if $r\leq k$, and otherwise we're only adding $0$'s after $A\cdot B$. (For example, with $r=3$ and $k=1$, the expression is $\langle Ab\rangle_4+\langle Ab\rangle_2+\langle Ab\rangle_0+\langle Ab\rangle_{-2}=\langle Ab\rangle_4+\langle Ab\rangle_2$.)
Multiply by $C$, expanding each term in the same way, to get
$$ABC=\langle\langle AB\rangle_{r+k}C\rangle_{r+k+s}+\langle\langle AB\rangle_{r+k}C\rangle_{r+k+s-2}+\cdots+\langle\langle AB\rangle_{r+k}C\rangle_{-(r+k)+s}$$
$$+\langle\langle AB\rangle_{r+k-2}C\rangle_{r+k-2+s}+\langle\langle AB\rangle_{r+k-2}C\rangle_{r+k-2+s-2}+\cdots+\langle\langle AB\rangle_{r+k-2}C\rangle_{-(r+k-2)+s}$$
$$+\cdots$$
$$+\langle\langle AB\rangle_{-r+k}C\rangle_{-r+k+s}+\langle\langle AB\rangle_{-r+k}C\rangle_{-r+k+s-2}+\cdots+\langle\langle AB\rangle_{-r+k}C\rangle_{-(-r+k)+s}$$
Each row has successively fewer terms ($2$ less each time), so the rightmost term should be indented, and the expression above would look something like this trapezoid:
$$\begin{matrix} x&x&x&x&x&x&x&x&x \\ x&x&x&x&x&x&x \\ x&x&x&x&x \\ x&x&x \end{matrix}$$
The grade decreases going right or going down, so all the terms with a given grade are on a diagonal line. The leftmost diagonal (the highest grade) has only a single term, $\langle ABC\rangle_{r+k+s}=\langle\langle AB\rangle_{r+k}C\rangle_{r+k+s}=(A\wedge B)\wedge C$. The rightmost diagonal (the lowest grade) also has only a single term, $\langle ABC\rangle_{-(r+k)+s}=\langle\langle AB\rangle_{r+k}C\rangle_{-(r+k)+s}=(A\wedge B)\,\lrcorner\,C$.
Now write it the other way, expanding $BC$ first:
$$BC=\langle BC\rangle_{k+s}+\langle BC\rangle_{k+s-2}+\cdots+\langle BC\rangle_{-k+s}$$
and multiplying by $A$:
$$ABC=\langle A\langle BC\rangle_{k+s}\rangle_{r+k+s}+\langle A\langle BC\rangle_{k+s-2}\rangle_{r+k+s-2}+\cdots+\langle A\langle BC\rangle_{-k+s}\rangle_{r-k+s}$$
$$+\langle A\langle BC\rangle_{k+s}\rangle_{r-2+k+s}+\langle A\langle BC\rangle_{k+s-2}\rangle_{r-2+k+s-2}+\cdots+\langle A\langle BC\rangle_{-k+s}\rangle_{r-2-k+s}$$
$$+\cdots$$
$$+\langle A\langle BC\rangle_{k+s}\rangle_{-r+k+s}+\langle A\langle BC\rangle_{k+s-2}\rangle_{-r+k+s-2}+\cdots+\langle A\langle BC\rangle_{-k+s}\rangle_{-r-k+s}$$
This time, each row has the same number of terms, so it's just a rectangle instead of a trapezoid. Again, all the terms with a given grade are on a diagonal, and the leftmost diagonal has only $\langle ABC\rangle_{r+k+s}=\langle A\langle BC\rangle_{k+s}\rangle_{r+k+s}=A\wedge(B\wedge C)$. The rightmost diagonal also has only one term, $\langle ABC\rangle_{-r-k+s}=\langle A\langle BC\rangle_{-k+s}\rangle_{-r-k+s}=A\,\lrcorner\,(B\,\lrcorner\,C)$.
Comparing the two expansions, we can see both identities at once:
$$(A\wedge B)\wedge C=A\wedge(B\wedge C),\qquad(A\wedge B)\,\lrcorner\,C=A\,\lrcorner\,(B\,\lrcorner\, C).$$ |
Credit goes to Semiclassical for doing most of the work on this one. My answer is only different in that I don't directly appeal to the Poisson summation formula, but just plug the Fourier series into the integral directly; a tenable sum results. Consider the integral from Semiclassical,
$$I=\frac{1}{\sqrt{2\pi \sigma^2}}\int_{-\infty}^\infty e^{-x^2/2\sigma^2} f(x)\,dx $$with $f(x)=\arcsin\left(1-2\big|\lfloor x\rceil-x\big|\right)$. As the OP noted, Fourier expansion is a possibility since $f(x)$ is periodic with period $1$. Let us try this approach. The expansion is$$f(x) = \sum_{k=-\infty}^\infty c_k e^{2\pi i k x},$$Where the coefficients are given by the integral$$c_k = \int_{-1/2}^{1/2} f(x) e^{-i2\pi kx}\;dx.$$We can calculate the coefficents. We'll return to this. For now, insert the Fourier expansion into the original integral$$I=\frac{1}{\sqrt{2\pi \sigma^2}}\int_{-\infty}^\infty e^{-x^2/2\sigma^2} \sum_{k=-\infty}^\infty c_k e^{2\pi i k x}\,dx .$$Switching integration and summation,$$I=\frac{1}{\sqrt{2\pi \sigma^2}}\sum_{k=-\infty}^\infty c_k\int_{-\infty}^\infty e^{2\pi i k x-x^2/2\sigma^2}\,dx.$$So long as $\sigma$ is real, that integral can be calculated for any $k$ in closed form as$$\int_{-\infty}^\infty e^{2\pi i k x-x^2/2\sigma^2}\,dx = \sqrt{2 \pi\sigma^2 } e^{-2 \pi ^2 k^2 \sigma ^2},$$which leaves us with (after canceling the $\sqrt{2 \pi\sigma^2 }$)$$I=\sum_{k=-\infty}^\infty c_k e^{-2 \pi ^2 k^2 \sigma ^2}$$
Now returning to the $c_k$ coefficients, we have$$c_k = \int_{-1/2}^{1/2} \arcsin\left(1-2\big|x\big|\right) e^{-i2\pi kx}\;dx.$$I'm not clever enough to exploit the symmetries here to take a shortcut, so let's just bruteforce this by splitting the integral up into two pieces, breaking the interval up at $x=0$:$$c_k = c_k^{-} + c_k^{+}$$with$$c_k^{-} = \int_{-1/2}^{0} \arcsin\left(1-2\big|x\big|\right) e^{-i2\pi kx}\;dx.$$$$c_k^{+} = \int_{0}^{1/2} \arcsin\left(1-2\big|x\big|\right) e^{-i2\pi kx}\;dx.$$On the negative interval, $\big|x\big|=-x$. On the positive one, $\big|x\big|=x$. The two expressions are then$$c_k^{-} = \int_{-1/2}^{0} \arcsin\left(1+2x\right) e^{-i2\pi kx}\;dx.$$$$c_k^{+} = \int_{0}^{1/2} \arcsin\left(1-2x\right) e^{-i2\pi kx}\;dx.$$On the negative interval, we'll take the substitution $x=\frac{\sin(\phi)-1}{2}$, $dx=\frac{1}{2}\cos(\phi)d\phi$, and the interval maps as $[-1/2,0]\rightarrow[0,\pi/2]$.
On the positive interval, we'll use $x=\frac{1-\sin(\phi)}{2}$, $dx=-\frac{1}{2}\cos(\phi)d\phi$, and the interval maps as $[0,1/2]\rightarrow[\pi/2,0]$. Using these substitutions now give us$$c_k^{-} = \frac{1}{2}\int_{0}^{\pi/2} e^{i \pi k (\sin (\phi )-1)}\phi \cos (\phi )\;d\phi.$$$$c_k^{+} = -\frac{1}{2}\int_{\pi/2}^{0} e^{i \pi k (1-\sin (\phi ))}\phi \cos (\phi )\;d\phi.$$Doing these integrals results in expressions involving Struve functions and Bessel functions of order zero:$$c_k^{-} = -\frac{i \left(1-e^{-i \pi k} (J_0(k \pi )+i \pmb{H}_0(k \pi ))\right)}{4 k}.$$$$c_k^{+} = \frac{i \left(1-e^{i \pi k} (J_0(k \pi )-i \pmb{H}_0(k \pi ))\right)}{4 k}.$$Now summing these two results and simplifying gives the Fourier coefficients:$$c_k = c_k^{-}+c_k^{+} = \frac{J_0(k\pi)\sin(k\pi)-\pmb{H}_0(k\pi)\cos(k\pi)}{2 k}$$And because $k$ is an integer, the sines and cosines simplify, leaving$$c_k = -\frac{(-1)^k\pmb{H}_0(k\pi)}{2 k}$$Note that the $c_0$ average value coefficient is $c_0=\frac{\pi}{2}-1$, and must be calculated by taking $\lim_{k\rightarrow 0} c_k$ since the expression has a removable singularity there.
So now, if we have a way to calculate the zero-order Struve function, we have the Fourier series coefficients to plug into the summation given above for the integral $I$. Note that you can expect the summation to converge pretty fast to the "true" value, since the terms in the sum decay like a Gaussian tail in $k$. The convergence is hastened by larger values of $\sigma$ as well, so only a few terms should give accurate results as $\sigma$ is increased.
For fun, I calculated $c_k$ for $k=0,1,2\ldots 10$ to sixteen digits, about machine precision for double precision floating point numbers on most modern computers. You get$$0.5707963267948966, 0.2589127103423059, 0.03248993478100901,\\0.04215072581916797, 0.01378829513227803, 0.01838120431720631, \\0.008091939971224002, 0.01070468107470534, 0.005487219243638216, \\0.007169482259113914, 0.004040382366534786$$I've plotted the corresponding finite Fourier approximation of $f(x)$, and the true $f(x)$ below, on one period.
It can be seen that with $11$ real terms, you aren't that close to the original integrand; because of the derivative blow-ups in $f(x)$, the Fourier series converges slowly to $f$. However, as we'll see below, $11$ terms is overkill for the integral with respect to a Gaussian weight.
You can show that $c_{-k}=c_k$, so you don't have to calculate the negative coefficients. The exponential term has the same symmetry, so you only have to sum over the positive values and double the result (taking care with $c_0$). The finite truncation of our integral can be written$$I=c_0 + 2\sum_{k=1}^N c_k e^{-2 \pi^2 k^2 \sigma ^2}$$We can see that the exponential factor in the first term of the summation (for $\sigma=1$) is $e^{-2 \pi^2}\approx 3\times 10^{-9}$, and subsequent terms are even smaller. Essentially, I think unless $\sigma$ is on the order of $\frac{1}{\pi\sqrt{2}}$ or smaller, the entire sum is dominated by the first term,$$I\approx c_0 = \frac{\pi}{2}-1.$$
I have numerically calculated the integral directly and verified the result. So in the limit $\sigma \gg \frac{1}{2\sqrt{\pi}}$, $I=\frac{\pi}{2}-1$. At the other end, when $\sigma$ becomes very small, the exponential factor is approximately constant, and the integral would be approximately the sum of all the Fourier coefficients; I have a hunch that it sums to $\pi/2$, but haven't proven that. Alternatively, you can directly use a lot of terms in the Fourier expansion for $I$. Finally, around $\sigma\approx\frac{1}{2\sqrt{\pi}}$, you'd probably not need more than 5 or 6 terms in the sum, because $\exp(-k^2)$ is of the order of machine precision by $k=6$.
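For anyone who wants to reproduce the numbers, here is a short check using scipy's Struve function (my own script, not part of the derivation above):

import numpy as np
from scipy.special import struve

def c(k):
    return np.pi / 2 - 1 if k == 0 else -(-1)**k * struve(0, k * np.pi) / (2 * k)

def I(sigma, N=20):
    return c(0) + 2 * sum(c(k) * np.exp(-2 * np.pi**2 * k**2 * sigma**2)
                          for k in range(1, N + 1))

print(c(1))      # ~0.2589, matching the first listed coefficient
print(I(1.0))    # ~0.5708 = pi/2 - 1 for sigma = 1
|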
So let me whine for a bit about LaTeX. LaTeX is document-preparation software used a lot in the sciences - in the mathier sciences particularly. The basic idea is that you have a source file, you feed it into the LaTeX processor, and it spits up a PDF. The reason you'd do this is because LaTeX is really good at type-setting math - way better than any other software I know of, anyway.
Essentially, it turns a source file that looks something like this:
\documentclass[11pt]{article}
\begin{document}
Do you seek knowledge of squid giant axons?
\[ C_m \frac{dV \left ( t \right )}{dt} = \sum_i I_i \left ( t, V \right ) \]
\end{document}
into a typeset page where that equation is properly rendered.
LaTeX is also the math-language for wikipedia, which some of you have probably guessed (or a sub-set of it is, I'm not sure).
If you have a lot of math to lay out, LaTeX is your baby. The problem is that there's more to modern documents than laying out equations nicely. You see, LaTeX is old: the current major version of LaTeX is LaTeX 2e, which had its first release in 1994 and its last release in 2011. There are a lot of things that modern users expect to be able to do easily when making a document that LaTeX doesn't nicely support - like embedding images, having colored text, setting your page margins or including source-code listings.
One or two people reading this post just yelled "Hey, wait a minute! LaTeX can do all of those things!" Sort of; straight-up LaTeX can't, but there are packages that add all of those features. Sometimes, in fact, more than one package; consider the wikibooks section on including source-code listings, or the wikibook chapter on embedding graphics, or the wikibooks section on typesetting algorithms. (Would you call that simple?) Notice that in the case of both graphics and algorithms, there are multiple packages that do the same job in different ways - and sometimes there are even packages that extend other packages!
The problem, essentially, is that the needs of LaTeX users have changed, but the core system has not evolved at all; it's still more-or-less the same LaTeX 2e we've had since 1994. To fix this, people have been releasing more and more packages to shore up the capabilities of the basic system; however, even these packages have aged out of usefulness, and other packages have sprung up to add in even more features that the previous crop of packages didn't include.
In an ideal world, what it would be nice to do is throw out the mess that is LaTeX 2e and rebuild it for the current, completely different environment; sadly, that's impractical for a number of reasons. Among them are all those goddamned packages; people are actually using those things right now in their documents. If the LaTeX authors started over, they'd either have to maintain compatibility with all those packages (which'd be a nightmare) or provide as much of the functionality of those packages in their new system (which'd be a herculean engineering challenge). There is in fact a project to build a new version of LaTeX, LaTeX 3; that's going nowhere, I suspect, largely because they can't possibly satisfy all the needs that the old, decrepit LaTeX 2e plus its 80 bajillion mutually incompatible packages is currently handling (however poorly it's handling them).
So we're stuck with the same old typesetting system, which makes it easy to typeset equations in exchange for making it hard to do God-damned near anything else.
Edit: None of this even touches the less-than-intuitive and inconsistent syntax, or the sometimes-crazy layout rules (like the ones governing figure placement), or that its default behaviors and settings can be insanely hard to change without using packages, or the way it can shit the bath if a figure or table is wider or higher than a page. |
Yeah, this software cannot be too easy to install. My installer is very professional-looking; it's currently not tied into that code, but it directs the user to search for their MiKTeX installation (or install it) and does a test LaTeX rendering
Somebody like Zeta (on codereview) might be able to help a lot... I'm not sure if he does a lot of category theory, but he does a lot of Haskell (not that I'm trying to conflate the two)... so he would probably be one of the better bets for asking for a revision of the code.
He is usually in the 2nd-monitor chat room. There are a lot of people in those chat rooms who help each other with projects.
I'm not sure how many of them are adept at category theory, though... still, this chat tends to emphasize a lot of small problems and occasionally goes off on tangents.
Your project is probably too large for an actual question on codereview, but there is a lot of github activity in the chat rooms. gl.
In mathematics, the Fabius function is an example of an infinitely differentiable function that is nowhere analytic, found by Jaap Fabius (1966). It was also written down as the Fourier transform of $\hat f(z)=\prod_{m=1}^{\infty}(\cos\ldots$
Defined as the probability that $\sum_{n=1}^\infty2^{-n}\zeta_n$ will be less than $x$, where the $\zeta_n$ are chosen randomly and independently from the unit interval
@AkivaWeinberger are you familiar with the theory behind Fourier series?
anyway, here's some food for thought
for $f : S^1 \to \Bbb C$ square-integrable, let $c_n := \displaystyle \int_{S^1} f(\theta) \exp(-i n \theta) (\mathrm d\theta/2\pi)$, and $f^\ast := \displaystyle \sum_{n \in \Bbb Z} c_n \exp(in\theta)$. It is known that $f^\ast = f$ almost everywhere.
(a) is $-^\ast$ idempotent? i.e. is it true that $f^{\ast \ast} = f^\ast$?
@AkivaWeinberger You need to use the definition of $F$ as the cumulative function of the random variables. $C^\infty$ was a simple step, but I don't have access to the paper right now so I don't recall it.
> In mathematics, a square-integrable function, also called a quadratically integrable function, is a real- or complex-valued measurable function for which the integral of the square of the absolute value is finite.
I am having some difficulties understanding the difference between simplicial and singular homology. I am aware of the fact that they are isomorphic, i.e. the homology groups are in fact the same (and maybe this doesn't help my intuition), but I am having trouble seeing where in the setup they d...
Usually it is a great advantage to consult the notes, as they tell you exactly what has been done. A book will teach you the field, but not necessarily help you understand the style that the prof. (who creates the exam) creates questions.
@AkivaWeinberger having thought about it a little, I think the best way to approach the geometry problem is to argue that the relevant condition (centroid is on the incircle) is preserved by similarity transformations
hence you're free to rescale the sides, and therefore the (semi)perimeter as well
so one may (for instance) choose $s=(a+b+c)/2=1$ without loss of generality
that makes a lot of the formulas simpler, e.g. the inradius is identical to the area
It is asking how many terms of the Euler-Maclaurin formula we need in order to compute the Riemann zeta function in the complex plane.
$q$ is the upper summation index in the sum with the Bernoulli numbers.
This appears to answer it in the positive: "By repeating the above argument we see that we have analytically continued the Riemann zeta-function to the right-half plane σ > 1 − k, for all k = 1, 2, 3, . . .." |
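(As a sketch of the computation being discussed: the Euler-Maclaurin approximation to $\zeta(s)$ can be coded directly. $N$, the cut-off of the direct sum, and $q$, the upper summation index in the Bernoulli sum, are free parameters; the constants below are the standard Bernoulli numbers $B_2,\dots,B_{10}$.)

from math import factorial

BERNOULLI = [1/6, -1/30, 1/42, -1/30, 5/66]  # B_2, B_4, ..., B_10

def zeta_em(s, N=20, q=5):
    # direct sum, integral tail, and midpoint correction
    total = sum(n ** (-s) for n in range(1, N))
    total += N ** (1 - s) / (s - 1) + 0.5 * N ** (-s)
    rising = s  # s(s+1)...(s+2k-2), built up term by term
    for k in range(1, q + 1):
        total += BERNOULLI[k - 1] * rising * N ** (-s - 2*k + 1) / factorial(2*k)
        rising *= (s + 2*k - 1) * (s + 2*k)
    return total

# valid roughly for Re(s) > 1 - 2q, matching the analytic continuation;
# near the first nontrivial zero the value should be close to 0:
zeta_em(0.5 + 14.134725j)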
MathModePlugin
Add math formulas to TWiki topics using LaTeX markup language
Description
This plugin allows you to include mathematics in a TWiki page, with a format very similar to LaTeX. The external program latex2html is used to generate gif (or png) images from the math markup, and the image is then included in the page. The first time a particular expression is rendered, you will notice a lag as latex2html is run on the server. Once rendered, the image is saved as an attached file for the page, so subsequent viewings will not require re-renders. When you remove a math expression from a page, its image is deleted.
Note that this plugin is called MathModePlugin, not LaTeXPlugin, because the only piece of LaTeX implemented is the rendering of images of mathematics.
Syntax Rules

<latex [attr="value"]* > formula </latex>

generates an image from the contained formula. In addition, attribute-value pairs may be specified that are passed on to the resulting img HTML tag. The only exceptions are the following attributes, which take effect in the latex rendering pipeline:
size: the latex font size; possible values are tiny, scriptsize, footnotesize, small, normalsize, large, Large, LARGE, huge or Huge; defaults to %LATEXFONTSIZE%
color: the foreground color of the formula; defaults to %LATEXFGCOLOR%
bgcolor: the background color; defaults to %LATEXBGCOLOR%
The formula will be displayed using a math latex environment by default. If the formula contains a latex linebreak (\\), then a multline environment of amsmath is used instead. If the formula contains an alignment sequence (& = &), then an eqnarray environment is used.
Note that the old notation using %$formula$% and %\[formula\]% is still supported but deprecated.
If you want to recompute the images cached for the current page, append ?refresh=on to its URL, e.g. click here to refresh the formulas in the examples below.
Examples
The following will only display correctly if this plugin is installed and configured correctly.
<latex title="this is an example">
\int_{-\infty}^\infty e^{-\alpha x^2} dx = \sqrt{\frac{\pi}{\alpha}}
</latex>
<latex>
{\cal P} & = & \{f_1, f_2, \ldots, f_m\} \\
{\cal C} & = & \{c_1, c_2, \ldots, c_m\} \\
{\cal N} & = & \{n_1, n_2, \ldots, n_m\}
</latex>
<latex title="Calligraphics" color="orange">
\cal
A, B, C, D, E, F, G, H, I, J, K, L, M, \\
\cal
N, O, P, Q, R, S, T, U, V, W, X, Y, Z
</latex>
<latex>
\sum_{i_1, i_2, \ldots, i_n} \pi * i + \sigma
</latex>
This is a new inline test.
Greek letters
\alpha
\theta
\beta
\iota
\gamma
\kappa
\delta
\lambda
\epsilon
\mu
\zeta
\nu
\eta
\xi
Plugin Installation Instructions

Download the ZIP file and unzip it in your twiki installation directory. MathModePlugin.zip contains:

data/TWiki/MathModePlugin.txt
lib/TWiki/Plugins/MathModePlugin/Core.pm
lib/TWiki/Plugins/MathModePlugin.pm
pub/TWiki/MathModePlugin/latex2img

This plugin makes use of three additional tools to convert latex formulas to images: latex, dvipng and convert. Make sure they are installed, and check the paths to these programs in the latex2img script shipped with this plugin.
Edit the file <path-to-twiki>/pub/TWiki/MathModePlugin/latex2img accordingly and set execute permission on it for your webserver. Then visit configure in your TWiki installation and enable the plugin in the {Plugins} section.
Troubleshooting

If you get an error like "fmtutil: [some-dir]/latex.fmt does not exist", run fmtutil-sys --all on your server to recreate all latex format styles.
If your generated image of the latex formula does not show up, then you probably have encoding issues. Look into the source of the <img> tag in your page's source code. Non-ASCII characters in file names might cause trouble. Check the localization settings in the TWiki configure page.

Configuration
There is a set of configuration variables that can be set in different places. All of the variables below can be set in your LocalSite.cfg file like this:

$TWiki::cfg{MathModePlugin}{<Name>} = <value>;

Some of the variables below can only be set this way; others may be overridden by defining the respective preference variable.
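For instance, a hypothetical LocalSite.cfg fragment overriding three of the defaults from the table below (the values shown are purely illustrative):

$TWiki::cfg{MathModePlugin}{ImageType} = 'png';
$TWiki::cfg{MathModePlugin}{ScaleFactor} = 1.4;
$TWiki::cfg{MathModePlugin}{LatexFGColor} = 'black';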
Name - Preference Variable - Default - Description
HashCodeLength - (none) - 32 - length of the hash code; if you switch to a different hash function, you will likely have to change this
ImagePrefix - (none) - '_MathModePlugin_' - string to be prepended to any auto-generated image
ImageType - %LATEXIMAGETYPE% - 'png' - extension of the image type; possible values are 'gif' and 'png'
Latex2Img - (none) - '.../TWiki/MathModePlugin/latex2img' - the script to convert a latex formula to an image
LatexPreamble - %LATEXPREAMBLE% - '\usepackage{latexsym}' - latex preamble to include additional packages (e.g. \usepackage{mathptmx} to change the math font); note that the packages amsmath and color are loaded too, as they are obligatory
ScaleFactor - %LATEXSCALEFACTOR% - 1.2 - factor to scale images
LatexFGColor - %LATEXFGCOLOR% - black - default text color
LatexBGColor - %LATEXBGCOLOR% - white - default background color
LatexFontSize - %LATEXFONTSIZE% - normalsize - default font size

Plugin Info |
Ukr. Mat. Zh. - 2002. - 54, № 8. - pp. 1031-1041
We consider periodic components of $A$-diffeomorphisms on two-dimensional manifolds. We study properties of these components and give a topological description of their boundaries.
Ukr. Mat. Zh. - 2002. - 54, № 8. - pp. 1042-1052
We consider a family of boundary-value problems in which the role of a parameter is played by a potential. We investigate the smooth structure and homotopic properties of the manifolds of eigenfunctions and degenerate potentials corresponding to double eigenvalues.
Ukr. Mat. Zh. - 2002. - 54, № 8. - pp. 1053-1066
We investigate a mixed problem for a nonlinear ultraparabolic equation in a certain domain $Q$ unbounded in the space variables. This equation degenerates on a part of the lateral surface on which boundary conditions are given. We establish conditions for the existence and uniqueness of a solution of the mixed problem for the ultraparabolic equation; these conditions do not depend on the behavior of the solution at infinity. The problem is investigated in generalized Lebesgue spaces.
Ukr. Mat. Zh. - 2002. - 54, № 8. - pp. 1067-1076
We establish a criterion for convolutors in certain $S$-type spaces. Using this criterion, we prove the correct solvability (in both directions) of a Cauchy problem in these spaces.
Ukr. Mat. Zh. - 2002. - 54, № 8. - pp. 1077-1088
For an arbitrary operator, we pose a general reconstruction problem inverse to the problem of finding solutions. For the pair operator considered, this problem is reduced to the equivalent problem of reconstruction of the kernels of the pair integral equation of convolution type that generates this operator. In the cases investigated, we prove theorems that characterize the reconstruction of the corresponding kernels, which are constructed in terms of two functions from different Banach algebras of the type $L_1(-\infty, \infty)$ with weight.
Ukr. Mat. Zh. - 2002. - 54, № 8. - pp. 1089-1099
We propose and justify an algorithm for the construction of asymptotic solutions of singularly perturbed differential equations with impulse action.
Ukr. Mat. Zh. - 2002. - 54, № 8. - pp. 1100-1125
We present a survey of results concerning the approximation of classes of periodic functions by the de la Vallée-Poussin sums obtained by various authors in the 20th century.
Ukr. Mat. Zh. - 2002. - 54, № 8. - pp. 1126
We study problems related to the stability of solutions of nonlinear difference equations with random perturbations of semi-Markov type. We construct Lyapunov functions for different classes of nonlinear difference equations with semi-Markov right-hand side and establish conditions for their existence.
Ukr. Mat. Zh. - 2002. - 54, № 8. - pp. 1135-1142
The present paper deals with the well-posedness and regularity of one class of one-dimensional time-dependent boundary-value problems with global boundary conditions on the entire time interval. We establish conditions for the well-posedness of boundary-value problems for partial differential equations in the class of bounded differentiable functions. A criterion for the regularity of the problem under consideration is also obtained.
Ukr. Mat. Zh. - 2002. - 54, № 8. - pp. 1143-1148
We investigate the structure of matrices and their divisors over the domain of principal ideals.
Ukr. Mat. Zh. - 2002. - 54, № 8. - pp. 1149-1153
Let $M_f(r)$ and $\mu_f(r)$ be, respectively, the maximum of the modulus and the maximum term of an entire function $f$, and let $\Phi$ be a continuously differentiable function convex on $(-\infty, +\infty)$ and such that $x = o(\Phi(x))$ as $x \to +\infty$. We establish that, in order that the equality $\liminf_{r \to +\infty} \frac{\ln M_f(r)}{\Phi(\ln r)} = \liminf_{r \to +\infty} \frac{\ln \mu_f(r)}{\Phi(\ln r)}$ be true for any entire function $f$, it is necessary and sufficient that $\ln \Phi'(x) = o(\Phi(x))$ as $x \to +\infty$. |
(I have already asked this at MathOverflow, but got no answers there.)
Background
In the untyped lambda calculus, a term may contain many redexes, and different choices about which one to reduce may produce wildly different results (e.g. $(\lambda x.y)((\lambda x.xx)\lambda x.xx)$, which in one step $\beta$-reduces either to $y$ or to itself). Different (sequences of) choices of where to reduce are called reduction strategies. A term $t$ is said to be normalizing if there exists a reduction strategy which brings $t$ to normal form. A term $t$ is said to be strongly normalizing if every reduction strategy brings $t$ to normal form. (I'm not worried about which normal form, but confluence guarantees there can't be more than one possibility.)
A reduction strategy is said to be normalizing (and is in some sense best possible) if whenever $t$ has a normal form, then that's where we'll end up. The leftmost-outermost strategy is normalizing.
At the other end of the spectrum, a reduction strategy is said to be perpetual (and is in some sense worst possible) if, whenever there is an infinite reduction sequence from a term $t$, the strategy finds such a sequence - in other words, if we could possibly fail to normalize, then we will.
I know of the perpetual reduction strategies $F_\infty$ and $F_{bk}$, given respectively by:\begin{array}{ll}F_{bk}(C[(\lambda x.s)t])=C[s[t/x]] & \text{if $t$ is strongly normalizing} \\F_{bk}(C[(\lambda x.s)t])=C[(\lambda x.s)F_{bk}(t)] & \text{otherwise}\end{array}and\begin{array}{ll}F_\infty(C[(\lambda x.s)t])=C[s[t/x]] & \text{if $x$ occurs in $s$, or if $t$ is in normal form} \\F_\infty(C[(\lambda x.s)t])=C[(\lambda x.s)F_\infty(t)] & \text{otherwise}\end{array}(In both cases, the indicated $\beta$-redex is the leftmost one in the term $C[(\lambda x.s)t]$ - and on normal forms, reduction strategies are necessarily the identity.) The strategy $F_\infty$ is even maximal - if it normalizes a term, then it has used a longest possible reduction sequence to do so. (See e.g. 13.4 in Barendregt's book.)
Consider now the leftmost-innermost reduction strategy. Informally, it will only reduce a $\beta$-redex which contains no other redexes. More formally, it is defined by\begin{array}{ll} L(t) = t &\text{if $t$ is in normal form}\\ L(\lambda x.s) = \lambda x. L(s) &\text{for $s$ not in normal form}\\ L(st) = L(s)t &\text{for $s$ not in normal form}\\ L(st) = s L(t) &\text{if $s$, but not $t$, is in normal form}\\ L((\lambda x. s)t) = s[t/x] &\text{if $s$, $t$ both in normal form}\end{array}
The natural intuition for leftmost-innermost reduction is that it will do all the work - no redex can be lost, and so it ought to be perpetual. Since the corresponding strategy is perpetual for (untyped) combinatory logic (innermost reductions are perpetual for all orthogonal TRSs), this doesn't feel like completely unfettered blue-eyed optimism...
Is leftmost-innermost reduction a perpetual strategy for the untyped $\lambda$-calculus?
If the answer turns out to be 'no', a pointer to a counterexample would be very interesting too. |
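(Not an answer, just a tool: here is a minimal executable sketch of $L$, in Python, for experimenting with candidate counterexamples. The term representation and helper names are mine, and the substitution is naive - not capture-avoiding - which is enough for the closed example below.)

from dataclasses import dataclass

@dataclass(frozen=True)
class Var: name: str
@dataclass(frozen=True)
class Lam: var: str; body: object
@dataclass(frozen=True)
class App: fun: object; arg: object

def is_redex(t):
    return isinstance(t, App) and isinstance(t.fun, Lam)

def is_normal(t):
    if isinstance(t, Var):
        return True
    if isinstance(t, Lam):
        return is_normal(t.body)
    return not is_redex(t) and is_normal(t.fun) and is_normal(t.arg)

def subst(t, x, s):  # t[s/x], naive (no capture avoidance)
    if isinstance(t, Var):
        return s if t.name == x else t
    if isinstance(t, Lam):
        return t if t.var == x else Lam(t.var, subst(t.body, x, s))
    return App(subst(t.fun, x, s), subst(t.arg, x, s))

def L(t):  # one leftmost-innermost step, following the definition above
    if is_normal(t):
        return t
    if isinstance(t, Lam):
        return Lam(t.var, L(t.body))
    if not is_normal(t.fun):
        return App(L(t.fun), t.arg)
    if not is_normal(t.arg):
        return App(t.fun, L(t.arg))
    return subst(t.fun.body, t.fun.var, t.arg)  # both parts normal: contract

# (\x.y)((\x.xx)(\x.xx)): L loops on the argument forever, while
# leftmost-outermost reaches the normal form y in one step.
omega = App(Lam("x", App(Var("x"), Var("x"))), Lam("x", App(Var("x"), Var("x"))))
t = App(Lam("x", Var("y")), omega)
for _ in range(3):
    t = L(t)  # stays (\x.y)omega each iteration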
You are right. Even though the phase jumps at the zeros of the frequency response, such a phase response is usually still called "linear". For a frequency selective filter with frequency response zeros in the stopband, the phase always has discontinuities at the zeros. A purely linear phase response (without jumps) is only possible for filters with no zeros in their frequency response.
The frequency response of your filter can be written as
$$H(e^{j\omega})=|H(e^{j\omega})|e^{j\phi(\omega)}\tag{1}$$
where $\phi(\omega)$ is the phase response with jumps at the zeros, as shown in your figure. For such a linear phase system, the frequency response (1) can equivalently be written as
$$H(e^{j\omega})=A(e^{j\omega})e^{j\hat{\phi}(\omega)}\tag{2}$$
where $A(e^{j\omega})$ is a real-valued but bipolar (i.e. positive and possibly negative) function satisfying $|A(e^{j\omega})|=|H(e^{j\omega})|$. The phase $\hat{\phi}(\omega)$ in (2) is now a purely linear function without any jumps. |
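A quick numerical illustration of (2), as a sketch (the symmetric tap values below are made up): for a type-I linear-phase FIR filter of length $2M+1$, multiplying $H(e^{j\omega})$ by $e^{j\omega M}$ removes the linear carrier and leaves exactly the real, bipolar amplitude $A(e^{j\omega})$, whose sign changes are the $\pi$-jumps seen in $\phi(\omega)$:

import numpy as np

# arbitrary symmetric (type-I linear-phase) taps, length 2M+1
h = np.array([-0.05, 0.0, 0.25, 0.6, 0.25, 0.0, -0.05])
M = (len(h) - 1) // 2
w = np.linspace(0, np.pi, 512)
H = np.array([np.sum(h * np.exp(-1j * wi * np.arange(len(h)))) for wi in w])
A = np.real(H * np.exp(1j * w * M))  # remove carrier: real, possibly negative
assert np.allclose(np.abs(A), np.abs(H))  # |A| = |H|; sign flips = phase jumps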
It wouldn't.
It is a well-known fact from gravity, which also has a law $\nabla\cdot\vec F_g \propto \rho$ expressing a $1/r^2$ force, that the force of a spherical shell of mass is zero inside that shell, but outside it is $-\hat r G M m / r^2$, as if all of that mass were at the shell's center.
The same mathematics will mean that the $\vec E$ field within a spherical insulator with uniform charge density $\rho$ will be $$\vec E = \hat r \frac{Q_{\text{encl.}}}{4\pi\epsilon_0 r^2} = \hat r \frac{1}{4\pi\epsilon_0 r^2} \rho \cdot \frac{4\pi}{3}r^3 = \hat r \frac{\rho r}{3\epsilon_0}$$which is plainly not zero unless $\rho$ is.
Interestingly, you can kill these sorts of arguments by extending the radius of the sphere out to all space, at which point Gaussian arguments get a little hazier and there are symmetry reasons to think that there is no net force on you. However, Gauss still wins in this way: any little noise in that background distribution of charge gets amplified by the laws as time goes on, eventually leading to a very uneven distribution of charge, so it's like balancing a pen on its tip: the physics says it's possible but unstable. |
A straight rod is made of two parts, $[0,x_1]$ (green in the figure) with thermal diffusivity $\kappa_1$ and $[x_1,x_2]$ (blue) with thermal diffusivity $\kappa_2$. The rod is perfectly insulated. Zero $y$ and $z$ temperature gradients are assumed.
At $x=0$ temperature is maintained at constant $T_0$. At $x=x_2$ the rod is embedded into a perfect insulator ($\kappa=0$). At $t=0$ the rod has a uniform temperature $T(x,0)=T_i$.
Question: what is the temperature evolution of the rod?

1. The simple case where $\kappa_1=\kappa_2=\kappa$:
Let $u(x,t)=T(x,t)-T_0$.
Then Fourier's equation tells us:
$$u_t=\kappa u_{xx}$$
Boundary conditions:
$$u(0,t)=0$$ $$u_x(x_2,t)=0$$
Initial condition:
$$u(x,0)=u_i=T_i-T_0$$
Using the Ansatz $u(x,t)=X(x)\Gamma(t)$, separation constant $-k^2$ and the boundary conditions above, this solves easily to:
$$u(x,t)=\sum_{n=1}^{+\infty}B_n\sin\Bigg(\frac{n\pi x}{2x_2}\Bigg)e^{-\kappa \Big(\frac{n\pi }{2x_2}\Big)^2t}$$
(for $n=1,3,5,7,\ldots$)
The $B_n$ coefficients can easily be obtained from the initial condition with the Fourier sine series:
$$B_n=\frac{4u_i}{n\pi}$$
Back-substituting we get:
$$T(x,t)=T_0+\frac{4(T_i-T_0)}{\pi}\displaystyle \sum_{n=1}^{+\infty}\frac{1}{n} \sin\Bigg(\frac{n\pi x}{2x_2}\Bigg)e^{-\kappa \Big(\frac{n\pi }{2x_2}\Big)^2t}$$
(for $n=1,3,5,7,...$)
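(A small sketch for evaluating this partial sum numerically; the parameter values are arbitrary, and nmax=5 keeps exactly the first three odd terms.)

import numpy as np

def T(x, t, T0=0.0, Ti=1.0, kappa=1.0, x2=1.0, nmax=5):
    # partial sum of the odd-n series above
    s = 0.0
    for n in range(1, nmax + 1, 2):  # n = 1, 3, 5, ...
        kn = n * np.pi / (2 * x2)
        s += np.sin(kn * x) * np.exp(-kappa * kn**2 * t) / n
    return T0 + 4 * (Ti - T0) / np.pi * s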
A plot for the first three terms at $t=0.1$: [figure omitted]
2. The case where $\kappa_1\neq\kappa_2$:
We define two functions $u_1(x,t)$ for $[0,x_1]$ and $u_2(x,t)$ for $[x_1,x_2]$. We use the same Ansatz as under $1.$ We'll assume both functions have their own eigenvalues.
Boundary conditions:
$$u_1(0,t)=0\implies X_1(0)=0\tag{1}$$ $$\frac{\partial u_2(x_2)}{\partial x}=0\implies X_2'(x_2)=0\tag{2}$$
In addition (continuity):
$$u_1(x_1,t)=u_2(x_1,t)\tag{3}$$
With Fourier, the heat flux is the same at $x=x_1$:
$$\alpha_1\frac{\partial u_1(x_1)}{\partial x}=\alpha_2\frac{\partial u_2(x_1)}{\partial x}\tag{4}$$
where $\alpha_i$ are the thermal conductivities.

a. For $u_1(x,t)$:
$$X_1(x)=c_1\cos k_1x+c_2\sin k_1x$$ $$X_1(0)=0\implies c_1=0\implies X_1(x)=c_2\sin k_1x\tag{5}$$
b. for $u_2(x,t)$:
$$X_2(x)=c_3\cos k_2x+c_4\sin k_2x$$ $$X_2'(x_2)=0\tag{2}$$ $$\implies -c_3k_2\sin k_2x_2+c_4k_2\cos k_2x_2=0\tag{6}$$
Using the additional conditions $(3)$ and $(4)$:
$$c_2\sin k_1x_1=c_3\cos k_2x_1+c_4\sin k_2x_1\tag{7}$$ $$c_2\alpha_1k_1\cos k_1x_1=-c_3\alpha_2k_2\sin k_2x_1+c_4\alpha_2k_2\cos k_2x_1\tag{8}$$
Problem:
$(6)$, $(7)$ and $(8)$ form a system of three simultaneous equations but with five unknowns: $c_2$, $c_3$, $c_4$, $k_1$ and $k_2$.
I'm tempted to set $c_3=0$ as it would yield $k_2$ from $(6)$. I think this would yield also the remaining unknowns.
But can I a priori assume $c_3=0$? Or is there another approach possible?
I'm also left wondering whether perhaps $k_1=k_2$. The eigenvalues do not depend on $\kappa$, so perhaps the eigenvalues $k$ are common to both functions. Due to $(4)$, $u_1$ and $u_2$ would then still be distinct. |
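(Not a full answer, but a numerical sketch of one consistent approach, with all parameter values made up and scipy assumed available: if both parts share the time factor $e^{-\lambda t}$ - which continuity at $x_1$ for all $t$ suggests - then $\lambda=\kappa_1k_1^2=\kappa_2k_2^2$, so $k_2=k_1\sqrt{\kappa_1/\kappa_2}$ rather than $k_1=k_2$; a nontrivial $(c_2,c_3,c_4)$ then requires the determinant of the homogeneous system $(6)$-$(8)$ to vanish, which selects the eigenvalues $k_1$.)

import numpy as np
from scipy.optimize import brentq

kappa1, kappa2, alpha1, alpha2, x1, x2 = 1.0, 0.25, 1.0, 0.5, 0.4, 1.0

def det(k1):
    k2 = k1 * np.sqrt(kappa1 / kappa2)
    # rows: equations (6), (7), (8); columns: c2, c3, c4
    m = np.array([
        [0.0, -k2 * np.sin(k2 * x2), k2 * np.cos(k2 * x2)],
        [np.sin(k1 * x1), -np.cos(k2 * x1), -np.sin(k2 * x1)],
        [alpha1 * k1 * np.cos(k1 * x1), alpha2 * k2 * np.sin(k2 * x1),
         -alpha2 * k2 * np.cos(k2 * x1)],
    ])
    return np.linalg.det(m)

# bracket sign changes on a grid, then refine each root with brentq
ks = np.linspace(0.01, 20.0, 2000)
vals = [det(k) for k in ks]
roots = [brentq(det, a, b)
         for a, b, va, vb in zip(ks, ks[1:], vals, vals[1:]) if va * vb < 0]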
Existence and concentration of nodal solutions to a class of quasilinear problems
DOI: http://dx.doi.org/10.12775/TMNA.2007.012
Abstract
The existence and concentration behavior of nodal solutions are established for the equation $-\varepsilon^{p} \Delta_{p}u + V(z)|u|^{p-2}u=f(u)$ in $\Omega$, where $\Omega$ is a domain in ${\mathbb R}^{N}$, not necessarily bounded, $V$ is a positive Hölder continuous function and $f\in C^{1}$ is a function having subcritical growth.
Keywords
Quasilinear equation; variational methods; behaviour of solutions
Prof. Stephen Playfer
Position: Personal Chair
Research Theme: Particle and Nuclear Physics
Research Group: Particle Physics Experiment
Institution: Edinburgh
Email address: s.m.playfer@ed.ac.uk
Telephone number: +44 (0)131 650 5275
Address: School of Physics and Astronomy, James Clerk Maxwell Building, Peter Guthrie Tait Road, Edinburgh, EH9 3FD, United Kingdom
Research interests
I am an experimental particle physicist with a particular interest in the mysteries associated with the existence of three flavours of quarks and leptons. Their different masses and couplings (the CKM matrix for quarks, and the PMNS matrix for neutrinos) are free parameters in the Standard Model which have to be determined experimentally. The existence of three flavours leads to matter-antimatter asymmetries known as CP violation. Since we live in a matter-dominated universe, the mechanisms for CP violation are a fundamental question, and the current Standard Model of particle physics falls short of producing the observed baryon asymmetry of the universe by many orders of magnitude.
I seek to address these questions by studying rare decays of heavy flavours (b quarks, c quarks and tau leptons), and by measuring CP violation in the decays of neutral B mesons. From 2000 to 2010 I led a group working within the BaBar collaboration at SLAC, where we studied rare radiative and electroweak B decays. Currently I am a member of the LHCb experiment at CERN, studying CP violation in rare hadronic decays of B mesons.
Teaching
I have taught undergraduate physics at all levels, including courses on waves, electromagnetism, dynamics & relativity and particle physics. Currently I am teaching second year waves (as part of the Classical & Modern Physics course), and supervising the third year experimental laboratory.
In 2011/2012 I was the convenor of a working group (Forward Look Implementation) that completely redesigned the core courses in our main degree programmes. Since 2012 I have been the Convenor of the Board of Examiners in Physics, and also since 2012 an External Examiner for Birmingham University.
Research outputs
Measurement of $Z \to \tau^{+}\tau^{-}$ production in proton-proton collisions at $\sqrt{s}=8$ TeV, Journal of High Energy Physics, 9 (2018)
Observation of the decay $\overline{B_s^0} \rightarrow \chi_{c2} K^+ K^-$ in the $\varphi$ mass region, Journal of High Energy Physics (2018)
Measurement of the $\Upsilon$ polarizations in $pp$ collisions at $\sqrt{s}=7$ and 8 TeV, Journal of High Energy Physics, 1712, p. 110 (2017)
Measurement of the shape of the $\Lambda_b^0\to\Lambda_c^+ \mu^- \overline{\nu}_{\mu}$ differential decay rate, Physical Review, D96, 11, p. 112005 (2017)
Bose-Einstein correlations of same-sign charged pions in the forward region in $pp$ collisions at $\sqrt{s} = 7$ TeV, Journal of High Energy Physics, 1712, p. 025 (2017)
Measurement of the $B^{\pm}$ production cross-section in $pp$ collisions at $\sqrt{s} = 7$ and 13 TeV, Journal of High Energy Physics, 1712, p. 026 (2017)
First Observation of the Rare Purely Baryonic Decay $B^0\to p\bar p$, Physical Review Letters, 119, 23, p. 232001 (2017)
Updated search for long-lived particles decaying to jet pairs, European Physical Journal C: Particles and Fields, C77, 12, p. 812 (2017)
$\chi_{c1}$ and $\chi_{c2}$ Resonance Parameters with the Decays $\chi_{c1,c2}\to J/\psi\mu^+\mu^-$, Physical Review Letters, 119, 22 (2017)
Measurement of $CP$ violation in $B^0\rightarrow J/\psi K^0_\mathrm{S}$ and $B^0\rightarrow\psi(2S) K^0_\mathrm{S}$ decays, Journal of High Energy Physics, 1711, p. 170 (2017) |
ISSN: 1078-0947; eISSN: 1553-5231
Discrete & Continuous Dynamical Systems - A
December 2015, Volume 35, Issue 12
Special issue on contemporary PDEs between theory and applications
Abstract:
This special issue of Discrete and Continuous Dynamical Systems is devoted to some recent developments in some important fields of partial differential equations.
The aim is to bring together several contributions in different fields that range from classical to modern topics with the intent to present new research perspectives, innovative methods and challenging applications.
Though it was of course impossible to take into account all the possible lines of research in PDEs, we tried to present a wide spectrum, hoping to capture the interest of both the general mathematical audience and the specialized mathematicians that work in differential equations and related fields.
We think that the Authors put a great effort to write their contributions in the clearest possible language. We are indeed grateful to all the Authors that contributed to this special issue, donating beautiful pieces of mathematics to the community and promoting further developments in the field.
We also thank the Managing Editor for his kind invitation to act as an editor of this special issue.
Also, we express our gratitude to all the Referees who kindly agreed to devote their time and efforts to read and check all the papers carefully, providing useful comments and recommendations. Indeed, each paper was submitted to the meticulous inspection of two independent and anonymous Experts, whose observations were fundamental to the final outcome of this special issue.
Finally, we would like to wish a `Happy reading!' to the Reader. This volume is for Her (or Him), after all.
Abstract:
We present a notion of weak solution for the Dirichlet problem driven by the fractional Laplacian, following the Stampacchia theory. Then, we study semilinear problems of the form $$ \left\lbrace\begin{array}{ll} (-\triangle)^s u = \pm\,f(x,u) & \hbox{ in }\Omega \\ u=g & \hbox{ in }\mathbb{R}^n\setminus\overline{\Omega}\\ Eu=h & \hbox{ on }\partial\Omega \end{array}\right. $$ when the nonlinearity $f$ and the boundary data $g,h$ are positive, but allowing the right-hand side to be both positive or negative and looking for solutions that blow up at the boundary. The operator $E$ is a weighted limit to the boundary: for example, if $\Omega$ is the ball $B$, there exists a constant $C(n,s)>0$ such that $$ Eu(\theta) = C(n,s) \lim_{B \ni x \to \theta} u(x)\, \mathrm{dist}(x,\partial B)^{1-s}, \hbox{ for all } \theta \in \partial B. $$ Our starting observation is the existence of $s$-harmonic functions which explode at the boundary: these will be used both as supersolutions in the case of negative right-hand side and as subsolutions in the positive case.
Abstract:
We characterize the set of harmonic functions with Dirichlet boundary conditions in unbounded domains which are union of two different chambers. We analyse the asymptotic behavior of the solutions in connection with the changes in the domain's geometry; we classify all (possibly sign-changing) infinite energy solutions having given asymptotic frequency at the infinite ends of the domain; finally we sketch the case of several different chambers.
Abstract:
We extend the Caffarelli--Córdoba estimates to the vector case (L. Caffarelli and A. Córdoba, Uniform Convergence of a singular perturbation problem, Comm. Pure Appl. Math. 48 (1995)). In particular, we establish lower codimension density estimates. These are useful for studying the hierarchical structure of minimal solutions. We also give applications.
Abstract:
We establish a variational parabolic capacity in a context of degenerate parabolic equations of $p$-Laplace type, and show that this capacity is equivalent to the nonlinear parabolic capacity. As an application, we estimate the capacities of several explicit sets.
Abstract:
This paper provides an elementary proof of the classical limit of the Schrödinger equation with WKB type initial data and over arbitrary long finite time intervals. We use only the stationary phase method and the Laptev-Sigal simple and elegant construction of a parametrix for Schrödinger type equations [A. Laptev, I. Sigal, Review of Math. Phys. 12 (2000), 749--766]. We also explain in detail how the phase shifts across caustics obtained when using the Laptev-Sigal parametrix are related to the Maslov index.
Abstract:
We show that the parabolic minimal surface equation has an eventual regularization effect, that is, the solution becomes smooth after a strictly positive finite time.
Abstract:
We consider nonlinear diffusive evolution equations posed on bounded space domains, governed by fractional Laplace-type operators, and involving porous medium type nonlinearities. We establish existence and uniqueness results in a suitable class of solutions using the theory of maximal monotone operators on dual spaces. Then we describe the long-time asymptotics in terms of separate-variables solutions of the friendly giant type. As a by-product, we obtain an existence and uniqueness result for semilinear elliptic non local equations with sub-linear nonlinearities. The Appendix contains a review of the theory of fractional Sobolev spaces and of the interpolation theory that are used in the rest of the paper.
Abstract:
In this brief note we study how the fractional mean curvature of order $s \in (0, 1)$ varies with respect to $C^{1, \alpha}$ diffeomorphisms. We prove that, if $\alpha > s$, then the variation under a $C^{1, \alpha}$ diffeomorphism $\Psi$ of the $s$-mean curvature of a set $E$ is controlled by the $C^{0, \alpha}$ norm of the Jacobian of $\Psi$. When $\alpha = 1$ we discuss the stability of these estimates as $s \rightarrow 1^-$ and comment on the consistency of our result with the classical framework.
Abstract:
Given a compact three-manifold together with a Riemannian metric, we prove the short-time existence of a solution to the renormalization group flow, truncated at the second order term, under a suitable hypothesis on the sectional curvature of the initial metric.
Abstract:
We prove the existence of extremal domains with small prescribed volume for the first eigenvalue of the Laplace-Beltrami operator in any compact Riemannian manifold. This result generalizes a result of F. Pacard and the second author where the existence of a nondegenerate critical point of the scalar curvature of the Riemannian manifold was required.
Abstract:
Asymptotics of solutions to relativistic fractional elliptic equations with Hardy type potentials is established in this paper. As a consequence, unique continuation properties are obtained.
Abstract:
We prove the symmetry of components and some Liouville-type theorems for, possibly sign changing, entire distributional solutions to a family of nonlinear elliptic systems encompassing models arising in Bose-Einstein condensation and in nonlinear optics. For these models we also provide precise classification results for non-negative solutions. The sharpness of our results is also discussed.
Abstract:
A plate model describing the statics and dynamics of a suspension bridge is suggested. A partially hinged plate subject to nonlinear restoring hangers is considered. The whole theory from linear problems, through nonlinear stationary equations, ending with the full hyperbolic evolution equation is studied. This paper aims to be the starting point for more refined models.
Abstract:
We prove Harnack type inequalities for a wide class of parabolic doubly nonlinear equations including $u_t = \mathrm{div}(|u|^{m-1}|Du|^{p-2}Du)$. We will distinguish between the supercritical range $3 - \frac{p}{N} < p+m < 3$ and the subcritical range $2 < p+m \le 3 - \frac{p}{N}$. Our results extend similar estimates holding for general equations having the same structure as the parabolic $p$-Laplace or the porous medium equation and recently collected in [6].
Abstract:
We are concerned with the long time behaviour of solutions to the fractional porous medium equation with a variable spatial density. We prove that if the density decays slowly at infinity, then the solution approaches the Barenblatt-type solution of a proper singular fractional problem. If, on the contrary, the density decays rapidly at infinity, we show that the minimal solution multiplied by a suitable power of the time variable converges to the minimal solution of a certain fractional sublinear elliptic equation.
Abstract:
We consider a class of scalar field equations with anisotropic nonlocal nonlinearities. We obtain a suitable extension of the well-known compactness lemma of Benci and Cerami to this variable exponent setting, and use it to prove that the Palais-Smale condition holds at all level below a certain threshold. We deduce the existence of a ground state when the variable exponent slowly approaches the limit at infinity from below.
Abstract:
We prove optimal pointwise Schauder estimates in the spatial variables for solutions of linear parabolic integro-differential equations. Optimal Hölder estimates in space-time for those spatial derivatives are also obtained.
Abstract:
A rate-independent model for the quasistatic evolution of a magnetoelastic plate is advanced and analyzed. Starting from the three-dimensional setting, we present an evolutionary $\Gamma$-convergence argument in order to pass to the limit in one of the material dimensions. By taking into account both conservative and dissipative actions, a nonlinear evolution system of rate-independent type is obtained. The existence of so-called energetic solutions to such a system is proved via approximation.
Abstract:
Smoothness of a function $f:\mathbb{R}^n\to\mathbb{R}$ can be measured in terms of the rate of convergence of $f*\rho_{\epsilon}$ to $f$, where $\rho$ is an appropriate mollifier. In the framework of fractional Sobolev spaces, we characterize the "appropriate" mollifiers. We also obtain sufficient conditions, close to being necessary, which ensure that $\rho$ is adapted to a given scale of spaces. Finally, we examine in detail the case where $\rho$ is a characteristic function.
Abstract:
In this work we consider the problems $$ \left\{\begin{array}{rcll} \mathcal{L \,} u&=&f &\hbox{ in } \Omega,\\ u&=&0 &\hbox{ in } \mathbb{R}^N\setminus\Omega, \end{array} \right. $$ and $$ \left\{\begin{array}{rcll} u_t +\mathcal{L \,} u&=&f &\hbox{ in } Q_{T}\equiv\Omega\times (0, T),\\ u (x,t) &=&0 &\hbox{ in } \big(\mathbb{R}^N\setminus\Omega\big) \times (0, T),\\ u(x,0)&=&0 &\hbox{ in } \Omega, \end{array} \right. $$ where $\mathcal{L \,}$ is a nonlocal differential operator and $\Omega$ is a bounded domain in $\mathbb{R}^N$, with Lipschitz boundary.
The main goal of this work is to study existence, uniqueness and summability of the solution $u$ with respect to the summability of the datum $f$. In the process we establish an $L^p$-theory, for $p \geq 1$, associated to these problems and we prove some useful inequalities for the applications.
Abstract:
In this paper, we study the regularity of convex solutions to the Dirichlet problem of the homogeneous Monge-Ampère equation $\det D^2 u=0$. We prove that if the domain is a strip region and the boundary functions are locally uniformly convex and $C^{k+2,\alpha}$ smooth, then the solution is $C^{k+2,\alpha}$ smooth up to boundary. By an example, we show the solution may fail to be $C^{2}$ smooth if boundary functions are not locally uniformly convex. Similar results have also been obtained for the Dirichlet problem on bounded convex domains.
Abstract:
For the cubic Schrödinger system with trapping potentials in $\mathbb{R}^N$, $N\leq3$, or in bounded domains, we investigate the existence and the orbital stability of standing waves having components with prescribed $L^2$-mass. We provide a variational characterization of such solutions, which gives information on the stability through a condition of Grillakis-Shatah-Strauss type. As an application, we show existence of conditionally orbitally stable solitary waves when: a) the masses are small, for almost every scattering lengths, and b) in the defocusing, weakly interacting case, for any masses.
Abstract:
This paper slightly improves a classical result by Gangbo and McCann (1996) about the structure of optimal transport plans for costs that are strictly concave and increasing functions of the Euclidean distance. Since the main difficulty for proving the existence of an optimal map comes from the possible singularity of the cost at $0$, everything is quite easy if the supports of the two measures are disjoint; Gangbo and McCann proved the result under the assumption $\mu(\mathrm{supp}(\nu))=0$; in this paper we replace this assumption with the fact that the two measures are singular to each other. In this case it is possible to prove the existence of an optimal transport map, provided the starting measure $\mu$ does not give mass to small sets (i.e. $(d\!-\!1)$-rectifiable sets). When the measures are not singular, the optimal transport plan decomposes into two parts, one concentrated on the diagonal and the other being a transport map between mutually singular measures.
Abstract:
We consider the homogeneous wave equation on a bounded open connected subset $\Omega$ of $\mathbb{R}^n$. Some initial data being specified, we consider the problem of determining a measurable subset $\omega$ of $\Omega$ maximizing the $L^2$-norm of the restriction of the corresponding solution to $\omega$ over a time interval $[0,T]$, over all possible subsets of $\Omega$ having a certain prescribed measure. We prove that this problem always has at least one solution and that, if the initial data satisfy some analyticity assumptions, then the optimal set is unique and moreover has a finite number of connected components. In contrast, we construct smooth but not analytic initial conditions for which the optimal set is of Cantor type and in particular has an infinite number of connected components.
Abstract:
We show that the quotient of a harmonic function and a positive harmonic function, both vanishing on the boundary of a $C^{k,\alpha}$ domain is of class $C^{k,\alpha}$ up to the boundary.
Abstract:
The Hessian Sobolev inequality of X.-J. Wang, and the Hessian Poincaré inequalities of Trudinger and Wang are fundamental to differential and conformal geometry, and geometric PDE. These remarkable inequalities were originally established via gradient flow methods. In this paper, direct elliptic proofs are given, and extensions to trace inequalities with general measures in place of Lebesgue measure are obtained. The new techniques rely on global estimates of solutions to Hessian equations in terms of Wolff's potentials, and duality arguments making use of a non-commutative inner product on the cone of $k$-convex functions.
I am solving:
$(\sigma_A^2 - 2\rho\sigma_A\sigma_B +\sigma_B^2)x^2 +2(\rho\sigma_A\sigma_B - \sigma_B^2)x +\sigma_B^2 = 0$
I need to show that a real $x$ exists if and only if $\rho = \pm 1$
Using the quadratic formula I could only get as far as
$ x = \frac{-2(\rho\sigma_A\sigma_B - \sigma_B^2) \pm \sqrt{4\sigma_A^2\sigma_B^2(\rho^2-1)}}{2( \sigma_A^2 +\sigma_B^2 - 2\rho\sigma_A\sigma_B)}$
I have a more simplified solution, which is
$x=[1- \frac{\sigma_A}{\sigma_B}(\rho\pm\sqrt{\rho^2-1})]^{-1}$
but I cannot see how to get there. |
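One possible route, sketching the algebra that seems to be missing: the discriminant $4\sigma_A^2\sigma_B^2(\rho^2-1)$ is nonnegative only when $\rho^2 \ge 1$, and since a correlation coefficient satisfies $|\rho| \le 1$, real roots force $\rho = \pm 1$. The square root then vanishes, and (using $\rho^2=1$ to write $\sigma_A^2+\sigma_B^2-2\rho\sigma_A\sigma_B=(\sigma_B-\rho\sigma_A)^2$)

$$x = \frac{\sigma_B^2 - \rho\sigma_A\sigma_B}{\sigma_A^2 + \sigma_B^2 - 2\rho\sigma_A\sigma_B} = \frac{\sigma_B(\sigma_B - \rho\sigma_A)}{(\sigma_B - \rho\sigma_A)^2} = \frac{1}{1 - \rho\sigma_A/\sigma_B} = \left[1 - \frac{\sigma_A}{\sigma_B}\left(\rho \pm \sqrt{\rho^2 - 1}\right)\right]^{-1},$$

where the last equality simply re-inserts the vanishing square-root term to match the quoted form.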
In statistics and probability theory, the median is the numerical value separating the higher half of a data sample, a population, or a probability distribution, from the lower half. The median of a finite list of numbers can be found by arranging all the observations from lowest value to highest value and picking the middle one (e.g., the median of {3, 3, 5, 9, 11} is 5). If there is an even number of observations, then there is no single middle value; the median is then usually defined to be the mean of the two middle values [1] [2] (the median of {3, 5, 7, 9} is (5 + 7) / 2 = 6), which corresponds to interpreting the median as the fully trimmed mid-range. The median is of central importance in robust statistics, as it is the most resistant statistic, having a breakdown point of 50%: so long as no more than half the data is contaminated, the median will not give an arbitrarily large result. A median is only defined on ordered one-dimensional data, and is independent of any distance metric. A geometric median, on the other hand, is defined in any number of dimensions.
In a sample of data, or a finite population, there may be no member of the sample whose value is identical to the median (in the case of an even sample size); if there is such a member, there may be more than one so that the median may not uniquely identify a sample member. Nonetheless, the value of the median is uniquely determined with the usual definition. A related concept, in which the outcome is forced to correspond to a member of the sample, is the medoid. At most, half the population have values strictly less than the median, and, at most, half have values strictly greater than the median. If each group contains less than half the population, then some of the population is exactly equal to the median. For example, if a < b < c, then the median of the list {a, b, c} is b, and, if a < b < c < d, then the median of the list {a, b, c, d} is the mean of b and c; i.e., it is (b + c)/2.
The median can be used as a measure of location when a distribution is skewed, when end-values are not known, or when one requires reduced importance to be attached to outliers, e.g., because they may be measurement errors.
In terms of notation, some authors represent the median of a variable $x$ either as $\tilde{x}$ or as $\mu_{1/2}$,[1] sometimes also $M$.[3] There is no widely accepted standard notation for the median,[4] so the use of these or other symbols for the median needs to be explicitly defined when they are introduced.
The median is the 2nd quartile, 5th decile, and 50th percentile.
The median is one of a number of ways of summarising the typical values associated with members of a statistical population; thus, it is a possible location parameter. Since the median is the same as the second quartile, its calculation is illustrated in the article on quartiles.
When the median is used as a location parameter in descriptive statistics, there are several choices for a measure of variability: the range, the interquartile range, the mean absolute deviation, and the median absolute deviation.
For practical purposes, different measures of location and dispersion are often compared on the basis of how well the corresponding population values can be estimated from a sample of data. The median, estimated using the sample median, has good properties in this regard. While it is not usually optimal if a given population distribution is assumed, its properties are always reasonably good. For example, a comparison of the efficiency of candidate estimators shows that the sample mean is more statistically efficient than the sample median when data are uncontaminated by data from heavy-tailed distributions or from mixtures of distributions, but less efficient otherwise, and that the efficiency of the sample median is higher than that for a wide range of distributions. More specifically, the median has a 64% efficiency compared to the minimum-variance mean (for large normal samples), which is to say the variance of the median will be ~50% greater than the variance of the mean—see Efficiency (statistics)#Asymptotic efficiency and references therein.
For any probability distribution on the real line R with cumulative distribution function $F$, regardless of whether it is any kind of continuous probability distribution, in particular an absolutely continuous distribution (which has a probability density function), or a discrete probability distribution, a median is by definition any real number $m$ that satisfies the inequalities $\operatorname{P}(X \le m) \ge \tfrac{1}{2}$ and $\operatorname{P}(X \ge m) \ge \tfrac{1}{2}$ or, equivalently, the inequalities $\int_{(-\infty,m]} dF(x) \ge \tfrac{1}{2}$ and $\int_{[m,\infty)} dF(x) \ge \tfrac{1}{2}$.
The efficiency of the sample median, measured as the ratio of the variance of the mean to the variance of the median, depends on the sample size and on the underlying population distribution. For a sample of size N = 2n + 1 from the normal distribution, the ratio is[20]
For large samples (as $n$ tends to infinity) this ratio tends to $\frac{2}{\pi}$.
For univariate distributions that are symmetric about one median, the Hodges–Lehmann estimator is a robust and highly efficient estimator of the population median.[21]
If data are represented by a statistical model specifying a particular family of probability distributions, then estimates of the median can be obtained by fitting that family of probability distributions to the data and calculating the theoretical median of the fitted distribution. Pareto interpolation is an application of this when the population is assumed to have a Pareto distribution.
The coefficient of dispersion (CD) is defined as the ratio of the average absolute deviation from the median to the median of the data.[22] It is a statistical measure used by the states of Iowa, New York and South Dakota in estimating dues taxes.[23][24][25] In symbols,
$$\mathrm{CD} = \frac{1}{n}\sum \frac{|x-m|}{m},$$
where $n$ is the sample size, $m$ is the sample median and $x$ is a variate. The sum is taken over the whole sample.
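A minimal sketch of this statistic in Python (the function name is mine):

import statistics

def coefficient_of_dispersion(xs):
    # mean absolute deviation from the median, divided by the median
    m = statistics.median(xs)
    return sum(abs(x - m) for x in xs) / (len(xs) * m)

coefficient_of_dispersion([3, 3, 5, 9, 11])  # median 5 -> (2+2+0+4+6)/(5*5) = 0.56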
Confidence intervals for a two-sample test in which the sample sizes are large have been derived by Bonett and Seier.[22] This test assumes that both samples have the same median but differ in the dispersion around it. The confidence interval (CI) is bounded inferiorly by an expression in which $t_j$ is the mean absolute deviation of the $j$th sample, $\operatorname{var}()$ is the variance and $z_\alpha$ is the value from the normal distribution for the chosen value of $\alpha$: for $\alpha = 0.05$, $z_\alpha = 1.96$. The formulae used in the derivation of these confidence intervals involve $r$, the Pearson correlation coefficient between the squared deviation scores, constants $a$ and $b$ equal to 1 and 2, a variate $x$, and the standard deviation $s$ of the sample.
Previously, this article discussed the concept of a univariate median for a one-dimensional object (population, sample). When the dimension is two or higher, there are multiple concepts that extend the definition of the univariate median; each such multivariate median agrees with the univariate median when the dimension is exactly one. In higher dimensions, however, there are several multivariate medians.[21]
The marginal median is defined for vectors defined with respect to a fixed set of coordinates. A marginal median is defined to be the vector whose components are univariate medians. The marginal median is easy to compute, and its properties were studied by Puri and Sen.[21][26]
In a normed vector space of dimension two or greater, the "spatial median" minimizes the expected distance $\operatorname{E}\lVert X-a\rVert$, where $X$ and $a$ are vectors, if this expectation has a finite minimum; another definition is better suited for general probability distributions.[9][21] The spatial median is unique when the data-set's dimension is two or more.[9][10][21] It is a robust and highly efficient estimator of a central tendency of a population.[27][21]
The geometric median is the corresponding estimator based on the sample statistics of a finite set of points, rather than the population statistics. It is the point minimizing the arithmetic average of Euclidean distances to the given sample points, instead of the expectation. Note that the arithmetic average and the sum are interchangeable, since they differ by a fixed positive factor that does not alter the location of the minimum.
An alternative generalization of the spatial median in higher dimensions that does not relate to a particular metric is the centerpoint.
For univariate distributions that are symmetric about one median, the Hodges–Lehmann estimator is a robust and highly efficient estimator of the population median; for non-symmetric distributions, the Hodges–Lehmann estimator is a robust and highly efficient estimator of the population pseudo-median, which is the median of a symmetrized distribution and which is close to the population median. The Hodges–Lehmann estimator has been generalized to multivariate distributions.[28]
The Theil–Sen estimator is a method for robust linear regression based on finding medians of slopes.
In the context of image processing of monochrome raster images there is a type of noise, known as the salt and pepper noise, when each pixel independently becomes black (with some small probability) or white (with some small probability), and is unchanged otherwise (with the probability close to 1). An image constructed of median values of neighborhoods (like 3×3 square) can effectively reduce noise in this case.
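A direct, unoptimized sketch of that neighborhood operation in Python, assuming a 2-D grayscale numpy array (border pixels are left untouched for simplicity):

import numpy as np

def median_filter3(img):
    # replace each interior pixel by the median of its 3x3 neighborhood
    out = img.copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.median(img[i-1:i+2, j-1:j+2])
    return out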
In cluster analysis, the k-medians clustering algorithm provides a way of defining clusters, in which the criterion of maximising the distance between cluster-means that is used in k-means clustering, is replaced by maximising the distance between cluster-medians.
This is a method of robust regression. The idea dates back to Wald in 1940 who suggested dividing a set of bivariate data into two halves depending on the value of the independent parameter x: a left half with values less than the median and a right half with values greater than the median.[29] He suggested taking the means of the dependent y and independent x variables of the left and the right halves and estimating the slope of the line joining these two points. The line could then be adjusted to fit the majority of the points in the data set.
Nair and Shrivastava in 1942 suggested a similar idea but instead advocated dividing the sample into three equal parts before calculating the means of the subsamples.[30] Brown and Mood in 1951 proposed the idea of using the medians of two subsamples rather than the means.[31] Tukey combined these ideas and recommended dividing the sample into three equal-size subsamples and estimating the line based on the medians of the subsamples.[32]
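A sketch of Wald's two-group estimator as just described (the data-handling details, such as ignoring points at the median itself, are my own choices):

import statistics

def wald_slope(xs, ys):
    mx = statistics.median(xs)
    left = [(x, y) for x, y in zip(xs, ys) if x < mx]
    right = [(x, y) for x, y in zip(xs, ys) if x > mx]
    x_l = statistics.mean(p[0] for p in left)
    y_l = statistics.mean(p[1] for p in left)
    x_r = statistics.mean(p[0] for p in right)
    y_r = statistics.mean(p[1] for p in right)
    # slope of the line joining the two group-mean points
    return (y_r - y_l) / (x_r - x_l)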
Any mean-unbiased estimator minimizes the risk (expected loss) with respect to the squared-error loss function, as observed by Gauss. A median-unbiased estimator minimizes the risk with respect to the absolute-deviation loss function, as observed by Laplace. Other loss functions are used in statistical theory, particularly in robust statistics.
The theory of median-unbiased estimators was revived by George W. Brown in 1947:[33]
An estimate of a one-dimensional parameter θ will be said to be median-unbiased if, for fixed θ, the median of the distribution of the estimate is at the value θ; i.e., the estimate underestimates just as often as it overestimates. This requirement seems for most purposes to accomplish as much as the mean-unbiased requirement and has the additional property that it is invariant under one-to-one transformation. —page 584
Further properties of median-unbiased estimators have been reported.[34][35][36][37] In particular, median-unbiased estimators exist in cases where mean-unbiased and maximum-likelihood estimators do not exist. Median-unbiased estimators are invariant under one-to-one transformations.
The idea of the median originated in Edward Wright's book on navigation (Certaine Errors in Navigation) in 1599 in a section concerning the determination of location with a compass. Wright felt that this value was the most likely to be the correct value in a series of observations.
In 1757, Roger Joseph Boscovich developed a regression method based on the L1 norm and therefore implicitly on the median.[38]
In 1774, Laplace suggested the median be used as the standard estimator of the value of a posterior pdf. The specific criterion was to minimize the expected magnitude of the error, $|\alpha - \alpha^*|$, where $\alpha^*$ is the estimate and $\alpha$ is the true value. Laplace's criterion was generally rejected for 150 years in favor of the least squares method of Gauss and Legendre, which minimizes $\langle (\alpha - \alpha^*)^2 \rangle$ to obtain the mean.[39] The distribution of both the sample mean and the sample median were determined by Laplace in the early 1800s.[12][40]
Antoine Augustin Cournot in 1843 was the first to use the term median (valeur médiane) for the value that divides a probability distribution into two equal halves. Gustav Theodor Fechner used the median (Centralwerth) in sociological and psychological phenomena; it had earlier been used only in astronomy and related fields. Although Laplace had used the median previously, Fechner popularized it in the formal analysis of data.[41]
Francis Galton used the English term median in 1881,[42] having earlier used the terms middle-most value in 1869 and the medium in 1880.
This article incorporates material from Median of a distribution on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Ukr. Mat. Zh. - 2005. - 57, № 5. - pp. 3-11
Ukr. Mat. Zh. - 2005. - 57, № 5. - pp. 582–600
We give a brief survey of results on functional analysis obtained at the Institute of Mathematics of the Ukrainian National Academy of Sciences from the day of its foundation.
Ukr. Mat. Zh. - 2005. - 57, № 5. - pp. 601–613
We study representations of solutions of the Dirac equation, properties of spectral data, and inverse problems for the Dirac operator on a finite interval with discontinuity conditions inside the interval.
Ukr. Mat. Zh. - 2005. - 57, № 5. - pp. 614–621
By using two operators representable by Jacobi matrices, we introduce a family of $q$-orthogonal polynomials, which turn out to be dual with respect to alternative $q$-Charlier polynomials. A discrete orthogonality relation and the completeness property for these polynomials are established.
Ukr. Mat. Zh. - 2005νmber=4. - 57, № 5. - pp. 622–632
Let $A$ be an unbounded self-adjoint operator in a separable Hilbert space $H_0$ with rigging $H_- \supset H_0 \supset H_+$ such that $D(A) = H_+$ in the graph norm (here, $D(A)$ is the domain of definition of $A$). Assume that $H_+$ is decomposed into the orthogonal sum $H_+ = M_+ \oplus N_+$ so that the subspace $M_+$ is dense in $H_0$. We construct and study a singularly perturbed operator $\check{A}$ associated with a new rigging $H_- \supset H_0 \supset \check{H}_+$, where $\check{H}_+ = M_+ = D(\check{A})$, and establish the relationship between the operators $A$ and $\check{A}$.
Ukr. Mat. Zh. - 2005. - 57, № 5. - pp. 633–643
For an arbitrary self-adjoint operator $B$ in a Hilbert space $\mathfrak{H}$, we present direct and inverse theorems establishing the relationship between the degree of smoothness of a vector $x \in \mathfrak{H}$ with respect to the operator $B$, the rate of convergence to zero of its best approximation by exponential-type entire vectors of the operator $B$, and the $k$-modulus of continuity of the vector $x$ with respect to the operator $B$. The results are used for finding a priori estimates for the Ritz approximate solutions of operator equations in a Hilbert space.
Ukr. Mat. Zh. - 2005. - 57, № 5. - pp. 644–653
We prove that a conditional expectation on a compact quantum group that satisfies certain conditions can be decomposed into a composition of two conditional expectations, one of which is associated with quantum double cosets and the other of which preserves the counit.
Ukr. Mat. Zh. - 2005. - 57, № 5. - pp. 654–658
We study the inverse spectral problem for the point spectrum of singularly perturbed self-adjoint operators.
Operators of Generalized Translation and Hypergroups Constructed from Self-Adjoint Differential Operators
Ukr. Mat. Zh. - 2005. - 57, № 5. - pp. 659–668
We construct new examples of operators of generalized translation and convolutions in eigenfunctions of certain self-adjoint differential operators.
Ukr. Mat. Zh. - 2005. - 57, № 5. - pp. 669–678
In earlier papers, the author studied some classes of equations with Carlitz derivatives for $\mathbb{F}_q$-linear functions, which are the natural function-field counterparts of linear ordinary differential equations. Here we consider equations containing self-compositions $u \circ u \circ \cdots \circ u$ of the unknown function. As an algebraic background, imbeddings of the composition ring of $\mathbb{F}_q$-linear holomorphic functions into skew fields are considered.
Ukr. Mat. Zh. - 2005. - 57, № 5. - pp. 679–688
For a singular perturbation $A = A_0 + \sum^n_{i, j=1}t_{ij} \langle \psi_j, \cdot \rangle \psi_i,\quad n \leq \infty$, of a positive self-adjoint operator $A_0$ with Lebesgue spectrum, the spectral analysis of the corresponding self-adjoint operator realizations $A_T$ is carried out, and the scattering matrix $\mathfrak{S}_{(A_T, A_0)}(\delta)$ is calculated in terms of the parameters $t_{ij}$, under some additional restrictions on the singular elements $\psi_{j}$ that make it possible to apply the Lax-Phillips approach of scattering theory.
Ukr. Mat. Zh. - 2005. - 57, № 5. - pp. 689–696
We study the theory of elliptic boundary-value problems in the refined two-sided scale of the Hörmander spaces $H^{s, \varphi}$, where $s \in \mathbb{R}$ and $\varphi$ is a functional parameter slowly varying at $+\infty$. In the case of the Sobolev spaces $H^{s}$, the function $\varphi(|\xi|) \equiv 1$. We establish that the operators considered possess the Fredholm property, and that the solutions are globally and locally regular.
Ukr. Mat. Zh. - 2005. - 57, № 5. - pp. 697–705
By using representations of general position and their properties, we describe the group $C^{*}$-algebras of the semidirect products $\mathbb{Z}^d \rtimes G_f$, where $G_f$ is a finite group, in terms of algebras of continuous matrix-functions defined on some compact set with boundary conditions.
We present examples of the $C^{*}$-algebras of affine Coxeter groups.
Ukr. Mat. Zh. - 2005. - 57, № 5. - pp. 706–720
We analyze correlations between different approaches to the definition of the Hausdorff dimension of singular probability measures on the basis of fractal analysis of essential supports of these measures. We introduce characteristic multifractal measures of the first and higher orders. Using these measures, we carry out the multifractal analysis of singular probability measures and prove theorems on the structural representation of these measures. |
Let’s take an example base which can be said to be fully dissociated in water, e.g. $\ce{NaOH}$. You take $40~\mathrm{g}$ of the pure compound and dissolve it in $0.5~\mathrm{l}$ of water. The standard way to calculate the concentration of sodium ion is:
$$c = \frac nV = \frac m{MV} = \frac{40~\mathrm{g}}{40~\mathrm{g\cdot mol^{-1}} \times 0.5~\mathrm{l}} = 2~\mathrm{\frac {mol}l}\tag{1}$$
This builds on the fact that the amount of sodium ions is identical to the amount of $\ce{NaOH}$ added. The same thing can be done for hydroxide ions; within experimental error the concentration of hydroxide ions is also $2~\mathrm{mol\cdot l^{-1}}$ as calculated in equation $(1)$.
Now say you add another $0.5~\mathrm{l}$ to the solution you generated above. Obviously, you did not add any additional sodium ions, so their amount will remain the same. The new concentration can be calculated as shown in $(2)$:
$$c' = \frac n{V'} = \frac m{MV'} = \frac{40~\mathrm{g}}{40~\mathrm{g\cdot mol^{-1}} \times 1.0~\mathrm{l}} = 1~\mathrm{\frac {mol}l}\tag{2}$$
Thus, in the resulting solution, you are left with half the original concentration of sodium ions; similarly for hydroxide ions, to a first approximation. The concentration decreased by adding water.
Why did I always include a caveat with the hydroxide ions? Well, these are also determined by the autoionisation of water, defined by its equilibrium constant $K_\mathrm{w}$ of equation $(3)$ as shown in $(4)$.
$$\begin{gather}\ce{H2O <--> H+ + OH-}\tag{3}\\[0.6em]K_\mathrm{w} = [\ce{H+}][\ce{OH-}] = 10^{-14}~\mathrm{\frac{mol^2}{l^2}}\tag{4}\end{gather}$$
Since the concentration of added hydroxide is much larger than that present in pure water ($1 \gg 10^{-7}$), we can ignore the additional concentration of hydroxide introduced by the equilibrium. However, we can calculate a new concentration of $\ce{H+}$; see equation $(5)$:
$$[\ce{H+}] = \frac{K_\mathrm{w}}{[\ce{OH-}]} = \frac{10^{-14}~\mathrm{mol^2 \cdot l^{-2}}}{2~\mathrm{mol \cdot l^{-1}}} = 5 \cdot 10^{-15}~\mathrm{\frac{mol}l}\tag{5}$$
Thus, after dilution, the new concentration of protons is:
$$[\ce{H+}]' = \frac{K_\mathrm{w}}{[\ce{OH-}]'} = \frac{10^{-14}~\mathrm{mol^2 \cdot l^{-2}}}{1~\mathrm{mol \cdot l^{-1}}} = 1 \cdot 10^{-14}~\mathrm{\frac{mol}l}\tag{6}$$
Note how this concentration increased; this is reflected in the change of the solution’s $\mathrm{pH}$ value from $14.3$ to $14$. |
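To make the arithmetic above easy to replay, here is a minimal sketch in Python (assuming ideal behaviour at 25 °C and the rounded molar mass of 40 g/mol used in the text):

```python
import math

M_NAOH = 40.0  # g/mol, the rounded molar mass used in the text
KW = 1e-14     # ionic product of water at 25 °C, mol^2/L^2

def strong_base_ph(mass_g, volume_l):
    """[OH-] from eq. (1)/(2), then [H+] from Kw = [H+][OH-] (eq. (4))."""
    c_oh = mass_g / (M_NAOH * volume_l)
    c_h = KW / c_oh
    return c_oh, c_h, -math.log10(c_h)

for v in (0.5, 1.0):                       # before and after dilution
    c_oh, c_h, ph = strong_base_ph(40.0, v)
    print(f"V={v} L: [OH-]={c_oh} M, [H+]={c_h:.0e} M, pH={ph:.2f}")
# V=0.5 L -> pH 14.30;  V=1.0 L -> pH 14.00, matching the text.
```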
What are the minimum depth circuits possible for addition and multiplication of two n-bit numbers using just AND and XOR gates? I read somewhere that we can achieve constant depth for addition if we have an OR gate. Can I achieve that using XOR gates?
You can simulate an OR gate using a constant number of AND and XOR gates:
$$x \lor y = ((x \oplus 1) \land (y \oplus 1)) \oplus 1.$$
Consequently, anything you can do in depth $d$ using AND and OR gates, can be done in depth $\le 3d$ using AND and XOR gates. |
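The identity is small enough to check exhaustively; a quick sketch in Python (bitwise `^` for XOR, `&` for AND):

```python
# Exhaustive truth-table check of x OR y == ((x XOR 1) AND (y XOR 1)) XOR 1.
for x in (0, 1):
    for y in (0, 1):
        assert ((x ^ 1) & (y ^ 1)) ^ 1 == (x | y)
print("OR is simulated by a constant number of AND/XOR gates on all inputs")
```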
Well, now I have an answer. The sketch of the proof is as follows.
Suppose we have a confining theory with chiral fermions and gauge group $G_{\text{gauge}}$; it has a global symmetry $G$ which has no gauge anomalies and no mixed chiral $G\,G_{\text{gauge}}^{2}$ anomalies, but has a $G^{3}$ anomaly, i.e., the symmetric tensor
$$ d_{abc}^{G} \equiv \text{tr}[[t_{a}^{G}, t_{b}^{G}]_{+}t_{c}^{G}] - L\leftrightarrow R, $$
which arises in the triangle diagram, is nonzero. Finally, suppose that $G$ is not spontaneously broken.
By using the anomaly matching condition we obtain the following statement: the confined sector of the theory, which is represented by the set of bound states (which belong to representations of $SU_{L}(3)\times SU_{R}(3)\times U_{V}(1)$), has to reproduce $d_{abc}^{G}$. Now we have to determine which massless states are realized in the effective theory.
First, due to the anomalous equation $$ (k_{1}+k_{2})_{\mu}\Gamma^{\mu \nu \rho}_{abc}(k_{1}, k_{2}, -(k_{1}+k_{2})) = -\frac{d_{abc}}{\pi^{2}}\epsilon^{\nu \lambda \rho \beta}k_{1\lambda}k_{2\beta} $$
for the 3-point function $\Gamma$,
$$ \Gamma^{\mu \nu \rho}_{abc}(x, y, z) = \langle 0 |T\left(J^{\mu}_{a}(x)J^{\nu}_{b}(y)J^{\rho}_{c}(z) \right)| 0\rangle $$
(here $J_{\mu}^{a}(x)$ is the current of the symmetry $G$), we conclude that $\Gamma$ has a pole structure at $k_{1} = k_{2} = 0$. Therefore, in the effective field theory only massless bound states contribute to the anomaly.
Second, in the absence of spontaneous symmetry breaking, only helicity-$1/2$ massless bound states may exist in the confined theory. Indeed, massless helicity-$0$ bound states may exist only when the global symmetry is spontaneously broken; the existence of massless bound states with helicity $> 1$ is forbidden in a Lorentz-invariant theory by the Weinberg-Witten theorem; and a massless helicity-$1$ bound state cannot exist (again, we need a Lorentz-invariant theory) because of the transformation properties of the chiral current $j_{\mu}^{a}$ under the little group of a lightlike 4-vector (namely, the Euclidean group).
Third (we restrict ourselves to the group $G\sim SU_{L}(n)\times SU_{R}(n)\times U_{V}(1)$ and $G_{\text{gauge}} \sim SU(N)$), due to confinement the only possible massless fermionic bound states are composed of $m_{L}, m_{R}$ particles and $\bar{m}_{L},\bar{m}_{R}$ antiparticles whose numbers satisfy the condition $$ m_{L}+m_{R}-\bar{m}_{L}-\bar{m}_{R} = Nk, \quad k \in \mathbb{Z}. $$
Precisely, such a state belongs to a representation $(r,s)$, where $r$ is determined by the product of $m_{L}$ fundamental and $\bar{m}_{L}$ antifundamental representations of $SU(3)$, $s$ by the product of $\bar{m}_{R}$ and $m_{R}$ representations, and the $U_{V}(1)$ charge is $Nk$. One may check that, for the case of QCD ($n = N = 3$), these representations contain the hypothetical massless baryons.
Now we can write the anomaly matching conditions for QCD: since
$$ d_{abc}(SU_{L/R}^{3}(3)) = 3\,\text{tr}[[t_{a}, t_{b}]_{+}t_{c}], \quad d_{abc}(SU_{L/R}^{2}(3)U_{V}(1)) = 3\,\text{tr}[[t_{a}, t_{b}]_{+}] = \frac{3}{2}\delta_{ab}, $$
we have that, for the representations of massless fermionic bound states with generators $T_{a}$, integer multiplicities $l(r, s, k)$, and $d_{s}$ the dimension of the representation $s$,
$$ \sum_{r, s, k>0}l(r,s,k)\,d_{s}\,\text{tr}^{r}[[T_{a}, T_{b}]_{+}T_{c}] = 3\,\text{tr}[[t^{G}_{a}, t^{G}_{b}]_{+}t^{G}_{c}], $$
$$ \sum_{r, s, k>0}l(r,s,k)\,d_{s}\,\text{tr}^{r}[[T_{a}, T_{b}]_{+}] = \frac{1}{2}. \qquad (1) $$
It can be shown by explicit calculation (details are given in 't Hooft's paper "Naturalness, chiral symmetry and spontaneous chiral symmetry breaking") that no integers $l$ satisfy $(1)$. So we come to the statement that $G \sim SU_{L}(3)\times SU_{R}(3)\times U_{V}(1)$ must be spontaneously broken. This argument does not tell us which subgroup is broken and which is not (we only know that $U_{V}(1)$ is unbroken, by the Vafa-Witten theorem). However, we have now established that confinement in QCD formally implies the necessity of chiral symmetry breaking. Finally, we know that for a wide range of chiral effective field theories we can relate the anomaly sector of the underlying theory to the Wess-Zumino term, given in terms of the Goldstone bosons, through topological reasons.
Hey all,
This might seem like a stupid question, and this might not be the correct forum, but hopefully someone can clarify it really easily.
I often have seen two definitions of an inner product on a vector space. Firstly, it can be defined as a bilinear map on an [itex] \mathbb F-[/itex]vector space V as [itex] \langle \cdot, \cdot \rangle : V \times V \to \mathbb F [/itex] satisfying the usual inner product conditions. An example that comes to mind is the Riemannian metric, which is a 2-tensor and so acts on two copies of a tangent space. Alternatively, I've seen it defined as [itex] \langle \cdot, \cdot \rangle : V^* \times V \to \mathbb F [/itex] satisfying the usual inner product conditions. An example that comes to mind here is the formalism used in the Riesz Representation theorem.
The only place I've really seen the first definition is in the case of Riemannian metrics, hence the motivation for posting this discussion in this forum.
Now I know that for finite dimensional vector spaces that V and [itex] V^* [/itex] are isomorphic: is this the reason for the differing notations? Or is it perhaps that in the second case when the domain is [itex] V^*\times V[/itex] we've somehow canonically identified a vector [itex] v \in V [/itex] with its induced linear functional [itex] v \mapsto \langle v, \cdot \rangle [/itex]?
Again, this may seem really simple but I'd appreciate any response.
Let $f: [0, \infty) \rightarrow \mathbb{R}$. Define the value of the left-hand limit of $f$ at $t>0$ to be $f(t^-) = \lim_{x \rightarrow t^-} f(x)$. Define the "left-hand limit function" of $f$ as $f^-: (0, \infty) \rightarrow \mathbb{R}, f^-(t) = f(t^-) = \lim_{x \rightarrow t^-} f(x)$. Prove that $f^-$ is left continuous at each $t>0$.
Let $t>0$. To prove $f^-$ is left continuous at $t$, I need to show that $\lim_{s \rightarrow t^-}f^-(s) = f^-(t) = f(t^-)$.
My question is, how can I use the sequential definition (not the $\epsilon-\delta$ definition) of left continuity to prove this question? In other words, let $(s_n)$ be a sequence contained in $(0, \infty)$ such that $s_n<t$ for all $n$ and $s_n \rightarrow t$. I need to show that $\lim_{n \rightarrow \infty} f^-(s_n) = f^-(t) = f(t^-)$. I have also been given a hint (but I have no idea where to incorporate it):
If $f(t^-)$ exists and is finite, then it is equivalent to: $$f(t^-) = \sup_{s<t} \inf \{f(v): s \le v < t\} = \inf_{s<t} \sup \{f(v) : s \le v < t\} $$
Any help would be greatly appreciated! |
Is $~\pi^2\approx g~$ a coincidence ?
Some have answered yes, others said no, and yet others considered both $(!)$ as perfectly viable options. Personally, I cannot help but chuckle, as this question reminds me of Newton's famous disc, which can be said to be both white and colored at the same time, depending on whether it is either rotating, or at rest. To add even more to the already mystifying fog of confusion, I shall hereby venture yet a fourth opinion:
We don’t know, and we never shall !
Granted, such a statement, when taken at face value, would undoubtedly appear as an impious affront to Hilbert's celebrated adage, wir müssen wissen, wir werden wissen; but before anyone accuses me of embracing either philosophical pessimism or epistemological agnosticism, let me assure you, dear reader, that such is simply not the case; rather, I am basing this short assertion purely on mathematical foundations. Basically, there are four main ways in which a measuring unit can be created that is both practical or anthropocentric, as well as universally meaningful, at the same time $($not to mention reproducible$)$:
the length of the pendulum with a half-period of exactly one second, since the length of a pendulum with a half-period of one minute would be exceedingly long;
the ten-millionth, the hundred-millionth, or even the billionth part of either a terrestrial meridian, or the Earth's equator, since the other two adjacent options, i.e., the millionth and the ten-billionth part, would be either way too big, or way too small;
the distance traveled by light in the hundred-millionth, the billionth, or even the ten-billionth part of a second; again, the other two adjacent options, i.e., the ten-millionth and the hundred-billionth part, would be either way too long, or way too short;
the length of a so-called third $($i.e., the sixtieth part of a second$)$ of the Earth's meridian or equator.
Of course, someone might, at this point, easily be tempted to say that I have committed a hideous and unpardonable abuse by painstakingly enumerating all those powers of ten listed above, since the metric system, as we have it today, is coincidentally decimal, but such would not necessarily have been the case, given an alternate course of human history $($thus, for instance, if one were to take the distance traveled by light in $10^{-9}$ seconds, such a length could easily have been interpreted as representing a “new foot”, to be further subdivided into $12$ “new inches”, ultimately yielding a “new yard” of $0.9$ metres$)$.
Now, the shocking surprise, which astounded many at the time of its first discovery, and still does so even today, is as follows: the ratio of the first three units is, almost exactly, $1:4:3$, the sheer “niceness” of the numbers involved being utterly uncanny, to say the very least. $($Spooky, thought-provoking, challenging, bewildering, and mesmerizing also come to mind.$)$ Adding insult to injury, as the proverb goes, we also notice that twice the value of the latter unit, representing the $3~600^\text{th}$ part of a nautical mile, equals $103$ centimetres, with an error of less than $\pm1$ millimetre; speaking of which, the thousandth part of a nautical mile is also conspicuously close to the length of a fathom, measuring the distance between the fingertips of a man's outstretched arms.
Furthermore, even if one were quite purposefully to go out of one's way, and intentionally try to avoid the two coincidences above, by $($repeatedly$)$ dividing, based purely on number-theoretical principles, the aforementioned non-metric unit into, say, sevenths $($since the powers of all other previous primes already appear abundantly in its sexagesimal creation$)$, one would arrive at the eerie conclusion that it adds up to $5.4$ metres, with an error of less than half a millimetre.
As an aside, as $($even further$)$ coincidence would have it, my own personal fathom is almost exactly $1.8$ metres, with an error of no more than a few millimetres, making the above length my own personal rod; indeed, I am a rather metric person, since even my own height towers at just slightly over $1.7$ metres, and does not exceed $171\rm~cm$ — but I digress $\ldots$
Some of the above relations are $($easily$)$ explained $($away$)$ by means of basic arithmetic, such as, for instance, the fact that $3\cdot7^3\simeq2^{10}\simeq10^3$, or $2^7\simeq5^3\simeq11^2$, and $2^8\simeq3^5$, the latter two “culprits” being responsible for the beautiful approximation $3000_{12}\simeq5000_{10}$, or, equivalently, $12^4\simeq2\cdot10^4$, which relate duodecimal thousands and myriads to their decimal counterparts; others, however, are $($much$)$ harder to dispel. Nevertheless, this is precisely what we shall endeavour to achieve!
Let us therefore fearlessly approach the most awe-inspiring of all the above-listed coincidences, and merrily $($and mercilessly$)$ debunk the life out of it $-$ in the name of science! :-$)$
Now, the way I see it, if the ratio in question were truly $3:4$, then dividing the distance traveled by light in a day's time $($since this is the smallest naturally occurring time unit which is also easily observable by man$)$ by the length of a full terrestrial meridian should yield a result of exactly $648~000.~$ However, by employing the most accurate measurements known to date, namely that of $$c=299~792~458~\rm\dfrac ms~,$$ and a quarter of a terrestrial meridian being $\ell\simeq10~001~965~.~7293\rm~m$, we ultimately arrive at the uninspiring and dull figure of $~\dfrac{24\cdot60^2\cdot c}{4\,\ell}~\simeq~647~424~\dfrac49,~$ which falls short of the expected value by roughly $~575~\dfrac59$.
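For the skeptical reader, the arithmetic is easy to replay; a small sketch in Python using exact rational arithmetic (the quarter-meridian value is the one quoted above):

```python
from fractions import Fraction

c = 299_792_458                                    # m/s, exact by definition
quarter_meridian = Fraction(100019657293, 10000)   # ~ 10 001 965.7293 m
light_day = Fraction(24 * 60**2 * c)               # metres traveled in a day

ratio = light_day / (4 * quarter_meridian)         # light-day / full meridian
print(float(ratio))             # ~ 647424.44, i.e. roughly 647424 + 4/9
print(float(648_000 - ratio))   # shortfall ~ 575.56, roughly 575 + 5/9
```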
In other words, by enhancing the resolution of our lengths and ratios, the ghosts of modern superstitions are forever shattered in the cold light of day by the power of reason, and our minds can finally rest assured that the whole thing was nothing more than a tempest in a teapot, or much ado about nothing, as Shakespeare so wonderfully put it all those centuries ago! Now all that is left to do is praying no one notices that the previous ratio can also be expressed as $27~27\rm~BB$ in base $12$, with an error of less than one and a half units. :-$)$
On a more serious note, it all boils down to divisors $($usually powers of $2,~3,$ and $5)$ and numeration systems. $-~$ Doesn't it?$\ldots$ In the words of Thomas More, I trust I make myself obscure. :-$)$
Proprieties of FBK UFSDs after neutron and proton irradiation up to $6*10^{15}$ neq/cm$^2$ / Mazza, S.M. (UC, Santa Cruz, Inst. Part. Phys.) ; Estrada, E. (UC, Santa Cruz, Inst. Part. Phys.) ; Galloway, Z. (UC, Santa Cruz, Inst. Part. Phys.) ; Gee, C. (UC, Santa Cruz, Inst. Part. Phys.) ; Goto, A. (UC, Santa Cruz, Inst. Part. Phys.) ; Luce, Z. (UC, Santa Cruz, Inst. Part. Phys.) ; McKinney-Martinez, F. (UC, Santa Cruz, Inst. Part. Phys.) ; Rodriguez, R. (UC, Santa Cruz, Inst. Part. Phys.) ; Sadrozinski, H.F.-W. (UC, Santa Cruz, Inst. Part. Phys.) ; Seiden, A. (UC, Santa Cruz, Inst. Part. Phys.) et al. The properties of 60-μm thick Ultra-Fast Silicon Detectors (UFSD) manufactured by Fondazione Bruno Kessler (FBK), Trento (Italy) were tested before and after irradiation with minimum ionizing particles (MIPs) from a $^{90}$Sr β-source. [...] arXiv:1804.05449. - 13 p.
Charge-collection efficiency of heavily irradiated silicon diodes operated with an increased free-carrier concentration and under forward bias / Mandić, I (Ljubljana U. ; Stefan Inst., Ljubljana) ; Cindro, V (Ljubljana U. ; Stefan Inst., Ljubljana) ; Kramberger, G (Ljubljana U. ; Stefan Inst., Ljubljana) ; Mikuž, M (Ljubljana U. ; Stefan Inst., Ljubljana) ; Zavrtanik, M (Ljubljana U. ; Stefan Inst., Ljubljana) The charge-collection efficiency of Si pad diodes irradiated with neutrons up to $8 \times 10^{15} \ \rm{n} \ cm^{-2}$ was measured using a $^{90}$Sr source at temperatures from -180 to -30°C. The measurements were made with diodes under forward and reverse bias. [...] 2004 - 12 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 533 (2004) 442-453
Effect of electron injection on defect reactions in irradiated silicon containing boron, carbon, and oxygen / Makarenko, L F (Belarus State U.) ; Lastovskii, S B (Minsk, Inst. Phys.) ; Yakushevich, H S (Minsk, Inst. Phys.) ; Moll, M (CERN) ; Pintilie, I (Bucharest, Nat. Inst. Mat. Sci.) Comparative studies employing Deep Level Transient Spectroscopy and C-V measurements have been performed on recombination-enhanced reactions between defects of interstitial type in boron doped silicon diodes irradiated with alpha-particles. It has been shown that self-interstitial related defects which are immobile even at room temperatures can be activated by very low forward currents at liquid nitrogen temperatures. [...] 2018 - 7 p. - Published in : J. Appl. Phys. 123 (2018) 161576
Characterization of magnetic Czochralski silicon radiation detectors / Pellegrini, G (Barcelona, Inst. Microelectron.) ; Rafí, J M (Barcelona, Inst. Microelectron.) ; Ullán, M (Barcelona, Inst. Microelectron.) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Fleta, C (Barcelona, Inst. Microelectron.) ; Campabadal, F (Barcelona, Inst. Microelectron.) Silicon wafers grown by the Magnetic Czochralski (MCZ) method have been processed in form of pad diodes at Instituto de Microelectrònica de Barcelona (IMB-CNM) facilities. The n-type MCZ wafers were manufactured by Okmetic OYJ and they have a nominal resistivity of $1 \rm{k} \Omega cm$. [...] 2005 - 9 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 548 (2005) 355-363
Silicon detectors: From radiation hard devices operating beyond LHC conditions to characterization of primary fourfold coordinated vacancy defects / Lazanu, I (Bucharest U.) ; Lazanu, S (Bucharest, Nat. Inst. Mat. Sci.) The physics potential at future hadron colliders as LHC and its upgrades in energy and luminosity, Super-LHC and Very-LHC respectively, as well as the requirements for detectors in the conditions of possible scenarios for radiation environments are discussed in this contribution. Silicon detectors will be used extensively in experiments at these new facilities where they will be exposed to high fluences of fast hadrons. The principal obstacle to long-time operation arises from bulk displacement damage in silicon, which acts as an irreversible process in the material and leads to an increase of the leakage current of the detector, a decrease of the Signal/Noise ratio, and an increase of the effective carrier concentration. [...] 2005 - 9 p. - Published in : Rom. Rep. Phys.: 57 (2005) , no. 3, pp. 342-348 External link: RORPE
Numerical simulation of radiation damage effects in p-type and n-type FZ silicon detectors / Petasecca, M (Perugia U. ; INFN, Perugia) ; Moscatelli, F (Perugia U. ; INFN, Perugia ; IMM, Bologna) ; Passeri, D (Perugia U. ; INFN, Perugia) ; Pignatel, G U (Perugia U. ; INFN, Perugia) In the framework of the CERN-RD50 Collaboration, the adoption of p-type substrates has been proposed as a suitable means to improve the radiation hardness of silicon detectors up to fluences of $1 \times 10^{16} \rm{n}/cm^2$. In this work two numerical simulation models will be presented for p-type and n-type silicon detectors, respectively. [...] 2006 - 6 p. - Published in : IEEE Trans. Nucl. Sci. 53 (2006) 2971-2976
Technology development of p-type microstrip detectors with radiation hard p-spray isolation / Pellegrini, G (Barcelona, Inst. Microelectron.) ; Fleta, C (Barcelona, Inst. Microelectron.) ; Campabadal, F (Barcelona, Inst. Microelectron.) ; Díez, S (Barcelona, Inst. Microelectron.) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Rafí, J M (Barcelona, Inst. Microelectron.) ; Ullán, M (Barcelona, Inst. Microelectron.) A technology for the fabrication of p-type microstrip silicon radiation detectors using p-spray implant isolation has been developed at CNM-IMB. The p-spray isolation has been optimized in order to withstand a gamma irradiation dose up to 50 Mrad (Si), which represents the ionization radiation dose expected in the middle region of the SCT-Atlas detector of the future Super-LHC during 10 years of operation. [...] 2006 - 6 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 566 (2006) 360-365
Defect characterization in silicon particle detectors irradiated with Li ions / Scaringella, M (INFN, Florence ; U. Florence (main)) ; Menichelli, D (INFN, Florence ; U. Florence (main)) ; Candelori, A (INFN, Padua ; Padua U.) ; Rando, R (INFN, Padua ; Padua U.) ; Bruzzi, M (INFN, Florence ; U. Florence (main)) High Energy Physics experiments at future very high luminosity colliders will require ultra radiation-hard silicon detectors that can withstand fast hadron fluences up to $10^{16}$ cm$^{-2}$. In order to test the detectors radiation hardness in this fluence range, long irradiation times are required at the currently available proton irradiation facilities. [...] 2006 - 6 p. - Published in : IEEE Trans. Nucl. Sci. 53 (2006) 589-594
Method to calculate the pH of a solution with a strong monoacid
$$\begin{array}{|c|c|c|c|c|}\hline&\text{Before}&&\text{After}&\\\hline&\ce{AH}&\ce{H2O}&\ce{A-}&\ce{H3O+}\\\hline\text{Initial}&n_0&\text{excess}&0&0\\\hline\text{Equilibrium}&\epsilon&\text{excess}&n_0&n_0\\\hline\end{array}$$
Then we have $\mathrm{pH}=-\log\left[\ce{H3O+}\right]$, with $\left[\ce{H3O+}\right] = n_0/V$. Now we have to verify whether this result is valid.
We considered that the $\ce{H3O+}$ in the solution has exactly the same concentration as the acid introduced. However, these ions can react with the $\ce{OH-}$ produced by the autoprotolysis of water; the equilibrium constant of that reaction is $K=1/K_\mathrm{e}=10^{14}$, so it is essentially total:
$$\ce{H3O^+ + HO^- -> 2H2O}$$
So the concentration of $\ce{H3O+}$ is approximately constant only if $\left[\ce{H3O+}\right]\gg\left[\ce{HO-}\right]$.
For example, if we demand $\frac{1}{10}\left[\ce{H3O+}\right]>\left[\ce{HO-}\right]$, then we can find the limit beyond which our reasoning fails:
$$\left[\ce{HO-}\right]\times \left[\ce{H3O+}\right] < \frac{1}{10} \times \left[\ce{H3O+}\right] \times \left[\ce{H3O+}\right] \iff K_\text{e}<\frac{1}{10}\left[\ce{H3O+}\right]^2$$
which gives $\left[\ce{H3O+}\right] > \sqrt{10K_\mathrm{e}} = 10^{-6.5}$, i.e. $\mathrm{pH}<6.5$.
So if, with a strong monoacid, this formula gives you $\mathrm{pH}=6.8$, the result is not correct and you have to use another approximation or line of reasoning!
Here we have $\mathrm{pH}=-0.69<6.5$, so the check passes and the answer is consistent. And since, in water, the $\mathrm{pH}$ of ordinary solutions lies between $0$ and $14$, your solution is at $\mathrm{pH} \approx 0$.
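A minimal sketch of this recipe in Python; the concentration 4.9 mol/L is a hypothetical stand-in for the exercise being answered (its statement is not shown here):

```python
import math

K_E = 1e-14  # ionic product of water, mol^2/L^2

def ph_strong_monoacid(c_acid):
    """pH under the approximation [H3O+] = c_acid; valid only if pH < 6.5,
    i.e. if [H3O+] > sqrt(10 * K_e), so that [HO-] stays negligible."""
    ph = -math.log10(c_acid)
    if ph >= 6.5:
        raise ValueError("approximation breaks down; account for autoprotolysis")
    return ph

print(round(ph_strong_monoacid(4.9), 2))   # -0.69, the value quoted above
```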
Given a group $G$, with its binary ("product") and unary ("inverse") operations:$$\begin{equation}\begin{split}\circ&\colon G^2\to G\\\operatorname{inv}&\colon G\to G\end{split}\end{equation}\tag{1}$$you can consider the restrictions of them on $H^2$ and $H$ respectively, where $H\subset G$:$$\begin{equation}\begin{split}\circ_{H^2}&\colon H^2\to G&;~(h_1,h_2)&\mapsto h_1\circ h_2\\\operatorname{inv}_H&\colon H\to G&;~h&\mapsto\operatorname{inv}(h)\end{split}\end{equation}\tag{2}$$
When you ask whether $H$ is a subgroup of $G$, you are asking whether $H$ is a group with the group structure induced by that of $G$, that is, with binary and unary operations pointwise coincident with those of $G$ on $H^2$ and $H$ respectively, i.e., with the restrictions $(2)$ of the corresponding operations of $G$. In the end you want to know whether there exist corestrictions to $H$ of the maps $(2)$, or directly birestrictions of the maps $(1)$ to the pairs $(H^2,H)$ and $(H,H)$ respectively:$$\begin{equation}\begin{split}\circ_{H^2}^{H}&\colon H^2\to H&;~(h_1,h_2)&\mapsto h_1\circ h_2\\\operatorname{inv}_H^{H}&\colon H\to H&;~h&\mapsto\operatorname{inv}(h)\end{split}\end{equation}\tag{3}$$
These existence conditions are called closure of $H$ under product and inversion.
There remain to be proved associativity, the existence of an identity, and its equality to the identity of $G$. But associativity is a pointwise property of $\circ$ that a fortiori holds when $\circ$ is restricted or birestricted. Moreover, if the identity of $H$ exists, it must be equal to that of $G$, because the identity of $G$ is characterized by a pointwise property of $\operatorname{inv}$ that a fortiori holds when $\operatorname{inv}$ is restricted or birestricted.
But you have no way to prove, in the general case, that the identity of $H$ exists unless you add to your hypotheses that $H$ is non-empty.
This last was the only important thing missing in your reasoning. |
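For finite examples, the two closure conditions plus non-emptiness are directly checkable; here is a minimal sketch in Python for subsets of $\mathbb{Z}_n$ (a hypothetical test case, not part of the original argument):

```python
from itertools import product

def is_subgroup(H, n):
    """Subgroup test for H a subset of Z_n: non-empty, closed under + mod n
    (the birestricted product) and under negation mod n (the inverse)."""
    if not H:                      # the non-emptiness hypothesis from the text
        return False
    closed_prod = all((a + b) % n in H for a, b in product(H, repeat=2))
    closed_inv = all((-a) % n in H for a in H)
    return closed_prod and closed_inv

print(is_subgroup({0, 2, 4}, 6))  # True: the even residues form a subgroup
print(is_subgroup({0, 2}, 6))     # False: 2 + 2 = 4 escapes the subset
print(is_subgroup(set(), 6))      # False: no identity without non-emptiness
```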
First of all, I need to confess my ignorance with respect to any physics, since I'm a mathematician. I'm interested in the physical intuition behind the Langlands program; therefore I need to understand what physicists think about homological mirror symmetry, which comes from S-duality. This question is related to my previous one, Intuition for Homological Mirror Symmetry.
As I have heard, everything starts with an $S$-duality between two $N= 4$ supersymmetric Yang-Mills gauge theories of dimension $4$, $(G, \tau)$ and $(^{L}G, \frac{-1}{n_{\mathfrak{g}}\tau})$, where $\tau = \frac{\theta}{2\pi} + \frac{4\pi i}{g^2}$, $G$ is a compact connected simple Lie group, and $n_{\mathfrak{g}}$ is the lacing number (the maximal number of edges connecting two vertices in the Dynkin diagram). The duality would then make the theory non-perturbative, since it would be defined "for all" $\tau$, whereas amplitudes are computed as power-series expansions in $\tau$.
So I need to understand what this would mean to a physicist.
1) First of all, what's the motivation for the Yang-Mills action, and how should I understand the coupling constants $\theta$ and $g$?
2) How can I get this so-called power-series expansion in the variable $\tau$ of the probability amplitude?
3) What was the motivation to start looking at this duality? The creation of a gauge theory defined everywhere in $\tau$, maybe?
Thanks in advance.
This post imported from StackExchange Physics at 2015-03-17 04:40 (UTC), posted by SE-user user40276 |
In Zwiebach's A first course in string theory, he used the least-action principle to get the equations of motion for strings, where the variation of the action (which should be zero) is:$$\delta S = \int_{\tau_i}^{\tau_f} d\tau [\delta X^\mu \mathcal{P}^\sigma_\mu]^{\sigma_1}_0 - \int_{\tau_i}^{\tau_f} d\tau \int_0^{\sigma_1} d\sigma \delta X^\mu \left(\dfrac{\partial \mathcal{P}^\tau_\mu}{\partial \tau}+ \dfrac{\partial \mathcal{P}^\sigma_\mu}{\partial \sigma} \right)$$He then imposed the boundary conditions at the open string's endpoints (Dirichlet and Neumann) to make the first term vanish, and said that the second term must vanish as well. Doing so, we get the equations of motion:$$\dfrac{\partial \mathcal{P}^\tau_\mu}{\partial \tau}+ \dfrac{\partial \mathcal{P}^\sigma_\mu}{\partial \sigma} =0$$which hold for both open and closed strings.
I have three questions:
How can we consider that these are the equations of motion for closed strings that don't have boundary conditions in the first place?
A string's endpoint may have a Dirichlet or a Neumann boundary condition, not both. And the two endpoints of an open string may have different boundary conditions, so how is it logical to impose the two conditions to get the EOM?
The classical point vortex model corresponding to the 2D incompressible Euler equation in vorticity form is the system of $N$ ODEs
$$\begin{cases} \dot{x}_i(t) = \sum_{1\leq i\neq j\leq N} a_j K(x_i(t),x_j(t)) \\ x_i(0) = x_i^0\end{cases}\tag{1}$$
where $K(x,y) := -\frac{1}{2\pi}(\frac{(x-y)_2}{|x-y|^2}, -\frac{(x-y)_1}{|x-y|^2})$ is the Biot-Savart kernel.
It is evident from the RHS of equation $(1)$ that the point vortices only have binary (i.e. pairwise) interactions. My question is the following: are there physically motivated models generalizing equation $(1)$ which have binary and ternary interactions? By ternary, I have in mind a term of the form
$$\sum_{1\leq i\neq j\neq k\leq N} a_j a_k \tilde{K}(x_i(t),x_j(t),x_k(t)),$$
where $\tilde{K}$ is an $\mathbb{R}^2$-valued map. |
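For concreteness, a minimal numerical sketch of the binary-interaction system $(1)$ in Python (explicit Euler stepping is used purely for illustration; it is not a good long-time integrator for this system):

```python
import numpy as np

def biot_savart(x, y):
    """K(x, y) = -(1/2 pi) * ((x-y)_2, -(x-y)_1) / |x-y|^2, as defined above."""
    d = x - y
    return -np.array([d[1], -d[0]]) / (2 * np.pi * (d @ d))

def rhs(positions, a):
    """Right-hand side of (1): a sum of pairwise (binary) interactions."""
    v = np.zeros_like(positions)
    for i in range(len(positions)):
        for j in range(len(positions)):
            if i != j:
                v[i] += a[j] * biot_savart(positions[i], positions[j])
    return v

# Two like-signed vortices should rotate about their midpoint.
x = np.array([[1.0, 0.0], [-1.0, 0.0]])
a = np.array([1.0, 1.0])
for _ in range(1000):
    x = x + 1e-2 * rhs(x, a)
print(x, np.linalg.norm(x, axis=1))  # radii stay close to 1
```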
This is the third issue in our ARMA Unplugged modeling series. In this issue, we introduce the common patterns often found in real time series data and discuss a few techniques to identify and model those patterns, paving the way for a more elaborate discussion of decomposition and seasonal adjustment methodologies in future issues.
What are the common patterns, and how do we identify and model them to derive a better understanding of the underlying process, and to better forecast our data? That is the central question for this issue.
Background
In time series analysis, our main objectives are (a) identifying the nature of the phenomenon represented by the sequence of observations, and (b) forecasting (i.e. predicting) future values of the time series variable.
Once the pattern is established, we can interpret and integrate it with other data (i.e., use it in our theory of the investigated phenomenon, e.g., seasonal commodity prices). Regardless of the depth of our understanding and the validity of our interpretation (theory) of the phenomenon, we can extrapolate the identified pattern to predict future events.
In the majority of time series data, there are two dominant systematic patterns or identifiable components: trend and seasonality.
Trend is the linear or (more often) non-linear component that changes with time and does not repeat (or at least not within the sample data scope) itself.
Seasonality, on the other hand, repeats itself in a systematic interval over time.
Trend and seasonality may exist simultaneously in real data. For example, a company's sales of a given consumer product may vary among the months of the year (e.g. the holiday season), while the sales of each month grow by 10% compared to the sales of the same month in the prior year.
Sound simple? Not really: the presence of irregulars (i.e. noise, shocks, and innovations) makes those components hard to identify.
Trend Analysis
There is no automatic means of identifying a trend component, but as long as the trend is monotonic (consistently increasing or decreasing), a visual examination of the time series plot (or of a smoothed version, if the data has a lot of noise) can quickly reveal its presence.
In the event the time series exhibits a seasonality component, you can apply a moving average (or median) smoothing function with a window-size equal to the length of one period to cancel out noise and seasonality and isolate the trend component.
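A minimal sketch of that idea in Python, on a synthetic monthly series (the series is a hypothetical example; an even window would normally be handled with a centered 2×12 moving average, which this sketch glosses over):

```python
import numpy as np

def moving_average(y, window):
    """Moving average; with window = one seasonal period it averages
    out a stable seasonal pattern (and most of the noise)."""
    return np.convolve(y, np.ones(window) / window, mode="valid")

rng = np.random.default_rng(0)
t = np.arange(120)                                  # 10 years of months
y = 0.5 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, t.size)
trend_hat = moving_average(y, 12)
print(trend_hat[:5])   # values track the underlying 0.5 * t trend line
```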
The trend here is assumed to be deterministic; in other words it follows a fixed relationship with time. The time series may possess a stochastic trend, in which case it can’t be captured with a simple smoothing function. We’ll discuss this later on.
Seasonality vs. Cycle
Often, we use the term seasonality (or periodicity) when the time series exhibits a pattern that repeats itself. The pattern can be visually detected in the time series plot, unless the data has lot of noise.
In this discussion, we need to make a further distinction in two types of periodicity:
Deterministic (seasonal): a seasonal pattern exists when the series is influenced by seasonal factors (e.g. quarter of the year, month, day of the week). Seasonality has a fixed and known period; this is why we often call such a series "periodic."
Stochastic (cyclic): a cyclic pattern exists when the data exhibits rises and falls that are not of a fixed period. The duration of a cycle is usually several years, but it is not known ahead of time. The business cycle is often used in the econometrics literature as an example of a cycle.
Going forward, we will mean a deterministic, calendar-driven, fixed-period type of seasonality whenever the broad term "seasonality" is used. "Cycle," on the other hand, will refer to stochastic periodicity.
Seasonality
Seasonal dependency is defined as a correlational dependency of order k between the i-th element and the (i+k)-th element. If the measurement error is not too large, the seasonality can be visually identified in the series as a pattern that repeats every k elements.
An exponential smoothing function (e.g. Winters's triple exponential smoothing) is suitable for capturing seasonality and a deterministic trend, but not for a time series with (stochastic) cycles.
Cycle
For stochastic cycles (i.e. periodicity with a non-fixed period or unknown duration), we can use an ARMA(p, q) type of process: with an autoregressive component order (p) greater than one (1) and additional conditions on the parameters’ values to obtain cyclicity.
For example, consider the ARMA (2,q) process:$$(1-\phi_1 L -\phi_2 L^2)(r_t-\mu)=(1+\sum_{i=1}^q \theta_iL^i)a_t$$
Let:$$\phi_1 > 0$$ $$\phi_2 < 0$$ $$\phi_1^2+4\phi_2 < 0$$
Then the AR characteristic equation possesses a pair of complex conjugate roots, which can be represented as follows:$$\psi=\frac{-\phi_1\pm \sqrt{\phi_1^2+4\phi_2}}{2\phi_2}=\alpha\pm j\omega$$
The ACF plot of this process exhibits exponentially damped sine and cosine waves. The average length of the stochastic cycle $k$ is:$$k=\frac{2\pi}{\cos^{-1}\left(\frac{\phi_1}{2\sqrt{-\phi_2}}\right)}$$
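A one-function sketch of this formula in Python; the parameter values are hypothetical:

```python
import numpy as np

def avg_cycle_length(phi1, phi2):
    """k = 2*pi / arccos(phi1 / (2*sqrt(-phi2))) for an AR(2) whose
    characteristic equation has complex roots."""
    assert phi2 < 0 and phi1**2 + 4 * phi2 < 0, "no complex roots, no cycle"
    return 2 * np.pi / np.arccos(phi1 / (2 * np.sqrt(-phi2)))

print(avg_cycle_length(1.0, -0.5))   # exactly 8.0 periods per cycle
```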
Seasonality and Cyclicity
It is possible to have both cyclic and seasonal behavior in an ARMA-type model: the Seasonal ARIMA (aka SARIMA). For the general case, a multiplicative SARIMA $(p,d,q)\times(P,D,Q)_s$ process is defined as follows:$$\Phi(L^s)\phi(L)(1-L^s)^D(1-L)^d y_t=\Theta(L^s)\theta(L)a_t$$ $$(1-L^s)^D(1-L)^d y_t=\frac{\Theta(L^s)}{\Phi(L^s)}\times \frac{\theta(L)}{\phi(L)}a_t$$
Where
$s$ is the length of the seasonal period (deterministic)
We’ll cover SARIMA in greater detail in future issues.
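As a preview, a hedged sketch of fitting such a model with the `statsmodels` package (the orders and the synthetic series are hypothetical; `SARIMAX` is the standard statsmodels entry point for seasonal ARIMA):

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)
t = np.arange(144)   # 12 years of monthly data, so s = 12
y = 0.3 * t + 5 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, t.size)

# (p,d,q) x (P,D,Q)_s = (1,1,1) x (1,1,1)_12 -- hypothetical orders
model = SARIMAX(y, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12))
fit = model.fit(disp=False)
print(fit.aic)
print(fit.forecast(steps=12))   # one seasonal period ahead
```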
Note: Long-period cyclicity is not handled very well in the ARMA framework. Alternative (nonlinear) models are usually preferred.
Time Series Decomposition
In general, the time series $\{y_t\}$ can be broken down into two primary systematic components: trend $\{T_t\}$ and seasonality $\{S_t\}$, with everything else lumped under “Irregulars” $\{I_t\}$ (or noise).
The irregulars (residuals) component $\{I_t\}$ captures all the non-systematic (i.e. non-deterministic) properties of the time series, so to improve our forecast further we can model $\{I_t\}$ with an ARMA-type model (e.g. regARIMA).
The time series can be modeled as the sum (i.e. additive decomposition) or the product (multiplicative decomposition) of its components.
Additive decomposition
In some time series, the amplitude of both the seasonal and irregular variations do not change as the level of the trend rises or falls. In such cases, an additive model is appropriate.
Let’s examine the formulation:$$y_t=T_t+S_t+I_t$$
To remove the seasonal effect (seasonal adjustment) from the time series:$$\textrm{SA}_t=y_t-\hat S_t=T_t+I_t$$
Where
$T_t$ is the trend component
$S_t$ is the estimated seasonality component
Note: Under this model, the three components ($S_t,T_t,I_t$) have the same units as the original series.
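A crude sketch of additive seasonal adjustment in Python (the per-season averaging and the moving-average trend are simplifications; edge effects are ignored):

```python
import numpy as np

def additive_seasonal_adjust(y, s):
    """SA_t = y_t - S_hat_t for the additive model y = T + S + I."""
    trend = np.convolve(y, np.ones(s) / s, mode="same")   # crude T estimate
    detrended = y - trend
    seasonal = np.array([detrended[i::s].mean() for i in range(s)])
    seasonal -= seasonal.mean()               # center S_hat around zero
    s_hat = np.resize(seasonal, len(y))       # repeat the pattern cyclically
    return y - s_hat

y = np.array([10, 20, 30, 12, 22, 32, 14, 24, 34, 16, 26, 36], float)
print(additive_seasonal_adjust(y, 3))   # the within-period swing is removed
```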
Multiplicative Decomposition
In many time series, the amplitude of both the seasonal and irregular variations increase as the level of the trend rises. In this situation, a multiplicative model is usually appropriate.$$y_t=S_t\times T_t\times I_t$$
To remove the seasonal effect from the data:$$\textrm{SA}_t=\frac{y_t}{\hat S_t}=T_t\times I_t$$
Note: Under this model, the trend has the same units as the original series, but the seasonal and irregular components are unit-less factors, distributed (centered) around 1.
Pseudo-Additive Decomposition
The multiplicative model cannot be used when the original time series contains very small or zero values, because it is not possible to divide by zero. In these cases, a pseudo-additive model combining elements of both the additive and multiplicative models is used:$$y_t=T_t+T_t\times (S_t-1)+T_t\times(I_t-1)=T_t\times (S_t+I_t-1)$$
Note: Under this model, the trend has the same units as the original series, but the seasonal and irregular components are unit-less factors, distributed (centered) around 1.
This model assumes that seasonal and irregular variations are both dependent on the level of the trend but independent of each other.
The seasonal adjusted series is defined as follows:$$\textrm{SA}_t=y_t-\hat T_t (\hat S_t-1)$$
Where
$T_t$ is the estimated trend component
$S_t$ is the estimated seasonality component
Conclusion
In real time series data, the two primary patterns often observed are trend and seasonality. Furthermore, time series data include a noise or error term, which lumps all non-systematic factors together and, as a result, makes the identification of those components a bit difficult. A smoothing or filtering procedure may be required to prepare the data for analysis.
Seasonality is described as a repeated deterministic pattern (i.e. ups and downs) that is driven primarily by calendar-related factors (e.g. day of week, month of the year, holiday, etc.). Cyclicity, on the other hand, is stochastic in nature and does not have a fixed or known period.
In short, a given time series can be viewed as a composition of one or more primitive series (systematic (i.e. deterministic) or stochastic).
There are two distinct types of decomposition models: additive and multiplicative. The decision of which model to use depends on the amplitude of both the seasonal and irregular variations. If they do not change as the level of the trend rises, then additive decomposition is in order; otherwise a multiplicative one is used. Multiplicative decomposition is found in the majority of time series.
This issue is intended to serve as a preliminary exercise for time series decomposition and time series seasonal adjustment.
Attachments
The PDF version of this issue can be found below: |
Let $X_n(\Bbb{Z})$ be the simplicial complex whose vertex set is $\Bbb{Z}$ and such that the vertices $v_0,...,v_k$ span a $k$-simplex if and only if $|v_i-v_j| \le n$ for every $i,j$. Prove that $X_n(\Bbb{Z})$ is $n$-dimensional...
no kidding, my maths is foundations (basic logic but not pedantic), calc 1 which I'm pretty used to working with, analytic geometry, and basic linear algebra (by basic I mean matrices and systems of equations only)
Anyway, I would assume if removing $ fixes it, then you probably have an open math expression somewhere before it, meaning you didn't close it with $ earlier. What's the full expression you're trying to get? If it's just the frac, then your code should be fine
This is my first time chatting here in Math Stack Exchange. So I am not sure if this is frowned upon but just a quick question: I am trying to prove that a proper subgroup of $\mathbb{Z}^n$ is isomorphic to $\mathbb{Z}^k$, where $k \le n$. So we must have $rank(A) = rank(\mathbb{Z}^k)$ , right?
For four proper fractions $a, b, c, d$ X writes $a+ b + c >3(abc)^{1/3}$. Y also added that $a + b + c> 3(abcd)^{1/3}$. Z says that the above inequalities hold only if a, b,c are positive. (a) Both X and Y are right but not Z. (b) Only Z is right (c) Only X is right (d) Neither of them is absolutely right.
Yes, @TedShifrin the order of $GL(2,p)$ is $p(p+1)(p-1)^2$. But I found this on a classification of groups of order $p^2qr$. There order of $H$ should be $qr$ and it is present as $G = C_{p}^2 \rtimes H$. I want to know that whether we can know the structure of $H$ that can be present?
Like can we think $H=C_q \times C_r$ or something like that from the given data?
When we say it embeds into $GL(2,p)$ does that mean we can say $H=C_q \times C_r$? or $H=C_q \rtimes C_r$? or should we consider all possibilities?
When considering finite groups $G$ of order, $|G|=p^2qr$, where $p,q,r$ are distinct primes, let $F$ be a Fitting subgroup of $G$. Then $F$ and $G/F$ are both non-trivial and $G/F$ acts faithfully on $\bar{F}:=F/ \phi(F)$ so that no non-trivial normal subgroup of $G/F$ stabilizes a series through $\bar{F}$.
And when $|F|=pr$. In this case $\phi(F)=1$ and $Aut(F)=C_{p-1} \times C_{r-1}$. Thus $G/F$ is abelian and $G/F \cong C_{p} \times C_{q}$.
In this case how can I write G using notations/symbols?
Is it like $G \cong (C_{p} \times C_{r}) \rtimes (C_{p} \times C_{q})$?
First question: Then it is, $G= F \rtimes (C_p \times C_q)$. But how do we write $F$ ? Do we have to think of all the possibilities of $F$ of order $pr$ and write as $G= (C_p \times C_r) \rtimes (C_p \times C_q)$ or $G= (C_p \rtimes C_r) \rtimes (C_p \times C_q)$ etc.?
As a second case we can consider the case where $C_q$ acts trivially on $C_p$. So then how to write $G$ using notations?
There it is also mentioned that we can distinguish between 2 cases. First, suppose that the Sylow $q$-subgroup of $G/F$ acts non-trivially on the Sylow $p$-subgroup of $F$. Then $q \mid (p-1)$ and $G$ splits over $F$. Thus the group has the form $F \rtimes G/F$.
A presentation $\langle S\mid R\rangle$ is a Dehn presentation if for some $n\in\Bbb N$ there are words $u_1,\cdots,u_n$ and $v_1,\cdots, v_n$ such that $R=\{u_iv_i^{-1}\}$, $|u_i|>|v_i|$ and for all words $w$ in $(S\cup S^{-1})^\ast$ representing the trivial element of the group one of the $u_i$ is a subword of $w$
If you have such a presentation there's a trivial algorithm to solve the word problem: Take a word $w$, check if it has $u_i$ as a subword, in that case replace it by $v_i$, keep doing so until you hit the trivial word or find no $u_i$ as a subword
There is good motivation for such a definition here
So I don't know how to do it precisely for hyperbolic groups, but if $S$ is a surface of genus $g \geq 2$, to get a geodesic representative for a class $[\alpha] \in \pi_1(S)$ where $\alpha$ is an embedded loop, one lifts it to $\widetilde{\alpha}$ in $\Bbb H^2$ by the locally isometric universal covering, and then the deck transformation corresponding to $[\alpha]$ is an isometry of $\Bbb H^2$ which preserves the embedded arc $\widetilde{\alpha}$
It has to be an isometry fixing a geodesic $\gamma$ with endpoints at the boundary being the same as the endpoints of $\widetilde{\alpha}$.
Consider the homotopy of $\widetilde{\alpha}$ to $\gamma$ by straightline homotopy, but straightlines being the hyperbolic geodesics. This is $\pi_1(S)$-equivariant, so projects to a homotopy of $\alpha$ and the image of $\gamma$ (which is a geodesic in $S$) downstairs, and you have your desired representative
I don't know how to interpret this coarsely in $\pi_1(S)$
@anakhro Well, they print in bulk, and on really cheap paper, almost transparent and very thin, and offset machine is really cheaper per page than a printer, you know, but you should be printing in bulk, its all economy of scale.
@ParasKhosla Yes, I am Indian, and trying to get in some good masters progam in math.
Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants.== Branches of algebraic graph theory ===== Using linear algebra ===The first branch of algebraic graph theory involves the study of graphs in connection with linear algebra. Especially, it studies the spectrum of the adjacency matrix, or the Lap...
I can probably guess that they are using symmetries and permutation groups on graphs in this course.
For example, orbits and studying the automorphism groups of graphs.
@anakhro I have heard really good thing about Palka. Also, if you do not worry about little sacrifice of rigor (e.g. counterclockwise orientation based on your intuition, rather than, on winding numbers, etc.), Howie's Complex analysis is good. It is teeming with typos here and there, but you will be fine, i think. Also, thisbook contains all the solutions in appendix!
Got a simple question: I gotta find kernel of linear transformation $F(P)=xP^{''}(x) + (x+1)P^{'''}(x)$ where $F: \mathbb{R}_3[x] \to \mathbb{R}_3[x]$, so I think it would be just $\ker (F) = \{ ax+b : a,b \in \mathbb{R} \}$ since only polynomials of degree at most 1 would give zero polynomial in this case
@chandx you're looking for all the $G = P''$ such that $xG + (x+1)G' = 0$; if $G \neq 0$ you can solve the DE to get $G'/G = -x/(x+1) = -1 + 1/(x+1) \implies \ln G = -x + \ln(1+x) + C \implies G = C'(1+x)e^{-x}$ which is obviously not a polynomial, so $G = 0$ and thus $P = ax + b$
could you suppose that $\operatorname{deg} P \geq 2$ and show that you wouldn't have nonzero polynomials? Sure. |
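The kernel computation in that exchange can be verified symbolically; a small sketch with sympy (a hypothetical check, mirroring the argument above):

```python
import sympy as sp

x, a, b, c, d = sp.symbols('x a b c d')
P = a*x**3 + b*x**2 + c*x + d                     # general element of R_3[x]
F = x*sp.diff(P, x, 2) + (x + 1)*sp.diff(P, x, 3)

# F(P) = 6a*x**2 + (6a + 2b)*x + 6a; every coefficient must vanish.
print(sp.solve(sp.Poly(sp.expand(F), x).all_coeffs(), [a, b, c, d], dict=True))
# [{a: 0, b: 0}] -> c and d stay free, so ker F = {c*x + d}, as claimed.
```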
In this post the matroid theory connections are everywhere, but I won’t use any matroid language. Can you spot them all?
I’m going to discuss my favorite lecture from the course MAT377 – Introduction to Combinatorics, which I have taught at Princeton in the past three years (lecture notes can be found here). This particular lecture was not part of the first run of the course, but inspired by it. After introducing the MacWilliams relations in coding theory (see below), I was asked by my students what they can be used for. The coding theory books that I consulted were not much help, although they claimed them to be deep, important, etc. But an email to Peter Cameron, and then an answer from Chris Godsil on Mathoverflow, led me to the paper [AM78], in which Assmus and Maher prove the following (and slightly more, but I try to keep this post as short as possible).
Theorem 1. There is no projective plane of order $q$, where $q \equiv 6 \pmod 8$. Preliminaries
We need some design theory, coding theory, and linear algebra in our proof. Recall that a $t-(v,k,\lambda)$ design is a collection of subsets of a set of $v$ points (subsets can be repeated more than once), where each subset (or block) has size $k$, and every subset of $t$ points is contained in exactly $\lambda$ blocks. So in this terminology, a projective plane of order $q$, where the blocks are taken to be the lines, would be a $2-(q^2+q+1,q+1,1)$ design. Other parameters of the design include the number of blocks $b$, and the replication number $r$, the number of blocks containing a single point. One can show that $b$ and $r$ are determined by the parameters of the design, and that $r$ does not depend on the point chosen. The block-point incidence matrix of a design is the matrix $A$ with rows indexed by blocks, columns by points, and $$ A_{ij} = \begin{cases} 1 & \text{ if point } j \text{ is in block } i\\ 0 & \text{ otherwise.}\end{cases} $$
Exercise 1. Let $A$ be the block-point incidence matrix of a design. Show that for a $2-(v,k,\lambda)$ design with $v=b$, we have the following:
1. $k=r$
2. $k(k-1) = \lambda(v-1)$
3. $AA^T = A^TA$
4. $|\det(A)| = k(k-\lambda)^{(v-1)/2}$
5. Every two blocks meet in exactly $\lambda$ points
Misleadingly, such designs are called symmetric. Note that $A$ need not be a symmetric matrix at all!
A $q$-ary linear $[n,k,d]$ code $C$ is a linear subspace of $\mathrm{GF}(q)^n$ of dimension $k$. Think of it as the row space of a matrix. The elements of $C$ are called codewords, and the weight of a codeword $c \in C$ is $\mathrm{wt}(c) = |\{i : c_i\neq 0\}|$. The parameter $d$ of the code is the minimum weight of the nonzero codewords in $C$. Since weights are related to the Hamming distance between codewords (and thus to the error-correcting capabilities of the code), it makes sense to study the weight enumerator:$$
W_C(x,y) := \sum_{c\in C} x^{\mathrm{wt}(c)}y^{n - \mathrm{wt}(c)}.
$$
The dual code is $C^\perp$, the orthogonal complement of the vector space $C$. We can derive the following relation between the weight enumerators of $C$ and $C^\perp$:
Theorem 2 (MacWilliams Relations).$$
W_{C^\perp}(x,y) = q^{-k} W_C(y-x, y+ (q-1)x).
$$
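Theorem 2 can be sanity-checked on a toy code by brute force; a small sketch in Python with sympy, using the binary ($q=2$, $k=1$) repetition code of length 3 (a hypothetical example, not from the original post):

```python
from itertools import product
import sympy as sp

x, y = sp.symbols('x y')
n = 3
C = [(0, 0, 0), (1, 1, 1)]                        # the [3,1] repetition code

def enum(code):
    """W(x, y) = sum over codewords of x^wt * y^(n - wt)."""
    return sp.expand(sum(x**sum(c) * y**(n - sum(c)) for c in code))

C_dual = [w for w in product((0, 1), repeat=n)    # all words orthogonal to C
          if all(sum(wi * gi for wi, gi in zip(w, g)) % 2 == 0 for g in C)]

lhs = enum(C_dual)                                # y**3 + 3*x**2*y
rhs = sp.expand(sp.Rational(1, 2) *
                enum(C).subs({x: y - x, y: y + x}, simultaneous=True))
print(lhs == rhs)                                 # True
```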
From linear algebra we require the Smith Normal Form, of which we will only use the following restricted version:
Theorem 3. Let $A$ be an $n\times n$ nonsingular matrix over $\mathbb{Z}$. There exist integer matrices $M, N$ with $\det(M) = \det(N) = 1$, and $MAN = D$, where $D$ is a diagonal matrix with diagonal entries $d_1, \ldots, d_n$ such that $d_{i} | d_{i+1}$ for $i = 1, \ldots, n-1$.
The proof.
We start with two lemmas.
Lemma 1. Let $A$ be the incidence matrix of a symmetric $2-(v,k,\lambda)$ design. Let $A_+$ be obtained from $A$ by adding an all-ones column, and let $C$ be the binary linear code generated by the rows of $A_+$. If $k$ is odd, $k-\lambda$ is even, but $k-\lambda$ is not a multiple of $4$, then $C$ is a $[v+1, (v+1)/2, d]$ code for some $d$, and $C^\perp = C$. Proof. Interpret $A$ as an integer matrix and let $MAN = D$ be as in Theorem 3. By Exercise 1, $|\det(A)| = d_1 \cdots d_v = k(k-\lambda)^{(v-1)/2}$, whose 2-adic valuation is exactly $(v-1)/2$ (as $k$ is odd and $k-\lambda$ is even but not a multiple of $4$), so at most $(v-1)/2$ of the $d_i$ are even. Hence $$ r_{\mathrm{GF}(2)}(A_+) \geq r_{\mathrm{GF}(2)}(A) = r_{\mathrm{GF}(2)}(MAN) \geq v - \frac{v-1}{2} = \frac{v+1}{2}. $$
Conversely, let $a$ and $b$ be rows of $A_+$. The inner product of $a$ with itself is $\langle a, a\rangle = k + 1 \equiv 0 \pmod 2$. Also, $\langle a,b\rangle = \lambda + 1 \equiv 0 \pmod 2$ (note $\lambda$ is odd, since $k$ is odd and $k-\lambda$ is even). It follows by linearity that each codeword in $C$ is orthogonal to every codeword in $C$, i.e. $C \subseteq C^\perp$. Since $\dim(C) + \dim(C^\perp) = v+1$, it follows that $r_{\mathrm{GF}(2)}(A_+) \leq (v+1)/2$, so equality must hold. $\square$
A code is doubly even if all weights are multiples of four.

Lemma 2. If $C$ is a binary, linear $[v+1,(v+1)/2, d]$, self-dual, doubly even code, then $8|(v+1)$.

Proof. For a binary linear $[n,k,d]$ code, the MacWilliams relations specialize to $$ W_{C^\perp}(x,y) = 2^{-k}W_C(y-x,y+x) = 2^{n/2 - k} W_C( (x,y)\sigma), $$ where $\sigma$ is the linear transformation $$ \sigma = \frac{1}{\sqrt{2}}\begin{bmatrix} -1 & 1\\ 1 & 1\end{bmatrix}. $$ If $C$ is self-dual, then $W_C$ is invariant under $\sigma$. If $C$ is doubly even, then $W_C$ is also invariant under $$ \pi = \begin{bmatrix} i & 0 \\ 0 & 1 \end{bmatrix}, $$ where $i \in \mathbb{C}, i^2 = -1$. Now $W_C$ is invariant under the group generated by $\sigma$ and $\pi$, and in particular under $$(\pi\sigma)^3 = \frac{1+i}{\sqrt{2}}\begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix}.$$ Since this transformation multiplies each of $x$ and $y$ by a primitive eighth root of unity, and $W_C$ is homogeneous of degree $n = v+1$, the result follows.$\square$

Proof of Theorem 1. Suppose a projective plane of order $q \equiv 6 \pmod 8$ exists. Consider the corresponding $2-(q^2+q+1,q+1,1)$ design, its incidence matrix $A$, and the binary linear code $C$ generated by $A_+$ as above. By Lemma 1, $C$ is self-dual. Each row of $A_+$ has $q+2$ nonzero entries, and since $q+2 \equiv 0 \pmod 8$, each row has weight $0 \pmod 4$. Since $C$ is self-dual, any two codewords intersect in an even number of positions, and it follows that all codewords have weight $0 \pmod 4$. By Lemma 2, then, $v+1 = q^2 + q + 2$ is divisible by $8$, which contradicts the assumption that $q \equiv 6 \pmod 8$. $\square$
Note that the MacWilliams relations also played a big role in determining the nonexistence of a projective plane of order 10.
Problem. Are techniques like the ones used above applicable elsewhere in matroid theory?
[AM78] Assmus, E. F., Jr.; Maher, David P. Nonexistence proofs for projective designs. Amer. Math. Monthly 85 (1978), no. 2, 110-112.
A famous theorem of Dirichlet says that infinitely many primes are of the form:$\alpha n+\beta$, but are there infinitely many of the form: $\alpha ^n+\beta$, where $\beta$ is even and $\alpha$ is prime to $\beta$? or of the form $\alpha!+\gamma$, where $\gamma$ is odd?
This question comes out of mere curiosity, so any help is greatly appreciated.
There are relatively prime non-trivial $\alpha$ and $\beta$, with $\beta$ even, such that $\alpha^n +\beta$ is not prime for any $n \ge 1$. Easy: let $\beta$ have a decimal expansion that ends in $4$, and let $\alpha>1$ have a decimal expansion that ends in $1$ (say $\alpha = 11$, $\beta = 14$). Then $\alpha^n+\beta$ ends in $5$ and exceeds $5$, so it is divisible by $5$ and composite.
A more subtle class of examples is illustrated by $625^n+4$. For this one we use the algebraic identity $$x^4+4=(x^2-2x+2)(x^2+2x+2)$$ to prove compositeness.
For the factorial question, a necessary condition for primality (once $\alpha \ge |\gamma|$) is $\gamma=\pm 1$, since any prime factor of $\gamma$ is then at most $\alpha$ and so also divides $\alpha!+\gamma$. Unfortunately it is not known whether there are infinitely many primes of the form $n!\pm 1$.
Numbers $n$ such that $n! - 1$ is prime are listed at http://oeis.org/A002982. The list begins 3, 4, 6, 7, 12, 14, 30, 32, 33, 38, 94, 166, 324, 379, 469, 546, 974, 1963, 3507, 3610, 6917, 21480, 34790, 94550, 103040. Presumably the list is infinite, but it appears that no one has proved it.
Numbers $n$ such that $n! + 1$ is prime are listed at http://oeis.org/A002981. The list begins 0, 1, 2, 3, 11, 27, 37, 41, 73, 77, 116, 154, 320, 340, 399, 427, 872, 1477, 6380, 26951, 110059, 150209. As before, presumably the list is infinite, but it appears that no one has proved it.
Many references are given at those two webpages.
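For what it is worth, the beginnings of both lists are easy to reproduce with a few lines of Python (a sketch of my own, using sympy's primality test):

from math import factorial
from sympy import isprime

# n with n!+1 prime (A002981, here for n >= 1) and n!-1 prime (A002982)
plus  = [n for n in range(1, 170) if isprime(factorial(n) + 1)]
minus = [n for n in range(2, 170) if isprime(factorial(n) - 1)]
print(plus)    # [1, 2, 3, 11, 27, 37, 41, 73, 77, 116, 154]
print(minus)   # [3, 4, 6, 7, 12, 14, 30, 32, 33, 38, 94, 166]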
If you are interested in whether there are infinitely many primes of the form $n!+1$: for small $n$ this is easy to check ($n=2$ gives $n!+1=3$, prime; $n=3$ gives $n!+1=7$, prime). But the difficulty is computing and testing $n!+1$ for large $n$ (say $n=50$ and beyond), so it is hard to say whether there are infinitely many primes of this form.
I have an interval $[0, 1]$ and a probability density function $f(x)$ defined on that interval. I know $f$ at say 1000 unevenly spaced points along that interval, but I don't have an analytic formula for $f$. How can I calculate the median and the quartiles of $f$?
The median is the point where 50% of the population lies below that value. Mathematically that definition means that the median $m$ satisfies the integral:
$$\int_{0}^{m}f(x)dx=0.5$$
You can approximate this integral using numerical methods. Since you have 1000 known points, you can approximate the integral with the trapezoidal rule. Suppose your points are $x_1, x_2, \ldots, x_{1000}$; then the integral approximated up to the $k^{th}$ point is:
$\sum_{i=1}^k (x_{i+1}-x_i)\frac{f(x_{i+1})+f(x_i)}{2}$
Therefore, you want to find the minimum $k$ such that
$\sum_{i=1}^k (x_{i+1}-x_i)\frac{f(x_{i+1})+f(x_i)}{2}\geq 0.5$
(the sum will generally not hit $0.5$ exactly; interpolating within the last interval gives a better estimate).
Likewise, for the quartiles just use 0.25 and 0.75 instead of 0.5. |
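A minimal sketch of the whole procedure in Python (function and variable names are my own; for illustration the pdf is a Beta(2,2) density sampled at 1000 uneven points):

import numpy as np

def quantile_from_pdf(x, f, p):
    # cumulative trapezoidal integral of the sampled pdf, then invert the cdf
    segments = np.diff(x) * (f[1:] + f[:-1]) / 2
    cdf = np.concatenate([[0.0], np.cumsum(segments)])
    cdf /= cdf[-1]                    # renormalize away discretization error
    return np.interp(p, cdf, x)       # linear interpolation between points

x = np.sort(np.random.uniform(0, 1, 1000))   # unevenly spaced points
f = 6 * x * (1 - x)                          # example pdf: Beta(2,2)
print(quantile_from_pdf(x, f, [0.25, 0.5, 0.75]))  # approx [0.326, 0.5, 0.674]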
This is a basic electronics calculation; do it a hundred times before you move on.
It's Ohm's Law:
\$ V = I \times R \$
or, put differently:
\$ R = \dfrac{V}{I} \$
The voltage is the remainder after the 3.5V drop caused by the LED, so that's 8V - 3.5V = 4.5V. The current seems to be 800mA (though I also see 350mA here and there).
\$ R = \dfrac{4.5V}{0.8A} = 5.6\Omega \$
Don't just pick a common 1/4W resistor. You should always, but especially with high currents like this, check what power it will consume.
\$ P = V \times I = 4.5V \times 0.8A = 3.6W \$
So the answer is a 5.6\$\Omega\$/5W resistor.
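If you want to script the exercise while practicing it, a tiny helper does both calculations at once (a sketch of my own):

def led_resistor(v_supply, v_led, i_led):
    v_r = v_supply - v_led               # voltage left over for the resistor
    return v_r / i_led, v_r * i_led      # Ohm's law, then resistor power

r, p = led_resistor(8.0, 3.5, 0.8)
print(f"R = {r:.2f} ohm, P = {p:.2f} W")  # R = 5.62 ohm (5.6 standard), P = 3.60 W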
That's quite a waste, however. Both LED and resistor see the same current, so their power ratio is the same as their voltage ratio, and the efficiency is 3.5V/8V = 44%, excluding the LED's own efficiency.
A linear voltage regulator to bring down the 8V is no solution; it will dissipate the 3.6W just the same as the resistor. A switching regulator would help, but you'll have to keep its output pretty close to the LED's 3.5V for maximum efficiency. There are, however, switchers which output a current instead of a voltage, and they're made for the job. The LT3474 needs only a couple of external components, can drive 1A and can handle input voltages up to 36V. Efficiency for 1 LED at 800mA is slightly above 80% (for two LEDs it achieves near 90%).
This is the second entry in our series of “Unplugged” tutorials, in which we delve into the details of each of the time series models with which you are already familiar, highlighting the underlying assumptions and driving home the intuitions behind them.
In financial time series and other fields, we often face a non-stationary time series, for example the price levels of a traded security (e.g. stock, bond, commodity, etc.). In this case, the time series exhibits trending, seasonality, or merely random-walk behavior. Unfortunately, the bulk of time series and econometric methods can be applied only to stationary processes, so how do we handle this scenario?
In this issue, we tackle the ARIMA model – an extension of the ARMA model, but the ARIMA model applies to non-stationary time series – the kind of time series with one or more unit-roots (integrated).
Once again, we will start with the ARIMA process definition, stating the inputs, outputs, parameters, stability constraints, and assumptions. Then we will introduce the integration operator and draw a few guidelines for the modeling process.
Background
A non-stationary time series often exhibits a few common patterns, including a trend over time, seasonality, and random-walk behavior. The trend or seasonality can be classified as either deterministic (a function of time) or stochastic (a function of past values).
For stochastic trend and/or seasonality, we often difference (i.e. compute the change of) the original time series to induce a stationary series which can be further modeled by an ARMA type of process.
By definition, the auto-regressive integrated moving average (ARIMA) process is an ARMA process for the differenced time series.
Alternatively, in a simple formulation, an ARIMA (p,d,q) is defined as follows:$$\left(1-\sum_{i=1}^p{\phi_i L^i} \right )(1-L)^d Y_t =\mu + \left(1+ \sum_{j=1}^q{\theta_j L^j} \right )a_t$$
OR$$\left(1-\sum_{i=1}^p{\phi_i L^i} \right )Z_t = \mu + \left(1+ \sum_{j=1}^q{\theta_j L^j} \right )a_t$$ $$\left(1-\sum_{i=1}^p{\phi_i L^i} \right )(Z_t-\mu) = \left(1+ \sum_{j=1}^q{\theta_j L^j} \right )a_t$$ $$Z_t=\Delta^d Y_t=(1-L)\times(1-L)\times \cdots \times (1-L) Y_t=(1-L)^d Y_t$$
Where
$Y_t$ is the observed output at time $t$
$\Delta^d$ is the difference operator of order $d$
$Z_t$ is the differenced time series at time $t$
$a_t$ is the innovation, shock or error term at time $t$
The $\{a_t\}$ observations are independent and identically distributed, and follow a Gaussian distribution (i.e. $N(0,\sigma^2)$)

Assumptions
Looking closer at the formulation, we see that the ARIMA process is essentially an ARMA process for the differenced time series, aside from the difference operator ($\Delta^d$). The same assumptions as for an ARMA process apply here as well:
The ARMA process generates a stationary time series $Z_t$.
The residuals $\{a_t\}$ follow a stable Gaussian distribution.
The components' parameter values $\{\phi_1,\phi_2,\cdots,\phi_p,\theta_1,\theta_2,\cdots,\theta_q\}$ are constants.
The parameter values $\{\phi_1,\phi_2,\cdots,\phi_p,\theta_1,\theta_2,\cdots,\theta_q\}$ yield a stationary process.
Sound simple? It is! A careful selection of the ARMA model parameters can guarantee a stationary process for the differenced time series ($Z_t$), but how do we interpret the forecast of $Y_t$ using $Z_t$?
Integration (un-difference) Operator
In many cases, we apply a difference operator to yield a stationary time series that can be easily modeled using an ARMA type of model. But how do we go back to the original un-differenced time series space and interpret the ARMA results (e.g. forecast)? Our best bet is to use the Integration Operator.
$$a_t=\frac{1-\sum_{i=1}^p {\phi_i L^i}}{1+\sum_{j=1}^q {\theta_j L^j}}\Delta^d Y_t =\frac{1-\sum_{i=1}^p {\phi_i L^i}}{1+\sum_{j=1}^q {\theta_j L^j}} Z_t=(1+\sum_{i=1}^\infty \pi_i L^i)Z_t$$ $$\sum_{i=1}^\infty {\left | \pi_i \right |}< \infty$$
DEFINITION: a stochastic time series $\{Y_t\}$ is said to be integrated of order (d) (i.e. $Y_t\sim I(d)$) if the d-times differenced time series yields an invertible ARMA representation.
And, implicitly;$$\left(1-L\right)^d Y_t \sim \textrm{stationary}$$
Now, to recover $Y_t$ from the $(1-L)^d Y_t$, we apply the un-difference (integration) operator.
A first order integration can be expressed as$$Y_t=\frac{Z_t}{1-L}=Z_t \times (1+L+L^2+L^3+\cdots)=Z_t\sum_{i=0}^\infty L^i$$ $$Y_t=\sum_{i=0}^\infty Z_{t-i}$$ $$Y_{T+n}=Y_T + \sum_{i=1}^n Z_{T+i}$$
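A quick numeric sanity check of this first-order case (a sketch of my own in Python): adding the initial value to a cumulative sum of the differences recovers the series exactly.

import numpy as np

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=100))          # a random walk: one unit root
z = np.diff(y)                               # Z_t = (1 - L) Y_t
y_rebuilt = y[0] + np.concatenate([[0.0], np.cumsum(z)])
assert np.allclose(y, y_rebuilt)             # Y_{T+n} = Y_T + sum of the Z's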
For higher order (i.e.$d$-order) integration, we simply integrate multiple times:$$Y_t=\frac{Z_t}{(1-L)^d}=Z_t\times \frac{1}{1-L}\times \frac{1}{1-L}\times \cdots \times \frac{1}{1-L} = Z_t\times \left(\sum_{i=0}^\infty L^i \right )^d$$
For instance, for $d=2$, the integration operator is defined as follow:$$Y_t=\frac{Z_t}{(1-L)^2}=Z_t\times \frac{1}{1-L}\times \frac{1}{1-L} = Z_t\times \left( 1+L+L^2+L^3+\cdots\right )^2$$ $$Y_t = Z_t (1+2L+3L^2+\cdots+(n+1)L^n+\cdots)=Z_t\sum_{i=0}^\infty {(i+1)L^i}$$ $$Y_{T+n}=Y_T+n\times W_T+\sum_{i=1}^{n-1}{(n+1-i)Z_{T+n-i}}$$
For $d=3$, the integration operator is defined as follow:$$Y_t=\frac{Z_t}{(1-L)^3}=Z_t\times \frac{1}{1-L}\times \frac{1}{1-L} \times \frac{1}{1-L}= Z_t\times \left( 1+L+L^2+L^3+\cdots\right )^3$$ $$Y_t = Z_t (1+3L+6L^2+\cdots+\frac{(n+1)(n+2)}{2}L^n+\cdots)=Z_t\sum_{i=0}^\infty {\frac{(i+1)(i+2)}{2}L^i}$$ $$Y_{T+n}=Y_T+n\times W_T+\frac{n(n-1)}{2}V_T+\sum_{i=1}^{n-1}{\frac{(n+1-i)(n+2-i)}{2}Z_{T+n-i}}$$ $$W_T=Y_T-Y_{T-1}$$ $$V_T=W_T-W_{T-1}=Y_T-2Y_{T-1}+Y_{T-2}$$
Since $\{Y_t\}$ is an integrated time series of order $d$, $Z_t$ is a stationary time series which has an invertible ARMA representation:$$Y_t=a_t\sum_{k=0}^\infty {\psi_k L^k}\times \left(\sum_{j=0}^\infty L^j\right)^d = a_t\sum_{k=0}^\infty {\psi_k L^k}\times \sum_{k=0}^\infty {\zeta_k L^k}$$
We can compute the conditional variance at time $T+n$ given the information available at time $T$:$$\textrm{Var}\left(Y_{T+n}\mid Y_T, Y_{T-1},\cdots, Y_1 \right )=\textrm{Var}\left( a_t\sum_{k=0}^\infty {\psi_k L^k} \sum_{k=0}^\infty {\zeta_k L^k}\right )=\sigma_a^2\times \sum_{i=0}^{n-1}\gamma_i^2$$
Where$$\gamma_i=\sum_{k=0}^i{\zeta_{i-k}\times \psi_k}$$ $$\zeta_0=\psi_0=1$$
IMPORTANT: NumXL has a function INTG() that computes the integral of a seasonally differenced (i.e. $Z_t=\Delta_s^d Y_t = (1-L^s)^d Y_t$) time series. To recover a differenced time series of order $d$, set $s=1$ and pass on the initial conditions (i.e. $Y_T, Y_{T-1}, \ldots, Y_{T-d}$), and it will recover the original data series.

ARIMA Machine
The ARIMA process is a simple machine that retains limited information about its past differenced outputs and the shocks it has experienced. In a more systematic view, the ARIMA process or machine can be viewed as below.
Note that we are observing the integrated output of the ARMA process ($Y_t$), but the machine processes the differenced outputs ($Z_t$). The INTG block references the integration operator.
How do we know if we have a unit-root in our time series?
Aside from the statistical tests for unit-root (e.g. ADF, KPSS, etc.), there are a few visual clues for detecting unit-root using the ACF and PACF plots. For instance, a time series with unit-root will exhibit high and very slow decaying ACF values for all lags. On the PACF plot, the PACF value for the first lag is almost one (1), and the PACF values for lag-order greater than one are insignificant.
For statistical testing, the Augmented Dickey-Fuller (ADF) test will examine the evidence for a unit root, even in the presence of a deterministic trend or squared time trend.
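For readers working outside NumXL, the same test is available in open-source packages; for instance, a sketch with statsmodels (a common implementation, not the NumXL function itself):

import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)
walk = np.cumsum(rng.normal(size=500))             # series with a unit root
stat, pvalue = adfuller(walk)[:2]
print(f"ADF stat = {stat:.2f}, p = {pvalue:.3f}")  # large p: cannot reject unit root
print(adfuller(np.diff(walk))[1])                  # small p after first differencing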
NOTE: Starting in 1.55 (LYNX), NumXL natively supports the ADF test with a step-down optimization procedure.

Statistical Characteristics
In our description of the ARIMA process, we highlighted a single input stimulus: shocks/innovations, emphasizing how they propagate throughout the ARIMA machinery to generate the observed output.
The ARIMA machine is basically an ARMA machine, but the output is integrated before we can observe it. How does this affect the output distribution? Why do we care?
The statistical distribution (i.e. $\Psi$) of the output ($Y_{T+n}$) is pivotal for conducting a forecast and/or establishing a confidence interval at any future time ($T+n$).$$Y_{T+n}\sim \Psi(\mu_{T+n},\sigma_{T+n}^2)$$ $$\mu_{T+n}-Z_{l}^{\alpha/2}\sigma_{T+n}\leqslant \hat Y_{T+n} \leqslant \mu_{T+n}+Z_{u}^{\alpha/2}\sigma_{T+n}$$
Where
$\hat Y_{T+n}$ is the out-of-sample forecast at time $T+n$
$Z_{l}^{\alpha/2}$ is the lower critical value for $\alpha/2$ significance level
$Z_{u}^{\alpha/2}$ is the upper critical value for $\alpha/2$ significance level
$\sigma_{T+n}^2$ is the conditional variance at time $T+n$
By now, the importance of understanding the output statistical distribution should be clear. Now how do we go about forming that understanding?
Back to the definition, the differenced time series $\{Z_t\}$ is modeled as a stationary ARMA process. Let's convert it to an infinite-order MA model:$$(1-\sum_{i=1}^p{\phi_i L^i})Z_t=(1+\sum_{j=1}^q \theta_j L^j)a_t$$ $$Z_t=\frac{1+\sum_{j=1}^q \theta_j L^j}{1-\sum_{i=1}^p{\phi_i L^i}}a_t=(1+\sum_{i=1}^\infty{\psi_i L^i})a_t=\sum_{i=0}^\infty{\psi_i L^i}a_t$$ $$\psi_0=1$$
Now, let's recover the original time series from $\{Z_t\}$.
Example 1
Let’s consider the following differenced series $Z_t=(1-L)Y_t$. To recover the $\{Y_t\}$ time series, we simply add up all the differences to date.$$Z_T=Y_T-Y_{T-1}$$ $$Y_{T+n}=Y_T+\sum_{i=1}^n{Z_{T+i}}=Y_T+\left( \sum_{i=1}^n\sum_{j=0}^\infty{\psi_j L^j}\right ) a_{T+n}=Y_T+\sum_{i=1}^n\left(a_{T+i}\sum_{j=0}^{i-1}\psi_j \right )$$
Now, the variance of the forecast is expressed as follow:$$\textrm{Var}\left(Y_{T+n} \right )=\sigma_a^2 \times \sum_{k=1}^n\left(\sum_{i=0}^{n-k} \psi_i \right )^2$$
As we see, although computing the forecast is a simple exercise of summing all prior differences, the variance calculation is much more involved.
Furthermore, as $n\gg 1$, the $Z_{T+n} \to \frac{\mu}{1-\sum_{i=1}^p{\phi_i}}$, so the $Y_{T+n}$ estimate/forecast asymptotically approaches the deterministic linear trend defined by $Y_T + \frac{n\times \mu}{1-\sum_{i=1}^p{\phi_i}}$
Note: For higher order integration (d>1), it can easily be shown that long-run forecast values of the time series would asymptotically follow a polynomial of the same order.

Conclusion
In simple terms, an ARIMA process is merely an ARMA process whose outputs have gone through an integrator. The integrator causes the observed time series $\{Y_t\}$ to be non-stationary. The integration process introduces the unit-root into $\{Y_t\}$. Integrating multiple times introduces multiple unit-roots into the output time series. This is why the word “integrated” is used in ARIMA.
The main takeaway of this paper is that differencing is a special transformation procedure aimed at converting a non-stationary time series into a stationary one. Like all transformations, care must be taken when we interpret the results back in the original time series space.
Notice that the unit-root modeling (e.g. ARIMA) is intended to capture a stochastic trend and it is not suited for a deterministic trend. If you suspect the presence of a deterministic trend, you should explore this avenue first (i.e. regress over time). At that point, you may choose to take the residuals and apply an ARMA type of process to exploit any remaining dynamics. |
No, this is not a recap from the O.J. Simpson trial, but it is a question we face whenever we propose a model for our data: does the model fit and does it explain the data variation properly?
In a time series modeling process, we seek a quantitative measure of the discrepancy (or the goodness of fit) between the observed values and the values expected under the model in question. The discrepancy measure is crucial for two important applications: (1) finding the optimal values of the model’s parameters, and (2) comparing competing models in an attempt to nail the best one. We believe that the model with the best fit should give us superior predictions for future values.
A few questions might spring to mind: What are the goodness-of-fit functions? How are they different from each other? Which one should I use? How are they related to the normality test?
In this tutorial, we'll discuss the different goodness of fit functions through an example of the monthly average of ozone levels in Los Angeles between 1955 and 1972. This data set was used by Box, Jenkins, and Reinsel in their time series textbook - Time Series Analysis: Forecasting and Control, published in 1976.
Background
We’ll start with the likelihood function, and then cover derivative measures (e.g. AIC, BIC, HQC, etc.)
The likelihood function is defined as a function of the model parameters $(\theta)$ of the statistical model. The likelihood of a set of parameter values given some observed outcomes ($\mathcal{L} (\theta | x) $) is equal to the probability of those observed outcomes given those parameter values $(f_\theta (x))$.$$\mathcal{L}(\theta |x)=f_\theta (x)$$
Where
$f_\theta=$probability mass (density) function
Assuming $\{x_t\}$ consists of independent and identically distributed (i.i.d.) observations, the likelihood function of a sample set is expressed as follows: $$\mathcal{L}(\theta |x_1,x_2,...,x_T)=f_\theta(x_1,x_2,...,x_T)=f_\theta(x_1)\,f_\theta(x_2)\,f_\theta(x_3)\cdots f_\theta(x_T)=\prod_{t=1}^{T}f_\theta(x_t)$$
To overcome the diminishing value of $\mathcal{L}(\theta |x_1,x_2,...,x_T)$ as the sample size increases and to simplify its calculation, we take the natural logarithm of the likelihood function.
$$LLF(\theta | x_1,x_2,...,x_T)=\ln(\mathcal{L}(\theta | x_1, x_2,...x_T))=\sum_{t=1}^{T}\ln(f_\theta(x_t))$$
Example 1: Gaussian distribution
$$LLF(\mu,\sigma |x_1,x_2,...,x_T )=\sum_{t=1}^{T}\ln\left(\frac{e^{-\frac{(x_t-\mu )^2}{2\sigma^2 }}}{\sqrt{2\pi\sigma^2 }}\right)=\frac{-T\ln(2\pi\sigma^2 )}{2}-\sum_{t=1}^{T}\frac{(x_t-\mu )^2}{2\sigma ^2}$$
$$LLF(\mu,\sigma |x_1,x_2,...,x_T )=\frac{-T\ln(2\pi\sigma ^2)-(T-1)\left ( \frac{ \hat{\sigma}}{\sigma} \right )^2}{2}$$
$$LLF(\mu =0,\sigma^2=1 |x_1,x_2,...,x_T )=- \frac{1}{2} (T\ln (2\pi)+(T-1){\sigma^*}^2)$$
Where
$\sigma^*$ = the unbiased estimate of the standard deviation
$\sigma$ = the distribution standard deviation
$\mu$ = the sample data mean or average

Note: The log-likelihood function is linearly related to the sample data variance; as the sample data variance increases (fit is worse), the log-likelihood decreases, and vice versa.

Models comparison
Typically, we use the LLF in searching for the optimal values of the coefficients of a model using one sample data set.
To compare the goodness of fit between models, we encounter two main challenges:
The number of free parameters $(k)$ in each model is different. Using LLF as it stands, it is possible the LLF will give greater weight to complex models, as they can (theoretically) over-fit the sample data.
Due to the different lag orders in each model, the number of remaining non-missing observations $(N)$ can differ between models, assuming we use the same sample data with all models.
We use two distinct measures to address these issues:
Akaike’s Information Criterion (AIC)
$$AIC=-2\times LLF+2k$$ $$AICc=AIC+\frac{2k(k+1)}{N-k-1}=-2\times LLF+\frac{2\times N\times k}{N-k-1}$$
The original definition (AIC) adds a linear penalty term for the number of free parameters; the AICc adds a second term to factor in the sample size, making it more suitable for smaller samples.
Bayesian (Schwarz) Information Criterion (BIC/SIC or SBC)
$$BIC=-2\times LLF+k\ln(N) $$
As in AIC, the BIC penalizes the model complexity $(k)$. Given any two estimated models, the model with the lower value of BIC is preferred.
The BIC generally penalizes free parameters more strongly than the AIC does, though this depends on the sample size $N$ and the relative magnitude of $N$ and $k$.
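The formulas above amount to only a few lines of code; here is a sketch (my own helper names) that computes the LLF of standardized residuals and the three criteria, so two candidate models can be compared on the same sample:

import numpy as np

def loglik(resid):
    # Gaussian log-likelihood of standardized residuals, per the LLF formula above
    n = len(resid)
    return -0.5 * (n * np.log(2 * np.pi) + (n - 1) * np.var(resid, ddof=1))

def aic(llf, n, k):  return -2 * llf + 2 * k
def aicc(llf, n, k): return aic(llf, n, k) + 2 * k * (k + 1) / (n - k - 1)
def bic(llf, n, k):  return -2 * llf + k * np.log(n)

resid = np.random.standard_normal(203)       # stand-in for model residuals
llf = loglik(resid)
print(aic(llf, 203, 2), bic(llf, 203, 2))    # lower is better, same data for all models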
Ozone levels in downtown Los Angeles
In this tutorial, we’ll use the monthly average of the hourly ozone levels in LA between Jan 1955 and Dec 1972.
The underlying process exhibits seasonality around a 12-month period, but it appears to decay over time. There are two important facts to consider about this example:
In 1960, two major events occurred which might have reduced ozone levels: (1) the Golden State Freeway was opened, and (2) a new rule (rule 63) was passed to reduce the allowable proportion of hydrocarbons in locally-sold gasoline.
In 1966, regulation was adopted requiring engine design changes that were expected to reduce the production of ozone in new cars.
The underlying process had undergone major changes throughout the sample period. In a real forecasting exercise, we would exclude the observations between 1955 and 1966; for this tutorial, however, we assume we did not know about those events and simply ignore them.
The summary statistics suggest the following: (1) serial correlation, (2) an ARCH effect, and (3) significant skew.
Next, let’s examine the ACF and PACF plot (correlogram).
The correlogram (ACF and PACF) appears similar to an Airline type of model with a 12-month period of seasonality. We went ahead and constructed the model, then calibrated its parameter values using the sample data.
$$(1-L)(1-L^{12})x_t=\mu +(1-\theta L)(1-\Theta L^{12})a_t$$ $$a_t=\sigma \times\varepsilon _t$$ $$\varepsilon_t\sim i.i.d\sim N(0,1)$$
Note: The calibration process is a simple maximization problem with the LLF as the utility function, and the model’s validity function as the only constraint. We can also use AIC or BIC as the utility function instead of the LLF, but we have to search for parameter values that minimize the utility.
Let’s now compute the standardized residuals and determine the different goodness-of-fit measures: LLF, AIC and BIC.
1. Compute the residuals

Hard way: Using the AIRLINE_MEAN function, get the estimated model's values and subtract those from the observed values to get the raw residuals. Subtract the residuals' mean from the raw residuals and divide by the standard deviation to get the standardized residuals.
Easy way: Using the AIRLINE_RESID function will yield an array of the standardized residuals of the model.
Let’s plot the distribution (and QQ-Plot) of the standardized residuals.
2. Compute the log-likelihood for the standardized residuals
Now, let’s compute the log-likelihood function. We can do that either by computing the log of the mass function at each point and then adding them together, or simply by using this formula:$$LLF=-\frac{1}{2}(N\ln(2\pi)+(N-1)\hat{\sigma }^2)$$
Note: The number of non-missing points is 203 (i.e. 216 - 13); we have lost 13 points. The LLF is not identical to the one we had earlier (LLF = -265) in the Airline model table: the AIRLINE_LLF function uses Whittle's approximation to compute the LLF, which is relatively close and efficient for large samples. The AIC and BIC values are relatively close, but the BIC penalizes more than the AIC.

Conclusion
The log likelihood function offers an intuitive way to think about a model’s fit with a sample data set, but it lacks any consideration for the model’s complexity or sample size. Thus, it is not appropriate for comparing models of different orders.
The Akaike and Bayesian information criteria fill the gap that LLF leaves when it comes to comparing models. They both offer penalty terms for the number of free parameters and the number of non-missing observations. Furthermore, in practice, the BIC is more often used than the AIC, especially in model identification and selection processes.
To compare models, please note that we need to use the same sample data with all models. We can't use AIC or BIC to compare two models each computed using a different data set.
What about this: we have two people A and B, both born in February. I will consider four cases.
Case 1: Both born in a year where February has 28 days; each such year has probability 3/4, so this case has probability $\frac{9}{16}$. The set of possible birthday pairs is $\{(1,1) , (1, 2) , \cdots, (28 , 28)\}$, which has $28 \cdot 28$ elements, $28$ of them with equal coordinates. So the contribution to the probability of a shared birthday is $\frac{28}{28\cdot 28} \times \frac{9}{16}$.
Case 2: A born in a 28-day-February year, B in a 29-day-February year. This has probability $\frac{3}{16}$. Again we form the set $\{ (1,1) , \cdots , (28,28), (1, 29) ,\cdots , (28 , 29)\}$, which has $28\cdot 29$ elements, $28$ of them with equal coordinates; hence the contribution is $\frac{3}{16} \times \frac{28}{28 \cdot 29}$.
Case 3: A in a 29-day year, B in a 28-day year. This has the same probability as Case 2.
Case 4: A and B both in a year with 29 days in February. This contributes $\frac{1}{16} \times \frac{29}{29 \cdot 29}$.
Thus the probability is the sum of the cases: $ \frac{9}{16} \cdot \frac{1}{28} + 2 \left(\frac{3}{16} \cdot \frac{1}{29} \right)+ \frac{1}{16} \cdot \frac{1}{29} \approx 0.0352$.
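A quick numeric evaluation and Monte Carlo cross-check of this sum (a sketch of my own):

import random
from fractions import Fraction

exact = (Fraction(9, 16) * Fraction(1, 28)
         + 2 * Fraction(3, 16) * Fraction(1, 29)
         + Fraction(1, 16) * Fraction(1, 29))
print(float(exact))                 # about 0.03518

def february_birthday():
    days = 29 if random.random() < 0.25 else 28   # leap year with probability 1/4
    return random.randint(1, days)

trials = 10**6
hits = sum(february_birthday() == february_birthday() for _ in range(trials))
print(hits / trials)                # close to the exact value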
This is (again) more a comment than an answer, motivated by René's question for more conceptual background. A couple of years ago I began to look at the full prime factorization of the numbers $f_b(n) = b^n-1 $ (products of cyclotomic polynomials evaluated at $b$) by looking at $f(n)$ modulo the primes, creating a little "algebra" on it based on the theorems of Fermat ("little Fermat") and Euler ("Totient").
The following notations seem to be helpful for such an "algebra":
We're considering the canonical primefactorization of the expression $$f_b(n) = p_1^{e_1} \cdot p_2^{e_2} \cdots p_m^{e_m} \tag 1$$Looking at this for each primefactor $p_k$ separately ($f_b(n) \pmod {p_k}$) gives reason for two compact notations:
$[n:p]$ with the meaning $[n:p]=0$ if $p$ does not divide $n$ and $=1$ if it does divide $n$ (also known as "Iverson-brackets"; and no special definition for $n=0$ as long as not really needed)
$\{ n, p \} = e $ with the meaning of giving the exponent $e$ to which the primefactor $p$ occurs in $n$, so $ \{f_b(n),p_1 \} = e_1$ implies $f_b(n) = p_1^{e_1} \cdot x$ where $\gcd(x,p_1)=1$
(in Pari/GP this is the function "valuation(n,p)")
The idea is to restate the defining equation (1) with the help of this notations/concepts. Of course, Fermat and Euler show us, that we have periodicity in the occurence of any primefactor, when we increase $n$ and that on special $n$ the primefactors $p_k$ occur even with higher exponent. To have expressive formulae for this too we introduce the formula for
the smallest $n$ at which the primefactor $p$ occurs first in $f_b(n)$. This is often written as $\text{ord}()$, denoting the order of the multiplicative subgroup mod $p$, but to avoid possible conflicts with common terms we just call it $\lambda_b(p)$; so in $ f_b(\lambda_b(p)) $ the primefactor $p$ occurs for the first time as $n$ increases from $1$. Here and in the following we can drop the index $b$ on $f$ and $\lambda$ for notational convenience, and even drop the parenthesized argument of the $\lambda$-function when the referred prime $p$ is obvious from context. So we write $ [b^{\lambda(p)} -1 :p]=1$. (In Pari/GP it is $\lambda_b(p)=$ znorder(Mod(b,p)).)
We'll find that sometimes in $f(\lambda(p))$ the primefactor $p$ occurs not only to the first but to some higher power, so we introduce the function $\alpha_b(p)$ by the implicit definition $ \{ f_b(\lambda_b(p)),p \} = \alpha_b(p) $, or simplified $ \{ f(\lambda),p \} = \alpha $.
For the odd primefactors $p$ (the primefactor $p=2$ needs one extension), and of course when the base $b$ is coprime to the selected $p$, we can then state$$ \{b^n-1 , p\} = [n:\lambda]\cdot (\alpha + \{n, p\}) \tag 2$$For the primefactor $2$ and odd $b$, the $\lambda$-function is always $1$ (we always have $[f(1):2]=1$), so the general expression (2) needs some refinement there, which I do not want to show here - its indication may suffice for the following.
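Equation (2) is easy to test numerically; here is a sketch in Python with sympy standing in for Pari's znorder() and valuation() (the function names are mine):

from sympy import n_order, multiplicity

def lam(b, p):   return n_order(b, p)                  # znorder(Mod(b,p))
def alpha(b, p): return multiplicity(p, b**lam(b, p) - 1)

b, p = 3, 5                          # any odd p coprime to b works here
for n in range(1, 200):
    lhs = multiplicity(p, b**n - 1)                       # {b^n - 1, p}
    rhs = (alpha(b, p) + multiplicity(p, n)) if n % lam(b, p) == 0 else 0
    assert lhs == rhs                # eq. (2) checks out for all tested n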
The question whether the same difference $a^x - b^y =d$ can also occur with $a^{x+v} - b^{y+w} =d$ can be rewritten as$$ \begin{array}{rcl}a^{x+v} - b^{y+w} &= &a^x -b^y \\ a^x(a^v-1) &=& b^y(b^w-1) \\ {a^v-1 \over b^y} &=& {b^w-1 \over a^x}\end{array} \tag 3$$so we find the function $f_a(v)=a^v-1$ on the lhs with a divisibility condition by $b^y$, and similarly on the rhs.
The "conceptual" aspect is now, that on the lhs as well as on the rhs we have terms whose canonical primefactorizations are expressible by the above functions and notations
and
must be equal except for the bases $a$ and $b$ which in the given examples are usually primenumbers (and thus primefactors of the other expression) themselves, for instance for the problem $3^{3+v}-5^{2+w} \overset{?}= 3^3-5^2=2$ and $a=3$ and $b=5$ here (and other examples as discussed in Will Jagy's answers) and possible values $\gt 0$ for $v$ and $w$ are sought.
Using the canonical primefactorizations we can write$$ 3^v-1 = 2^{e_1} \cdot 3^0 \cdot 5^2 \cdot 7^{e_4} \cdots =\prod p_k^{e_k}\\ 5^w-1 = 2^{h_1} \cdot 3^3 \cdot 5^0 \cdot 7^{h_4} \cdots = \prod q_i^{h_i} \\$$and for a solution all variable exponents must respectively be equal: $e_k=h_k$ to have equality in eq(3)
For searching a possible solution one can, a bit more than @WillJagy has done this, write down a sufficient list of primefactors and the compositions of $3^v-1$ and $5^w-1$ by that primefactors . With Pari/GP one can easily find $$ \small \begin{array} {rl|rl} \{3^v-1,2\} &= e_1 = 1+ [v:2] + \{v,2\} & \{5^w-1,2\} &= h_1 = 2+ \{w,2\} \\\{3^v-1,3\} &= e_2 = 0 & \{5^w-1,3\} &= h_2 = [w:2](1+ \{w,3\}) \\\{3^v-1,5\} &= e_3 = [v:4](1+ \{v,5\}) & \{5^w-1,5\} & = h_3 = 0 \\\{3^v-1,7\} &= e_4 = [v:6](1+ \{v,7\}) & \{5^w-1,7\} &= h_4 = [w:6](1+ \{w,7\}) \\ \vdots\end{array}$$
There are now two critical aspects in that list:
ansatz a) we must find some $v$ and $w$ such that all $e_k=h_k$ except $e_3=2$ and $h_2=3$. But as we see, the $\lambda$-entries in the $[v:\lambda]$-terms have common divisors, and so the inclusion of some primefactor $p_k$ automatically means the inclusion of another primefactor $p_m$, due to the fact that $\lambda(p_k)$ might contain $\lambda(p_m)$ as a divisor. That inclusion would also imply the primefactor $q_m$ with the same exponent, and thus the inclusion of other $q_n$, and so on. This might run into an infinite progression, which would then give a contradiction to the assumption that some pair of finite $(v,w)$ allows a solution.
ansatz b) we must - in the logic of a) - find that a candidate pair $(v,w)$ implies the inclusion of the bases themselves as primefactors to an exponent higher than wanted: for this example, that in the lhs the primefactor 5 is included to the power of 3 or more, or that in the rhs the primefactor 3 is included to the power of 4 or higher.
The case b) is the simpler one and can occur already when short lists of primefactors of $f_a(v)$ and $f_b(w)$ are checked after some $v$ and $w$ are recognized as mandatory to have equal primepowers at all.
The actual computation procedure is in principle the same as Will Jagy has done it, only that I provide an initial list of consecutive primes as possible primefactors of $f_a(v)$ and of $f_b(w)$, and keep their respective $\lambda_a(p_k), \lambda_b(q_k)$ and $\alpha_a(p_k),\alpha_b(q_k)$. Then, from the example inserted in (3),$$ \begin{array}{rcl}3^{3+v} - 5^{2+w} &= &3^3 -5^2 = 2 \\ 3^3(3^v-1) &=& 5^2(5^w-1) \\ {3^v-1 \over 5^2} &=& {5^w-1 \over 3^3}\end{array} $$we have that $\{3^v-1,5\}=2$ is required, so by $\{3^n - 1,5\} = [n:4](1+\{n,5\}) = 2 $ we find that $ [n:4]=1 $ and also $1+\{n,5\}=2$, and thus $n=4\cdot 5 = 20$; so the initial exponent $v_0$ must be set as $v_0=n=20$. Of course $v_0 = 20$ implies that other primefactors will be included (by hand we can simply create the list of those primefactors using factor(3^20-1) in Pari/GP). What I've got, including only the first 100 primes, is $$ \begin{array} {} p_k & \lambda_3(p_k) & \alpha_3(p_k) & y'\\ 2 & 1 & 1 & 1 \\ 5 & 4 & 1 & 1 \\ 11 & 5 & 2 & 2 \\ 61 & 10 & 1 & 1\end{array}$$ (The column $y'$ here means that, including the primefactor $p_k$ and using the thus-required value of $v$, we get $5$ to the power of $y'$ in $f_3(v)$.)
Similarly this can be done using $ \{5^w-1,3\} =3 $: from $ \{5^n-1,3\} = [n:2](1+\{n,3\}) = 3 \to n = 2 \cdot 3^2 $ we get $w_0 = 18$. In a similar way as before we find that other primefactors $q_k$ are now involved; see this:$$\small \begin{array} {} q_k & \lambda_5(q_k) & \alpha_5(q_k) & x' \\ 2 & 1 & 2 & 2 \\ 3 & 2 & 1 & 1 \\ 7 & 6 & 1 & 2 \\ 19 & 9 & 1 & 3 \\ 31 & 3 & 1 & 2 \end{array} $$Next, because all exponents of the involved primefactors $p_k$ and $q_k$ must be equal ($e_k = h_k$), we build the common set $C$ of involved primefactors having the maximum exponent $c_k=\max(e_k,h_k)$, excluding the primefactors which equal the mutual bases. That means, for instance, we have to increase $v_1$, such that $v_2=v_1 \cdot x$ and the prime $p = 31$ can occur in the list of $p_k$ with exponent $2$.
This is a very systematic job, given the above list of $\lambda$'s and $\alpha$'s and can be done using only a finite list of possible primefactors to include, say of length $100$.
This allows then a (relatively) simple algorithm which can be applied "blindly" to some problem.
1) Initialization: given the bases $a$ and $b$, select an upper bound maxk for primefactors in the primefactorization. Initialize the lists of $\lambda$ and $\alpha$ for $p_k$ and $q_k$ up to maxk primes with respect to base $b_1= 3$ and base $b_2 = 5$ and the required exponents $x=3$ and $y=2$. Compute the initial $v_1$ and $w_1$ from the condition that $5^2$ shall be a factor of $f_3(v)$ and $3^3$ shall be a factor of $f_5(w)$.
2.a) adaption: at iteration-step $i$ given $v_i$ produce the list of primefactors $p_k$ which would occur in $f_3(v_i)$ and given $w_i$ the list $q_k$ which would occur in $f_5(w_i)$ .
2.b) combination: create the combined list $C$ of all occuring primefactors with maximal occuring exponent and compute the required $v_{i+1}$ and $w_{i+1}$ which allow the occurence of all $C_k$ in $f_3(v_{i+1})$ and in $f_5(w_{i+1})$
Iterate steps 2.a and 2.b until either $f_3(v_i)$ contains too many primefactors $p_3 =5$ or $f_5(w_i)$ contains too many primefactors $p_2=3$. If this does not occur within a meaningful number of iterations, increase maxk and start again, or break with an inconclusive result.
With two iterations of the steps 2.a and 2.b I get the following with some simple Pari/GP-procedures:
maxk=100; b1=3; b2=5; x=3; y=2
init (b1,b2, x,y, maxk)
\\ result: v=20 w=18 {f_3(v) -1, 5}= 2=y {f_5(w) -1, 3}= 3 =x
adapt
\\ primeslist p_k = [2, 5, 11, 61]
\\ primeslist q_k = [2, 3, 7, 19, 31]
\\result : v=360 w=1980 {f_3(v) -1, 5}= 2=y {f_5(w) -1, 3}= 3 =x
adapt
\\ primeslist p_k = [2, 5, 7, 11, 13, 19, 31, 37, 41, 61, 73, 181, 241, 271]
\\ primeslist q_k = [2, 3, 7, 11, 13, 19, 23, 31, 37, 41, 61, 67, 71, 89, 181, 199, 331, 397, 521]
\\result : v=720720 w=11880 {f_3(v) -1, 5}= 2=y {f_5(w) -1, 3}= 4 >x !!
\\ here we get now the contradiction because f_5(w) has too many factors 3
Other than in Will Jagy's code, there is less "guessing" here: using a defined set of possible primefactors (the first maxk primes) and only the iterated adapt-function seems to provide the contradiction-result without further manual intervention and/or guesses - this is the reason I consider this a better solution, reflecting a "conceptual" ansatz.
The Pari/GP-code is not difficult and I can append them on request.
(errors, typos shall be removed when I detect them)
[update]: the essay with more systematic explanations was updated |
Look at the plot below. It generates noise according to the distribution you mentioned and averages it (with different numbers of measurements). What is shown is the histogram of the obtained averaged noise values. As you can see, the more you average, the more the noise concentrates toward the center. However, for $m \log m$ measurements, the noise still has significant contributions up to $2^{m-2}$; I don't know if you would consider these the LSBs. If you average more, you can get the bulk of the noise samples close to zero.
However, note that the minimum and maximum possible values of the averaged noise are still $\pm(2^m-1)$, which happens when all noise realizations are at the maximum (which becomes more and more unlikely when you have many measurements).
import numpy as np
import matplotlib.pyplot as plt

m = 16

def getnoise(realizations, measurements):
    # noise uniformly distributed on +/-[2^(m-1), 2^m - 1]
    lowest = 2**(m-1)
    highest = 2**m - 1
    shape = (realizations, measurements)
    sign = 1 - 2*(np.random.randn(*shape) > 0).astype(float)   # random +/-1
    value = np.random.uniform(lowest, highest, size=shape)
    return sign * value

N = 10000
plt.hist(getnoise(realizations=N, measurements=1).ravel(), bins=50, label='noise distribution')
plt.hist(np.mean(getnoise(realizations=N, measurements=2), axis=1), bins=50, label="2 average")
plt.hist(np.mean(getnoise(realizations=N, measurements=3), axis=1), bins=50, label="3 average")
plt.hist(np.mean(getnoise(realizations=N, measurements=int(m*np.log2(m))), axis=1), bins=50, label="mlogm average")
plt.hist(np.mean(getnoise(realizations=N, measurements=1000), axis=1), bins=50, label="1000 average")
plt.legend()
plt.show()
Some analytic result: Actually, the central limit theorem kicks in here. You have i.i.d. random variables of finite variance $\sigma^2=\frac{2}{3}(-1+2^m)^3$ (asking Mathematica):
Simplify[Integrate[x*x, {x, -(2^m - 1), 2^(m - 1)}] +
Integrate[x*x, {x, 2^(m - 1), 2^m - 1}]]
$=\frac{2}{3} \left(2^m-1\right)^3$
Hence, according to the central limit theorem, if you take the average of enough of those measurements, you will end up with a Gaussian distribution of variance
$$Var(\frac{1}{n}\sum_{i=0}^{n-1}X_i)\approx\frac{\sigma^2}{n},$$
So, with this result, you can calculate how many measurements you need to get the required reduction in noise.
RESULTS

Predicted PK
Peak: 34.8 mcg/mL
Trough: 17 mcg/mL (goal 15-20 mcg/mL)
AUC:MIC: 599 mcg*hr/mL

[Figure: Vancomycin Concentration Graph Over Time]

PK Parameters
Apparent CrCl: 75 mL/min
Vd: 73.5 L (0.7 L/kg)
Kel: 0.068 hr^-1
T1/2: 10.2 hrs
This website is intended to be used in conjunction with reasonable clinical judgment. This is not a substitute for clinical experience and expertise. An electronic tool cannot assess the clinical picture and patient-specific factors.
For more information, please view the website disclaimer
[Patient description, reason for consult]
Patient Metrics
Age: yrs
Height: [ ] in
ABW: 105 kg
IBW: [ ] kg
CrCl: [ ] mL/min
Labs/Vitals
BUN/SCr: [ ] / [ ]
WBC: [ ] ([ ]% PMN)
Tmax: [ ]
HR: [ ]
BP: [ ] / [ ]
O2Sat: [ ]
Current Antibiotics
[ ]
Recent doses/levels
[ ]
Cultures/Sensitivities
[ ]
Estimated Pharmacokinetic Parameters (based on one level)
Vd: 74 L (0.7 L/kg)
Kel: 0.068 hr^-1 (T1/2 = 10.2 hrs)
Dosing Recommendations
Vancomycin dose: 1500 mg IV Q12hrs (infused over 1.5 hrs)
Estimated peak: 34.8 mcg/mL
Estimated trough: 17 mcg/mL
Estimated AUC:MIC: 599 mcg*hr/mL (assumed MIC 1 mcg/mL)
A/P:
1. Recommend vancomycin 1500 mg IV Q12hrs (14 mg/kg)
2. Consider a vancomycin trough level prior to the 4th dose.
3. Please monitor renal function (urine output, BUN/SCr). Dose adjustments may be necessary with a significant change in renal function.
Please page with questions. Thank you for the consult.
[Signature, pager]
Elimination Constant (Kel) and End of Infusion Peak (EoIP)

$$Time = Tau - T_{infusion} - T_{prior\;to\;next\;dose} = 12-1.5- \frac{30\,min}{60} = 10\;hrs$$
$$\Delta C = (Peak - Trough) = \frac{Dose}{Vd}\cdot e^{-k\cdot T_{infusion}}$$
$$End\;of\;infusion\;Peak\;(EoIP) = \frac{Dose}{Vd}\cdot e^{-k\cdot T_{infusion}} + Trough$$
$$Trough=EoIP\cdot e^{-kt} = \left(\frac{Dose}{Vd}\cdot e^{-k\cdot T_{infusion}} + Trough\right)\cdot e^{-kt}$$
$$12.5=\left(\frac{1000}{73.5}\cdot e^{-k\cdot 1.5} + 12.5\right)\cdot e^{-k\cdot 10} \rightarrow k=0.068\;hr^{-1}$$
$$EoIP = \frac{Dose}{Vd}\cdot e^{-k\cdot T_{infusion}} + Trough = 13.61\cdot 0.9+12.5=24.8\;mcg/mL$$
Volume of Distribution
$$V_d = 0.7 \frac{L}{kg}*105\;kg = 73.5\;L$$
Half-life
$$T_{1/2} = \frac{0.693}{K_{el}} = \frac{0.693}{0.068\;hr^{-1}} = 10.2\;hrs$$
Tau
$$Tau = \frac{\ln(\frac{Peak}{Trough})}{K_{el}} + T_{infusion} = \frac{\ln(\frac{40}{15})}{0.068} + 1.5 = 14.4 + 1.5 = 15.9\;hrs \approx 12\;hrs\;\text{(rounded to a practical dosing interval)}$$
Estimation of Peak
$$ \\Dose = \frac{C_{peak}*T_{infusion}*Vd*K_{el}*(1-e^{(-K_{el}*Tau)})}{(1-e^{(-K_{el}*T_{infusion})})}\\ 1500 mg = \frac{C_{peak}*(1.5\;hr)*(73.5 L)*(0.068 hr^{-1})*(1-e^{(-0.068*12)})}{(1-e^{(-0.068*1.5)})} \\ \rightarrow C_{peak}=34.8\;mcg/mL$$
Estimation of Trough
$$\\ Cp=Cp^0*e^{(-kt)}\\ Trough = Peak*e^{(-kt)}\\ Trough = 34.8*e^{(-0.068*(12-1.5))} = 17\;mcg/mL$$
Calculation of AUC:MIC
$$ \\\\ Lin\;trap = \frac{Trough+Peak}{2}*(T_{infusion}) = \frac{17+34.8}{2}*1.5\\ \rightarrow Lin\;trap = 38.8mcg*h/mL\\ Log\;trap = \frac{(Peak-Trough)*(Tau-T_{infusion})}{\ln(\frac{Peak}{Trough})}\\ = \frac{(34.8-17)*(12-1.5)}{\ln(\frac{34.8}{17})} \; =260.9mcg*h/mL\\ AUC_{0-12} = (Lin\;trap) + (Log\;trap) = 38.8 + 260.9= 299.7\;mcg*h/mL\\ AUC_{0-24} = AUC_{0-12}*2=599.5\;mcg*h/mL\\ AUC_{0-24}:MIC\;ratio = AUC_{0-24} /1=599\;mcg*h/mL$$
About This Calculator
This vancomycin calculator uses a variety of published pharmacokinetic equations and principles to estimate a vancomycin dosing regimen for a patient. A regimen can be completely empiric, where the vancomycin dose is based on body weight and creatinine clearance, or a regimen may be calculated based on one or more vancomycin levels.
Our vancomycin calculator was specifically designed to help students and clinicians understand the process of calculating a vancomycin regimen. When a vancomycin regimen is calculated, each step in the dosing process is fully enumerated and visible by clicking the "Equations" tab.
In addition to being designed for students, this calculator was also intended with the practicing clinician in mind. All dosing regimens are rounded to the nearest 250 mg with appropriate dosing intervals (eg, Q8hr, Q12hr, Q24hr) to reflect clinical practice. Additionally, after calculating a dosing regimen, a pharmacokinetic progress note template is automatically generated for your convenience.
After calculating a dose, click on 'Progress Note' for a pharmacokinetic template or 'Equations' for a step-by-step explanation of the recommended dosing regimen.

Major Updates to this Calculator
2015-06-07 - Drug elimination is accounted for during the infusion time.
Inappropriate Populations for This Calculator
This calculator is NOT appropriate for the following patient populations or may require a higher degree of clinical judgment:
Unstable renal function
Vancomycin MIC ≥ 2 mcg/mL
Population Estimate of Kel
Because vancomycin is primarily renally eliminated, the elimination constant (Kel) is directly related to creatinine clearance (CrCl). While several population estimates exist, this calculator uses the Creighton equation [1] to estimate Kel for a given CrCl using the Cockcroft-Gault method: [2]
$$ K_{el} = 0.00083*(CrCl) + 0.0044 $$
Importantly, this method relies on an accurate creatinine clearance; therefore, this method may not be appropriate in patients with unstable renal function or other characteristics that make creatinine clearance difficult to estimate (eg, obesity, elderly, amputations, etc.). Furthermore, it should be emphasized that this is merely an estimate of Kel -- there are many other equations to generate an estimate.
Estimate of Kel from a Trough Level
This calculator can use a single vancomycin trough to estimate true vancomycin clearance (rather than a population estimate from creatinine clearance). This calculator assumes that the vancomycin trough is drawn at steady state, which occurs prior to the fourth vancomycin dose (assuming a consistent dose and dosing regimen). [3]
Population Estimate of Vd
By default, this calculator suggests a population estimate volume of distribution (Vd) of 0.7 L/kg for vancomycin. There is actually a large variation in the literature, with Vd being described between 0.5 and 1 L/kg. [4] There is some evidence that patients with reduced creatinine clearance (CrCl < 60 mL/min) have a larger Vd of 0.83 L/kg, whereas patients with preserved renal function (CrCl ≥ 60 mL/min) have a smaller Vd of 0.57 L/kg. [5]

Actual or Ideal Dosing Body Weight
Although data are limited, it is recommended that vancomycin be initially dosed on actual body weight (not ideal or adjusted weight), even in obese patients. [3] Clinically, this dose may be capped at a specific weight (eg, 120 kg) or dose (eg, 2500 mg), although this practice has not been prospectively studied.
While vancomycin is dosed on actual body weight, it should be noted that creatinine clearance (which may be used to empirically estimate Kel) is based on ideal or adjusted body weight. For more information on the appropriate body weight, see Creatinine Clearance - Adjustments for Obesity.
Core Pharmacokinetic Equations
This vancomycin calculator uses three "core" clinical pharmacokinetic equations that are well described for intermittent intravenous infusions assuming a one-compartment model: [4]
$$ Cp=Cp^0*e^{(-kt)} $$
This equation describes how an initial drug concentration (Cp0) declines to a final drug concentration (Cp) over a specified period of time (t), assuming an elimination constant (k).
$$ \Delta C = \frac{Dose}{Vd} $$
This equation describes how the change in concentration (ΔC = Cfinal - Cinitial) is related to a given dose and volume of distribution (Vd).
$$ Dose = \frac{C_{peak}*T_{infusion}*Vd*K_{el}*(1-e^{(-K_{el}*Tau)})}{(1-e^{(-K_{el}*T_{infusion})})} $$
This large equation calculates an appropriate drug dose assuming a goal peak drug level (Cpeak), volume of distribution (Vd), elimination constant (Kel), dosing frequency (Tau), and infusion time (Tinfusion).
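The worked example above can be reproduced directly from these three equations; the following sketch (my own function names, not ClinCalc code) plugs in the numbers from the results section:

import math

def conc_decay(c0, k, t):
    return c0 * math.exp(-k * t)          # Cp = Cp0 * e^(-k t)

def steady_state_peak(dose, vd, kel, tau, t_inf):
    # the dose equation above, solved for Cpeak
    return (dose * (1 - math.exp(-kel * t_inf))) / (
        t_inf * vd * kel * (1 - math.exp(-kel * tau)))

peak = steady_state_peak(1500, 73.5, 0.068, 12, 1.5)
trough = conc_decay(peak, 0.068, 12 - 1.5)
print(round(peak, 1), round(trough, 1))   # ~34.8 and ~17.0 mcg/mL, as above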
Therapeutic Targets: Trough Level
Because an AUC:MIC goal value is difficult to calculate, many clinicians continue to use a goal vancomycin trough level as the therapeutic target of choice. As mentioned in guidelines, [3] an AUC of 400 may be achieved with a peak of about 40 mcg/mL and a trough of about 15 mcg/mL. This method is often used as an alternative to direct AUC calculations.
Current guidelines make the following suggestions regarding the optimal vancomycin trough level: [3]
All patients should achieve a minimum trough level of > 10 mcg/mL
Patients with complicated infections should have a goal vancomycin trough of 15 to 20 mcg/mL. Complicated infections include:
Pathogen MIC of 1 mcg/mL
Bacteremia
Endocarditis
Osteomyelitis
Meningitis
Hospital-acquired pneumonia caused by Staph aureus
Therapeutic Targets: AUC:MIC
Although vancomycin has been on the market since the 1950s, there is still considerable controversy regarding the optimal monitoring parameter to maximize efficacy and minimize toxicity. Current guidelines recommend an AUC:MIC ratio of ≥ 400, although this goal is largely based on weak evidence. [3, 6, 7, 8]
In order for an AUC:MIC ratio to be calculated, both a peak and trough level must be known. Because only trough levels are drawn clinically, a peak level must be estimated. The following equation is used to estimate vancomycin's area under the curve (AUC): [5]
$$
\\ Lin\;trap = \frac{Trough+Peak}{2}*(T_{infusion})
\\ Log\;trap = \frac{(Peak-Trough)*(Tau-T_{infusion})}{\ln(\frac{Peak}{Trough})}
\\ AUC_{0-Tau} = (Lin\;trap) + (Log\;trap)
\\ AUC_{0-24} = AUC_{0-Tau}*(24/Tau) = AUC\;in\;mcg*h/mL
$$
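Plugging the example's numbers into these trapezoid formulas reproduces the AUC figure from the results section (a sketch of my own):

import math

peak, trough, tau, t_inf = 34.8, 17.0, 12, 1.5
lin = (trough + peak) / 2 * t_inf                                # linear trapezoid
log = (peak - trough) * (tau - t_inf) / math.log(peak / trough)  # log trapezoid
print(round((lin + log) * (24 / tau)))                           # ~599 mcg*h/mL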
For more information about AUC:MIC and pharmacodynamic killing, see Vancomycin AUC:MIC versus T>MIC.
Vancomycin Loading Dose
In seriously ill patients, a loading dose of 25-30 mg/kg (actual body weight) may be considered. [3] This practice has a theoretical benefit of attaining therapeutic vancomycin levels earlier, but has not been extensively studied. This approach is not supported by evidence from large clinical trials; therefore, the safety and efficacy of a loading dose practice have not been established.
Regardless, based on expert opinion, the following patient populations may be considered for such a loading dose when MRSA is suspected: [9]
Sepsis
Meningitis
Pneumonia
Infective endocarditis
Severe skin/soft tissue infection
One common question is a maximum vancomycin loading dose. As stated above, there is a lack of evidence supporting loading doses, let alone loading doses in morbidly obese patients. While not recommended by the guidelines, many institutions will "cap" a loading dose at approximately 2000 to 3000 mg. This calculator, when providing a loading dose, will cap at 3000 mg. A vancomycin loading dose cap of 3000 mg represents a maximum weight of 120 kg for a dose of 25 mg/kg.
References and Additional Reading

1. Matzke GR, McGory RW, Halstenson CE, Keane WF. Pharmacokinetics of vancomycin in patients with various degrees of renal function. Antimicrob Agents Chemother. 1984;25(4):433-7. PMID 6732213.
2. Cockcroft DW, Gault MH. Prediction of creatinine clearance from serum creatinine. Nephron. 1976;16(1):31-41. PMID 1244564.
3. Rybak M, Lomaestro B, Rotschafer JC, et al. Therapeutic monitoring of vancomycin in adult patients: a consensus review of the American Society of Health-System Pharmacists, the Infectious Diseases Society of America, and the Society of Infectious Diseases Pharmacists. Am J Health Syst Pharm. 2009;66(1):82-98. PMID 19106348.
4. Bauer LA. Chapter 5. Vancomycin. In: Bauer LA, ed. Applied Clinical Pharmacokinetics. 2nd ed. New York: McGraw-Hill; 2008.
5. DeRyke CA, Alexander DP. Optimizing vancomycin dosing through pharmacodynamic assessment targeting area under the concentration-time curve/minimum inhibitory concentration. Hospital Pharmacy. 2009;44(9):751-765.
6. Craig WA. Basic pharmacodynamics of antibacterials with clinical applications to the use of beta-lactams, glycopeptides, and linezolid. Infect Dis Clin North Am. 2003;17(3):479-501. PMID 14711073.
7. Moise-Broder PA, Forrest A, Birmingham MC, et al. Pharmacodynamics of vancomycin and other antimicrobials in patients with Staphylococcus aureus lower respiratory tract infections. Clin Pharmacokinet. 2004;43(13):925-42. PMID 15509186.
8. Jeffres MN, Isakow W, Doherty JA, et al. Predictors of mortality for methicillin-resistant Staphylococcus aureus health-care-associated pneumonia: specific evaluation of vancomycin pharmacokinetic indices. Chest. 2006;130(4):947-55. PMID 17035423.
9. Liu C, Bayer A, Cosgrove SE, et al. Clinical practice guidelines by the Infectious Diseases Society of America for the treatment of methicillin-resistant Staphylococcus aureus infections in adults and children. Clin Infect Dis. 2011;52(3):e18-55. PMID 21208910.
I am aware of matched filter and its application. Now, wondering if there is any application of inverse matched filter? What I mean my inverse matched filter is that convolution of matched filter and the inverse matched filter would lead to close to delta function.
In your question, you postulate the existence of two filters, let's say $p(t)$ and $g(t)$, such that $p(t) \star g(t)=\delta(t)$. This assumption is problematic because it implies that $G(f)=1/P(f)$, which requires $P(f) \neq 0$ for all $f$. Furthermore, for small values of $P(f)$, $G(f)$ will take arbitrarily large values (see Dilip Sarwate's comments above and below this answer).
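A tiny numeric illustration of both points (a sketch of my own, using circular convolution via the FFT): the inverse filter exists here only because this particular $P(f)$ never vanishes, and its noise gain is governed by the smallest $|P(f)|$.

import numpy as np

n = 64
p = np.zeros(n); p[:4] = [1.0, 0.6, 0.3, 0.1]    # pulse p(t); its P(f) has no zeros
P = np.fft.fft(p)
g = np.fft.ifft(1.0 / P).real                    # "inverse filter": G(f) = 1/P(f)
d = np.fft.ifft(np.fft.fft(g) * P).real          # circular convolution p * g
print(np.allclose(d, np.eye(n)[0], atol=1e-9))   # True: essentially a delta
print(1 / np.abs(P).min())                       # worst-case noise amplification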
Assuming (for the sake of discussion) the existence of such a pair of filters, there may not be any advantage to using them. I'll describe a possible application from digital communications. Let's say you want to transmit a number $A$ over an analog channel. You choose an appropriate analog pulse $p(t)$ and transmit $$s(t) = A\delta(t)\star p(t).$$ (The pulse $p(t)$ could be chosen to fit the channel bandwidth, and/or to have a certain energy). The receiver could perform the "inverse filter" operation on the signal $s(t)$: $$s(t) \star g(t)=A\delta(t),$$ where $g(t)$ is your "inverse matched filter". However, in practice it turns out that detecting $A$ in this way is not optimal, because it does not have the best signal-to-noise ratio. In other words, when you add noise to the transmitted signal, the received signal becomes $$r(t)=s(t)+n(t).$$ In this case, filtering $r(t)$ with the "inverse matched filter" $g(t)$ results in a worse signal-to-noise ratio than filtering with the filter matched to $p(t)$.
If you have access to it, I highly recommend "An Introduction to Matched Filters", by G. Turin, IRE Transactions on Information Theory, June 1960. Your question is answered in page 318. |
I have the calculation: $2^{31}\pmod {2925}$
It's for university and we should solve it like:
1. make the prime partition
2. compute $2^{31}$ mod all prime partitions
3. solve with the Chinese Remainder Theorem
I started with $2925 = 3 \cdot 3 \cdot 5 \cdot 5 \cdot 13$ , and found out that: $$2^{31} \equiv 2 \pmod{3}$$ $$2^{31} \equiv 3 \pmod{5}$$ $$2^{31} \equiv 11 \pmod{13}$$ I made: $$x \equiv 2 \pmod3$$ $$x \equiv 3 \pmod5$$ $$x \equiv 11 \pmod{13}$$
Then I tried CRT and got $x = -1237 + 195k$
If you simply calculate $2^{31}\pmod{ 2925}$ you get $1298$, which is in fact $-1237 + 195 \cdot 13$.
I don't know how to find out the $13$.
Any help appreciated.
EDIT:
SOLVED! I took $3$ instead of $9$ and $5$ instead of $25$ after the prime partition. For more info please see the comments. Thanks!
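For reference, the corrected computation with the prime-power moduli takes only a few lines (a sketch of my own, using sympy's crt):

from sympy.ntheory.modular import crt

moduli = [9, 25, 13]                          # prime powers, not the primes 3 and 5
residues = [pow(2, 31, m) for m in moduli]    # [2, 23, 11]
x, modulus = crt(moduli, residues)
print(x, modulus)            # 1298 2925
print(pow(2, 31, 2925))      # 1298, matching the direct computation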
Since $\mathbb{F}_9$ is a field, its units $\mathbb{F}_{9}^* = \{1,2,3,4,5,6,7,8\}$ should form a multiplicative group. However, in this group $3 \times 3 = 9 = 0 \notin \mathbb{F}_{9}^*$. I'm trying to understand how this is possible. Don't rush me, since I'm new to the literature.
$\Bbb F_9$ is a quotient ring of the polynomial ring $\Bbb F_3[X]$. As such, the elements of $\Bbb F_9$ are written as $a+bX +(f)$ where $a,b\in\Bbb F_3$ and $f$ is an irreducible quadratic polynomial over $\Bbb F_3$. Usually we shorten this to $a+bx$, where $x$ is thought of one of the two roots of $f$.
Addition is done the regular way, and multiplication is done as with regular polynomials, then reduced through $f$ to be on the above form again. Exactly which $f$ you choose is up to you, but be consistent.
The elements of $\Bbb F_9^\times$ are $$1,2,\\x,x+1,x+2,\\2x,2x+1,2x+2$$ An example of multiplication, using $f(X)=X^2-2$, meaning $x^2-2=0$, or $x^2=2$ (coefficients are reduced mod $3$ throughout, so $6x=0$ and $4=1$): $$ (x+2)(2x+2)=2x^2+6x+4\\ =2x^2+1=2\cdot2+1=5=2 $$
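If it helps to experiment, here is a minimal sketch of $\Bbb F_9$ as pairs $(a,b)$ standing for $a+bx$ with $x^2=2$ (my own representation):

def mul(u, v, q=3, s=2):          # s = x^2; coefficients reduced mod q
    a, b = u; c, d = v
    return ((a*c + s*b*d) % q, (a*d + b*c) % q)

print(mul((2, 1), (2, 2)))        # (x+2)(2x+2) -> (2, 0), i.e. the element 2

# every nonzero element has a multiplicative inverse, so F_9^x is a group
units = [(a, b) for a in range(3) for b in range(3) if (a, b) != (0, 0)]
assert all(any(mul(u, v) == (1, 0) for v in units) for u in units)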
The error is that $\Bbb{F}_9$ is not $\Bbb{Z}/9$. For any field $K$ we have that $K^{\times}$ is a multiplicative group because $K$ is a field. But $\Bbb{Z}/9$ is not a field, as $3\cdot 3=0$ and $3\neq 0$.
Reference: This duplicate. |
Let $p_n$ denote the $n^\text{th}$ prime. Find a lower bound for $\left|S\right|$ where
$$S = \left\{ q \in \mathbb{N} \mid q \text{ is prime and } p_n - n \leq q \leq p_n + n \right\}. $$
Any good bounds known? See this graph: http://oeis.org/A097935/graph
[Edit 1] Let
$$ a = n\left(\ln(n) + \frac{13}{500}\right), \ \ b = n\left(\ln(n) - \frac{987}{500}\right)$$ and $$ r = \frac{a}{\ln(a)-1} - \frac{b}{\ln(b)-23/20} - 2. $$
Conjecture 1: For $n \ge 12$ $$ \lfloor r \rfloor \le |S|. $$
[Edit 2] It seems that we can push things a little further. Let
$$ s = \frac{a}{\ln(a)-1} - \frac{b}{\ln(b)-23/20} + \frac{a}{\ln(a)-23/20} - \frac{b}{\ln(b)-1} $$
Conjecture 2: $s/2$ and $|S|$ are asymptotically equivalent:
$$ \frac{s}{2} \sim |S| $$
For the fun of it, let's look at a numerical example. Let $n = 10000$. Then $|S| = 1715$ and $s/2 = 1762.31\ldots$
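For readers who want to reproduce the numbers, a sketch in Python (assuming, as in the definition above, that $|S| = \pi(p_n + n) - \pi(p_n - n - 1)$; requires sympy):

```python
from math import log
from sympy import prime, primepi

n = 10000
p = prime(n)                                  # p_n
S = primepi(p + n) - primepi(p - n - 1)       # primes in [p_n - n, p_n + n]

a = n * (log(n) + 13 / 500)
b = n * (log(n) - 987 / 500)
s = (a / (log(a) - 1) - b / (log(b) - 23 / 20)
     + a / (log(a) - 23 / 20) - b / (log(b) - 1))

print(S, s / 2)   # 1715 and roughly 1762.3, as quoted above
```
|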
Measurement of electrons from beauty hadron decays in pp collisions at $\sqrt{s}$ = 7 TeV
(Elsevier, 2013-04-10)
The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT <8 GeV/c with the ALICE experiment at the CERN LHC in ...
Multiplicity dependence of the average transverse momentum in pp, p-Pb, and Pb-Pb collisions at the LHC
(Elsevier, 2013-12)
The average transverse momentum $\langle p_T \rangle$ versus the charged-particle multiplicity $N_{ch}$ was measured in p-Pb collisions at a collision energy per nucleon-nucleon pair $\sqrt{s_{NN}}$ = 5.02 TeV and in pp collisions at ...
Directed flow of charged particles at mid-rapidity relative to the spectator plane in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(American Physical Society, 2013-12)
The directed flow of charged particles at midrapidity is measured in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV relative to the collision plane defined by the spectator nucleons. Both the rapidity-odd ($v_1^{odd}$) and ...
Long-range angular correlations of π, K and p in p–Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(Elsevier, 2013-10)
Angular correlations between unidentified charged trigger particles and various species of charged associated particles (unidentified particles, pions, kaons, protons and antiprotons) are measured by the ALICE detector in ...
Anisotropic flow of charged hadrons, pions and (anti-)protons measured at high transverse momentum in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2013-03)
The elliptic, $v_2$, triangular, $v_3$, and quadrangular, $v_4$, azimuthal anisotropic flow coefficients are measured for unidentified charged particles, pions, and (anti-)protons in Pb–Pb collisions at $\sqrt{s_{NN}}$ = ...
Measurement of inelastic, single- and double-diffraction cross sections in proton-proton collisions at the LHC with ALICE
(Springer, 2013-06)
Measurements of cross sections of inelastic and diffractive processes in proton-proton collisions at LHC energies were carried out with the ALICE detector. The fractions of diffractive processes in inelastic collisions ...
Transverse Momentum Distribution and Nuclear Modification Factor of Charged Particles in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(American Physical Society, 2013-02)
The transverse momentum ($p_T$) distribution of primary charged particles is measured in non single-diffractive p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV with the ALICE detector at the LHC. The $p_T$ spectra measured ...
Mid-rapidity anti-baryon to baryon ratios in pp collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV measured by ALICE
(Springer, 2013-07)
The ratios of yields of anti-baryons to baryons probe the mechanisms of baryon-number transport. Results for anti-proton/proton, anti-$\Lambda/\Lambda$, anti-$\Xi^{+}/\Xi^{-}$ and anti-$\Omega^{+}/\Omega^{-}$ in pp ...
Charge separation relative to the reaction plane in Pb-Pb collisions at $\sqrt{s_{NN}}$= 2.76 TeV
(American Physical Society, 2013-01)
Measurements of charge dependent azimuthal correlations with the ALICE detector at the LHC are reported for Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV. Two- and three-particle charge-dependent azimuthal correlations ...
Multiplicity dependence of two-particle azimuthal correlations in pp collisions at the LHC
(Springer, 2013-09)
We present the measurements of particle pair yields per trigger particle obtained from di-hadron azimuthal correlations in pp collisions at $\sqrt{s}$=0.9, 2.76, and 7 TeV recorded with the ALICE detector. The yields are ... |
Measurement of $J/\psi$ production as a function of event multiplicity in pp collisions at $\sqrt{s} = 13\,\mathrm{TeV}$ with ALICE
(Elsevier, 2017-11)
The availability at the LHC of the largest collision energy in pp collisions allows a significant advance in the measurement of $J/\psi$ production as a function of event multiplicity. The interesting relative increase ...
Multiplicity dependence of jet-like two-particle correlations in pp collisions at $\sqrt s$ =7 and 13 TeV with ALICE
(Elsevier, 2017-11)
Two-particle correlations in relative azimuthal angle (Δϕ) and pseudorapidity (Δη) have been used to study heavy-ion collision dynamics, including medium-induced jet modification. Further investigations also showed the ...
The new Inner Tracking System of the ALICE experiment
(Elsevier, 2017-11)
The ALICE experiment will undergo a major upgrade during the next LHC Long Shutdown scheduled in 2019–20 that will enable a detailed study of the properties of the QGP, exploiting the increased Pb-Pb luminosity ...
Azimuthally differential pion femtoscopy relative to the second and third harmonic in Pb-Pb 2.76 TeV collisions from ALICE
(Elsevier, 2017-11)
Azimuthally differential femtoscopic measurements, being sensitive to spatio-temporal characteristics of the source as well as to the collective velocity fields at freeze-out, provide very important information on the ...
Charmonium production in Pb–Pb and p–Pb collisions at forward rapidity measured with ALICE
(Elsevier, 2017-11)
The ALICE collaboration has measured the inclusive charmonium production at forward rapidity in Pb–Pb and p–Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV and $\sqrt{s_{NN}}$ = 8.16 TeV, respectively. In Pb–Pb collisions, the J/ψ and ψ(2S) nuclear ...
Investigations of anisotropic collectivity using multi-particle correlations in pp, p-Pb and Pb-Pb collisions
(Elsevier, 2017-11)
Two- and multi-particle azimuthal correlations have proven to be an excellent tool to probe the properties of the strongly interacting matter created in heavy-ion collisions. Recently, the results obtained for multi-particle ...
Jet-hadron correlations relative to the event plane at the LHC with ALICE
(Elsevier, 2017-11)
In ultra relativistic heavy-ion collisions at the Large Hadron Collider (LHC), conditions are met to produce a hot, dense and strongly interacting medium known as the Quark Gluon Plasma (QGP). Quarks and gluons from incoming ...
Measurements of the nuclear modification factor and elliptic flow of leptons from heavy-flavour hadron decays in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 and 5.02 TeV with ALICE
(Elsevier, 2017-11)
We present the ALICE results on the nuclear modification factor and elliptic flow of electrons and muons from open heavy-flavour hadron decays at mid-rapidity and forward rapidity in Pb--Pb collisions at $\sqrt{s_{\rm NN}}$ ...
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
In the article Automata and semigroups recognizing infinite words an automaton is specified by $\mathcal A = (Q, A, E, I, F)$ where $I$ is a set of initial states and $F$ a set of final states, $Q$ its state set, $E \subseteq Q\times A \times Q$ the transitions and $A$ its alphabet. It is called
deterministic if for every $q \in Q$, $a \in A$ there is at most one state $q' \in Q$ such that $(q, a, q') \in E$ and $|I| = 1$. And it is called complete if for every $q \in Q$, $a \in A$ there exists at least one $q' \in Q$ such that $(q,a,q') \in E$. Then on page 21 (section 7.1) the following further definitions are made:
A path is called initial if its first state is in $I$, a state is called accessible if there exists an initial path to it, and it is called co-accessible if there exists a final path starting at this state. An automaton is called trim if every state is accessible and co-accessible. The above notion of determinism is called local determinism, and a global notion is defined by requiring $|I| = 1$ and that every word is the label of at most one initial path. Similarly, a trim automaton is globally co-deterministic if every word is the label of at most one final path. And it is called locally co-deterministic if all transitions are co-deterministic, which means that we have no two transitions starting in different states, with the same letter, that lead to a common state. Then they write:
The local and global notions of co-determinism [...] are equivalent for finite words [page 23]
But what about, for example, the automaton with three states $q_0, q_1, q_2$, where $q_0$ is initial, $q_1$ and $q_2$ are both final, and transitions \begin{align*} q_0 & \quad \mathrel{\mathop{\rightarrow}^{a}} \quad q_1 \\ q_0 & \quad \mathrel{\mathop{\rightarrow}^{a}} \quad q_2. \end{align*} This is surely not deterministic, but it is trim and locally co-deterministic, yet not globally co-deterministic, as $a$ is the label of two final paths?
So do I interpret these notions wrongly? Why are the global and local notions of co-determinism equivalent for finite words, i.e. an automaton is locally co-deterministic if and only if it is globally co-deterministic on finite words?
I see that for the notions of determinism both variants are equivalent (for finite and for infinite words), but for that equivalence it is crucial that determinism also requires exactly one initial state, whereas in general we have more than one final state. |
The weighted mean is similar to an arithmetic mean (the most common type of average), where instead of each of the data points contributing equally to the final average, some data points contribute more than others. The notion of weighted mean plays a role in descriptive statistics and also occurs in a more general form in several other areas of mathematics.
If all the weights are equal, then the weighted mean is the same as the arithmetic mean. While weighted means generally behave in a similar fashion to arithmetic means, they do have a few counterintuitive properties, as captured for instance in Simpson's paradox.
Examples
Basic example
Given two school classes, one with 20 students, and one with 30 students, the grades in each class on a test were:
Morning class = 62, 67, 71, 74, 76, 77, 78, 79, 79, 80, 80, 81, 81, 82, 83, 84, 86, 89, 93, 98
Afternoon class = 81, 82, 83, 84, 85, 86, 87, 87, 88, 88, 89, 89, 89, 90, 90, 90, 90, 91, 91, 91, 92, 92, 93, 93, 94, 95, 96, 97, 98, 99
The straight average for the morning class is 80 and the straight average of the afternoon class is 90. The straight average of 80 and 90 is 85, the mean of the two class means. However, this does not account for the difference in number of students in each class (20 versus 30); hence the value of 85 does not reflect the average student grade (independent of class). The average student grade can be obtained by averaging all the grades, without regard to classes (add all the grades up and divide by the total number of students):
$$\bar{x} = \frac{4300}{50} = 86.$$
Or, this can be accomplished by weighting the class means by the number of students in each class (using a weighted mean of the class means):
$$\bar{x} = \frac{20 \times 80 + 30 \times 90}{20 + 30} = 86.$$
Thus, the weighted mean makes it possible to find the average student grade in the case where only the class means and the number of students in each class are available.
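As a quick sanity check, a minimal Python sketch of this example:

```python
morning = [62, 67, 71, 74, 76, 77, 78, 79, 79, 80,
           80, 81, 81, 82, 83, 84, 86, 89, 93, 98]
afternoon = [81, 82, 83, 84, 85, 86, 87, 87, 88, 88, 89, 89, 89, 90, 90,
             90, 90, 91, 91, 91, 92, 92, 93, 93, 94, 95, 96, 97, 98, 99]

means = [sum(morning) / len(morning), sum(afternoon) / len(afternoon)]  # 80, 90
weights = [len(morning), len(afternoon)]                                # 20, 30

weighted = sum(w * m for w, m in zip(weights, means)) / sum(weights)
pooled = sum(morning + afternoon) / (len(morning) + len(afternoon))
print(weighted, pooled)   # both 86.0
```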
Convex combination example
Since only the relative weights are relevant, any weighted mean can be expressed using coefficients that sum to one. Such a linear combination is called a convex combination.
Using the previous example, we would get the following:
$$\frac{20}{20 + 30} = 0.4, \qquad \frac{30}{20 + 30} = 0.6$$
$$\bar{x} = \frac{(0.4\times80) + (0.6\times90)}{0.4 + 0.6} = 86.$$
Mathematical definition
Formally, the weighted mean of a non-empty set of data $\{x_1, x_2, \dots, x_n\}$ with non-negative weights $\{w_1, w_2, \dots, w_n\}$ is the quantity
$$\bar{x} = \frac{\sum_{i=1}^n w_i x_i}{\sum_{i=1}^n w_i},$$
which means:
$$\bar{x} = \frac{w_1 x_1 + w_2 x_2 + \cdots + w_n x_n}{w_1 + w_2 + \cdots + w_n}.$$
Therefore data elements with a high weight contribute more to the weighted mean than do elements with a low weight. The weights cannot be negative. Some may be zero, but not all of them (since division by zero is not allowed).
The formulas are simplified when the weights are normalized such that they sum up to $1$, i.e. $\sum_{i=1}^n w_i = 1$. For such normalized weights the weighted mean is simply $\bar{x} = \sum_{i=1}^n w_i x_i$.
Note that one can always normalize the weights by making the following transformation on the weights: $w_i' = w_i / \sum_{j=1}^n w_j$. Using the normalized weights yields the same results as when using the original weights. Indeed,
$$\bar{x} = \sum_{i=1}^n w_i' x_i = \sum_{i=1}^n \frac{w_i}{\sum_{j=1}^n w_j} x_i = \frac{\sum_{i=1}^n w_i x_i}{\sum_{j=1}^n w_j}.$$
The common mean is a special case of the weighted mean where all data have equal weights, $w_i = w$. When the weights are normalized, then $w_i' = \frac{1}{n}$.
Statistical properties
The weighted sample mean, $\bar{x}$, with normalized weights (weights summing to one) is itself a random variable. Its expected value and standard deviation are related to the expected values and standard deviations of the observations as follows.
If the observations have expected values $E(x_i) = \mu_i$, then the weighted sample mean has expectation $E(\bar{x}) = \sum_{i=1}^n w_i \mu_i$. In particular, if the expectations of all observations are equal, $\mu_i = \mu$, then the expectation of the weighted sample mean will be the same, $E(\bar{x}) = \mu$.
For uncorrelated observations with standard deviations $\sigma_i$, the weighted sample mean has standard deviation $\sigma(\bar{x}) = \sqrt{\sum_{i=1}^n w_i^2 \sigma_i^2}$. Consequently, when the standard deviations of all observations are equal, $\sigma_i = \sigma$, the weighted sample mean will have standard deviation $\sigma \sqrt{V_2}$. Here $V_2$ is the quantity
$$V_2 = \sum_{i=1}^n w_i^2,$$
such that $1/n \le V_2 \le 1$. It attains its minimum value for equal weights, and its maximum when all weights except one are zero. In the former case we have $\sigma(\bar{x}) = \sigma/\sqrt{n}$, which is related to the central limit theorem.
Note that, because one can always transform non-normalized weights into normalized weights, all formulas in this section can be adapted to non-normalized weights by replacing all $w_i$ by $w_i / \sum_{j=1}^n w_j$.
Dealing with variance
For the weighted mean of a list of data for which each element $x_i$ comes from a different probability distribution with known variance $\sigma_i^2$, one possible choice for the weights is given by:
$$w_i = \frac{1}{\sigma_i^2}.$$
The weighted mean in this case is:
$$\bar{x} = \frac{\sum_{i=1}^n x_i w_i}{\sum_{i=1}^n w_i},$$
and the variance of the weighted mean is:
$$\sigma_{\bar{x}}^2 = \frac{1}{\sum_{i=1}^n w_i} = \frac{1}{\sum_{i=1}^n \sigma_i^{-2}},$$
which reduces to $\sigma_{\bar{x}}^2 = \sigma^2 / n$ when all $\sigma_i = \sigma$.
The significance of this choice is that this weighted mean is the maximum likelihood estimator of the mean of the probability distributions under the assumption that they are independent and normally distributed with the same mean.
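A minimal sketch of this inverse-variance weighting (the measurement values are my own toy numbers):

```python
x = [10.2, 9.8, 10.5]      # repeated measurements of the same quantity
sigma = [0.1, 0.2, 0.5]    # their known standard deviations

w = [1 / s ** 2 for s in sigma]                          # w_i = 1 / sigma_i^2
xbar = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)     # weighted mean
var_xbar = 1 / sum(w)                                    # variance of the mean

print(xbar, var_xbar ** 0.5)   # the most precise measurement dominates
```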
Correcting for over- or under-dispersion
Weighted means are typically used to find the weighted mean of experimental data, rather than theoretically generated data. In this case, there will be some error in the variance of each data point. Typically experimental errors may be underestimated because the experimenter does not take into account all sources of error in calculating the variance of each data point. In this event, the variance in the weighted mean must be corrected to account for the fact that the reduced chi-squared $\chi^2_\nu$ is too large. The correction that must be made is
$$\sigma_{\bar{x}}^2 \to \sigma_{\bar{x}}^2 \, \chi^2_\nu,$$
where $\chi^2_\nu$ is the chi-squared statistic divided by the number of degrees of freedom, in this case $n - 1$:
$$\chi^2_\nu = \frac{1}{n-1} \sum_{i=1}^n \frac{(x_i - \bar{x})^2}{\sigma_i^2}.$$
This gives the variance in the weighted mean as:
$$\sigma_{\bar{x}}^2 = \frac{1}{\sum_{i=1}^n \sigma_i^{-2}} \cdot \frac{1}{n-1} \sum_{i=1}^n \frac{(x_i - \bar{x})^2}{\sigma_i^2}.$$
When all data variances are equal, $\sigma_i = \sigma_0$, they cancel out in the weighted mean variance, $\sigma_{\bar{x}}^2$, which then reduces to the standard error of the mean (squared), $\sigma_{\bar{x}}^2 = \sigma^2 / n$, in terms of the sample standard deviation (squared), $\sigma^2 = \sum_{i=1}^n (x_i - \bar{x})^2 / (n-1)$.
Weighted sample variance
Typically when a mean is calculated it is important to know the variance and standard deviation about that mean. When a weighted mean is used, the variance of the weighted sample is different from the variance of the unweighted sample. The biased weighted sample variance is defined similarly to the normal biased sample variance:
$$\sigma^2 = \frac{\sum_{i=1}^N \left(x_i - \mu\right)^2}{N}$$
$$\sigma^2_\mathrm{weighted} = \frac{\sum_{i=1}^N w_i \left(x_i - \mu^*\right)^2}{V_1},$$
where $V_1 = \sum_{i=1}^N w_i$, which is $1$ for normalized weights.
For small samples, it is customary to use an unbiased estimator for the population variance. In normal unweighted samples, the $N$ in the denominator (corresponding to the sample size) is changed to $N - 1$. While this is simple in unweighted samples, it is not straightforward when the sample is weighted.
If each $x_i$ is drawn from a Gaussian distribution with variance $1/w_i$, the unbiased estimator of a weighted population variance is given by: [1]
$$s^2 = \frac{V_1}{V_1^2 - V_2} \sum_{i=1}^N w_i \left(x_i - \mu^*\right)^2,$$
where $V_1 = \sum_{i=1}^N w_i$ as introduced previously and $V_2 = \sum_{i=1}^N w_i^2$.
Note: If the weights are not integral frequencies (for instance, if they have been standardized to sum to $1$ or if they represent the variance of each observation's measurement), then all information about the total sample size $n$ is lost, whence it is not possible to use an unbiased estimator, because it is impossible to estimate the Bessel correction factor $\frac{n}{n-1}$.
The degrees of freedom of the weighted, unbiased sample variance vary accordingly from $N - 1$ down to $0$.
The standard deviation is simply the square root of the variance above.
If all of the $x_i$ are drawn from the same distribution and the integer weights $w_i$ indicate the number of occurrences ("repeats") of an observation in the sample, then the unbiased estimator of the weighted population variance is given by
$$s^2 = \frac{1}{V_1 - 1} \sum_{i=1}^N w_i \left(x_i - \mu^*\right)^2 = \frac{1}{\sum_{i=1}^N w_i - 1} \sum_{i=1}^N w_i \left(x_i - \mu^*\right)^2.$$
If all the $x_i$ are unique, then $N$ counts the number of unique values, and $V_1$ counts the number of samples. For example, if the values $\{2, 2, 4, 5, 5, 5\}$ are drawn from the same distribution, then we can treat this set as an unweighted sample, or we can treat it as the weighted sample $\{2, 4, 5\}$ with corresponding weights $\{2, 1, 3\}$, and we should get the same results.
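A quick numerical check of this claim, using the repeats example just given:

```python
from statistics import variance   # unbiased sample variance (N - 1 denominator)

data = [2, 2, 4, 5, 5, 5]
values, weights = [2, 4, 5], [2, 1, 3]

V1 = sum(weights)
mu = sum(w * x for w, x in zip(weights, values)) / V1
s2 = sum(w * (x - mu) ** 2 for w, x in zip(weights, values)) / (V1 - 1)

print(variance(data), s2)   # both 2.1666..., as expected
```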
As a side note, other approaches have been described to compute the weighted sample variance. [2]
Weighted sample covariance
In a weighted sample, each row vector $\mathbf{x}_i$ (each set of single observations on each of the $K$ random variables) is assigned a weight $w_i \ge 0$. Without loss of generality, assume that the weights are normalized: $\sum_{i=1}^N w_i = 1$. If they are not, divide the weights by their sum: $w_i' = w_i / \sum_{j=1}^N w_j$.
Then the weighted mean vector $\boldsymbol{\mu}^*$ is given by
$$\boldsymbol{\mu}^* = \sum_{i=1}^N w_i \mathbf{x}_i$$
(if the weights are not normalized, an equivalent formula to compute the weighted mean is $\boldsymbol{\mu}^* = \sum_{i=1}^N w_i \mathbf{x}_i / \sum_{i=1}^N w_i$), and the unbiased weighted covariance matrix is [3]
$$\Sigma = \frac{1}{1 - \sum_{i=1}^N w_i^2} \sum_{i=1}^N w_i \left(\mathbf{x}_i - \boldsymbol{\mu}^*\right)^T \left(\mathbf{x}_i - \boldsymbol{\mu}^*\right).$$
If all weights are the same, $w_i = 1/N$, then the weighted mean and covariance reduce to the sample mean and covariance above.
Alternatively, if each weight assigns a number of occurrences for one observation value, so $w_i = n_i$ (sometimes called the number of "repeats") and is unnormalized so that $\sum_{i=1}^N w_i = N^*$, with $N^*$ being the sample size (total number of observations), then the weighted sample covariance matrix is given by [4]
$$C = \frac{1}{V_1} \sum_{i=1}^N w_i \left(\mathbf{x}_i - \boldsymbol{\mu}^*\right)^T \left(\mathbf{x}_i - \boldsymbol{\mu}^*\right),$$
and the unbiased weighted sample covariance matrix is given by applying the Bessel correction (since $V_1 = N^*$, which is the real sample size):
$$C = \frac{1}{V_1 - 1} \sum_{i=1}^N w_i \left(\mathbf{x}_i - \boldsymbol{\mu}^*\right)^T \left(\mathbf{x}_i - \boldsymbol{\mu}^*\right).$$
Vector-valued estimates
The above generalizes easily to the case of taking the mean of vector-valued estimates. For example, estimates of position on a plane may have less certainty in one direction than another. As in the scalar case, the weighted mean of multiple estimates can provide a maximum likelihood estimate. We simply replace the inverse variance $1/\sigma_i^2$ by the inverse of the covariance matrix: [5]
$$W_i = \Sigma_i^{-1}.$$
The weighted mean in this case is:
$$\bar{\mathbf{x}} = \left(\sum_{i=1}^n \Sigma_i^{-1}\right)^{-1}\left(\sum_{i=1}^n \Sigma_i^{-1} \mathbf{x}_i\right),$$
and the covariance of the weighted mean is:
$$\Sigma_{\bar{\mathbf{x}}} = \left(\sum_{i=1}^n \Sigma_i^{-1}\right)^{-1}.$$
For example, consider the weighted mean of the point $[1\ 0]$ with high variance in the second component and $[0\ 1]$ with high variance in the first component. Then
$$\mathbf{x}_1 = \begin{bmatrix}1\\0\end{bmatrix}, \quad \Sigma_1 = \begin{bmatrix}1 & 0\\0 & 100\end{bmatrix}, \qquad \mathbf{x}_2 = \begin{bmatrix}0\\1\end{bmatrix}, \quad \Sigma_2 = \begin{bmatrix}100 & 0\\0 & 1\end{bmatrix};$$
then the weighted mean is:
$$\bar{\mathbf{x}} = \left(\Sigma_1^{-1} + \Sigma_2^{-1}\right)^{-1} \left(\Sigma_1^{-1}\mathbf{x}_1 + \Sigma_2^{-1}\mathbf{x}_2\right) = \begin{bmatrix}0.9901\\0.9901\end{bmatrix},$$
which makes sense: the $[1\ 0]$ estimate is "compliant" in the second component and the $[0\ 1]$ estimate is compliant in the first component, so the weighted mean is nearly $[1\ 1]$.
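The same numbers in a short numpy sketch:

```python
import numpy as np

x1, S1 = np.array([1.0, 0.0]), np.diag([1.0, 100.0])   # poor in 2nd component
x2, S2 = np.array([0.0, 1.0]), np.diag([100.0, 1.0])   # poor in 1st component

W1, W2 = np.linalg.inv(S1), np.linalg.inv(S2)          # precision matrices
cov = np.linalg.inv(W1 + W2)                           # covariance of the mean
xbar = cov @ (W1 @ x1 + W2 @ x2)

print(xbar)   # [0.99009901 0.99009901], i.e. nearly [1 1]
```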
Accounting for correlations
In the general case, suppose that $W$ is the covariance matrix relating the quantities $x_i$, $\bar{x}$ is the common mean to be estimated, and $J$ is the design matrix $[1, \dots, 1]^T$ (of length $n$). The Gauss–Markov theorem states that the estimate of the mean having minimum variance is given by:
$$\sigma^2_{\bar{x}} = \left(J^T W^{-1} J\right)^{-1}$$
and
$$\bar{x} = \sigma^2_{\bar{x}} \left(J^T W^{-1} \mathbf{x}\right).$$
Decreasing strength of interactions
Consider the time series of an independent variable $x$ and a dependent variable $y$, with observations sampled at discrete times $t_i$. In many common situations, the value of $y$ at time $t_i$ depends not only on $x_i$ but also on its past values. Commonly, the strength of this dependence decreases as the separation of observations in time increases. To model this situation, one may replace the independent variable by its sliding mean $z$ for a window size $m$:
$$z_k=\sum_{i=1}^m w_i x_{k+1-i}.$$
Range weighted mean interpretation
Range (1–5)    Weighted mean equivalence
3.34–5.00      Strong
1.67–3.33      Satisfactory
0.00–1.66      Weak
Exponentially decreasing weights
In the scenario described in the previous section, most frequently the decrease in interaction strength obeys a negative exponential law. If the observations are sampled at equidistant times, then exponential decrease is equivalent to decrease by a constant fraction $\Delta = 1 - w$ (with $0 < w < 1$) at each time step. Setting $w = 1 - \Delta$, we can define $m$ normalized weights by
$$w_i = \frac{w^{i-1}}{V_1},$$
where $V_1$ is the sum of the unnormalized weights. In this case $V_1$ is simply
$$V_1 = \sum_{i=1}^m w^{i-1} = \frac{1 - w^m}{1 - w},$$
approaching $V_1 = 1/(1-w)$ for large values of $m$.
The damping constant $w$ must correspond to the actual decrease of interaction strength. If this cannot be determined from theoretical considerations, then the following properties of exponentially decreasing weights are useful in making a suitable choice: at step $(1-w)^{-1}$, the weight approximately equals $e^{-1}(1-w) \approx 0.39(1-w)$, the tail area approximately $e^{-1} \approx 0.37$, and the head area approximately $1 - e^{-1} \approx 0.63$. The tail area at step $n$ is $\le e^{-n(1-w)}$. Where primarily the closest $n$ observations matter and the effect of the remaining observations can be ignored safely, then choose $w$ such that the tail area is sufficiently small.
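A small sketch of such a sliding exponentially weighted mean (the window size and damping constant here are arbitrary illustrative choices):

```python
def exp_weighted_sliding_mean(x, w=0.8, m=10):
    weights = [w ** i for i in range(m)]   # weight w^(i-1), most recent first
    V1 = sum(weights)                      # equals (1 - w**m) / (1 - w)
    out = []
    for k in range(m - 1, len(x)):
        window = x[k::-1][:m]              # x_k, x_{k-1}, ..., x_{k-m+1}
        out.append(sum(wi * xi for wi, xi in zip(weights, window)) / V1)
    return out

print(exp_weighted_sliding_mean(list(range(1, 13))))   # lags behind the trend
```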
Weighted averages of functions
The concept of weighted average can be extended to functions.
[6] Weighted averages of functions play an important role in the systems of weighted differential and integral calculus. [7]
|
Let there be two point charges, positive or negative, having velocities in directions perpendicular to each other. I need to evaluate the total interaction force between them (the Lorentz force).
I do this by assuming the magnetic force is an effect of special relativity on the electrostatic force. In doing so I find a value half of the Lorentz force predicted by classical electrodynamics. I have proceeded with many different possible approaches, but the result is always the same. Why is it so?
Here is my reasoning: $$v_\text{rel}= \frac{\vec{V}-\vec{U}}{1-\vec{V}\cdot\vec{U}/c^2}$$ Since the charges are moving perpendicular to each other in the static (lab) frame, $V\cdot U=0$. The Lorentz factor between two frames, (1) the rest frame of the "field charge" $Q$ and (2) the rest frame of the "test charge" $q$ is given by $$\gamma = \frac{1}{\sqrt{1-(u^2+v^2)/c^2}}$$ Now I evaluate the proper force (considering purely electrostatic force) exerted by $Q$ on $q$, in the rest frame of $q$. $$\begin{align} E_\perp q &= \gamma E_{0\perp} q & E_\parallel q &= E_{0\parallel} q \end{align}$$ where $E_0$ is the field at $q$ in the rest frame of $Q$.
After finding these two forces, I resolved them in the X and Y directions, where X is the direction in which $Q$ moves and Y is the direction in which $q$ moves, both in the lab frame. $$\begin{align} \sin\theta &= \frac{u}{\sqrt{u^2+v^2}} & \cos\theta &= \frac{v}{\sqrt{u^2+v^2}} \end{align}$$ Then I get $$\begin{align} F_x &= \frac{1}{2}\frac{v^2}{c^2} E_0 q & F_y &= \biggl(1+\frac{1}{2}\frac{v^2}{c^2}\biggr)E_0 q \end{align}$$ Now these forces will produce proper acceleration in their respective directions in the rest frame of $q$. $$\begin{align} \alpha_x &= \frac{F_x}{m_0} & \alpha_y &= \frac{F_y}{m_0} \end{align}$$ where $m_0$ is the rest mass of the particle with charge $q$. After that I proceed to transfer these forces to the lab frame. Acceleration of $q$ with respect to the lab is given by $$\begin{align} \frac{\mathrm{d}^2x}{\mathrm{d}t^2} &= \frac{1}{\gamma^2}\alpha_x & \frac{\mathrm{d}^2y}{\mathrm{d}t^2} &= \frac{1}{\gamma^3}\alpha_y \end{align}$$ Obviously here $\gamma$ is given by $$\gamma = \frac{1}{\sqrt{1-u^2/c^2}}$$ so $$\begin{align} F_{\text{lab},x} &= \gamma m_0 \frac{\mathrm{d}^2x}{\mathrm{d}t^2} \\ &= \frac{1}{\gamma} m_0 \frac{F_x}{m_0} \\ &=\sqrt{1-u^2/c^2}\frac{1}{2}\frac{v^2}{c^2} E_0 q \\ &\approx \frac{1}{2}\frac{v^2}{c^2} E_0 q \end{align}$$ This is just half of the usual result in the X direction that we would classically find in the lab frame.
Also, $$\begin{align} F_{\text{lab},y} &= \gamma^3 \frac{1}{\gamma^3} m_0 \alpha_y \\ &= F_y \\ &= \gamma_v E_0 q \end{align}$$ where $\gamma_v= 1/\sqrt{1-v^2/c^2}$. |
The first complex, from Weibel, is a projective resolution of the trivial $\mathfrak g$-module $k$ as a $\mathcal U(\mathfrak g)$-module; I am sure Weibel says so!
Your second complex is obtained from the first by applying the functor $\hom_{\mathcal U(\mathfrak g)}(\mathord-,k)$, where $k$ is the trivial $\mathfrak g$-module. It therefore computes $\mathrm{Ext}_{\mathcal U(\mathfrak g)}(k,k)$, also known as $H^\bullet(\mathfrak g,k)$, the Lie algebra cohomology of $\mathfrak g$ with trivial coefficients.
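Since this identification comes up often, it may help to spell out the differential of that second complex (standard material, stated here for reference rather than quoted from Weibel): for a cochain $\phi \in \hom_k(\Lambda^n \mathfrak g, k)$,
$$(d\phi)(x_1,\dots,x_{n+1}) = \sum_{1\le i<j\le n+1} (-1)^{i+j}\,\phi\bigl([x_i,x_j],x_1,\dots,\hat{x}_i,\dots,\hat{x}_j,\dots,x_{n+1}\bigr).$$
With coefficients in a nontrivial module $M$ (for instance the adjoint module $\mathfrak g$ below), the differential acquires the extra term $\sum_{i}(-1)^{i+1}\,x_i\cdot\phi(x_1,\dots,\hat{x}_i,\dots,x_{n+1})$.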
The connection with deformation theory is explained at length in Gerstenhaber, Murray; Schack, Samuel D. Algebraic cohomology and deformation theory. Deformation theory of algebras and structures and applications (Il Ciocco, 1986), 11--264, NATO Adv. Sci. Inst. Ser. C Math. Phys. Sci., 247, Kluwer Acad. Publ., Dordrecht, 1988.
In particular neither of your two complexes 'computes' deformations: you need to take the projective resolution $\mathcal U(\mathfrak g)\otimes \Lambda^\bullet \mathfrak g$, apply the functor $\hom_{\mathcal U(\mathfrak g)}(\mathord-,\mathfrak g)$, where $\mathfrak g$ is the adjoint $\mathfrak g$-module, and compute cohomology to get $H^\bullet(\mathfrak g,\mathfrak g)$, the Lie algebra cohomology with coefficients in the adjoint representation. Then $H^2(\mathfrak g,\mathfrak g)$ classifies infinitesimal deformations, $H^3(\mathfrak g,\mathfrak g)$ is the target for obstructions to extending partial deformations, and so on, exactly along the usual yoga of formal deformation theory à la Gerstenhaber.
By the way, the original paper [Chevalley, Claude; Eilenberg, Samuel. Cohomology theory of Lie groups and Lie algebras. Trans. Amer. Math. Soc. 63 (1948), 85–124] serves as an incredibly readable introduction to Lie algebra cohomology.
This post imported from StackExchange MathOverflow at 2014-07-02 11:33 (UCT), posted by SE-user Mariano Suárez-Alvarez |
Could two moons that orbit the same terrestrial planet never see each other, even though they orbit the planet at the same time?
The moons have different masses and gravities.
In theory, if the two moons were in exactly the same orbit on opposite sides of the planet, then yes. Having the moons closer to the planet and smaller also makes that easier. For example, geostationary satellites over opposite sides of Earth will never have direct line of sight to each other.
In practice though that would be a very unstable arrangement (even if there were no other moons to disrupt things) and would also be very unlikely to form naturally.
So it would be very unlikely to form naturally and if it did form it would be unstable ... so realistically the answer is "no" but if you can explain away the improbabilities somehow then "yes".
The moons having different masses doesn't change their behavior in this case. If they are in the same orbit they are in the same orbit.
Yes, this is possible.
A large moon and a smaller moon can share the same orbit if one is 60 degrees ahead of the other. In such an orbit, the smaller moon would be at one of the stable Lagrangian points L4 and L5. If the orbital radius is less than $\frac{R_M}{\cos (30^{\circ})} = \frac{2}{\sqrt{3}}R_M \approx 1.15 R_M$ (where $R_M$ is the radius of the planet), then the planet will block the line of sight between the two moons. That is, each moon will be beyond the horizon as seen from the other moon.
Of course, such orbits would be very close to the planet. Would the moons break apart due to tidal forces? The answer to that is given by the Roche limit, which for a rigid satellite is
$$ d = R_M \left( 2\frac{\rho_M}{\rho_m} \right)^{1/3} $$
where $\rho_M$ and $\rho_m$ are the densities of the planet and the moon respectively. If the moons orbit outside this radius, they will survive. If they are inside the radius, they will break apart. For our scenario, we need the Roche limit to be less than $1.15 R_M$, so the density of the moons must be at least 30% larger ($\frac{3^{3/2}}{2^2}$) than the density of the planet.
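A quick numerical check of that density condition (using the rigid-body Roche limit quoted above):

```python
from math import sqrt

r = 2 / sqrt(3)      # orbital radius (in units of R_M) that just hides the moons
ratio = 2 / r ** 3   # solve (2 * rho_M / rho_m)**(1/3) = r / R_M for rho_m / rho_M
print(r, ratio)      # ~1.1547 and ~1.299, i.e. the moons must be about 30% denser
```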
Orbits are elliptical, normally quite eccentric; our Moon's almost circular orbit is unusual. For two moons not to see each other, both their orbits would have to be extremely circular and almost exactly in the same plane.
The system would also be unstable. If one moon led the other by a tiny fraction, it would be accelerated by the lagging moon, and the lagging moon would be dragged along by the leading one. This would rapidly cause the system to collapse.
However, it is not impossible. |