Assume $u: \mathbb{R}^N \to \mathbb{R}$ is a smooth function with suitable integrability assumptions. I'm interested in a formal computation; do not worry about integrability properties or smoothness of $u$.
Let $a$ be a constant.
By integration by parts, how can one prove that the identity $$\int_{\mathbb{R}^N}u^a\nabla u\cdot\nabla\Delta u=C\left(\int_{\mathbb{R}^N}|D^2 (u^{(a+2)/2})|^2+\int_{\mathbb{R}^N}|\nabla(u^{(a+2)/4})|^4\right)$$ holds for some constant $C$ that depends on $a$?
For which $a$ does the previous result hold? |
This is a late answer to the question. For ease of typing, I will use the letter $b$ for a root of the polynomial $X^2+2\in\Bbb Q[X]$, and $a$ for a root of the polynomial $X^4 -5X^2-32\in\Bbb Q[X]$.
As a short intermezzo, note the relation used in the sequel in blue: $$(a^2-5)^2=a^4-10a^2+25 =(a^4-5a^2-32)-5(a^2-5)+32=-5(a^2-5)+32\ .$$ Now we are ready to show / compute:
$$
\begin{aligned}
a(a^2-4b-5) &=\pm \sqrt{-256b-160}
\\
&\qquad\text{ in the equivalent, purely algebraic form}
\\
\Big(\
a(a^2-4b-5)
\ \Big)^2
&=-256b-160
\ .
\end{aligned}
$$
Computation:
$$\begin{aligned}\Big(\ a(a^2-4b-5) \ \Big)^2&=a^2(\ (a^2-5)-4b\ )^2\\&=a^2(\ \color{blue}{(a^2-5)^2}-8b(a^2-5)+16b^2\ )\\&=a^2(\ \color{blue}{-5(a^2-5)+32}-8b(a^2-5)-32\ )\\&=a^2(\ -5(a^2-5)-8b(a^2-5)\ )\\&=a^2(a^2-5)(\ -5-8b\ )\\&=32(\ -5-8b\ )\\&=-256b-160\ .\end{aligned}$$This means that we have the points on the given elliptic curve:$$\begin{aligned}P&=(15-36b,\ 27\sqrt{256b - 160})\\&=(15-36b,\ \pm 27a(a^2+4b-5))\ ,\\Q&=(15+36b,\ 27\sqrt{-256b - 160})\\&=(15+36b,\ \pm 27a(a^2-4b-5))\ .\end{aligned}$$But they differ. So the information
I was told that the coordinate can be written as...
may be possibly traced back to only
I was told that the $y$-coordinate of some point in $E(K)$, for $K=\Bbb Q(\alpha,\beta)$, can be written as $27\alpha(\alpha^2-4\beta-5)$...
Note: Such computations are easily handled by computer algebra systems. My weapon of choice is Sage, and we can type the following to get the above:
sage: R.<x> = QQ[]
sage: R.<x> = PolynomialRing(QQ)
sage: K.<a,b> = NumberField( [x^4 - 5*x^2 - 32, x^2+2] )
sage: ( a*(a^2-4*b-5) )^2
-256*b - 160
For the second case, things are "slightly more complicated". I would start by using a new letter for $\sqrt{17}$, maybe $c=\sqrt{17}$, although we could write it in terms of $a,b$ in the same field $K=\Bbb Q(a,b)$,
sage: sqrt(K(17))
-2/3*a^2 + 5/3
and working economically in $\Bbb Q(c)$ we factorize first as much as possible in $\Bbb Q$ and $K$:
sage: L.<c> = QuadraticField(17)
sage: gcd( 1180628067960, 5672869080264 ).factor()
2^3 * 3^15 * 17
sage: w = ( 1180628067960*c + 5672869080264 ) / (2^3 * 3^15 * 17)
sage: w
605*c + 2907
sage: factor(w)
(-34840*c + 143649) * (-c) * (-1/2*c - 3/2)^16 * (-1/2*c + 3/2)
sage: L.units()
(c - 4,)
sage: (c-4)^6
-34840*c + 143649
sage: w / ( (c-4)^3 * (c+3)^8 / 2^8 )^2
-3/2*c + 17/2
sage: _.norm().factor()
2 * 17
This gives a hint on how to go on... |
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-02)
Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...
Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-12)
The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ...
Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC
(Springer, 2014-10)
Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV are studied as a function of the ...
Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2014-06)
The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ...
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(Elsevier, 2014-01)
In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ...
Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2014-01)
The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ...
Measurement of charged jet suppression in Pb-Pb collisions at √sNN = 2.76 TeV
(Springer, 2014-03)
A measurement of the transverse momentum spectra of jets in Pb–Pb collisions at √sNN = 2.76TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R ...
Two- and three-pion quantum statistics correlations in Pb-Pb collisions at √sNN = 2.76 TeV at the CERN Large Hadron Collider
(American Physical Society, 2014-02-26)
Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions. In addition, such correlations can be used to search for the ...
Exclusive J /ψ photoproduction off protons in ultraperipheral p-Pb collisions at √sNN = 5.02TeV
(American Physical Society, 2014-12-05)
We present the first measurement at the LHC of exclusive J/ψ photoproduction off protons, in ultraperipheral proton-lead collisions at √sNN=5.02 TeV. Events are selected with a dimuon pair produced either in the rapidity ... |
Let $T$ be a linear operator from a Banach space $X$ to a Banach space $Y$, and let $X=\ker(T)\oplus M_1$, where $M_1$ is a closed subspace of $X$. Let $M$ be a closed subspace of $X$. I want to prove that there exists a finite-dimensional subspace $M_0$ such that $M=M \cap M_1 +M_0$.
If $\dim (Ker(T))= +\infty$ the claim is false. Indeed, putting $M = Ker(T)$ yields $M\cap M_1 = \{0\}$, thus $M_0 = M$ must have infinite dimension.
Suppose that $\dim (Ker(T))<+\infty$.
From linear algebra $M = M\cap M_1 \oplus M_0$ for some $M_0$ (here $\oplus$ is only algebraic; it does not imply that $M\cap M_1$ is complemented as a Banach space). Consider the map $M\to X \to X/M_1$ where the first map is the inclusion and the second one is the projection. The kernel of this map is $M\cap M_1$ and the image is $M/M_1$. It follows from elementary algebra that $M/(M\cap M_1)\approx M/M_1$, which is finite dimensional since $M/ M_1 \lhd X/M_1 \approx Ker(T)$ and $\dim (Ker(T))<+\infty$. Thus $M_0\approx M/(M\cap M_1)$ is finite dimensional.
It is probably better to post the original problem.
If $X$ is separable Hilbert with $X=H_1\oplus H_2$ and $H_1, H_2$ both infinite-dimensional, and $T$ is the orthogonal projection onto $H_2$ then $M_1=H_2$.
Take $M = H_1$. Then $M\cap M_1=\{0\}$. Therefore $M_0$ would have to be $H_2$.
In your case, do you know something else about $T$? |
https://doi.org/10.1351/goldbook.A00086
The quantity of light available to molecules at a particular point in the atmosphere and which, on absorption, drives photochemical processes in the atmosphere. It is calculated by integrating the spectral radiance \(L\left (\lambda,\,\theta,\,\varphi \right )\) over all directions of incidence of the light, \(E(\lambda) = \int _{\theta}\, \int _{\varphi} L\left (\lambda,\theta,\varphi \right )\, \text{sin}\,\theta\: \text{d}\theta\: \text{d}\varphi\). If the radiance is expressed in \(\text{J m}^{-2}\ \text{s}^{-1}\ \text{sr}^{-1}\ \text{nm}^{-1}\) and \(hc/\lambda\) is the energy per quantum of light of wavelength \(\lambda\), the actinic flux has units of \(\text{quanta cm}^{-2}\ \text{s}^{-1}\ \text{nm}^{-1}\). This important quantity is one of the terms required in the calculation of
j-values, the first-order rate coefficients for photochemical processes in the sunlight-absorbing trace gases in the atmosphere. The actinic flux is determined by the solar radiation entering the atmosphere and by any changes in this due to atmospheric gases and particles (e.g. absorption by stratospheric ozone, scattering and absorption by aerosols and clouds), and reflections from the ground. It is therefore dependent on the wavelength of the light, on the altitude and on specific local environmental conditions. The actinic flux has borne many names (e.g. flux, flux density, beam irradiance, actinic irradiance, integrated intensity) which has caused some confusion. It is important to distinguish the actinic flux from the spectral irradiance, which refers to energy arrival on a flat surface having fixed spatial orientation (\(\text{J m}^{-2}\ \text{nm}^{-1}\)), given by: \[E(\lambda) = \int _{\theta}\, \int _{\varphi} L\left (\lambda,\theta,\varphi \right )\, \text{cos}\,\theta \: \text{sin}\,\theta\: \text{d}\theta\: \text{d}\varphi\] The actinic flux does not refer to any specific orientation because molecules are oriented randomly in the atmosphere. This distinction is of practical relevance: the actinic flux (and therefore a j-value) near a brightly reflecting surface (e.g. over snow or above a thick cloud) can be a factor of three higher than that near a non-reflecting surface. The more descriptive name of spherical irradiance has been suggested for the quantity herein called actinic flux. See also:
flux density,
photon |
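As a rough numerical illustration (my own sketch, not part of the Gold Book entry), the following Python snippet integrates an assumed isotropic, wavelength-independent radiance over direction, once without the cos θ weight over the full sphere (the actinic-flux-type integral, giving 4πL) and once with the cos θ weight over the upper hemisphere (the irradiance-type integral, giving πL):
import numpy as np
from scipy import integrate

L0 = 1.0  # assumed isotropic radiance, arbitrary units (not from the entry)

# actinic-flux-type integral: no cos(theta) weight, full sphere (theta in [0, pi])
actinic, _ = integrate.dblquad(lambda t, p: L0 * np.sin(t), 0, 2 * np.pi, 0, np.pi)

# irradiance-type integral: cos(theta) weight, upper hemisphere (theta in [0, pi/2])
irradiance, _ = integrate.dblquad(lambda t, p: L0 * np.cos(t) * np.sin(t), 0, 2 * np.pi, 0, np.pi / 2)

print(actinic / L0)     # ~ 4*pi: the orientation-free, spherical quantity
print(irradiance / L0)  # ~ pi: the flat-surface quantity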
$\def\abs#1{|#1|}\def\i{\mathbf {i}}\def\ket#1{|{#1}\rangle}\def\bra#1{\langle{#1}|}\def\braket#1#2{\langle{#1}|{#2}\rangle}\def\tr{\mathord{\mbox{tr}}}\mathbf{Exercise\ 4.7}$
Using the projection operator formalism
a) compute the probability of each of the possible outcomes of measuring the first qubit of an arbitrary two-qubit state in the Hadamard basis $\{\ket +, \ket - \}$.
b) compute the probability of each outcome for such a measurement on the state $\ket{\Psi^+} = \frac{1}{\sqrt 2}(\ket{00} + \ket{11})$.
c) for each possible outcome in part b), describe the possible outcomes if we now measure the second qubit in the standard basis.
d) for each possible outcome in part b), describe the possible outcomes if we now measure the second qubit in the Hadamard basis. |
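Not part of the exercise text, but a small numerical sketch of the projector formalism for part b): build the projectors $\ket{+}\bra{+}\otimes I$ and $\ket{-}\bra{-}\otimes I$ and evaluate $\bra{\Psi^+}P\ket{\Psi^+}$; both outcomes come out with probability 1/2, and the post-measurement states follow by applying $P$ and renormalizing.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)
I2 = np.eye(2, dtype=complex)

psi = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)  # |Psi+>

for label, v in (("+", plus), ("-", minus)):
    P = np.kron(np.outer(v, v.conj()), I2)   # projector onto the outcome on qubit 1
    prob = np.real(psi.conj() @ P @ psi)     # Born rule: <psi|P|psi>
    post = P @ psi / np.sqrt(prob)           # normalized post-measurement state
    print(label, round(float(prob), 3), np.round(post, 3))
# both probabilities are 0.5; the post-measurement states are |+>|+> and |->|->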
I calculated the correlation function $C(t)=\langle x(t)x(0)\rangle$ for the ground state of the simple harmonic oscillator (SHO) in two different ways, but the results do not match.
First Attempt:
From Heisenberg equations of motion, $$\mathbf{X}(t)=\mathbf{X}(0)\cos(\omega t)+\frac{\mathbf{P}(0)}{m \omega} \sin(\omega t)$$
So I calculated the required terms:
$$\langle 0|\mathbf{X}^2(0)|0\rangle =\frac{\hbar}{2 m \omega}$$ and
$$\langle 0| \mathbf{P}(0) \mathbf{X}(0) |0\rangle =-~\frac{i \hbar}{2} $$
Using the above two terms and the equation of $\mathbf{X}(t)$ I obtain, $$\langle 0| \mathbf{X}(t)\mathbf{X}(0)|0\rangle =\frac{\hbar}{2 m \omega} \exp(-~i\omega t)$$
This is the required correlation function.
Second Method:
I attempted to solve it using explicit ground state wave function in position basis. In this case, I obtain,
$$\int \psi_0^*(x)\ x^2 \psi_0(x)~ \mathrm{d}x =\frac{\hbar}{2m \omega},$$ which is similar to the first calculation.
Again,
$$\int \psi_0^*(x)\ x (-~i \hbar)\frac{\partial}{\partial x} \psi_0 ~\mathrm{d}x=0,$$ which DOES NOT match with the first method.
And hence, using the expression for $\mathbf{X}(t)$ as in the first method, I obtained
$$\langle X(t)X(0)\rangle =\frac{\hbar}{2m \omega} \cos (\omega t)$$
So method 1 and method 2 do not match. But aren't they supposed to? I cannot figure out where I am making a mistake. Any help figuring out the mistake will be very much appreciated. |
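A quick numerical cross-check (my own, not part of the question) of the operator matrix elements in a truncated number basis: with $\mathbf X = \sqrt{\hbar/2m\omega}\,(a+a^\dagger)$ and $\mathbf P = i\sqrt{m\omega\hbar/2}\,(a^\dagger-a)$, the ground-state expectation values come out as $\langle \mathbf X^2\rangle = \hbar/2m\omega$, $\langle \mathbf P\mathbf X\rangle = -i\hbar/2$ and $\langle \mathbf X\mathbf P\rangle = +i\hbar/2$, which is worth comparing against the position-space integral used in the second method.
import numpy as np

hbar = m = omega = 1.0
N = 30  # truncated Fock-space dimension

a = np.diag(np.sqrt(np.arange(1, N)), k=1)           # annihilation operator
X = np.sqrt(hbar / (2 * m * omega)) * (a + a.T)      # position operator
P = 1j * np.sqrt(m * omega * hbar / 2) * (a.T - a)   # momentum operator

ground = np.zeros(N)
ground[0] = 1.0                                      # |0>

print(ground @ (X @ X) @ ground)   # hbar/(2 m omega) = 0.5
print(ground @ (P @ X) @ ground)   # -i hbar/2 = -0.5j
print(ground @ (X @ P) @ ground)   # +i hbar/2 = +0.5j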
Returns an array of cells containing the quick-guess, calibrated (optimal), or standard-error values of the model's parameters.
Syntax
ARMA_PARAM( X, Order, mean, sigma, phi, theta, Type, maxIter)
X is the univariate time series data (a one-dimensional array of cells (e.g. rows or columns)).
Order is the time order in the data series (i.e. the first data point's corresponding date):
1 = ascending (the first data point corresponds to the earliest date) (default)
0 = descending (the first data point corresponds to the latest date)
mean is the ARMA model long-run mean (i.e. mu).
sigma is the standard deviation of the model's residuals/innovations.
phi are the parameters of the AR(p) component model (starting with the lowest lag).
theta are the parameters of the MA(q) component model (starting with the lowest lag).
Type is an integer switch to select the output array:
1 = Quick guess (non-optimal) of the parameters' values (default)
2 = Calibrated (optimal) values for the model's parameters
3 = Standard errors of the parameters' values
maxIter is the maximum number of iterations used to calibrate the model. If missing, the default maximum of 100 is assumed.
Remarks
The underlying model is described here. The time series is homogeneous or equally spaced, and may include missing values (e.g. #N/A) at either end. ARMA_PARAM returns an array for the values (or errors) of the model's parameters in the following order: $\mu$; $\phi_1,\phi_2,\dots,\phi_p$; $\theta_1,\theta_2,\dots,\theta_q$; $\sigma$. The function was added in version 1.63 SHAMROCK. |
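This is not the NumXL function itself, but a rough Python analogue (using statsmodels, an assumption on my part) of the three output types: starting values, calibrated (maximum-likelihood) parameters, and their standard errors for an ARMA(p, q) fit with a constant (long-run mean).
import numpy as np
from statsmodels.tsa.arima_process import arma_generate_sample
from statsmodels.tsa.arima.model import ARIMA

np.random.seed(0)
# simulate an ARMA(1,1) series; lag-polynomial convention: ar = [1, -phi], ma = [1, theta]
x = arma_generate_sample(ar=[1, -0.6], ma=[1, 0.3], nsample=500)

model = ARIMA(x, order=(1, 0, 1))   # ARMA(p=1, q=1) plus a constant term
start = model.start_params          # rough starting values (the "quick guess" analogue)
res = model.fit()                   # calibrated (maximum-likelihood) values

print(start)        # constant, phi_1, theta_1, sigma^2
print(res.params)   # calibrated parameter values
print(res.bse)      # their standard errors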
Take an elementary convergent integral like:
$\int^\infty_0 e^{- \lambda x}\,\mathrm{d}x = \frac{1}{\lambda} $
If you series-expand the integrand and integrate term by term, every term integrates to infinity. Is there a systematic way to cut off the integral if you keep the $n^{th}$ term in the series so that you can reasonably approximate the integral to some quantified error?
EDIT: Clearly the series expansion is of little use in the above integral, but I am interested in a potential case where, for example, I find an integral that converges when I numerically integrate it but the analytical series diverges. |
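A crude numerical experiment along these lines (the cutoff rule is my own heuristic, not a standard prescription): truncate the Taylor series of $e^{-\lambda x}$ after $n$ terms and integrate it only up to $X = n/(4\lambda)$, where the partial sum still tracks the integrand. The error relative to $1/\lambda$ then shrinks roughly like $e^{-n/4}$, split between the discarded tail $\int_X^\infty e^{-\lambda x}\,dx$ and the truncation error on $[0,X]$.
import math

lam = 1.0

def truncated_integral(n, X):
    # integrate the n-term Taylor partial sum of exp(-lam*x) term by term on [0, X]
    return sum((-lam) ** k * X ** (k + 1) / ((k + 1) * math.factorial(k))
               for k in range(n + 1))

exact = 1.0 / lam
for n in (5, 10, 20, 40):
    X = n / (4.0 * lam)              # heuristic cutoff: keep lam*x well below n
    approx = truncated_integral(n, X)
    print(n, X, abs(approx - exact))
# the error decreases with n (roughly like exp(-n/4)), so a cutoff makes the
# term-by-term integration usable even though each term alone diverges as X -> infinity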
ISSN:
1937-1632
eISSN:
1937-1179
All Issues
Discrete & Continuous Dynamical Systems - S
March 2009 , Volume 2 , Issue 1
A special issue on
Asymptotic Behavior of Dissipative PDEs
Abstract:
This issue consists of ten carefully refereed papers dealing with important qualitative features of dissipative PDEs, with applications to fluid mechanics (compressible Navier-Stokes equations and water waves), reaction-diffusion systems ((bio)chemical reactions and population dynamics), plasma physics and phase separation and transition.
Several contributions are concerned with issues such as regularity, stability and decay rates of solutions. Furthermore, an emphasis is laid on the study of the global dynamics of the systems, in terms of attractors, and of the convergence of single trajectories to stationary solutions.
We wish to thank the referees for their valuable help in evaluating and improving the papers.
Abstract:
We show that the trajectories of a conserved phase-field model with memory are compact in the space of continuous functions and, for an exponential relaxation kernel, we establish the convergence of solutions to a single stationary state as time goes to infinity. In the latter case, we also estimate the rate of decay to equilibrium.
Abstract:
We show that infinite-dimensional integro-differential equations which involve an integral of the solution over the time interval since starting can be formulated as non-autonomous delay differential equations with an infinite delay. Moreover, when conditions guaranteeing uniqueness of solutions do not hold, they generate a non-autonomous (possibly) multi-valued dynamical system (MNDS). The pullback attractors here are defined with respect to a universe of subsets of the state space with sub-exponential growth, rather than restricted to bounded sets. The theory of non-autonomous pullback attractors is extended to such MNDS in a general setting and then applied to the original integro-differential equations. Examples based on the logistic equations with and without a diffusion term are considered.
Abstract:
In this article, we consider the two-dimensional dissipative Boussinesq systems which model surface waves in three space dimensions. The long time asymptotics of the solutions for a large class of such systems are obtained rigorously for small initial data.
Abstract:
We study the uniform global attractor for a general nonautonomous reaction-diffusion system without uniqueness using a new developed framework of an evolutionary system. We prove the existence and the structure of a weak uniform (with respect to a symbol space) global attractor $\mathcal A$. Moreover, if the external force is normal, we show that this attractor is in fact a strong uniform global attractor. The existence of a uniform (with respect to the initial time) global attractor $\mathcal A^0$ also holds in this case, but its relation to $\mathcal A$ is not yet clear due to the non-uniqueness feature of the system.
Abstract:
This paper studies a wave equation on a bounded domain in $\mathbb{R}^d$ with nonlinear dissipation which is localized on a subset of the boundary. The damping is modeled by a continuous monotone function without the usual growth restrictions imposed at the origin and infinity. Under the assumption that the observability inequality is satisfied by the solution of the associated linear problem, the asymptotic decay rates of the energy functional are obtained by reducing the nonlinear PDE problem to a linear PDE and a nonlinear ODE. This approach offers a generalized framework which incorporates the results on energy decay that appeared previously in the literature; the method accommodates systems with variable coefficients in the principal elliptic part, and allows one to dispense with linear restrictions on the growth of the dissipative feedback map.
Abstract:
We study the impact of an oscillating external force on the motion of a viscous, compressible, and heat conducting fluid. Assuming that the frequency of oscillations increases sufficiently fast as the time goes to infinity, the solutions are shown to stabilize to a spatially homogeneous static state.
Abstract:
We consider a model of non-isothermal phase separation taking place in a confined container. The order parameter $\phi $ is governed by a viscous or non-viscous Cahn-Hilliard type equation which is coupled with a heat equation for the temperature $\theta $. The former is subject to a nonlinear dynamic boundary condition recently proposed by physicists to account for interactions with the walls, while the latter is endowed with a standard (Dirichlet, Neumann or Robin) boundary condition. We indicate by $\alpha $ the viscosity coefficient, by $\varepsilon $ a (small) relaxation parameter multiplying $\partial _{t}\theta $ in the heat equation and by $\delta $ a small latent heat coefficient (satisfying $\delta \leq \lambda \alpha $, $\delta \leq \overline{\lambda }\varepsilon $, $\lambda , \overline{\lambda }>0$) multiplying $\Delta \theta $ in the Cahn-Hilliard equation and $\partial _{t}\phi $ in the heat equation. Then, we construct a family of exponential attractors $\mathcal{M}_{\varepsilon ,\delta ,\alpha }$ which is a robust perturbation of an exponential attractor $\mathcal{M} _{0,0,\alpha }$ of the (isothermal) viscous ($\alpha >0$) Cahn-Hilliard equation, namely, the symmetric Hausdorff distance between $\mathcal{M} _{\varepsilon ,\delta ,\alpha }$ and $\mathcal{M}_{0,0,\alpha }$ goes to 0, for each fixed value of $\alpha >0,$ as $( \varepsilon ,\delta) $ goes to $(0,0),$ in an explicitly controlled way. Moreover, the robustness of this family of exponential attractors $\mathcal{M}_{\varepsilon ,\delta ,\alpha }$ with respect to $( \delta ,\alpha ) \rightarrow ( 0,0) ,$ for each fixed value of $\varepsilon >0,$ is also obtained. Finally, assuming that the nonlinearities are real analytic, with no growth restrictions, the convergence of solutions to single equilibria, as time goes to infinity, is also proved.
Abstract:
In this paper we study the finite dimensionality of the global attractor for the following system of Klein-Gordon-Schrödinger type
$$ i\psi_t +\kappa \psi_{xx} +i\alpha\psi = \phi\psi+f, $$
$$ \phi_{tt}- \phi_{xx}+\phi+\lambda\phi_t = -\mathrm{Re}\, \psi_{x}+g, $$
$$ \psi (x,0)=\psi_0 (x), \quad \phi(x,0) = \phi_0 (x), \quad \phi_t (x,0)=\phi_1(x), $$
$$ \psi(x,t)= \phi(x,t)=0, \quad x \in \partial \Omega,\ t>0, $$
where $x \in \Omega$, $t>0$, $\kappa > 0$, $\alpha>0$, $\lambda >0$, $f$ and $g$ are driving terms and $\Omega$ is a bounded interval of $\mathbb{R}$. With the help of the Lyapunov exponents we give an estimate of the upper bound of its Hausdorff and fractal dimension.
Abstract:
This paper is concerned with the interior regularity of global solutions for the one-dimensional compressible isentropic Navier-Stokes equations with degenerate viscosity coefficient and vacuum. The viscosity coefficient $\mu$ is proportional to $\rho^{\theta}$ with $0<\theta<1/3$, where $\rho$ is the density. The global existence has been established in [44] (Vong, Yang and Zhu, J. Differential Equations,
192(2), 475--501). Some ideas and more delicate estimates are introduced to prove these results.
Abstract:
The existence of a global attractor for the solution semiflow of Selkov equations with Neumann boundary conditions on a bounded domain in space dimension $n\le 3$ is proved. This reaction-diffusion system features the oppositely-signed nonlinear terms so that the dissipative sign-condition is not satisfied. The asymptotical compactness is shown by a new decomposition method. It is also proved that the Hausdorff dimension and fractal dimension of the global attractor are finite.
Let's say we have a current-carrying wire with a current $I$ flowing. We know there is a field of $B=\frac{\mu_0I}{2\pi r}$ by using Ampère's law, and a simple integration path which goes circularly around the wire. Now if we take the path of integration such that the surface it spans doesn't intercept the wire, we trivially get $B=0$, which is obviously incorrect.
I see that I have essentially treated it as if there is no current even present. But a similar argument is used in other situations without fault.
Take for example a conducting cylinder with a hollow, cylindrical shaped space inside. By the same argument there is no field inside.
To further illustrate my point, the derivation of the B field inside of a solenoid requires you to intercept the currents. You can’t simply do the loop inside of the air gap.
This, at least to me, seems like the same thing, and I can't justify why one is correct and the other is incorrect. Please point out where I'm going wrong. |
For a spin-1 particle at rest, there are three spin states ($+1$, $-1$, $0$, along the $z$ axis). If we rotate the $z$ axis to the $-z$ direction, the spin $+1$ state will become the spin $-1$ state. Can we transform the spin $+1$ state into the spin $0$ state by a frame rotation?
Let $|\pm 1\rangle$ and $|0\rangle$ be the eigenstates of the observable $J_z$ that represents the $z$-component of the spin, with eigenvalues $\pm 1$ and $0$, respectively. Since the same operator $J_z$ generates rotations about the $z$ axis, each of the states $|\pm 1\rangle$ and $|0\rangle$ must be
invariant under rotations about the $z$ axis, except for an overall complex coefficient that doesn't affect the physical significance of the state. (Only relative complex coefficients, among the terms in a superposition, affect the physical significance of a state.)
A rotation of a specific direction is another specific direction. Therefore, if $|0\rangle$ could be obtained from $|+1\rangle$ by a rotation in space, then $|0\rangle$ would itself need to represent some particular direction in space. But
which direction would the state $|0\rangle$ represent? According to the preceding paragraph, whatever direction is represented by $|0\rangle$ must be invariant under rotations about the $z$-axis. No such invariant direction exists, other than the $z$-axis itself. So the only rotation that could possibly convert $|+1\rangle$ to $|0\rangle$ is a $180^\circ$ rotation that reverses the direction of the $z$-axis; but this rotation converts $|+1\rangle$ to $|-1\rangle$, not to $|0\rangle$.
Altogether, this shows a rotation in 3-d space cannot convert $|+1\rangle$ to $|0\rangle$.
If, by the spin-0 state, you mean the projection rather than the length, the answer is no.
For $s=1$, the rotation matrix is given by (with basis ordering $m_s=-1,0,1$): $$ R=\left( \begin{array}{ccc} e^{i (\alpha +\gamma )} \cos ^2\left(\frac{\beta }{2}\right) & \frac{e^{i \alpha } \sin (\beta )}{\sqrt{2}} & e^{i (\alpha -\gamma )} \sin ^2\left(\frac{\beta }{2}\right) \\ -\frac{e^{i \gamma } \sin (\beta )}{\sqrt{2}} & \cos (\beta ) & \frac{e^{-i \gamma } \sin (\beta )}{\sqrt{2}} \\ e^{-i (\alpha -\gamma )} \sin ^2\left(\frac{\beta }{2}\right) & -\frac{e^{-i \alpha } \sin (\beta )}{\sqrt{2}} & e^{-i (\alpha +\gamma )} \cos ^2\left(\frac{\beta }{2}\right) \\ \end{array} \right) $$ The choice $\beta=\pi$ will make this to $$ \left( \begin{array}{ccc} 0 & 0 & e^{i (\alpha -\gamma )} \\ 0 & -1 & 0 \\ e^{-i (\alpha -\gamma )} & 0 & 0 \\ \end{array} \right) $$ which interchanges the $m_s=1$ and $m_s=-1$ states, up to an unimportant phase.
A matrix that would rotate $m_s=1$ to $m_s=0$ (up to a phase) would have to be of the form $$ \left(\begin{array}{ccc} -1&0&0\\ 0&0&e^{i\varphi}\\ 0&e^{-i\varphi}&0 \end{array}\right) $$ and there is no choice of angles that will produce this outcome.
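A small numerical check of this argument (my own sketch; basis ordered $m_s=-1,0,1$ as above): build $R(\alpha,\beta,\gamma)$ from the matrix given here, confirm that $\beta=\pi$ swaps the $m_s=\pm1$ states, and note that the overlap $|\langle 0|R|{+}1\rangle|^2=\sin^2\beta/2$ never exceeds $1/2$, so no choice of angles sends $m_s=1$ to $m_s=0$.
import numpy as np

def R(alpha, beta, gamma):
    # spin-1 rotation matrix in the (m=-1, 0, +1) basis, as written above
    c, s = np.cos(beta / 2) ** 2, np.sin(beta / 2) ** 2
    sb = np.sin(beta)
    ea, eg = np.exp(1j * alpha), np.exp(1j * gamma)
    return np.array([
        [ea * eg * c,           ea * sb / np.sqrt(2),     ea / eg * s],
        [-eg * sb / np.sqrt(2), np.cos(beta),             sb / (eg * np.sqrt(2))],
        [eg / ea * s,           -sb / (ea * np.sqrt(2)),  c / (ea * eg)],
    ])

ket_p1 = np.array([0.0, 0.0, 1.0])             # the m_s = +1 state in this ordering

U = R(0.3, np.pi, 1.1)                         # beta = pi
print(np.round(np.abs(U @ ket_p1), 3))         # [1, 0, 0]: mapped to m_s = -1 (up to phase)

betas = np.linspace(0, np.pi, 1001)
print((np.sin(betas) ** 2 / 2).max())          # max of |<0|R|+1>|^2 is 0.5 < 1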
Short answer: No, you can't. This is one of the few things you can think of classically. If you have an arrow pointing upwards, can you expect the arrow to reduce its length (to 0 in this case) just by rotating it?
Edit:
To see it more clearly: there is a vector, called the spin vector, which lives in real space. The problem is that we do not know where that vector points exactly, but we can know its modulus and its $z$ projection. In the picture below, the vector is the blue arrow. We only know its length and its $z$ projection (green). However, it can point in any direction along the blue cone.
The most classical-like version of this is a vector with components $(\langle S_x\rangle,\langle S_y\rangle,\langle S_z\rangle)$.
The state represented above is $|1 \rangle$, as you will always get $+1$ if you measure $S_z$ (in $\hbar$ units).
If you perform a rotation of 180 degrees around the $y$-axis as it is coloured in green, you can obtain $|-1\rangle$.
However, there is no way you can get $|0\rangle$. You might say: if we rotate 90 degrees instead of 180, then the green projection onto the $z$-axis is 0. Yes, that's true: the green bar will now lie on the $x$-axis. Nevertheless, there are more projections. Now the blue arrow will probably have a projection on the $z$-axis. You cannot know which one, because it is uncertain. This means you can have $\langle S_z\rangle=0$, but only as a mean value. If you perform one particular measurement, you can get any value, so it is not $|0\rangle$ (that one will always give $S_z=0$). Instead, you've got a superposition with multiple possible measurement outcomes. |
Let's take an aqueous solution of a salt $\ce{NaHA}$ with the initial concentration $C$ when added to water. It will completely dissociate according to the equation: $\ce{NaHA(s) \rightarrow Na^+ +HA^-}$.
$\ce{HA^-}$ will participate in three equilibria:
$\ce{2HA^- \leftrightarrows H2A +A^{2-}\quad \quad \quad }$ ${K_1^0=K_{A2}/K_{A1}}$
$\ce{HA^- +H2O\leftrightarrows H3O^+ +A^{2-}\quad }$ $K_2^0=K_{A2}$
$\ce{HA^- +H2O\leftrightarrows OH^- +H2A\quad}$ $K_3^0=K_{B1}={K_w}/{K_{A1}}$
In most cases, $K_1^0$ is far bigger than $K_2^0$ and $K_3^0$. So the first equilibrium is the preponderant reaction, and this reaction will impose the pH of the solution.
Let's now calculate the product $K_{A2}\times K_{A1}$:
$K_{A2}\times K_{A1}= \frac{[\ce{A^{2-}}][\ce{H3O+}]}{[\ce{HA^-}]}\cdot\frac{[\ce{HA^-}][\ce{H3O+}]}{[\ce{H2A}]}$
According to the stoichiometry of the preponderant reaction, we have $[\ce{A^{2-}}]=[\ce{H2A}]$. So the product $K_{A2}\times K_{A1}= [\ce{H3O+}]^2$, i.e. $\ce{pH}=0.5(\ce{p}K_{A2} +\ce{p}K_{A1})$ |
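As a quick worked example (not from the original text; the $\mathrm{p}K_a$ values are approximate literature values for carbonic acid), the formula gives the familiar result for a sodium hydrogen carbonate solution:
# pH of an amphiprotic salt solution: pH = 0.5*(pKa1 + pKa2)
pKa1, pKa2 = 6.35, 10.33   # approximate values for H2CO3 / HCO3-
pH = 0.5 * (pKa1 + pKa2)
print(pH)                  # about 8.3, independent of concentration in this approximation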
My question relates to Chapter 3, Exercise 8 in "Baby Rudin". It states:
If $\sum_n a_n$ converges, and if $\{b_n\}$ is monotonic and bounded, prove that $\sum_n a_n b_n$ converges.
My attempt would have been:
Since $\{b_n\}$ is monotonic and bounded, $\{b_n\}$ converges and it exists $\inf \{b_n\}$ as well as $\sup \{b_n\}$. But then we have$$ \left| \sum_n a_n \inf \{b_n\} \right| \leq \left| \sum_n a_n b_n \right| \leq \left| \sum_n a_n \sup \{b_n\} \right| \leq \max \left( \left| \sup \{b_n\} \right|, \left| \inf \{b_n \} \right| \right)\varepsilon \leq \tilde{\varepsilon} $$since $\sum_n a_n$ converges and $\max \left( \left| \sup \{b_n\} \right|, \left| \inf \{b_n \} \right| \right)$ is a finite number. This would imply, by the comparison test, that $\sum_n a_n b_n$ converges as well. $\quad \Box$
But as there was a way longer, more rigorous proof chosen in this solution manual, I'm a bit suspicious that my proof is not complete. Am I missing something?
EDIT: Thanks everyone! @hermes:
The last part of your proof gave me the following idea. As $\lim_{n \to \infty} b_n = c$ and $\{b_n\}$ is monotonic, couldn't we just set $b_n = c - c_n$ with a monotonically decreasing sequence $\{c_n\}$ which has $\lim_{n \to \infty} c_n = 0$. Then we have
$$\sum_n a_n b_n = \underbrace{c \sum_n a_n}_{\text{converges by assumption}} - \underbrace{\sum_n a_n c_n}_{\text{converges by Theorem 3.42}} \leq \varepsilon_1 - \varepsilon_2 = \varepsilon $$
since
Theorem 3.42 Suppose
the partial sums $A_n$ of $\sum_n a_n$ form a bounded sequence $\quad \checkmark$
$c_0 \geq c_1 \geq c_2 \geq \dots \quad \checkmark$
$\lim_{n \to \infty} c_n = 0 \quad \checkmark$
Then $\sum_n a_n c_n$ converges.
Thus $\sum_n a_n b_n$ converges as well. $\quad \Box$
Now that should hold, I think. So I wouldn't need to go through all the estimates. |
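A quick numerical sanity check of that decomposition (my own example, with $a_n=(-1)^{n+1}/n$ and the monotone bounded $b_n = 1 - 1/n$, so $c=1$ and $c_n = 1/n$ decreases to 0):
import math

N = 200000
total_ab = total_a = total_ac = 0.0
for n in range(1, N + 1):
    a = (-1) ** (n + 1) / n      # sum_n a_n converges (alternating harmonic series)
    b = 1.0 - 1.0 / n            # monotone increasing, bounded, b_n -> c = 1
    c_n = 1.0 - b                # c - b_n = 1/n, monotonically decreasing to 0
    total_ab += a * b
    total_a += a
    total_ac += a * c_n

print(total_ab)                          # partial sums of sum a_n b_n stabilize...
print(1.0 * total_a - total_ac)          # ...and equal c*sum(a_n) - sum(a_n c_n)
print(math.log(2) - math.pi ** 2 / 12)   # closed form for this particular example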
When writing $$\arg(z^n) = n\arg(z) + 2\pi k$$ and letting $\arg$ denote the principal complex argument of $z$: is $k$ generally an integer, or is it that $0\lt k\lt n$, or $k=\left[\frac{1}{2}-\frac{n}{2\pi}\arg(z)\right]$ as some books suggest? Obviously, I don't understand any of this and would appreciate it if someone explained this tricky situation. Thanks in advance!
If $n$ is a non-negative integer, then $z^n=z \cdot z \cdot \; \cdots$ is well defined (analytic and entire) on all $\mathbb C$.
If $n$ is a negative integer, then $z^{n}=(1/z)^{|n|}$ and it is meromorphic, with only a pole of order $|n|$ at $z=0$.
In both cases, the argument (apart from multiples of $2\pi$) is also well defined to be $\{n\arg(z)/(2\pi)\}\,(2\pi)=(n\arg(z))\bmod{2\pi}$, where the curly brackets indicate the fractional part. That is as much as $\arg(z)$ is defined.
The above if you define $0\le \arg(z) <2\pi$.
If instead, as rightly indicated in a comment, the definition is $-\pi < \arg(z) \le \pi$ (which is the one adopted in all major CASs nowadays), you must in any case reduce $n\arg(z)$ so that it falls therein.
If $n$ is instead rational, then you have to choose a branch: the example of $z^{1/2}=\pm \sqrt{z}$ is well known and I will not continue further (you can find a more authoritative explanation here). |
From Noether's theorem applied to fields we can get the general expression for the stress-energy-momentum tensor for some fields:
$$T^{\mu}_{\;\nu} = \sum_{i} \left(\frac{\partial \mathcal{L}}{\partial \partial_{\mu}\phi_{i}}\partial_{\nu}\phi_{i}\right)-\delta^{\mu}_{\;\nu}\mathcal{L}$$
The EM Lagrangian, in the Weyl gauge, is:
$$\mathcal{L} = \frac{1}{2}\epsilon_{0}\left(\frac{\partial \vec{A}}{\partial t}\right)^{2}-\frac{1}{2\mu_{0}}\left(\vec{\nabla}\times \vec{A}\right)^{2}$$
Applying the above, all I manage to get for the pressure along x, which I believe corresponds to the first diagonal element of the Maxwell stress tensor, is:
$$p_{x} = \sigma_{xx} = -T^{xx} = \frac{-1}{\mu_{0}}\left(\left(\partial_{x}A_{z}\right)^{2}-\partial_{x}A_{z}\partial_{z}A_{x}-\left(\partial_{x}A_{y}\right)^{2}+\partial_{x}A_{y}\partial_{y}A_{x}\right)+\mathcal{L}$$
But I can't see how this can be equal to what is given in Wikipedia. Why is this? |
Yeah, this software cannot be too easy to install, my installer is very professional looking, currently not tied into that code, but directs the user how to search for their MikTeX and or install it and does a test LaTeX rendering
Somebody like Zeta (on codereview) might be able to help a lot... I'm not sure if he does a lot of category theory, but he does a lot of Haskell (not that I'm trying to conflate the two)... so he would probably be one of the better bets for asking for a review of the code.
he is usually on the 2nd monitor chat room. There are a lot of people on those chat rooms that help each other with projects.
i'm not sure how many of them are adept at category theory though... still, this chat tends to emphasize a lot of small problems and occasionally goes off tangent.
your project is probably too large for an actual question on codereview, but there is a lot of github activity in the chat rooms. gl.
In mathematics, the Fabius function is an example of an infinitely differentiable function that is nowhere analytic, found by Jaap Fabius (1966). It was also written down as the Fourier transform of $\hat f(z)=\prod_{m=1}^{\infty}(\cos\ldots$
Defined as the probability that $\sum_{n=1}^\infty2^{-n}\zeta_n$ will be less than $x$, where the $\zeta_n$ are chosen randomly and independently from the unit interval
@AkivaWeinberger are you familiar with the theory behind Fourier series?
anyway here's a food for thought
for $f : S^1 \to \Bbb C$ square-integrable, let $c_n := \displaystyle \int_{S^1} f(\theta) \exp(-i n \theta) (\mathrm d\theta/2\pi)$, and $f^\ast := \displaystyle \sum_{n \in \Bbb Z} c_n \exp(in\theta)$. It is known that $f^\ast = f$ almost surely.
(a) is $-^\ast$ idempotent? i.e. is it true that $f^{\ast \ast} = f^\ast$?
@AkivaWeinberger You need to use the definition of $F$ as the cumulative function of the random variables. $C^\infty$ was a simple step, but I don't have access to the paper right now so I don't recall it.
> In mathematics, a square-integrable function, also called a quadratically integrable function, is a real- or complex-valued measurable function for which the integral of the square of the absolute value is finite.
I am having some difficulties understanding the difference between simplicial and singular homology. I am aware of the fact that they are isomorphic, i.e. the homology groups are in fact the same (and maybe this doesn't help my intuition), but I am having trouble seeing where in the setup they d...
Usually it is a great advantage to consult the notes, as they tell you exactly what has been done. A book will teach you the field, but not necessarily help you understand the style that the prof. (who creates the exam) creates questions.
@AkivaWeinberger having thought about it a little, I think the best way to approach the geometry problem is to argue that the relevant condition (centroid is on the incircle) is preserved by similarity transformations
hence you're free to rescale the sides, and therefore the (semi)perimeter as well
so one may (for instance) choose $s=(a+b+c)/2=1$ without loss of generality
that makes a lot of the formulas simpler, e.g. the inradius is identical to the area
It is asking how many terms of the Euler Maclaurin formula do we need in order to compute the Riemann zeta function in the complex plane?
$q$ is the upper summation index in the sum with the Bernoulli numbers.
This appears to answer it in the positive: "By repeating the above argument we see that we have analytically continued the Riemann zeta-function to the right-half plane σ > 1 − k, for all k = 1, 2, 3, . . .." |
In the idealised case, the answer to this is slightly surprising. The fact that the mass of a rocket must include the mass of its fuel is embodied in the rocket equation, $$\Delta v = v_e \ln\frac{m_i}{m_f},$$where $m_i$ is the initial mass of the rocket (including fuel, payload and everything else), and $m_f$ is the final mass, including the payload but much less fuel. $v_e$ is the effective exhaust velocity, which we might as well assume stays fixed for a given type of rocket, and $\Delta v$ is essentially the velocity change required to reach escape velocity, which we'll also assume stays constant.
The above equation does not include the acceleration due to gravity, which is of course an important factor. This is because (as is usually done) it's included in the $\Delta v$ term, which includes the velocity you lose to gravitational acceleration as the rocket ascends. You can put in the gravitational acceleration explicitly and the result doesn't change, as I'll show below.
Rearranging the rocket equation gives us$$m_i = m_f e^{\Delta v/v_e},$$which tells us the amount of fuel (the majority of $m_i$) we need to lift a mass $m_f$. You can see that this is exponential in $\Delta v$, meaning that if we want to go a little bit faster we need a much bigger rocket. This is called "the tyranny of the rocket equation."
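A small numerical illustration of both points (the numbers are made up but representative, and are not from this answer): the required initial mass grows exponentially in $\Delta v$ but only linearly in the payload mass $m_f$.
import math

v_e = 4500.0    # assumed effective exhaust velocity, m/s
dv = 11000.0    # assumed delta-v budget, m/s (escape velocity plus gravity/drag losses)

def initial_mass(m_f):
    return m_f * math.exp(dv / v_e)        # rearranged rocket equation

print(initial_mass(1000.0))                # ~11,500 kg of rocket for 1 t of payload
print(initial_mass(2000.0))                # exactly twice as much: linear in m_f
print(1000.0 * math.exp(2 * dv / v_e))     # but doubling delta-v instead: exponential cost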
In this case we don't want to go faster, we just want to send more stuff, i.e. we want to increase $m_f$. But the equation is not exponential in $m_f$, it's
linear. Therefore if we ignore any changes in rocket design that would be needed to increase its size, we can conclude that if you want to double the payload, you only need to double the size of the rocket, not quadruple it.
If we want to do this more precisely, we should include gravitational acceleration in the rocket equation. As per this answer by Asad to another question, this gives us$$\Delta v = v_e \ln \frac{m_i}{m_f} - g\left(\frac{m_f}{\dot m}\right),$$where $g$ is acceleration due to gravity and $\dot m$ is the rate at which fuel is burned, which we assume is constant over time. According to the reasoning in Asad's answer, we end up with$$m_i = m_f \left(\exp\left(\frac{\Delta v + g\left(\frac{m_f}{\dot m}\right)}{v_e}\right) -1\right)^{-1},$$where $\Delta v$ is now the true escape velocity rather than the effective escape velocity. In Asad's answer, he assumes that $\dot m$ stays constant as you change $m_f$, and he concludes that there is a strong limit to the size of a rocket. But in fact if you were going to make a rocket twice the size, it wouldn't make sense to keep $\dot m$ the same. To take it to an extreme, imagine building something the size of a Saturn V that burns fuel at the same rate as a hobby rocket. It obviously wouldn't be able to lift itself off the launch pad, and nobody would consider building such a design. So let's instead assume that the burn rate is proportional to the size of the rocket. This means that $\frac{m_f}{\dot m}$ is a constant, and the equation as a whole is still of the form$$m_i = m_f \times \text{a constant},$$so it's linear in $m_f$.
In fact none of this is really all that surprising after all, because if you want to send twice the mass you could always just use two rockets of the original size. By just strapping those rockets next to each other you've got one of twice the size that can send twice the payload. Moreover, it burns fuel at twice the rate, just as I assumed above. There's no reason that wouldn't work in principle. (Though in practice it would be another matter of course!)
If the equation had been exponential in $m_f$ then there would have been a point at which increasing the payload mass would require an unreasonable amount of extra fuel, and that would have imposed a strong practical limit on rocket size. But since it's linear this doesn't really happen. The limits on rocket size are not due to an exponential increase in propellant mass, but to the engineering challenges in building a structure of that size and complexity that won't fail under the violent conditions of a rocket launch.
These include factors to do with the way the strength of a structure scales with its size and (I imagine) practical issues involved in getting fuel where it needs to be at the right time. In this respect the factors that limit the size of rockets are quite similar to the factors that limit the size of buildings. |
I'll help you answer your second question.
But first, there are some difficulties with the problem you've been given. Firstly, your question isn't fully specified: you need the Hamiltonian itself, so you need at least the potential $V(x)$. The expression for $E_n$ you are given bespeaks either a quantum harmonic oscillator or an infinite well. Secondly, there is a typo in your beginning quantum state. The two inequalities in the definition should read $|x|<a$ and $|x|>a$ (not $|x|<0$, which no $x\in\mathbb{R}$ fulfills!).
So I'll presume an infinite well as this ties in the with Fourier series method of akhmeteli's answer.
As in akhmeteli's answer, you expand the quantum state in energy eigenstates $\mathcal{N}^{-1} \cos\left(\left(n + \frac{1}{2}\right)\frac{\pi}{a}x\right);\,n=0,1,2,\cdots$ (when $|x|<a$, nought outside the interval) corresponding to the energies $E_n$ (here $\mathcal{N}$ is the normalization that makes $\int_{-a}^a|\psi|^2dx=1$).
Note that this is a
discrete series. The energy doesn't take on a continuous values, so the magnitude squared of your Fourier series (not integral as for continuously varying energy) weights are the probabilities, not probability densities, that the quantum state will be found in each energy.
So when you do your Fourier series, you should check that $\sum_n |w_n|^2 = 1$, where $w_n$ are the Fourier series weights.
Now the probability $|w_n|^2$ does not vary with time. The phase of $w_n$ does, so the energy eigenstates interfere with one another to give a time varying wavefunction, but the probabilities to be in each energy eigenstate are constant. So you don't even need to know the time when you calculate your probability.
This should let you finish your question.
Next stage: Since you are dealing with the quantum harmonic oscillator, your energy eigenstates are the following set:
$$\psi_n(x) = \frac{1}{\sqrt{2^n\,n!}} \cdot \left(\frac{m\omega}{\pi \hbar}\right)^{1/4} \cdot e^{- \frac{m\omega x^2}{2 \hbar}} \cdot H_n\left(\sqrt{\frac{m\omega}{\hbar}} x \right), \qquad n = 0,1,2,\ldots.\qquad(1) $$
where $H_n(x)=(-1)^n e^{x^2}\frac{d^n}{dx^n}\left(e^{-x^2}\right)$ is the $n^{th}$ Hermite polynomial.
These eigenfunctions are orthogonal in the sense that:
$$\left<\psi_n, \psi_m\right> \stackrel{def}{=} \int_{-\infty}^\infty \psi_n(x)^* \psi_m(x) {\rm d}x = \delta_{m,n}\qquad (2)$$
i.e. the "inner product" is nought for different discrete energy eigenfunctions and 1 for $n=m$, so each eigenstate is "normalized" or said to have unit "length" in the Hilbert space spanned by the energy eigenstates (don't worry if you don't understand all of this; these are the kinds of statements that'll become more and more wonted to you if you keep at your self study). The above orthogonality is the key to the kind of decomposition that Akhmeteli and I have spoken of: any initial quantum state $\psi(x)$ can be resolved into its energy eigenstates by assuming:
$$\psi(x) = \sum\limits_{m=0}^\infty w_m \,\psi_m(x)\qquad(3)$$
then multiplying both sides of (3) by the complex conjugate of the $m^{th}$ eigenstate in turn and integrating over the whole real interval, applying (2) to find (noting that we can integrate the series termwise):
$$w_m = \int_{-\infty}^\infty \psi_m(x)^*\, \psi(x)\, {\rm d}x \,\qquad(4)$$
Now it is the field of spectral theory that shows that our set of eigenfunctions (1) is
complete, i.e. that a sum of the kind (3) can indeed represent (in the appropriate measure theoretic sense) any piecewise continuous $\psi(x)$ fulfilling $\int_{-\infty}^\infty |\psi(x)|^2 {\rm d}x < \infty$, which of course is true for valid quantum states since we must have $\int_{-\infty}^\infty |\psi(x)|^2 {\rm d}x =1$ (this is just a necessary condition for $|\psi(x)|^2$ to be a probability density in $x$).
You will ultimately learn that this "orthogonality" is a property of all eigenfunctions of any quantum observable, not only the Hamiltonian. This result comes to us from Sturm-Liouville theory and is owing to the self-adjointness of quantum observables (more generally it holds for any normal operator - one that commutes with its own adjoint).
Lastly, note that, since $\psi_n(x)$ is the $n^{th}$ energy eigenfunction, its full space and time variation is $\psi_n(x, t) = \psi_n(x) \exp(-i E_n t/\hbar)$. So once you've resolved your beginning quantum state into a superposition like (3), you can write down its general time dependence:
$$\psi(x, t) = \sum\limits_{m=0}^\infty \left(w_m \,\psi_m(x)\,e^{-i \frac{E_m}{\hbar} t}\right)\qquad(5)$$
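A numerical sketch of equations (1)-(5) (my own illustration, with $\hbar=m=\omega=1$ and a made-up initial state): build the eigenfunctions from Hermite polynomials, check the orthonormality (2) on a grid, extract the weights via (4), and evolve via (5).
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi

hbar = m = w = 1.0
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]

def psi_n(n):
    # energy eigenfunction (1) of the harmonic oscillator, evaluated on the grid x
    Hn = hermval(np.sqrt(m * w / hbar) * x, [0] * n + [1])   # physicists' Hermite H_n
    norm = (m * w / (pi * hbar)) ** 0.25 / np.sqrt(2.0 ** n * factorial(n))
    return norm * np.exp(-m * w * x ** 2 / (2 * hbar)) * Hn

# (2): orthonormality, checked by simple quadrature on the grid
print((psi_n(0) * psi_n(0)).sum() * dx)    # ~1
print((psi_n(0) * psi_n(3)).sum() * dx)    # ~0

# a made-up (real) initial state: a displaced, slightly squeezed Gaussian
psi0 = np.exp(-(x - 1.0) ** 2)
psi0 /= np.sqrt((np.abs(psi0) ** 2).sum() * dx)

# (4): weights w_m, and a check that the probabilities |w_m|^2 sum to ~1
nmax = 30
weights = np.array([(psi_n(k) * psi0).sum() * dx for k in range(nmax)])
print((np.abs(weights) ** 2).sum())        # ~1, so 30 eigenstates suffice here

# (5): the state at a later time t; the norm stays 1
t = 0.7
E = hbar * w * (np.arange(nmax) + 0.5)
psi_t = sum(weights[k] * psi_n(k) * np.exp(-1j * E[k] * t / hbar) for k in range(nmax))
print((np.abs(psi_t) ** 2).sum() * dx)     # ~1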
This should let you get a bit further! |
Yes, even when $n$ is known in advance, and each string-length is $\lceil\log_2(n)\rceil + 1$, and we only care about the state between updates (not the computation required to perform updates), and the update procedure can be non-computable.
For all positive integers $n$ and all elements $x$ of $\{0,1\}^n$, by [the simpler version of the Chernoff bound], the probability of [a random element $y$ of $\{0,1\}^n$ differing from $x$ on at most $4/9$ of the positions] is at most $\exp(-(2/324)\cdot n)$, which is less than $2^{-(1/113)\cdot n}$. Thus, for choosing a subset of $\{0,1\}^n$ whose elements pairwise differ on more than $4/9$ of the positions, each choice eliminates less than a $2^{-(1/113)\cdot n}$ fraction of the original possibilities, so there is such a subset with more than $2^{(1/113)\cdot n}$ elements. Assume $n$ is an integer that's greater than 2, and let $S$ be a subset of $\{0,1\}^{n-2}$ with more than $2^{(n-2)/113}$ elements which is such that distinct elements of $S$ differ on more than $4/9$ of the positions, and consider any initially-randomized algorithm whose error probability will be at most $1/6$ when the inputs are chosen as follows:
Choose $s$ uniformly from $S$, and send $(0,s_0),(1,s_1),(2,s_2),(3,s_3),(4,s_4),\ldots,(n-4,s_{n-4}),(n-3,s_{n-3}),\star$ along both streams.
Choose $m$ uniformly from $\{0,1,2,3,4,\ldots,n-4,n-3\}$ and then send $\star$ on stream 0 and $(m,1)$ on stream 1.
The expected value, over its own possible choices of randomness for each update, of its error probability (over the choice of input) conditioned on its own randomness, is at most $1/6$, so there is some internal randomness for which its error probability (over the choice of input) will be at most $1/6$. Fix some such randomness for each update, giving a deterministic algorithm, which I'll call DSSEA, whose error probability on that input distribution is at most $1/6$. Consider a guesser G which uses [DSSEA and DSSEA's state just before receiving the last pair of strings] as follows:
Let $s'$ be the element of $\{0,1\}^{n-2}$ given by [$s'_i$ is DSSEA's output after sending $\star$, $(i,1)$ along streams 0, 1 respectively].
Output the lexicographically least element of $S$ which differs from $s'$ on a minimum number of positions.
The expected value, over the choice of strings other than the last pair, of DSSEA's error probability (over the choice of the last pair) conditioned on the other strings, is at most $1/6$, so the probability of that conditional probability exceeding $2/9$ is at most $3/4$. Whenever that conditional probability is at most $2/9$, $s'$ will differ from $s$ on at most $2/9$ of the positions, so G will output $s$, since all other elements of $S$ differ from $s$ on more than $4/9$ of the positions, and so from $s'$ on more than $2/9$ of the positions. Thus G has probability at least $1/4$ of outputting $s$, so in particular has more than $\big(2^{(n-2)/113}\big)\big/4$ possible outputs. By the choice of G, that means DSSEA has more than $2^{((n-2)/113)-2}$ possible internal states just before the last update. Hardcoding separate randomness for each update does not increase that number, so the initial randomized algorithm must be able to keep at least $((n-2)/113)-2$ bits of state between updates.
For all integers $n$, if $25992 < n$ then $n/114 < ((n-2)/113)-2$.
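A tiny check of the two purely numerical facts used here (my own verification sketch): $\exp(-(2/324)\cdot n) < 2^{-(1/113)\cdot n}$ for $n>0$ because $2/324 > \ln(2)/113$, and $25992$ is exactly the threshold beyond which $n/114 < ((n-2)/113)-2$.
import sympy as sp

n = sp.symbols('n')

# exp(-(2/324)*n) < 2**(-(1/113)*n) for n > 0  <=>  2/324 > ln(2)/113
diff = sp.Rational(2, 324) - sp.log(2) / 113
print(diff.evalf(), diff.evalf() > 0)    # ~3.9e-5, True

# n/114 = ((n-2)/113) - 2 exactly at n = 25992; the strict inequality holds for larger n
print(sp.solve(sp.Eq(n / 114, (n - 2) / sp.Integer(113) - 2), n))   # [25992]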
For constant error rates above 1/6, just reduce the error rate by taking a majority vote of O(1) independent parallel runs.
For numbers $M$ of possible strings and positive integers $j$ in $o(\log(M))$ and error probabilities bounded above by $1\big/M^{(1+\Omega(1))/j}$, one can similarly get an asymptotic lower bound of $\big(\lfloor \log_2\binom{M}{n-1}\rfloor - 1\big) \,/\, (2\cdot j - 1)$, although I have neither tried working out whether-or-not that dependence on $j$ should be within a constant factor of tight nor tried bounds for other parameter regimes. |
CentralityBin () CentralityBin (const char *name, Float_t low, Float_t high) CentralityBin (const CentralityBin &other) virtual ~CentralityBin () CentralityBin & operator= (const CentralityBin &other) Bool_t IsAllBin () const Bool_t IsInclusiveBin () const const char * GetListName () const virtual void CreateOutputObjects (TList *dir, Int_t mask) virtual Bool_t ProcessEvent (const AliAODForwardMult *forward, UInt_t triggerMask, Bool_t isZero, Double_t vzMin, Double_t vzMax, const TH2D *data, const TH2D *mc, UInt_t filter, Double_t weight) virtual Double_t Normalization (const TH1I &t, UShort_t scheme, Double_t trgEff, Double_t &ntotal, TString *text) const virtual void MakeResult (const TH2D *sum, const char *postfix, bool rootProj, bool corrEmpty, Double_t scaler, Int_t marker, Int_t color, TList *mclist, TList *truthlist) virtual bool End (TList *sums, TList *results, UShort_t scheme, Double_t trigEff, Double_t trigEff0, Bool_t rootProj, Bool_t corrEmpty, Int_t triggerMask, Int_t marker, Int_t color, TList *mclist, TList *truthlist) Int_t GetColor (Int_t fallback=kRed+2) const void SetColor (Color_t colour) TList * GetResults () const const char * GetResultName (const char *postfix="") const TH1 * GetResult (const char *postfix="", Bool_t verbose=true) const void SetDebugLevel (Int_t lvl) void SetSatelliteVertices (Bool_t satVtx) virtual void Print (Option_t *option="") const const Sum * GetSum (Bool_t mc=false) const Sum * GetSum (Bool_t mc=false) const TH1I * GetTriggers () const TH1I * GetTriggers () const TH1I * GetStatus () const TH1I * GetStatus ()
Calculations done per centrality. These objects are only used internally and are never streamed. We do not make dictionaries for this (and derived) classes as they are constructed on the fly.
Definition at line 701 of file AliBasedNdetaTask.h.
Calculate the Event-Level normalization.
The full event level normalization for trigger \(X\) is given by
\begin{eqnarray*} N &=& \frac{1}{\epsilon_X} \left(N_A+\frac{N_A}{N_V}(N_{-V}-\beta)\right)\\ &=& \frac{1}{\epsilon_X}N_A \left(1+\frac{1}{N_V}(N_T-N_V-\beta)\right)\\ &=& \frac{1}{\epsilon_X}N_A \left(1+\frac{N_T}{N_V}-1-\frac{\beta}{N_V}\right)\\ &=& \frac{1}{\epsilon_X}N_A \left(\frac{1}{\epsilon_V}-\frac{\beta}{N_V}\right) \end{eqnarray*}
where
\(\epsilon_X=\frac{N_{T,X}}{N_X}\) is the trigger efficiency evaluated in simulation.
\(\epsilon_V=\frac{N_V}{N_T}\) is the vertex efficiency evaluated from the data.
\(N_X\) is the Monte-Carlo truth number of events of type \(X\).
\(N_{T,X}\) is the Monte-Carlo truth number of events of type \(X\) which were also triggered as such.
\(N_T\) is the number of data events that were triggered as type \(X\) and had a collision trigger (CINT1B).
\(N_V\) is the number of data events that were triggered as type \(X\), had a collision trigger (CINT1B), and had a vertex.
\(N_{-V}\) is the number of data events that were triggered as type \(X\), had a collision trigger (CINT1B), but no vertex.
\(N_A\) is the number of data events that were triggered as type \(X\), had a collision trigger (CINT1B), and had a vertex in the selected range.
\(\beta=N_a+N_c-N_e\) is the number of control triggers that were also triggered as type \(X\).
\(N_a\) is the number of beam-empty events also triggered as type \(X\) (CINT1-A or CINT1-AC).
\(N_c\) is the number of empty-beam events also triggered as type \(X\) (CINT1-C).
\(N_e\) is the number of empty-empty events also triggered as type \(X\) (CINT1-E).
Note that if \( \beta \ll N_A\), the last term can be ignored and the expression simplifies to
\[ N = \frac{1}{\epsilon_X}\frac{1}{\epsilon_V}N_A \]
Parameters
t: Histogram of triggers
scheme: Normalisation scheme
trgEff: Trigger efficiency
ntotal: On return, the total number of events to normalise to.
text: If non-null, fill with the normalization calculation.
Returns: \(N_A/N\), or a negative number in case of errors.
Definition at line 1784 of file AliBasedNdetaTask.cxx.
Referenced by End(). |
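For orientation, a minimal Python sketch of the event-level normalization formula documented above (this is not the AliROOT implementation, and the event counts below are made-up toy values):

def event_normalization(n_a, n_v, n_t, beta, eps_x):
    """Event-level normalization N for trigger class X.

    n_a   : triggered events with a vertex in the selected range (N_A)
    n_v   : triggered events with a vertex (N_V)
    n_t   : triggered events with a collision trigger (N_T)
    beta  : control-trigger background estimate (N_a + N_c - N_e)
    eps_x : trigger efficiency from simulation
    """
    eps_v = n_v / n_t                          # vertex efficiency from data
    n = (1.0 / eps_x) * n_a * (1.0 / eps_v - beta / n_v)
    return n, n_a / n                          # total normalization and N_A/N

print(event_normalization(n_a=8.0e5, n_v=9.0e5, n_t=1.0e6, beta=2.0e3, eps_x=0.95))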
I'd like to set
$1024 \times 768$ without any space between the three items. Is this possible? If so, how?
E.g., what I get is:
1024 x 768
and what I want is:
1024x768
Math binary operators and relations automatically add appropriate spaces between the symbol and their operands. If you want to remove this space, you can turn the operator into a regular symbol by enclosing it in braces. For example
$1024 {\times} 768$
If you will be using this often you can also define a new command and say something like
\newcommand{\stimes}{{\times}}
$1024 \stimes 768$
where \stimes is a symbol version of the \times operator.
These answers seem overly complicated to me. I personally just use
\! between symbols as in:
$W \! \rightarrow \! \mu$
This brings the symbols closer together. You can also use multiple in a row
$W \! \! \! \rightarrow \! \mu$
Perhaps defining it as an ordinary math symbol might be better than just enclosing it in braces and expecting that to keep working now and in the future. So, I would use
\mathord:
$1024\mathord{\times}768$
It seems to me that what you really want is a multiplication sign that works in text mode. You can get this by writing $\times$ or, to answer your whole question
1024$\times$768.
By the way, nice question. This is a good example of where it makes sense not to use normal math typography. |
For a test case, I want to determine the velocity profile of a viscously damped standing wave.
By linearizing the density ($\rho=\rho_0+\rho'$) and velocity ($u_x=u_x'$), the continuity and Navier-Stokes equations result in, respectively:
\begin{align} \partial_t\rho' + \rho_0\partial_xu_x' &= 0 \tag{1} \\ \partial_t^2\rho' &= c_s^2\partial_x^2\rho' + \nu\partial_t\partial_x^2\rho' \tag{2} \end{align}
The $c_s$ is just a constant indicating we are dealing with an ideal pressure term ($p=\rho c_s^2$)
A solution for the density to $(2)$ is given by:
$$\rho=\rho_0+\Delta\rho\sin(k_xx)\cos(\omega_it)\exp(-\omega_rt)$$
where $$k_x=2\pi/n_x, \quad \omega_r=\frac{1}{2}k_x^2\nu, \quad \omega_i=k_xc_s\sqrt{1-\left(\frac{1}{2}\frac{k_x\nu}{c_s} \right)^2} \, .$$
Now I want to determine the velocity; it would seem straightforward to use $(1)$ to get
$$\partial_xu_x'=-\partial_t\rho'/\rho_0=\frac{\triangle\rho}{\rho_{0}}\sin\left(k_{x}x\right)\left[\omega_{r}\cos\left(\omega_{i}t\right)-\omega_{i}\sin\left(\omega_{i}t\right)\right]\exp\left(-\omega_{r}t\right)$$
and integrate to get
$$u_{x}'=-\frac{1}{k_{x}}\frac{\triangle\rho}{\rho_{0}}\cos\left(k_{x}x\right)\left[\omega_{r}\cos\left(\omega_{i}t\right)-\omega_{i}\sin\left(\omega_{i}t\right)\right]\exp\left(-\omega_{r}t\right)+K$$
where $K$ is an integration constant. My approach was to determine $K$ by setting the velocity to zero at an antinode (at $x=n_x/4$), to get
$$u_{x}'=-\frac{1}{k_{x}}\frac{\triangle\rho}{\rho_{0}}\cos\left(k_{x}x\right)\left[\omega_{r}\cos\left(\omega_{i}t\right)-\omega_{i}\sin\left(\omega_{i}t\right)\right]\exp\left(-\omega_{r}t\right) \, .$$
However, comparing the simulation with the analytical solution it seems that the amplitude of the velocity is much larger in the simulation.
Is my approach described above at all correct? |
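For reference, a minimal Python sketch (with illustrative values for $n_x$, $c_s$, $\nu$ and $\Delta\rho/\rho_0$, none of them taken from the question) that evaluates the analytic profiles above so they can be overlaid on simulation output:

import numpy as np

# illustrative parameters only: domain size, sound speed, viscosity, drho/rho0
nx, cs, nu, amp = 100.0, 1.0 / np.sqrt(3.0), 1e-2, 1e-3

kx      = 2.0 * np.pi / nx
omega_r = 0.5 * kx**2 * nu
omega_i = kx * cs * np.sqrt(1.0 - (0.5 * kx * nu / cs)**2)

def rho_prime(x, t):
    return amp * np.sin(kx * x) * np.cos(omega_i * t) * np.exp(-omega_r * t)

def u_x(x, t):
    envelope = omega_r * np.cos(omega_i * t) - omega_i * np.sin(omega_i * t)
    return -(amp / kx) * np.cos(kx * x) * envelope * np.exp(-omega_r * t)

x = np.linspace(0.0, nx, 201)
print(np.abs(u_x(x, t=10.0)).max())   # peak analytic velocity magnitude at t = 10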
The problem isn't 100% clear, and a full treatment would probably require the use of coupled oscillation techniques that you may or may not have learned yet. But if this is meant to be solved with "basic" techniques, here's how I would think about it:
For a normal pendulum, the tension in the string is largest when the string is passing through the vertical (since its angular speed is largest there.)
Thus, if the pendulum has a frequency $f$, the tension in the string will oscillate with a frequency $2f$.
The pendulum string is therefore acting as a driving force with frequency $2f$ on the system consisting of the weight $W$ and the spring $k$. What's more, this driving force, at this frequency, causes the weight-spring system to oscillate with a "large amplitude".
Take it from there.
To sketch out a more formal technique, we can use Lagrangian mechanics. Let $Y$ denote the displacement of the upper mass from its equilibrium position, let $\theta$ denote the angle between the string and the vertical, and let $M = W/g$ denote the mass of the upper block. After some geometry, we can show that the Lagrangian for this system is$$\mathcal{L} = \frac{1}{2} (m+M) \dot{Y}^2 - m \ell \sin \theta \dot{\theta} \dot{Y} + \frac{m}{2} \ell^2 \dot{\theta}^2 + m g \ell \cos \theta - \frac{1}{2} k Y^2,$$ and taking the associated Euler-Lagrange equations, we conclude that\begin{align*}M \ddot{Y} - m \ell \left(\cos \theta \dot{\theta}^2 + \sin \theta \ddot{\theta} \right) &= - k Y \\- m \ell \sin \theta \ddot{Y} + m \ell^2 \ddot{\theta} &= - m g \ell \sin \theta. \end{align*}
We now can look for a formal power series solution to these equations:\begin{align}Y(t) &= \epsilon Y^{(1)}(t) + \epsilon^2 Y^{(2)}(t) + \dots \\\theta(t) &= \epsilon \theta^{(1)}(t) + \epsilon^2 \theta^{(2)}(t) + \dots \end{align}We now want to plug these in to the Euler-Lagrange equations and expand them out order by order in $\epsilon$. At $\mathcal{O}(\epsilon)$, we find that$$M \ddot{Y}^{(1)} = -k Y^{(1)}, \qquad m \ell^2 \ddot{\theta}^{(1)} = - m g \ell \theta^{(1)},$$from which we conclude that for small oscillations, we have simple harmonic motion in both $\theta$ and $Y$. Moreover, these oscillations are uncoupled; at this level of approximation, we would not see the behavior described in the problem.
To see the coupling effects between the two coordinates, we have to expand the Euler-Lagrange equations to $\mathcal{O}(\epsilon^2)$; if we do this, we get (after some algebra)\begin{align}M \ddot{Y}^{(2)} = m \ell \left( \left( \dot{\theta}^{(1)} \right)^2 + \theta^{(1)} \ddot{\theta}^{(1)} \right) -k Y^{(2)}\end{align}along with a similar equation for $\theta^{(2)}$. This latter equation can be rearranged to yield an undamped driven oscillator equation, where the function $\theta^{(1)}$ and its derivatives act as the "driving force" for the second-order perturbations $Y^{(2)}$. The fact that these oscillations become "large" allows us to say something about the values of $k$ and $M$. |
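As a mechanical cross-check of the quoted Lagrangian, a short sympy sketch that derives the Euler-Lagrange equations from it (the symbols mirror the answer; this is only a verification aid, not part of the original solution):

import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
m, M, ell, g, k = sp.symbols('m M ell g k', positive=True)
Y = sp.Function('Y')(t)
theta = sp.Function('theta')(t)

# Lagrangian as written in the answer above
L = (sp.Rational(1, 2) * (m + M) * Y.diff(t)**2
     - m * ell * sp.sin(theta) * theta.diff(t) * Y.diff(t)
     + sp.Rational(1, 2) * m * ell**2 * theta.diff(t)**2
     + m * g * ell * sp.cos(theta)
     - sp.Rational(1, 2) * k * Y**2)

for eq in euler_equations(L, [Y, theta], t):
    sp.pprint(sp.simplify(eq))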
LHCb Collaboration; Bernet, R; Büchler-Germann, A; Bursche, A; Chiapolini, N; De Cian, M; Elsasser, C; Müller, K; Palacios, J; Salzmann, C; Serra, N; Steinkamp, O; Straumann, U; Tobin, M; Vollhardt, A; Anderson, J; Aaij, R; Abellán Beteta, C; Adeva, B; Zvyagin, A (2012).
Measurement of the ratio of prompt $\chi_{c}$ to $J/\psi$ production in $pp$ collisions at $\sqrt{s}=7$ TeV. Physics Letters B, 718(2):431-440. Abstract
The prompt production of charmonium $\chi_{c}$ and $J/\psi$ states is studied in proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV at the Large Hadron Collider. The $\chi_{c}$ and $J/\psi$ mesons are identified through their decays $\chi_{c}\rightarrow J/\psi \gamma$ and $J/\psi\rightarrow \mu^+\mu^-$ using 36 pb$^{-1}$ of data collected by the LHCb detector in 2010. The ratio of the prompt production cross-sections for $\chi_{c}$ and $J/\psi$, $\sigma (\chi_{c}\rightarrow J/\psi \gamma)/ \sigma (J/\psi)$, is determined as a function of the $J/\psi$ transverse momentum in the range $2 < p_{\mathrm T}^{J/\psi} < 15$ GeV/$c$. The results are in excellent agreement with next-to-leading order non-relativistic expectations and show a significant discrepancy compared with the colour singlet model prediction at leading order, especially in the low $p_{\mathrm T}^{J/\psi}$ region.
Does the sequence $\sin(n!)$ diverge(converge)?
It seems the sequence diverges. I tried for a contradiction but with no success. Thanks for your cooperation.
Depends on whether the argument of $\sin$ is in radians or degrees. If it is in degrees, then $n!$ eventually becomes a multiple of 360, and from that point on the function value is zero for all larger $n$.
In radians this will not happen as $\pi$ is irrational.
Hint: Take a subsequence in which $a_{n_i} \approx \pi ( 4i+1)/2 $ and another where $b_{n_j} \approx \pi ( 4j +3)/2$.
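A quick high-precision numerical look (in radians), assuming mpmath is available, shows the values continuing to oscillate; this is evidence for divergence, not a proof:

from mpmath import mp, sin, factorial

mp.dps = 80                      # enough digits that n! modulo 2*pi is still meaningful
for n in range(1, 26):
    print(n, sin(factorial(n)))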
I have developed a differential equation for the variation of a star's semi-major axis with respect to its eccentricity.
It is as follows:
$$\frac{dy}{dx}=\frac{12}{19}\frac{y\left(1+\left(\frac{73}{24}x^2\right)+\left(\frac{37}{26}x^4\right)\right)}{x\left(1+\left(\frac{121}{304}x^2\right)\right)}$$
Where $y$ is the semi-major axis and $x$ is the eccentricity. The 3-D plots of this equation can be found [here](http://www.wolframalpha.com/input/?i=3D+plot++%5Cfrac%7B12%7D%7B19%7D%5Cfrac%7By%5Cleft(1%2B%5Cleft(%5Cfrac%7B73%7D%7B24%7Dx%5E2%5Cright)%2B%5Cleft(%5Cfrac%7B37%7D%7B26%7Dx%5E4%5Cright)%5Cright)%7D%7Bx%5Cleft(1%2B%5Cleft(%5Cfrac%7B121%7D%7B304%7Dx%5E2%5Cright)%5Cright)%7D)
And this is the solution to the above DE [here](http://www.wolframalpha.com/input/?i=solve+y'%3D+%5Cfrac%7B12%7D%7B19%7D%5Cfrac%7By%5Cleft(1%2B%5Cleft(%5Cfrac%7B73%7D%7B24%7Dx%5E2%5Cright)%2B%5Cleft(%5Cfrac%7B37%7D%7B26%7Dx%5E4%5Cright)%5Cright)%7D%7Bx%5Cleft(1%2B%5Cleft(%5Cfrac%7B121%7D%7B304%7Dx%5E2%5Cright)%5Cright)%7D)
The decay time of stars can be found by solving the following integral:
$$T(a_{0},e_{0})=\frac{12}{19}\frac{c_{0}^4}{\gamma}\int_{0}^{e_0}{\frac{e^{29/19}[1+(121/304)e^2]^{1181/2299}}{(1-e^2)^{3/2}}}de\tag1$$ Where $$\gamma=\frac{64G^3}{5c^5}m_{1}m_{2}(m_{1}+m_{2})$$ For $e_{0}$ close to $1$ the equation becomes: $$T(a_{0},e_{0})\approx\frac{768}{425}T_{f}(1-e_{0}^2)^{7/2}\tag2$$ Where $$T_{f}=\frac{a_{0}^4}{4\gamma}$$
I used Appell's hypergeometric functions to solve integral (1), but is there any way in which I can express the solutions in terms of a few special functions with simpler symmetries, so that the analysis becomes easier? There is a well-defined symmetry for the above equation from the plot. Hence, is it possible to express this in terms of other special functions (which have different symmetries)?
EDIT: It was suggested to me that, since the powers in the integrand in equation (1) are very non-trivial, the hypergeometric function probably cannot be simplified further. But I fail to understand why this might pose a problem. Can't this D.E. be solved by Lie symmetry methods? Or can this solution's field be treated using Frobenius' theorem and its dimensions analysed?
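A hedged numerical aside: the dimensionless integral in (1) can be evaluated directly with scipy for specific $e_0$, which at least gives something to test any closed-form or special-function expression against (the prefactor $\frac{12 c_0^4}{19\gamma}$ is omitted):

import numpy as np
from scipy.integrate import quad

def integrand(e):
    return e**(29.0/19.0) * (1.0 + (121.0/304.0) * e**2)**(1181.0/2299.0) / (1.0 - e**2)**1.5

for e0 in (0.1, 0.5, 0.9, 0.99):
    val, err = quad(integrand, 0.0, e0)
    print(e0, val, err)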
Consider the series:
$$\sum_{n=1}^{\infty}\frac{\zeta(2n+1)}{n(2n+1)}$$
We can easily prove that it's a convergent series. My question, is there a way to express this series in terms of zeta constants ?
It is not difficult to find the generating function for the values of the $\zeta$ function over the positive odd integers:
$$ f(x)=\sum_{n\geq 1}\zeta(2n+1) x^{2n} = -\gamma-\frac{1}{2}\left[\psi(1-x)+\psi(1+x)\right] \tag{1}$$ and: $$ \sum_{n\geq 1}\frac{\zeta(2n+1)}{n(2n+1)}=2\sum_{n\geq 1}\zeta(2n+1)\left(\frac{1}{2n}-\frac{1}{2n+1}\right)=2\int_{0}^{1}f(x)\left(\frac{1}{x}-1\right)\,dx\tag{2}$$
but the integrals $\int\frac{\psi(1\pm x)}{x}\,dx$ or $\int \log(x)\,\psi'(1\pm x)\,dx$ do not have nice closed forms (to my knowledge) except in terms of the Hurwitz zeta function and its derivatives.
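The reduction in (2) is easy to sanity-check numerically with mpmath, using the digamma form of $f(x)$ from (1); this is only a consistency check, not a closed form:

from mpmath import mp, nsum, quad, zeta, digamma, euler, inf

mp.dps = 30

# left-hand side: the original series
lhs = nsum(lambda n: zeta(2*n + 1) / (n * (2*n + 1)), [1, inf])

# right-hand side: 2 * integral over (0, 1) of f(x) * (1/x - 1)
def f(x):
    return -euler - 0.5 * (digamma(1 - x) + digamma(1 + x))

rhs = 2 * quad(lambda x: f(x) * (1/x - 1), [0, 1])
print(lhs, rhs)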
Yeah, this software cannot be too easy to install, my installer is very professional looking, currently not tied into that code, but directs the user how to search for their MikTeX and or install it and does a test LaTeX rendering
Somebody like Zeta (on codereview) might be able to help a lot... I'm not sure if he does a lot of category theory, but he does a lot of Haskell (not that I'm trying to conflate the two)... so he would probably be one of the better bets for asking for revision of code.
he is usually on the 2nd monitor chat room. There are a lot of people on those chat rooms that help each other with projects.
i'm not sure how many of them are adept at category theory though... still, this chat tends to emphasize a lot of small problems and occasionally goes off tangent.
your project is probably too large for an actual question on codereview, but there is a lot of github activity in the chat rooms. gl.
In mathematics, the Fabius function is an example of an infinitely differentiable function that is nowhere analytic, found by Jaap Fabius (1966). It was also written down as the Fourier transform of f^(z) = ∏_{m=1}^{∞} (cos...
Defined as the probability that $\sum_{n=1}^\infty2^{-n}\zeta_n$ will be less than $x$, where the $\zeta_n$ are chosen randomly and independently from the unit interval
@AkivaWeinberger are you familiar with the theory behind Fourier series?
anyway here's a food for thought
for $f : S^1 \to \Bbb C$ square-integrable, let $c_n := \displaystyle \int_{S^1} f(\theta) \exp(-i n \theta) (\mathrm d\theta/2\pi)$, and $f^\ast := \displaystyle \sum_{n \in \Bbb Z} c_n \exp(in\theta)$. It is known that $f^\ast = f$ almost surely.
(a) is $-^\ast$ idempotent? i.e. is it true that $f^{\ast \ast} = f^\ast$?
@AkivaWeinberger You need to use the definition of $F$ as the cumulative function of the random variables. $C^\infty$ was a simple step, but I don't have access to the paper right now so I don't recall it.
> In mathematics, a square-integrable function, also called a quadratically integrable function, is a real- or complex-valued measurable function for which the integral of the square of the absolute value is finite.
I am having some difficulties understanding the difference between simplicial and singular homology. I am aware of the fact that they are isomorphic, i.e. the homology groups are in fact the same (and maybe this doesn't help my intuition), but I am having trouble seeing where in the setup they d...
Usually it is a great advantage to consult the notes, as they tell you exactly what has been done. A book will teach you the field, but not necessarily help you understand the style that the prof. (who creates the exam) creates questions.
@AkivaWeinberger having thought about it a little, I think the best way to approach the geometry problem is to argue that the relevant condition (centroid is on the incircle) is preserved by similarity transformations
hence you're free to rescale the sides, and therefore the (semi)perimeter as well
so one may (for instance) choose $s=(a+b+c)/2=1$ without loss of generality
that makes a lot of the formulas simpler, e.g. the inradius is identical to the area
It is asking how many terms of the Euler Maclaurin formula do we need in order to compute the Riemann zeta function in the complex plane?
$q$ is the upper summation index in the sum with the Bernoulli numbers.
This appears to answer it in the positive: "By repeating the above argument we see that we have analytically continued the Riemann zeta-function to the right-half plane σ > 1 − k, for all k = 1, 2, 3, . . .." |
Does $\int_1^\infty\sin (\frac{\sin x}{x})\mathrm d x$ diverge or not? If it converges, does it converge conditionally or absolutely? I guess that it converges conditionally; also, I think it may be related to $\int_{n\pi}^{(n+1)\pi}\frac{\sin x}{x}\mathrm d x$, but I do not know how to start. Any help will be appreciated.
For any $n\in\mathbb{Z}^+$ the integral $\int_{0}^{+\infty}\left(\frac{\sin x}{x}\right)^n\,dx$ can be explicitly computed (see here, for instance).
From the approximation $\frac{\sin x}{x}\approx e^{-x^2/6}$, it is expected to be positive and decay like $\sqrt{\frac{3\pi}{2n}}$.
These facts already, together with $\sin z$ being an entire function, give that our integral is a converging one. A more convincing argument, maybe, is that for any $x\geq 1$ the inequality: $$ \left(1-\frac{1}{5x^2}\right)\cdot\frac{\sin x}{x}\leq \sin\left(\frac{\sin x}{x}\right)\leq\frac{\sin x}{x} $$ holds. Being equivalent to $\int_{0}^{+\infty}\frac{\sin x}{x}\,dx$, our integral is converging but not absolutely converging. |
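In the spirit of the $\int_{n\pi}^{(n+1)\pi}$ hint in the question, a small numerical sketch that sums the contributions of successive half-periods; the partial sums settle down, which illustrates (but does not prove) convergence:

import numpy as np
from scipy.integrate import quad

def g(x):
    return np.sin(np.sin(x) / x)

total = quad(g, 1.0, np.pi)[0]             # the piece before the first full half-period
for n in range(1, 2001):
    piece, _ = quad(g, n * np.pi, (n + 1) * np.pi)
    total += piece
    if n % 500 == 0:
        print(n, total)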
Yes, this presents no difficulty.
As long as you can sample from the full conditionals (and it sounds like you can) then yes.
For a bivariate $(U,V)$ that's just sampling $(V|U=u)$ and $(U|V=v)$. Let's consider a simple case (for which we don't really need Gibbs sampling). Let:
$f_{X,Q}(x,q)= \frac{{n\choose x}} {\mathrm{B}(\alpha,\beta)} q^{x+\alpha-1} (1-q)^{n-x+\beta-1}\,, \quad x=0,1,...,n \quad 0<q<1 $
which can be written as
$f_{X,Q}(x,q) = f_{X|Q=q}(x) f_Q(q)$
$\qquad\qquad\:\:= {n\choose x}q^x(1-q)^{n-x}\,\cdot\,\frac{1} {\mathrm{B}(\alpha,\beta)} q^{\alpha-1} (1-q)^{\beta-1} $
So if we know $Q=q$ we can sample from $X$; it's just binomial.
On the other hand, conditional on $X=x$ we can see that $Q$ just has a beta distribution, so again we can sample from that.
[If we perform Gibbs sampling on that pair of full conditionals we'd be ultimately sampling from a beta-binomial marginal distribution for $X$; if that marginal was of primary interest we could calculate it directly by integration.] |
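A minimal numpy sketch of this Gibbs sampler, with illustrative values of $n$, $\alpha$ and $\beta$:

import numpy as np

rng = np.random.default_rng(0)
n, alpha, beta = 20, 2.0, 3.0              # illustrative values only
iters, burn = 20000, 1000

x, q = 0, 0.5                              # arbitrary starting point
xs = np.empty(iters, dtype=int)
for i in range(iters):
    x = rng.binomial(n, q)                 # X | Q = q  ~  Binomial(n, q)
    q = rng.beta(x + alpha, n - x + beta)  # Q | X = x  ~  Beta(x + alpha, n - x + beta)
    xs[i] = x

print(xs[burn:].mean())

The sample mean of the retained draws should land near $n\alpha/(\alpha+\beta) = 8$, the mean of the beta-binomial marginal mentioned at the end of the answer.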
I know that if we move a rectangular wire from a region with no magnetic field into a magnetic field, there will be an induced voltage because there is a change in flux ($B\,\Delta x$). However, if we move a wire/rod in the same situation, it will also have an induced voltage, but is that due to the change in flux ($B\,\Delta x$) or to charge separation?
The induced voltage depends on the change in flux according to Faraday's law $$\text{volt}=-N\frac{\mathrm{d}\phi}{\mathrm{d}t}=-N\mathop{\iint}_{S^{'}}\frac{\mathrm{d}\vec{B}}{\mathrm{d}t}\cdot\mathrm{d}\vec{S}=-N\mathop{\oint}_{C^{'}}\frac{\mathrm{d}\vec{A}}{\mathrm{d}t}\cdot\mathrm{d}\vec{l}$$ where $\vec{A}$ is the vector potential ($\vec{B}=\nabla\times\vec{A}$).
So as you can see that the flux or the math associated with it needs a defined closed surface for inspection, since the potential induced in your case is NOT electrostatic, but electrodynamic. Electrodynamic potentials are NOT absolute and need a defined closed circuit for their realization. Hence, in this case, the question doesn't make sense since you don't have a closed circuit and are asking for an analysis of electrodynamic potential.
What would make sense is to ask: if you have a multimeter attached to the rod and you continuously monitor the potential, would the potential change? The answer depends on the state of motion of the multimeter. If the multimeter is static, the rod moves, and while the rod moves the wires from the multimeter to the rod ends unfurl, then yes, you would see a voltage.
If, on the other hand, the multimeter is soldered to the rod with two other rods so that they form a static hoop, then you will not see any voltage since, in that case, you'd have $$\mathop{\oint}_{C^{'}}\vec{A}\cdot\mathrm{d}\vec{l}=\mathrm{const}$$ and the rate of change of that quantity with respect to time is zero.
Now, when you talk of the Lorentz force, then $\vec{F}=q\left(\vec{v}\times\vec{B}\right)$, and the perpendicularity of the velocity and the magnetic field creates a force on the electrons, which shift towards one side. This creates a charge segregation and hence a potential difference between the ends of the rod.
The subway train will indeed be hit by light, but it will be hard to see the lights through the window.
The problem in seeing the tunnel light is relativistic aberration, that the angle of the light is changed by your relative velocity. The formula is $$\cos(\theta_2)=\frac{\cos(\theta_1)+\beta}{1+\beta\cos(\theta_1)}$$ where $\beta =v/c$. $\theta_1$ is the incident light angle, $\theta_2$ the perceived angle (where 0 degrees is forward). As you speed up, most light will arrive from the front - the "searchlight effect".
If you have lights at regular intervals along the tunnel there will be some lights far behind you that get a perceived angle of $\pi/2$ just outside the window, so you will indeed see some lights. However, they will be far behind you so their intensity will be low: the near lights will be shining on the front of the train.
Plot of light angles and intensities for $\beta$ = 0.5 and 0.999999. The circles denote individual lights at coordinates $(x,1)$ seen from the origin, with area proportional to the intensity $1/r^2=1/(1+x^2)$. For $\beta=0.5$ the brightest light just outside the train is seen at a 60 degree angle, while for the faster train nearly all light comes from the front and will be hard to see from inside the train.
In addition, there will be redshifting/blueshifting making lights behind you redshifted, so the light apparently just outside may be impossible to see if it has a spectrum with little ultraviolet light. |
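A small numerical check of the aberration formula quoted above (a sketch only): for $\beta=0.5$, a lamp directly abreast of the window ($\theta_1 = 90^\circ$) is indeed perceived at $60^\circ$, while for $\beta=0.999999$ it is crowded towards the front.

import numpy as np

def aberrated_angle(theta1, beta):
    """Perceived angle theta2 for incident angle theta1 (0 = straight ahead)."""
    return np.arccos((np.cos(theta1) + beta) / (1.0 + beta * np.cos(theta1)))

for beta in (0.5, 0.999999):
    theta2 = aberrated_angle(np.pi / 2.0, beta)
    print(beta, np.degrees(theta2))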
Hello,I am an undergraduate who has taken basic linear algebra and ODE. As for physics, I have taken an online edX quantum mechanics course.I am looking at studying some of the necessary math and physics needed for QFT and particle physics. It looks like I need tensors and group theory...
Hello, I am newish in group theory so sorry if anything in the following is not entirely correct.In general, one can anticipate if a matrix element <i|O|j> is zero or not by seeing if O|j> shares any irreducible representation with |i>.I know how to reduce to IRs the former product but I...
Looking into the infinitesimal view of rotations from Lie, I noticed that the vector cross product can be written in terms of the generators of the rotation group SO(3). For example:$$\vec{\mathbf{A}} \times \vec{\mathbf{B}} = (A^T \cdot J_x \cdot B) \>\> \hat{i} + (A^T \cdot J_y \cdot B)...
1. Homework StatementI am trying to get the C-G Decomposition for 6 ⊗ 3.2. Homework EquationsNeglecting coefficients a tensor can be decomposed into a symmetric part and an antisymmetric part. For the 6 ⊗ 3 = (2,0) ⊗ (1,0) this is:Tij ⊗ Tk = Qijk = (Q{ij}k + Q{ji}k) + (Q[ij]k +...
Hi!I'm doing my master thesis in AdS/CFT and I've read several times that "Fields transforms in the adjoint representation" or "Fields transforms in the fundamental representation". I've had courses in Advanced mathematics (where I studied Group theory) and QFTs, but I don't understand (or...
1. Homework StatementLet ##G## be a group of order ##2p## with p a prime and odd number.a) We suppose ##G## as abelian. Show that ##G \simeq \mathbb{Z}/2p\mathbb{Z}##2. Homework Equations3. The Attempt at a SolutionIntuitively I see why but I would like some suggestion of what...
1. Homework Statement[G,G] is the commutator group.Let ##H\triangleleft G## such that ##H\cap [G,G]## = {e}. Show that ##H \subseteq Z(G)##.2. Homework Equations3. The Attempt at a SolutionIn the previous problem I showed that ##G## is abelian iif ##[G,G] = {e}##. I also showed that...
1. Homework StatementLet ##G## be a group. Let ##H \triangleleft G## and ##K \leq G## such that ##H\subseteq K##.a) Show that ##K\triangleleft G## iff ##K/H \triangleleft G/H##b) Suppose that ##K/H \triangleleft G/H##. Show that ##(G/H)/(K/H) \simeq G/K##2. Homework EquationsThe three...
It goes without saying that theoretical physics has over the years become overrun with countless distinct - yet sometimes curiously very similar - theories, in some cases even dozens of directly competing theories. Within the foundations things can get far worse once we start to run into...
1. Homework StatementLet m ≥ 3. Show that $$D_m \cong \mathbb{Z}_m \rtimes_{\varphi} \mathbb{Z}_2 $$where $$\varphi_{(1+2\mathbb{Z})}(1+m\mathbb{Z}) = (m-1+m\mathbb{Z})$$2. Homework EquationsI have seen most basic concepts of groups except group actions. Si ideally I should not use them...
I teach group theory for physicists, and I like to teach it following some papers. In general my students work with condensed matter, so I discuss group theory following these papers:[1] Group Theory and Normal Modes, American Journal of Physics 36, 529 (1968)[2] Nonsymmorphic Symmetries and...
1. Homework StatementI am translating so bear with me.We have two group homomorphisms:α : G → G'β : G' → GLet β(α(x)) = x ∀x ∈ GShow that1)β is a surjection2)α an injection3) ker(β) = ker(α ο β) (Here ο is the composition of functions.)2. Homework EquationsThis is from a...
I do have a fair amount of visual/geometric understanding of groups, but when I start solving problems I always wind up relying on my algebraic intuition, i.e. experience with forms of symbolic expression that arise from theorems, definitions, and brute symbolic manipulation. I even came up with...
I'm having a bit of an issue wrapping my head around the adjoint representation in group theory. I thought I understood the principle but I've got a practice problem which I can't even really begin to attempt. The question is this:My understanding of this question is that, given a...
1. Homework StatementDetermine ##\phi(R_{180})##, if ##\phi:D_n\to D_n## is an automorphism where ##n## is even so let ##n=2k##.The solutions manual showed that since the center of ##D_n## is ##\{R_0, R_{180}\}## and ##R_{180}## is not the identity then it can only be that...
1. Homework StatementDecide all abelian groups of order 675. Find an element of order 45 in each one of the groups, if it exists.2. Homework Equations /propositions/definitionsFundamental Theorem of Finite Abelian GroupsLagrange's Theorem and its corollaries (not sure if helpful for this...
1. Homework StatementThe SO(3) representation can be represented as ##3\times 3## matrices with the following form:$$J_1=\frac{1}{\sqrt{2}}\left(\matrix{0&1&0\\1&0&1\\ 0&1&0}\right) \ \ ; \ \ J_2=\frac{1}{\sqrt{2}}\left(\matrix{0&-i&0\\i&0&-i\\ 0&i&0}\right) \ \ ; \ \...
In Arnold's book Ordinary Differential Equations, 3rd edition, why does Arnold say Tg:M→M instead of Tg:G→S(M) for transformations Tfg=Tf Tg, Tg^-1=(Tg)^-1? Let G be a group and M a set. We say that an action of the group G on the set M is defined if to each element g of G there corresponds a...
I've been trying to understand representations of the Lorentz group. So as far as I understand, when an object is in an (m,n) representation, then it has two indices (let's say the object is ##\phi^{ij}##), where one index ##i## transforms as ##\exp(i(\theta_k-i\beta_k)A_k)## and the other index...
1. Homework StatementFor a left invariant vector field γ(t) = exp(tv). For a gauge transformation t -> t(xμ). Intuitively, what happens to the LIVF in the latter case? Is it just displaced to a different point in spacetime or something else?2. Homework Equations3. The Attempt at a...
Hello guys,In 90% of the papers I've read about diferent ways to achieve generalizations of the Proca action I've found there's a common condition that has to be satisfied, i.e: The number of degrees of freedom allowed to be propagated by the theory has to be three at most (two if the fields...
Hi allI have a shallow understanding of group theory but until now it was sufficient. I'm trying to generalize a problem, it's a Lagrangian with SU(N) symmetry but I changed some basic quantity that makes calculations hard by using a general SU(N) representation basis. Hopefully the details of...
Could you please help me to understand what is the difference between notions of «transformation» and «automorphism» (maybe it is more correct to talk about «inner automorphism»), if any? It looks like those two terms are used interchangeably.By «transformation» I mean mapping from some set...
1. Homework StatementAre these functions homomorphisms, determine the kernel and image, and identify the quotient group up to isomorphism?C^∗ is the group of non-zero complex numbers under multiplication, and C is the group of all complex numbers under addition.2. Homework Equationsφ1 ...
The group of moves for the 3x3x3 puzzle cube is the Rubik’s Cube group: https://en.wikipedia.org/wiki/Rubik%27s_Cube_group.What are the groups of moves for NxNxN puzzle cubes called in general? Is there even a standardized term?I've been trying to find literature on the groups for the...
1. Homework StatementThe dicyclic group of order 12 is generated by 2 generators x and y such that: ##y^2 = x^3, x^6 = e, y^{-1}xy =x^{-1} ## where the element of Dic 12 can be written in the form ##x^{k}y^{l}, 0 \leq x < 6, y = 0,1##. Write the product between two group elements in the form...
1. Homework StatementConsider the contractions of the 3D Euclidean symmetry while preserving the SO(2) subgroup. In the physics point of view, explain the resulting symmetries G(2) (Galilean symmetry group) and H(3) (Heisenberg-Weyl group for quantum mechanics) and give their Lie algebras...
The Galilean transformations are simple.x'=x-vty'=yz'=zt'=t.Then why is there so much jargon and complication involved in proving that Galilean transformations satisfy the four group properties (Closure, Associative, Identity, Inverse)? Why talk of 10 generators? Why talk of rotation as... |
In combinatorics there are quite many such disproven conjectures. The most famous of them are:
1) Tait conjecture:
Any 3-vertex connected planar cubic graph is Hamiltonian
The first counterexample found has 46 vertices. The "least" counterexample known has 38 vertices.
2) Tutte conjecture:
Any bipartite cubic graph is Hamiltonian
The first counterexample found has 96 vertices. The "least" counterexample known has 54 vertices.
3) Thom conjecture
If two finite undirected simple graphs have conjugate adjacency matrices over $\mathbb{Z}$, then they are isomorphic.
The least known counterexample pair is formed by two trees with 11 vertices.
4) Borsuk conjecture:
Every bounded subset $E$ of $\mathbb{R}^n$can be partitioned into $n+1$ sets, each of which has a smaller diameter, than $E$
In the first counterexample found $n = 1325$. In the "least" counterexample known $n = 64$.
5) Danzer-Gruenbaum conjecture:
If $A \subset \mathbb{R}^n$ and $\forall u, v, w \in A$ $(u - w, v - w) > 0,$ then $|A| \leq 2n - 1$
This statement is not true for any $n \geq 35$
6) The Boolean Pythagorean Triple Conjecture:
There exists $S \subset \mathbb{N}$, such that neither $S$, nor $\mathbb{N} \setminus S$ contain Pythagorean triples
This conjecture was disproved by M. Heule, O. Kullmann and V. Marek. They proved that there does exist an $S \subset \{n \in \mathbb{N}\mid n \leq k\}$ such that neither $S$ nor $\{n \in \mathbb{N}\mid n \leq k\} \setminus S$ contains a Pythagorean triple, for all $k \leq 7824$, but not for $k = 7825$.
7) Burnside conjecture:
Every finitely generated group with period n is finite
This statement is not true for any odd $n \geq 667$
8) Otto Shmidt conjecture:
If all proper subgroups of a group $G$ are isomorphic to $C_p$, where $p$ is a fixed prime number, then $G$ is finite.
Alexander Olshanskii proved that there are continuum many non-isomorphic counterexamples to this conjecture for any $p > 10^{75}$.
9) von Neumann conjecture
Any non-amenable group has a free subgroup of rank 2
The least known finitely presented counterexample has 3 generators and 9 relators
10) Word problem conjecture:
Word problem is solvable for any finitely generated group
The "least" counterexample known has 12 generators.
11) Leinster conjecture:
Any Leinster group has even order
The least counterexample known has order 355433039577.
12) Rotman conjecture:
Automorphism groups of all finite groups not isomorphic to $C_2$ have even order
The first counterexample found has order 78125. The least counterexample has order 2187. It is the automorphism group of a group with order 729.
13) Rose conjecture:
Any nontrivial complete finite group has even order
The least counterexample known has order 788953370457.
14) Hilton conjecture
Automorphism group of a non-abelian group is non-abelian
The least counterexample known has order 64.
15)Hughes conjecture:
Suppose $G$ is a finite group and $p$ is a prime number. Then $[G : \langle\{g \in G| g^p \neq e\}\rangle] \in \{1, p, |G|\}$
The least known counterexample has order 142108547152020037174224853515625.
16) Moreto conjecture:
Let $S$ be a finite simple group and $p$ the largest prime divisor of $|S|$. If $G$ is a finite group with the same number of elements of order $p$ as $S$ and $|G| = |S|$, then $G \cong S$
The first counterexample pair constructed is formed by groups of order 20160 (those groups are $A_8$ and $L_3(4)$)
17) This false statement is not a conjecture, but rather a popular mistake made by many people who have just started learning group theory:
All elements of the commutator subgroup of any finite group are commutators
The least counterexample has order 96.
If the numbers mentioned in this text do not impress you, please, do not feel disappointed: there are complex combinatorial objects "hidden" behind them. |
I'm given series $\sum_{n = 1}^{+\infty} \frac{(-1)^{n}}{(n+1)!}\left(1 + 2! + \cdots + n!\right)$ and I have to find whether it is convergent.
Testing for absolute convergence, we have $a_n = \frac{1}{(n+1)!} + \frac{2}{(n+1)!} + \cdots + \frac{(n-1)!}{(n+1)!} + \frac{n!}{(n+1)!}$, and since the last term is $\frac{n!}{(n+1)!} = \frac{1}{n+1}$, the series of absolute values diverges by comparison with the harmonic series; hence the series can only be conditionally convergent, which I will try to prove from the Leibniz criterion.
Now, I have to show that $a_n$ is monotonically decreasing and that $\lim a_n = 0$.
Writing $a_n$ as a quotient $\frac{c_n}{b_n} = \frac{1! + 2! + \cdots + n!}{(n+1)!}$, I can use the Stolz-Cesàro theorem ($\lim \frac{c_n}{b_n} = \lim\frac{c_{n+1} - c_n}{b_{n+1} - b_n}$) since $b_n$ is monotonically increasing and $\lim b_n = +\infty$. Then $$\lim \frac{c_n}{b_n} = \lim\frac{c_{n+1} - c_n}{b_{n+1} - b_n} = \lim\frac{(n+1)!}{(n+2)! - (n+1)!} = \lim \frac{1}{n+2}\frac{1}{1 - \frac{1}{n+2}} = 0.$$
But how to prove monotonicity? I've tried $\frac{a_{n+1}}{a_n}$ but it didn't get me anywhere. What are some ways to show monotonicity of sequences like $a_n$? |
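Before hunting for a proof, a quick exact computation with Python fractions suggests the sequence is non-increasing, with equality only at the very first step (since $a_1 = a_2 = 1/2$); this is evidence, not a proof:

from fractions import Fraction
from math import factorial

def a(n):
    """a_n = (1! + 2! + ... + n!) / (n+1)!, computed exactly."""
    return Fraction(sum(factorial(k) for k in range(1, n + 1)), factorial(n + 1))

vals = [a(n) for n in range(1, 21)]
print([float(v) for v in vals[:8]])
print(all(x >= y for x, y in zip(vals, vals[1:])))   # non-increasing over the tested range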
Recent developments of CRISPR-Cas9 based homing endonuclease gene drive systems for the suppression or replacement of mosquito populations have generated much interest in their use for control of mosquito-borne diseases (such as dengue, malaria, Chikungunya and Zika). This is because genetic control of pathogen transmission may complement or even substitute traditional vector-control interventions, which have had limited success in bringing the spread of these diseases to a halt.
Despite excitement for the use of gene drives for mosquito control, current modeling efforts have analyzed only a handful of these new approaches (usually studying just one per framework). Moreover, these models usually consider well-mixed populations with no explicit spatial dynamics. To this end, we are developing
MGDrivE (Mosquito Gene DRIVe Explorer), in cooperation with the 'UCI Malaria Elimination Initiative', as a flexible modeling framework to evaluate a variety of drive systems in spatial networks of mosquito populations. This framework provides a reliable testbed to evaluate and optimize the efficacy of gene drive mosquito releases. What separates MGDrivE from other models is the incorporation of mathematical and computational mechanisms to simulate a wide array of inheritance-based technologies within the same, coherent set of equations.
We do this by treating the population dynamics, genetic inheritance operations, and migration between habitats as separate processes coupled together through the use of mathematical tensor operations. This way we can conveniently swap inheritance patterns whilst still making use of the same set of population dynamics equations. This is a crucial advantage of our system, as it allows other research groups to test their ideas without developing new models and without the need to spend time adapting other frameworks to suit their needs.
MGDrivE is based on the idea that we can decouple the genotype inheritance process from the population dynamics equations. This allows the system to be treated and developed in three semi-independent modules that come together to form the system.
The original version of this model was based on work by Deredec et al. (2011) and Hancock & Godfray (2007), and adapted to accommodate CRISPR homing dynamics in a previous publication by our team (Marshall et al. 2017). As described, we extended this framework to be able to handle a variable number of genotypes, and migration across spatial scenarios. We did this by adapting the equations to work in a tensor-oriented manner, where each genotype can have different processes affecting their particular strain (death rates, mating fitness, sex-ratio bias, et cetera).
Before beginning the full description of the model we will define some of the conventions we followed for the notation of the written description of the system.
In the case of one-dimensional tensors, each slot represents a genotype of the population. For example, the male population is stored in the following way: \[ \overline{Am} = \left(\begin{array}{c} g_1 \\ g_2 \\ g_3 \\ \vdots \\ g_n \end{array}\right)_{i} \]
All the processes that affect mosquitoes in a genotype-specific way are defined and stored in this way within the framework.
There are two tensors of squared dimensionality in the model: the adult females matrix, and the genotype-specific viability mask. In the case of the former the rows represent the females' genotype, whilst the columns represent the genotype of the male they mated with: \[ \overline{\overline{Af}} = \left(\begin{array}{ccccc} g_{11} & g_{12} & g_{13} & \cdots & g_{1n}\\ g_{21} & g_{22} & g_{23} & \cdots & g_{2n}\\ g_{31} & g_{32} & g_{33} & \cdots & g_{3n}\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ g_{n1} & g_{n2} & g_{n3} & \cdots & g_{nn} \end{array}\right)_{ij} \]
The genotype-specific viability mask, on the other hand, stores the mothers' genotype in the rows, and the potential eggs' genotype in the columns of the matrix.
To model an arbitrary number of genotypes efficiently in the same mathematical framework we use a 3-dimensional array structure (cube) where each axis represents the following information:
The cube structure gives us the flexibility to apply tensor operations to the elements within our equations, so that we can calculate the stratified population dynamics rapidly; and within a readable, flexible computational framework. This becomes apparent when we define the equation we use for the computation of eggs laid at any given point in time:
\[ \overline{O(T_x)} = \sum_{j=1}^{n} \Bigg( \bigg( (\beta*\overline{s} * \overline{ \overline{Af_{[t-T_x]}}}) * \overline{\overline{\overline{Ih}}} \bigg) * \Lambda \Bigg)^{\top}_{ij} \]
In this equation, the matrix containing the number of mated adult females \((\overline{\overline{Af}})\) is multiplied element-wise with each one of the layers containing the eggs genotypes proportions expected from this cross \((\overline{\overline{\overline{Ih}}})\).
The resulting matrix is then multiplied by a binary 'viability mask' \((\Lambda)\) that filters out female-parent to offspring genetic combinations that are not viable due to biological impediments (such as cytoplasmic incompatibility).
The summation of the transposed resulting matrix returns us the total fraction of eggs resulting from all the male to female genotype crosses (\(\overline{O(T_x)}\)).
Note: For inheritance operations to be consistent within the framework the summation of each element in the z-axis (this is, the proportions of each one of the offspring's genotypes) must be equal to one.
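A toy numpy sketch of the $\overline{O(T_x)}$ contraction described above; the index conventions (female genotype $i$, male genotype $j$, offspring genotype $k$) are assumed from the text, and this is not the MGDrivE source code:

import numpy as np

n = 3                                    # number of genotypes (toy example)
rng = np.random.default_rng(1)

beta = 20.0                              # eggs laid per female per day
s    = np.ones(n)                        # genotype-specific fertility modifiers
Af   = rng.random((n, n))                # mated females: rows = female genotype, cols = male genotype
Ih   = rng.dirichlet(np.ones(n), size=(n, n))   # offspring genotype probabilities; sums to 1 over k
Lam  = np.ones((n, n))                   # viability mask (mother genotype x offspring genotype)

# O[k] = sum over crosses (i, j) of beta * s_i * Af_ij * Ih_ijk * Lambda_ik
O = np.einsum('i,ij,ijk,ik->k', beta * s, Af, Ih, Lam)
print(O)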
An inheritance cube is an array object that specifies inheritance probabilities (offspring genotype probabilities) stratified by male and female parent genotypes.
MGDrivE provides the following cubes to model different gene drive systems:
During the three aquatic stages, a density-independent mortality process takes place: \[ \theta_{st}=(1-\mu_{st})^{T_{st}} \] along with a density-dependent process that depends on the number of larvae in the environment: \[ F(L[t])=\Bigg(\frac{\alpha}{\alpha+\sum{\overline{L[t]}}}\Bigg)^{1/T_l} \] where \(\alpha\) represents the strength of the density-dependent process. This parameter is calculated with: \[ \alpha=\Bigg( \frac{\tfrac{1}{2} * \beta * \theta_e * Ad_{eq}}{R_m-1} \Bigg) * \Bigg( \frac{1-(\theta_l / R_m)}{1-(\theta_l / R_m)^{1/T_l}} \Bigg) \] in which \(\beta\) is the species' fertility in the absence of gene-drive, \(Ad_{eq}\) is the adult mosquito population equilibrium size, and \(R_{m}\) is the population growth in the absence of density-dependent mortality. This population growth is calculated from the average generation time (\(g\)), the adult mortality rate (\(\mu_{ad}\)), and the daily population growth rate (\(r_{m}\)): \[ g=T_{e}+T_{l}+T_{p}+\frac{1}{\mu_{ad}}, \qquad R_{m}=(r_{m})^{g} \]
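A minimal Python sketch of these bookkeeping formulas (the parameter values are illustrative guesses, not values recommended by the package):

def growth_and_alpha(beta, theta_e, theta_l, ad_eq, mu_ad, r_m, T_e, T_l, T_p):
    """Population growth rate R_m and density-dependence strength alpha,
    following the expressions above."""
    g = T_e + T_l + T_p + 1.0 / mu_ad            # average generation time
    R_m = r_m ** g                               # per-generation growth rate
    alpha = (0.5 * beta * theta_e * ad_eq / (R_m - 1.0)) * \
            ((1.0 - theta_l / R_m) / (1.0 - (theta_l / R_m) ** (1.0 / T_l)))
    return R_m, alpha

print(growth_and_alpha(beta=20.0, theta_e=0.9, theta_l=0.9, ad_eq=5000.0,
                       mu_ad=0.12, r_m=1.096, T_e=2, T_l=5, T_p=1))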
The computation of the larval stage in the population is crucial to the model because the density-dependent processes necessary for equilibrium trajectories to be calculated occur here. This calculation is performed with the following equation: \[ D(\theta_l,T_x) = \left\{ \begin{array}{ll} \theta_{l[0]}'=\theta_l & \quad i = 0 \\ \theta_{l[i+1]}' = \theta_{l[i]}' * F(\overline{L_{[t-i-T_x]}}) & \quad i \leq T_l \end{array} \right. \]
In addition to this, we need the larval mortality (\(\mu_{l}\)): \[ \mu_{l}=1-\Bigg( \frac{R_{m} * \mu_{ad}}{\tfrac{1}{2} * \beta * (1-\mu_{m})} \Bigg)^{\frac{1}{T_{e}+T_{l}+T_{p}}} \]
With these mortality processes, we are now able to calculate the larval population: \[ \overline{L_{[t]}}= \overline{L_{[t-1]}} * (1-\mu_{l}) * F(\overline{L_{[t-1]}}) + \overline{O(T_{e})} * \theta_{e} - \overline{O(T_{e}+T_{l})} * \theta_{e} * D(\theta_{l},0) \] where the first term accounts for larvae surviving from one day to the next; the second term accounts for the eggs that have hatched within the same period of time; and the last term computes the number of larvae that have transformed into pupae.
We are ultimately interested in calculating how many adults of each genotype exist at any given point in time. For this, we first calculate the number of eggs that are laid and survive to the adult stages with the equation: \[ \overline{E'}= \overline{O(T_{e}+T_{l}+T_{p})} * \bigg(\overline{\xi_{m}} * (\theta_{e} * \theta_{p}) * (1-\mu_{ad}) * D(\theta_{l},T_{p}) \bigg) \]
With this information we can calculate the current number of male adults in the population by computing the following equation: \[ \overline{Am_{[t]}}= \overline{Am_{[t-1]}} * (1-\mu_{ad}) * \overline{\omega_{m}} + (1-\overline{\phi}) * \overline{E'} + \overline{\nu m_{[t-1]}} \] in which the first term represents the number of males surviving from one day to the next; the second one, the fraction of offspring that survive to adulthood (\(\overline{E'}\)) and emerge as males (\(1-\phi\)); the last one is used to add males into the population as part of gene-drive release campaigns.
Female adult populations are calculated in a similar way: \[ \overline{\overline{Af_{[t]}}}= \overline{\overline{Af_{[t-1]}}} * (1-\mu_{ad}) * \overline{\omega_{f}} + \bigg( \overline{\phi} * \overline{E'}+\overline{\nu f_{[t-1]}}\bigg)^{\top} * \bigg( \frac{\overline{\eta}*\overline{Am_{[t-1]}}}{\sum{\overline{Am_{[t-1]}}}} \bigg) \] where we first compute the surviving female adults from one day to the next; and then we calculate the mating composition of the female fraction emerging from the pupal stage. To do this, we obtain the fraction of eggs that survive to adulthood (\(\overline{E'}\)) and emerge as females (\(\phi\)), and we then add the new females added as a result of gene-drive releases (\(\overline{\nu f_{[t-1]}}\)). After doing this, we calculate the proportion of males that are allocated to each female genotype, taking into account their respective mating fitnesses (\(\overline{\eta}\)), so that we can introduce the new adult females into the population pool.
As was briefly mentioned before, we include the option to release male and/or female individuals into the populations. Another important thing to emphasize is that we allow flexible release sizes and schedules. Our model handles releases internally as lists of population compositions, so it is possible to have releases performed at irregular intervals and with different numbers of mosquito genetic compositions, as long as no new genotypes are introduced (which have not been previously defined in the inheritance cube). \[ \overline{\nu} = \bigg\{ \left(\begin{array}{c} g_1 \\ g_2 \\ g_3 \\ \vdots \\ g_n \end{array}\right)_{t=1} , \left(\begin{array}{c} g_1 \\ g_2 \\ g_3 \\ \vdots \\ g_n \end{array}\right)_{t=2} , \cdots , \left(\begin{array}{c} g_1 \\ g_2 \\ g_3 \\ \vdots \\ g_n \end{array}\right)_{t=x} \bigg\} \]
So far, however, we have not described the way in which the effects of these gene drives are included in the mosquito population dynamics. This is done through the use of various modifiers included in the equations:
To simulate migration within our framework we consider patches (or nodes) of fully-mixed populations in a network structure. This allows us to handle mosquito movement across spatially-distributed populations with a transitions matrix, which is calculated with the tensor outer product of the genotype population tensors and the transitions matrix of the network as follows: \[ \overline{Am_{(t)}^{i}}= \sum{\overline{A_{m}^j} \otimes \overline{\overline{\tau m_{[t-1]}}}} \qquad \overline{\overline{Af_{(t)}^{i}}}= \sum{\overline{\overline{A_{f}^j}} \otimes \overline{\overline{\tau f_{[t-1]}}}} \]
In these equations the new population of patch \(i\) is calculated by summing the migrating mosquitoes from all the \(j\) patches across the network defined by the transitions matrix \(\tau\), which stores the mosquito migration probabilities from patch to patch. It is worth noting that the migration probability matrices can be different for males and females, and that there is no inherent need for them to be static (the migration probabilities may vary over time to accommodate wind changes due to seasonality).
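A toy numpy sketch of one migration step as described above, assuming a row-stochastic transition matrix $\tau$ (source patch in rows, destination patch in columns); this is not the package implementation:

import numpy as np

p, n = 4, 3                              # patches, genotypes
rng = np.random.default_rng(2)

Am  = rng.random((p, n))                 # adult males: one genotype vector per patch
tau = rng.dirichlet(np.ones(p), size=p)  # row-stochastic migration matrix (from -> to)

# new population of patch i: sum over source patches j of tau[j, i] * Am[j]
Am_new = tau.T @ Am
print(Am_new.sum(), Am.sum())            # total population is conserved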
MGDrivE allows all inheritance, migration, and population dynamics processes to be simulated stochastically; this accounts for the inherent probabilistic nature of the processes governing the interactions and life cycles of organisms. In the next section, we will describe all the stochastic processes that can be activated in the program. It should be noted that all of these can be turned on and off independently from one another as required by the researcher.
Oviposition
Stochastic egg laying by female/male pairs is separated into two steps: calculating the number of eggs laid by the females and then distributing laid eggs according to their genotypes. The number of eggs laid follows a Poisson distribution conditioned on the number of female/male pairs and the fertility of each female.
\[ Poisson( \lambda = numFemales*Fertility) \]
Multinomial sampling, conditioned on the number of offspring and the relative viability of each genotype, determines the genotypes of the offspring.
\[ Multinomial \left(numOffspring; p_1, p_2, \dots, p_n \right)=\frac{numOffspring!}{n_1!\,n_2!\cdots n_n!}\,p_1^{n_1}p_2^{n_2}\dots p_n^{n_n} \]
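A minimal numpy sketch of this stochastic oviposition step (Poisson total, then a multinomial split over offspring genotypes); the numbers are illustrative:

import numpy as np

rng = np.random.default_rng(3)

num_females = 500
fertility   = 20.0
p_genotype  = np.array([0.25, 0.5, 0.25])    # offspring genotype probabilities for this cross

num_offspring = rng.poisson(num_females * fertility)         # total eggs laid
by_genotype   = rng.multinomial(num_offspring, p_genotype)   # split across genotypes
print(num_offspring, by_genotype)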
Sex Determination
Sex of the offspring is determined by multinomial sampling. This is conditioned on the number of eggs that live to hatching and a probability of being female, allowing the user to design systems that skew the sex ratio of the offspring through reproductive mechanisms.
\[ Multinomial(numHatchingEggs, p_{female}, 1-p_{female}) \]
Mating
Stochastic mating is determined by multinomial sampling conditioned on the number of males and their fitness. It is assumed that females mate only once in their life, therefore each female will sample from the available males and be done, while the males are free to potentially mate with multiple females. The males' ability to mate is modulated with a fitness term, thereby allowing some genotypes to be less fit than others (as seen often with lab releases).
\[ Multinomial(numFemales, p_1f_1, p_2f_2, \dots p_nf_n) \]
Other Stochastic Processes
All remaining stochastic processes (larval survival, hatching, pupating, surviving to adulthood) are determined by multinomial sampling conditioned on factors affecting the current life stage. These factors are determined empirically from mosquito population data.
Migration
A variance parameter controls stochastic movement (it is not used in the diffusion model of migration). It sets the concentration of probability on the Dirichlet simplex: small values lead to high variance and large values lead to low variance.
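A sketch of how such a Dirichlet concentration parameter can randomize one patch's movement probabilities (illustrative, not MGDrivE code):

```python
import numpy as np

rng = np.random.default_rng(2)

tau_row = np.array([0.90, 0.05, 0.05])  # mean movement probabilities from one patch (assumed)
concentration = 50.0                    # large -> low variance, small -> high variance
n_movers = 1000

p_realized = rng.dirichlet(concentration * tau_row)   # noisy movement probabilities
moved = rng.multinomial(n_movers, p_realized)         # realized movement counts
print(p_realized, moved)
```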
I simply want to calculate the bulk modulus of water at 50C and increasing pressures. I think I am correctly calculating the new specific volume from the original conditions at (25C and 1atm) to 50C and higher pressures. I am rightly getting a decrease in specific volume with increasing pressure at constant temperature (Column7). Here is the spreadsheet:
${V}^{'}={V}_{o}e^{{\beta}(T-25)-\kappa\Delta P}$
where:
${V}^{'}$ is column 7
${V}_{o}$ is column 1, the specific volume of water at 1 atm and 25C
$T$ is in Celsius
$P$ is in atm.
I used the above cross plot to graphically solve the slope $(\frac{\partial v} {\partial P})_{T} $ and input it into Column 8:
Then to calculate the new compressibility at 50C (${\kappa}$) Column 9:
${\kappa}=-\frac{1} {V}(\frac{\partial v} {\partial P})_{T} $
which gives me the new compressibility (Column 9). Then I just take the reciprocal and convert the units to GPa.
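For reference, here is a sketch of the computation chain described above in code, with placeholder constants standing in for the spreadsheet columns (the actual spreadsheet values are not reproduced here):

```python
import numpy as np

V0 = 1.0030e-3        # m^3/kg, specific volume at 25 C, 1 atm (assumed)
beta = 4.5e-4         # 1/C, thermal expansivity (assumed constant)
kappa0 = 4.4e-5       # 1/atm, isothermal compressibility (assumed constant)
T = 50.0              # C
P = np.linspace(1.0, 500.0, 50)   # atm

V = V0 * np.exp(beta * (T - 25.0) - kappa0 * (P - 1.0))   # Column 7 analogue

dVdP = np.gradient(V, P)            # numerical (dV/dP)_T, Column 8 analogue
kappa = -dVdP / V                   # Column 9 analogue, 1/atm
K_GPa = (1.0 / kappa) * 101325e-9   # bulk modulus, atm -> GPa

print(K_GPa[:5])
```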
Oops, bulk modulus (Column 10) should be increasing with pressure at constant temperature, not decreasing. I know that, since I am dividing by an ever-decreasing specific volume as pressure increases, I will get a larger compressibility (Column 9) and a decreasing bulk modulus (Column 10). But everyone knows increasing pressure should have the opposite effect. Where did I go wrong?
The argument for the first question goes as follows:
Consider the Pauli-Lubanski vector $ W_{\mu} = \epsilon_{\mu\nu\rho\sigma}P^{\nu}M^{\rho\sigma}$. Where $P^{\mu}$ are the momenta and $M^{\mu\nu}$ are the Lorentz generators. (The norm of this vector is a Poincare group casimir but this fact will not be needed for the argument.)
By symmetry considerations we have $W_{\mu} P^{\mu} = 0$. Now, in the case of a massless particle, a vector orthogonal to a light-like vector must be proportional to it (easy exercise). Thus $W^{\mu} = h P^{\mu}$, ($h = const.$). Now, the zero component of the Pauli-Lubanski vector is given by:
$ W_{0} = \epsilon_{0\nu\rho\sigma}P^{\nu}M^{\rho\sigma} = \epsilon_{abc}P^{a}M^{bc} = \mathbf{P}\cdot\mathbf{J}$, (where the summation after the second equality is on the spatial indices only, and $\mathbf{J}$ are the rotation generators).
Therefore the proportionality constant $h = \frac{W^{0}}{P^{0}}= \frac{\mathbf{P}\cdot\mathbf{J}}{|\mathbf{P}|}$ is the helicity.
Now, on the quantum level, if we rotate by an angle of $2 \pi$ around the momentum axis, the wave function acquires a phase of $\exp\left(2 \pi i\,\frac{\mathbf{P}}{|\mathbf{P}|}\cdot\mathbf{J}\right) = \exp(2 \pi i h)$. This factor should be $\pm 1$ according to the particle statistics, thus $h$ must be a half integer.
As for the second question, a very powerful method to construct the gluon amplitudes is the twistor approach. Please see the following article by V.P. Nair for a clear exposition.
Update:
This update refers to the questions asked by user6818 in the comments:
For simplicity I'll consider the case of a photon and not gluons.
The strategy of the solution is based on the explicit construction of the angular momentum and spin of a free photon field (which depend on the polarization vectors) and showing that the above relations are satisfied for the photon field. The photon momentum and the angular momentum densities can be obtained via the Noether theorem from the photon Lagrangian. Alternatively, it is well known that the photon linear momentum is given by the Poynting vector (proportional to) $\vec{E}\times\vec{B}$, and it is not difficult to convince oneself that the total angular momentum density is (proportional to) $\vec{x}\times (\vec{E}\times\vec{B})$.
Now, the total angular momentum can be decomposed into angular and spin angular momenta (please see K.T. Hecht: quantum mechanics (page 584 equation 16))
$\vec{J} = \int d^3x \,\vec{x}\times (\vec{E}\times\vec{B}) =\int d^3x \left(\vec{E}\times\vec{A} + \sum_{j=1}^3 E_j\, \vec{x} \times \vec{\nabla} A_j \right)$
The first term on the right hand side can be interpreted as the spin and the second as the orbital angular momentum as it is proportional to the position.
Now, neither the spin nor the orbital angular momentum densities are gauge invariant (only their sum is). But one can argue that the total orbital angular momentum is zero because the position averages to zero, thus the total spin:
$ \vec{S} =\int d^3x (\vec{E}\times\vec{A})$
is gauge invariant:
Now, we can observe that in canonical quantization, $[A_j, E_k] = i \delta_{jk}$, we get $[S_j, S_k] = 2i \epsilon_{jkl} S_l$, which are the angular momentum commutation relations apart from the factor 2.
Now, by substituting the plane wave solution:
$\vec{A} = \sum_{k,\,m=1,2} a_{km} \vec{\epsilon}_m(k) \exp(i(\vec{k}\cdot\vec{x}-|k|t)) + h.c.$
(The condition $\vec{\epsilon}_m(k)\cdot\vec{k} = 0$ is just a consequence of the vanishing of the sources.)
We obtain:
$\vec{S} = \sum_{k,\,m=1,2}(-1)^{m+1} a^\dagger_{km}a_{km} \hat{k} = \sum_{k}(n_1-n_2)\hat{k}$
(where $n_1$, $n_2$ are the numbers of right and left circularly polarized photons). Thus for a single free photon, the total spin, and thus the total angular momentum, is aligned along or opposite to the momentum, which is the same result stated in the first part of the answer.
Secondly, the photon total spin operators exist and transform (up to a factor of two) as spin 1/2 angular momentum operators. |
As mentioned in NotAstronaut's answer, objects smaller than 25 meters will typically burn up in the atmosphere. One can very easily see why this should be the case using Newton's impact depth formula. This is based on approximating the problem by assuming that the matter in the path of the object is being pushed at the same velocity as the object, so as soon as the object has swept out a path containing the same mass as its own mass, it will have lost all of its initial momentum. All its kinetic energy will then have dissipated there, so if this happens in the atmosphere it will have burned up before reaching the ground.
This is, of course, a gross oversimplification, but it will yield correct order of magnitude estimates. We can then calculate the critical diameter as follows. The mass of the atmosphere per unit area equals the atmospheric pressure at sea level divided by the gravitational acceleration, so this is about $10^4\text{ kg/m}^2$. If an asteroid of diameter $D$ and density $\rho$ is to penetrate the atmosphere, its mass of $\frac{\pi}{6}\rho D^3$ should be larger than the mass of the atmosphere it will encounter on its way to the ground, which is $\frac{5\pi}{2}\times 10^3\, D^2\text{ kg/m}^2$. Therefore:
$$ D > \frac{1.5\times 10^4\ \text{kg/m}^2}{\rho}$$
If we take the density $\rho$ to be that of a typical rock, $3\times 10^3 \text{ kg}/\text{m}^3$, then we see that $D>5\text{ m}$, which is a reasonably close order-of-magnitude estimate of the correct answer.
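The estimate is short enough to check in a couple of lines (values as quoted above):

```python
# Order-of-magnitude estimate of the critical diameter.
atm_column_mass = 1.0e4      # kg/m^2, atmospheric mass per unit area
rho_rock = 3.0e3             # kg/m^3, typical rock density

# (pi/6) * rho * D**3 > (pi/4) * atm_column_mass * D**2  =>  D > 1.5 * column / rho
D_critical = 1.5 * atm_column_mass / rho_rock
print(D_critical)            # ~5 m
```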
Find all real numbers $a_1, a_2, a_3, b_1, b_2, b_3$ such that for every $i\in \lbrace 1, 2, 3 \rbrace$ numbers $a_{i+1}, b_{i+1}$ are distinct roots of equation $x^2+a_ix+b_i=0$ (suppose $a_4=a_1$ and $b_4=b_1$).
There are many ways to do it but I've really wanted to finish the following idea:
From Vieta's formulas we get:
\begin{align} \begin{cases} a_1+b_1=-a_3 \ \ \ \ \ \ \ \ (a) \\a_2+b_2=-a_1\ \ \ \ \ \ \ \ (b)\\a_3+b_3=-a_2\ \ \ \ \ \ \ \ (c)\\a_1b_1=b_3\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (d)\\a_2b_2=b_1\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (e)\\a_3b_3=b_2\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (f)\end{cases} \end{align} First we notice that each $b_i$ is nonzero. Indeed, suppose $b_1=0$. Then from (d) and (f) we deduce that $b_3=0$ and $b_2=0$, so from (a), (b), (c) we get $a_1=-a_3=-(-a_2)=-(-(-a_1))=-a_1$, hence $a_1=0$. But then all the $a_i$ and $b_i$ vanish, so $x^2+a_1x+b_1=x^2$ has the double root $0$, contradicting the requirement that $a_2$ and $b_2$ be distinct roots. This is impossible, so each $b_i\neq 0$.
Now, from (a), (b), (c), (d), (e), (f) we obtain: \begin{align} \begin{cases} a_1+b_1-a_1b_1=-a_3-b_3 \ \ \ \ \ \ \ \ \\a_2+b_2-a_2b_2=-a_1-b_1\ \ \ \ \ \ \ \ \\a_3+b_3-a_3b_3=-a_2-b_2\end{cases}, \end{align} so: \begin{align} \begin{cases} (b_1-1)(a_1-1)=1-a_2 \ \ \ \ \ \ \ \ \\(b_2-1)(a_2-1)=1-a_3\ \ \ \ \ \ \ \ \\(b_3-1)(a_3-1)=1-a_1\end{cases}. \end{align} Therefore: \begin{align*} (b_1-1)(b_2-1)(b_3-1)(a_1-1)(a_2-1)(a_3-1)=(1-a_1)(1-a_2)(1-a_3), \end{align*} which implies: \begin{align*} \bigl((a_1-1)(a_2-1)(a_3-1)\bigr)\bigl((b_1-1)(b_2-1)(b_3-1)+1\bigr)=0. \end{align*}
I got stuck here. Is it possible to prove that in this case $b_i=0$ is the only solution to equation $(b_1-1)(b_2-1)(b_3-1)=-1$ or maybe get contradiction in some other way? If so, we can assume that $a_1=1$ and from here we can easily show that also $a_2=a_3=1$, so $b_1=b_2=b_3=-2$.
Since $(b_1-1)(b_2-1)(b_3-1)>-1$ for every $b_i>0$ and $(b_1-1)(b_2-1)(b_3-1)<-1$ for every $b_i<0$, it suffices to prove that the signs of $b_1, b_2, b_3$ can't be different but I don't know how to do it. I also found out that $(b_1+1)^2+(b_2+1)^2+(b_3+1)^2=3$, so $b_i\in [-\sqrt{3}-1, \sqrt{3}-1]$ but I don't know if we can use it somehow. |
I am reading the classical article of A. Salomaa where he gives two axiom systems for regular sets and proofs consistency and completeness.
As I have understood it, an axiomatic system in some logic (let's suppose first-order predicate logic) consists of axioms formulated in the language of the logic, i.e. well-formed formulas together with primitive notions (constant, predicate or function symbols). And a (set-theoretical) model is an interpretation for this.
For example, consider the theory of groups. The primitive notions are groups, multiplication, inversion and identity, mostly written as $(G, \cdot, ^{-1}, 1)$, and the axioms would be \begin{align*} & \forall x,y,z : (xy)z = x(yz) \\ & \forall x \exists y : xy = 1 \\ & \forall x : x1 = x \land 1x = x. \end{align*} The existence of certain groups shows its consistency, and as there are models in which, for example, the sentence $\forall x \forall y : xy = yx$ is true and others in which it is false, it is not complete. But essential here is that when talking about the theory we have just the axioms in mind, without referring to any actual realisation/model.
Now to come back to Salomaa's paper: in his system $F_1$ he lists $11$ axioms. It is easy to see that regular expressions (defined as terms over some alphabet) are a model for these axioms, but besides that there might be other models. When dealing with questions about this axiom system in general we cannot argue with one specific model, can we?
To be more specific, in Lemma 4 of his paper he shows that every regular expression has an equational characterisation (i.e. a set of equations this expression fulfills), and this is essential for the completeness proof. And the proof goes by induction over the construction of regular expressions, so it works just for this specific model. But in fact he must show that everything (not just regular expressions) obeying the axioms has such an equational characterisation, so he must argue more generally than by using the specific model of regular expressions?
Am I right? Or why does this work out... or am I confusing something here? In what sense do regular expressions go into the axiom system so that we can use this model in proving statements about the axiom system (I guess this is not the only model, is it?).
Interpolation and optimal hitting for complete minimal surfaces with finite total curvature
Abstract
We prove that, given a compact Riemann surface \(\Sigma \) and disjoint finite sets \(\varnothing \ne E\subset \Sigma \) and \(\Lambda \subset \Sigma \), every map \(\Lambda \rightarrow \mathbb {R}^3\) extends to a complete conformal minimal immersion \(\Sigma \setminus E\rightarrow \mathbb {R}^3\) with finite total curvature. This result opens the door to study optimal hitting problems in the framework of complete minimal surfaces in \(\mathbb {R}^3\) with finite total curvature. To this respect we provide, for each integer \(r\ge 1\), a set \(A\subset \mathbb {R}^3\) consisting of \(12r+3\) points in an affine plane such that if \(A\) is contained in a complete nonflat orientable immersed minimal surface \(X:M\rightarrow \mathbb {R}^3\), then the absolute value of the total curvature of \(X\) is greater than \(4\pi r\). In order to prove this result we obtain an upper bound for the number of intersections of a complete immersed minimal surface of finite total curvature in \(\mathbb {R}^3\) with a straight line not contained in it, in terms of the total curvature and the Euler characteristic of the surface.
Mathematics Subject Classification: 53A10 · 52C42 · 30D30 · 32E30
Acknowledgements
The authors were partially supported by the State Research Agency (SRA) and European Regional Development Fund (ERDF) via the Grants Nos. MTM2014-52368-P and MTM2017-89677-P, MICINN, Spain. They wish to thank an anonymous referee for valuable suggestions which led to an improvement of the exposition.
I have to determine whether this function is differentiable.
$$f(x,y)= \begin{cases} \frac{\cos x-\cos y}{x-y} \iff x \neq y \\-\sin x \iff x=y \end{cases}$$
If $x \neq y$ it is continuous, but I want to see if it is continuous at $x=y$ too.
i can rewrite f as $$ f(x,y)= \begin{cases} \frac{g(x)-g(y)}{x-y} \iff x \neq y \\ g'(x)=g'(y) \iff x=y \end{cases}$$
and see that $\lim_{(x,y) \to (x_0,x_0)} f(x,y)=g'(x_0)$. Thus, it is continuous. Also, the partial derivatives exist: $$f_x(x,y)=\begin{cases} \frac{-\sin x(x-y)-\cos x+\cos y}{(x-y)^2} \\ -\cos(x) \end{cases}$$ $$f_y(x,y)=\begin{cases} \frac{\sin y(x-y)+\cos x-\cos y}{(x-y)^2} \\ 0 \end{cases}$$ If I proved that they are continuous too, then by the theorem of the total differential the function would be differentiable. Still, I'm not sure this is the right way of reasoning.
This is the first entry in what will become an ongoing series on regression analysis and modeling. In this tutorial, we will start with the general definition or topology of a regression model, and then use NumXL program to construct a preliminary model. Next, we will closely examine the different output elements in an attempt to develop a solid understanding of regression, which will pave the way to a more advanced treatment in future issues.
In this tutorial, we will use a sample data set gathered from 20 different sales persons. The regression model attempts to explain and predict a sales person’s weekly sales (dependent variable) using two explanatory variables: Intelligence (IQ) and extroversion.
Data Preparation
First, let’s organize our input data. Although not necessary, it is customary to place all independent variables (X’s) on the left, where each column represents a single variable. In the right-most column, we place the response or the dependent variable values.
In this example, we have 20 observations and two independent (explanatory) variables. The amount of weekly sales is the response or dependent variable.
Process
Now we are ready to conduct our regression analysis. First, select an empty cell in your worksheet where you wish the output to be generated, then locate and click on the regression icon in the NumXL tab (or toolbar).
Now the Regression Wizard will appear.
Select the cells range for the response/dependent variable values (i.e. weekly sales). Select the cells range for the explanatory (independent) variables values.
Notes: The cells range includes (optional) the heading (Label) cell, which would be used in the output tables where it references those variables. The explanatory variables (i.e. X) are already grouped by columns (each column represents a variable), so we don’t need to change that. Leave the “Variable Mask” field blank for now. We will revisit this field in later entries. By default, the output cells range is set to the current selected cell in your worksheet.
Finally, once we select the X and Y cells range, the “options,” “Forecast” and “Missing Values” tabs will become available (enabled).
Next, select the “Options” tab.
Initially, the tab is set to the following values:
The regression intercept/constant is left blank. This indicates that the regression intercept will be estimated by the regression. To set the intercept to a fixed value (e.g. zero (0)), enter it there.
The significance level (aka $\alpha$) is set to 5%.
In the output section, the most common regression analysis is selected.
For auto-modeling, let's leave it unchecked. We will discuss this functionality in a later issue.
Now, click on the "Missing Values" tab.
In this tab, you can select the approach to handle missing values in the data set (X and Y). By default, any missing value found in X or in Y in any observation would exclude the observation from the analysis.
This treatment is a good approach for our analysis, so let’s leave it unchanged.
Now, click “Ok” to generate the output tables.
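NumXL produces the output tables inside Excel. If you would like to cross-check the numbers in code, a rough equivalent can be fitted with Python's statsmodels; the sketch below uses made-up data and column names standing in for the 20-observation sample, so only the structure of the output matches the tutorial.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Placeholder data standing in for the 20-salesperson sample.
rng = np.random.default_rng(3)
df = pd.DataFrame({
    "IQ": rng.normal(100, 10, 20),
    "Extroversion": rng.normal(50, 8, 20),
})
df["WeeklySales"] = 200 + 30 * df["Extroversion"] + rng.normal(0, 300, 20)

X = sm.add_constant(df[["IQ", "Extroversion"]])   # intercept estimated, as in the wizard default
model = sm.OLS(df["WeeklySales"], X).fit()
print(model.summary())   # R^2, adjusted R^2, AIC/BIC, F-stat, per-coefficient t-tests, ...
```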
Analysis
Let’s now examine the different output tables more closely.
1. Regression Statistics
In this table, a number of summary statistics for the goodness-of-fit of the regression model, given the sample, is displayed.
The coefficient of determination (R square) describes the ratio of variation in Y explained by the regression.
The adjusted R-square is an alteration of R square that takes into account the number of explanatory variables.
The standard error ($\sigma$) is the regression error. In other words, the error in the forecast has a standard deviation of around \$332.
The log-likelihood function (LLF), Akaike information criterion (AIC), and Schwarz/Bayesian information criterion (SBIC) are different probabilistic measures for the goodness of fit.
Finally, "Observations" is the number of non-missing observations used in the analysis.
2. ANOVA
Before we can seriously consider the regression model, we must answer the following question:
“Is the regression model statistically significant or a statistical data anomaly?”
The regression model we have hypothesized is:$$Y_i=\hat Y_i+e_i =\alpha + \beta_1\times X_{1,i}+\beta_2\times X_{2,i}+e_i$$ $$e_i\sim \textrm{i.i.d}\sim N(0,\sigma^2)$$
Where
$\hat Y_i$ is the estimated value for the i-th observation. $e_i$ is the error term for the i-th observation; $e_i$ is assumed to be independent and identically distributed (Gaussian). $\sigma^2$ is the regression variance (standard error squared). $\beta_1 ,\beta_2$ are the regression coefficients. $\alpha$ is the intercept or the constant of the regression.
$$H_o:\beta_1=\beta_2=0$$ $$H_1:\exists\beta_k\neq 0$$ $$1\leqslant k\leqslant 2$$
Alternatively, the question can be stated as follows:
The analysis of variance (ANOVA) table answers this question.
In the first row of the table (i.e. "Regression"), we compute the test score (F-Stat) and P-Value, then compare them against the significance level ($\alpha$). In our case, the regression model is statistically valid, and it does explain some of the variation in values of the dependent variable (weekly sales).
The remaining calculations in the table simply help us get to this point. To be complete, we describe their computation below, but you can skip ahead to the next table.
$\textrm{df}$ is the degrees of freedom (for regression it is the number of explanatory variables $p$; for the total it is the number of non-missing observations minus one, $N-1$; and for residuals it is the difference between the two, $N-p-1$).
Sum of squares (SS): $$SSR=\sum_{i=1}^N(\hat Y_i - \bar Y)^2 \qquad SST=\sum_{i=1}^N(Y_i - \bar Y)^2 \qquad SSE=\sum_{i=1}^N(Y_i - \hat Y_i)^2$$
Mean square (MS): $$MSR=\frac{SSR}{p} \qquad MSE=\frac{SSE}{N-p-1}$$
Test statistic: $$F=\frac{MSR}{MSE}\sim F_{p,N-p-1}(.)$$
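For readers following along in code, a minimal sketch of these quantities (the function name and inputs are illustrative; `y` holds the observed values, `y_hat` the fitted values, and `p` the number of explanatory variables):

```python
import numpy as np
from scipy import stats

def anova_table(y, y_hat, p):
    n = len(y)
    ssr = np.sum((y_hat - y.mean()) ** 2)
    sse = np.sum((y - y_hat) ** 2)
    sst = np.sum((y - y.mean()) ** 2)           # = ssr + sse
    msr, mse = ssr / p, sse / (n - p - 1)
    f_stat = msr / mse
    p_value = stats.f.sf(f_stat, p, n - p - 1)  # survival function = 1 - CDF
    return f_stat, p_value

# e.g. continuing from the earlier OLS sketch:
# f, pv = anova_table(df["WeeklySales"].to_numpy(), model.fittedvalues.to_numpy(), 2)
```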
3. Residuals Diagnosis Table
Once we confirm that the regression model explains some of the variation in the values of the response variable (weekly sales), we can examine the residuals to make sure that the underlying model's assumptions are met. $$Y_i=\hat Y_i+e_i =\alpha + \beta_1\times X_{1,i}+\beta_2\times X_{2,i}+e_i$$ $$e_i\sim \textrm{i.i.d}\sim N(0,\sigma^2)$$
Using the standardized residuals ($\frac{e_i}{\sigma_i}$), we perform a series of statistical tests on the mean, variance, skew, excess kurtosis and, finally, the normality assumption.
In this example, the standardized residuals pass the tests with 95% confidence.
Note: the standardized (aka "studentized") residuals are computed using the prediction error ($S_{\textrm{pred}}$) for each observation. $S_{\textrm{pred}}$ takes into account the errors in the values of the regression coefficients, in addition to the general regression error (RMSE or $\sigma$).
4. Regression Coefficients table
Once we establish that the regression model is significant, we can look closer at the regression coefficients.
Each coefficient (including the intercept) is shown on a separate row, and we compute the following statistics:
The value (i.e. $\alpha,\beta_1,\cdots$).
The standard error in the coefficient value.
The test score (T-stat) for the following hypothesis: $$H_o:\beta_k=0$$ $$H_1:\beta_k \neq 0$$
The P-Values of the test statistics (using Student's t-distribution).
The upper and lower limits of the confidence interval for the coefficient value.
A reject/accept decision for the significance of the coefficient value.
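These per-coefficient statistics can be reproduced from the estimate and its standard error alone; the sketch below uses placeholder numbers, not the tutorial's output:

```python
from scipy import stats

beta_hat, se, n, p, alpha = 131.0, 20.0, 20, 2, 0.05   # placeholder values
dof = n - p - 1

t_stat = beta_hat / se                          # test of H0: beta = 0
p_value = 2 * stats.t.sf(abs(t_stat), dof)      # two-sided P-value
t_crit = stats.t.ppf(1 - alpha / 2, dof)
ci = (beta_hat - t_crit * se, beta_hat + t_crit * se)
significant = p_value < alpha
print(t_stat, p_value, ci, significant)
```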
In our example, only the “extroversion” variable is found significant while the intercept and the “Intelligence” are not found significant.
Conclusion
In this example, we found that the regression model is statistically significant in explaining the variation in the values of the weekly sales variable and that it satisfies the model's assumptions, but the value of one or more regression coefficients is not significantly different from zero.
What do we do now?
There may be a number of reasons why this is the case, including possible multicollinearity between the variables or simply that one variable should not be included in the model. As the number of explanatory variables increases, answering such questions gets more involved, and we need further analysis.
We will cover this particular issue in a separate entry of our series. |
You can at least sketch an answer to the "multiply $t,t^2,\dots,t^{m-1}$ where $m$ is the multiplicity" question by considering a family of IVPs where two roots are approaching one another. Consider the IVPs
$$y''-(1+a)y'+ay=0,\qquad y(0)=0,\; y'(0)=1$$
where $a$ is approaching $1$. Let us solve this whenever $a$ is not $1$. The roots of the characteristic polynomial are $1$ and $a$, so the general solution is $c_1 e^{t} + c_2 e^{at}$. To solve the IVP we have to solve the system of equations
$$\begin{bmatrix} 1 & 1 \\ 1 & a \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}.$$
The solution to this system is $\begin{bmatrix} \frac{-1}{a-1} \\ \frac{1}{a-1} \end{bmatrix}$. Thus the solution to this IVP is $\frac{e^{at} - e^{ t}}{a-1}$. Now compute $\lim_{a \to 1}$ of that. You'll find the familiar $te^t$. Regular perturbation theory then tells us that $te^t$ solves the IVP $y''-2y'+y=0,y(0)=0,y'(0)=1$, as you can of course check directly.
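If you want to check the limit symbolically, here is a short sympy sketch (the symbols mirror the ones above):

```python
import sympy as sp

a, t = sp.symbols('a t')
y_a = (sp.exp(a * t) - sp.exp(t)) / (a - 1)    # solution of the IVP for a != 1
print(sp.limit(y_a, a, 1))                     # -> t*exp(t)

# And t*e^t indeed solves y'' - 2y' + y = 0 with y(0)=0, y'(0)=1:
y = t * sp.exp(t)
print(sp.simplify(sp.diff(y, t, 2) - 2 * sp.diff(y, t) + y))   # -> 0
```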
Of course you would not want to do this in the general situation. But this hints at trying it. Once you see that, to see why it actually works in the general situation, you can note that when you apply a linear differential operator with constant coefficients to a polynomial of degree $d$ times an exponential, you get a polynomial times that same exponential. The degree of the resulting polynomial just turns out to be $d-m$. So if you go up to polynomials of degree $d+m$ you can make the resulting polynomial be an arbitrary polynomial of degree $d$.
Another perspective is offered by linear algebra. Here the factors of $t$ arise when you take the matrix exponential of a Jordan block. For example, one can explicitly calculate $e^{At}$ where $A=\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$ using the series expansion. You get $\begin{bmatrix} e^t & t e^t \\ 0 & e^t \end{bmatrix}$. Now the matrix associated to $y''-2y'+y=0$ is $\begin{bmatrix} 0 & 1 \\ -1 & 2 \end{bmatrix}$ which has the Jordan form above. You can do the same for higher order Jordan blocks.
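For the linear-algebra perspective, sympy can compute the matrix exponential of the Jordan block directly (a small sketch):

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[1, 1],
               [0, 1]])
print((A * t).exp())   # Matrix([[exp(t), t*exp(t)], [0, exp(t)]])
```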
I'm not so sure how to give a sketch of why variation of parameters should work, though. One thing that might help a little bit is to realize that the only ad hoc part of variation of parameters is the idea of multiplying the homogeneous solutions by variable coefficients. Everything else follows by just following through the algebra/calculus. Typically in textbooks this "everything else" is taught as part of the method itself because this algebra is actually a little messy. That's great for ease of just making the method work, not so great for understanding where it came from. |
Given $a$, $b$ and $c$ are positive real numbers. Prove that:$$\sum \limits_{cyc}\frac {a}{(b+c)^2} \geq \frac {9}{4(a+b+c)}$$
Additional info: We can't use induction. We should mostly use Cauchy inequality. Other inequalities can be used rarely.
Things I have done so far: The inequality look is similar to Nesbitt's inequality.
We could re-write it as: $$\sum \limits_{cyc}\frac {a}{(b+c)^2}(2(a+b+c)) \geq \frac{9}{2}$$
Re-write it again:$$\sum \limits_{cyc}\frac {a}{(b+c)^2}\sum \limits_{cyc}(b+c) \geq \frac{9}{2}$$ Cauchy appears: $$\sum \limits_{cyc}\frac {a}{(b+c)^2}\sum \limits_{cyc}(b+c) \geq \left(\sum \limits_{cyc}\sqrt\frac{a}{b+c}\right)^2$$ So, if I prove $\left(\sum \limits_{cyc}\sqrt\frac{a}{b+c}\right)^2 \geq \frac {9}{2}$ then problem is solved.
Re-write in semi expanded form:$$2\left(\sum \limits_{cyc}\frac{a}{b+c}+2\sum \limits_{cyc}\sqrt\frac{ab}{(b+c)(c+a)}\right) \geq 9$$
We know that $\sum \limits_{cyc}\frac{a}{b+c} \geq \frac {3}{2}$, so it suffices to show $$4\sum \limits_{cyc}\sqrt\frac{ab}{(b+c)(c+a)} \geq 6$$
So the problem simplifies to proving this $$\sum \limits_{cyc}\sqrt\frac{ab}{(b+c)(c+a)} \geq \frac{3}{2}$$
And I'm stuck here. |
When a symmetry is anomalous, the path integral $Z=\int\mathcal{D}\phi e^{iS[\phi]}$ is not invariant under that group of symmetry transformations $G$. This is because though the classical action $S[\phi]$ is invariant the measure may not be invariant. Since 1PI effective action $\Gamma[\phi_{c}]$ takes quantum corrections into account, I expect that it is not invariant under the symmetry. Is there a way to see/prove whether $\Gamma[\phi_{c}]$ is invariant under the anomalous symmetry? That requires one to know how $\phi_c$ changes under the symmetry.
The effective action $\Gamma[\phi_c]$ is the Legendre transform of the generator of connected correlation functions $W[J]$ which for most QFTs is defined as $$W[J]=-i\log Z[J]$$ $Z[J]$ is the generating functional/partition function of the theory. So you can write $$\Gamma[\phi_c]=W[J]-\int d^4x \ \phi_c(x) J(x)$$ where $\phi_c=\langle \phi \rangle_{J}=\frac{\delta W}{\delta J}$ is the vacuum expectation value in the presence of the external source $J$.
Ok so now, let's derive the Slavnov-Taylor identities in the case of an anomalous local gauge symmetry or a global symmetry: $$\phi(x)\rightarrow \phi'(x) = \phi(x)+\delta \phi(x), \qquad \delta \phi(x) = \epsilon(x) F(x,\phi(x))$$ such that the classical action is left invariant $$S[\phi+\epsilon F]=S[\phi]$$
however if the symmetry is anomalous the functional measure will change: $$\mathcal{D}\phi\rightarrow \mathcal{D}(\phi+\epsilon F)=\mathcal{D}\phi\ e^{i\int d^4x\ \epsilon(x) \mathcal{A}(x)}$$ where $\mathcal{A}$ is the anomaly function in the parametrization we have given. Therefore the connected generating functional $W$ will transform as \begin{align} e^{i W[J]}&=\int \mathcal{D}\phi \exp\left[i S[\phi]+i\int d^4x\ \phi(x)J(x)\right]\\ &=\int \mathcal{D}\phi' \exp\left[i S[\phi']+i\int d^4x\ \phi'(x)J(x)\right]\\ &=\int \mathcal{D}\phi \exp\left[i S[\phi]+i\int d^4x \phi(x)J(x)+i\int d^4x \ \epsilon(x) F(x)J(x)+i\int d^4x\ \epsilon(x) \mathcal{A}(x)\right] \end{align} So, expanding to $O(\epsilon)$ we obtain the Ward identity $$\int d^4x \left(\mathcal{A}(x)+J(x)\langle F(x,\phi)\rangle _J\right)=0$$
Now we can perform the Legendre transformation and get the Slavnov-Taylor identity for the effective action: $$\int d^4x \left(\mathcal{A}(x)-\langle F(x,\phi)\rangle _{J_{\phi_c}}\frac{\delta \Gamma[\phi_c]}{\delta \phi_c(x)}\right)=0$$ The reason for this formula is that $J=-\frac{\delta \Gamma[\phi_c]}{\delta \phi_c(x)}$ from the Legendre transform, while ${J_{\phi_c}}$ means that $J$ is fixed by solving the equation $\phi_c=\langle \phi \rangle_{J}=\frac{\delta W}{\delta J}$, and so it corresponds to a fixed value of $\phi_c$. So, the symmetry under which the effective action transforms is $$\phi_c\rightarrow \phi_c+\epsilon\langle F(x,\phi(x))\rangle_{J_{\phi_c}}$$
Up until now everything has been general; however, if we choose a linear transformation to be anomalous, i.e. $F(x,\phi)=f(x)\phi(x)$ depends linearly on the fields (which covers most symmetry transformations), we obtain $$\int d^4x \left(\mathcal{A}(x)-F(x,\phi_c)\frac{\delta \Gamma[\phi_c]}{\delta \phi_c(x)}\right)=0$$ where the expectation value is replaced simply by $F$ evaluated at $\phi_c$, since $\langle f(x)\phi(x)\rangle=f(x)\phi_c$.
Therefore, remembering that $\epsilon F\equiv \delta \phi$, for a linear symmetry we can write the variation of the effective action as $$\delta_\epsilon \Gamma[\phi_c]\equiv\int d^4x\ \delta\phi_c(x)\frac{\delta \Gamma[\phi_c]}{\delta \phi_c(x)}=\int d^4x\ \epsilon(x)\mathcal{A}(x)$$ so that if the symmetry were not anomalous the effective action would be invariant.
Therefore, for a general non-linear symmetry transformation the relation between the field transformation and the effective action is complicated and has no simple classical analogue, since one has to take the expectation value of the transformation; on the other hand, if the symmetry is linear then the effective action transforms as the classical action with $\phi_c$ instead of $\phi$. In both cases the presence of the anomaly makes the variation of the effective action nonzero, as we have shown.
EDIT: The derivation is perfectly valid for global symmetries too, just pull $\epsilon$ out of the integrals and ignore its $x$ dependence.
In the case of gauge symmetries it is not that anomalies have to be removed; it is that for a gauge theory to be consistent the anomalies have to cancel. Suppose you have a gauge theory with massless fermions (only massless fermions contribute to the anomaly) $$\mathcal{L}=-\frac{1}{4} F^{\mu\nu}F_{\mu\nu}+\bar{\psi}\left(i\gamma_\mu D^{\mu}\right)\psi$$ where $F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu+i g \left[A_\mu,A_\nu\right]$ and $D_\mu=\partial_\mu+i g A_\mu$. The fermionic effective action can be calculated exactly: $$S_{eff} = -\frac{1}{4} \int d^4x\ F^{\mu\nu}F_{\mu\nu}+tr \log\left(i\gamma_\mu D^\mu[A]\right) = -\frac{1}{4} \int d^4x\ F^{\mu\nu}F_{\mu\nu}+S_F[A]$$
Now, if $S_F[A]$ is gauge invariant you can quantize the theory as usual with ghosts and all that and you get a unitary renormalizable theory. However, if $S_F[A]$ is not gauge invariant, the theory becomes inconsistent. In the presence of a gauge anomaly that's precisely the case. Let's write the effective action before the path integration over the matter fields is done, and let's do a gauge transformation of the fields, with $A'= A+\delta A$ being a gauge transformation of the gauge field: \begin{align} e^{i S_F[A']}=&\int \mathcal{D}\psi\mathcal{D}\bar{\psi}\ e^{i\int d^4x\ \bar{\psi}\left(i\gamma_\mu D^{\mu}[A']\right)\psi} = \int \mathcal{D}\psi'\mathcal{D}\bar{\psi}'\ e^{i\int d^4x\ \bar{\psi}'\left(i\gamma_\mu D^{\mu}[A']\right)\psi'} \\ &=\int \mathcal{D}\psi\mathcal{D}\bar{\psi}\ e^{i\int d^4x\ \bar{\psi}\left(i\gamma_\mu D^{\mu}[A]\right)\psi} e^{i\int d^4x \ \epsilon(x)\mathcal{A}(x)} = e^{i\int d^4x \ \epsilon(x)\mathcal{A}(x)} e^{i S_F[A]} \end{align} where we have followed the same steps as before, with the gauge anomaly popping out of the measure transformation. Therefore the variation of the effective action under a gauge transformation is $$\delta_\epsilon S_F[A] \equiv S_F[A+\delta A]-S_F[A]=\int d^4 x\ \epsilon(x) \mathcal{A}(x)$$ and so you see that the anomaly breaks the gauge invariance of the matter fields' effective action. This in turn breaks unitarity etc.
Ok so now that we have a more precise definition of what inconsistent means, we can see that the anomaly just can't be removed, it has to vanish. This is a property only a few gauge groups have, including the standard model's $SU(3)\times SU(2)\times U(1)$. The reason for this is in the explicit form of gauge anomalies (I will not repeat the derivation here since it can be easily found in textbooks and is quite long) : $$\mathcal{A}_a(x)=-\frac{1}{32 \pi^2}D_{abc}\epsilon^{\mu\nu\rho\sigma}F^b_{\mu\nu}F^c_{\rho\sigma}$$ where $a,b,c$ are indices belonging to the gauge group. The anomaly cancels if the group theory factors $D_{abc} = 0$. |
Noether's theorem states that, for every continuous symmetry of an action, there exists a conserved quantity, e.g. energy conservation for time invariance, charge conservation for $U(1)$. Is there any similar statement for discrete symmetries?
For continuous global symmetries, Noether theorem gives you a locally conserved charge density (and an associated current), whose integral over all of space is conserved (i.e. time independent).
For global discrete symmetries, you have to distinguish between the cases where the conserved charge is continuous or discrete. For infinite symmetries like lattice translations the conserved quantity is continuous, albeit a periodic one. So in such a case momentum is conserved modulo vectors in the reciprocal lattice. The conservation is local just as in the case of continuous symmetries.

In the case of a finite group of symmetries the conserved quantity is itself discrete. You then don't have local conservation laws because the conserved quantity cannot vary continuously in space. Nevertheless, for such symmetries you still have a conserved charge which gives constraints (selection rules) on allowed processes. For example, for parity invariant theories you can give each state of a particle a "parity charge" which is simply a sign, and the total charge has to be conserved for any process, otherwise the amplitude for it is zero.
Put into one sentence, Noether's first Theorem states that a continuous, global, off-shell symmetry of an action $S$ implies a local on-shell conservation law. By the words on-shell and off-shell are meant whether the Euler-Lagrange equations of motion are satisfied or not.

Now the question asks if continuous can be replaced by discrete?

It should immediately be stressed that the Noether Theorem is a machine that for each input in form of an appropriate symmetry produces an output in form of a conservation law. To claim that a Noether Theorem is behind, it is not enough to just list a couple of pairs (symmetry, conservation law).
Now, where could a discrete version of Noether's Theorem live? A good bet is in a discrete lattice world, if one uses finite differences instead of differentiation. Let us investigate the situation.
Our intuitive idea is that finite symmetries, e.g., time reversal symmetry, etc, can not be used in a Noether Theorem in a lattice world because they don't work in a continuous world. Instead we pin our hopes to that discrete infinite symmetries that become continuous symmetries when the lattice spacings go to zero, can be used.
Imagine for simplicity a 1D point particle that can only be at discrete positions $q_t\in\mathbb{Z}a$ on a 1D lattice $\mathbb{Z}a$ with lattice spacing $a$, and that time $t\in\mathbb{Z}$ is discrete as well. (This was, e.g., studied in J.C. Baez and J.M. Gilliam, Lett. Math. Phys. 31 (1994) 205; hat tip: Edward.) The velocity is the finite difference
$$v_{t+\frac{1}{2}}:=q_{t+1}-q_t\in\mathbb{Z}a,$$
and is discrete as well. The action $S$ is
$$S[q]=\sum_t L_t$$
with Lagrangian $L_t$ on the form
$$L_t=L_t(q_t,v_{t+\frac{1}{2}}).$$
Define momentum $p_{t+\frac{1}{2}}$ as
$$ p_{t+\frac{1}{2}} := \frac{\partial L_t}{\partial v_{t+\frac{1}{2}}}. $$
Naively, the action $S$ should be extremized wrt. neighboring virtual discrete paths $q:\mathbb{Z} \to\mathbb{Z}a$ to find the equation of motion. However, it does not seem feasible to extract a discrete Euler-Lagrange equation in this way, basically because it is not enough to Taylor expand to the first order in the variation $\Delta q$ when the variation $\Delta q\in\mathbb{Z}a$ is not infinitesimal. At this point, we throw our hands in the air, and declare that the virtual path $q+\Delta q$ (as opposed to the stationary path $q$) does not have to lie in the lattice, but that it is free to take continuous values in $\mathbb{R}$. We can now perform an infinitesimal variation without worrying about higher order contributions,
$$0 =\delta S := S[q+\delta q] - S[q] = \sum_t \left[\frac{\partial L_t}{\partial q_t} \delta q_t + p_{t+\frac{1}{2}}\delta v_{t+\frac{1}{2}} \right] $$ $$ =\sum_t \left[\frac{\partial L_t}{\partial q_t} \delta q_{t} + p_{t+\frac{1}{2}}(\delta q_{t+1}- \delta q_t)\right] $$ $$=\sum_t \left[\frac{\partial L_t}{\partial q_t} - p_{t+\frac{1}{2}} + p_{t-\frac{1}{2}}\right]\delta q_t + \sum_t \left[p_{t+\frac{1}{2}}\delta q_{t+1}-p_{t-\frac{1}{2}}\delta q_t \right].$$
Note that the last sum is telescopic. This implies (with suitable boundary conditions) the discrete Euler-Lagrange equation
$$\frac{\partial L_t}{\partial q_t} = p_{t+\frac{1}{2}}-p_{t-\frac{1}{2}}.$$
This is the evolution equation. At this point it is not clear whether a solution for $q:\mathbb{Z}\to\mathbb{R}$ will remain on the lattice $\mathbb{Z}a$ if we specify two initial values on the lattice. We shall from now on restrict our considerations to such systems for consistency.
As an example, one may imagine that $q_t$ is a cyclic variable, i.e., that $L_t$ does not depend on $q_t$. We therefore have a discrete global translation symmetry $\Delta q_t=a$. The Noether current is the momentum $p_{t+\frac{1}{2}}$, and the Noether conservation law is that momentum $p_{t+\frac{1}{2}}$ is conserved. This is certainly a nice observation. But this does not necessarily mean that a Noether Theorem is behind.
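As a quick numerical illustration (not part of the argument above), here is a sketch that iterates the discrete evolution equation for an assumed Lagrangian $L_t=\frac{1}{2}v_{t+\frac{1}{2}}^2-g\,q_t$ with integer $g$ and lattice spacing $a=1$, so solutions stay on the lattice; the momentum differences are constant exactly when $q_t$ is cyclic ($g=0$):

```python
import numpy as np

def evolve(q0, q1, g, steps):
    q = [q0, q1]
    for _ in range(steps):
        # discrete EL equation: p_{t+1/2} - p_{t-1/2} = dL_t/dq_t = -g,  with p = v
        q.append(2 * q[-1] - q[-2] - g)
    return np.array(q)

print(np.diff(evolve(0, 3, g=0, steps=8)))   # cyclic q_t: momentum 3, 3, 3, ... conserved
print(np.diff(evolve(0, 3, g=1, steps=8)))   # non-cyclic: momentum drops by 1 each step
```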
Imagine that the enemy has given us a global vertical symmetry $\Delta q_t = Y(q_t)\in\mathbb{Z}a$, where $Y$ is an arbitrary function. (The words vertical and horizontal refer to translation in the $q$ direction and the $t$ direction, respectively. We will for simplicity not discuss symmetries with horizontal components.) The obvious candidate for the bare Noether current is
$$j_t = p_{t-\frac{1}{2}}Y(q_t).$$
But it is unlikely that we would be able to prove that $j_t$ is conserved merely from the symmetry $0=S[q+\Delta q] - S[q]$, which would now unavoidably involve higher order contributions. So while we stop short of declaring a no-go theorem, it certainly does not look promising.
Perhaps, we would be more successful if we only discretize time, and leave the coordinate space continuous? I might return with an update about this in the future.
An example from the continuous world that may be good to keep in mind: Consider a simple gravity pendulum with Lagrangian
$$L(\varphi,\dot{\varphi}) = \frac{m}{2}\ell^2 \dot{\varphi}^2 + mg\ell\cos(\varphi).$$
It has a global discrete periodic symmetry $\varphi\to\varphi+2\pi$, but the (angular) momentum $p_{\varphi}:=\frac{\partial L}{\partial\dot{\varphi}}= m\ell^2\dot{\varphi}$ is not conserved if $g\neq 0$.
You mentioned crystal symmetries. Crystals have a discrete translation invariance: It is not invariant under an infinitesimal translation, but invariant under translation by a lattice vector. The result of this is conservation of momentum up to a reciprocal lattice vector.
There is an additional result: Suppose the Hamiltonian itself is time independent, and suppose the symmetry is related to an operator $\hat S$. An example would be the parity operator $\hat P|x\rangle = |-x\rangle$. If this operator is a symmetry, then $[H,P] = 0$. But since the commutator of an operator with the Hamiltonian also gives you the derivative, you have $\dot P = 0$.
Actually there are analogies or generalisations of results which reduce to Noether's theorems in the usual cases and which do hold for discrete (and not necessarily discretised) symmetries (including CPT-like symmetries).

Abstract: We introduce a method to construct conservation laws for a large class of linear partial differential equations. In contrast to the classical result of Noether, the conserved currents are generated by any symmetry of the operator, including those of the non-Lie type. An explicit example is made of the Dirac equation where we use our construction to find a class of conservation laws associated with a 64 dimensional Lie algebra of discrete symmetries that includes CPT.
The way followed is a succesive relaxation of the conditions of Noether's theorem on continuous (Lie) symmetries, which generalise the result in other cases.
For example (from above), emphasis, additions mine:
The connection between symmetry and conservation laws has been inherent in all of mathematical physics since Emmy Noether published, in 1918, her hugely influential work linking the two. ..[M]any have put forward approaches to study conservation laws, through a variety of different means. In each case, a conservation law is defined as follows.
Definition 1.Let $\Delta[u] = 0$ be a system of equations depending on the independent variables $x = (x_1, \dots , x_n)$, the dependent variables $u = (u_1, \dots , u_m)$ and derivatives thereof. Then a conservation law for $\Delta$ is defined by some $P = P[u]$ such that: $${\operatorname{Div} P \; \Big|}_{\Delta=0} = 0 \tag{1.1}$$
where $[u]$ denotes the coordinates on the $N$-th jet of $u$, with $N$ arbitrary.
Noether's [original] theorem is applicable in the [special] case where $\Delta[u] = 0$ arises as the Euler-Lagrange equation to an associated variational problem. It is well known that a PDE has a variational formulation if and only if it has self-adjoint Frechet derivative. That is to say: if the system of equations $\Delta[u] = 0$ is such that $D_{\Delta} = {D_{\Delta}}^*$ then the following result is applicable.
Theorem (Noether).For a non-degenerate variational problem with $L[u] = \int_{\Omega} \mathfrak{L} dx$, the correspondence between nontrivial equivalence classes of variational symmetries of $L[u]$ and nontrivial equivalence classes of conservation laws is one-to-one.
[..]Given that [the general set of symmetries] is far larger than those considered in the classical work of Noether, there is potentially an even stronger correspondence between symmetry and conservation laws for PDEs[..]
Definition 2.We say the operator $\Gamma$ is a symmetry of the linear PDE $\Delta[u] \equiv L[u] = 0$ if there exists an operator $\alpha_{\Gamma}$ such that: $$[L, \Gamma] = \alpha_{\Gamma} L$$ where $[\cdot, \cdot]$ denotes the commutator by composition of operators so $L \Gamma = L \circ \Gamma$. We denote the set of all such symmetries by $sym(\Delta)$.
Corollary 1.If $L$ is self-adjoint or skew-adjoint, then each $\Gamma \in sym(L)$ generates a conservation law.
Specifically, for the Dirac equation and CPT symmetry the following conservation law is derived (ibid.):
No, because discrete symmetries have no infinitesimal form which would give rise to the (characteristic of) conservation law. See also this article for a more detailed discussion.
As was said before, this depends on what kind of 'discrete' symmetry you have: if you have a bona fide discrete symmetry, as e.g. $\mathbb{Z}_n$, then the answer is in the negative in the context of Nöther's theorem(s) — even though there are conclusions that you can draw, as Moshe R. explained.
However, if you're talking about a discretized symmetry, i.e. a continuous symmetry (global or local) that has been somehow discretized, then you do have an analogue to Nöther's theorem(s) à la Regge calculus. A good talk introducing some of these concepts is Discrete Differential Forms, Gauge Theory, and Regge Calculus (PDF): the bottom line is that you have to find a Finite Difference Scheme that preserves your differential (and/or gauge) structure.
There's a big literature on Finite Difference Schemes for Differential Equations (ordinary and partial).
Sobering thoughts:
Conservation laws are not related to any symmetry, to tell the truth. For a mechanical system with N degrees of freedom there are always N conserved quantities. They are complicated combinations of the dynamical variables. Their existence is guaranteed by the existence of the problem solutions.
When there is a symmetry, the conserved quantities get just a simpler look.
EDIT: I do not know how they teach you but the conservation laws are not related to Noether theorem. The latter just shows how to construct some of conserved quantities from the problem Lagrangian and the problem solutions. Any combination of conserved quantities is also a conserved quantity. So what Noether gives is not unique at all.
Maybe. I am by no means an expert, but I read this a few weeks ago. In that paper they consider a 2d lattice and construct an energy analogue. They show it behaves as energy should, and then conclude that for this energy to be conserved space-time would need to be invariant.
Electric charge conservation is a "discrete" symmetry. Quarks and anti-quarks have discrete fractional electric charges (±1/3, ±2/3); electrons, positrons and protons have integer charges.
Let's consider $0<\alpha<1/2$ and denote by $W_T^{1-\alpha,\infty}(0,T)$ the space of measurable functions $g:[0,T]\to\Bbb R$ such that $$ ||g||_{1-\alpha,\infty,T}:=\sup_{0<s<t<T}\left[\frac{|g(t)-g(s)|}{(t-s)^{1-\alpha}}+\int_s^t\frac{|g(y)-g(s)|}{(y-s)^{2-\alpha}}\,dy\right]<+\infty\;\;\;. $$
Moreover, we define the
right sided Riemann-Liouville integral of order $1-\alpha$ of a function $f\in L^p(0,t)$, with $1\le p\le\infty$, as$$I_{t-}^{1-\alpha}f(x):=\frac{(-1)^{\alpha-1}}{\Gamma(1-\alpha)}\int_x^t(y-x)^{-\alpha}f(y)\,dy\;\;\;$$for a.a. $x\in[0,t]$.
My problem is the following: in a paper by Nualart and Rascanu it is stated that, if $g\in W_T^{1-\alpha,\infty}(0,T)$ then its restriction to $[0,t]$ stays in $I_{t-}^{1-\alpha}(L^{\infty}(0,t))$ for all $0<t<T$; so in other words, given such a $g$, there exists $h\in L^{\infty}(0,t)$ such that $$ g(x)|_{[0,t]}=I_{t-}^{1-\alpha}h(x)=\frac{(-1)^{\alpha-1}}{\Gamma(1-\alpha)}\int_x^t(y-x)^{-\alpha}h(y)\,dy\;\;\;.$$
It seems to me that I DON'T HAVE to find the $h$ explicitly in terms of $g$, but rather I should use a theoretical argument which proves the existence of such an $h$; but I don't know how to do it.
I'm quite lost; can someone shed some light please?
EDIT: Obviously $\Gamma$ is the Euler Gamma function and $(-1)^{\alpha-1}=e^{i\pi(\alpha-1)}$, but these terms are constant, thus this doesn't play any relevant role here. SECOND EDIT: We can state the "symmetric" claim; the underlying duality could help.
We denote by $W_0^{\alpha,1}(0,T)$ the space of measurable functions $f:[0,T]\to\Bbb R$ such that $$ ||f||_{\alpha,1}:=\int_0^T\frac{|f(s)|}{s^{\alpha}}\,ds+\int_0^T\int_0^s\frac{|f(s)-f(y)|}{(s-y)^{\alpha+1}}\,dyds<+\infty $$
As before we define the
left sided Riemann-Liouville integral of order $\alpha$ of a function $f\in L^p(0,t)$, with $1\le p\le\infty$, as$$I_{0+}^{\alpha}f(x):=\frac{1}{\Gamma(\alpha)}\int_0^x(x-y)^{\alpha-1}f(y)\,dy\;\;\;$$for a.a. $x\in[0,t]$.
Then if $g\in W_0^{\alpha,1}(0,T)$ then its restriction to $[0,t]$ stays in $I_{0+}^{\alpha}(L^1(0,t))$ for all $0<t<T$. |
I have an ellipse centered at $(h,k)$, with semi-major axis $r_x$, semi-minor axis $r_y$, both aligned with the Cartesian plane.
How do I determine if a circle with center $(x,y)$ and radius $r$ is within the area bounded by the ellipse?
Idea: doing a translation we can suppose that the circle is centered at $(0,0)$. Parametrize the equation of the ellipse:
$$x(t) = h + r_x\cos t,$$ $$y(t) = k + r_y\sin t.$$ And find the maximum and minimum of $t\mapsto x(t)^2 + y(t)^2$.
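A numerical sketch of this idea (function and variable names are illustrative, not from the answer above): translate so the circle's centre is the origin, minimise the squared distance to the parametrised ellipse over a few arcs, and compare with $r^2$; the centre must also lie inside the ellipse.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def circle_in_ellipse(h, k, rx, ry, cx, cy, r):
    # ellipse centre (h, k), semi-axes rx, ry; circle centre (cx, cy), radius r
    if ((cx - h) / rx) ** 2 + ((cy - k) / ry) ** 2 >= 1:
        return False                       # circle centre not strictly inside the ellipse
    def dist2(t):                          # squared distance from circle centre to ellipse point
        x = h + rx * np.cos(t) - cx
        y = k + ry * np.sin(t) - cy
        return x * x + y * y
    # minimise over three arcs to avoid getting stuck in a local minimum
    best = min(minimize_scalar(dist2, bounds=(a, a + 2 * np.pi / 3), method="bounded").fun
               for a in (0.0, 2 * np.pi / 3, 4 * np.pi / 3))
    return best >= r * r

print(circle_in_ellipse(0, 0, 5, 3, 1, 0, 1.5))   # True: circle fits inside
print(circle_in_ellipse(0, 0, 5, 3, 4, 0, 1.5))   # False: circle pokes out near the vertex
```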
Solve for $x$ or $y$ from
$$\frac{(x-h)^2}{r_x^2}+\frac{(y-k)^2}{r_y^2}=1$$
and
$$\frac{(x-h)^2}{r^2}+\frac{(y-k)^2}{r^2}=1 . $$
If the point of intersection is complex due to the quantity under the radical sign (the discriminant) being negative, then the circle is inside the ellipse.
If it is zero, the circle touches the ellipse, and if positive there is an intersection between them at 2 or 4 points.
So, for the past few years it's been my goal to create an equation that would give me the position of an object in a gravitational field at time $t$, given it's initial position and velocity. At first the problem was that I didn't know enough to do the math. Now that I can do multivariable calculus I thought that problem would be solved, but I've just ended up running into a new problem. Please don't tell me how to solve it, but if you can give me a hint that would be great. Here's the set up for the problem:
A planet of mass $M$ (and radius = 0) is situated at the origin. I know that the magnitude of acceleration due to gravity is $$\frac{GM}{r^2}$$ so an object at $(x,y)$ will have acceleration $$a(x,y)= \frac{GM}{x^2+y^2},$$ or, as a vector, $$\overrightarrow{a}(x,y)= \left\langle \frac{GM}{x^2+y^2}\cos\theta, \frac{GM}{x^2+y^2}\sin\theta\right\rangle$$
$$= \left\langle \frac{GM}{x^2+y^2}\frac{x}{\sqrt{x^2+y^2}}, \frac{GM}{x^2+y^2}\frac{y}{\sqrt{x^2+y^2}}\right\rangle$$ $$= \left\langle \frac{GMx}{(x^2+y^2)^{3/2}}, \frac{GMy}{(x^2+y^2)^{3/2}}\right\rangle$$
So, here's where I'm stuck. I can integrate with respect to distance and get
$$ W(x,y) = \left\langle -\frac{GM}{\sqrt{x^2+y^2}}, -\frac{GM}{\sqrt {x^2+y^2}}\right\rangle$$
which I think is a vector whose magnitude is the work done, but that doesn't tell me anything about time. I can integrate with respect to time, but that would give
$$f(x,y)= \left\langle \frac{GMx}{(x^2+y^2)^{3/2}}t, \frac{GMy}{(x^2+y^2)^{3/2}}t\right\rangle$$
which... I mean is naïve at best. It doesn't take into account the change in position that happens over time. The only thing that I can think of to do is somehow find parametric equations where $x$ and $y$ are functions of $t$, but that's basically what I'm trying to do anyway.
Any ideas? I want to find an equation such that I can put in a location and velocity and the equation will tell me what path the object will take. Is that even possible? |
I am looking for the equations of motion of an unrestrained rigid body. The equations of motion are readily available in the literature, but my concern is to derive them by Hamilton's principle.
Expressing the position of an infinitesimal particle within the body as:
$$ \vec{R} = \vec{R}_0 + \vec{r} $$
where $\vec{R}_0$ and $\vec{r}$ are expressed in terms of body coordinates, the origin of which is located at the center of mass. Additionally, $\vec{R}_0$ is the position of the center of mass measured from the inertial frame, and $\vec{r}$ is the position of the point measured from the body frame.
The velocity of this point with respect to the inertial frame can be found as:
$$ \vec{V} = \dot{\vec{R}}_0 + \vec{\omega} \times (\vec{R}_0+\vec{r}) $$
where $ (\dot{}) $ represents the time derivative with respect to the body frame. To find the acceleration, differentiate with respect to inertial frame once more yields:
$$ \vec{a} = \ddot{\vec{R}}_0 + \dot{\vec{\omega}}\times(\vec{R}_0+\vec{r}) + \vec{\omega} \times \dot{\vec{R}}_0 + \vec{\omega} \times \vec{V} $$
One can find the variation of the velocity by replacing the time derivatives with the variational $\delta$ operator and using the infinitesimal rotation vector $\vec{\delta\theta}$:
$$ \delta \vec{V} = \delta \dot{\vec{R}}_0 + \delta \vec{\omega} \times(\vec{R}_0+\vec{r}) + \vec{\omega} \times \delta \vec{R}_0 + \vec{\delta\theta} \times \vec{V} $$
Now, I can use the variation of the kinetic energy. For simplicity, I do not consider the potential energy and the work done by external forces:
$$ \int_{t_1}^{t_2} \delta T dt = 0 $$
where
$$ \delta T = \int_D \rho \vec{V} \cdot \delta\vec{V} dD $$
By first calculating the $\delta \vec{R}_0$ terms, I obtain the following:
$$ \int_{t_1}^{t_2} \int_D \rho \left[ \left( \dot{\vec{R}}_0 + \vec{\omega} \times (\vec{R}_0+\vec{r}) \right) \cdot \delta \dot{\vec{R}}_0 + \left( \dot{\vec{R}}_0 + \vec{\omega} \times (\vec{R}_0+\vec{r}) \right) \cdot \left( \vec{\omega} \times \delta \vec{R}_0 \right) \right] dD dt $$
The first part of this integral can be integrated by parts, and hence one can obtain the translational equation of motion.
The rotational equations of motion can be obtained from the second part:
$$ \int_{t_1}^{t_2} \int_D \rho \left[ \left( \dot{\vec{R}}_0 + \vec{\omega} \times (\vec{R}_0+\vec{r}) \right) \cdot \left( \delta \vec{\omega} \times(\vec{R}_0+\vec{r}) \right) + \left( \dot{\vec{R}}_0 + \vec{\omega} \times (\vec{R}_0+\vec{r}) \right) \cdot \left( \vec{\delta\theta} \times \vec{V} \right) \right] dD dt $$
The second part of the integral is zero, hence we end up with the following form:
$$ \int_{t_1}^{t_2} \int_D \rho \left[ \left( \vec{R}_0+\vec{r} \right) \times \left( \dot{\vec{R}}_0 + \vec{\omega} \times (\vec{R}_0+\vec{r}) \right) \right] \cdot \delta \vec{\omega} dD dt $$
My question is here. Since angular velocity is non-holonomic, it cannot be expressed as a derivative of a vector, i.e., one cannot obtain an expression for rotation. I need to evaluate this integral by parts to obtain the rotational equations of motion.
In other words, how can I find the relation between $ \delta \vec{\omega} $ and $ \vec{\delta \theta} $?
Please note that the use of the following expression does not yield the correct result:
$$ \delta \vec{\omega} = \frac{d (\vec{\delta \theta}) }{dt} $$ |
Chips Packaging Machine Manufacturer, Factory Supplier: chips packing machine, animal food packing machine, rice husk powder packing machine
Our company has a long history in China of producing chips packing machines, animal food packing machines, rice husk powder packing machines, high speed vertical packing machines and multihead weighers; we are a professional and trustworthy manufacturer. In order to provide trustworthy products, we strictly control our technological process. Thanks to our mature craft and professional mechanics, we can provide chips packing machine products at a reasonable price. As you know, business is only the first step; we hope that we can build a relationship of mutual trust and long-term cooperation with you. Good service is as important as product quality. Thank you for making our acquaintance.
The weight of potato chips in a medium-size bag is stated to be 10 oz. The amount that the packaging machine puts in these bags is believed to have a normal model with a mean of ___ oz. and a standard deviation of ___ oz. (Round to 4 decimal places as necessary.)
a) What fraction of all bags sold are underweight?
b) Some of the chips are sold in "bargain packs" of 33 bags. What is the probability that none of the 33 bags is underweight?
c) What is the probability that the mean weight of the 33 bags is below the stated amount?
d) What is the probability that the mean weight of a 20-bag case of potato chips is below 10 oz.?
Normal Distribution:
A random variable X is said to be following common distribution with meaneq\mu /eq and variance eq\sigma^2 /eq if its distribution is given as eqf(x|\mu,\sigma^2)=\frac1\sqrt2\pi \sigma^2e^-\frac(x-\mu)^22\sigma^2 \qquad , -\infty \leq X \leq \infty \\ \bar x \area \sim \house N(\mu,\frac\sigma^2n). /eq
general distribution is given by way of gauss and at the beginning it's used for modeling the error .
It is assumed that all the natural phenomenon follows standard distribution.answer and rationalization:
it's given that medium size chips is of 10 ounces
The volume that packaging computer put observe standard distribution with suggest and standard deviation
eq\mu= \\ \sigma= /eq
a) what fraction of baggage are underweight under 10 oz
So we need to locate zscore and then the usage of NORMSDIST (z) feature of MS Excel we can get the likelihood
eqP(X <10)=P(Z<\ /eq
b) P(None is underweight)=?
None is underweight skill all 33 aren't under 10 oz.
Let y denotes variety of underweight
y ~Binom(33,
P(None is underweight)=P(Y=0)=?
eqP(Y=0)=\ /eq
c) right here n=33
So we comprehend that eq\bar x \sim N(\mu ,\frac\sigma^2n) /eq
for that reason eq\bar x /eq comply with commonplace distribution with mean and average deviation
eqP(\bar x <10)=P(Z<\ /eq
d) in a similar fashion here n=20
for this reason eq\bar x /eq observe general distribution with suggest and commonplace deviation
eqP(\bar x <10)=P(Z<\ /eq |
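A sketch of the same computations with scipy.stats; the mean and standard deviation are not given above, so `mu` and `sigma` below are placeholder values to be replaced:

```python
from scipy.stats import norm, binom

# placeholder values; the source above omits the actual mean and standard deviation
mu, sigma = 10.2, 0.12
stated = 10.0

# a) fraction of bags that are underweight (below 10 oz)
p_under = norm.cdf(stated, loc=mu, scale=sigma)

# b) probability that none of the 33 bags in a bargain pack is underweight
p_none = binom.pmf(0, 33, p_under)            # same as (1 - p_under)**33

# c) probability that the mean weight of 33 bags is below 10 oz
p_mean_33 = norm.cdf(stated, loc=mu, scale=sigma / 33 ** 0.5)

# d) probability that the mean weight of a 20-bag case is below 10 oz
p_mean_20 = norm.cdf(stated, loc=mu, scale=sigma / 20 ** 0.5)

print(round(p_under, 4), round(p_none, 4), round(p_mean_33, 4), round(p_mean_20, 4))
```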
Let $X_n(\Bbb{Z})$ be the simplicial complex whose vertex set is $\Bbb{Z}$ and such that the vertices $v_0,...,v_k$ span a $k$-simplex if and only if $|v_i-v_j| \le n$ for every $i,j$. Prove that $X_n(\Bbb{Z})$ is $n$-dimensional...
no kidding, my maths is foundations (basic logic but not pedantic), calc 1 which I'm pretty used to work with, analytic geometry and basic linear algebra (by basic I mean matrices and systems of equations only
Anyway, I would assume if removing $ fixes it, then you probably have an open math expression somewhere before it, meaning you didn't close it with $ earlier. What's the full expression you're trying to get? If it's just the frac, then your code should be fine
This is my first time chatting here in Math Stack Exchange. So I am not sure if this is frowned upon but just a quick question: I am trying to prove that a proper subgroup of $\mathbb{Z}^n$ is isomorphic to $\mathbb{Z}^k$, where $k \le n$. So we must have $rank(A) = rank(\mathbb{Z}^k)$ , right?
For four proper fractions $a, b, c, d$ X writes $a+ b + c >3(abc)^{1/3}$. Y also added that $a + b + c> 3(abcd)^{1/3}$. Z says that the above inequalities hold only if a, b,c are positive. (a) Both X and Y are right but not Z. (b) Only Z is right (c) Only X is right (d) Neither of them is absolutely right.
Yes, @TedShifrin the order of $GL(2,p)$ is $p(p+1)(p-1)^2$. But I found this in a classification of groups of order $p^2qr$. There, the order of $H$ should be $qr$, and it appears as $G = C_{p}^2 \rtimes H$. I want to know whether we can determine the structure of $H$ that can occur.
Like can we think $H=C_q \times C_r$ or something like that from the given data?
When we say it embeds into $GL(2,p)$ does that mean we can say $H=C_q \times C_r$? or $H=C_q \rtimes C_r$? or should we consider all possibilities?
When considering finite groups $G$ of order, $|G|=p^2qr$, where $p,q,r$ are distinct primes, let $F$ be a Fitting subgroup of $G$. Then $F$ and $G/F$ are both non-trivial and $G/F$ acts faithfully on $\bar{F}:=F/ \phi(F)$ so that no non-trivial normal subgroup of $G/F$ stabilizes a series through $\bar{F}$.
And when $|F|=pr$. In this case $\phi(F)=1$ and $Aut(F)=C_{p-1} \times C_{r-1}$. Thus $G/F$ is abelian and $G/F \cong C_{p} \times C_{q}$.
In this case how can I write G using notations/symbols?
Is it like $G \cong (C_{p} \times C_{r}) \rtimes (C_{p} \times C_{q})$?
First question: Then it is, $G= F \rtimes (C_p \times C_q)$. But how do we write $F$ ? Do we have to think of all the possibilities of $F$ of order $pr$ and write as $G= (C_p \times C_r) \rtimes (C_p \times C_q)$ or $G= (C_p \rtimes C_r) \rtimes (C_p \times C_q)$ etc.?
As a second case we can consider the case where $C_q$ acts trivially on $C_p$. So then how to write $G$ using notations?
There it is also mentioned that we can distinguish among 2 cases. First, suppose that the Sylow $q$-subgroup of $G/F$ acts non-trivially on the Sylow $p$-subgroup of $F$. Then $q\mid(p-1)$ and $G$ splits over $F$. Thus the group has the form $F \rtimes G/F$.
A presentation $\langle S\mid R\rangle$ is a Dehn presentation if for some $n\in\Bbb N$ there are words $u_1,\cdots,u_n$ and $v_1,\cdots, v_n$ such that $R=\{u_iv_i^{-1}\}$, $|u_i|>|v_i|$ and for all words $w$ in $(S\cup S^{-1})^\ast$ representing the trivial element of the group one of the $u_i$ is a subword of $w$
If you have such a presentation there's a trivial algorithm to solve the word problem: Take a word $w$, check if it has $u_i$ as a subword, in that case replace it by $v_i$, keep doing so until you hit the trivial word or find no $u_i$ as a subword
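A minimal sketch of that replacement loop (illustrative only: the string encoding of generators/inverses and the relator pairs below are made up, and free reduction is glossed over):

```python
def dehn_reduce(w, pairs):
    """Greedy loop from the message above: while some u_i occurs as a subword
    of w, replace it by the strictly shorter v_i.  Terminates because every
    replacement shortens w; w represents the trivial element iff the result
    is the empty word (when <S|R> really is a Dehn presentation)."""
    changed = True
    while changed:
        changed = False
        for u, v in pairs:            # pairs encodes R = { u_i v_i^{-1} }, with |u_i| > |v_i|
            i = w.find(u)
            if i != -1:
                w = w[:i] + v + w[i + len(u):]
                changed = True
                break                 # rescan from the first pair after each replacement
    return w

# toy example with a single (made-up) relator u = "abc", v = "d":
print(dehn_reduce("xabcy", [("abc", "d")]))   # -> "xdy"
```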
There is good motivation for such a definition here
So I don't know how to do it precisely for hyperbolic groups, but if $S$ is a surface of genus $g \geq 2$, to get a geodesic representative for a class $[\alpha] \in \pi_1(S)$ where $\alpha$ is an embedded loop, one lifts it to $\widetilde{\alpha}$ in $\Bbb H^2$ by the locally isometric universal covering, and then the deck transformation corresponding to $[\alpha]$ is an isometry of $\Bbb H^2$ which preserves the embedded arc $\widetilde{\alpha}$
It has to be an isometry fixing a geodesic $\gamma$ with endpoints at the boundary being the same as the endpoints of $\widetilde{\alpha}$.
Consider the homotopy of $\widetilde{\alpha}$ to $\gamma$ by straightline homotopy, but straightlines being the hyperbolic geodesics. This is $\pi_1(S)$-equivariant, so projects to a homotopy of $\alpha$ and the image of $\gamma$ (which is a geodesic in $S$) downstairs, and you have your desired representative
I don't know how to interpret this coarsely in $\pi_1(S)$
@anakhro Well, they print in bulk, and on really cheap paper, almost transparent and very thin, and offset machine is really cheaper per page than a printer, you know, but you should be printing in bulk, its all economy of scale.
@ParasKhosla Yes, I am Indian, and trying to get into some good master's program in math.
Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants. == Branches of algebraic graph theory == === Using linear algebra === The first branch of algebraic graph theory involves the study of graphs in connection with linear algebra. Especially, it studies the spectrum of the adjacency matrix, or the Lap...
I can probably guess that they are using symmetries and permutation groups on graphs in this course.
For example, orbits and studying the automorphism groups of graphs.
@anakhro I have heard really good things about Palka. Also, if you do not worry about a little sacrifice of rigor (e.g. counterclockwise orientation based on your intuition, rather than on winding numbers, etc.), Howie's Complex analysis is good. It is teeming with typos here and there, but you will be fine, I think. Also, this book contains all the solutions in the appendix!
Got a simple question: I gotta find kernel of linear transformation $F(P)=xP^{''}(x) + (x+1)P^{'''}(x)$ where $F: \mathbb{R}_3[x] \to \mathbb{R}_3[x]$, so I think it would be just $\ker (F) = \{ ax+b : a,b \in \mathbb{R} \}$ since only polynomials of degree at most 1 would give zero polynomial in this case
@chandx you're looking for all the $G = P''$ such that $xG + (x+1)G' = 0$; if $G \neq 0$ you can solve the DE to get $G'/G = -x/(x+1) = -1 + 1/(x+1) \implies \ln G = -x + \ln(1+x) + c \implies G = C(1+x)e^{-x}$, which is obviously not a polynomial for $C \neq 0$, so $G = 0$ and thus $P = ax + b$
could you suppose that $\operatorname{deg} P \geq 2$ and show that you wouldn't have nonzero polynomials? Sure. |
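A quick symbolic cross-check of that kernel computation (a sketch with sympy; the coefficient names $a,b,c,d$ are just my choice of basis for $\mathbb{R}_3[x]$):

```python
import sympy as sp

x, a, b, c, d = sp.symbols('x a b c d')
P = a + b*x + c*x**2 + d*x**3                        # general element of R_3[x]
F = x*sp.diff(P, x, 2) + (x + 1)*sp.diff(P, x, 3)    # F(P) = x P'' + (x+1) P'''

# F(P) vanishes identically iff every coefficient of F (as a polynomial in x) is zero
coeffs = sp.Poly(sp.expand(F), x).all_coeffs()
print(sp.solve(coeffs, [a, b, c, d]))                # expect {c: 0, d: 0}, i.e. ker F = {a + b*x}
```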
I'm trying to understand the BRST complex in its Lagrangian incarnation, i.e. in the form closest to the original Faddeev-Popov formulation. It looks like the most important part of that construction (the proof of vanishing of the higher cohomology groups) is very hard to find in the literature; at least I was not able to do so. Let me formulate a couple of questions on BRST, but in the form of exercises on Lie algebra cohomology.
Let $X$ be a smooth affine variety, and let $g$ be a (reductive?) Lie algebra acting on $X$. I think we assume $g$ to be at least unimodular, otherwise the BRST construction won't work, and we also assume that the map $g \to T_X$ is injective. In physics language this is a closed and irreducible action of the Lie algebra of a gauge group on the space of fields $X$. The structure sheaf $\mathcal{O}_X$ is a module over $g$, and I can form the Chevalley-Eilenberg complex with coefficients in this module$$C=\wedge g^* \otimes \mathcal{O}_X.$$
The ultimate goal of the BRST construction is to provide a "free model" of the algebra of invariants $\mathcal{O}_X^g$; it is not clear what a "free model" is, but I think the BRST construction is just Tate's procedure of killing cycles for the Chevalley-Eilenberg complex above (Tate's construction works for any dg algebra, and $C$ is a dg algebra).
My first question is: what exactly is the cohomology of the complex $C$? In other words, before killing cohomology I'd like to understand what exactly has to be killed. To me it looks like a classical question on Lie algebra cohomology and, perhaps, it was discussed in the literature 60 years ago.
It is not necessary to calculate these cohomology groups and then follow Tate's approach to construct the complete BRST complex (complete means I added anti-ghosts and Lagrange multipliers to $C$ and modified the differential), but even if I start with the BRST complex$$C_{BRST}=(\mathcal{O}_X \otimes \wedge (g \oplus g^*) \otimes S(g), d_{BRST}=d_{CE}+d_1),$$where could I find a proof that all higher cohomology vanishes? This post imported from StackExchange MathOverflow at 2014-08-24 09:17 (UCT), posted by SE-user Sasha Pavlov
ISSN: 1078-0947
eISSN: 1553-5231
Discrete & Continuous Dynamical Systems - A
January 2014, Volume 34, Issue 1
Special issue on Infinite Dimensional Dynamics and Applications
Abstract:
The theory of infinite dimensional and stochastic dynamical systems is a rapidly expanding and vibrant field of mathematics. In the recent three decades it has been highlighted as a core knowledge and an advancing thrust in the qualitative study of complex systems and processes described by evolutionary partial differential equations in many different settings, stochastic differential equations, functional differential equations and lattice differential equations. The central research topics include the invariant and attracting sets, stability and bifurcation of patterns and waves, asymptotic theory of dissipative systems and reduction of dimensions, and more and more problems of nonlocal systems, ill-posed systems, multicomponent and network dynamics, random dynamics and chaotic dynamics.
For more information please click the “Full Text” above
Abstract:
We establish the existence of a global invariant manifold of bubble states for the mass-conserving Allen-Cahn Equation in two space dimensions and give the dynamics for the center of the bubble.
Abstract:
In this paper statistical solutions of the 3D Navier-Stokes-$\alpha$ model with periodic boundary condition are considered. It is proved that under certain natural conditions statistical solutions of the 3D Navier-Stokes-$\alpha$ model converge to statistical solutions of the exact 3D Navier-Stokes equations as $\alpha$ goes to zero. The statistical solutions considered here arise as families of time-projections of measures on suitable trajectory spaces.
Abstract:
In this paper we first prove a rather general theorem about existence of solutions for an abstract differential equation in a Banach space by assuming that the nonlinear term is in some sense weakly continuous.
We then apply this result to a lattice dynamical system with delay, proving also the existence of a global compact attractor for such system.
Abstract:
This article is devoted to the existence and uniqueness of pathwise solutions to stochastic evolution equations, driven by a Hölder continuous function with Hölder exponent in $(1/2,1)$, and with nontrivial multiplicative noise. As a particular situation, we shall consider the case where the equation is driven by a fractional Brownian motion $B^H$ with Hurst parameter $H>1/2$. In contrast to the article by Maslowski and Nualart [17], we present here an existence and uniqueness result in the space of Hölder continuous functions with values in a Hilbert space $V$. If the initial condition is in the latter space this forces us to consider solutions in a different space, which is a generalization of the Hölder continuous functions. That space of functions is appropriate to introduce a non-autonomous dynamical system generated by the corresponding solution to the equation. In fact, when choosing $B^H$ as the driving process, we shall prove that the dynamical system will turn out to be a random dynamical system, defined over the ergodic metric dynamical system generated by the infinite dimensional fractional Brownian motion.
Abstract:
We introduce and analyze a prototype model for chemotactic effects in biofilm formation. The model is a system of quasilinear parabolic equations into which two thresholds are built in. One occurs at zero cell density level, the second one is related to the maximal density which the cells cannot exceed. Accordingly, both diffusion and taxis terms have degenerate or singular parts. This model extends a previously introduced degenerate biofilm model by combining it with a chemotaxis equation. We give conditions for existence and uniqueness of weak solutions and illustrate the model behavior in numerical simulations.
Abstract:
We consider the compressible Navier-Stokes system coupled with the Maxwell equations governing the time evolution of the magnetic field. We introduce a relative entropy functional along with the related concept of dissipative solution. As an application of the theory, we show that for small values of the Mach number and large Reynolds number, the global in time weak (dissipative) solutions converge to the ideal MHD system describing the motion of an incompressible, inviscid, and electrically conducting fluid. The proof is based on frequency localized Strichartz estimates for the Neumann Laplacean on unbounded domains.
Abstract:
Here we consider the nonlocal Cahn-Hilliard equation with constant mobility in a bounded domain. We prove that the associated dynamical system has an exponential attractor, provided that the potential is regular. In order to do that a crucial step is showing the eventual boundedness of the order parameter uniformly with respect to the initial datum. This is obtained through an Alikakos-Moser type argument. We establish a similar result for the viscous nonlocal Cahn-Hilliard equation with singular (e.g., logarithmic) potential. In this case the validity of the so-called separation property is crucial. We also discuss the convergence of a solution to a single stationary state. The separation property in the nonviscous case is known to hold when the mobility degenerates at the pure phases in a proper way and the potential is of logarithmic type. Thus, the existence of an exponential attractor can be proven in this case as well.
Abstract:
In this paper we strengthen some results on the existence and properties of pullback attractors for a non-autonomous 2D Navier-Stokes model with infinite delay. Actually we prove that under suitable assumptions, and thanks to regularity results, the attraction also happens in the $H^1$ norm for arbitrarily large finite intervals of time. Indeed, from comparison results of attractors we establish that all these families of attractors are in fact the same object. The tempered character of these families in $H^1$ is also analyzed.
Abstract:
This paper treats the existence of pullback attractors for the non-autonomous 2D Navier--Stokes equations in two different spaces, namely $L^2$ and $H^1$. The non-autonomous forcing term is taken in $L^2_{\rm loc}(\mathbb R;H^{-1})$ and $L^2_{\rm loc}(\mathbb R;L^2)$ respectively for these two results: even in the autonomous case it is not straightforward to show the required asymptotic compactness of the flow with this regularity of the forcing term. Here we prove the asymptotic compactness of the corresponding processes by verifying the flattening property -- also known as ``Condition (C)". We also show, using the semigroup method, that a little additional regularity -- $f\in L^p_{\rm loc}(\mathbb R;H^{-1})$ or $f\in L^p_{\rm loc}(\mathbb R;L^2)$ for some $p>2$ -- is enough to ensure the existence of a compact pullback absorbing family (not only asymptotic compactness). Even in the autonomous case the existence of a compact absorbing set for this model is new when $f$ has such limited regularity.
Abstract:
This paper studies the asymptotic behavior of solutions for the non-autonomous lattice Selkov model. We prove the existence of a uniform attractor for the generated family of processes and obtain an upper bound of the Kolmogorov $\varepsilon$-entropy for it. Also we establish the upper semicontinuity of the uniform attractor when the infinite lattice systems are approximated by finite lattice systems.
Abstract:
Frequency domain conditions for the existence of finite-dimensional projectors and determining observations for the set of amenable solutions of semi-dynamical systems in Hilbert spaces are derived. Evolutionary variational equations are considered as control systems in a rigged Hilbert space structure. As an example we investigate a coupled system of Maxwell's equations and the heat equation in one-space dimension. We show the controllability of the linear part and the frequency domain conditions for this example.
Abstract:
This paper is concerned with the asymptotic behavior of solutions of the damped non-autonomous stochastic wave equations driven by multiplicative white noise. We prove the existence of pullback random attractors in $H^1(\mathbb{R}^n) \times L^2(\mathbb{R}^n)$ when the intensity of noise is sufficiently small. We demonstrate that these random attractors are periodic in time if so are the deterministic non-autonomous external terms. We also establish the upper semicontinuity of random attractors when the intensity of noise approaches zero. In addition, we prove the measurability of random attractors even if the underlying probability space is not complete.
Abstract:
For a typical stochastic reversible reaction-diffusion system with multiplicative white noise, the trimolecular autocatalytic Gray-Scott system on a three-dimensional bounded domain with random noise perturbation proportional to the state of the system, the existence of a random attractor and its robustness with respect to the reverse reaction rates are proved through sharp and uniform estimates showing the pullback uniform dissipation and the pullback asymptotic compactness.
In his celebrated paper "Conjugate Coding" (written around 1970), Stephen Wiesner proposed a scheme for quantum money that is unconditionally impossible to counterfeit, assuming that the issuing bank has access to a giant table of random numbers, and that banknotes can be brought back to the bank for verification. In Wiesner's scheme, each banknote consists of a classical "serial number" $s$, together with a quantum money state $|\psi_s\rangle$ consisting of $n$ unentangled qubits, each one either
$$|0\rangle,\ |1\rangle,\ |+\rangle=(|0\rangle+|1\rangle)/\sqrt{2},\ \text{or}\ |-\rangle=(|0\rangle-|1\rangle)/\sqrt{2}.$$
The bank remembers a classical description of $|\psi_s\rangle$ for every $s$. And therefore, when $|\psi_s\rangle$ is brought back to the bank for verification, the bank can measure each qubit of $|\psi_s\rangle$ in the correct basis (either $\{|0\rangle,|1\rangle\}$ or $\{|+\rangle,|-\rangle\}$), and check that it gets the correct outcomes.
On the other hand, because of the uncertainty relation (or alternatively, the No-Cloning Theorem), it's "intuitively obvious" that, if a counterfeiter who
doesn't know the correct bases tries to copy $|\psi_s\rangle$, then the probability that both of the counterfeiter's output states pass the bank's verification test can be at most $c^n$, for some constant $c<1$. Furthermore, this should be true regardless of what strategy the counterfeiter uses, consistent with quantum mechanics (e.g., even if the counterfeiter uses fancy entangled measurements on $|\psi_s\rangle$).
However, while writing a paper about other quantum money schemes, my coauthor and I realized that we'd never seen a rigorous proof of the above claim anywhere, or an explicit upper bound on $c$: neither in Wiesner's original paper nor in any later one.
So,
has such a proof (with an upper bound on $c$) been published? If not, then can one derive such a proof in a more-or-less straightforward way from (say) approximate versions of the No-Cloning Theorem, or results about the security of the BB84 quantum key distribution scheme?
Update: In light of the discussion with Joe Fitzsimons below, I should clarify that I'm looking for more than just a reduction from the security of BB84. Rather, I'm looking for an explicit upper bound on the probability of successful counterfeiting (i.e., on $c$)---and ideally, also some understanding of what the optimal counterfeiting strategy looks like. I.e., does the optimal strategy simply measure each qubit of $|\psi_s\rangle$ independently, say in the basis
$$\{ \cos(\pi/8)|0\rangle+\sin(\pi/8)|1\rangle, \sin(\pi/8)|0\rangle-\cos(\pi/8)|1\rangle \}?$$
Or is there an entangled counterfeiting strategy that does better?
Update 2: Right now, the best counterfeiting strategies that I know are (a) the strategy above, and (b) the strategy that simply measures each qubit in the $\{|0\rangle,|1\rangle\}$ basis and "hopes for the best." Interestingly, both of these strategies turn out to achieve a success probability of $(5/8)^n$. So, my conjecture of the moment is that $(5/8)^n$ might be the right answer. In any case, the fact that $5/8$ is a lower bound on $c$ rules out any security argument for Wiesner's scheme that's "too" simple (for example, any argument to the effect that there's nothing nontrivial that a counterfeiter can do, and therefore the right answer is $c=1/2$).
This post has been migrated from (A51.SE)
Update 3: Nope, the right answer is $(3/4)^n$! See the discussion thread below Abel Molina's answer.
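For what it's worth, here is a small numerical check of the $(5/8)^n$ figure for the two per-qubit strategies from Update 2 (a sketch with numpy; "measure-and-resend" here means the counterfeiter outputs two copies of the observed basis state, which is my reading of those strategies):

```python
import numpy as np

s = 1 / np.sqrt(2)
# the four Wiesner/BB84 single-qubit money states
states = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([s, s]), np.array([s, -s])]

def both_pass(basis, psi):
    """P(both resent copies pass the bank's test on one qubit) when the
    counterfeiter measures psi in `basis` and resends two copies of the
    observed basis state: sum_b |<b|psi>|^2 * (|<b|psi>|^2)^2."""
    return sum(abs(np.dot(b, psi)) ** 6 for b in basis)

computational = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
c8, s8 = np.cos(np.pi / 8), np.sin(np.pi / 8)
breidbart = [np.array([c8, s8]), np.array([s8, -c8])]

for name, basis in [("computational", computational), ("pi/8 basis", breidbart)]:
    avg = np.mean([both_pass(basis, psi) for psi in states])
    print(name, avg)    # both average to 0.625 = 5/8 per qubit
```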
Find the points $(x,y)\in \mathbb R^2$ and unit vectors $\vec u$ such that the directional derivative of $f(x,y)=3x^2+y$ has the maximum value if $(x,y)$ is in the circle $x^2+y^2=1$
My attempt:
I know that the directional derivative is $D_{\vec u}f=\nabla f\cdot \vec u=6xu_1+u_2$, which does not depend on $y$. If $u_1$ is positive then $D_{\vec u}f$ attains its maximum at $x=1$, and if $u_1$ is negative then $D_{\vec u}f$ attains its maximum at $x=-1$.
Then we have two cases: 1) $6u_1+u_2$ and 2) $-6u_1+u_2$, and I need to find the points $(u_1,u_2)$ such that $u_1^2+u_2^2=1$.
Using Lagrange multipliers I found that $u_1=u_2={\pm {1\over \sqrt{2}}}$ hence $(x,y)=(\pm 1,0)$ and $(u_1,u_2)=(\pm {1\over \sqrt{2}},\pm{1\over \sqrt{2}})$
Is this approach correct, or am I missing something?
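A brute-force numerical cross-check of the same maximization (a sketch; it only probes the problem on a grid and does not by itself settle the Lagrange-multiplier step):

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 1441)        # angle parameterizing (x, y) on the unit circle
s = np.linspace(0, 2 * np.pi, 1441)        # angle parameterizing the unit direction u
X  = np.cos(t)[:, None]
U1 = np.cos(s)[None, :]
U2 = np.sin(s)[None, :]

D = 6 * X * U1 + U2                        # D_u f = grad f . u with grad f = (6x, 1); y drops out
i, j = np.unravel_index(np.argmax(D), D.shape)
print(D[i, j], (np.cos(t[i]), np.sin(t[i])), (np.cos(s[j]), np.sin(s[j])))
# the grid maximum approaches sqrt(37) ~ 6.08, attained near x = +-1 with u proportional to (+-6, 1)
```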
Wallace Neutrality: It all depends on why, exactly, money is valuable
Matthew Martin, 8/11/2014 10:30:00 AM
It is typically assumed in practice that a currency's monetary authority has the ability to set interest rates, and that it does so primarily by manipulating the supply of money in open-ended operations to achieve its interest target--a process known as "open market operations" (OMO). Most monetary DSGE models include a Taylor Rule that actually skips over OMO and assumes that the monetary authority sets the interest rate outright, without ever bothering to compute the time-path of money supply required. However, Smith points to a paper by Neil Wallace called "A Modigliani-Miller Theorem for Open-Market Operations" which defines a principle of "Wallace neutrality" which says that the standard treatment is all wrong--OMO cannot affect either inflation or interest rates at all! That's quite a claim--no matter how much money the Fed "prints" (via OMO), it will never cause inflation. As for the well-established empirical relationship between inflation and money supply increases, Wallace says this:
"Most economists are aware of considerable evidence showing that the price level and the amount of money are closely related. That evidence, though, does not imply that the irrelevance proposition [Wallace neutrality] is inapplicable to actual economies. The irrelevance proposition applies to asset exchanges [OMO] under some conditions. Most of the historical variation in money supplies has not come about by way of asset exchanges; gold discoveries, banking panics, and government deficits and surpluses account for much of it."Basically, he's arguing that the observed correlation between prices and money may not apply to the measured changes in money supply brought about by OMO. It's quite a claim.
Let me offer a model. There's an infinitely lived household that derives utility from consumption [$]C_t[$] and real money balances [$]\frac{M_t^h}{p_t}[$] where [$]M_t^h[$] is the household's nominal bank account balance at the beginning of period [$]t[$], and [$]p_t[$] is the price of a unit of consumption in period [$]t[$]. The utility function is given by [$$]\sum_{t=0}^\infty\beta^t\left[\ln\left(C_t\right)+\ln\left(\frac{M_t^h}{p_t}\right)\right].[$$] Further, the household is endowed in each period with an income of [$]y_t[$] units of the consumption good, and must pay [$]\tau_t[$] units in taxes to the fiscal authority. In addition to saving money [$]M_{t+1}^h[$] for next period, the household can invest in bonds [$]B_{t+1}[$] that will yield gross real interest of [$]R_{t+1}[$] in period [$]t+1[$], giving us the budget constraint [$$]C_t+B_{t+1}+\frac{M_{t+1}}{p_t}\leq y_t+R_tB_t-\tau_t.[$$] We assume (as Wallace does) that households have perfect foresight of all other agents' actions. Solving the household's constrained maximization problem yields [$$]C_{t+1}=\beta R_{t+1} C_t[$$] which describes the inter-temporal consumption tradeoff for a given interest rate, as well as [$$]\frac{1}{\beta}\left(\frac{p_{t+1}}{p_t}+\frac{1}{R_{t+1}}\right)\frac{M_{t+1}^h}{p_{t+1}}+B_{t+1}^h+\frac{M_{t+1}^h-M_t^h}{p_t}-R_tB_t^h +\tau_t=y_t[$$] which can be thought of as describing the conditions under which the household is indifferent between money, bonds, and consumption, a necessary condition for an optimum.
In addition to the household there are two other agents: a monetary authority (aka the Federal Reserve) and a fiscal authority (aka Congress). Congress sets government consumption levels [$]G_t[$], measured in units of the consumption good, issues bonds [$]B_{t+1}[$] that must be repaid at gross real interest [$]R_{t+1}[$], all financed by lump-sum taxation of [$]\tau_t[$] units of the household's income. Thus the Congress is bound by the budget constraint [$$]G_t+B_{t+1}=R_tB_t+\tau_t.[$$] The Fed engages in OMO by buying bonds financed by increasing the money supply, according to the budget constraint [$$]B_{t+1}^f-R_tB_t^f=\frac{M_{t+1}-M_t}{p_t}.[$$]
We define an equilibrium as a sequence of prices [$]\left\{p_t,R_t\right\}_{t=0}^\infty[$] such that the fiscal authority's budget constraint is satisfied, the monetary authority's constraint is satisfied, household utility is maximized, and [$]B_{t+1}^h+B_{t+1}^f=B_{t+1},[$] (bond market clearing condition) [$]M_{t+1}^h=M_{t+1},[$] (money market clearing condition) and [$]C_t+G_t=y_t[$] (aggregate resource constraint) for all periods. Plugging the fiscal and monetary authorities' budget constraints, along with the bond and money market clearing conditions, into the solution to the household problem yields [$$]\frac{1}{\beta}\left(\frac{p_{t+1}}{p_t}+\frac{1}{R_{t+1}}\right)\frac{M_{t+1}}{p_{t+1}}=y_t-G_t.[$$] This is Robert Barro's famous "Ricardian Equivalence" result--holding expenditures constant (and assuming perfect foresight and only lump-sum taxation), government debt has absolutely no effects whatsoever on equilibrium--it is as if government spending entered directly into households' budget constraints. Recall that [$$]C_{t+1}=\beta R_{t+1} C_t[$$] which combined with the aggregate resource constraint implies [$$]R_{t+1}=\frac{y_{t+1}-G_{t+1}}{\beta\left(y_t-G_t\right)}[$$] and further combining that with the above result yields [$$]\pi_{t+1}=\frac{\frac{y_t-G_t}{y_{t+1}-G_{t+1}}\left(Q-\frac{M_t}{p_t}\right)}{y_t-G_t-\frac{1}{\beta}\left(Q+\frac{M_t}{p_t}\right)}[$$] where [$]Q\equiv \frac{M_{t+1}-M_t}{p_t}[$] was the Fed's bond purchases, or OMO, and the left hand side, [$]\pi_{t+1}\equiv\frac{p_{t+1}-p_t}{p_t}[$], is the inflation rate. I think there's a way to make this result a little prettier, but no matter, in this form it tells us what we wanted to find out: the Fed's bond purchases--[$]Q[$] in the equation--clearly and unambiguously cause inflation, a direct contradiction of Wallace neutrality.
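Purely as an illustration, here is a sketch that evaluates the last display above at some made-up parameter values (constant [$]y_t-G_t[$], small real balances, [$]\beta=0.96[$]; none of these numbers are calibrated to anything):

```python
beta = 0.96            # discount factor (illustrative)
y_minus_G = 1.0        # y_t - G_t, held constant across periods (illustrative)
real_balances = 0.1    # M_t / p_t (illustrative)

def inflation(Q):
    """pi_{t+1} from the closed-form expression above, taken at face value."""
    num = (y_minus_G / y_minus_G) * (Q - real_balances)   # the ratio is 1 here since y - G is constant
    den = y_minus_G - (1.0 / beta) * (Q + real_balances)
    return num / den

for Q in [0.0, 0.05, 0.10, 0.20]:
    print(Q, round(inflation(Q), 4))   # inflation rises with the size of the OMO purchase Q
```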
So did I just disprove Neil Wallace and embarrass the American Economic Review with a first-year grad student homework problem?
It all boils down to differing views of why money has value embedded in Wallace's and my models. It's a surprisingly difficult question for monetary economists. We all know, intuitively, why money is valuable--you need it to buy stuff and pay taxes--and microeconomics professors will give you hours-long lectures on the double-coincidence of wants and what a marvelous innovation the idea of liquidity represents. But none of those reasons answer the fundamental question of why people
hold money, which is different than using it for transactions. Why not, for example, put all your savings into bonds until you want to buy stuff, then simultaneously sell the bonds and buy the stuff such that your cash account never rises above zero? After all, so long as interest rates are above zero, you actually lose money by holding it!
The model above is a standard money-in-utility (MIU) model at the core of most monetary New Keynesian models. The basic assumption needed to get MIU models is this: people
want the flexibility that comes with holding your assets in a highly liquid form. The standard alternative, and the one that Milton Friedman believed in more or less his whole life (but did not fare well in the Great Recession), is the cash-in-advance model, which says that not only do you need to have money to buy things, but for some unstated reason you must actually possess that money in liquid form for a period of time prior to making your purchase. Neither of these models exhibits Wallace neutrality, nor anything close to resembling it.
In Wallace's model, money is neither desired for its liquidity nor even necessary to make purchases. Money has no actual function in Wallace's economy. So why the heck would agents in his model want to buy and sell money? That's a very good question.
Wallace's model is what's known as an overlapping-generations model (OLG). In these models, unlike the model above, households live brief lives and die. Younger generations want to save up to insure their future incomes, but older generations want to dissave--to sell off whatever assets they have and consume as much as possible before they die. There are two ways to save, in general: either invest in some type of durable good, or lend money to a borrower. If there's no durable good to invest in, you have to find credit-worthy borrowers if you want to save. Unfortunately, the old generation (the only people who might want to borrow) is not credit-worthy--they are about to die soon, and therefore have no future incomes with which to repay. Because of this problem, OLG models are actually quite prone to inefficiency. One way to reduce that inefficiency is to introduce a fictitious durable "good," like money, that can be traded
as if it was a storage technology for perishable consumption goods. In this way the young can purchase money from either the old or the Fed, and when they become old, sell that money in exchange for consumption (i.e., buy stuff). In this way, money acts, not as a medium of exchange, but as a line of credit to people who would otherwise not be credit-worthy--in an OLG economy, money facilitates loans from the young to the old, benefiting both by increasing the latter's consumption and insuring the former's future consumption. Unlike an actual loan which dies when the borrower dies, money is a durable good and will outlive its owner. However, in Wallace's model, there actually is an alternative durable good the young could invest in, so that when the Fed engages in OMO, it is actually just buying durable goods that the young would have invested in, and selling a virtually identical durable good--money--in its stead, in an economy where money itself is neither desirable nor necessary. No wonder this has no effects!
So it is in situations where money exists only to facilitate intergenerational transfers and nearly identical alternative durable goods exist, where the money supply is expanded specifically by swapping between the two, and only in these kinds of situations, that economies exhibit Wallace neutrality. I do not deny that Wallace's model to some extent does resemble reality--we really do use money as a type of intergenerational loan--but this exists alongside other reasons to hold money, such as the liquidity preference of MIU models. That alone ensures that the real world is not Wallace neutral. Wallace neutrality is significantly less robust in theory than Ricardian Equivalence or its older brother, Miller-Modigliani. This is the point that neither Smith nor anyone else seems to have made.
But then, lots of other people have made good points too--basically, all of the same critiques of Ricardian Equivalence also apply to Wallace neutrality, which requires that individuals have perfect information, non-distortionary taxation, etc.
A full proof (based on superconcentrators) can be found in chapter 24, "The pebble game," of the book:
Uwe Schöning and Randall Pruim, Gems of Theoretical Computer Science, Springer, 1998, ISBN 978-3-642-64352-1, https://link.springer.com/book/10.1007%2F978-3-642-60322-8
Two answers that I learnt while writing a blog post about this question. No: In black-box variants, quantum query/communication complexity offer the Grover quadratic speedup, but not more than that. As Ron points out, this extends to computational complexity under plausible assumptions. Maybe: Nash equilibrium is arguably the flagship problem of "Christos ...
Not sure whether I am missing something, but... The $\Omega(n/\log n)$ lower bound is from: [PTC77] Wolfgang J. Paul, Robert Endre Tarjan, and James R. Celoni. Space bounds for a game on graphs. Mathematical Systems Theory, 10:239–251, 1977. There is a strengthening of this to a non-deterministic version of the pebble game (so-called black-white pebbling) in: ...
From this question on cs.stackexchange, I became aware of the genus hierarchy of regular languages. Essentially, you can characterize regular languages based on the minimum genus surface in which the graph of their DFA may be embedded. It is shown in [1] that there exist languages of arbitrarily large genus and that this hierarchy is proper. Bonfante, ...
In regards to (2), conditional super-linear lower bounds are known. A recent preprint by Afshani, Freksen, Kamma, and Larsen proves an $\Omega(n \log n)$ lower bound for the size of Boolean circuits computing integer multiplication, assuming a certain conjecture on network coding in undirected graphs. (See also this blog post and a follow-up post.) From the ...
Theorem: The special case of 1-in-3-SAT where each variable appears an even number of times is NP-hard. Proof: Consider an instance $I$ of 1-in-3-SAT, and let $a_1,\ldots,a_n$ be an enumeration of the variables in $I$. Assume that variables $a_1,\ldots,a_m$ occur an odd number of times, whereas $a_{m+1},\ldots,a_n$ occur an even number of times....
I will attempt to elaborate a bit on why CHKPRR shows that $\mathsf{PPAD}$ is plausibly hard for quantum computers. At a high level, CHKPRR builds a distribution over end-of-line instances where finding a solution requires one to either: break the soundness of the proof system obtained by applying the Fiat-Shamir heuristic to the famous sumcheck protocol, or ...
In the approximation algorithms side, there is an $(2-2/\Delta)$-approximation algorithm by Hochbaum where $\Delta$ is the maximum degree of the graph. This translates to a 1.33-approximation algorithm for cubic graphs. It seems like there hasn't been any improvement over this.
Actually, injective reductions are useful in cryptography. Suppose you have a ZK proof system for an NP relation R over the language L. If you want to build a ZK proof for another NP relation R' over a language L', you have to find two functions f and g with the following properties:
1. x belongs to L' iff f(x) belongs to L,
2. If (x,w) belongs to R' then (...
ISSN: 1078-0947
eISSN: 1553-5231
Discrete & Continuous Dynamical Systems - A
April 2014, Volume 34, Issue 4
Special Issue on Optimal Transport and Applications
Abstract:
Optimal mass transportation can be traced back to Gaspard Monge's paper in 1781. There, for engineering/military reasons, he was studying how to minimize the cost of transporting a given distribution of mass from one location to another, giving rise to a challenging mathematical problem. This problem, an optimization problem in a certain class of maps, had to wait for almost two centuries before seeing significant progress (starting with Leonid Kantorovich in 1942), even on the very fundamental question of the existence of an optimal map. Due to these connections with several other areas of pure and applied mathematics, optimal transportation has received much renewed attention in the last twenty years. Indeed, it has become an increasingly common and powerful tool at the interface between partial differential equations, fluid mechanics, geometry, probability theory, and functional analysis. At the same time, it has led to significant developments in applied mathematics, with applications ranging from economics, biology, meteorology, design, to image processing. Because of the success and impact that this subject is still receiving, we decided to create a special issue collecting selected papers from leading experts in the area.
For more information please click the “Full Text” above.
Abstract:
Exploiting recent regularity estimates for the Monge-Ampère equation, under some suitable assumptions on the initial data we prove global-in-time existence of Eulerian distributional solutions to the semigeostrophic equations in 3-dimensional convex domains.
Abstract:
In this paper, we generalize a result by Alexandrov on the Gauss curvature prescription for Euclidean convex bodies. We prove an analogous result for hyperbolic orbifolds. In addition to the duality theory for convex sets, our main tool comes from optimal mass transport.
Abstract:
We consider the very simple Navier-Stokes model for compressible fluids in one space dimension, where there is no temperature equation and both the pressure and the viscosity are proportional to the density. We show that the resolution of this Navier-Stokes system can be reduced, through the crucial intervention of a monotonic rearrangement operator, to the time discretization of a very elementary differential equation with noise. In addition, our result can be easily extended to a related Navier-Stokes-Poisson system.
Abstract:
In the paper a model problem for the location of a given number $N$ of points in a given region $\Omega$ and with a given resources density $\rho(x)$ is considered. The main difference between the usual location problems and the present one is that in addition to the location cost an extra
routing cost is considered, that takes into account the fact that the resources have to travel between the locations on a point-to-point basis. The limit problem as $N\to\infty$ is characterized and some applications to airfreight systems are shown.
Abstract:
We prove uniqueness in the class of integrable and bounded nonnegative solutions in the energy sense to the Keller-Segel (KS) chemotaxis system. Our proof works for the fully parabolic KS model, it includes the classical parabolic-elliptic KS equation as a particular case, and it can be generalized to nonlinear diffusions in the particle density equation as long as the diffusion satisfies the classical McCann displacement convexity condition. The strategy uses Quasi-Lipschitz estimates for the chemoattractant equation and the above-the-tangent characterizations of displacement convexity. As a consequence, the displacement convexity of the free energy functional associated to the KS system is obtained from its evolution for bounded integrable initial data.
Abstract:
A usual approach for proving the existence of an optimal transport map, be it in ${\mathbb R}^d$ or on more general manifolds, involves a regularity condition on the transport cost (the so-called Left Twist condition, i.e. the invertibility of the gradient in the first variable) as well as the fact that any optimal transport plan is supported on a cyclically-monotone set. Under the classical assumption that the initial measure does not give mass to sets with $\sigma$-finite $\mathcal{H}^{d-1}$ measure and a stronger regularity condition on the cost (the Strong Left Twist), we provide a short and self-contained proof of the fact that any feasible transport plan (optimal or not) satisfying a $c$-monotonicity assumption is induced by a transport map. We also show that the usual costs induced by Tonelli Lagrangians satisfy the Strong Left Twist condition we propose.
Abstract:
We consider discrete porous medium equations of the form $\partial_t\rho_t = \Delta \phi(\rho_t)$, where $\Delta$ is the generator of a reversible continuous time Markov chain on a finite set $\boldsymbol{\chi} $, and $\phi$ is an increasing function. We show that these equations arise as gradient flows of certain entropy functionals with respect to suitable non-local transportation metrics. This may be seen as a discrete analogue of the Wasserstein gradient flow structure for porous medium equations in $\mathbb{R}^n$ discovered by Otto. We present a one-dimensional counterexample to geodesic convexity and discuss Gromov-Hausdorff convergence to the Wasserstein metric.
Abstract:
Some quantum fluid models are written as the Lagrangian flow of mass distributions and their geometric properties are explored. The first model includes magnetic effects and leads, via the Madelung transform, to the electromagnetic Schrödinger equation in the Madelung representation. It is shown that the Madelung transform is a symplectic map between Hamiltonian systems. The second model is obtained from the Euler-Lagrange equations with friction induced from a quadratic dissipative potential. This model corresponds to the quantum Navier-Stokes equations with density-dependent viscosity. The fact that this model possesses two different energy-dissipation identities is explained by the definition of the Noether currents.
Abstract:
We present an approach for proving uniqueness of ODEs in the Wasserstein space. We give an overview of basic tools needed to deal with Hamiltonian ODE in the Wasserstein space and show various continuity results for value functions. We discuss a concept of viscosity solutions of Hamilton-Jacobi equations in metric spaces and in some cases relate it to viscosity solutions in the sense of differentials in the Wasserstein space.
Abstract:
We prove that every one-dimensional real Ambrosio-Kirchheim current with zero boundary (i.e. a cycle) in a lot of reasonable spaces (including all finite-dimensional normed spaces) can be represented by a Lipschitz curve parameterized over the real line through a suitable limit of Cesàro means of this curve over a subsequence of symmetric bounded intervals (viewed as currents). It is further shown that in such spaces, if a cycle is indecomposable, i.e. does not contain ``nontrivial'' subcycles, then it can be represented again by a Lipschitz curve parameterized over the real line through a limit of Cesàro means of this curve over every sequence of symmetric bounded intervals, that is, in other words, such a cycle is a solenoid.
Abstract:
Symmetric Monge-Kantorovich transport problems involving a cost function given by a family of vector fields were used by Ghoussoub-Moameni to establish polar decompositions of such vector fields into $m$-cyclically monotone maps composed with measure preserving $m$-involutions ($m\geq 2$). In this note, we relate these symmetric transport problems to the Brenier solutions of the Monge and Monge-Kantorovich problem, as well as to the Gangbo-Święch solutions of their multi-marginal counterparts, both of which involving quadratic cost functions.
Abstract:
We prove that the Abresch-Gromoll inequality holds on infinitesimally Hilbertian $CD(K,N)$ spaces in the same form as the one available on smooth Riemannian manifolds.
Abstract:
We study the optimal transportation mapping $\nabla \Phi : \mathbb{R}^d \mapsto \mathbb{R}^d$ pushing forward a probability measure $\mu = e^{-V} \ dx$ onto another probability measure $\nu = e^{-W} \ dx$. Following a classical approach of E. Calabi we introduce the Riemannian metric $g = D^2 \Phi$ on $\mathbb{R}^d$ and study spectral properties of the metric-measure space $M=(\mathbb{R}^d, g, \mu)$. We prove, in particular, that $M$ admits a non-negative Bakry--Émery tensor provided both $V$ and $W$ are convex. If the target measure $\nu$ is the Lebesgue measure on a convex set $\Omega$ and $\mu$ is log-concave we prove that $M$ is a $CD(K,N)$ space. Applications of these results include some global dimension-free a priori estimates of $\| D^2 \Phi\|$. With the help of comparison techniques on Riemannian manifolds and probabilistic concentration arguments we prove some diameter estimates for $M$.
Abstract:
This article is aimed at presenting the Schrödinger problem and some of its connections with optimal transport. We hope that it can be used as a basic user's guide to Schrödinger problem. We also give a survey of the related literature. In addition, some new results are proved.
Abstract:
In order to observe growth phenomena in biology where dendritic shapes appear, we propose a simple model where a given population evolves fed by a diffusing nutriment, but is subject to a density constraint. The particles (e.g., cells) of the population spontaneously stay passive at rest, and only move in order to satisfy the constraint $\rho\leq 1$, by choosing the minimal correction velocity so as to prevent overcongestion. We treat this constraint by means of projections in the space of densities endowed with the Wasserstein distance $W_2$, defined through optimal transport. This allows us to provide an existence result and suggests some numerical computations, in the same spirit of what the authors did for crowd motion (but with extra difficulties, essentially due to the fact that the total mass may increase). The numerical simulations show, according to the values of the parameter and in particular of the diffusion coefficient of the nutriment, the formation of dendritic patterns in the space occupied by cells.
Abstract:
This note exposes the differential topology and geometry underlying some of the basic phenomena of optimal transportation. It surveys basic questions concerning Monge maps and Kantorovich measures: existence and regularity of the former, uniqueness of the latter, and estimates for the dimension of its support, as well as the associated linear programming duality. It shows the answers to these questions concern the differential geometry and topology of the chosen transportation cost. It also establishes new connections --- some heuristic and others rigorous --- based on the properties of the cross-difference of this cost, and its Taylor expansion at the diagonal.
Abstract:
We prove uniqueness and Monge solution results for multi-marginal optimal transportation problems with a certain class of surplus functions; this class arises naturally in multi-agent matching problems in economics. This result generalizes a seminal result of Gangbo and Święch [17]. Of particular interest, we show that this also yields a partial generalization of the Gangbo-Święch result to manifolds; alternatively, we can think of this as a partial extension of McCann's theorem for quadratic costs on manifolds to the multi-marginal setting [23].
We also show that the class of surplus functions considered here neither contains, nor is contained in, the class of surpluses studied in [27], another generalization of Gangbo and Święch's result.
Abstract:
We prove that the linear ``heat'' flow in a $RCD (K, \infty)$ metric measure space $(X, d, m)$ satisfies a contraction property with respect to every $L^p$-Kantorovich-Rubinstein-Wasserstein distance, $p\in [1,\infty]$. In particular, we obtain a precise estimate for the optimal $W_\infty$-coupling between two fundamental solutions in terms of the distance of the initial points.
The result is a consequence of the equivalence between the $RCD (K, \infty)$ lower Ricci bound and the corresponding Bakry-Émery condition for the canonical Cheeger-Dirichlet form in $(X, d, m)$. The crucial tool is the extension to the non-smooth metric measure setting of the Bakry's argument, that allows to improve the commutation estimates between the Markov semigroup and the
Carré du Champ $\Gamma$ associated to the Dirichlet form.
This extension is based on a new a priori estimate and a capacitary argument for regular and tight Dirichlet forms that are of independent interest.
Abstract:
We develop the fundamentals of a local regularity theory for prescribed Jacobian equations which extend the corresponding results for optimal transportation equations. In this theory the cost function is extended to a
generating function through dependence on an additional scalar variable. In particular we recover in this generality the local regularity theory for potentials of Ma, Trudinger and Wang, along with the subsequent development of the underlying convexity theory.
Abstract:
In this paper, we introduce a multiple-sources version of the landscape function which was originally introduced by Santambrogio in [10]. More precisely, we study landscape functions associated with a transport path between two atomic measures of equal mass. We also study p-harmonic functions on a directed graph for nonpositive $p$. We show an equivalence relation between landscape functions associated with an $\alpha $-transport path and $ p$-harmonic functions on the underlying graph of the transport path for $ p=\alpha /(\alpha -1)$, which is the conjugate of $\alpha $. Furthermore, we prove the Lipschitz continuity of a landscape function associated with an optimal transport path on each of its connected components.
Three players are each dealt, in a random manner, five cards from a deck containing 52 cards. Four of the 52 cards are aces. What is the probability that at least one person receives exactly two aces in their five cards?
Let $A_i$ represent the player $i$ with two aces where $i = 1,2,3$. The probability a player receives two aces is the following. $$P(A_i) = \frac{{4 \choose 2}{48 \choose 3}}{{52 \choose 5}} \approx .0399$$
Then the probability at least one person receives exactly two aces is the following. $$3 \cdot .0399 - 3 \cdot .0399^2 \approx .1149$$
Is this correct? |
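A quick Monte Carlo sanity check (a sketch; it estimates the target probability directly rather than checking the inclusion-exclusion step above):

```python
import random

def trial():
    deck = [1] * 4 + [0] * 48                        # 1 marks an ace
    random.shuffle(deck)
    hands = [deck[0:5], deck[5:10], deck[10:15]]     # three 5-card hands
    return any(sum(h) == 2 for h in hands)           # some player holds exactly two aces?

n = 200_000
print(sum(trial() for _ in range(n)) / n)            # compare with the 0.1149 above
```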
Search
Now showing items 1-2 of 2
D-meson nuclear modification factor and elliptic flow measurements in Pb–Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV with ALICE at the LHC
(Elsevier, 2017-11)
ALICE measured the nuclear modification factor ($R_{AA}$) and elliptic flow ($v_{2}$) of D mesons ($D^{0}$, $D^{+}$, $D^{*+}$ and $D_{s}^{+}$) in semi-central Pb–Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV. The increased ...
ALICE measurement of the $J/\psi$ nuclear modification factor at mid-rapidity in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV
(Elsevier, 2017-11)
ALICE at the LHC provides unique capabilities to study charmonium production at low transverse momenta ($p_{\rm T}$). At central rapidity ($|y|<0.8$), ALICE can reconstruct J/$\psi$ via their decay into two electrons down to zero ...
Search
Now showing items 1-10 of 76
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV
(Elsevier, 2017-12-21)
We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...
J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(American Physical Society, 2017-12-15)
We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Highlights of experimental results from ALICE
(Elsevier, 2017-11)
Highlights of recent results from the ALICE collaboration are presented. The collision systems investigated are Pb–Pb, p–Pb, and pp, and results from studies of bulk particle production, azimuthal correlations, open and ...
Event activity-dependence of jet production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV measured with semi-inclusive hadron+jet correlations by ALICE
(Elsevier, 2017-11)
We report measurement of the semi-inclusive distribution of charged-particle jets recoiling from a high transverse momentum ($p_{\rm T}$) hadron trigger, for p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in p-Pb events ...
System-size dependence of the charged-particle pseudorapidity density at $\sqrt {s_{NN}}$ = 5.02 TeV with ALICE
(Elsevier, 2017-11)
We present the charged-particle pseudorapidity density in pp, p–Pb, and Pb–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV over a broad pseudorapidity range. The distributions are determined using the same experimental apparatus and ...
Photoproduction of heavy vector mesons in ultra-peripheral Pb–Pb collisions
(Elsevier, 2017-11)
Ultra-peripheral Pb-Pb collisions, in which the two nuclei pass close to each other, but at an impact parameter greater than the sum of their radii, provide information about the initial state of nuclei. In particular, ...
Measurement of $J/\psi$ production as a function of event multiplicity in pp collisions at $\sqrt{s} = 13\,\mathrm{TeV}$ with ALICE
(Elsevier, 2017-11)
The availability at the LHC of the largest collision energy in pp collisions allows a significant advance in the measurement of $J/\psi$ production as function of event multiplicity. The interesting relative increase ...
Multiplicity dependence of jet-like two-particle correlations in pp collisions at $\sqrt s$ =7 and 13 TeV with ALICE
(Elsevier, 2017-11)
Two-particle correlations in relative azimuthal angle ($\Delta\phi$) and pseudorapidity ($\Delta\eta$) have been used to study heavy-ion collision dynamics, including medium-induced jet modification. Further investigations also showed the ...
Electroweak boson production in p–Pb and Pb–Pb collisions at $\sqrt{s_\mathrm{NN}}=5.02$ TeV with ALICE
(Elsevier, 2017-11)
W and Z bosons are massive weakly-interacting particles, insensitive to the strong interaction. They provide therefore a medium-blind probe of the initial state of the heavy-ion collisions. The final results for the W and ...
Investigating the Role of Coherence Effects on Jet Quenching in Pb-Pb Collisions at $\sqrt{s_{NN}} =2.76$ TeV using Jet Substructure
(Elsevier, 2017-11)
We report measurements of two jet shapes, the ratio of 2-Subjettiness to 1-Subjettiness ($\it{\tau_{2}}/\it{\tau_{1}}$) and the opening angle between the two axes of the 2-Subjettiness jet shape, which is obtained by ... |
No, it is not wrong to write that, you're spot on the mark; therefore, your conclusion is right. Your example has an interesting generalisation beyond $O(3)$ and indeed beyond Lie groups, as the following is true for all topological groups. For a topological group $\mathfrak{G}$, the "identity component" $\mathfrak{G}_\mathrm{id}$ (i.e. the connected component of the group with the identity in it) is always: Clopen (closed and open at once), therefore, by the connectedness argument, the whole group $\mathfrak{G}$ is connected if and only if $\mathfrak{G}_\mathrm{id}$ is all of $\mathfrak{G}$ (equivalently, disconnected if and only if $\mathfrak{G}_\mathrm{id}$ is a proper subset of $\mathfrak{G}$); A normal subgroup of $\mathfrak{G}$, therefore the connected components of $\mathfrak{G}$ are precisely $\mathfrak{G}_\mathrm{id}$ and its cosets.
See the discussion of this situation as Theorem 9.31 on my website here. Mine is a retelling of this slick little proof that I found a long time ago in
Sagle, A. A. and Walde, R. E., “Introduction to Lie Groups and Lie Algebras“, Academic Press, New York, 1973. §3.3
This is widely known, BTW, but it's a little pearl of a proof, showing how powerful the connectedness argument is. I still enjoy reading it, just like I still like listening to "Walk Like an Egyptian" (for me, they come from about the same time of life!)
In your example, $\mathfrak{G}=O(3)$, the identity component is the smallest Lie group containing $\exp(\mathfrak{g})$, where $\mathfrak{g}=\operatorname{Lie}(\mathfrak{G}) = \mathfrak{so}(3)$; indeed, in this compact case, $\mathfrak{G}_\mathrm{id}=\exp(\mathfrak{g})$ is the whole connected component (in noncompact groups,
e.g. $SL(2,\mathbb{C})$, the identity component is strictly bigger than $\exp(\mathfrak{g})$). $\mathfrak{so}(3)$ is of course the Lie algebra of skew-symmetric, real $3\times 3$ matrices and, since $\det(\exp(H)) = \exp(\mathrm{tr}(H))=1$ for any such matrix $H$, we see that $SO(3)$ is the whole of $\mathfrak{G}_\mathrm{id}$. It is a normal subgroup of $O(3)$, and the group of cosets is simply $O(3)/SO(3)\cong\{+1,\,-1\}\cong\mathbb{Z}_2$, which is another way of writing your decomposition.
Answer to edit 1: For manifolds, which are locally Euclidean (locally homeomorphic to $\mathbb{R}^N$) path connectedness and connectedness are the same notion. Moreover, path connectedness
always implies connectedness (to prove this, assume otherwise and let $\alpha,\,\beta\in\mathbb{X}$ belong to separate connected components $\mathbb{U},\,\mathbb{U}^\sim$ linked by path $\sigma:[0,\,1]\to\mathbb{X}$ where $\mathbb{X}$ is the topological space in question and $\mathbb{X}=\mathbb{U}\bigcup\mathbb{U}^\sim$. By assumption (path connectedness), $\sigma$ is continuous when $[0,\,1]$ has its wonted topology. But $\mathbb{U},\,\mathbb{U}^\sim$ are disjoint, therefore so is $\mathbb{V}=\sigma([0,\,1])\bigcap\mathbb{X}$, so the inverse image $\sigma^{-1}(\mathbb{V})$, namely $[0,\,1]$, must also be the union of disjoint open sets, contradicting the known connectedness of $[0,\,1]$). However, not all connected topological spaces are path connected (look up the weird "topologist's Sine Curve" as a counterexample).
Answer to Edit 2. We have actually already done this above, because every matrix in $SO(3)$ can be written as $\exp(H)$, where $H\in\mathfrak{so}(3)$, so you can take your path to be $\sigma:[0,\,1]\to SO(3);\;\sigma(\tau) = e^{\tau\,H}$. To prove that every $SO(3)$ matrix can be written in this way from first principles, simply witness that $\gamma\in SO(3)$ is
normal, i.e. commutes with its Hermitian transpose and thus always has a diagonalisation with orthonormal eigenvectors, so $\exists U\ni\,\gamma=U\,\Lambda\,U^\dagger$, where $\Lambda$ is the diagonal matrix of eigenvalues, none of which are nought ($SO(3)$ is a group). Thus, you can always define $\log\gamma = H = U\,\log\Lambda\,U^\dagger$, and you are done.
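As an illustrative aside (my addition, not part of the original answer), here is a small Python sketch of this construction, assuming numpy and scipy are available: a skew-symmetric generator $H$ is exponentiated, and the path $\sigma(\tau)=e^{\tau H}$ stays inside $SO(3)$.
import numpy as np
from scipy.linalg import expm

theta = 1.0
H = np.array([[0.0, -theta, 0.0],     # a skew-symmetric generator in so(3)
              [theta, 0.0,  0.0],
              [0.0,   0.0,  0.0]])

gamma = expm(H)                        # an element of SO(3)
print(np.allclose(gamma @ gamma.T, np.eye(3)), np.isclose(np.linalg.det(gamma), 1.0))

# the path sigma(tau) = exp(tau H) joins the identity to gamma without leaving SO(3)
for tau in np.linspace(0.0, 1.0, 5):
    R = expm(tau * H)
    print(np.allclose(R @ R.T, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))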
Note that this does not work for $O(3)$: although $H = \log \gamma$ is still definable, $H$ is no longer a real skew-symmetric matrix if $\gamma\not\in SO(3)$, so $H\not\in\mathfrak{so}(3)$ and you can't find such a path through $O(3)$'s charts, because $\mathfrak{so}(3)$ is the Lie algebra of $O(3)$ as well. This post imported from StackExchange Mathematics at 2014-10-05 10:06 (UTC), posted by SE-user WetSavannaAnimal aka Rod Vance |
An advertiser goes to a printer and is charged $44 for 70 copies of one flyer and $62 for 231 copies of another flyer.
The printer charges a fixed setup cost plus a charge for every copy of the flyer.
Find a function that describes the cost of a printing job, if \(n\) is the number of copies made. (You must either use fractions or numbers with 3 decimal places of accuracy.)
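An aside I am adding (not part of the original exercise): the same two conditions can be solved numerically in a couple of lines of Python as a cross-check of the worked solution below; the variable names are made up.
import numpy as np

# cost = fixed + n * per_copy, with the two observed jobs
A = np.array([[1.0, 70.0],
              [1.0, 231.0]])
b = np.array([44.0, 62.0])
fixed, per_copy = np.linalg.solve(A, b)
print(fixed, per_copy)        # about 36.174 and 0.112

def c(n):
    return fixed + per_copy * n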
\(c(n) = \text{fixed cost} + n \cdot \text{per-copy cost}\)
\(44 = \text{fixed cost} + 70 \cdot \text{per-copy cost}\)
\(62 = \text{fixed cost} + 231 \cdot \text{per-copy cost}\)
Subtract the first equation from the second:
\(18 = 161 \cdot \text{per-copy cost}\)
\(\text{per-copy cost} = \dfrac{18}{161} \approx \$0.112/\text{copy}\)
\(\text{fixed cost} = 44 - \dfrac{70\cdot 18}{161} \approx \$36.174\)
\(c(n) = \$(36.174 + 0.112n)\). |
Given: $\prod_{i=1}^n x_i = 1$ leads to constraint function $G(x_1,x_2,...,x_n)=\prod_{i=1}^n x_i-1$
($\prod_{i=1}^n x_i =x_1 x_2...x_n$)
Task is to find the minimum
using conditional extrema of the following (the induction method, which would be most convenient, is forbidden); if we prove this special case then the derivation can be generalised to prove the AM-GM theorem: $F(x_1,x_2,...,x_n) = \sum_{i=1}^n x_i$
($\sum_{i=1}^n x_i =x_1+x_2+...+x_n$)
Idea is that it should be $\sum_{i=1}^n x_i \geqslant n$ afaik
And finally
Using the derivation above prove the AM-GM theorem:
$\frac{\sum_{i=1}^n x_i}{n} \geqslant \sqrt[n]{\prod_{i=1}^n x_i}$
Solution so far:
what I come up with is writing down the Lagrangian:
$L = F(x_1,x_2,...,x_n) - \lambda G(x_1,x_2,...,x_n) \implies L = \sum_{i=1}^n x_i - \lambda (\prod_{i=1}^n x_i-1)$
taking the partial derivatives
$\frac{dL}{dx_1} = 1 - \lambda \frac{\prod_{i=1}^n x_i}{x_1}=0 \implies \lambda \frac{\prod_{i=1}^n x_i}{x_1}=1 \implies \lambda \frac{1}{x_1}=1 \implies \lambda = x_1$
$\frac{dL}{dx_2} = 1 - \lambda \frac{\prod_{i=1}^n x_i}{x_2}=0 \implies \lambda \frac{\prod_{i=1}^n x_i}{x_2}=1 \implies \lambda \frac{1}{x_2}=1\implies \lambda = x_2$
...
$\frac{dL}{dx_n} = 1 - \lambda \frac{\prod_{i=1}^n x_i}{x_n}=0 \implies \lambda \frac{\prod_{i=1}^n x_i}{x_n}=1 \implies \lambda \frac{1}{x_n}=1 \implies \lambda = x_n$
$\frac{dL}{d\lambda} = - \prod_{i=1}^n x_i + 1= 0 \implies$ $\prod_{i=1}^n x_i = 1$
$ \lambda = x_1 = x_2 = ...=x_n = 1$ is our critical point.
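A numerical aside I am adding (not part of the original attempt): the critical point found above can be checked with a constrained optimiser in Python; scipy is assumed available and the names below are mine.
import numpy as np
from scipy.optimize import minimize

n = 5                                            # dimension chosen only for illustration
constraint = {"type": "eq", "fun": lambda x: np.prod(x) - 1.0}
x0 = np.random.uniform(0.5, 2.0, size=n)         # arbitrary positive starting point

res = minimize(lambda x: np.sum(x), x0,
               bounds=[(1e-6, None)] * n,
               constraints=[constraint])
print(res.x)     # approaches (1, 1, ..., 1)
print(res.fun)   # approaches n, consistent with sum(x_i) >= n on the constraint set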
Taking the differential of our constraint
$dG(x_1,x_2,...,x_n) = 0$
$\frac{\partial G}{\partial x_1}\Delta x_1+\frac{\partial G}{\partial x_2}\Delta x_2+...+\frac{\partial G}{\partial x_n}\Delta x_n=0$
$\frac{\prod_{i=1}^n x_i}{x_1}\Delta x_1 + \frac{\prod_{i=1}^n x_i}{x_2}\Delta x_2+...+\frac{\prod_{i=1}^n x_i}{x_n}\Delta x_n=0$
substituting the roots of critical point $x_1=x_2=...=x_n=1$ and $\prod_{i=1}^n x_i=1$ leads to
$\Delta x_1 + \Delta x_2 +...+ \Delta x_n = 0$
First question. update, answered by Andreas below
The second-order differential of the Lagrangian has to be positive, but I'm getting a negative sign
$d^2L = \sum_{j=1}^n\sum_{i=1}^n L_{x_j x_i} \Delta x_j \Delta x_i=-\lambda (\prod x_i )(\frac{\Delta x_i \Delta x_j}{x_i x_j})<0$
Here I took the second order partial derivatives of L
$\frac{d^2 L} {dx_1 dx_1} = -\lambda \frac{\prod x_i}{x_1 x_1}$
$\frac{d^2 L} {dx_1 dx_2} = -\lambda \frac{\prod x_i}{x_1 x_2}$
...
$\frac{d^2 L} {dx_n dx_n} = -\lambda \frac{\prod x_i}{x_n x_n}$
Second question. Are these reasonings correct?
It is needed to justify why the local extremum is global as well.
If the second differential becomes positive and therefore the point (1,1,...,1) is a
local minimum (since the function might be just like a cubic polynomial with no global minimum), then we can check the function's limit along each axis direction (by sending all but one variable to infinity, which forces that one variable's limit to zero) and thus we find out that the function F is located in the positive n-dimensional quadrant and therefore the extreme point has to be the global minimum. I don't know, it's just a feeling; I was thinking of rotating the function 45 degrees towards the vertical axis and then stating that the function goes to infinity in each direction.
Update. Probs solved the global issue part, gonna consult with my Professor and update the solution if my assumptions are right.
Update. Just posted Proof to Wiki: MyProof Idea was to simply use the Weierstrass theorem and apply it to any closed domain interval of function. |
Newform invariants
Coefficients of the \(q\)-expansion are expressed in terms of a basis \(1,\beta_1,\beta_2\) for the coefficient ring described below. We also show the integral \(q\)-expansion of the trace form.
Basis of coefficient ring in terms of a root \(\nu\) of \(x^{3} - x^{2} - 1597028177 x + 23572260890640\):
\(\beta_{0} = 1\)
\(\beta_{1} = ( 256 \nu^{2} + 4304128 \nu - 272560910336 )/319\)
\(\beta_{2} = ( 1491456 \nu^{2} + 44832004608 \nu - 1587946449002496 )/319\)
\(1 = \beta_0\)
\(\nu = (\beta_{2} - 5826 \beta_{1} + 20643840)/61931520\)
\(\nu^{2} = (-16813 \beta_{2} + 175125018 \beta_{1} + 65937588343603200)/61931520\)
For each embedding \(\iota_m\) of the coefficient field, the values \(\iota_m(a_n)\) are shown below.
For more information on an embedded modular form you can click on its label.
This newform does not admit any (nontrivial) inner twists.
\(p = 2\): Sign \(-1\)
There are no other newforms in \(S_{36}^{\mathrm{new}}(\Gamma_0(4))\). |
I encounter the following problem :
I have the equality in distribution:
for all $\lambda >0$: $\left(\frac{1}{\lambda}\int_{0}^{\lambda t}\sigma_{u}^{2}\,du,\ t\geq0\right)=\left(\int_{0}^{t}\sigma_{u}^{2}\,du,\ t\geq0\right)$
where $(\sigma_{t})$ is a predictable process.
Now I don't understand why, when $\lambda\to 0$ and we use the continuity of $(|\sigma_{u}|,u\geq0)$ at $0$, we get $(\int_{0}^{t}\sigma_{u}^{2}du,t\geq0)=(c^{2}t,t\geq0)$ (in distribution).
I try to recognize a derivative but I don't get it... Thank you |
I'm learning Abstract Algebra, specifically cyclic groups, and need help with the following problem:
Let $G$ be an infinite cyclic group and $\{1_{G}\} \neq H \leq G$. Show that $(G:H) < \infty$.
Since I'm new to this subject, I first tried to rephrase the problem with my own words. If I understand it correctly, I need to show that for a non-trivial subgroup $H$ of an infinite cyclic group $G$, the index of $H$ in $G$ is finite.
The only relation I could make with the theory is that since $G$ is an infinite cyclic group, then the subgroup $H$ of $G$ is also cyclic. Also, I know that the index of $H$ in $G$ is closely related to Lagrange's Theorem but I don't know if it is useful here. |
Circle \(\Gamma\) is the incircle of triangle ABC and is also the circumcircle of triangle XYZ. The point X is on \(\overline{BC}\), point Y is on \(\overline{AB}\), and the point Z is on \(\overline{AC}\). If \(\angle A=40^\circ, \angle B=60^\circ\)and \(\angle C=80^\circ\),what is the measure of \(\angle AYX\)?
Assume circle \(\Gamma \) is centered at point \(\Gamma \). Connect \(\Gamma \) to points X and Y.
Angle \(\Gamma \)XB = angle \(\Gamma \)YB = 90 degrees.
Angle X\(\Gamma \)Y = 360 - 60 - 90 - 90 = 120 degrees.
And segment \(\Gamma \)X = segment \(\Gamma \)Y \(\Rightarrow \) angle \(\Gamma \)XY = angle \(\Gamma \)YX = 1/2 (180 degrees - angle X\(\Gamma \)Y) = 1/2 (180 - 120) = 30 degrees.
Angle AYX = angle AY\(\Gamma \) + angle \(\Gamma \)YX = 90 degrees + 30 degrees = 120 degrees.
Edited: to understand my answer, you need to know the properties of the inscribed circle (incircle) and angles. I urge you to draw a picture to better understand my answer. You should check my answer, because it might not be correct.
Fiona's answer is correct
See the diagram :
Since in triangle XBY...angle XBY = 60
And since BX and BY are tangents to the incircle drawn from B....then BX = BY
And the angles opposite these sides are also equal = [ 180 - angle XBY ] / 2 =
[ 180 - 60 ] / 2 = 120 / 2 = 60
So angle BYX = angle BXY = 60
But angle BYX is supplementary to angle AYX, so angle AYX = 180 - 60 = 120 (degrees) |
Chapter 2.2 - Properties of the Top Quark
The complete Chapter 2.2 document is available here.
Figures
Figure 2.1 - $\Delta\phi$ vs. missing $E_{T}$ in the dilepton sample. The small grey dots are the result of a $t\overline{t}$ Monte Carlo simulation with ${\rm M_{top}} = 175$ GeV/c$^{2}$.
Figure 2.2 - The proper time distribution for the $b$-tagged jets in the signal region (W+$\ge$ 3 jets). The open histogram shows the expected distribution of $b$'s from 175 GeV/$c^2$ $t\overline{t}$ Monte Carlo simulation. The shaded histogram indicates the background in W+jet events.
Figure 2.3 - Top Left: The jet multiplicity distribution in SVX tagged W+jet events. Closed circles are the number of events before $b$-tagging, dark triangles are the number of $b$-tagged events in each bin and hatched areas are the background prediction for the number of tagged events and its uncertainty. Top Right: Mass spectrum for $b$-tagged lepton+jet events in 110 pb$^{-1}$ of data. The shaded area is the expectation from background. The dashed curve is from background plus top production. The likelihood fit is shown as an inset. Bottom Left: The jet multiplicity distribution for the all-hadronic mode. The dark triangles represent the observed number of $b$-tags in each jet multiplicity bin and the hatched areas represent the background prediction as well as its estimated uncertainty. Bottom Right: Mass spectrum for all-hadronic $b$-tagged events in 110 pb$^{-1}$ of data. The shaded area is the expectation from background. The histogram is from background plus top production.
Figure 2.4 - The measured cross section for $t\bar{t}$ production for each of the separate production channels measured at CDF as well as the combined lepton+jets and dilepton measurements. The vertical line represents the spread of the central values of the three most current theoretical calculations evaluated at a top mass of 175 GeV/$c^2$.
Figure 2.5 - The optimized lepton+jets top quark mass plot for each of the four data samples. The insert shows the -$\Delta$log(likelihood) for the data in comparison to mass spectra derived from Monte Carlo samples of various $m_t$. This technique results in a measured top quark mass of ${\rm 176.8\pm 4.4~(stat.)\pm 4.8~(syst.)}$ GeV/c$^{2}$ -- a 30\% improvement over the old analysis.
Figure 2.6 - The $M_{jj}^W$ distribution is shown for data (solid), expected top+background (dashed), and background (shaded), for W+4 jet events which contain two $b$-tagged jets. The value of $M_{jj}^W$ is 79.8 $\pm$ 6.2 GeV/c$^2$. The top mass from this subsample has been determined to be 174.8 $\pm$ 9.7 GeV/c$^2$.
Figure 2.7 - The $\cos \theta^*_e$ distribution for 1000 events and fit to the Standard Model hypothesis $\sim$ 30\% $W_{\rm left}$ + 70\% $W_{\rm long}$.
Figure 2.8 - A hypothetical $M_{t\bar{t}}$ spectrum with an 800 GeV/c$^2$ Z$^\prime$ topcolor boson. The rate is based on the theoretically predicted cross section for $t\bar{t}$ production and Z$^\prime$ production \protect\cite{chill} with 2 fb$^{-1}$. |
Electronic Journal of Statistics
Electron. J. Statist. Volume 10, Number 1 (2016), 1223-1295.
Statistical inference versus mean field limit for Hawkes processes
Abstract
We consider a population of $N$ individuals, of which we observe the number of
actions until time $t$. For each couple of individuals $(i,j)$, $j$ may or not influence $i$, which we model by i.i.d. Bernoulli$(p)$-random variables, for some unknown parameter $p\in(0,1]$. Each individual acts autonomously at some unknown rate $\mu>0$ and acts by mimetism at some rate proportional to the sum of some function $\varphi$ of the ages of the actions of the individuals which influence him. The function $\varphi$ is unknown but assumed, roughly, to be decreasing and with fast decay. The goal of this paper is to estimate $p$, which is the main characteristic of the graph of interactions, in the asymptotic $N\to\infty$, $t\to \infty$. The main issue is that the mean field limit (as $N\to\infty$) of this model is unidentifiable, in that it only depends on the parameters $\mu$ and $p\varphi$. Fortunately, this mean field limit is not valid for large times. We distinguish the subcritical case, where, roughly, the mean number $m_{t}$ of actions per individual increases linearly and the supercritical case, where $m_{t}$ increases exponentially. Although the nuisance parameter $\varphi$ is non-parametric, we are able, in both cases, to estimate $p$ without estimating $\varphi $ in a nonparametric way, with a precision of order $N^{-1/2}+N^{1/2}m_{t}^{-1}$, up to some arbitrarily small loss. We explain, using a Gaussian toy model, the reason why this rate of convergence might be (almost) optimal.
Article information
Source: Electron. J. Statist., Volume 10, Number 1 (2016), 1223-1295.
Dates: Received: September 2015. First available in Project Euclid: 12 May 2016.
Permanent link to this document: https://projecteuclid.org/euclid.ejs/1463068346
Digital Object Identifier: doi:10.1214/16-EJS1142
Mathematical Reviews number (MathSciNet): MR3499526
Zentralblatt MATH identifier: 1343.62050
Citation
Delattre, Sylvain; Fournier, Nicolas. Statistical inference versus mean field limit for Hawkes processes. Electron. J. Statist. 10 (2016), no. 1, 1223--1295. doi:10.1214/16-EJS1142. https://projecteuclid.org/euclid.ejs/1463068346 |
The famous Newey 94 paper on the asymptotic convergence of semiparametric estimators with a first nonparametric step and a second parametric one, http://www.jstor.org/stable/2951752, establishes that the rate of convergence of the particular nonparametric estimator does not matter: as long as a number of regularity assumptions are fulfilled, the estimator of the second step is $\sqrt{N}$-convergent to a normal distribution. Here I am asking for intuition as to why such a complicated stochastic process (say, the Nadaraya-Watson estimator) has an asymptotic distribution that converges to this nice distribution.
The usual proof of the classical Central Limit Theorem (CLT), I believe provides the most intuition there is about this phenomenon. And it is not much of an intuition anyway.
This "usual" proof is through characteristic functions.
Consider a random variable $X$ with characteristic function
$$\phi_X(t) = E(e^{i tX}), \;\;\;i^2 = -1$$
Now consider its centered and scaled version
$$Y = \frac {X-\mu}{\sigma} = \frac 1{\sigma}X - \frac {\mu}{\sigma}$$.
with $E(Y) = 0,\ {\rm Var}(Y) = E(Y^2) = 1$. Moreover, $Y$ is the sum of two independent random variables, the second degenerate (being a constant, and so also independent of everything). So, by the properties of the characteristic function for the sum of two independent random variables
$$\phi_Y(t) = \phi_X\left(\frac t{\sigma}\right)\cdot e^{-i\frac {\mu}{\sigma}t} = E\left[\exp{\left\{it\frac 1{\sigma}X - i\frac {\mu}{\sigma}t\right\}}\right] = E(e^{i tY})$$
Now take the second-order Taylor expansion of $e^{itY}$ inside the expectation, with respect to $Y$ and with center of expansion $E(Y) =0$: $$\phi_Y(t) = E\left[e^{it\cdot 0} + ite^{it\cdot 0}Y + \frac 12 i^2t^2e^{it\cdot 0}Y^2 + o(t^2) \right]$$
The first term equals one, the second term vanishes because $E(Y) =0$, and for the third term we use $i^2=-1, E(Y^2) =1$ to arrive at
$$\phi_Y(t) = 1 - \frac {t^2}2 + o(t^2)$$
The "phenomenon" is already here because
$$ \frac {t^2}2 = \ln MGF_{Z}(t),\;\; Z\sim {\rm N}(0,1)$$
In words: the characteristic function of a random variable with finite mean and variance that is centered and scaled accordingly has a strong connection with the moment generating function of the Standard Normal distribution ($\ln MGF_{Z}(t)$ is actually the cumulant generating function).
How can this happen? Have we uncovered a "law of nature" here, this fundamental connection of different "types of uncertain behavior" to the specific type labelled "standard normal distribution", or it is just our mathematical system, through which we have modeled this thing called "uncertainty", producing some artificial connection that perhaps reveals some aspect of its own internal structure, but it has nothing to do with
the real world?
Let's first complete the proof of the CLT: consider the random variable $$Z_n = \frac 1{\sqrt{n}}\sum_{i=1}^nY_i$$
where the $Y_i$'s are independently and identically distributed. Again by the properties of the characteristic function we have
$$\phi_{Z_n}(t) = \prod_{i=1}^n\phi_{Y}\left(\frac{t}{\sqrt{n}}\right) = \Big[1 - \frac {t^2}{2n} + o\Big(\frac {t^2}{n}\Big)\Big]^n \xrightarrow{n\rightarrow \infty} e^{-t^2/2} = \phi_{Z}(t),\;\; Z\sim {\rm N}(0,1)$$
and so
$$Z_n \xrightarrow{d} {\rm N}(0,1)$$
...and now we have arrived at the Standard Normal distribution proper. And this
is a law of nature: mathematics aside, computer simulations aside, real world data consistently validate this result. So there is no further "why?" here -as it cannot be with any law of nature. We just discover them -and we feel like gaining added intuition when we discover the interconnections between these laws (but this is not really insightful, it is just more discoveries).
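As a small added illustration (not in the original answer), the statement that data consistently validate this can be reproduced with a few lines of Python; the choice of exponential variables below is arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n, reps = 1000, 50000

# Y_i: exponential(1) variables, centered and scaled to mean 0 and variance 1
Y = rng.exponential(1.0, size=(reps, n)) - 1.0
Z_n = Y.sum(axis=1) / np.sqrt(n)

print(Z_n.mean(), Z_n.std())   # close to 0 and 1
print(np.mean(Z_n <= 1.96))    # close to the standard normal value ~0.975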
Of course this is the first and most simple case, in a long line of Central Limit Theorems that increasingly deal with more complicated, interdependent, functions of random variables, stochastic processes etc as the one mentioned in the OP. And there are also the generalizations of CLT to stable distributions and not just the normal, and there is extreme value theory... all these are different aspects of the same conclusion: that collective behavior (even in the simple sense of
pooled behavior) is much more homogeneous than individual behaviors, even though it is just the pool of the latter -and this should be one of the most counter-intuitive results ever encountered. P.S. An inspired attempt at bottom-up intuition is provided by @whuber in this Cross Validated thread: https://stats.stackexchange.com/questions/3734/what-intuitive-explanation-is-there-for-the-central-limit-theorem |
The ancient Greeks had a theory that the sun, the moon, and the planets move around the Earth in circles. This was soon shown to be wrong. The problem was that if you watch the planets carefully, sometimes they move backwards in the sky. So Ptolemy came up with a new idea - the planets move around in one big circle, but then move around a little circle at the same time. Think of holding out a long stick and spinning around, and at the same time on the end of the stick there's a wheel that's spinning. The planet moves like a point on the edge of the wheel.
Well, once they started watching really closely, they realized that even this didn't work, so they put circles on circles on circles...
Eventually, they had a map of the solar system that looked like this:
This "epicycles" idea turns out to be a bad theory. One reason it's bad is that we know now that planets orbit in ellipses around the sun. (The ellipses are not perfect because they're perturbed by the influence of other gravitating bodies, and by relativistic effects.)
But it's wrong for an even worse reason than that, as illustrated in this wonderful youtube video.
In the video, by adding up enough circles, they made a planet trace out Homer Simpson's face. It turns out we can make any orbit at all by adding up enough circles, as long as we get to vary their size and speeds.
So the epicycle theory of planetary orbits is a bad one not because it's wrong, but because it doesn't say anything at all about orbits. Claiming "planets move around in epicycles" is mathematically equivalent to saying "planets move around in two dimensions". Well, that's not saying nothing, but it's not saying much, either!
A simple mathematical way to represent "moving around in a circle" is to say that positions in a plane are represented by complex numbers, so a point moving in the plane is represented by a complex function of time. In that case, moving on a circle with radius $R$ and angular frequency $\omega$ is represented by the position
$$z(t) = Re^{i\omega t}$$
If you move around on two circles, one at the end of the other, your position is
$$z(t) = R_1e^{i\omega_1 t} + R_2 e^{i\omega_2 t}$$
We can then imagine three, four, or infinitely-many such circles being added. If we allow the circles to have every possible angular frequency, we can now write
$$z(t) = \int_{-\infty}^{\infty}R(\omega) e^{i\omega t} \mathrm{d}\omega.$$
The function $R(\omega)$ is the Fourier transform of $z(t)$.
If you start by tracing any time-dependent path you want through two-dimensions, your path can be perfectly-emulated by infinitely many circles of different frequencies, all added up, and the radii of those circles is the Fourier transform of your path. Caveat: we must allow the circles to have complex radii. This isn't weird, though. It's the same thing as saying the circles have real radii, but they do not all have to start at the same place. At time zero, you can start however far you want around each circle.
If your path closes on itself, as it does in the video, the Fourier transform turns out to simplify to a Fourier series. Most frequencies are no longer necessary, and we can write
$$z(t) = \sum_{k=-\infty}^\infty c_k e^{ik \omega_0 t}$$
where $\omega_0$ is the angular frequency associated with the entire thing repeating - the frequency of the slowest circle. The only circles we need are the slowest circle, then one twice as fast as that, then one three times as fast as the slowest one, etc. There are still infinitely-many circles if you want to reproduce a repeating path perfectly, but they are countably-infinite now. If you take the first twenty or so and drop the rest, you should get close to your desired answer. In this way, you can use Fourier analysis to create your own epicycle video of your favorite cartoon character.
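As an added sketch (not part of the original answer), here is how one might compute those circle radii numerically in Python for a sampled closed curve; the particular curve below is just an example.
import numpy as np

N = 1024
t = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
z = (1.0 + 0.3 * np.cos(3.0 * t)) * np.exp(1j * t)   # some smooth closed curve

# circle radii c_k = average of z(t) * exp(-i k t), i.e. Fourier series coefficients
k = np.arange(-20, 21)
c = np.array([(z * np.exp(-1j * kk * t)).mean() for kk in k])

# rebuild the curve from the 41 slowest circles
z_approx = sum(ck * np.exp(1j * kk * t) for ck, kk in zip(c, k))
print(np.max(np.abs(z - z_approx)))   # tiny for this smooth curve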
That's what Fourier analysis says. The questions that remain are how to do it, what it's for, and why it works. I think I will mostly leave those alone. How to do it - how to find $R(\omega)$ given $z(t)$ is found in any introductory treatment, and is fairly intuitive if you understand orthogonality. Why it works is a rather deep question. It's a consequence of the spectral theorem.
What it's for has a huge range. It's useful in analyzing the response of linear physical systems to an external input, such as an electrical circuit responding to the signal it picks up with an antenna or a mass on a spring responding to being pushed. It's useful in optics; the interference pattern from light scattering from a diffraction grating is the Fourier transform of the grating, and the image of a source at the focus of a lens is its Fourier transform. It's useful in spectroscopy, and in the analysis of any sort of wave phenomena. It converts between position and momentum representations of a wavefunction in quantum mechanics. Check out this question on physics.stackexchange for more detailed examples. Fourier techniques are useful in signal analysis, image processing, and other digital applications. Finally, they are of course useful mathematically, as many other posts here describe. |
I have a problem understanding the meaning of a
complex measure; i.e., when someone writes ($i \equiv \sqrt{-1}$)$$\int_{\mathbb{R}^2} d\mathrm{Re}z \, d\mathrm{Im}z \equiv \int_{\mathbb{C}} \frac{dz d\bar{z}}{2i} \quad (\ast)$$The left-hand side will yield a real number (after performing the integration over a real-valued function), while it is not obvious that the right-hand side yields a real number. Furthermore, how can one obtain the equivalence relation? Is the factor $\frac{1}{2i}$ the Jacobian of some transformation like $$ z = \mathrm{Re} z + i \, \mathrm{Im} z ,\\ \bar{z} = \mathrm{Re} z - i \, \mathrm{Im} z ,\\ $$ So, the complex measure $dz d\bar{z}$ does not have the same meaning as a 'simple' complex integration as in complex calculus (integration over a path in the complex plane).
Please provide an explanation for the complex measure and the equivalence relation ($\ast$) above.
Notes 1. A similar question is asked here; yet no clear proof or justification is provided. 2. An example of the relation appears, for instance, in Altland, A. and B. D. Simons. “Condensed Matter Field Theory” (2nd ed., 2010), p. 102: |
I am trying to understand the proof of
Lemma 1.35 (Smooth Manifold Chart Lemma) of John. M. Lee's Introduction to Smooth Manifolds, 2nd Edition.
The Lemma is an existence-and-uniqueness-lemma. I understand the existence part of it but not the uniqueness part. Here I state the Lemma and the proof of the existence part (the proof is essentially just a detailed version of the proof given in Lee's book.)
LEMMA.Let $M$ be a set and $\{U_\alpha\}_{\alpha\in J}$ be a collection of subsets of $M$, along with maps $\varphi_\alpha:U_\alpha\to\mathbf R^n$, such that the following properties are satisfied:
(i) $\forall \alpha\in J$: $\varphi_\alpha$ is an injective map and $\varphi_\alpha(U_\alpha)$ is open in $\mathbf R^n$.
(ii) $\forall \alpha,\beta\in J$: the sets $\varphi_\alpha(U_\alpha\cap U_\beta)$ and $\varphi_\beta(U_\alpha\cap U_\beta)$ are open in $\mathbf R^n$.
(iii) $\forall\alpha,\beta\in J$: $U_\alpha\cap U_\beta\neq \emptyset \quad \Rightarrow \quad \varphi_\beta\circ\varphi_\alpha^{-1}:\varphi_\alpha(U_\alpha\cap U_\beta)\to \varphi_\beta(U_\alpha\cap U_\beta)$ is smooth.
(iv) Countably many of the sets $U_\alpha$ cover $M$.
(v) $ \left. \begin{array}{c} p,q\in M\\ p\neq q \end{array} \right\} \quad \Rightarrow \quad \left\{ \begin{array}{c} \exists \alpha\in J\text{ such that } p,q\in U_\alpha,\quad\text{ or}\\ \exists \alpha,\beta\in J\text{ such that } p\in U_\alpha, q\in U_\beta \text{ and } U_\alpha\cap U_\beta=\emptyset \end{array} \right. $
Then $M$ has a unique manifold structure such that each pair $(U_\alpha,\varphi_\alpha)$ is a smooth chart.
PROOF. Let $\mathcal B=\{\varphi_\alpha^{-1}(V):\alpha\in J, V\text{ open in } \mathbf R^n\}$. Claim 1: $\mathcal B$ forms a basis for $M$. Proof: We use $(i)$---$(iv)$ in this proof. From $(iv)$ we see that the elements of $\mathcal B$ cover $M$. Now let $\varphi_\alpha^{-1}(V)$ and $\varphi_\beta^{-1}(W)$ be two elements of $\mathcal B$, where $V$ and $W$ are open in $\mathbf R^n$. To show that $\mathcal B$ forms a basis, it is enough to show that $ \varphi_\alpha^{-1}(V)\cap\varphi_\beta^{-1}(W)$ itself lies in $\mathcal B$. Note that \begin{equation*}\varphi_\alpha^{-1}(V)\cap \varphi_\beta^{-1}(W)=\varphi_\alpha^{-1}\Big(V\cap(\varphi_\beta\circ\varphi_\alpha^{-1})^{-1}(W)\Big)\tag{1}\end{equation*}But by (iii), $\varphi_\beta\circ\varphi_\alpha^{-1}$ is continuous, and therefore $(\varphi_\beta\circ\varphi_\alpha^{-1})^{-1}(W)$ is open in $\varphi_\alpha(U_\alpha\cap U_\beta)$. By (ii), $\varphi_\alpha(U_\alpha\cap U_\beta)$ is open in $\mathbf R^n$ and therefore $(\varphi_\beta\circ\varphi_\alpha^{-1})^{-1}(W)$ is open in $\mathbf R^n$. Using this in $(1)$, we immediately see that $\varphi_\alpha^{-1}(V)\cap\varphi_\beta^{-1}(W)$ is in $\mathcal B$. This settles the claim.
Let $\tau$ be the topology generated on $M$ by $\mathcal B$. By definition of $\mathcal B$, each function $\varphi_\alpha$ is a homeomorphism onto its image. Thus $(M,\tau)$ is locally Euclidean of dimension $n$.
Claim 2: $(M,\tau)$ is Hausdorff. Proof: This uses $(v)$. Let $p,q\in M$ with $p\neq q$. If $\exists \alpha,\beta\in J$ such that $p\in U_\alpha, q\in U_\beta$ and $U_\alpha\cap U_\beta=\emptyset$, then we have nothing to prove. The other possibility if that $\exists \alpha\in J$ such that $p,q\in U_\alpha$. Now since $\varphi_\alpha(U_\alpha)$ is open in $\mathbf R^n$, there exist disjoint open sets $V$ and $W$ open in $\varphi_\alpha(U_\alpha)$ containing $p$ and $q$ respectively. The neighborhoods $\varphi_\alpha^{-1}(V)$ and $\varphi_\alpha^{-1}(W)$ separate $p$ and $q$ in $M$. Thus the claim is settled. Claim 3: $(M,\tau)$ is second countable. Proof: Note that since $\varphi_\alpha(U_\alpha)$ is second countable, and since $\varphi_\alpha:U_\alpha\to\varphi_\alpha(U_\alpha)$ is a homeomorphism, we must have $U_\alpha$ is second countable. The proof is now immediate from $(iv)$ and Lemma given at the bottom. The above working shows that $(M,\tau)$ is a topological $n$-manifold. Now from $(iii)$ it is clear that $\{(U_\alpha,\varphi_\alpha)\}_{\alpha\in J}$ is a smooth atlas on $M$, giving $M$ a smooth structure.
Now we need to establish that this is the only smooth structure on $M$ such that each $\varphi_\alpha:U_\alpha\to\varphi_\alpha(U_\alpha)$ is a smooth chart on $M$ and here I am stuck. In fact what Lee writes is that "It is clear that this topology and smooth structure are the unique ones satisfying the conclusions (conditions?) of the lemma." Can somebody please explain this to me.
LEMMA. Let $X$ be a topological space and $\{U_n\}_{n\in\mathbf N}$ be a countable open cover of $X$ such that each $U_i$ is second countable in the subspace topology. Then $X$ is second countable. |
I have the thermal partition function and the density of states for the 3D simple harmonic oscillator, which are given below $$ Z(\beta) = \frac { 1 } { \left( 2 \sinh \left( \frac { \beta \omega } { 2 } \right) \right) ^ { 3 } } $$ and $$ \rho ( E ) = \frac { \left( \frac { E } { \omega } - \frac { 1 } { 2 } \right) \left( \frac { E } { \omega } + \frac { 1 } { 2 } \right) } { 2 \omega } $$ where $\beta$ is the inverse temperature and $\omega$ is the natural frequency of the SHO.
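(A quick check I am adding, not part of the question: the closed form for $Z(\beta)$ agrees with a direct sum over the 3D oscillator levels $E_n=(n+3/2)\omega$, whose degeneracy is $(n+1)(n+2)/2$; units with $\hbar=k_B=1$, values chosen arbitrarily.)
import numpy as np

beta, omega = 0.7, 1.3                                    # arbitrary illustrative values

Z_closed = 1.0 / (2.0 * np.sinh(beta * omega / 2.0)) ** 3

n = np.arange(0, 400)
degeneracy = (n + 1) * (n + 2) / 2.0                      # levels of the 3D oscillator
Z_sum = np.sum(degeneracy * np.exp(-beta * (n + 1.5) * omega))

print(Z_closed, Z_sum)                                    # the two values agree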
Now, the partition function can also be written as the following integral $$ Z(\beta) = \int_{0}^{\infty} dE \: \rho(E) e^{-\beta E} $$ which is a Laplace transform, hence we can calculate the asymptotic of the density of states by inverting this Laplace transform. The inverse Laplace transform can be done using the Bromwich contour integral which is $$ \rho(E) = \frac{1}{2\pi i} \int_{\gamma - i \infty}^{\gamma + i \infty} d\beta \: Z(\beta) e^{\beta E} $$ where $\gamma$ is greater than the real part of all the singularities of $Z(\beta)$. Now, I tried doing this integral using the residue theorem by closing the contour on the left half of the complex plane. But somehow I am getting the correct density of states using the residue of just one pole, whereas there is an infinite number of poles on the imaginary axis due to the periodicity of the hyperbolic sine function (on the imaginary axis).
Can someone explain where I am wrong in this process?
P.S. We also know that $n = \frac{E}{\omega} - \frac{3}{2}$ is a positive integer. |
Why are LL(k) and LL(∞) incompatible with left recursion? I understand that an LL(k) language can support left recursion provided that any ambiguity can be resolved with k tokens of lookahead. But, with an LL(∞) grammar, which types of ambiguity can't be resolved?
The problem that $LL$ variants have with left recursion is inherent to the way $LL$ works: it is a top-down type parser, which means it replaces nonterminals by their productions.
An $LL$-style parser works as follows. It traverses the input from left to right in one go. If we are at some point in the input, then we know that everything to the left of this point is OK. For everything to the right of this point, the parser has constructed an 'approximation' of what it expects to see next. Consider for example this grammar:
1: $E \to E + E$
2: $E \to x$
Note that the grammar is not $LL$, but we can still parse inputs in $LL$-style. On input $x+x+x$, an $LL$-style parser may end up at position $x+\bullet x+x$. Let's assume it has decided that the left part, $x+$, is fine, and for the rest of the input it expects to see $x+E$. It will then find out that $x+x+$ is fine, with $E$ remaining. It may then replace this $E$ by a production, in particular production 2 above. With $x$ remaining, the parser will accept the input.
The trick is then to correctly decide the replacing production for a given nonterminal. A grammar is $LL(k)$ if we can do this by just looking at the next $k$ input symbols, and other techniques are known that are more powerful.
Now consider the following grammar:
1: $A \to A a$
2: $A \to \varepsilon$
If an $LL$ parser tries to replace $A$ by a production, it has to decide between production 1 and 2.
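To make the difficulty concrete, here is a naive recursive-descent sketch for this grammar (my addition, with made-up function names); choosing the left-recursive production first makes the parser call itself without consuming any input:
def parse_A(tokens, pos=0):
    # production 1: A -> A a  (the left-recursive choice)
    pos = parse_A(tokens, pos)       # expands A again before consuming any token
    if pos < len(tokens) and tokens[pos] == 'a':
        return pos + 1
    raise SyntaxError("expected 'a'")
    # production 2 (A -> epsilon) would simply be "return pos", but the parser
    # would have to know right now how many 'a' tokens remain in order to pick it

# parse_A(list("aaa"))   # raises RecursionError: the expansion never makes progress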
Let's consider what the proper course of action would be if our parser was omniscient. Every time it replaces the $A$ by production 1, it 'adds' an $a$ to what it expects for the remaining input (the expected remainder goes from $A$ to $Aa$ to $Aaa$...), but the $A$ at the start does not go away. Eventually, it must pick production 2, after which the $A$ disappears and it can never again add $a$s to the expectation.
As there is no chance to match a few more input symbols, the parser must decide at exactly that input position how many times production 1 must be matched. This means it must know exactly how many times in our case the $a$ will appear in the remainder of the input at this moment.
However, $LL(k)$ can see only $k$ symbols ahead. This means that if production 1 must be chosen more than $k$ times, the parser cannot 'see' this and so is doomed to fail. $LL(*)$ is better at parsing than $LL(k)$, because it can see arbitrarily far ahead in the input, but the crucial detail (which is not always mentioned) is that this lookahead is
regular.
To imagine what happens, you can view the algorithm as follows: when it has to decide which production to take, it starts up a finite state machine (a DFA, which is equivalent in power to regular expressions) and lets this machine look at the rest of the input. This machine can then report 'use this production'. However, this machine is severely limited in what it can do. Although it is strictly better than looking at only the next $k$ symbols, it cannot for instance 'count', which means that it cannot help in the above situation.
Even if you were to 'hack' in some counting function in this finite automaton, then there are still left-recursive grammars for which you really need more power. For instance, for this grammar:
$A \to A B$
$A \to \varepsilon$
$B \to ( B )$
$B \to \varepsilon$
you would have to match 'towers' of matching braces, which is something a finite automaton cannot do. Worse still:
$A \to B C A D E$
$A \to A'$
$A' \to A' D E$
$A' \to \varepsilon$
$B \to a B a \mid b B b \mid a a \mid bb$
$C \to c C c \mid d C d \mid c c \mid d d$
$D \to e D e \mid f D f \mid e e \mid f f$
$E \to g E g \mid h E h \mid g g \mid h h$
is a totally awful grammar, for which I'm pretty sure no known linear time parsing algorithm works and all known general parsing algorithms take quadratic time. Worse still, any grammar describing this language is necessarily left-recursive. The grammar is still unambiguous however. You need a hand-crafted parser to parse these monsters in linear time. |
I sort of know how carbonated beverages are carbonated: a lot of $\ce{CO2}$ gets pushed into the liquid, and the container is sealed. There are at least two things I don't know. First, how much carbon dioxide is actually dissolved in the liquid? Second, what is the resulitng partial pressure of $\ce{CO2}$ in the headspace and the total pressure in the headspace? I'm interested in cans, plastic bottles, and glass bottles. I know from experience that there is some variation among manufacturers even for the same beverage, so I will be happy with general numbers or a good estimate.
General estimates have placed a can of Coca-Cola to have 2.2 grams of $\ce{CO2} $ in a single can. As a can is around 12 fluid ounces, or 355 ml, the amount of $\ce{CO2}$ in a can is:
$$\text{2.2 g} \ \ce{CO2}* \frac{\text{1 mol} \ \ce{CO2}}{\text{44 g} \ \ce{CO2} } = 0.05 \ \text{mol}$$
$$ \text{355 mL} * \frac{\text{1 L}}{\text{1000 mL}} = 0.355 \ \text{L} $$
So here we can see we have about 0.05 mol/0.355 L or about 0.14 mol of carbon dioxide per liter of soda. Of course this value varies by manufacturer, type of drink, container, etc.
Looking at Wikipedia, inside Coca-Cola is:
Carbonated water, Sugar (sucrose or high-fructose corn syrup depending on country of origin), Caffeine, Phosphoric acid, Caramel color (E150d), Natural flavorings
A can of Coke (12 fl ounces/355 ml) has 39 grams of carbohydrates (all from sugar, approximately 10 teaspoons), 50 mg of sodium, 0 grams fat, 0 grams potassium, and 140 calories.
Thus, we can calculate the pressure of $CO_2$ gas using the Ideal Gas equation if we store our coke at, say, 20 Celsius:
$$ P = \frac{nRT}{V} $$ $$ P = \frac{\text{0.05 mol} * \text{0.08206} \frac{\text{L} \cdot \text{atm}}{\text{mol} \cdot \text{K} } * \text{293.15 K}}{\text{0.355 L}}$$ $$ P = 3.39 \ \text{atm} $$
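(The same arithmetic as a short Python script, which I am adding just to make the unit bookkeeping explicit; variable names are mine.)
m_co2 = 2.2                 # grams of CO2 in one 12 oz can
n_co2 = m_co2 / 44.0        # moles of CO2
V = 0.355                   # litres
R = 0.08206                 # L*atm/(mol*K)
T = 293.15                  # kelvin (20 Celsius)

molarity = n_co2 / V        # about 0.14 mol/L
P_atm = n_co2 * R * T / V   # about 3.39 atm
P_kpa = P_atm * 101.325     # about 343 kPa
print(molarity, P_atm, P_kpa)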
According to this website, http://hypertextbook.com/facts/2000/SeemaMeraj.shtml:
On average, the 12 ounce soda cans sold in the US tend to have a pressure of roughly 120 kPa when canned at 4 °C, and 250 kPa when stored at 20 °C.
$$\text{3.39 atm} * \frac{\text{760 torr}}{\text{1 atm}} * \frac{\text{133 Pa}}{\text{1 torr}} * \frac{\text{1 kPa}}{\text{1000 Pa}} = 342.66 \ \text{kPa}$$
Water vapor exerts it's own partial pressure. Looking at standard tabulated values for water vapor pressure, water exerts a pressure of 17.5 torr at 20 Celsius.
$$\text{17.5 torr}* \frac{\text{133 Pa}}{\text{1 torr}} * \frac{\text{1 kPa}}{\text{1000 Pa}} = 2.3275 \ \text{kPa} $$
Knowing that our total pressure is the sum of all our pressures:
$$P_{total} = \text{342.66 kPa} + \text{2.3275 kPa} = 344.99 \ \text{kPa}$$
Here, we are roughly about 100 kPa off from the data provided by the website. This is just an approximation. A more accurate way would be to calculate the moles of each product inside the soda, and knowing the total pressure or partial pressure of one of the parts, we can calculate the pressures more accurately. However, that information is proprietary. It's their secret recipe!
Headspace:
We have to determine the volume of the headspace - again, I am not sure of exact data - which is between 1/2 inch and 1 1/2 inches depending on the container and what it holds. I will assume that the headspace occupies 6% of the total volume of the can, which is 21.3 mL.
At manufacturing and at storage, the can is at different temperatures. Taking the above data, we'll say it is manufactured at $4^\circ C$ and say, stored at $20^\circ C$
Furthermore, when carbon dioxide is solubilized in water, it forms carbonic acid. I will neglect that as the ionization constant is small.
Assuming our 2.2 grams of carbon dioxide is the
maximum amount of carbon dioxide that can be placed inside, some of the carbon dioxide is soluble in water while the rest exerts pressure inside the headspace to force the carbon dioxide inside the liquid. The pressure is necessary inside this closed volume as, once you open the cap, the carbon dioxide tries to achieve equilibrium.
$$ \ce{CO_2 (solution) <=> CO2 (g)} $$
In general, the solubility of gases
increases at lower temperatures and decreases at higher temperatures. A notable exception is the noble gases. In regards to pressure, Henry's law states that the solubility of a gas in a liquid is directly proportional to the pressure of that gas above the surface of the solution.
The solubility of carbon dioxide in water shifts according to Le Chatlier's principle.
For the solution to this problem, we would need to know several things about the manufacturing of soda's, or rather, more quantitative data. At manufacturing, the pressure is high and temperature low, so as much carbon dioxide as possible is solubilized in the water.
Once shipped out and stored, the pressure inside the headspace increases as the pressure is decreased from manufacturing, the carbon dioxide then leaves solution and enters the headspace. Increasing temperature also decreases solubility.
If you can provide me with more information, I will be more than happy to help. :)
Through an interesting approach, authors estimate $\ce{CO2}$ pressure inside carbonated beverages, by measuring the freezing point ($fp$) depression caused by $\ce{CO2}$. In other words, when $\ce{CO2}$ is dissolved in water, a solution is formed and the freezing point is lowered.
The molality of $m_{\ce{CO2}}$ in water can be obtained, according to:
$$ m_{\ce{CO2}}=\Delta{T_{fp}}/k_{fp} $$
with $k_{fp}$ for water $=-1.86\,^{o}C \, kg \, mol^{-1}$. For a particular brand of sparkling water, a value of $\Delta{T_{fp}}$ = $-0.42^oC$ was measured, thus yielding a value for $m_{\ce{CO2}}$ equal to $0.23$ $mol \, kg^{-1}$. This first result allows to estimate the mass of ${\ce{CO2}}$ dissolved in one liter of that kind of beverage.
Said mass of ${\ce{CO2}}$ is approximately equal to $10\,g$ per liter (or $3.6\,g$ per can content volume).
Applying Henry's law,
$$ P_{\ce{CO2}}=m_{\ce{CO2}}/k_{H} $$
with $k_{H}$ being Henry's law constant ($~0.077\,mol\,kg^{-1}\,atm^{-1}$ at $0^{o}C$), one finally gets the pressure $P_{\ce{CO2}}$ exerted by the gas over the liquid, at $0^{o}C$:
$$ P_{\ce{CO2}} = 3.0\,\text{atm} $$
that corresponds to a mass of about $0.1\,g$ of ${\ce{CO2}}$ inside the headspace volume of a can (about $15\,ml$), mass calculated via $PV=nRT$.
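(A short Python script I am adding so the chain of estimates above can be reproduced; variable names are mine.)
dT_fp = -0.42                         # measured freezing-point depression, deg C
k_fp = -1.86                          # deg C * kg / mol for water
m_co2 = dT_fp / k_fp                  # about 0.23 mol/kg

mass_per_litre = m_co2 * 44.0         # about 10 g of CO2 per litre
k_H = 0.077                           # mol/(kg*atm) at 0 deg C
P_co2 = m_co2 / k_H                   # about 3.0 atm

V_headspace = 0.015                   # litres
n_gas = P_co2 * V_headspace / (0.08206 * 273.15)   # PV = nRT
print(m_co2, mass_per_litre, P_co2, n_gas * 44.0)  # last value: about 0.1 g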
EDIT: for most carbonated beverages, $\Delta{T_{fp}}$ should fall at around $-0.2^oC$ (as reported by Brooker): that would give a $P_{\ce{CO2}} = 1.5\,\text{atm}$ at $0^{o}C$, that rises at about $3\,\text{atm}$ at $25^{o}C$.
It depends on the drink and on time; according to S. Teerasong, Analytica Chimica Acta 668 (2010) 47–53, for a normal Cola it is about $3.1 \frac{g_{CO_2}}{L}$ (see also Glevitzky, Chem. Bull. "POLITEHNICA" Univ. (Timişoara) Volume 50 (64), 1-2, 2005, but his estimate is very high).
Regarding the pressure in the headspace, this is very difficult to estimate theoretically because you have to take into account not only the temperature but the whole system.
You can shake the bottle and increase the pressure dramatically.
|
I have a question in reading Polchinski's string theory volume 1.
p12-p13
Given the Polyakov action $S_P[X,\gamma]= - \frac{1}{4 \pi \alpha'} \int_M d \tau d \sigma (-\gamma)^{1/2} \gamma^{ab} \partial_a X^{\mu} \partial_b X_{\mu}$ (1.2.13),
how to show it has a Weyl invariance
$\gamma'_{ab}(\tau,\sigma) = \exp (2\omega(\tau,\sigma)) \gamma_{ab} (\tau,\sigma)$?
Because both $ (-\gamma)^{1/2} $ and $\gamma^{ab}$ give a factor $\exp(2\omega(\tau,\sigma))$, they do not cancel each other
Thank you very much in advance |
Short answer:
I think the notation is the main problem here. In your second equation, the LHS $\rho\mathbf{u}$ is a function of $\mathbf{x}_0$ and $t$, while your RHS $\rho\mathbf{u}$ is a function of $\mathbf{x}$ and $t$. The subtle difference is that $\mathbf{x}_0$ should be treated as a particle label, not an actual position. As you suspected, the bridge between the two formulations of $\rho\mathbf{u}$ is related by $\phi$:$$(\rho\mathbf{u})(\mathbf{x},t) = (\rho\mathbf{u})\left[\phi(\mathbf{x},t),t\right]$$In your second equation, the time derivative on the LHS is with respect to a fixed collection of particles, whereas on the RHS it should be with respect to fixed locations in space. In fact, the material derivative $\frac{D}{Dt}$ applies to functions of $\mathbf{x}$ and $t$. Using it on a function of particle label and time does not make sense. So the second equation comes from taking the time derivative of the expression above.
$$\begin{align}\frac{d}{dt}(\rho\mathbf{u})\left[\phi(\mathbf{x},t),t\right] & = \frac{d}{dt}(\rho\mathbf{u})(\mathbf{x},t) \\& = \frac{D}{Dt}(\rho\mathbf{u})(\mathbf{x},t)\end{align}$$
Long Answer/Derivation:
To keep the derivation more readable, we will define $\mathbf{f}$ as the transport property we are interested in. In your case, $\mathbf{f} = \rho\mathbf{u}$. The change of variables that you want to perform transforms $\mathbf{f}$ between two different frames of reference.
The
Eulerian frame of reference looks at changes in $\mathbf{f}$ from the point of view of an outside observer watching the entire flow field. The property of the fluid depends on its location and time:$$\mathbf{f} = \mathbf{f}(\mathbf{x}, t)$$The Eulerian frame is convenient for experiments and operations involving spatial gradients. However, applying the laws of classical mechanics in this frame of reference is not as straightforward, since the particles that occupy a location $\mathbf{x}$ at different times are usually not the same. In order to use the conservation laws, we switch to a more suitable reference frame.
The
Lagrangian reference frame will monitor the change of $\mathbf{f}$ from the point of view of a fluid particle. In such a reference frame, each particle has an associated $\mathbf{f}(t)$. Let's name each particle in our continuum with a label $\mathbf{\xi}$, so that $$\mathbf{f} = \mathbf{f}(\mathbf{\xi}, t)$$gives the property of particle $\mathbf{\xi}$ at time $t$. To distinguish properties in the two reference frames, we will accent the Lagrangian properties with a tilde. $$\tilde{\mathbf{f}} = \tilde{\mathbf{f}}(\mathbf{\xi}, t)$$
Now we need a map between the $\tilde{\mathbf{f}}$ and $\mathbf{f}$. Let's define $\phi$ as the mapping that takes the particle's location $\mathbf{x}$ at some time $t$ and returns the label of the particle $\mathbf{\xi}$$$\mathbf{\xi} = \phi(\mathbf{x},t)$$If we used the location of a particle at $t=0$ as its label, then we get the trajectory function you used$$\mathbf{x}_0 = \phi(\mathbf{x}, t)$$It is important to note that $\mathbf{x}_0$ serves as a particle label, so while$$\tilde{\mathbf{f}}(\mathbf{x}_0,0) = \mathbf{f}(\mathbf{x}_0,0)$$, in general$$\tilde{\mathbf{f}}(\mathbf{x},t) \ne \mathbf{f}(\mathbf{x},t)$$
Instead, the two reference frames are related by$$\mathbf{f}(\mathbf{x},t) = \tilde{\mathbf{f}}\left[\phi(\mathbf{x},t),t\right]$$and the Jacobian should be a function of particle label and time:$$J = J(\mathrm{\xi},t)$$
Using this notation, the first integral in your question becomes$$\begin{align}\frac{d}{dt}\int_{W(t)} \mathbf{f}(\mathbf{x},t) dV & = \frac{d}{dt}\int_W \tilde{\mathbf{f}}\left[\phi(\mathbf{x},t),t\right]J(\mathbf{\xi},t)dV \\& = \int_W \left[ \frac{d\tilde{\mathbf{f}}}{dt}J + \tilde{\mathbf{f}}\frac{dJ}{dt} \right] dV\end{align}$$
The second equation in your question is not applied until the integral above is transformed back from the $(\mathbf{\xi},t)$ frame to the $(\mathbf{x},t)$ frame. Noting that the time derivative of this Jacobian can be written as (this is a whole other derivation):$$\frac{dJ}{dt} = (\nabla \cdot \mathbf{u})J(\mathbf{x},t)$$
we have:
$$\begin{align}\int_W \left[ \frac{d\tilde{\mathbf{f}}}{dt}J + \tilde{\mathbf{f}}\frac{dJ}{dt} \right] dV & = \int_{W(t)} \left[\frac{d}{dt}\mathbf{f}(\mathbf{x},t) + \mathbf{f}(\mathbf{x},t)(\nabla \cdot \mathbf{u})\right] dV\end{align}$$
The first term in the integral can be expanded using the chain rule$$\frac{d}{dt}\mathbf{f}(\mathbf{x},t) = \frac{\partial \mathbf{f}}{\partial t} + \frac{\partial \mathbf{x}}{\partial t} \cdot \frac{\partial \mathbf{f}}{\partial \mathbf{x}}$$
which is just the definition of the material derivative $\frac{D}{Dt}$ |
A general diffeomorphism is
not part of the conformal group. Rather, the conformal group is a subgroup of the diffeomorphism group. For a diffeomorphism to be conformal, the metric must change as,
$$g_{\mu\nu}\to \Omega^2(x)g_{\mu\nu}$$
and only then may it be deemed a conformal transformation. In addition, all conformal groups are Lie groups, i.e. with elements arbitrarily close to the identity, by applying infinitesimal transformations.
Example: Conformal Group of Riemann Sphere
The conformal group of the Riemann sphere, also known as the complex projective space, $\mathbb{C}P^1$, is called the Möbius group. A general transformation is written as,
$$f(z)= \frac{az+b}{cz+d}$$
for $a,b,c,d \in \mathbb{C}$ satisfying $ad-bc\neq 0$.
Example: Flat $\mathbb{R}^{p,q}$ Space
For flat Euclidean space, the metric is given by
$$ds^2 = dz d\bar{z}$$
where we treat $z,\bar{z}$ as independent variables, but the condition $\bar{z}=z^{\star}$ signifies we are really on the real slice of the complex plane. A conformal transformation takes the form,
$$z\to f(z)\quad \bar{z}\to\bar{f}(\bar{z})$$
which is simply a coordinate transformation, and the metric changes by,
$$dzd\bar{z}\to\left( \frac{df}{dz}\right)^{\star}\left( \frac{df}{dz}\right)dzd\bar{z}$$
as required to ensure it is conformal. We can specify an infinite number of $f(z)$, and hence an infinite number of conformal transformations. However, for general $\mathbb{R}^{p,q}$, this is not the case, and the conformal group is $SO(p+1,q+1)$, for $p+q > 2$. This post imported from StackExchange Physics at 2014-06-06 20:08 (UCT), posted by SE-user JamalS |
For proving the quadratic reciprocity, Gauss sums are very useful. However this seems an ad-hoc construction. Is this useful in a wider context? What are some other uses for Gauss sums?
Gauss sums are not an ad-hoc construction! I know two ways to motivate the definition, one of which requires that you know a little Galois theory and the other which is totally mysterious to me.
Here is the Galois-theoretic explanation. Let $\zeta_p$ be a primitive $p^{th}$ root of unity, for $p$ prime. The cyclotomic field $\mathbb{Q}(\zeta_p)$ is Galois, so one can define its Galois group, the group of all field automorphisms which preserve $\mathbb{Q}$. Such an automorphism is determined by what it does to $\zeta_p$, and it must send $\zeta_p$ to another primitive $p^{th}$ root of unity. It follows that the Galois group $G = \text{Gal}(\mathbb{Q}(\zeta_p)/\mathbb{Q})$ is isomorphic to $(\mathbb{Z}/p\mathbb{Z})^{\times}$, which is cyclic of order $p-1$.
Now suppose $p$ is odd. As a cyclic group of even order, $G$ has a unique subgroup $H$ of index two given precisely by the multiplicative group of quadratic residues $\bmod p$, so by the fundamental theorem of Galois theory the fixed field $\mathbb{Q}(\zeta_p)^H$ is the unique quadratic subextension of $\mathbb{Q}(\zeta_p)$. And it's not hard to see that this unique quadratic subextension must be generated by
$$\sum_{\sigma \in H} \sigma(\zeta_p) = \sum_{a \text{ is a QR}} \zeta_p^a = \frac{1}{2} \left( \sum_{a=1}^{p-1} \zeta_p^{a^2} \right)$$
which you will of course recognize as a Gauss sum! So the Gauss sum generates a quadratic subextension, and any of various methods will tell you that this subextension is precisely $\mathbb{Q}(\sqrt{p^{\ast}})$ where $p^{\ast} = (-1)^{ \frac{p-1}{2} } p$. (This does not actually require any computation: if you know enough algebraic number theory, it follows from a consideration of which primes ramify in cyclotomic extensions.)
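As a concrete sanity check (not in the original answer), take $p=5$: the quadratic residues mod $5$ are $1$ and $4$, so the sum above is
$$\zeta_5 + \zeta_5^4 = 2\cos\frac{2\pi}{5} = \frac{\sqrt{5}-1}{2},$$
which indeed generates $\mathbb{Q}(\sqrt{5}) = \mathbb{Q}(\sqrt{p^{\ast}})$, consistent with $p^{\ast} = (-1)^{(5-1)/2}\cdot 5 = 5$.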
The totally mysterious explanation is that Gauss sums naturally appear when you start thinking about the discrete Fourier transform. For example, the trace of the DFT matrix is a Gauss sum. But more mysteriously, Gauss sums are eigenfunctions of the DFT in a certain sense. (I sketch how this works here.) There is a sort of mysterious connection here to the Gaussian distribution, which is an eigenfunction of the continuous Fourier transform; see this MO question. Again, I don't know what to make of this. There is a book by Berg called The Fourier-analytic proof of quadratic reciprocity and it may or may not be about this construction.
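For instance, here is a quick numerical illustration of the "trace of the DFT matrix" remark (a sketch added for illustration; it is not part of the original answer):

import numpy as np

N = 7                                          # an odd prime with N ≡ 3 (mod 4)
n = np.arange(N)
F = np.exp(2j * np.pi * np.outer(n, n) / N)    # unnormalized DFT matrix, F[j, k] = e^{2*pi*i*jk/N}
print(np.trace(F))                             # quadratic Gauss sum: sum_j e^{2*pi*i*j^2/N}
print(1j * np.sqrt(N))                         # classical value i*sqrt(N) for N ≡ 3 (mod 4); both are ~2.6458j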
Not just quadratic reciprocity: one can use them to prove higher reciprocity laws; see Ireland and Rosen's A Classical Introduction to Modern Number Theory. They also turn up in the functional equation for Dirichlet L-functions (and are massively generalized in the topic of root numbers).
They are also used to describe something called the Talbot Effect:
look at #8 in the list. I attended a seminar by Mike Berry about 12 years ago where he claimed that the Talbot Effect was a physical manifestation of Gauss Sums.
Srinivasa Ramanujan actually discovered some definite integral formulas related to Gauss sums. Please see the article below:
Some definite integrals connected with Gauss sums. Messenger of Mathematics, XLIV, 1915, 75–85.
From Wikipedia (sorry, I can't explain this):
The absolute value of Gauss sums is usually found as an application of Plancherel's theorem on finite groups.
Another application of the Gauss sum: How to prove that: $\tan(3\pi/11) + 4\sin(2\pi/11) = \sqrt{11}$
Gauss sums and exponential sums in general are particularly useful for determining the size of certain algebraic varieties in finite fields or even in general abelian groups. If one defines
$$ A_t = \{x \in \mathbb{F}_q^d : f(x) = t\} $$
where $t \in \mathbb{F}_q\setminus\{0\}$, then by orthogonality we have
$$ |A_t| = q^{-1} \sum_{s \in \mathbb{F}_q} \sum_{x \in \mathbb{F}_q^d} \chi(s (f(x) - t)), $$
where $\chi$ is any nontrivial additive character on $\mathbb{F}_q$.
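Here "orthogonality" refers to the standard additive-character identity (stated for completeness):
$$\sum_{s\in\mathbb{F}_q}\chi(su) = \begin{cases} q, & u = 0,\\ 0, & u \neq 0,\end{cases}$$
so the sum over $s$ simply picks out those $x$ with $f(x)=t$.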
For example, if one considers $x = (x_1, \dots , x_d) \in \mathbb{F}_q^d$ and defines $f(x) = x_1^2 + \dots + x_d^2$, then $A_t$ would be some finite field analogue of a sphere. Bounding such a set would then be equivalent to bounding
$$ q^{-1}\sum_{s \in \mathbb{F}_q} \left(\sum_{x \in \mathbb{F}_q} \chi(sx^2) \right)^d \chi(-st). $$
Gauss Sums and in particular well-known bounds for Gauss sums imply that such a sum is of size $q^{d-1}(1 + o_d(1))$ as $q \to \infty$.
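To see where that size comes from (a brief sketch of the standard estimate, not part of the original answer): the $s=0$ term contributes exactly $q^{-1}\cdot q^{d} = q^{d-1}$, while for each of the $q-1$ values $s\neq 0$ the classical bound $\left|\sum_{x\in\mathbb{F}_q}\chi(sx^2)\right| = \sqrt{q}$ (for odd $q$) bounds the remaining contribution by
$$q^{-1}(q-1)\,q^{d/2} \le q^{d/2},$$
which is of strictly lower order than $q^{d-1}$ once $d \ge 3$; for $d = 2$ this crude bound is no longer enough and the error term has to be tracked more carefully.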
As Qiaochu points out above, such bounds are nice to have when one works with the discrete Fourier transform.
A small additional note, in line with an earlier answer: Gauss sums are, literally, the Lagrange resolvents obtained in the course of expressing roots of unity in terms of radicals. (Yes, the Kummer–Stickelberger business can then be used to effectively obtain the actual radical expressions: here.)
If the function $f$ is defined on a domain $D \subseteq \mathbb{R}$ that is unbounded above and is eventually monotone and eventually bounded, then $\lim_{x\rightarrow \infty} f(x)$ is finite.
I tried to work out the proof as follows:
Since $f$ is eventually monotone, there exists $x^*$ such that for all $x^* \leq x_1 < x_2$ we have $f(x_1) \leq f(x_2)$ (taking, without loss of generality, the eventually increasing case),
and since $f$ is eventually bounded, there exist $\hat{x}$ and $M \in \mathbb{R}$ such that $|f(x)| \leq M$ for all $x \geq \hat{x}$.
Take $x_0 = \max(\hat{x}, x^*)$; for $x \geq x_0$ the function is increasing and bounded, and I want to conclude that $\lim_{x\rightarrow \infty} f(x)$ exists and is finite.
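One standard way to finish (a sketch, not part of the original attempt): for $x \geq x_0$ the set $\{f(x) : x \geq x_0\}$ is nonempty and bounded above by $M$, so $L := \sup\{f(x) : x \geq x_0\}$ exists. Given $\varepsilon > 0$, pick $x_\varepsilon \geq x_0$ with $f(x_\varepsilon) > L - \varepsilon$; monotonicity then gives $L - \varepsilon < f(x) \leq L$ for all $x \geq x_\varepsilon$, hence $\lim_{x\to\infty} f(x) = L$ is finite.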
The existence of $\ce{H4O^{2+}}$ has been inferred from hydrogen/deuterium isotopic exchange monitored through $\ce{^{17}O}$ NMR spectroscopy in the most extremely acidic condensed phase superacid we can make, fluoroantimonic acid ($\ce{HF:SbF5}$ or $\ce{HSbF6}$). It seems that even the slightly weaker but still very much superacidic magic acid $\ce{HSO3F:...
No, it is not correct, because it happens with other liquids (solvents) also. Taking Wikipedia's definition: Osmosis is the spontaneous net movement of solvent molecules through a semi-permeable membrane into a region of higher solute concentration, in the direction that tends to equalize the solute concentrations on the two sides. Only as extra ...
I think, first I should clarify what causes the osmotic pressure: Osmosis occurs when two solutions of different concentrations are separated by a membrane which will selectively allow some species, e.g. the solvent, through it but not others, e.g. the solute. So, there is a concentration gradient between the two solutions which would lead to a diffusion ...
According to this document (https://arxiv.org/pdf/physics/0305011v1.pdf): "Water molecules can pass through the membranes in either direction, and they do. But because the concentration of water molecules is greater in the pure water than in the solution, there is a net flow from the pure water into the solution." Hydraulic equilibrium is achieved when ...
Your example with the non-polar substance wouldn't qualify in a discussion of osmosis. There needs to be a solution on at least one side of the membrane. Since a typical non-polar substance would be insoluble in water, osmosis would not apply. However, in your diagram, water would move from left to right due to a concept called pressure potential. You will ...
It has to do with the fact that the osmotic pressure of any solution depends only on the relative number of particles in the solution (in the case of dilute solutions), irrespective of their nature (a colligative property). Therefore, you take a certain known weight of the substance you want to calculate the molecular mass of, prepare a solution and measure ...
IIRC, the solubility of virtually all gases in water decreases with temperature. This is the reason why heating water will result in bubble nucleation, and this can be exploited to produce very clear ice by boiling water, perhaps several times, before putting it into the trays.
Your concept of a cell is a little vague. Although the cell membrane is made from a phospholipid bilayer, it is not impenetrable. In fact, there are many proteins that pass through the cell membrane where dissolved material can pass. One of these proteins is called the sodium-potassium pump. In a living cell, the protein uses ATP to move sodium out of the cell and drag ...
Yes, I think so. Based on our comments above, I believe you want to prepare two solutions that are isotonic with each other (have the same concentrations of different solutes) and allow them to communicate across a membrane that is selectively permeable to water only. Tonicity is described here as: Tonicity is a measure of the effective osmotic pressure ...
You are correct in thinking that urea and sucrose would have van't Hoff factor $i = 1$, since they are non-electrolytes and don't undergo dissociation (or ion pairing). What you are left with are the following: $\ce{NaCl}$, $\ce{BaCl_2}$ and $\ce{[Cr(NH_3)_4Cl_2]Cl}$. Let us begin with the two chloride salts: $\ce{NaCl}$, $\ce{BaCl_2}$. We assume ...
To minimize the air bubbles in ice, either 1) hold the water under a vacuum or 2) 'degas' the water with an ultrasonic cleaner before freezing. 1) Holding the water under a vacuum will pull the dissolved gases out of solution. They will diffuse to the air-water surface, and then be vacuumed away. 2) An ultrasonic cleaner will vibrate the liquid at ...
There are way more sensitive ways to measure dextrose (glucose) concentration than polarimetry. For example, there are reducing sugar tests like the dinitrosalicylic acid test or (better) the 2,2'-bicinchoninic acid test. Just measure the dextrose concentrations inside and outside of the bag. I have protocols for these tests if you want; it involves getting ...
Benedict's reagent detects reducing sugars (free aldehyde group) by reducing soluble blue Cu(II) to insoluble red Cu(I) oxide. An aqueous sucrose solution is inert unless it is pre-hydrolyzed by heating with a strong acid catalyst. That certainly is a poser. One possibility (SWAG, not fact) is that regenerated crosslinked cellulose is overall not permeable to ...
Let the subscript A refer to the solvent and B refer to the solute, and let the subscript 1 refer to the pure solvent container and 2 refer to the mixture container. Then the free energy of the combined system is given by:$$G=n_{A1}\mu_A^0+n_{A2}\left(\mu_A^0+RT\ln{\left[\frac{n_{A2}}{n_{A2}+n_{B_2}}\right]}\right)+n_{B2}\left(\mu_B^0+RT\ln{\left[\frac{n_{...
There exist many types of semi-permeable membranes (the ones used for dialysis tubing), with various pore sizes. One of the very common lab experiments on the topic of osmosis uses a sucrose solution (sucrose is cheap) and a small-pore membrane, such that water and small ions (typically Na+ and Cl–) can pass, but not sucrose. The one which you link to ...
I did it in this step: First I use ΔTb = kb·m and, substituting Tb = 101.45 and kb = 0.512, I get m = 198.144 mol/kg. This should be the first clue that something went wrong. 198 mol of sucrose is about 67 kilograms of sucrose, so your implied molality means that for every kg of water, there are 67 kg of sucrose. That can't be right. (It would be about the ...
You are not wrong in your understanding of a water softener, nor are you wrong in your understanding of a reverse osmosis filtration system. Does the RO filter care whether the supply water is salty or hard? I would like to make the point that your RO filter is incapable of caring about the hardness of the water. This is more about carefully choosing ...
Most of the time what you say about semi-permeable membranes is correct. A semi-permeable membrane is a membrane that allows only solvent molecules to pass through it; solute molecules are blocked. Solvent molecules flow from the side of the membrane that has low solute concentration (i.e. high solvent concentration) to the side with high solute concentration (...
If I understand correctly, if the concentration of water is greater on the outside of a bacterial cell, the water will go inside the bacteria, allowing it to multiply or grow faster. Does glycerin have the same effect? Usually the point of adding salt to water is to lower its activity, in order to make the water concentration outside the cell lower than on the ...
If you dribble a basketball, the force of your hand on the ball, that keeps it bouncing, is not (1) gravity; (2) the weak force; or (3) the strong force. It must be electromagnetic force--the electrons in the molecules of your skin repelling the electrons in the molecules on the surface of the ball. Also, your skin and the ball remain (largely) intact ...
I am afraid you are stuck because you considered glucose as osmotically active. But you assumed that glucose can freely move across the membrane, which is the opposite of osmotic activity. So, short-short answer: ignore the glucose; it is an osmolyte, but it is not osmotically active on the membrane you are thinking of; its concentration is going to ...
To evaluate whether the process is spontaneous you have to take the difference of the chemical potentials $\mu$. Solvent molecules go from the pure solvent (reactant) to the solution (product):$$\Delta_r G = \mu_\text{product} - \mu_\text{reactant} = (\mu_\text{solvent}^\circ + R T \ln x_A) - \mu_\text{solvent}^\circ < 0$$ Throughout the process, ...
Polyethylene oxide, AKA polyethylene glycol or PEG, has MW > 100,000. You can find higher-MW PEG in disposable diapers and similar absorbent products. You could also use agar, pectin or gelatin, available in groceries and from lab supply companies. One of my favorite demos was to make an agar solution with a bit of phenolphthalein added and put it in ...
To first order the solute doesn't matter, though it probably has a small influence. The driving force for osmosis is entropic. The osmotic pressure then is not due to the forces between the solute and solvent, but arises mostly from the fact that you can mix the two together. Indeed, osmosis works even when there's less interaction between the solvent and solute, ...
A semi-permeable membrane is not ‘smart’. If you wish, it is actually rather dumb. All it can do is perform a simple selection of ‘I’m letting you through’ versus ‘I’m not letting you through.’ Most membranes will work along the lines of size, polarisability or both. Size is easily explained: The membrane’s pores have a certain size and what is larger than ...
The short answer is pressure. What you've described above is basically just osmosis, and without applied pressure to overcome the osmotic pressure of the system, reverse osmosis doesn't happen. This Wikipedia article gives a good description of the concept, and the difference between filtration, osmosis and reverse osmosis: Osmosis is a natural ...
The expression you need is $$\Pi = iMRT$$ where $i$, the van't Hoff factor, describes the degree of ionization of the solute (for sucrose, it's 1; for $\ce{NaCl}$, it's 2, since we get 2 moles of ions for each mole of salt we dissolve). The other variables are as you describe. Thus, the osmotic pressure for your salt solution is twice that of your ...
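As a quick numerical illustration of that formula (added here for concreteness; the numbers are not from the original answer), take a 0.10 M NaCl solution at $T = 298\ \mathrm{K}$ with $R = 0.08206\ \mathrm{L\,atm\,mol^{-1}\,K^{-1}}$:
$$\Pi = iMRT = 2 \times 0.10\ \mathrm{mol\,L^{-1}} \times 0.08206\ \mathrm{L\,atm\,mol^{-1}\,K^{-1}} \times 298\ \mathrm{K} \approx 4.9\ \mathrm{atm},$$
exactly twice the $\approx 2.4\ \mathrm{atm}$ that a 0.10 M sucrose solution ($i=1$) would give.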
The water molecules move across the membrane due to thermal energy. If concentration of solute on both sides of the membranes is the same, the amount of water crossing the membrane in either direction is same but when the concentration is unequal, the side with more solute particles (higher concentration, that is) attracts the water molecules and thus ... |
If we agree that $\textbf{(a) }\dfrac{x}{x}=1$, $\textbf{(b) }\dfrac{0}{x}=0$, and that $\textbf{(c) }\dfrac{x}{0}=\infty^{\large\dagger}$, and let us suppose $z=0$:
$$\begin{align*}z&=0&&\text{given.}\\\dfrac{z}{z}&=\dfrac{0}{z}&&\text{divide each side by }z.\\\dfrac{z}{z}&=0&&\text{by }\textbf{(b)}.\\1&=0&&\text{by }\textbf{(a)}.\end{align*}$$Now, from this, we can say that $\dfrac{0}{0}=1$. What went wrong?
$\dagger$ Or undefined, if you prefer it.
I want to transform a differential equation from polar coordinates $(r,\theta)$ to the following $(u, v, \phi)$ coordinate system:
$$ u = r \cos(\theta - \phi) \\ v = r \sin(\theta - \phi) \\ \phi = \theta + \arctan(\dot r,\ r\dot\theta) $$
$u$ and $v$ form a rectilinear coordinate system aligned with the direction of motion along the $u$-axis. Put another way, $\phi$ is heading, $u$ is distance from the origin along the track, and $v$ is cross-track distance from the origin. (The reason I want to use coordinate system is numerical accuracy.)
I know that adding a variable means I need to add an equation, and I have a differential equation for $\phi$:
$$ \dot \phi = -\frac{\dot v}{u} $$
I'm also using the following substitutions:
$$ r = \sqrt{u^2+v^2} \\ \theta = \phi + \arctan(u, v) $$
The problem comes when I try taking the derivatives of $r$ and $\theta$:
$$ \ddot r = \frac{u\ddot u + v\ddot v}{\sqrt{u^2+v^2}} + \cdots \\ \ddot \theta = -\frac{v}{u}\frac{u\ddot u + v\ddot v}{u^2+v^2} + \cdots $$
Both of them contain a multiple of $u\ddot u+ v\ddot v$. Therefore I can't solve for both $\ddot u$ and $\ddot v$ (in fact the equations are inconsistent, so I can't even get one in terms of the other).
Is there something about my choice of coordinate system that is causing problems, or is it just my method of substitution?
Update
I think I've figured out part of the problem. I can set up two equations for $r$ and $\theta$ in terms of $u$, $v$, and $\phi$, and I can set up four more for the first and second derivatives. Plus, I can add one more for $\phi$ as a function of $\theta$, $r$, $\dot r$ and $\dot\theta$. This gives me seven equations.
I need to solve for $\ddot u$, $\ddot v$, and $\dot \phi$ to set up my differential equation. I also need to solve for $r$, $\theta$, $\dot r$ and $\dot\theta$ to eliminate them from the final equations. This uses up all seven of my equations.
However, there is still a $\ddot\phi$ floating around. I can try to solve for it by taking the derivative of $\dot\phi$, but it contains $\ddot u$ and $\ddot v$; and when I substitute it into the expression for $\ddot v$ I end up with $\ddot v = \cdots + \ddot v$, which is false.
What is another way to get $\ddot \phi$ that avoids this problem?
First:
Yes, when you are dealing with a function $f$ of one real variable $x$, the partial derivative $\frac{\partial f}{\partial x}$ coincides with the total derivative of $f$ with respect to $x$. Beware that those are generally two different things. They only coincide for functions that have a purely explicit relation to the variable $x$ in question. That is, the other variables do not depend on $x$.
Intuitively, the total derivative of $f$ measures how $f$ changes along the direction $x$ as all its variables vary. The partial derivative measures how $f$ changes along $x$ when the other parameters do not vary. You will soon learn about the derivative along a vector field, or Lie derivative. This is the really nice one, as it does not depend on your choice of coordinates $x_1,\dots,x_n$.
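As a small computational illustration of this distinction (a sketch using sympy; the particular function and the relation $x_1 = x^2$ are just an example, chosen to mirror example c) further below):

import sympy as sp

x, x1 = sp.symbols('x x1')
f = x + x1                               # f(x, x1), with x1 treated as its own variable

partial = sp.diff(f, x)                  # partial derivative: x1 held fixed -> 1
total = sp.diff(f.subs(x1, x**2), x)     # total derivative along the relation x1 = x**2 -> 2*x + 1

print(partial, total)                    # 1   2*x + 1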
Second:
After a brief counsel with Git Gud I hereby revise the second section.
The symbol $\frac{\partial f(x)}{\partial x}$ is not meaningless. The following abstract nonsense is well defined. Consider $\frac{\partial f(x)}{\partial x}$ to be the derivative of $f$ with respect to a constant $x$, evaluated at that constant. Note that the derivative of any function with respect to a constant always turns out to be the empty function. In other words, each value is the empty set. Now the symbol $\frac{\partial f(x)}{\partial x}$ stands for the empty set, hence the formula in question is well formed.
Your lecturer must have had something similar in mind, or the formula he wrote is not well formed. Even within this context it is false. You have the right to be confused, but do ask him politely about it.
Third:
It is strange at first, but when you think about it, you realize the need for your parameters to depend further on some other quantities. Consider a change of variables: your new and your old variables are related to each other... Now if you define the differential $df$ by $$df(x_1,x_2,\dots,x_n)=\sum_{i=1}^n\frac{\partial f}{\partial x_i}dx_i$$then this is what the differential means. This meaning becomes more sensible when you study differential forms in differential geometry. In layman's terms, the differentials, also called differential $1$-forms, are just the smooth covector fields.
EDIT: (Answer to the comments)
The symbol $\frac{\partial f}{\partial x_1}\frac{\partial x_1}{\partial x}$ denotes the product of the partial derivative of $f$ with respect to $x_1$ with the partial derivative of $x_1$ with respect to $x$. Here $f$ is a twice smooth function of $x$ and $x_1$. You seem to be having trouble allowing the arguments $x$ and $x_1$ to depend on each other. This is an abstraction you have to allow.
I hope the concept becomes clear through examples.
a) Most often, there are no relations between the arguments indeed. Then $\frac{\partial x_i}{\partial x}=0$ and $\frac{\partial x}{\partial x_i}=0$. Then the total derivative coincides with the partial derivative.
b) $\quad x'=\frac{dx}{dx}=\frac{\partial x}{\partial x}=1$. Then $\frac{\partial f}{\partial x}=\frac{\partial f}{\partial x}\frac{\partial x}{\partial x}$
c) Imagine $x_1$ as a function of $x$, for instance $x_1(x)=x^2$. Now define $f$ by $f(x,x_1)=x+x_1(x)$. This is a smooth function of $x$ and of $x_1$. Also $$\frac{df}{dx}=\frac{\partial f}{\partial x}\frac{\partial x}{\partial x}+\frac{\partial f}{\partial x_1}\frac{\partial x_1}{\partial x}=1+2x$$
d) Using the above notation, consider the restrictions of $x_1$ and of $f$ to the non-negative reals. Then $x=\sqrt{x_1}$ and $$\frac{df}{dx_1}=\frac{\partial f}{\partial x}\frac{\partial x}{\partial x_1}+\frac{\partial f}{\partial x_1}\frac{\partial x_1}{\partial x_1}=\frac{1}{2\sqrt{x_1}}+1$$
e) More generally consider a smooth function $f(x_1,\dots,x_n)$ and a relation $R(x_1,\dots,x_n)=0$ such that the function $R$ is also smooth. Now you may talk about $\frac{\partial x_i}{\partial x_j}$ for you can always solve the relation for $x_i$ as a function of the rest. This is as clear as it can get.
f) Consider also a change of variables.
g) Consider implicit functions.
h) Foremost, I recommend reading about constraints and holonomic systems. This makes the concept more sensible... Wikipedia may not be a good source. I recommend E. T. Whittaker's Analytical Dynamics and Arnold's Mathematical Methods of Classical Mechanics.
Let $X$ be a topological space. If $Y$ is a subspace of $X$, then $Y$ is a retract of $X$ if there exists a continuous function $r:X \rightarrow Y$ such that $r(y)=y$ for each $y\in Y$. The continuous map r is called the retraction.
So my question is how to show that the logarithmic spiral $C=\{ 0 \times 0 \} \cup \{ e^t\cos t \times e^t\sin t \; | \; t\in \mathbb{R}\}$ is a retract of $\mathbb{R}^2$?
And is it possible to give a specific retraction $r:\mathbb{R}^2 \rightarrow C$?
Question from Topology by J. Munkres, 2nd ed., page 223.
EDIT: Using hints in the comments I have a possible retraction.
Take $f: \mathbb{R}^2 \rightarrow$ the $x$-axis. Now each point $(x,y)$ in $\mathbb{R}^2$ can be written as $(x,y)=(r\cos \theta,r\sin \theta)$. But $r$ can be written as $e^{\log r}$. So define $f(r\cos \theta,r\sin \theta)=\left\{ \begin{array}{lr} (0,0) & : r=0\\ (\log r,0) & : otherwise\\ \end{array} \right.$
Now define $g: \{(x,0):x\ge 0\} \rightarrow C$ as $g(x,0)=\left\{ \begin{array}{lr} (0,0) & :x=0\\ (e^x\cos x,e^x\sin x) & : otherwise\\ \end{array} \right.$
The composition function $g\circ f$ takes $\{ e^t\cos t \times e^t\sin t \; | \; t\ge 0\}$ to itself. But what about when $t<0$?
Is this approach completely wrong? Or can this be modified somehow?
Let $X_n(\Bbb{Z})$ be the simplicial complex whose vertex set is $\Bbb{Z}$ and such that the vertices $v_0,...,v_k$ span a $k$-simplex if and only if $|v_i-v_j| \le n$ for every $i,j$. Prove that $X_n(\Bbb{Z})$ is $n$-dimensional...
no kidding, my maths is foundations (basic logic but not pedantic), calc 1 which I'm pretty used to work with, analytic geometry and basic linear algebra (by basic I mean matrices and systems of equations only)
Anyway, I would assume if removing $ fixes it, then you probably have an open math expression somewhere before it, meaning you didn't close it with $ earlier. What's the full expression you're trying to get? If it's just the frac, then your code should be fine
This is my first time chatting here in Math Stack Exchange. So I am not sure if this is frowned upon but just a quick question: I am trying to prove that a proper subgroup of $\mathbb{Z}^n$ is isomorphic to $\mathbb{Z}^k$, where $k \le n$. So we must have $rank(A) = rank(\mathbb{Z}^k)$ , right?
For four proper fractions $a, b, c, d$ X writes $a+ b + c >3(abc)^{1/3}$. Y also added that $a + b + c> 3(abcd)^{1/3}$. Z says that the above inequalities hold only if a, b,c are positive. (a) Both X and Y are right but not Z. (b) Only Z is right (c) Only X is right (d) Neither of them is absolutely right.
Yes, @TedShifrin the order of $GL(2,p)$ is $p(p+1)(p-1)^2$. But I found this on a classification of groups of order $p^2qr$. There order of $H$ should be $qr$ and it is present as $G = C_{p}^2 \rtimes H$. I want to know that whether we can know the structure of $H$ that can be present?
Like can we think $H=C_q \times C_r$ or something like that from the given data?
When we say it embeds into $GL(2,p)$ does that mean we can say $H=C_q \times C_r$? or $H=C_q \rtimes C_r$? or should we consider all possibilities?
When considering finite groups $G$ of order, $|G|=p^2qr$, where $p,q,r$ are distinct primes, let $F$ be a Fitting subgroup of $G$. Then $F$ and $G/F$ are both non-trivial and $G/F$ acts faithfully on $\bar{F}:=F/ \phi(F)$ so that no non-trivial normal subgroup of $G/F$ stabilizes a series through $\bar{F}$.
And when $|F|=pr$. In this case $\phi(F)=1$ and $Aut(F)=C_{p-1} \times C_{r-1}$. Thus $G/F$ is abelian and $G/F \cong C_{p} \times C_{q}$.
In this case how can I write G using notations/symbols?
Is it like $G \cong (C_{p} \times C_{r}) \rtimes (C_{p} \times C_{q})$?
First question: Then it is, $G= F \rtimes (C_p \times C_q)$. But how do we write $F$ ? Do we have to think of all the possibilities of $F$ of order $pr$ and write as $G= (C_p \times C_r) \rtimes (C_p \times C_q)$ or $G= (C_p \rtimes C_r) \rtimes (C_p \times C_q)$ etc.?
As a second case we can consider the case where $C_q$ acts trivially on $C_p$. So then how to write $G$ using notations?
There it is also mentioned that we can distinguish between 2 cases. First, suppose that the Sylow $q$-subgroup of $G/F$ acts non-trivially on the Sylow $p$-subgroup of $F$. Then $q \mid (p-1)$ and $G$ splits over $F$. Thus the group has the form $F \rtimes G/F$.
A presentation $\langle S\mid R\rangle$ is a Dehn presentation if for some $n\in\Bbb N$ there are words $u_1,\cdots,u_n$ and $v_1,\cdots, v_n$ such that $R=\{u_iv_i^{-1}\}$, $|u_i|>|v_i|$ and for all words $w$ in $(S\cup S^{-1})^\ast$ representing the trivial element of the group one of the $u_i$ is a subword of $w$
If you have such a presentation there's a trivial algorithm to solve the word problem: Take a word $w$, check if it has $u_i$ as a subword, in that case replace it by $v_i$, keep doing so until you hit the trivial word or find no $u_i$ as a subword
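A minimal sketch of that rewriting loop in Python (a hypothetical helper, assuming each generator and each inverse letter is encoded as a single character and that free reduction is handled separately):

def dehn_reduce(w, relators):
    """Greedily apply the Dehn rewriting rules: whenever some u_i occurs as a
    subword of w, replace it by the strictly shorter word v_i; for a genuine
    Dehn presentation, w represents the identity iff this ends at the empty word."""
    changed = True
    while changed:
        changed = False
        for u, v in relators:            # relators = [(u_1, v_1), ..., (u_n, v_n)]
            i = w.find(u)
            if i != -1:
                w = w[:i] + v + w[i + len(u):]
                changed = True
                break
    return w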
There is good motivation for such a definition here
So I don't know how to do it precisely for hyperbolic groups, but if $S$ is a surface of genus $g \geq 2$, to get a geodesic representative for a class $[\alpha] \in \pi_1(S)$ where $\alpha$ is an embedded loop, one lifts it to $\widetilde{\alpha}$ in $\Bbb H^2$ by the locally isometric universal covering, and then the deck transformation corresponding to $[\alpha]$ is an isometry of $\Bbb H^2$ which preserves the embedded arc $\widetilde{\alpha}$
It has to be an isometry fixing a geodesic $\gamma$ with endpoints at the boundary being the same as the endpoints of $\widetilde{\alpha}$.
Consider the homotopy of $\widetilde{\alpha}$ to $\gamma$ by straightline homotopy, but straightlines being the hyperbolic geodesics. This is $\pi_1(S)$-equivariant, so projects to a homotopy of $\alpha$ and the image of $\gamma$ (which is a geodesic in $S$) downstairs, and you have your desired representative
I don't know how to interpret this coarsely in $\pi_1(S)$
@anakhro Well, they print in bulk, and on really cheap paper, almost transparent and very thin, and an offset machine is really cheaper per page than a printer, you know, but you should be printing in bulk, it's all economy of scale.
@ParasKhosla Yes, I am Indian, and trying to get into some good master's program in math.
Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants. The first branch of algebraic graph theory involves the study of graphs in connection with linear algebra. Especially, it studies the spectrum of the adjacency matrix, or the Lap...
I can probably guess that they are using symmetries and permutation groups on graphs in this course.
For example, orbits and studying the automorphism groups of graphs.
@anakhro I have heard really good thing about Palka. Also, if you do not worry about little sacrifice of rigor (e.g. counterclockwise orientation based on your intuition, rather than, on winding numbers, etc.), Howie's Complex analysis is good. It is teeming with typos here and there, but you will be fine, i think. Also, thisbook contains all the solutions in appendix!
Got a simple question: I gotta find kernel of linear transformation $F(P)=xP^{''}(x) + (x+1)P^{'''}(x)$ where $F: \mathbb{R}_3[x] \to \mathbb{R}_3[x]$, so I think it would be just $\ker (F) = \{ ax+b : a,b \in \mathbb{R} \}$ since only polynomials of degree at most 1 would give zero polynomial in this case
@chandx you're looking for all the $G = P''$ such that $xG + (x+1)G' = 0$; if $G \neq 0$ you can solve the DE to get $G'/G = -x/(x+1) = -1 + 1/(x+1) \implies \ln G = -x + \ln(1+x) + C \implies G = C(1+x)e^{-x}$, which is obviously not a polynomial, so $G = 0$ and thus $P = ax + b$
could you suppose that $\operatorname{deg} P \geq 2$ and show that you wouldn't have nonzero polynomials? Sure.
$\def\abs#1{|#1|}\def\i{\mathbf {i}}\def\ket#1{|{#1}\rangle}\def\bra#1{\langle{#1}|}\def\braket#1#2{\langle{#1}|{#2}\rangle}\def\tr{\mathord{\mbox{tr}}}\mathbf{Exercise\ 4.16}$
This exercise shows that for any Hermitian operator $O:V\to V$, the direct sum of all eigenspaces of $O$ is $V$.
A unitary operator $U$ satisfies $U^\dagger U = I$.
a) Show that the columns of a unitary matrix $U$ form an orthonormal set.
b) Show that if $O$ is Hermitian, then so is $UOU^{-1}$ for any unitary operator $U$.
c) Show that any operator has at least one eigenvalue $\lambda$ and $\lambda$-eigenvector $v_\lambda$.
d) Use part c) to show that for any matrix $A:V\to V$, there is a unitary operator $U$ such that the matrix for $UAU^{-1}$ is upper triangular (meaning all entries below the diagonal are zero).
e) Show that for any Hermitian operator $O:V\to V$ with eigenvalues $\lambda_1, \dots, \lambda_k$, the direct sum of the $\lambda_i$-eigenspaces $S_{\lambda_i}$ gives the whole space:$$S_{\lambda_1} \oplus S_{\lambda_2} \oplus \dots \oplus S_{\lambda_k} = V. \tag{1}$$
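A quick numerical illustration of the statement in part e) (a sketch in Python/numpy, added for illustration only; the $4\times 4$ example is arbitrary):

import numpy as np

# Build a random Hermitian operator O on V = C^4.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
O = (A + A.conj().T) / 2

# eigh returns real eigenvalues and orthonormal eigenvectors (as columns).
vals, vecs = np.linalg.eigh(O)

# The eigenvectors form an orthonormal basis, so the eigenspaces together span all of V.
print(np.allclose(vecs.conj().T @ vecs, np.eye(4)))            # True
print(np.allclose(vecs @ np.diag(vals) @ vecs.conj().T, O))    # True: spectral decomposition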
The TempleMetrics package is a collection of functions implemented by members of the Econometrics Reading Group at Temple University. The main functions (at the moment) are built for distribution regression. That is, one can estimate the distribution of $Y$ conditional on $X$ using a model for a binary outcome. For example, \[\begin{align*} F_{Y|X}(y|x) = \Lambda(x'\beta) \end{align*}\] where $\Lambda(\cdot)$ is some known link function, such as the logit.
You can install TempleMetrics from github with:
or from CRAN using
The first example shows how to run distribution regression for a single value $y$ of $Y$. This example uses the igm dataset, a collection of 500 parent-child pairs of income along with the parent's education level, which comes from the Panel Study of Income Dynamics (PSID).
y0 <- median(igm$lcfincome)
dreg <- distreg(lcfincome ~ lfincome + HEDUC, igm, y0)
dreg
#> $yvals
#> [1] 11.04563
#>
#> $glmlist
#> $glmlist[[1]]
#>
#> Call: glm(formula = formla, family = binomial(link = link), data = dta)
#>
#> Coefficients:
#> (Intercept)     lfincome      HEDUCHS  HEDUCLessHS
#>     15.3320      -1.3976       0.1629       0.2256
#>
#> Degrees of Freedom: 499 Total (i.e. Null);  496 Residual
#> Null Deviance:       693.1
#> Residual Deviance: 640.9    AIC: 648.9
#>
#>
#> attr(,"class")
#> [1] "DR"
In many cases, of primary interest with distribution regression is obtaining $F_{Y|X}(y|x)$ for some particular values of $y$ and $x$. That's what we do in this example.
yvals <- seq(quantile(igm$lcfincome,.05,type=1),
             quantile(igm$lcfincome,.95, type=1), length.out=100)
dres <- distreg(lcfincome ~ lfincome + HEDUC, igm, yvals)
xdf <- data.frame(lfincome=10, HEDUC="LessHS")
y0 <- yvals[50]
ecdf(igm$lcfincome)(y0)
#> [1] 0.328
Fycondx(dres, y0, xdf)
#> [[1]]
#> Empirical CDF
#> Call: NULL
#> x[1:100] = 9.6856, 9.7073, 9.7291, ..., 11.814, 11.836
This example says that: (1) the fraction of “children” in the dataset with income below 46628 is 0.33, but (2) among children whose parents' income is 22026 and whose parents have less than a HS education, we estimate that the fraction with income below 46628 is 0.73.
1. Measurement of the ratio of the production cross sections times branching fractions of $\mathrm{B_c^\pm}\to \mathrm{J}/\psi\,\pi^\pm$ and $\mathrm{B^\pm}\to \mathrm{J}/\psi\,\mathrm{K}^\pm$ and $\mathcal{B}(\mathrm{B_c^\pm}\to \mathrm{J}/\psi\,\pi^\pm\pi^\pm\pi^\mp)/\mathcal{B}(\mathrm{B_c^\pm}\to \mathrm{J}/\psi\,\pi^\pm)$ in pp collisions at $\sqrt{s}=7$ TeV
Journal of High Energy Physics, ISSN 1029-8479, 1/2015, Volume 2015, Issue 1, pp. 1 - 30
The ratio of the production cross sections times branching fractions $\sigma(\mathrm{B_c^\pm})\,\mathcal{B}(\mathrm{B_c^\pm}\to \mathrm{J}/\psi\,\pi^\pm)/\sigma(\mathrm{B^\pm})\,\mathcal{B}(\mathrm{B^\pm}\to \mathrm{J}/\psi\,\mathrm{K}^\pm)$...
B physics | Branching fraction | Hadron-Hadron Scattering | Quantum Physics | Quantum Field Theories, String Theory | Classical and Quantum Gravitation, Relativity Theory | Physics | Elementary Particles, Quantum Field Theory
Journal Article
2. Search for nonresonant Higgs boson pair production in the $\mathrm{b}\overline{\mathrm{b}}\mathrm{b}\overline{\mathrm{b}}$ final state at $\sqrt{s}$ = 13 TeV
Journal of High Energy Physics, ISSN 1029-8479, 4/2019, Volume 2019, Issue 4, pp. 1 - 49
Results of a search for nonresonant production of Higgs boson pairs, with each Higgs boson decaying to a $\mathrm{b}\overline{\mathrm{b}}$ pair, are...
Higgs physics | Beyond Standard Model | Hadron-Hadron scattering (experiments) | Quantum Physics | Quantum Field Theories, String Theory | Classical and Quantum Gravitation, Relativity Theory | Physics | Elementary Particles, Quantum Field Theory
Journal Article
3. Studies of $\mathrm{B}^{*}_{\mathrm{s}2}(5840)^0$ and $\mathrm{B}_{\mathrm{s}1}(5830)^0$ mesons including the observation of the $\mathrm{B}^{*}_{\mathrm{s}2}(5840)^0 \rightarrow \mathrm{B}^0 \mathrm{K}^0_{\mathrm{S}}$ decay in proton-proton collisions at $\sqrt{s}=8\,\text{TeV}$
European Physical Journal. C, Particles and Fields, ISSN 1434-6044, 11/2018, Volume 78, Issue 11
Measurements of $\mathrm{B}^*_\mathrm{s2}(5840)^0$ and $\mathrm{B}_\mathrm{s1}(5830)^0$ mesons are performed using a data sample of proton-proton collisions...
PHYSICS OF ELEMENTARY PARTICLES AND FIELDS | Heavy flavour spectroscopy | b hadrons | Hadron spectroscopy | CMS | Physics | Experimental results
Journal Article
4. Studies of $\mathrm{B}^{*}_{\mathrm{s}2}(5840)^0$ and $\mathrm{B}_{\mathrm{s}1}(5830)^0$ mesons including the observation of the $\mathrm{B}^{*}_{\mathrm{s}2}(5840)^0 \rightarrow \mathrm{B}^0 \mathrm{K}^0_{\mathrm{S}}$ decay in proton-proton collisions at $\sqrt{s}=8\,\text{TeV}$
The European Physical Journal C, ISSN 1434-6044, 11/2018, Volume 78, Issue 11, pp. 1 - 26
Measurements of $\mathrm{B}^{*}_{\mathrm{s}2}(5840)^0$ and $\mathrm{B}_{\mathrm{s}1}(5830)^0$ mesons are performed...
Heavy flavour spectroscopy | b hadrons | Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Hadron spectroscopy | CMS | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology | Experimental results
Journal Article
5. Search for $\mathrm{t}\overline{\mathrm{t}}\mathrm{H}$ production in the $\mathrm{H}\to\mathrm{b}\overline{\mathrm{b}}$ decay channel with leptonic $\mathrm{t}\overline{\mathrm{t}}$ decays in proton-proton collisions at $\sqrt{s}=13$ TeV
Journal of High Energy Physics, ISSN 1126-6708, 03/2019, Volume 2019, Issue 3, pp. 1 - 62
A search is presented for the associated production of a standard model Higgs boson with a top quark-antiquark pair ($\mathrm{t}\overline{\mathrm{t}}\mathrm{H}$...
Hadron-Hadron scattering (experiments) | Top physics | Higgs physics | Protons | Confidence intervals | Large Hadron Collider | Particle collisions | Signal strength | Decay | Luminosity | Higgs bosons | Quarks | Solenoids | Muons
Journal Article
6. Search for Resonant and Nonresonant Higgs Boson Pair Production in the $\mathrm{b}\overline{\mathrm{b}}\tau^{+}\tau^{-}$ Decay Channel in pp Collisions at $\sqrt{s}=13$ TeV with the ATLAS Detector
Physical Review Letters, ISSN 0031-9007, 11/2018, Volume 121, Issue 19
A search for resonant and nonresonant pair production of Higgs bosons in the $\mathrm{b}\overline{\mathrm{b}}\tau^{+}\tau^{-}$ final state is presented. The search uses 36.1 fb$^{-1}$ of pp collision data...
PHYSICS OF ELEMENTARY PARTICLES AND FIELDS | ATLAS detectors | Gravitation | Randall-Sundrum | Subatomic Physics | The standard model | Confidence levels | Photons | Supersymmetric models | Dynamic mechanical analysis | Germanium compounds | Randall Sundrum gravitons | ATLAS experiment | Fysik | Physical Sciences | Naturvetenskap | Tellurium compounds | Subatomär fysik | Natural Sciences | Tau lepton pairs | Bosons
Journal Article
7. Measurement of the $\Lambda_{\mathrm{b}}$ cross section and the $\overline{\Lambda}_{\mathrm{b}}$ to $\Lambda_{\mathrm{b}}$ ratio with $\mathrm{J}/\psi\,\Lambda$ decays in pp collisions at $\sqrt{s}=7$ TeV
Physics Letters B, ISSN 0370-2693, 08/2012, Volume 714, Issue 2-5, pp. 136 - 157
Journal Article
8. Measurement of VH, $\mathrm{H}\to\mathrm{b}\overline{\mathrm{b}}$ production as a function of the vector-boson transverse momentum in 13 TeV pp collisions with the ATLAS detector
Journal of High Energy Physics, ISSN 1126-6708, 05/2019, Volume 2019, Issue 5, pp. 1 - 36
Cross-sections of associated production of a Higgs boson decaying into bottom-quark pairs and an electroweak gauge boson, W or Z, decaying into leptons are...
Hadron-Hadron scattering (experiments) | Higgs physics | Couplings | Parameter modification | Large Hadron Collider | Leptons | Particle collisions | Transverse momentum | Higgs bosons | Quarks | Parameter sensitivity | Cross sections | Bosons
Journal Article
9. Search for nonresonant Higgs boson pair production in the $\mathrm{b}\overline{\mathrm{b}}\mathrm{b}\overline{\mathrm{b}}$ final state at $\sqrt{s}$ = 13 TeV
Journal of High Energy Physics, 04/2019, Volume 2019, Issue 4, pp. 1 - 49
Results of a search for nonresonant production of Higgs boson pairs, with each Higgs boson decaying to a $\mathrm{b}\overline{\mathrm{b}}$ pair, are...
Confidence intervals | Couplings | Standard model (particle physics) | Large Hadron Collider | Particle collisions | Data search | Luminosity | Pair production | Higgs bosons | Quarks | Superconducting supercolliders | Solenoids
Journal Article
Physics Letters B, ISSN 0370-2693, 05/2016, Volume 756, Issue C, pp. 84 - 102
A measurement of the ratio of the branching fractions of the $\mathrm{B_s^0}$ meson to $\mathrm{J}/\psi\, f_0(980)$ and to $\mathrm{J}/\psi\, \phi(1020)$ is presented. The $\mathrm{J}/\psi$, $f_0$, and $\phi$ are observed through their decays to $\mu^+\mu^-$, $\pi^+\pi^-$, and $\mathrm{K^+K^-}$,...
scattering [p p] | pair production [pi] | statistical | Phi --> K+ K | f0 --> pi+ pi | High Energy Physics - Experiment | Compact Muon Solenoid | pair production [K] | mass spectrum [K+ K-] | Ratio B | Large Hadron Collider (LHC) | 7000 GeV-cms | leptonic decay [J/psi] | (b)over-bar(s) | J/psi --> muon+ muon | experimental results | Nuclear and High Energy Physics | Physics and Astronomy | branching ratio [B/s0] | CERN LHC Coll | Violating Phase Phi(s) | B/s0 --> J/psi Phi | CMS collaboration ; proton-proton collisions ; CMS ; B physics | Physics | Física | hadronic decay [f0] | Decay | colliding beams [p p] | hadronic decay [Phi] | mass spectrum [pi+ pi-] | B/s0 --> J/psi f0
Journal Article