Assume $u: \mathbb{R}^N \to \mathbb{R}$ is a smooth function with suitable integrability assumptions. I'm interested in a formal computation, so do not worry about integrability properties or smoothness of $u$. Let $a$ be a constant. By integration by parts, how can one prove that the identity $$\int_{\mathbb{R}^N}u^a\nabla u\cdot\nabla\Delta u=C\left(\int_{\mathbb{R}^N}|D^2 (u^{(a+2)/2})|^2+\int_{\mathbb{R}^N}|\nabla(u^{(a+2)/4})|^4\right)$$ holds, where $C$ is some constant that depends on $a$? For which $a$ does this result hold?
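One natural first step in such a formal computation (a sketch only, assuming $u$ decays fast enough that all boundary terms vanish) is to integrate by parts once, moving the gradient off $\nabla\Delta u$:
$$\int_{\mathbb{R}^N}u^a\,\nabla u\cdot\nabla\Delta u=-\int_{\mathbb{R}^N}\Delta u\,\operatorname{div}\!\left(u^a\nabla u\right)=-a\int_{\mathbb{R}^N}u^{a-1}|\nabla u|^2\,\Delta u-\int_{\mathbb{R}^N}u^a\,(\Delta u)^2.$$
Rewriting the right-hand side in terms of the powers $u^{(a+2)/2}$ and $u^{(a+2)/4}$ is then a matter of further integrations by parts.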
This is a late answer to the question. For easier typing, I will use the letter $b$ for a root of the polynomial $X^2+2\in\Bbb Q[X]$, and $a$ for a root of the polynomial $X^4 -5X^2-32\in\Bbb Q[X]$. Note, as a short intermezzo, the relation, used in the sequel in blue:$$(a^2-5)^2=a^4-10a^2+25 =(a^4-5a^2-32)-5(a^2-5)+32=-5(a^2-5)+32\ .$$Now we are ready to show / compute: $$ \begin{aligned} a(a^2-4b-5) &=\pm \sqrt{-256b-160} \\ &\qquad\text{ in the equivalent, purely algebraic form} \\ \Big(\ a(a^2-4b-5) \ \Big)^2 &=-256b-160 \ . \end{aligned} $$ Computation: $$\begin{aligned}\Big(\ a(a^2-4b-5) \ \Big)^2&=a^2(\ (a^2-5)-4b\ )^2\\&=a^2(\ \color{blue}{(a^2-5)^2}-8b(a^2-5)+16b^2\ )\\&=a^2(\ \color{blue}{-5(a^2-5)+32}-8b(a^2-5)-32\ )\\&=a^2(\ -5(a^2-5)-8b(a^2-5)\ )\\&=a^2(a^2-5)(\ -5-8b\ )\\&=32(\ -5-8b\ )\\&=-256b-160\ .\end{aligned}$$This means that we have the points on the given elliptic curve:$$\begin{aligned}P&=(15-36b,\ 27\sqrt{256b - 160})\\&=(15-36b,\ \pm a(a^2+4b-5))\ ,\\Q&=(15+36b,\ 27\sqrt{-256b - 160})\\&=(15+36b,\ \pm a(a^2-4b-5))\ .\end{aligned}$$But they differ. So the information "I was told that the coordinate can be written as..." can possibly only be traced back to "I was told that the $y$-coordinate of some point in $E(K)$, for $K=\Bbb Q(\alpha,\beta)$, can be written as $27\alpha(\alpha^2-4\beta-5)$"...

Note: Such computations are easily covered by using computer algebra systems. My weapon of choice is sage, and to get the above we can type:

sage: R.<x> = QQ[]
sage: R.<x> = PolynomialRing(QQ)
sage: K.<a,b> = NumberField( [x^4 - 5*x^2 - 32, x^2+2] )
sage: ( a*(a^2-4*b-5) )^2
-256*b - 160

For the second case, things are "slightly more complicated". I would start by using a new letter for $\sqrt{17}$, maybe $c=\sqrt{17}$, although we could write it in terms of $a,b$ in the same field $K=\Bbb Q(a,b)$,

sage: sqrt(K(17))
-2/3*a^2 + 5/3

and, working economically in $\Bbb Q(c)$, we factorize first as much as possible in $\Bbb Q$ and $K$:

sage: L.<c> = QuadraticField(17)
sage: gcd( 1180628067960, 5672869080264 ).factor()
2^3 * 3^15 * 17
sage: w = ( 1180628067960*c + 5672869080264 ) / (2^3 * 3^15 * 17)
sage: w
605*c + 2907
sage: factor(w)
(-34840*c + 143649) * (-c) * (-1/2*c - 3/2)^16 * (-1/2*c + 3/2)
sage: L.units()
(c - 4,)
sage: (c-4)^6
-34840*c + 143649
sage: w / ( (c-4)^3 * (c+3)^8 / 2^8 )^2
-3/2*c + 17/2
sage: _.norm().factor()
2 * 17

This gives a hint on how to go on...
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-02) Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02 TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...
Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-12) The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02 TeV at the CERN LHC. The measurement was ...
Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC (Springer, 2014-10) Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at √s = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ...
Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2014-06) The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ...
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (Elsevier, 2014-01) In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ...
Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2014-01) The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ...
Measurement of charged jet suppression in Pb-Pb collisions at √sNN = 2.76 TeV (Springer, 2014-03) A measurement of the transverse momentum spectra of jets in Pb–Pb collisions at √sNN = 2.76 TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R ...
Two- and three-pion quantum statistics correlations in Pb-Pb collisions at √sNN = 2.76 TeV at the CERN Large Hadron Collider (American Physical Society, 2014-02-26) Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions. In addition, such correlations can be used to search for the ...
Exclusive J/ψ photoproduction off protons in ultraperipheral p-Pb collisions at √sNN = 5.02 TeV (American Physical Society, 2014-12-05) We present the first measurement at the LHC of exclusive J/ψ photoproduction off protons, in ultraperipheral proton-lead collisions at √sNN=5.02 TeV. Events are selected with a dimuon pair produced either in the rapidity ...
Let $T$ be a linear operator from a Banach space $X$ to a Banach space $Y$, and let $X=\ker(T)\oplus M_1$, where $M_1$ is a closed subspace of $X$. Let $M$ be a closed subspace of $X$. I want to prove that there exists a finite dimensional subspace $M_0$ such that $M=M \cap M_1 +M_0$.

If $\dim (\ker(T))= +\infty$ the claim is false. Indeed, putting $M = \ker(T)$ yields $M\cap M_1 = \{0\}$, so $M_0 = M$ must have infinite dimension. Suppose then that $\dim (\ker(T))<+\infty$. From linear algebra, $M = (M\cap M_1) \oplus M_0$ for some $M_0$ (here $\oplus$ is only algebraic; it does not imply that $M\cap M_1$ is complemented as a Banach space). Consider the map $M\to X \to X/M_1$, where the first map is the inclusion and the second one is the projection. The kernel of this map is $M\cap M_1$ and the image is $(M+M_1)/M_1$. It follows from elementary algebra that $M/(M\cap M_1)\approx (M+M_1)/M_1$, which is finite dimensional since $(M+M_1)/M_1$ is a subspace of $X/M_1 \approx \ker(T)$ and $\dim (\ker(T))<+\infty$. Thus $M_0\approx M/(M\cap M_1)$ is finite dimensional.

It is probably better to post the original problem. If $X$ is a separable Hilbert space with $X=H_1\oplus H_2$, with $H_1, H_2$ both infinite-dimensional, and $T$ is the orthogonal projection onto $H_2$, then one may take $M_1=H_2$. Take $M = H_1$. Then $M\cap M_1=\{0\}$, so $M_0$ would have to be all of $H_1$, which is infinite dimensional. In your case, do you know something else about $T$?
https://doi.org/10.1351/goldbook.A00086 The quantity of light available to molecules at a particular point in the atmosphere and which, on absorption, drives photochemical processes in the atmosphere. It is calculated by integrating the spectral radiance \(L\left (\lambda,\,\theta,\,\varphi \right )\) over all directions of incidence of the light, \(E(\lambda) = \int _{\theta}\, \int _{\varphi} L\left (\lambda,\theta,\varphi \right )\, \sin\theta\: \text{d}\theta\: \text{d}\varphi\). If the radiance is expressed in \(\text{J m}^{-2}\ \text{s}^{-1}\ \text{sr}^{-1}\ \text{nm}^{-1}\) and \(hc/\lambda\) is the energy per quantum of light of wavelength \(\lambda\), the actinic flux has units of \(\text{quanta cm}^{-2}\ \text{s}^{-1}\ \text{nm}^{-1}\). This important quantity is one of the terms required in the calculation of j-values, the first order rate coefficients for photochemical processes in the sunlight-absorbing trace gases in the atmosphere. The actinic flux is determined by the solar radiation entering the atmosphere and by any changes in this due to atmospheric gases and particles (e.g. absorption by stratospheric ozone, scattering and absorption by aerosols and clouds), and reflections from the ground. It is therefore dependent on the wavelength of the light, on the altitude and on specific local environmental conditions. The actinic flux has borne many names (e.g. flux, flux density, beam irradiance, actinic irradiance, integrated intensity), which has caused some confusion. It is important to distinguish the actinic flux from the irradiance, which refers to energy arrival on a flat surface having fixed spatial orientation (\(\text{J m}^{-2}\ \text{nm}^{-1}\)) given by: \[E(\lambda) = \int _{\theta}\, \int _{\varphi} L\left (\lambda,\theta,\varphi \right )\, \cos\theta \: \sin\theta\: \text{d}\theta\: \text{d}\varphi\] The actinic flux does not refer to any specific orientation because molecules are oriented randomly in the atmosphere. This distinction is of practical relevance: the actinic flux (and therefore a j-value) near a brightly reflecting surface (e.g. over snow or above a thick cloud) can be a factor of three higher than that near a non-reflecting surface. The more descriptive name of spherical irradiance is suggested for the quantity herein called actinic flux. See also: flux density, photon
$\def\abs#1{|#1|}\def\i{\mathbf {i}}\def\ket#1{|{#1}\rangle}\def\bra#1{\langle{#1}|}\def\braket#1#2{\langle{#1}|{#2}\rangle}\def\tr{\mathord{\mbox{tr}}}\mathbf{Exercise\ 4.7}$ Using the projection operator formalism a) compute the probability of each of the possible outcomes of measuring the first qubit of an arbitrary two-qubit state in the Hadamard basis $\{\ket +, \ket - \}$. b) compute the probability of each outcome for such a measurement on the state $\ket{\Psi^+} = \frac{1}{\sqrt 2}(\ket{00} + \ket{11})$. c) for each possible outcome in part b), describe the possible outcomes if we now measure the second qubit in the standard basis. d) for each possible outcome in part b), describe the possible outcomes if we now measure the second qubit in the Hadamard basis.
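Not part of the exercise text, but here is a small numerical sketch (plain numpy, with state and variable names of my own choosing) of the projection-operator computation for part (b):

```python
import numpy as np

# Measure the first qubit of |Psi+> = (|00> + |11>)/sqrt(2) in the
# Hadamard basis {|+>, |->}, using the projectors (|+><+|) (x) I and (|-><-|) (x) I.
plus  = np.array([1.0,  1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)
I2 = np.eye(2)

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
psi = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)   # |Psi+>

for label, v in (("+", plus), ("-", minus)):
    P = np.kron(np.outer(v, v), I2)     # projector acting on the first qubit only
    p = psi @ P @ psi                   # probability <psi|P|psi> (real here)
    post = (P @ psi) / np.sqrt(p)       # normalized post-measurement state
    print(label, p, np.round(post, 3))
# Both outcomes occur with probability 1/2, and the post-measurement states
# work out to |+>|+> and |->|-> respectively.
```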
I calculated the correlation function $C(t)=\langle x(t)x(0)\rangle$ for the ground state of the Simple Harmonic Oscillator (SHO) in two different ways. But the results do not match. First Attempt: From the Heisenberg equations of motion, $$\mathbf{X}(t)=\mathbf{X}(0)\cos(\omega t)+\frac{\mathbf{P}(0)}{m \omega} \sin(\omega t)$$ So, I calculated the required terms, $$\langle 0|\mathbf{X}^2(0)|0\rangle =\frac{\hbar}{2 m \omega}$$ and $$\langle 0| \mathbf{P}(0) \mathbf{X}(0) |0\rangle =-~\frac{i \hbar}{2} $$ Using the above two terms and the equation for $\mathbf{X}(t)$ I obtain, $$\langle 0| \mathbf{X}(t)\mathbf{X}(0)|0\rangle =\frac{\hbar}{2 m \omega} \exp(-~i\omega t)$$ This is the required correlation function. Second Method: I attempted to solve it using the explicit ground state wave function in the position basis. In this case, I obtain, $$\int \psi_0^*(x)\ x^2 \psi_0(x)~ \mathrm{d}x =\frac{\hbar}{2m \omega},$$ which agrees with the first calculation. Again, $$\int \psi_0^*(x)\ x (-~i \hbar)\frac{\partial}{\partial x} \psi_0 ~\mathrm{d}x=0,$$ which DOES NOT match with the first method. And hence, using the expression for $X(t)$ as in the first method, I obtained, $$\langle X(t)X(0)\rangle =\frac{\hbar}{2m \omega} \cos (\omega t)$$ So method 1 and method 2 do not match. But aren't they supposed to match? I cannot figure out where I am making a mistake. Any help figuring out the mistake will be very much appreciated.
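As a quick cross-check of the two position-basis matrix elements quoted above, here is a small sympy sketch (my own, using the standard SHO ground-state Gaussian for $\psi_0$; it simply evaluates the two integrals exactly as they are written):

```python
import sympy as sp

x = sp.symbols('x', real=True)
hbar, m, w = sp.symbols('hbar m omega', positive=True)
psi0 = (m*w/(sp.pi*hbar))**sp.Rational(1, 4) * sp.exp(-m*w*x**2/(2*hbar))  # SHO ground state

# <0| X^2 |0>
x2 = sp.integrate(psi0 * x**2 * psi0, (x, -sp.oo, sp.oo))
# the second integral exactly as written above: ∫ psi0* x (-i hbar) d/dx psi0 dx
xp = sp.integrate(psi0 * x * (-sp.I*hbar) * sp.diff(psi0, x), (x, -sp.oo, sp.oo))

print(sp.simplify(x2))   # hbar/(2*m*omega)
print(sp.simplify(xp))   # I*hbar/2, i.e. not zero
```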
Returns an array of cells for the quick guess, optimal (calibrated) or std. errors of the values of the model's parameters.

Syntax

ARMA_PARAM(X, Order, mean, sigma, phi, theta, Type, maxIter)

X is the univariate time series data (a one dimensional array of cells (e.g. rows or columns)).
Order is the time order in the data series (i.e. the first data point's corresponding date):
  1 = ascending (the first data point corresponds to the earliest date) (default)
  0 = descending (the first data point corresponds to the latest date)
mean is the ARMA model long-run mean (i.e. mu).
sigma is the standard deviation of the model's residuals/innovations.
phi are the parameters of the AR(p) component model (starting with the lowest lag).
theta are the parameters of the MA(q) component model (starting with the lowest lag).
Type is an integer switch to select the output array:
  1 = Quick guess (non-optimal) of the parameters' values (default)
  2 = Calibrated (optimal) values for the model's parameters
  3 = Standard errors of the parameters' values
maxIter is the maximum number of iterations used to calibrate the model. If missing, the default maximum of 100 is assumed.

Remarks

The underlying model is described here. The time series is homogeneous or equally spaced. The time series may include missing values (e.g. #N/A) at either end. ARMA_PARAM returns an array for the values (or errors) of the model's parameters in the following order:
  $\mu$
  $\phi_1,\phi_2,...,\phi_p$
  $\theta_1,\theta_2,...,\theta_q$
  $\sigma$
The function was added in version 1.63 SHAMROCK.

References

D. S. G. Pollock; Handbook of Time Series Analysis, Signal Processing, and Dynamics; Academic Press; Har/Cdr edition (Nov 17, 1999), ISBN: 125609906
James Douglas Hamilton; Time Series Analysis; Princeton University Press; 1st edition (Jan 11, 1994), ISBN: 691042896
Tsay, Ruey S.; Analysis of Financial Time Series; John Wiley & Sons; 2nd edition (Aug 30, 2005), ISBN: 0-471-690740
Box, Jenkins and Reinsel; Time Series Analysis: Forecasting and Control; John Wiley & Sons; 4th edition (Jun 30, 2008), ISBN: 470272848
Walter Enders; Applied Econometric Time Series; Wiley; 4th edition (Nov 03, 2014), ISBN: 1118808568
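For readers working outside the spreadsheet, here is an analogous (and only analogous) sketch in Python using statsmodels rather than the NumXL function documented above; it fits an ARMA(1,1) model to a simulated series and prints parameter estimates and their standard errors in a similar layout (constant/mean, AR, MA, variance):

```python
from statsmodels.tsa.arima_process import ArmaProcess
from statsmodels.tsa.arima.model import ARIMA

# simulate an ARMA(1,1) series with phi_1 = 0.5 and theta_1 = 0.3
x = ArmaProcess(ar=[1, -0.5], ma=[1, 0.3]).generate_sample(nsample=500)

res = ARIMA(x, order=(1, 0, 1)).fit()   # ARMA(p, q) == ARIMA with d = 0
print(res.params)                       # const, ar.L1, ma.L1, sigma2
print(res.bse)                          # standard errors of the estimates
```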
Take an elementary convergent integral like: $\int^\infty_0 e^{- \lambda x}\,\mathrm{d}x = \frac{1}{\lambda}$. If you series expand the integrand and integrate term by term, every term integrates to infinity. Is there a systematic way to cut off the integral if you keep the $n^{th}$ term in the series, so that you can reasonably approximate the integral to some quantified error? EDIT: Clearly the series expansion is of little use in the above integral, but I am interested in a potential case where, for example, I find an integral that converges when I numerically integrate it but the analytical series diverges.
ISSN: 1937-1632 eISSN: 1937-1179

Discrete & Continuous Dynamical Systems - S, March 2009, Volume 2, Issue 1. A special issue on Asymptotic Behavior of Dissipative PDEs.

Abstract: This issue consists of ten carefully refereed papers dealing with important qualitative features of dissipative PDEs, with applications to fluid mechanics (compressible Navier-Stokes equations and water waves), reaction-diffusion systems ((bio)chemical reactions and population dynamics), plasma physics and phase separation and transition. Several contributions are concerned with issues such as regularity, stability and decay rates of solutions. Furthermore, an emphasis is laid on the study of the global dynamics of the systems, in terms of attractors, and of the convergence of single trajectories to stationary solutions. We wish to thank the referees for their valuable help in evaluating and improving the papers.

Abstract: We show that the trajectories of a conserved phase-field model with memory are compact in the space of continuous functions and, for an exponential relaxation kernel, we establish the convergence of solutions to a single stationary state as time goes to infinity. In the latter case, we also estimate the rate of decay to equilibrium.

Abstract: We show that infinite-dimensional integro-differential equations which involve an integral of the solution over the time interval since starting can be formulated as non-autonomous delay differential equations with an infinite delay. Moreover, when conditions guaranteeing uniqueness of solutions do not hold, they generate a non-autonomous (possibly) multi-valued dynamical system (MNDS). The pullback attractors here are defined with respect to a universe of subsets of the state space with sub-exponential growth, rather than restricted to bounded sets. The theory of non-autonomous pullback attractors is extended to such MNDS in a general setting and then applied to the original integro-differential equations. Examples based on the logistic equations with and without a diffusion term are considered.

Abstract: In this article, we consider the two-dimensional dissipative Boussinesq systems which model surface waves in three space dimensions. The long time asymptotics of the solutions for a large class of such systems are obtained rigorously for small initial data.

Abstract: We study the uniform global attractor for a general nonautonomous reaction-diffusion system without uniqueness using a newly developed framework of an evolutionary system. We prove the existence and the structure of a weak uniform (with respect to a symbol space) global attractor $\mathcal A$. Moreover, if the external force is normal, we show that this attractor is in fact a strong uniform global attractor. The existence of a uniform (with respect to the initial time) global attractor $\mathcal A^0$ also holds in this case, but its relation to $\mathcal A$ is not yet clear due to the non-uniqueness feature of the system.

Abstract: This paper studies a wave equation on a bounded domain in $\bbR^d$ with nonlinear dissipation which is localized on a subset of the boundary. The damping is modeled by a continuous monotone function without the usual growth restrictions imposed at the origin and infinity.
Under the assumption that the observability inequality is satisfied by the solution of the associated linear problem, the asymptotic decay rates of the energy functional are obtained by reducing the nonlinear PDE problem to a linear PDE and a nonlinear ODE. This approach offers a generalized framework which incorporates the results on energy decay that appeared previously in the literature; the method accommodates systems with variable coefficients in the principal elliptic part, and allows one to dispense with linear restrictions on the growth of the dissipative feedback map.

Abstract: We study the impact of an oscillating external force on the motion of a viscous, compressible, and heat conducting fluid. Assuming that the frequency of oscillations increases sufficiently fast as the time goes to infinity, the solutions are shown to stabilize to a spatially homogeneous static state.

Abstract: We consider a model of non-isothermal phase separation taking place in a confined container. The order parameter $\phi $ is governed by a viscous or non-viscous Cahn-Hilliard type equation which is coupled with a heat equation for the temperature $\theta $. The former is subject to a nonlinear dynamic boundary condition recently proposed by physicists to account for interactions with the walls, while the latter is endowed with a standard (Dirichlet, Neumann or Robin) boundary condition. We indicate by $\alpha $ the viscosity coefficient, by $\varepsilon $ a (small) relaxation parameter multiplying $\partial _{t}\theta $ in the heat equation and by $\delta $ a small latent heat coefficient (satisfying $\delta \leq \lambda \alpha $, $\delta \leq \overline{\lambda }\varepsilon $, $\lambda , \overline{\lambda }>0$) multiplying $\Delta \theta $ in the Cahn-Hilliard equation and $\partial _{t}\phi $ in the heat equation. Then, we construct a family of exponential attractors $\mathcal{M}_{\varepsilon ,\delta ,\alpha }$ which is a robust perturbation of an exponential attractor $\mathcal{M} _{0,0,\alpha }$ of the (isothermal) viscous ($\alpha >0$) Cahn-Hilliard equation, namely, the symmetric Hausdorff distance between $\mathcal{M} _{\varepsilon ,\delta ,\alpha }$ and $\mathcal{M}_{0,0,\alpha }$ goes to 0, for each fixed value of $\alpha >0,$ as $( \varepsilon ,\delta) $ goes to $(0,0),$ in an explicitly controlled way. Moreover, the robustness of this family of exponential attractors $\mathcal{M}_{\varepsilon ,\delta ,\alpha }$ with respect to $( \delta ,\alpha ) \rightarrow ( 0,0) ,$ for each fixed value of $\varepsilon >0,$ is also obtained. Finally, assuming that the nonlinearities are real analytic, with no growth restrictions, the convergence of solutions to single equilibria, as time goes to infinity, is also proved.

Abstract: In this paper we study the finite dimensionality of the global attractor for the following system of Klein-Gordon-Schrödinger type: $ i\psi_t +\kappa \psi_{xx} +i\alpha\psi = \phi\psi+f,$ $ \phi_{tt}- \phi_{xx}+\phi+\lambda\phi_t = -\mathrm{Re}\, \psi_{x}+g, $ $\psi (x,0)=\psi_0 (x), \phi(x,0) = \phi_0 (x), \phi_t (x,0)=\phi_1(x),$ $ \psi(x,t)= \phi(x,t)=0, \ x \in \partial \Omega, \ t>0, $ where $x \in \Omega, t>0, \kappa > 0, \alpha >0, \lambda >0,$ $f$ and $g$ are driving terms and $\Omega$ is a bounded interval of $\mathbb{R}$. With the help of the Lyapunov exponents we give an estimate of the upper bound of its Hausdorff and fractal dimensions.
Abstract: This paper is concerned with the interior regularity of global solutions for the one-dimensional compressible isentropic Navier-Stokes equations with degenerate viscosity coefficient and vacuum. The viscosity coefficient $\mu$ is proportional to $\rho^{\theta}$ with $0<\theta<1/3$, where $\rho$ is the density. The global existence has been established in [44] (Vong, Yang and Zhu, J. Differential Equations, 192(2), 475--501). Some ideas and more delicate estimates are introduced to prove these results.

Abstract: The existence of a global attractor for the solution semiflow of the Selkov equations with Neumann boundary conditions on a bounded domain in space dimension $n\le 3$ is proved. This reaction-diffusion system features oppositely-signed nonlinear terms so that the dissipative sign-condition is not satisfied. The asymptotic compactness is shown by a new decomposition method. It is also proved that the Hausdorff dimension and fractal dimension of the global attractor are finite.
Let’s say we have a current-carrying wire with a current $I$ flowing. We know there is a field of $B=\frac{\mu_0I}{2\pi r}$ by using Ampère's law, and a simple integration path which goes circularly around the wire. Now if we take the path of integration such that the surface it spans doesn’t intercept the wire, we trivially get $B=0$, which is obviously incorrect. I see that I have essentially treated it as if there were no current present at all. But a similar argument is used in other situations without fault. Take for example a conducting cylinder with a hollow, cylindrical space inside. By the same argument there is no field inside. To further illustrate my point, the derivation of the B field inside of a solenoid requires you to intercept the currents. You can’t simply do the loop inside of the air gap. This, at least to me, seems like the same thing, and I can’t justify why one is correct and the other is incorrect. Please point out why I am stupid.
For a spin-1 particle at rest, it has three spin states (+1, -1, 0, along the z axis). If we rotate the z axis to the -z direction, the spin +1 state will become the spin -1 state. Can we transfer the spin +1 state to the spin 0 state by a frame rotation? Let $|\pm 1\rangle$ and $|0\rangle$ be the eigenstates of the observable $J_z$ that represents the $z$-component of the spin, with eigenvalues $\pm 1$ and $0$, respectively. Since the same operator $J_z$ generates rotations about the $z$ axis, each of the states $|\pm 1\rangle$ and $|0\rangle$ must be invariant under rotations about the $z$ axis, except for an overall complex coefficient that doesn't affect the physical significance of the state. (Only relative complex coefficients, among the terms in a superposition, affect the physical significance of a state.) A rotation of a specific direction is another specific direction. Therefore, if $|0\rangle$ could be obtained from $|+1\rangle$ by a rotation in space, then $|0\rangle$ would itself need to represent some particular direction in space. But which direction would the state $|0\rangle$ represent? According to the preceding paragraph, whatever direction is represented by $|0\rangle$ must be invariant under rotations about the $z$-axis. No such invariant direction exists, other than the $z$-axis itself. So the only rotation that could possibly convert $|+1\rangle$ to $|0\rangle$ is a $180^\circ$ rotation that reverses the direction of the $z$-axis; but this rotation converts $|+1\rangle$ to $|-1\rangle$, not to $|0\rangle$. Altogether, this shows a rotation in 3-d space cannot convert $|+1\rangle$ to $|0\rangle$. If, by the spin 0 state, you mean the projection rather than the length, the answer is no. For $s=1$, the rotation matrix is given by (with basis ordering $m_s=-1,0,1$): $$ R=\left( \begin{array}{ccc} e^{i (\alpha +\gamma )} \cos ^2\left(\frac{\beta }{2}\right) & \frac{e^{i \alpha } \sin (\beta )}{\sqrt{2}} & e^{i (\alpha -\gamma )} \sin ^2\left(\frac{\beta }{2}\right) \\ -\frac{e^{i \gamma } \sin (\beta )}{\sqrt{2}} & \cos (\beta ) & \frac{e^{-i \gamma } \sin (\beta )}{\sqrt{2}} \\ e^{-i (\alpha -\gamma )} \sin ^2\left(\frac{\beta }{2}\right) & -\frac{e^{-i \alpha } \sin (\beta )}{\sqrt{2}} & e^{-i (\alpha +\gamma )} \cos ^2\left(\frac{\beta }{2}\right) \\ \end{array} \right) $$ The choice $\beta=\pi$ will turn this into $$ \left( \begin{array}{ccc} 0 & 0 & e^{i (\alpha -\gamma )} \\ 0 & -1 & 0 \\ e^{-i (\alpha -\gamma )} & 0 & 0 \\ \end{array} \right) $$ which interchanges the $m_s=1$ and $m_s=-1$ states, up to an unimportant phase. A matrix that would rotate $m_s=1$ to $m_s=0$ (up to a phase) would have to be of the form $$ \left(\begin{array}{ccc} -1&0&0\\ 0&0&e^{i\varphi}\\ 0&e^{-i\varphi}&0 \end{array}\right) $$ and there is no choice of angles that will produce this outcome. Short answer: No, you can't. This is one of the few things you can think of classically. If you have an arrow pointing upwards, can you expect the arrow to reduce its length (to 0 in this case) just by rotating it? Edit: To see it more clearly: there is a vector, called the spin vector, which lives in real space. The problem is that we do not know where that vector points exactly, but we can know its modulus and its $z$ projection. In the picture below, the vector is the blue arrow. We only know its length and the $z$ projection (green). However, it can point in any direction along the blue cone. The most classical-like version of this is a vector with components $(\langle S_x\rangle,\langle S_y\rangle,\langle S_z\rangle)$.
The state represented above is $|1 \rangle$, as you will always get $+1$ if you measure $S_z$ (in $\hbar$ units). If you perform a rotation of 180 degrees around the $y$-axis, as it is coloured in green, you can obtain $|-1\rangle$. However, there is no way you can get $|0\rangle$. You can say: if we rotate 90 degrees instead of 180, then the green projection over the $z$-axis is 0. Yes, that's true: the green bar will now lie on the $x$-axis. Nevertheless, there are more projections. Now the blue arrow will probably have a projection on the $z$-axis. You cannot know which one, because it is uncertain. This means you can have $\langle S_z\rangle=0$, but only as a mean value. If you perform one particular measurement, you can get any value, so it is not $|0\rangle$ (that one will always give $S_z=0$). Instead, you've got a superposition state with multiple possible measurement outcomes.
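As a quick numerical check of the claim in the answers above (my own sketch, not from the original posts): scanning rotations about the $y$-axis shows that the probability of finding a rotated $|+1\rangle$ in $|0\rangle$ never exceeds $1/2$, so no rotation maps $|+1\rangle$ onto $|0\rangle$.

```python
import numpy as np
from scipy.linalg import expm

# Spin-1 J_y in the basis (|+1>, |0>, |-1>), in units of hbar
Jy = np.array([[0, -1j,   0],
               [1j,  0, -1j],
               [0,  1j,   0]]) / np.sqrt(2)

overlaps = []
for beta in np.linspace(0, 2 * np.pi, 2001):
    R = expm(-1j * beta * Jy)           # rotation by beta about the y axis
    overlaps.append(abs(R[1, 0]) ** 2)  # |<0| R |+1>|^2

print(max(overlaps))   # ~0.5: never reaches 1, unlike the |+1> -> |-1> case at beta = pi
```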
Let's take an aqueous solution of a salt $\ce{NaHA}$ with initial concentration $C$ when added to water. It will completely dissociate according to the equation $\ce{NaHA(s) -> Na+ + HA-}$. $\ce{HA^-}$ will participate in three equilibria:

$\ce{2HA^- <=> H2A + A^{2-}}\qquad K_1^0=K_{A2}/K_{A1}$

$\ce{HA^- + H2O <=> H3O+ + A^{2-}}\qquad K_2^0=K_{A2}$

$\ce{HA^- + H2O <=> OH^- + H2A}\qquad K_3^0=K_{B1}=K_w/K_{A1}$

In most cases, $K_1^0$ is far bigger than $K_2^0$ and $K_3^0$. So the first equilibrium is the preponderant reaction, and this reaction will impose the pH of the solution. Let's now calculate the product $K_{A2}\times K_{A1}$:

$$K_{A2}\times K_{A1}= \frac{[\ce{A^{2-}}][\ce{H3O+}]}{[\ce{HA^-}]}\cdot\frac{[\ce{HA^-}][\ce{H3O+}]}{[\ce{H2A}]}=\frac{[\ce{A^{2-}}][\ce{H3O+}]^2}{[\ce{H2A}]}$$

According to the stoichiometry of the preponderant reaction, we have $[\ce{A^{2-}}]=[\ce{H2A}]$. So the product $K_{A2}\times K_{A1}= [\ce{H3O+}]^2$, i.e. $\mathrm{pH}=0.5(\mathrm{p}K_{A1} +\mathrm{p}K_{A2})$.
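As a quick numerical illustration (my own example, using standard textbook values for carbonic acid rather than anything from the text above): for a sodium bicarbonate solution, with $\mathrm{p}K_{A1}\approx 6.35$ and $\mathrm{p}K_{A2}\approx 10.33$, the formula gives
$$\mathrm{pH}\approx 0.5\,(6.35+10.33)\approx 8.3,$$
essentially independent of the salt concentration $C$ at this level of approximation.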
My question relates to Chapter 3, Exercise 8 in "Baby Rudin". It states: If $\sum_n a_n$ converges, and if $\{b_n\}$ is monotonic and bounded, prove that $\sum_n a_n b_n$ converges. My attempt would have been: Since $\{b_n\}$ is monotonic and bounded, $\{b_n\}$ converges and both $\inf \{b_n\}$ and $\sup \{b_n\}$ exist. But then we have$$ \left| \sum_n a_n \inf \{b_n\} \right| \leq \left| \sum_n a_n b_n \right| \leq \left| \sum_n a_n \sup \{b_n\} \right| \leq \max \left( \left| \sup \{b_n\} \right|, \left| \inf \{b_n \} \right| \right)\varepsilon \leq \tilde{\varepsilon} $$since $\sum_n a_n$ converges and $\max \left( \left| \sup \{b_n\} \right|, \left| \inf \{b_n \} \right| \right)$ is a finite number. This would imply, by the comparison test, that $\sum_n a_n b_n$ converges as well. $\quad \Box$ But as there was a way longer, more rigorous proof chosen in this solution manual, I'm a bit suspicious that my proof is not complete. Am I missing something? EDIT: Thanks everyone! @hermes: The last part of your proof gave me the following idea. As $\lim_{n \to \infty} b_n = c$ and $\{b_n\}$ is monotonic, couldn't we just set $b_n = c - c_n$ with a monotonically decreasing sequence $\{c_n\}$ which has $\lim_{n \to \infty} c_n = 0$? Then we have $$\sum_n a_n b_n = \underbrace{c \sum_n a_n}_{\text{converges by assumption}} - \underbrace{\sum_n a_n c_n}_{\text{converges by Theorem 3.42}} \leq \varepsilon_1 - \varepsilon_2 = \varepsilon $$ since Theorem 3.42 states: Suppose the partial sums of $\sum_n a_n$ form a bounded sequence $\quad \checkmark$ $c_0 \geq c_1 \geq c_2 \geq \dots \quad \checkmark$ $\lim_{n \to \infty} c_n = 0 \quad \checkmark$ Then $\sum_n a_n c_n$ converges. Thus $\sum_n a_n b_n$ converges as well. $\quad \Box$ Now that should hold, I think. So I wouldn't need to go through all the estimates.
When writing $$\arg(z^n) = n\arg(z) + 2\pi k$$ and letting $\arg$ denote the principal complex argument of $z$: is $k$ generally an integer, or is it that $0\lt k\lt n$, or $k=[\frac{1}{2}-\frac{n}{2\pi}\arg(z)]$ as some books suggest? Obviously, I don't understand any of this and would appreciate it if someone explained this tricky situation. Thanks in advance! If $n$ is a non-negative integer, then $z^n=z \cdot z \cdot \; \cdots$ is well defined (analytic and entire) on all of $\mathbb C$. If $n$ is a negative integer, then $z^{n}=(1/z)^{|n|}$ and it is meromorphic, with only a pole of order $|n|$ at $z=0$. In both cases, the argument (apart from multiples of $2\pi$) is also well defined to be $\{n\arg(z)/(2\pi)\}(2\pi)=(n\arg(z))\bmod{2\pi}$, where the braces indicate the fractional part. That is as much as $\arg(z)$ is defined. The above holds if you define $0\le \arg(z) <2\pi$. If instead, as rightly indicated in a comment, the definition is $-\pi < \arg(z) \le \pi$ (which is the one adopted in all major CAS nowadays), in any case you shall reduce $n\arg(z)$ to fall therein. If $n$ is instead rational, then you have to choose the branch: the example of $z^{1/2}=\pm \sqrt{z}$ is well known and I will not continue further (you can find a more authoritative explanation here).
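A small numeric illustration (my own, with an arbitrarily chosen $z$ and $n$) of how the integer $k$ arises when both sides are reduced to the principal range $(-\pi,\pi]$:

```python
import cmath
import math

z = 1 + 2j
n = 5
lhs = cmath.phase(z**n)              # principal argument of z^n, in (-pi, pi]
rhs = n * cmath.phase(z)             # n times the principal argument of z
k = round((lhs - rhs) / (2 * math.pi))
print(lhs, rhs, k)                   # here k = -1: the integer that folds n*arg(z) back into (-pi, pi]
```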
From Noether's theorem applied to fields we can get the general expression for the stress-energy-momentum tensor for some fields: $$T^{\mu}_{\;\nu} = \sum_{i} \left(\frac{\partial \mathcal{L}}{\partial \partial_{\mu}\phi_{i}}\partial_{\nu}\phi_{i}\right)-\delta^{\mu}_{\;\nu}\mathcal{L}$$ The EM Lagrangian, in the Weyl gauge, is: $$\mathcal{L} = \frac{1}{2}\epsilon_{0}\left(\frac{\partial \vec{A}}{\partial t}\right)^{2}-\frac{1}{2\mu_{0}}\left(\vec{\nabla}\times \vec{A}\right)^{2}$$ Applying the above, all I manage to get for the pressure along x, which I believe corresponds to the first diagonal element of the Maxwell stress tensor, is: $$p_{x} = \sigma_{xx} = -T^{xx} = \frac{-1}{\mu_{0}}\left(\left(\partial_{x}A_{z}\right)^{2}-\partial_{x}A_{z}\partial_{z}A_{x}-\left(\partial_{x}A_{y}\right)^{2}+\partial_{x}A_{y}\partial_{y}A_{x}\right)+\mathcal{L}$$ But I can't see how this can be equal to what is given in Wikipedia. Why is this?
Yeah, this software cannot be too easy to install; my installer is very professional looking, currently not tied into that code, but directs the user how to search for their MikTeX and/or install it, and does a test LaTeX rendering. Somebody like Zeta (on codereview) might be able to help a lot... I'm not sure if he does a lot of category theory, but he does a lot of Haskell (not that I'm trying to conflate the two)... so he would probably be one of the better bets for asking for revision of code. He is usually on the 2nd monitor chat room. There are a lot of people on those chat rooms that help each other with projects. I'm not sure how many of them are adept at category theory though... still, this chat tends to emphasize a lot of small problems and occasionally goes off on a tangent. Your project is probably too large for an actual question on codereview, but there is a lot of github activity in the chat rooms. gl. In mathematics, the Fabius function is an example of an infinitely differentiable function that is nowhere analytic, found by Jaap Fabius (1966). It was also written down as the Fourier transform of $\hat f(z)=\prod_{m=1}^{\infty}(\cos\ldots$ Defined as the probability that $\sum_{n=1}^\infty2^{-n}\zeta_n$ will be less than $x$, where the $\zeta_n$ are chosen randomly and independently from the unit interval @AkivaWeinberger are you familiar with the theory behind Fourier series? anyway, here's some food for thought: for $f : S^1 \to \Bbb C$ square-integrable, let $c_n := \displaystyle \int_{S^1} f(\theta) \exp(-i n \theta) (\mathrm d\theta/2\pi)$, and $f^\ast := \displaystyle \sum_{n \in \Bbb Z} c_n \exp(in\theta)$. It is known that $f^\ast = f$ almost surely. (a) is $-^\ast$ idempotent? i.e. is it true that $f^{\ast \ast} = f^\ast$? @AkivaWeinberger You need to use the definition of $F$ as the cumulative function of the random variables. $C^\infty$ was a simple step, but I don't have access to the paper right now so I don't recall it. > In mathematics, a square-integrable function, also called a quadratically integrable function, is a real- or complex-valued measurable function for which the integral of the square of the absolute value is finite. I am having some difficulties understanding the difference between simplicial and singular homology. I am aware of the fact that they are isomorphic, i.e. the homology groups are in fact the same (and maybe this doesn't help my intuition), but I am having trouble seeing where in the setup they d... Usually it is a great advantage to consult the notes, as they tell you exactly what has been done. A book will teach you the field, but not necessarily help you understand the style in which the prof. (who creates the exam) creates questions. @AkivaWeinberger having thought about it a little, I think the best way to approach the geometry problem is to argue that the relevant condition (centroid is on the incircle) is preserved by similarity transformations hence you're free to rescale the sides, and therefore the (semi)perimeter as well so one may (for instance) choose $s=(a+b+c)/2=1$ without loss of generality that makes a lot of the formulas simpler, e.g. the inradius is identical to the area It is asking how many terms of the Euler Maclaurin formula do we need in order to compute the Riemann zeta function in the complex plane? $q$ is the upper summation index in the sum with the Bernoulli numbers.
This appears to answer it in the positive: "By repeating the above argument we see that we have analytically continued the Riemann zeta-function to the right-half plane σ > 1 − k, for all k = 1, 2, 3, . . .."
In the idealised case, the answer to this is slightly surprising. The fact that the mass of a rocket must include the mass of its fuel is embodied in the rocket equation, $$\Delta v = v_e \ln\frac{m_i}{m_f},$$ where $m_i$ is the initial mass of the rocket (including fuel, payload and everything else), and $m_f$ is the final mass, including the payload but much less fuel. $v_e$ is the effective exhaust velocity, which we might as well assume stays fixed for a given type of rocket, and $\Delta v$ is essentially the velocity change required to reach escape velocity, which we'll also assume stays constant. The above equation does not include the acceleration due to gravity, which is of course an important factor. This is because (as is usually done) it's included in the $\Delta v$ term, which includes the velocity you lose to gravitational acceleration as the rocket ascends. You can put in the gravitational acceleration explicitly and the result doesn't change, as I'll show below. Rearranging the rocket equation gives us $$m_i = m_f e^{\Delta v/v_e},$$ which tells us the amount of fuel (the majority of $m_i$) we need to lift a mass $m_f$. You can see that this is exponential in $\Delta v$, meaning that if we want to go a little bit faster we need a much bigger rocket. This is called "the tyranny of the rocket equation." In this case we don't want to go faster, we just want to send more stuff, i.e. we want to increase $m_f$. But the equation is not exponential in $m_f$, it's linear. Therefore, if we ignore any changes in rocket design that would be needed to increase its size, we can conclude that if you want to double the payload, you only need to double the size of the rocket, not quadruple it. If we want to do this more precisely, we should include gravitational acceleration in the rocket equation. As per this answer by Asad to another question, this gives us $$\Delta v = v_e \ln \frac{m_i}{m_f} - g\left(\frac{m_f}{\dot m}\right),$$ where $g$ is acceleration due to gravity and $\dot m$ is the rate at which fuel is burned, which we assume is constant over time. According to the reasoning in Asad's answer, we end up with $$m_i = m_f \exp\left(\frac{\Delta v + g\left(\frac{m_f}{\dot m}\right)}{v_e}\right),$$ where $\Delta v$ is now the true escape velocity rather than the effective escape velocity. In Asad's answer, he assumes that $\dot m$ stays constant as you change $m_f$, and he concludes that there is a strong limit to the size of a rocket. But in fact if you were going to make a rocket twice the size, it wouldn't make sense to keep $\dot m$ the same. To take it to an extreme, imagine building something the size of a Saturn V that burns fuel at the same rate as a hobby rocket. It obviously wouldn't be able to lift itself off the launch pad, and nobody would consider building such a design. So let's instead assume that the burn rate is proportional to the size of the rocket. This means that $\frac{m_f}{\dot m}$ is a constant, and the equation as a whole is still of the form $$m_i = m_f \times \text{a constant},$$ so it's linear in $m_f$. In fact none of this is really all that surprising after all, because if you want to send twice the mass you could always just use two rockets of the original size. By just strapping those rockets next to each other you've got one of twice the size that can send twice the payload. Moreover, it burns fuel at twice the rate, just as I assumed above. There's no reason that wouldn't work in principle.
(Though in practice it would be another matter of course!) If the equation had been exponential in $m_f$ then there would have been a point at which increasing the payload mass would require an unreasonable amount of extra fuel, and that would have imposed a strong practical limit on rocket size. But since it's linear this doesn't really happen. The limits on rocket size are not due to an exponential increase in propellant mass, but to the engineering challenges in building a structure of that size and complexity that won't fail under the violent conditions of a rocket launch. These include factors to do with the way the strength of a structure scales with its size and (I imagine) practical issues involved in getting fuel where it needs to be at the right time. In this respect the factors that limit the size of rockets are quite similar to the factors that limit the size of buildings.
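To make the linearity-in-payload point concrete, here is a tiny illustrative script (the numbers are placeholders I chose, not values from the answer above) using the ideal rocket equation:

```python
import math

def initial_mass(m_f, dv, v_e):
    """Ideal rocket equation rearranged: m_i = m_f * exp(dv / v_e)."""
    return m_f * math.exp(dv / v_e)

dv  = 9400.0   # rough delta-v to orbit, m/s (illustrative number only)
v_e = 3500.0   # effective exhaust velocity, m/s (illustrative number only)
for m_f in (1000.0, 2000.0, 4000.0):
    print(m_f, initial_mass(m_f, dv, v_e))
# Doubling m_f doubles m_i: the dependence on payload is linear,
# while the dependence on delta-v is exponential.
```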
I'll help you answer your second question. But first, there are some difficulties with the problem you've been given. Firstly, your question isn't fully specified: you need to have the Hamiltonian itself, so you need at least the potential $V(x)$. The expression for $E_n$ you are given bespeaks either a quantum harmonic oscillator or an infinite well. Secondly, there is a typo in your beginning quantum state. The two inequalities in the definition should read $|x|<a$ and $|x|>a$ (not $|x|<0$, which no $x\in\mathbb{R}$ fulfills!). So I'll presume an infinite well, as this ties in with the Fourier series method of akhmeteli's answer. As in akhmeteli's answer, you expand the quantum state in energy eigenstates $\mathcal{N}^{-1} \cos\left(\left(n + \frac{1}{2}\right)\frac{\pi}{a}x\right);\,n=0,1,2,\cdots$ (when $|x|<a$, nought outside the interval) corresponding to the energies $E_n$ (here $\mathcal{N}$ is the normalization to make $\int_{-a}^a|\psi|^2dx=1$). Note that this is a discrete series. The energy doesn't take on continuous values, so the magnitude squared of your Fourier series (not integral as for continuously varying energy) weights are the probabilities, not probability densities, that the quantum state will be found in each energy. So when you do your Fourier series, you should check that $\sum_n |w_n|^2 = 1$, where $w_n$ are the Fourier series weights. Now the probability $|w_n|^2$ does not vary with time. The phase of $w_n$ does, so the energy eigenstates interfere with one another to give a time varying wavefunction, but the probabilities to be in each energy eigenstate are constant. So you don't even need to know the time when you calculate your probability. This should let you finish your question. Next stage: Since you are dealing with the quantum harmonic oscillator, your energy eigenstates are the following set: $$\psi_n(x) = \frac{1}{\sqrt{2^n\,n!}} \cdot \left(\frac{m\omega}{\pi \hbar}\right)^{1/4} \cdot e^{- \frac{m\omega x^2}{2 \hbar}} \cdot H_n\left(\sqrt{\frac{m\omega}{\hbar}} x \right), \qquad n = 0,1,2,\ldots.\qquad(1) $$ where $H_n(x)=(-1)^n e^{x^2}\frac{d^n}{dx^n}\left(e^{-x^2}\right)$ is the $n^{th}$ Hermite polynomial. These eigenfunctions are orthogonal in the sense that: $$\left<\psi_n, \psi_m\right> \stackrel{def}{=} \int_{-\infty}^\infty \psi_n(x)^* \psi_m(x) {\rm d}x = \delta_{m,n}\qquad (2)$$ i.e. the "inner product" is nought for different discrete energy eigenfunctions and 1 for $n=m$, so each eigenstate is "normalized" or said to have unit "length" in the Hilbert space spanned by the energy eigenstates (don't worry if you don't understand all of this; these are the kinds of statements that'll become more and more wonted to you if you keep at your self study). The above orthogonality is the key to the kind of decomposition that Akhmeteli and I have spoken of: any initial quantum state $\psi(x)$ can be resolved into its energy eigenstates by assuming: $$\psi(x) = \sum\limits_{m=0}^\infty w_m \,\psi_m(x)\qquad(3)$$ then multiplying both sides of (3) by the complex conjugate of the $m^{th}$ eigenstate in turn and integrating over the whole real line, applying (2) to find (noting that we can integrate the series termwise): $$w_m = \int_{-\infty}^\infty \psi_m(x)^*\, \psi(x)\, {\rm d}x \,\qquad(4)$$ Now it is the field of spectral theory that shows that our set of eigenfunctions (1) is complete, i.e.
that a sum of the kind (3) can indeed represent (in the appropriate measure theoretic sense) any piecewise continuous $\psi(x)$ fulfilling $\int_{-\infty}^\infty |\psi(x)|^2 {\rm d}x < \infty$, which of course is true for valid quantum states since we must have $\int_{-\infty}^\infty |\psi(x)|^2 {\rm d}x =1$ (this is just a necessary condition for $|\psi(x)|^2$ to be a probability density in $x$). You will ultimately learn that this "orthogonality" is a property of all eigenfunctions of any quantum observable, not only the Hamiltonian. This result comes to us from Sturm-Liouville theory and is owing to the self-adjointness of quantum observables (more generally it holds for any normal operator - one that commutes with its own adjoint). Lastly, note that, since $\psi_n(x)$ is the $n^{th}$ energy eigenfunction, its full space and time variation is $\psi_n(x, t) = \psi_n(x) \exp(-i E_n t/\hbar)$. So once you've resolved your beginning quantum state into a superposition like (3), you can write down its general time dependence: $$\psi(x, t) = \sum\limits_{m=0}^\infty \left(w_m \,\psi_m(x)\,e^{-i \frac{E_m}{\hbar} t}\right)\qquad(5)$$ This should let you get a bit further!
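To make the expansion procedure concrete, here is a small numerical sketch (my own; the flat initial state and the interval half-width $a=1$ are illustrative choices, not the ones from the original exercise) that computes the weights $w_n$ in the even infinite-well basis and checks that the probabilities sum to one:

```python
import numpy as np

a = 1.0
x = np.linspace(-a, a, 20001)
dx = x[1] - x[0]
psi = np.full_like(x, 1.0 / np.sqrt(2 * a))   # normalized flat (even) state on (-a, a)

probs = []
for n in range(200):
    phi_n = np.cos((n + 0.5) * np.pi * x / a) / np.sqrt(a)   # normalized even eigenfunction
    w_n = np.sum(phi_n * psi) * dx                           # overlap integral <phi_n | psi>
    probs.append(abs(w_n) ** 2)

print(sum(probs))      # approaches 1 as more terms are kept (~0.999 here)
print(probs[:3])       # probabilities of the three lowest even-parity energies
```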
Yes, even when $n$ is known in advance, each string length is $\lceil\log_2(n)\rceil + 1$, we only care about the state between updates (not the computation required to perform updates), and the update procedure can be non-computable.

For all positive integers $n$ and all elements $x$ of $\{0,1\}^n$, by [the simpler version of the Chernoff bound], the probability of [a random element $y$ of $\{0,1\}^n$ differing from $x$ on at most $4/9$ of the positions] is at most $\exp(-(2/324)\cdot n)$, which is less than $2^{-(1/113)\cdot n}$. Thus, for choosing a subset of $\{0,1\}^n$ whose elements pairwise differ on more than $4/9$ of the positions, each choice eliminates less than $2^n/2^{n/113}$ of the original possibilities, so there is such a subset with more than $2^{n/113}$ elements.

Assume $n$ is an integer that's greater than 2, and let $S$ be a subset of $\{0,1\}^{n-2}$ with more than $2^{(n-2)/113}$ elements which is such that distinct elements of $S$ differ on more than $4/9$ of the positions, and consider any initially-randomized algorithm whose error probability will be at most $1/6$ when the inputs are chosen as follows: Choose $s$ uniformly from $S$, and send $(0,s_0),(1,s_1),(2,s_2),(3,s_3),(4,s_4),\ldots,(n-4,s_{n-4}),(n-3,s_{n-3}),\star$ along both streams. Choose $m$ uniformly from $\{0,1,2,3,4,\ldots,n-4,n-3\}$ and then send $\star$ on stream 0 and $(m,1)$ on stream 1.

The expected value, over its own possible choices of randomness for each update, of its error probability (over the choice of input) conditioned on its own randomness, is at most $1/6$, so there is some internal randomness for which its error probability (over the choice of input) will be at most $1/6$. Fix some such randomness for each update, giving a deterministic algorithm, which I'll call DSSEA, whose error probability on that input distribution is at most $1/6$. Consider a guesser G which uses [DSSEA and DSSEA's state just before receiving the last pair of strings] as follows: Let $s'$ be the element of $\{0,1\}^{n-2}$ given by [$s'_i$ is DSSEA's output after sending $\star$, $(i,1)$ along streams 0, 1 respectively]. Output the lexicographically least element of $S$ which differs from $s'$ on a minimum number of positions.

The expected value, over the choice of strings other than the last pair, of DSSEA's error probability (over the choice of the last pair) conditioned on the other strings, is at most $1/6$, so the probability of that conditional probability exceeding $2/9$ is at most $3/4$. Whenever that conditional probability is at most $2/9$, $s'$ will differ from $s$ on at most $2/9$ of the positions, so G will output $s$, since all other elements of $S$ differ from $s$ on more than $4/9$ of the positions, and so from $s'$ on more than $2/9$ of the positions. Thus G has probability at least $1/4$ of outputting $s$, so in particular has more than $2^{(n-2)/113}/4$ possible outputs. By the choice of G, that means DSSEA has more than $2^{((n-2)/113)-2}$ possible internal states just before the last update. Hardcoding separate randomness for each update does not increase that number, so the initial randomized algorithm must be able to keep at least $((n-2)/113)-2$ bits of state between updates. For all integers $n$, if $25992 < n$ then $n/114 < ((n-2)/113)-2$. For constant error rates above $1/6$, just reduce the error rate by taking a majority vote of $O(1)$ independent parallel runs.

For numbers $M$ of possible strings and positive integers $j$ in $o(\log(M))$ and error probabilities bounded above by $1\big/M^{(1+\Omega(1))/j}$, one can similarly get an asymptotic lower bound of $\big(\lfloor \log_2\binom{M}{n-1}\rfloor - 1\big)\big/(2\cdot j - 1)$, although I have neither tried working out whether-or-not that dependence on $j$ should be within a constant factor of tight nor tried bounds for other parameter regimes.
CentralityBin ()
CentralityBin (const char *name, Float_t low, Float_t high)
CentralityBin (const CentralityBin &other)
virtual ~CentralityBin ()
CentralityBin & operator= (const CentralityBin &other)
Bool_t IsAllBin () const
Bool_t IsInclusiveBin () const
const char * GetListName () const
virtual void CreateOutputObjects (TList *dir, Int_t mask)
virtual Bool_t ProcessEvent (const AliAODForwardMult *forward, UInt_t triggerMask, Bool_t isZero, Double_t vzMin, Double_t vzMax, const TH2D *data, const TH2D *mc, UInt_t filter, Double_t weight)
virtual Double_t Normalization (const TH1I &t, UShort_t scheme, Double_t trgEff, Double_t &ntotal, TString *text) const
virtual void MakeResult (const TH2D *sum, const char *postfix, bool rootProj, bool corrEmpty, Double_t scaler, Int_t marker, Int_t color, TList *mclist, TList *truthlist)
virtual bool End (TList *sums, TList *results, UShort_t scheme, Double_t trigEff, Double_t trigEff0, Bool_t rootProj, Bool_t corrEmpty, Int_t triggerMask, Int_t marker, Int_t color, TList *mclist, TList *truthlist)
Int_t GetColor (Int_t fallback=kRed+2) const
void SetColor (Color_t colour)
TList * GetResults () const
const char * GetResultName (const char *postfix="") const
TH1 * GetResult (const char *postfix="", Bool_t verbose=true) const
void SetDebugLevel (Int_t lvl)
void SetSatelliteVertices (Bool_t satVtx)
virtual void Print (Option_t *option="") const
const Sum * GetSum (Bool_t mc=false) const
Sum * GetSum (Bool_t mc=false)
const TH1I * GetTriggers () const
TH1I * GetTriggers ()
const TH1I * GetStatus () const
TH1I * GetStatus ()

Calculations done per centrality. These objects are only used internally and are never streamed. We do not make dictionaries for this class (and derived classes) as they are constructed on the fly. Definition at line 701 of file AliBasedNdetaTask.h.

Calculate the Event-Level normalization. The full event level normalization for trigger \(X\) is given by \begin{eqnarray*} N &=& \frac{1}{\epsilon_X} \left(N_A+\frac{N_A}{N_V}(N_{-V}-\beta)\right)\\ &=& \frac{1}{\epsilon_X}N_A \left(1+\frac{1}{N_V}(N_T-N_V-\beta)\right)\\ &=& \frac{1}{\epsilon_X}N_A \left(1+\frac{N_T}{N_V}-1-\frac{\beta}{N_V}\right)\\ &=& \frac{1}{\epsilon_X}N_A \left(\frac{1}{\epsilon_V}-\frac{\beta}{N_V}\right) \end{eqnarray*} where
\(\epsilon_X=\frac{N_{T,X}}{N_X}\) is the trigger efficiency evaluated in simulation.
\(\epsilon_V=\frac{N_V}{N_T}\) is the vertex efficiency evaluated from the data.
\(N_X\) is the Monte-Carlo truth number of events of type \(X\).
\(N_{T,X}\) is the Monte-Carlo truth number of events of type \(X\) which were also triggered as such.
\(N_T\) is the number of data events that were triggered as type \(X\) and had a collision trigger (CINT1B).
\(N_V\) is the number of data events that were triggered as type \(X\), had a collision trigger (CINT1B), and had a vertex.
\(N_{-V}\) is the number of data events that were triggered as type \(X\), had a collision trigger (CINT1B), but no vertex.
\(N_A\) is the number of data events that were triggered as type \(X\), had a collision trigger (CINT1B), and had a vertex in the selected range.
\(\beta=N_a+N_c-N_e\) is the number of control triggers that were also triggered as type \(X\).
\(N_a\) is the number of beam-empty events also triggered as type \(X\) events (CINT1-A or CINT1-AC).
\(N_c\) is the number of empty-beam events also triggered as type \(X\) events (CINT1-C).
\(N_e\) is the number of empty-empty events also triggered as type \(X\) events (CINT1-E).
Note, that if \( \beta \ll N_A\) the last term can be ignored, and the expression simplyfies to \[ N = \frac{1}{\epsilon_X}\frac{1}{\epsilon_V}N_A \] Parameters t Histogram of triggers scheme Normalisation scheme trgEff Trigger efficiency ntotal On return, the total number of events to normalise to. text If non-null, fill with normalization calculation Returns \(N_A/N\) or negative number in case of errors. Definition at line 1784 of file AliBasedNdetaTask.cxx. Referenced by End().
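As a concrete illustration of the formula above, here is a small Python sketch (not the AliROOT C++ implementation; all event counts and efficiencies below are invented) that evaluates both the full event-level normalization and the \( \beta \ll N_A \) approximation:

    # Minimal sketch of the event-level normalization quoted above.
    # All numbers are hypothetical, for illustration only.

    def event_normalization(N_A, N_T, N_V, beta, eps_X):
        """Return (N, N_A/N) with N = (1/eps_X) * N_A * (1/eps_V - beta/N_V),
        where eps_V = N_V / N_T is the vertex efficiency from data."""
        eps_V = N_V / N_T
        N = (N_A / eps_X) * (1.0 / eps_V - beta / N_V)
        return N, N_A / N

    N_T, N_V, N_A = 1_000_000, 850_000, 800_000   # triggered / with vertex / in vz range
    beta = 1_200                                  # control-trigger contamination
    eps_X = 0.92                                  # trigger efficiency from simulation

    N, scale = event_normalization(N_A, N_T, N_V, beta, eps_X)
    N_simple = N_A / (eps_X * (N_V / N_T))        # the beta << N_A approximation
    print(N, N_simple, scale)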
I'd like to set $1024 \times 768$ without any space between the three items. Is this possible? If so, how? E.g., what I get is: 1024 x 768 and what I want is: 1024x768

Math binary operators and relations automatically add appropriate spaces between the symbol and their operands. If you want to remove this space, you can turn the operator into a regular symbol by enclosing it in braces. For example $1024 {\times} 768$. If you will be using this often, you can also define a new command and say something like \newcommand{\stimes}{{\times}} $1024 \stimes 768$ where \stimes is a symbol version of the \times operator.

These answers seem overly complicated to me. I personally just use \! between symbols, as in $W \! \rightarrow \! \mu$. This brings the symbols closer together. You can also use multiple in a row: $W \! \! \! \rightarrow \! \mu$.

Perhaps defining it as an ordinary math symbol is better than just enclosing it in braces and expecting that to keep working. So, I would use \mathord: $1024\mathord{\times}768$.

It seems to me that what you really want is a multiplication sign that works in text mode. You can get this by writing $\times$ or, to answer your whole question, 1024$\times$768. By the way, nice question. This is a good example of where it makes sense not to use normal math typography.
For a test case, I want to determine the velocity profile of a viscously damped standing wave. By linearizing the density ($\rho=\rho_0+\rho'$) and velocity ($u_x=u_x'$), the continuity and Navier-Stokes equations result in, respectively: \begin{align} \partial_t\rho' + \rho_0\partial_xu_x' &= 0 \tag{1} \\ \partial_t^2\rho' &= c_s^2\partial_x^2\rho' + \nu\partial_t\partial_x^2\rho' \tag{2} \end{align} Here $c_s$ is just a constant indicating we are dealing with an ideal pressure term ($p=\rho c_s^2$). A solution for the density to $(2)$ is given by: $$\rho=\rho_0+\Delta\rho\sin(k_xx)\cos(\omega_it)\exp(-\omega_rt)$$ where $$k_x=2\pi/n_x, \quad \omega_r=\frac{1}{2}k_x^2\nu, \quad \omega_i=k_xc_s\sqrt{1-\left(\frac{1}{2}\frac{k_x\nu}{c_s} \right)^2} \, .$$ Now I want to determine the velocity; it would seem straightforward to use $(1)$ to get $$\partial_xu_x'=-\partial_t\rho'/\rho_0=\frac{\triangle\rho}{\rho_{0}}\sin\left(k_{x}x\right)\left[\omega_{r}\cos\left(\omega_{i}t\right)-\omega_{i}\sin\left(\omega_{i}t\right)\right]\exp\left(-\omega_{r}t\right)$$ and integrate to get $$u_{x}'=-\frac{1}{k_{x}}\frac{\triangle\rho}{\rho_{0}}\cos\left(k_{x}x\right)\left[\omega_{r}\cos\left(\omega_{i}t\right)-\omega_{i}\sin\left(\omega_{i}t\right)\right]\exp\left(-\omega_{r}t\right)+K$$ where $K$ is an integration constant. My approach was to determine $K$ by setting the velocity to zero at an antinode (at $x=n_x/4$), to get $$u_{x}'=-\frac{1}{k_{x}}\frac{\triangle\rho}{\rho_{0}}\cos\left(k_{x}x\right)\left[\omega_{r}\cos\left(\omega_{i}t\right)-\omega_{i}\sin\left(\omega_{i}t\right)\right]\exp\left(-\omega_{r}t\right) \, .$$ However, comparing the simulation with the analytical solution it seems that the amplitude of the velocity is much larger in the simulation. Is my approach described above at all correct?
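As a quick cross-check of the kinematic step (determining $u_x'$ from equation (1)), the sketch below (arbitrary test parameters, trapezoidal integration) builds a numerical $u_x'$ directly from $-\partial_t\rho'/\rho_0$ and pins it to zero at $x=n_x/4$; any closed-form candidate can be compared against it. It does not address the simulation-amplitude question itself.

    # Sanity-check tool for a candidate u_x'(x,t): integrate -d_t rho'/rho_0 in x
    # numerically (continuity equation (1)) and compare with the closed form.
    import numpy as np

    rho0, drho, cs, nu, nx = 1.0, 1e-3, 1/np.sqrt(3), 0.1, 64.0   # arbitrary test values
    kx = 2*np.pi/nx
    wr = 0.5*kx**2*nu
    wi = kx*cs*np.sqrt(1.0 - (0.5*kx*nu/cs)**2)

    def drho_dt(x, t):
        # analytic time derivative of rho' = drho*sin(kx x)*cos(wi t)*exp(-wr t)
        return drho*np.sin(kx*x)*(-wi*np.sin(wi*t) - wr*np.cos(wi*t))*np.exp(-wr*t)

    x = np.linspace(0.0, nx, 4001)
    t = 3.0
    dudx = -drho_dt(x, t)/rho0                        # eq. (1) rearranged
    u_num = np.concatenate(([0.0], np.cumsum(0.5*(dudx[1:] + dudx[:-1])*np.diff(x))))
    u_num -= u_num[np.argmin(np.abs(x - nx/4))]       # impose u(nx/4, t) = 0
    print(u_num.min(), u_num.max())                   # compare with the closed form above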
The problem isn't 100% clear, and a full treatment would probably require the use of coupled oscillation techniques that you may or may not have learned yet. But if this is meant to be solved with "basic" techniques, here's how I would think about it: For a normal pendulum, the tension in the string is largest when the bob passes through the lowest point, i.e. when the string is vertical (since its angular speed is largest there). Thus, if the pendulum has a frequency $f$, the tension in the string will oscillate with a frequency $2f$. The pendulum string is therefore acting as a driving force with frequency $2f$ on the system consisting of the weight $W$ and the spring $k$. What's more, this driving force, at this frequency, causes the weight-spring system to oscillate with a "large amplitude". Take it from there. To sketch out a more formal technique, we can use Lagrangian mechanics. Let $Y$ denote the displacement of the upper mass from its equilibrium position, let $\theta$ denote the angle between the string and the vertical, and let $M = W/g$ denote the mass of the upper block. After some geometry, we can show that the Lagrangian for this system is$$\mathcal{L} = \frac{1}{2} (m+M) \dot{Y}^2 - m \ell \sin \theta \dot{\theta} \dot{Y} + \frac{m}{2} \ell^2 \dot{\theta}^2 + m g \ell \cos \theta - \frac{1}{2} k Y^2,$$ and taking the associated Euler-Lagrange equations, we conclude that\begin{align*}(m+M) \ddot{Y} - m \ell \left(\cos \theta \dot{\theta}^2 + \sin \theta \ddot{\theta} \right) &= - k Y \\- m \ell \sin \theta \ddot{Y} + m \ell^2 \ddot{\theta} &= - m g \ell \sin \theta. \end{align*} We now can look for a formal power series solution to these equations:\begin{align}Y(t) &= \epsilon Y^{(1)}(t) + \epsilon^2 Y^{(2)}(t) + \dots \\\theta(t) &= \epsilon \theta^{(1)}(t) + \epsilon^2 \theta^{(2)}(t) + \dots \end{align}We now want to plug these in to the Euler-Lagrange equations and expand them out order by order in $\epsilon$. At $\mathcal{O}(\epsilon)$, we find that$$(m+M) \ddot{Y}^{(1)} = -k Y^{(1)}, \qquad m \ell^2 \ddot{\theta}^{(1)} = - m g \ell \theta^{(1)},$$from which we conclude that for small oscillations, we have simple harmonic motion in both $\theta$ and $Y$. Moreover, these oscillations are uncoupled; at this level of approximation, we would not see the behavior described in the problem. To see the coupling effects between the two coordinates, we have to expand the Euler-Lagrange equations to $\mathcal{O}(\epsilon^2)$; if we do this, we get (after some algebra)\begin{align}(m+M) \ddot{Y}^{(2)} = m \ell \left( \left( \dot{\theta}^{(1)} \right)^2 + \theta^{(1)} \ddot{\theta}^{(1)} \right) -k Y^{(2)}\end{align}along with a similar equation for $\theta^{(2)}$. The displayed equation can be rearranged to yield an undamped driven oscillator equation, where the function $\theta^{(1)}$ and its derivatives act as the "driving force" for the second-order perturbation $Y^{(2)}$. The fact that these oscillations become "large" allows us to say something about the values of $k$ and $M$.
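To see the effect numerically, here is a small sketch (invented masses and lengths, not part of the original answer) that integrates the two Euler-Lagrange equations above with SciPy and tunes the spring so that $\sqrt{k/(m+M)} = 2\sqrt{g/\ell}$. Starting from a pure pendulum swing, the vertical oscillation typically gets pumped up, with energy sloshing back and forth between the two modes.

    # Spring-pendulum sketch: 2:1 tuning between spring and pendulum frequencies.
    import numpy as np
    from scipy.integrate import solve_ivp

    m, M, g, l = 0.2, 1.0, 9.81, 0.5        # invented parameters
    wp = np.sqrt(g/l)                        # pendulum frequency
    k = (2*wp)**2 * (m + M)                  # spring tuned so omega_spring = 2*omega_pendulum

    def rhs(t, s):
        Y, th, Yd, thd = s
        A = np.array([[m + M, -m*l*np.sin(th)],
                      [-m*l*np.sin(th), m*l**2]])
        b = np.array([-k*Y + m*l*np.cos(th)*thd**2,
                      -m*g*l*np.sin(th)])
        Ydd, thdd = np.linalg.solve(A, b)    # solve the coupled EL equations for accelerations
        return [Yd, thd, Ydd, thdd]

    sol = solve_ivp(rhs, (0, 200), [0.0, 0.2, 0.0, 0.0], max_step=0.01)
    early = np.abs(sol.y[0][sol.t < 10]).max()
    late = np.abs(sol.y[0]).max()
    print(early, late)   # the vertical amplitude builds up well beyond its early value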
LHCb Collaboration; Bernet, R; Büchler-Germann, A; Bursche, A; Chiapolini, N; De Cian, M; Elsasser, C; Müller, K; Palacios, J; Salzmann, C; Serra, N; Steinkamp, O; Straumann, U; Tobin, M; Vollhardt, A; Anderson, J; Aaij, R; Abellán Beteta, C; Adeva, B; Zvyagin, A (2012). Measurement of the ratio of prompt $\chi_{c}$ to $J/\psi$ production in $pp$ collisions at $\sqrt{s}=7$ TeV. Physics Letters B, 718(2):431-440. Abstract The prompt production of charmonium $\chi_{c}$ and $J/\psi$ states is studied in proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV at the Large Hadron Collider. The $\chi_{c}$ and $J/\psi$ mesons are identified through their decays $\chi_{c}\rightarrow J/\psi \gamma$ and $J/\psi\rightarrow \mu^+\mu^-$ using 36 pb$^{-1}$ of data collected by the LHCb detector in 2010. The ratio of the prompt production cross-sections for $\chi_{c}$ and $J/\psi$, $\sigma (\chi_{c}\rightarrow J/\psi \gamma)/ \sigma (J/\psi)$, is determined as a function of the $J/\psi$ transverse momentum in the range $2 < p_{\mathrm T}^{J/\psi} < 15$ GeV/$c$. The results are in excellent agreement with next-to-leading order non-relativistic expectations and show a significant discrepancy compared with the colour singlet model prediction at leading order, especially in the low $p_{\mathrm T}^{J/\psi}$ region.
Does the sequence $\sin(n!)$ diverge (converge)? It seems the sequence diverges. I tried for a contradiction but with no success. Thanks for your cooperation.

It depends on whether the argument of $\sin$ is in radians or degrees. If in degrees, $n!$ eventually becomes a multiple of 360, and from then on the function value is zero for all larger $n$, so the sequence converges. In radians this will not happen, as $\pi$ is irrational.

Hint: Take a subsequence in which $a_{n_i} \approx \pi ( 4i+1)/2 $ and another where $b_{n_j} \approx \pi ( 4j +3)/2$.
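If one wants to look at the radian case numerically, the arguments $n!$ are huge, so ordinary doubles are useless; a short sketch with mpmath (working precision chosen generously for $n \le 50$) is below.

    # sin(n!) in radians for moderate n, with enough precision that the
    # reduction of n! modulo 2*pi is trustworthy.
    from mpmath import mp, sin, factorial

    mp.dps = 200                       # log10(50!) ~ 64, so 200 digits is plenty
    vals = [float(sin(factorial(n))) for n in range(1, 51)]
    print(min(vals), max(vals))        # the values show no sign of settling down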
I have developed a differential equation for the variation of a star's semi-major axis with respect to its eccentricity. It is as follows: $$\frac{dy}{dx}=\frac{12}{19}\frac{y\left(1+\left(\frac{73}{24}x^2\right)+\left(\frac{37}{26}x^4\right)\right)}{x\left(1+\left(\frac{121}{304}x^2\right)\right)}$$ Where $y$ is the semi-major axis and $x$ is the eccentricity. The 3-D plots of this equation can be found [here](http://www.wolframalpha.com/input/?i=3D+plot++%5Cfrac%7B12%7D%7B19%7D%5Cfrac%7By%5Cleft(1%2B%5Cleft(%5Cfrac%7B73%7D%7B24%7Dx%5E2%5Cright)%2B%5Cleft(%5Cfrac%7B37%7D%7B26%7Dx%5E4%5Cright)%5Cright)%7D%7Bx%5Cleft(1%2B%5Cleft(%5Cfrac%7B121%7D%7B304%7Dx%5E2%5Cright)%5Cright)%7D) And this is the solution to the above DE [here](http://www.wolframalpha.com/input/?i=solve+y'%3D+%5Cfrac%7B12%7D%7B19%7D%5Cfrac%7By%5Cleft(1%2B%5Cleft(%5Cfrac%7B73%7D%7B24%7Dx%5E2%5Cright)%2B%5Cleft(%5Cfrac%7B37%7D%7B26%7Dx%5E4%5Cright)%5Cright)%7D%7Bx%5Cleft(1%2B%5Cleft(%5Cfrac%7B121%7D%7B304%7Dx%5E2%5Cright)%5Cright)%7D) The decay time of stars can be found by solving the following integral: $$T(a_{0},e_{0})=\frac{12(c_{0}^4)}{19\gamma}\int_{0}^{e_0}{\frac{e^{29/19}[1+(121/304)e^2]^{1181/2299}}{(1-e^2)^{3/2}}}de\tag1$$ Where $$\gamma=\frac{64G^3}{5c^5}m_{1}m_{2}(m_{1}+m_{2})$$ For $e_{0}$ close to $1$ the equation becomes: $$T(a_{0},e_{0})\approx\frac{768}{425}T_{f}a_{0}(1-e_{0}^2)^{7/2}\tag2$$ Where $$T_{f}=\frac{a_{0}^4}{4\gamma}$$ I used Appell's hypergeometric functions to solve integral (1), but is there any way in which I can express the solutions in terms of a few special functions with simpler symmetries, so that the analysis becomes easier? There is a well-defined symmetry for the above equation from the plot. Hence, is it possible to express this in terms of other special functions (which have different symmetries)? EDIT: I was suggested that, since the powers in the integrand in equation (1) are very non-trivial, the hypergeometric function probably can't be further simplified. But I fail to understand why this might pose a problem. Can't this D.E. be solved by Lie symmetry methods? Or can this solution's field be treated using Frobenius' theorem and its dimensions analysed?
Consider the series: $$\sum_{n=1}^{\infty}\frac{\zeta(2n+1)}{n(2n+1)}$$ We can easily prove that it's a convergent series. My question: is there a way to express this series in terms of zeta constants?

It is not difficult to find the generating function for the values of the $\zeta$ function over the positive odd integers: $$ f(x)=\sum_{n\geq 1}\zeta(2n+1) x^{2n} = -\gamma-\frac{1}{2}\left[\psi(1-x)+\psi(1+x)\right] \tag{1}$$ and: $$ \sum_{n\geq 1}\frac{\zeta(2n+1)}{n(2n+1)}=2\sum_{n\geq 1}\zeta(2n+1)\left(\frac{1}{2n}-\frac{1}{2n+1}\right)=2\int_{0}^{1}f(x)\left(\frac{1}{x}-1\right)\,dx\tag{2}$$ but the integrals $\int\frac{\psi(1\pm x)}{x}\,dx$ or $\int \log(x)\,\psi'(1\pm x)\,dx$ do not have nice closed forms (to my knowledge) except in terms of the Hurwitz zeta function and its derivatives.
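The identity in $(1)$–$(2)$ is easy to cross-check numerically; a short mpmath sketch (working precision picked arbitrarily) is below. The two printed numbers should agree to many digits.

    # Numerical cross-check of the series/integral identity above.
    from mpmath import mp, zeta, psi, euler, quad, nsum, inf

    mp.dps = 30

    lhs = nsum(lambda n: zeta(2*n + 1)/(n*(2*n + 1)), [1, inf])

    def f(x):   # generating function  f(x) = -gamma - (psi(1-x) + psi(1+x))/2
        return -euler - (psi(0, 1 - x) + psi(0, 1 + x))/2

    rhs = 2*quad(lambda x: f(x)*(1/x - 1), [0, 1])
    print(lhs, rhs)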
Yeah, this software cannot be too easy to install, my installer is very professional looking, currently not tied into that code, but directs the user how to search for their MikTeX and or install it and does a test LaTeX rendering Some body like Zeta (on codereview) might be able to help a lot... I'm not sure if he does a lot of category theory, but he does a lot of Haskell (not that I'm trying to conflate the two)... so he would probably be one of the better bets for asking for revision of code. he is usually on the 2nd monitor chat room. There are a lot of people on those chat rooms that help each other with projects. i'm not sure how many of them are adept at category theory though... still, this chat tends to emphasize a lot of small problems and occasionally goes off tangent. you're project is probably too large for an actual question on codereview, but there is a lot of github activity in the chat rooms. gl. In mathematics, the Fabius function is an example of an infinitely differentiable function that is nowhere analytic, found by Jaap Fabius (1966). It was also written down as the Fourier transform off^(z)=∏m=1∞(cos... Defined as the probability that $\sum_{n=1}^\infty2^{-n}\zeta_n$ will be less than $x$, where the $\zeta_n$ are chosen randomly and independently from the unit interval @AkivaWeinberger are you familiar with the theory behind Fourier series? anyway here's a food for thought for $f : S^1 \to \Bbb C$ square-integrable, let $c_n := \displaystyle \int_{S^1} f(\theta) \exp(-i n \theta) (\mathrm d\theta/2\pi)$, and $f^\ast := \displaystyle \sum_{n \in \Bbb Z} c_n \exp(in\theta)$. It is known that $f^\ast = f$ almost surely. (a) is $-^\ast$ idempotent? i.e. is it true that $f^{\ast \ast} = f^\ast$? @AkivaWeinberger You need to use the definition of $F$ as the cumulative function of the random variables. $C^\infty$ was a simple step, but I don't have access to the paper right now so I don't recall it. > In mathematics, a square-integrable function, also called a quadratically integrable function, is a real- or complex-valued measurable function for which the integral of the square of the absolute value is finite. I am having some difficulties understanding the difference between simplicial and singular homology. I am aware of the fact that they are isomorphic, i.e. the homology groups are in fact the same (and maybe this doesnt't help my intuition), but I am having trouble seeing where in the setup they d... Usually it is a great advantage to consult the notes, as they tell you exactly what has been done. A book will teach you the field, but not necessarily help you understand the style that the prof. (who creates the exam) creates questions. @AkivaWeinberger having thought about it a little, I think the best way to approach the geometry problem is to argue that the relevant condition (centroid is on the incircle) is preserved by similarity transformations hence you're free to rescale the sides, and therefore the (semi)perimeter as well so one may (for instance) choose $s=(a+b+c)/2=1$ without loss of generality that makes a lot of the formulas simpler, e.g. the inradius is identical to the area It is asking how many terms of the Euler Maclaurin formula do we need in order to compute the Riemann zeta function in the complex plane? $q$ is the upper summation index in the sum with the Bernoulli numbers. 
This appears to answer it in the positive: "By repeating the above argument we see that we have analytically continued the Riemann zeta-function to the right-half plane σ > 1 − k, for all k = 1, 2, 3, . . .."
Does $\int_1^\infty\sin (\frac{\sin x}{x})\mathrm d x$diverge or not? If it converges, does it converge conditionally or absolutely? I guess that it converges conditionally, also,I think it may be related to $\int_{n\pi}^{(n+1)\pi}\frac{\sin x}{x}\mathrm d x$ , but I do not know how to start? Any help will be appreciated. For any $n\in\mathbb{Z}^+$ the integral $\int_{0}^{+\infty}\left(\frac{\sin x}{x}\right)^n\,dx$ can be explicitly computed (see here, for instance). From the approximation $\frac{\sin x}{x}\approx e^{-x^2/6}$, it is expected to be positive and decay like $\sqrt{\frac{3\pi}{2n}}$. These facts already, together with $\sin z$ being an entire function, give that our integral is a converging one. A more convincing argument, maybe, is that for any $x\geq 1$ the inequality: $$ \left(1-\frac{1}{5x^2}\right)\cdot\frac{\sin x}{x}\leq \sin\left(\frac{\sin x}{x}\right)\leq\frac{\sin x}{x} $$ holds. Being equivalent to $\int_{0}^{+\infty}\frac{\sin x}{x}\,dx$, our integral is converging but not absolutely converging.
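For what it's worth, the convergence is easy to see numerically by summing the contributions over the intervals $[n\pi,(n+1)\pi]$ mentioned in the question; the sketch below (SciPy, an arbitrary cut-off of 2000 intervals) shows the partial integrals settling down while the individual pieces shrink.

    # Numerical evidence (not a proof) for the convergence of the integral.
    import numpy as np
    from scipy.integrate import quad

    f = lambda x: np.sin(np.sin(x)/x)
    total = quad(f, 1, np.pi)[0]
    pieces = []
    for n in range(1, 2001):
        piece = quad(f, n*np.pi, (n + 1)*np.pi)[0]
        total += piece
        pieces.append(piece)
    print(total)                     # partial integrals approach a limit
    print(pieces[0], pieces[999])    # pieces alternate in sign and shrink roughly like 1/n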
Yes, this presents no difficulty. As long as you can sample from the full conditionals (and it sounds like you can) then yes. For a bivariate $(U,V)$ that's just sampling $(V|U=u)$ and $(U|V=v)$. Let's consider a simple case (for which we don't really need Gibbs sampling). Let: $f_{X,Q}(x,q)= \frac{{n\choose x}} {\mathrm{B}(\alpha,\beta)} q^{x+\alpha-1} (1-q)^{n-x+\beta-1}\,, \quad x=0,1,...,n \quad 0<q<1 $ which can be written as $f_{X,Q}(x,q) = f_{X|Q=q}(x) f_Q(q)$ $\qquad\qquad\:\:= {n\choose x}q^x(1-q)^{n-x}\,\cdot\,\frac{1} {\mathrm{B}(\alpha,\beta)} q^{\alpha-1} (1-q)^{\beta-1} $ So if we know $Q=q$ we can sample from $X$; it's just binomial. On the other hand, conditional on $X=x$ we can see that $Q$ just has a beta distribution, so again we can sample from that. [If we perform Gibbs sampling on that pair of full conditionals we'd be ultimately sampling from a beta-binomial marginal distribution for $X$; if that marginal was of primary interest we could calculate it directly by integration.]
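The pair of full conditionals above turns directly into a couple of lines of code. Below is a minimal Gibbs-sampling sketch in Python/NumPy (the values of $\alpha$, $\beta$, $n$ and the chain length are arbitrary choices); the sample mean of $X$ should land near the beta-binomial mean $n\alpha/(\alpha+\beta)$.

    # Minimal Gibbs sampler for the beta-binomial example above.
    import numpy as np

    rng = np.random.default_rng(0)
    n, alpha, beta = 20, 2.0, 3.0
    iters, burn = 20_000, 1_000

    x, q = 0, 0.5
    xs = []
    for it in range(iters):
        x = rng.binomial(n, q)                    # X | Q = q  ~  Binomial(n, q)
        q = rng.beta(x + alpha, n - x + beta)     # Q | X = x  ~  Beta(x + alpha, n - x + beta)
        if it >= burn:
            xs.append(x)

    xs = np.array(xs)
    print(xs.mean(), n*alpha/(alpha + beta))      # sample mean vs beta-binomial mean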
I know that if we move a rectangular wire from no magnetic field to through a magnetic field, there would be an induced voltage because there is a change in flux (b∆x). However, if we moved a wire/rod in the same situation, it will also induce a voltage, but is it due to the change in flux (b∆x) or to charge separation?

The induced voltage depends on the change in flux according to Faraday's law $$\mathrm{volt}=-N\frac{\mathrm{d}\phi}{\mathrm{d}t}=-N\mathop{\iint}_{S^{'}}\frac{\mathrm{d}\vec{B}}{\mathrm{d}t}\cdot\mathrm{d}\vec{S}=-N\mathop{\oint}_{C^{'}}\frac{\mathrm{d}\vec{A}}{\mathrm{d}t}\cdot\mathrm{d}\vec{l}$$ where $\vec{A}$ is the vector potential, $\vec{B}=\nabla\times\vec{A}$. So as you can see, the flux (and the math associated with it) needs a defined closed contour for inspection, since the potential induced in your case is NOT electrostatic but electrodynamic. Electrodynamic potentials are NOT absolute and need a defined closed circuit for their realization. Hence, in this case, the question doesn't quite make sense, since you don't have a closed circuit and are asking for an analysis of an electrodynamic potential. What would make sense is to ask: if you have a multimeter attached to the rod and continuously monitor the potential, would the potential change? The answer depends on the state of motion of the multimeter. If the multimeter is static and the rod moves, and while the rod moves the wires from the multimeter to the rod ends unfurl, then yes, you would see a voltage. If, on the other hand, the multimeter is soldered to the rod with two other rods so that they form a static hoop, then you will not see any voltage since, in that case, you'd have $$\mathop{\oint}_{C^{'}}\vec{A}\cdot\mathrm{d}\vec{l}=\mathrm{const}$$ and the rate of change of that quantity with respect to time is zero. Now, when you talk of a Lorentz force, then $\vec{F}=q\left(\vec{v}\times\vec{B}\right)$ and you see the perpendicularity of the velocity and the magnetic field, hence creating a force on the electrons, which shift towards a given side. This creates a charge segregation and hence a potential difference between the ends of the rod.
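As a quick order-of-magnitude illustration of the Lorentz-force (motional emf) picture: a rod of length $L$ sliding at speed $v$ perpendicular to a uniform $B$ develops an end-to-end potential difference of $BLv$. The numbers in the tiny sketch below are arbitrary.

    # Motional emf between the ends of an open rod: emf = integral of (v x B).dl = B*L*v
    B, L, v = 0.5, 0.3, 2.0          # tesla, metres, metres per second (made-up values)
    print(B * L * v, "V")            # 0.3 V between the ends of the rod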
The subway train will indeed be hit by light, but it will be hard to see the lights through the window. The problem in seeing the tunnel lights is relativistic aberration: the angle of the light is changed by your relative velocity. The formula is $$\cos(\theta_2)=\frac{\cos(\theta_1)+\beta}{1+\beta\cos(\theta_1)}$$ where $\beta =v/c$. $\theta_1$ is the incident light angle, $\theta_2$ the perceived angle (where 0 degrees is forward). As you speed up, most light will arrive from the front - the "searchlight effect". If you have lights at regular intervals along the tunnel there will be some lights far behind you that get a perceived angle of $\pi/2$ just outside the window, so you will indeed see some lights. However, they will be far behind you so their intensity will be low: the near lights will be shining on the front of the train. Plot of light angles and intensities for $\beta$ 0.5 and 0.999999: the circles denote individual lights at coordinates $(x,1)$ seen from the origin, with area proportional to the intensity $1/r^2=1/(1+x^2)$. For $\beta=0.5$ the brightest light, just outside the train, is seen at a 60 degree angle, while for the faster train nearly all light comes from the front and will be hard to see from inside the train. In addition, there will be redshifting/blueshifting, making lights behind you redshifted, so the light apparently just outside may be impossible to see if it has a spectrum with little ultraviolet light.
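A small sketch of the aberration formula above, using the same two values of $\beta$ as the plot description (everything else here is just illustration):

    # Where do tunnel lights appear to the passenger, given the aberration formula?
    import numpy as np

    def perceived_angle(theta1, beta):
        c1 = np.cos(theta1)
        return np.arccos((c1 + beta) / (1 + beta * c1))

    for beta in (0.5, 0.999999):
        # a lamp physically just outside the window (theta1 = 90 deg in the tunnel frame)
        print(beta, np.degrees(perceived_angle(np.radians(90), beta)))
        # the tunnel-frame angle whose image lands exactly sideways (theta2 = 90 deg)
        print(beta, np.degrees(np.arccos(-beta)))
    # beta = 0.5: the nearest lamp appears at 60 degrees; the light seen sideways
    # left its lamp at 120 degrees, i.e. well behind the passenger.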
Hello,I am an undergraduate who has taken basic linear algebra and ODE. As for physics, I have taken an online edX quantum mechanics course.I am looking at studying some of the necessary math and physics needed for QFT and particle physics. It looks like I need tensors and group theory... Hello, I am newish in group theory so sorry if anything in the following is not entirely correct.In general, one can anticipate if a matrix element <i|O|j> is zero or not by seeing if O|j> shares any irreducible representation with |i>.I know how to reduce to IRs the former product but I... Looking into the infinitesimal view of rotations from Lie, I noticed that the vector cross product can be written in terms of the generators of the rotation group SO(3). For example:$$\vec{\mathbf{A}} \times \vec{\mathbf{B}} = (A^T \cdot J_x \cdot B) \>\> \hat{i} + (A^T \cdot J_y \cdot B)... 1. Homework StatementI am trying to get the C-G Decomposition for 6 ⊗ 3.2. Homework EquationsNeglecting coefficients a tensor can be decomposed into a symmetric part and an antisymmetric part. For the 6 ⊗ 3 = (2,0) ⊗ (1,0) this is:Tij ⊗ Tk = Qijk = (Q{ij}k + Q{ji}k) + (Q[ij]k +... Hi!I'm doing my master thesis in AdS/CFT and I've read several times that "Fields transforms in the adjoint representation" or "Fields transforms in the fundamental representation". I've had courses in Advanced mathematics (where I studied Group theory) and QFTs, but I don't understand (or... 1. Homework StatementLet ##G## be a group of order ##2p## with p a prime and odd number.a) We suppose ##G## as abelian. Show that ##G \simeq \mathbb{Z}/2p\mathbb{Z}##2. Homework Equations3. The Attempt at a SolutionIntuitively I see why but I would like some suggestion of what... 1. Homework Statement[G,G] is the commutator group.Let ##H\triangleleft G## such that ##H\cap [G,G]## = {e}. Show that ##H \subseteq Z(G)##.2. Homework Equations3. The Attempt at a SolutionIn the previous problem I showed that ##G## is abelian iif ##[G,G] = {e}##. I also showed that... 1. Homework StatementLet ##G## be a group. Let ##H \triangleleft G## and ##K \leq G## such that ##H\subseteq K##.a) Show that ##K\triangleleft G## iff ##K/H \triangleleft G/H##b) Suppose that ##K/H \triangleleft G/H##. Show that ##(G/H)/(K/H) \simeq G/K##2. Homework EquationsThe three... It goes without saying that theoretical physics has over the years become overrun with countless distinct - yet sometimes curiously very similar - theories, in some cases even dozens of directly competing theories. Within the foundations things can get far worse once we start to run into... 1. Homework StatementLet m ≥ 3. Show that $$D_m \cong \mathbb{Z}_m \rtimes_{\varphi} \mathbb{Z}_2 $$where $$\varphi_{(1+2\mathbb{Z})}(1+m\mathbb{Z}) = (m-1+m\mathbb{Z})$$2. Homework EquationsI have seen most basic concepts of groups except group actions. Si ideally I should not use them... I teach group theory for physicists, and I like to teach it following some papers. In general my students work with condensed matter, so I discuss group theory following these papers:[1] Group Theory and Normal Modes, American Journal of Physics 36, 529 (1968)[2] Nonsymmorphic Symmetries and... 1. Homework StatementI am translating so bear with me.We have two group homomorphisms:α : G → G'β : G' → GLet β(α(x)) = x ∀x ∈ GShow that1)β is a surjection2)α an injection3) ker(β) = ker(α ο β) (Here ο is the composition of functions.)2. Homework EquationsThis is from a... 
I do have a fair amount of visual/geometric understanding of groups, but when I start solving problems I always wind up relying on my algebraic intuition, i.e. experience with forms of symbolic expression that arise from theorems, definitions, and brute symbolic manipulation. I even came up with... I'm having a bit of an issue wrapping my head around the adjoint representation in group theory. I thought I understood the principle but I've got a practice problem which I can't even really begin to attempt. The question is this:My understanding of this question is that, given a... 1. Homework StatementDetermine ##\phi(R_{180})##, if ##\phi:D_n\to D_n## is an automorphism where ##n## is even so let ##n=2k##.The solutions manual showed that since the center of ##D_n## is ##\{R_0, R_{180}\}## and ##R_{180}## is not the identity then it can only be that... 1. Homework StatementDecide all abelian groups of order 675. Find an element of order 45 in each one of the groups, if it exists.2. Homework Equations /propositions/definitionsFundamental Theorem of Finite Abelian GroupsLagrange's Theorem and its corollaries (not sure if helpful for this... 1. Homework StatementThe SO(3) representation can be represented as ##3\times 3## matrices with the following form:$$J_1=\frac{1}{\sqrt{2}}\left(\matrix{0&1&0\\1&0&1\\ 0&1&0}\right) \ \ ; \ \ J_2=\frac{1}{\sqrt{2}}\left(\matrix{0&-i&0\\i&0&-i\\ 0&i&0}\right) \ \ ; \ \... In Arnold's book, ordinary differential equations 3rd. WHY Arnold say Tg:M→M instead of Tg:G→S(M) for transformations Tfg=Tf Tg,Tg^-1=(Tg)^-1.Let M be a group and M a set. We say that an action of the group G on the set M is defined if to each element g of G there corresponds a... I've been trying to understand representations of the Lorentz group. So as far as I understand, when an object is in an (m,n) representation, then it has two indices (let's say the object is ##\phi^{ij}##), where one index ##i## transforms as ##\exp(i(\theta_k-i\beta_k)A_k)## and the other index... 1. Homework StatementFor a left invariant vector field γ(t) = exp(tv). For a gauge transformation t -> t(xμ). Intuitively, what happens to the LIVF in the latter case? Is it just displaced to a different point in spacetime or something else?2. Homework Equations3. The Attempt at a... Hello guys,In 90% of the papers I've read about diferent ways to achieve generalizations of the Proca action I've found there's a common condition that has to be satisfied, i.e: The number of degrees of freedom allowed to be propagated by the theory has to be three at most (two if the fields... Hi allI have a shallow understanding of group theory but until now it was sufficient. I'm trying to generalize a problem, it's a Lagrangian with SU(N) symmetry but I changed some basic quantity that makes calculations hard by using a general SU(N) representation basis. Hopefully the details of... Could you please help me to understand what is the difference between notions of «transformation» and «automorphism» (maybe it is more correct to talk about «inner automorphism»), if any? It looks like those two terms are used interchangeably.By «transformation» I mean mapping from some set... 1. Homework StatementAre these functions homomorphisms, determine the kernel and image, and identify the quotient group up to isomorphism?C^∗ is the group of non-zero complex numbers under multiplication, and C is the group of all complex numbers under addition.2. Homework Equationsφ1 ... 
The group of moves for the 3x3x3 puzzle cube is the Rubik’s Cube group: https://en.wikipedia.org/wiki/Rubik%27s_Cube_group.What are the groups of moves for NxNxN puzzle cubes called in general? Is there even a standardized term?I've been trying to find literature on the groups for the... 1. Homework StatementThe dicyclic group of order 12 is generated by 2 generators x and y such that: ##y^2 = x^3, x^6 = e, y^{-1}xy =x^{-1} ## where the element of Dic 12 can be written in the form ##x^{k}y^{l}, 0 \leq x < 6, y = 0,1##. Write the product between two group elements in the form... 1. Homework StatementConsider the contractions of the 3D Euclidean symmetry while preserving the SO(2) subgroup. In the physics point of view, explain the resulting symmetries G(2) (Galilean symmetry group) and H(3) (Heisenberg-Weyl group for quantum mechanics) and give their Lie algebras... The Galilean transformations are simple.x'=x-vty'=yz'=zt'=t.Then why is there so much jargon and complication involved in proving that Galilean transformations satisfy the four group properties (Closure, Associative, Identity, Inverse)? Why talk of 10 generators? Why talk of rotation as...
In combinatorics there are quite many such disproven conjectures. The most famous of them are: 1) Tait conjecture: Any 3-vertex connected planar cubic graph is Hamiltonian The first counterexample found has 46 vertices. The "least" counterexample known has 38 vertices. 2) Tutte conjecture: Any bipartite cubic graph is Hamiltonian The first counterexample found has 96 vertices. The "least" counterexample known has 54 vertices. 3) Thom conjecture If two finite undirected simple graphs have conjugate adjacency matrices over $\mathbb{Z}$, then they are isomorphic. The least known counterexample pair is formed by two trees with 11 vertices. 4) Borsuk conjecture: Every bounded subset $E$ of $\mathbb{R}^n$can be partitioned into $n+1$ sets, each of which has a smaller diameter, than $E$ In the first counterexample found $n = 1325$. In the "least" counterexample known $n = 64$. 5) Danzer-Gruenbaum conjecture: If $A \subset \mathbb{R}^n$ and $\forall u, v, w \in A$ $(u - w, v - w) > 0,$ then $|A| \leq 2n - 1$ This statement is not true for any $n \geq 35$ 6) The Boolean Pythagorean Triple Conjecture: There exists $S \subset \mathbb{N}$, such that neither $S$, nor $\mathbb{N} \setminus S$ contain Pythagorean triples This conjecture was disproved by M. Heule, O. Kullman and V. Marek. They proved, that there do exist such $S \subset \{n \in \mathbb{N}| n \leq k\}$, such that neither $S$, nor $\{n \in \mathbb{N}| n \leq k\} \setminus S$ contain Pythagorean triples, for all $k \leq 7824$, but not for $k = 7825$ 7) Burnside conjecture: Every finitely generated group with period n is finite This statement is not true for any odd $n \geq 667$ 8) Otto Shmidt conjecture: If all proper subgroups of a group $G$ are isomorphic to $C_p$, where $p$ is a fixed prime number, then $G$ is finite. Alexander Olshanskii proved, that there are continuum many non-isomorphic counterexamples to this conjecture for any $p > 10^{75}$. 9) Von Neuman conjecture Any non-amenable group has a free subgroup of rank 2 The least known finitely presented counterexample has 3 generators and 9 relators 10) Word problem conjecture: Word problem is solvable for any finitely generated group The "least" counterexample known has 12 generators. 11) Leinster conjecture: Any Leinster group has even order The least counterexample known has order 355433039577. 12) Rotman conjecture: Automorphism groups of all finite groups not isomorphic to $C_2$ have even order The first counterexample found has order 78125. The least counterexample has order 2187. It is the automorphism group of a group with order 729. 13) Rose conjecture: Any nontrivial complete finite group has even order The least counterexample known has order 788953370457. 14) Hilton conjecture Automorphism group of a non-abelian group is non-abelian The least counterexample known has order 64. 15)Hughes conjecture: Suppose $G$ is a finite group and $p$ is a prime number. Then $[G : \langle\{g \in G| g^p \neq e\}\rangle] \in \{1, p, |G|\}$ The least known counterexample has order 142108547152020037174224853515625. 16) Moreto conjecture: Let $S$ be a finite simple group and $p$ the largest prime divisor of $|S|$. 
If $G$ is a finite group with the same number of elements of order $p$ as $S$ and $|G| = |S|$, then $G \cong S$ The first counterexample pair constructed is formed by groups of order 20160 (those groups are $A_8$ and $L_3(4)$) 17) This false statement is not a conjecture, but rather a popular mistake done by many people, who have just started learning group theory: All elements of the commutant of any finite group are commutators The least counterexample has order 96. If the numbers mentioned in this text do not impress you, please, do not feel disappointed: there are complex combinatorial objects "hidden" behind them.
I'm given series $\sum_{n = 1}^{+\infty} \frac{(-1)^{n}}{(n+1)!}\left(1 + 2! + \cdots + n!\right)$ and I have to find whether it is convergent. Testing for absolute convergence, we have $a_n = \frac{1}{(n+1)!} + \frac{2}{(n+1)!} + \cdots + \frac{(n-1)!}{(n+1)!} + \frac{n!}{(n+1)!}$ and since last term is $\frac{n!}{(n+1)!} = \frac{1}{n+1}$ series diverge in comparison with harmonic series and hence can only be conditionally convergent, which I will try to prove from Leibniz criterion. Now, I have to show, that $a_n$-th term is monotonically decreasing and $\lim a_n = 0$. Treating $a_n$ as $\frac{a_n}{b_n} = \frac{1! + 2! + \cdots + n!}{(n+1)!}$ I can use Stolz-Cesàro theorem ($\lim \frac{a_n}{b_n} = \lim\frac{a_{n+1} - a_n}{b_{n+1} - b_n}$) since $b_n$ is monotonically increasing and $\lim b_n = +\infty$. Then $$\lim \frac{a_n}{b_n} = \lim\frac{a_{n+1} - a_n}{b_{n+1} - b_n} = \lim\frac{(n+1)!}{(n+2)! - (n+1)!} = \lim \frac{1}{n+2}\frac{1}{1 - \frac{1}{n+2}} = 0.$$ But how to prove monotonicity? I've tried $\frac{a_{n+1}}{a_n}$ but it didn't get me anywhere. What are some ways to show monotonicity of sequences like $a_n$?
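Not a substitute for a proof, but a quick numeric look at $a_n$ (a short Python check with an arbitrary cut-off) makes the behaviour plausible: $a_1=a_2=1/2$, and the values decrease strictly from $n=2$ on.

    # Numerical look at a_n = (1! + 2! + ... + n!)/(n+1)!
    from math import factorial

    def a(n):
        return sum(factorial(k) for k in range(1, n + 1)) / factorial(n + 1)

    print([round(a(n), 6) for n in range(1, 11)])          # 0.5, 0.5, 0.375, 0.275, ...
    print(all(a(n + 1) < a(n) for n in range(2, 200)))     # True over this range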
Recent developments of CRISPR-Cas9 based homing endonuclease gene drive systems for the suppression or replacement of mosquito populations have generated much interest in their use for control of mosquito-borne diseases (such as dengue, malaria, Chikungunya and Zika). This is because genetic control of pathogen transmission may complement or even substitute traditional vector-control interventions, which have had limited success in bringing the spread of these diseases to a halt. Despite excitement for the use of gene drives for mosquito control, current modeling efforts have analyzed only a handful of these new approaches (usually studying just one per framework). Moreover, these models usually consider well-mixed populations with no explicit spatial dynamics. To this end, we are developing MGDrivE (Mosquito Gene DRIVe Explorer), in cooperation with the 'UCI Malaria Elimination Initiative', as a flexible modeling framework to evaluate a variety of drive systems in spatial networks of mosquito populations. This framework provides a reliable testbed to evaluate and optimize the efficacy of gene drive mosquito releases. What separates MGDrivE from other models is the incorporation of mathematical and computational mechanisms to simulate a wide array of inheritance-based technologies within the same, coherent set of equations. We do this by treating the population dynamics, genetic inheritance operations, and migration between habitats as separate processes coupled together through the use of mathematical tensor operations. This way we can conveniently swap inheritance patterns whilst still making use of the same set of population dynamics equations. This is a crucial advantage of our system, as it allows other research groups to test their ideas without developing new models and without the need to spend time adapting other frameworks to suit their needs. MGDrivE is based on the idea that we can decouple the genotype inheritance process from the population dynamics equations. This allows the system to be treated and developed in three semi-independent modules that come together to form the system. The original version of this model was based on work by Deredec et al. (2011) and Hancock & Godfray (2007), and adapted to accommodate CRISPR homing dynamics in a previous publication by our team (Marshall et al. 2017). As described, we extended this framework to be able to handle a variable number of genotypes, and migration across spatial scenarios. We did this by adapting the equations to work in a tensor-oriented manner, where each genotype can have different processes affecting their particular strain (death rates, mating fitness, sex-ratio bias, et cetera). Before beginning the full description of the model we will define some of the conventions we followed for the notation of the written description of the system. In the case of one dimensional tensors, each slot represents a genotype of the population. For example, the male population is stored in the following way: \[ \overline{Am} = \left(\begin{array}{c} g_1 \ g_2 \ g_3 \ \vdots \ g_n \end{array}\right) _{i} \] All the processes that affect mosquitoes in a genotype-specific way are defined and stored in this way within the framework. There are two tensors of squared dimensionality in the model: the adult females matrix, and the genotype-specific viability mask. 
In the case of the former, the rows represent the females' genotype, whilst the columns represent the genotype of the male they mated with:

\[ \overline{\overline{Af}} = \left(\begin{array}{ccccc} g_{11} & g_{12} & g_{13} & \cdots & g_{1n}\\ g_{21} & g_{22} & g_{23} & \cdots & g_{2n}\\ g_{31} & g_{32} & g_{33} & \cdots & g_{3n}\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ g_{n1} & g_{n2} & g_{n3} & \cdots & g_{nn} \end{array}\right) _{ij} \]

The genotype-specific viability mask, on the other hand, stores the mothers' genotype in the rows, and the potential eggs' genotype in the columns of the matrix. To model an arbitrary number of genotypes efficiently in the same mathematical framework we use a 3-dimensional array structure (cube) in which the axes represent the female parent's genotype, the male parent's genotype, and the offspring's genotype. The cube structure gives us the flexibility to apply tensor operations to the elements within our equations, so that we can calculate the stratified population dynamics rapidly, and within a readable, flexible computational framework. This becomes apparent when we define the equation we use for the computation of eggs laid at any given point in time:

\[ \overline{O(T_x)} = \sum_{j=1}^{n} \Bigg( \bigg( (\beta*\overline{s} * \overline{ \overline{Af_{[t-T_x]}}}) * \overline{\overline{\overline{Ih}}} \bigg) * \Lambda \Bigg)^{\top}_{ij} \]

In this equation, the matrix containing the number of mated adult females \((\overline{\overline{Af}})\) is multiplied element-wise with each one of the layers containing the eggs' genotype proportions expected from this cross \((\overline{\overline{\overline{Ih}}})\). The resulting matrix is then multiplied by a binary 'viability mask' \((\Lambda)\) that filters out female-parent to offspring genetic combinations that are not viable due to biological impediments (such as cytoplasmic incompatibility). The summation of the transposed resulting matrix returns the total fraction of eggs resulting from all the male to female genotype crosses (\(\overline{O(T_x)}\)). Note: for inheritance operations to be consistent within the framework, the summation of each element in the z-axis (that is, the proportions of each one of the offspring's genotypes) must be equal to one. An inheritance cube is an array object that specifies inheritance probabilities (offspring genotype probabilities) stratified by male and female parent genotypes. MGDrivE provides a number of prebuilt cubes to model different gene drive systems. During the three aquatic stages, a density-independent mortality process takes place:

\[ \theta_{st}=(1-\mu_{st})^{T_{st}} \]

along with a density-dependent process that depends on the number of larvae in the environment:

\[ F(L[t])=\Bigg(\frac{\alpha}{\alpha+\sum{\overline{L[t]}}}\Bigg)^{1/T_l} \]

where \(\alpha\) represents the strength of the density-dependent process. This parameter is calculated with:

\[ \alpha=\Bigg( \frac{\frac{1}{2} * \beta * \theta_e * Ad_{eq}}{R_m-1} \Bigg) * \Bigg( \frac{1-(\theta_l / R_m)}{1-(\theta_l / R_m)^{1/T_l}} \Bigg) \]

in which \(\beta\) is the species' fertility in the absence of gene-drive, \(Ad_{eq}\) is the adult mosquito population equilibrium size, and \(R_{m}\) is the population growth in the absence of density-dependent mortality.
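A minimal NumPy transcription of the egg-production step may help make the tensor bookkeeping concrete. This is a sketch only, not the actual MGDrivE R code; the genotype count, rates and random matrices below are invented.

    # Sketch of O(T_x): mated females Af[i, j] (mother i, father j), inheritance
    # cube Ih[i, j, k] (offspring genotype k), mother-by-offspring viability Lam[i, k].
    import numpy as np

    n_geno = 3
    rng = np.random.default_rng(1)

    beta = 20.0                                    # eggs per female per day
    s = np.ones(n_geno)                            # genotype-specific fertility modifiers
    Af = rng.random((n_geno, n_geno)) * 100        # mated females by (mother, father)
    Ih = rng.dirichlet(np.ones(n_geno), size=(n_geno, n_geno))  # each Ih[i, j, :] sums to 1
    Lam = np.ones((n_geno, n_geno))                # 1 = viable cross, 0 = inviable

    # ((beta * s * Af) broadcast over the offspring axis) * Ih, masked, summed over fathers
    eggs_by_mother = np.einsum('i,ij,ijk->ik', beta * s, Af, Ih) * Lam
    O = eggs_by_mother.sum(axis=0)                 # total eggs per offspring genotype
    print(O, O.sum(), (beta * s * Af.sum(axis=1)).sum())  # totals agree when Lam is all ones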
This population growth is calculated with the average generation time (\(g\)), the adult mortality rate (\(\mu_{ad}\)), and the daily population growth rate (\(r_{m}\)):

\[ g=T_{e}+T_{l}+T_{p}+\frac{1}{\mu_{ad}}, \qquad R_{m}=(r_{m})^{g} \]

The computation of the larval stage in the population is crucial to the model because the density-dependent processes necessary for equilibrium trajectories to be calculated occur here. This calculation is performed with the following equation:

\[ D(\theta_l,T_x) = \left\{ \begin{array}{ll} \theta_{l[0]}^{\prime}=\theta_l & \quad i = 0 \\ \theta_{l[i+1]}^{\prime} = \theta_{l[i]}^{\prime} *F(\overline{L_{[t-i-T_x]}}) & \quad i \leq T_l \end{array} \right. \]

In addition to this, we need the larval mortality (\(\mu_{l}\)):

\[ \mu_{l}=1-\Bigg( \frac{R_{m} * \mu_{ad}}{\frac{1}{2} * \beta * (1-\mu_{m})} \Bigg)^{\frac{1}{T_{e}+T_{l}+T_{p}}} \]

With these mortality processes, we are now able to calculate the larval population:

\[ \overline{L_{[t]}}= \overline{L_{[t-1]}} * (1-\mu_{l}) * F(\overline{L_{[t-1]}}) +\overline{O(T_{e})}* \theta_{e} - \overline{O(T_{e}+T_{l})} * \theta_{e} * D(\theta_{l},0) \]

where the first term accounts for larvae surviving from one day to the next; the second term accounts for the eggs that have hatched within the same period of time; and the last term computes the number of larvae that have transformed into pupae. We are ultimately interested in calculating how many adults of each genotype exist at any given point in time. For this, we first calculate the number of eggs that are laid and survive to the adult stages with the equation:

\[ \overline{E^{\prime}}= \overline{O(T_{e}+T_{l}+T_{p})} * \bigg(\overline{\xi_{m}} * (\theta_{e} * \theta_{p}) * (1-\mu_{ad}) * D(\theta_{l},T_{p}) \bigg) \]

With this information we can calculate the current number of male adults in the population by computing the following equation:

\[ \overline{Am_{[t]}}= \overline{Am_{[t-1]}} * (1-\mu_{ad})*\overline{\omega_{m}} + (1-\overline{\phi}) * \overline{E^{\prime}} + \overline{\nu m_{[t-1]}} \]

in which the first term represents the number of males surviving from one day to the next; the second one, the fraction of individuals that survive to adulthood (\(\overline{E^{\prime}}\)) and emerge as males (\(1-\phi\)); the last one is used to add males into the population as part of gene-drive release campaigns. Female adult populations are calculated in a similar way:

\[ \overline{\overline{Af_{[t]}}}= \overline{\overline{Af_{[t-1]}}} * (1-\mu_{ad}) * \overline{\omega_{f}} + \bigg( \overline{\phi} * \overline{E^{\prime}}+\overline{\nu f_{[t-1]}}\bigg)^{\top} * \bigg( \frac{\overline{\eta}*\overline{Am_{[t-1]}}}{\sum{\overline{Am_{[t-1]}}}} \bigg) \]

where we first compute the surviving female adults from one day to the next, and then we calculate the mating composition of the female fraction emerging from the pupa stage. To do this, we obtain the surviving fraction of eggs that survive to adulthood (\(\overline{E^{\prime}}\)) and emerge as females (\(\phi\)); we then add the new females introduced as a result of gene-drive releases (\(\overline{\nu f_{[t-1]}}\)).
After doing this, we calculate the proportion of males that are allocated to each female genotype, taking into account their respective mating fitnesses (\(\overline{\eta}\)), so that we can introduce the new adult females into the population pool. As was briefly mentioned before, we include the option to release both male and/or female individuals into the populations. Another important thing to emphasize is that we allow flexible release sizes and schedules. Our model handles releases internally as lists of population compositions, so it is possible to have releases performed at irregular intervals and with different mosquito genetic compositions, as long as no genotypes are introduced that have not been previously defined in the inheritance cube.

\[ \overline{\nu} = \bigg\{ \left(\begin{array}{c} g_1 \\ g_2 \\ g_3 \\ \vdots \\ g_n \end{array}\right)_{t=1} , \left(\begin{array}{c} g_1 \\ g_2 \\ g_3 \\ \vdots \\ g_n \end{array}\right)_{t=2} , \cdots , \left(\begin{array}{c} g_1 \\ g_2 \\ g_3 \\ \vdots \\ g_n \end{array}\right)_{t=x} \bigg\} \]

So far, however, we have not described the way in which the effects of these gene drives are included in the mosquito population dynamics. This is done through the use of the various genotype-specific modifiers that appear in the equations above (such as \(\overline{\omega}\), \(\overline{\eta}\), \(\overline{\xi}\), \(\overline{\phi}\) and \(\overline{\nu}\)). To simulate migration within our framework we consider patches (or nodes) of fully-mixed populations in a network structure. This allows us to handle mosquito movement across spatially-distributed populations with a transitions matrix, which is calculated with the tensor outer product of the genotype population tensors and the transitions matrix of the network as follows:

\[ \overline{Am_{(t)}^{i}}= \sum{\overline{A_{m}^j} \otimes \overline{\overline{\tau m_{[t-1]}}}}, \qquad \overline{\overline{Af_{(t)}^{i}}}= \sum{\overline{\overline{A_{f}^j}} \otimes \overline{\overline{\tau f_{[t-1]}}}} \]

In these equations the new population of patch \(i\) is calculated by summing the migrating mosquitoes of all the \(j\) patches across the network defined by the transitions matrix \(\tau\), which stores the mosquito migration probabilities from patch to patch. It is worth noting that the migration probability matrices can be different for males and females, and that there is no inherent need for them to be static (the migration probabilities may vary over time to accommodate wind changes due to seasonality). MGDrivE allows all inheritance, migration, and population dynamics processes to be simulated stochastically; this accounts for the inherent probabilistic nature of the processes governing the interactions and life-cycles of organisms. In the next section, we will describe all the stochastic processes that can be activated in the program. It should be noted that all of these can be turned on and off independently from one another as required by the researcher.

Oviposition

Stochastic egg laying by female/male pairs is separated into two steps: calculating the number of eggs laid by the females and then distributing the laid eggs according to their genotypes. The number of eggs laid follows a Poisson distribution conditioned on the number of female/male pairs and the fertility of each female.

\[ Poisson( \lambda = numFemales*Fertility) \]

Multinomial sampling, conditioned on the number of offspring and the relative viability of each genotype, determines the genotypes of the offspring.
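The migration step is just a matrix redistribution; the following NumPy sketch (a hypothetical three-patch network with made-up counts, not MGDrivE's own implementation) shows the bookkeeping and that the total population is conserved when the rows of \(\tau\) sum to one:

    # Redistribute each patch's males with a patch-to-patch transition matrix tau.
    import numpy as np

    Am = np.array([[100., 10.],      # Am[patch, genotype]
                   [ 50.,  0.],
                   [  0.,  5.]])
    tau = np.array([[0.90, 0.05, 0.05],
                    [0.10, 0.80, 0.10],
                    [0.00, 0.20, 0.80]])   # tau[j, i]: probability of moving from patch j to i

    Am_next = tau.T @ Am                    # sum over source patches j for each destination i
    print(Am_next, Am_next.sum(), Am.sum()) # total mosquitoes are conserved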
\[ Multinomial \left(numOffspring, p_1, p_2,\dots, p_n \right)=\frac{numOffspring!}{n_1!\,n_2!\dots n_n!}p_1^{n_1}p_2^{n_2}\dots p_n^{n_n} \]

Sex Determination

Sex of the offspring is determined by multinomial sampling. This is conditioned on the number of eggs that live to hatching and a probability of being female, allowing the user to design systems that skew the sex ratio of the offspring through reproductive mechanisms.

\[ Multinomial(numHatchingEggs, p_{female}, 1-p_{female}) \]

Mating

Stochastic mating is determined by multinomial sampling conditioned on the number of males and their fitness. It is assumed that females mate only once in their life, so each female samples once from the available males, while the males are free to potentially mate with multiple females. The males' ability to mate is modulated with a fitness term, thereby allowing some genotypes to be less fit than others (as is often seen with lab releases).

\[ Multinomial(numFemales, p_1f_1, p_2f_2, \dots p_nf_n) \]

Other Stochastic Processes

All remaining stochastic processes (larval survival, hatching, pupating, surviving to adulthood) are determined by multinomial sampling conditioned on factors affecting the current life stage. These factors are determined empirically from mosquito population data.

Migration

Variance of stochastic movement (not used in the diffusion model of migration). It affects the concentration of probability in the Dirichlet simplex: small values lead to high variance and large values lead to low variance.
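A small NumPy sketch of the stochastic oviposition and sex-determination draws described above (again a sketch, not the MGDrivE R code; the counts, fertility and genotype proportions are invented):

    # Poisson egg counts, then multinomial genotype and sex draws.
    import numpy as np

    rng = np.random.default_rng(2)
    num_females, fertility = 500, 20.0
    offspring_probs = np.array([0.25, 0.50, 0.25])          # made-up genotype proportions

    num_eggs = rng.poisson(num_females * fertility)          # Poisson(lambda = females * fertility)
    eggs_by_genotype = rng.multinomial(num_eggs, offspring_probs)
    sex = rng.multinomial(eggs_by_genotype.sum(), [0.5, 0.5])  # [females, males]
    print(num_eggs, eggs_by_genotype, sex)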
I simply want to calculate the bulk modulus of water at 50C and increasing pressures. I think I am correctly calculating the new specific volume from the original conditions (25C and 1 atm) to 50C and higher pressures, and I am rightly getting a decrease in specific volume with increasing pressure at constant temperature (Column 7). Here is the spreadsheet: ${V}^{'}={V}_{o}e^{{\beta}(T-25)-\kappa\Delta P}$ where: ${V}^{'}$ is Column 7, ${V}_{o}$ is Column 1 (the specific volume of water at 1 atm and 25C), $T$ is in Celsius and $P$ is in atm. I used the above cross plot to graphically solve for the slope $(\frac{\partial v} {\partial P})_{T}$ and input it into Column 8. Then, to calculate the new compressibility at 50C ($\kappa$), Column 9: ${\kappa}=-\frac{1} {V}(\frac{\partial v} {\partial P})_{T}$ which gives me the new compressibility (Column 9). Then I just take the reciprocal and convert the units to GPa. Oops: the bulk modulus (Column 10) should be increasing with pressure at constant temperature, not decreasing. I know that dividing by an ever-decreasing specific volume as pressure increases gives me a larger compressibility (Column 9) and therefore a decreasing bulk modulus (Column 10). But everyone knows increasing pressure should have the opposite effect. Where did I go wrong?
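A quick numeric cross-check of the bookkeeping (a sketch only; the β and κ values below are placeholders, not measured water data): under the constant-κ exponential model above, $-\frac{1}{V}\left(\frac{\partial V}{\partial P}\right)_T$ equals κ at every pressure, so any pressure trend in the recovered bulk modulus must come from the graphical slope step rather than from the model itself.

```python
# Cross-check of the kappa / bulk-modulus bookkeeping with made-up constants.
import numpy as np

V0    = 1.0030e-3   # m^3/kg, specific volume at 25 C, 1 atm (approximate)
beta  = 4.5e-4      # 1/C, thermal expansivity (hypothetical constant)
kappa = 4.5e-5      # 1/atm, compressibility (hypothetical constant)
T     = 50.0        # C
P     = np.linspace(1.0, 1000.0, 50)                    # atm

V = V0 * np.exp(beta * (T - 25.0) - kappa * (P - 1.0))  # Column 7 analogue

dVdP      = np.gradient(V, P)                  # numerical slope (dV/dP)_T
kappa_rec = -dVdP / V                          # Column 9 analogue
K_GPa     = 1.0 / kappa_rec * 101325e-9        # 1/atm -> GPa

print(K_GPa.min(), K_GPa.max())  # essentially constant: a constant-kappa model
                                 # cannot produce a pressure-dependent K
```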
The argument for the first question goes as follows: Consider the Pauli-Lubanski vector $ W_{\mu} = \epsilon_{\mu\nu\rho\sigma}P^{\nu}M^{\rho\sigma}$. Where $P^{\mu}$ are the momenta and $M^{\mu\nu}$ are the Lorentz generators. (The norm of this vector is a Poincare group casimir but this fact will not be needed for the argument.) By symmetry considerations We have $W_{\mu} P^{\mu} = 0$. Now, in the case of a massless particle, a vector orthogonal to a light-like vector must be proportional to it (easy exercise). Thus$ W^{\mu} = h P^{\mu}$, ($ h = const.$). Now, the zero component of the Pauli-Lubanski vector is given by: $ W_{0} = \epsilon_{0\nu\rho\sigma}P^{\mu}M^{\mu\nu} = \epsilon_{abc}P^{a}M^{bc} = \mathbf{P}.\mathbf{J}$, (where the summation after the second equality is on the spatial indices only, and $\mathbf{J}$ are the rotation generators ). Therefore the proportionality constant$ h = \frac{W^{0}}{P^{0}}= \frac{\mathbf{P}.\mathbf{J}}{|\mathbf{P}|}$is the helicity. Now, on the quantum level, if we rotate by an angle of $2 \pi$ around the momentum axis, the wave function acquires a phase of:$exp(2 \pi i\frac{\mathbf{P}}{|\mathbf{P}|}.\mathbf{J}) = exp(2 \pi i h)$.This factor should be $\pm 1$ according to the particle statistics thus $h$ must be half integer. As for the second question, a very powerful method to construct the gluon amplitudes is by the twistor approach.Please see the following article by N.P. Nair for a clear exposition. Update: This update refers to the questions asked by user6818 in the comments: For simplicity I'll consider the case of a photon and not gluons. The strategy of the solution is based on the explicit construction of the angular momentum and spin of a free photon field (which depend on the polarization vectors) and showing that the above relations are satisfied for the photon field.The photon momentum and the angular momentum densities can be obtained via the Noether theorem from the photon Lagrangian. Alternatively, it is well known that the photon linear momentum is given by the Poynting vector (proportional to) $\vec{E}\times\vec{B}$, and it is not difficult to convince onself that the total angular momentum density is (proportional to) $\vec{x}\times \vec{E}\times\vec{B}$. Now, the total angular momentum can be decomposed into angular and spin angular momenta (please see K.T. Hecht: quantum mechanics (page 584 equation 16)) $\vec{J} = \int d^3x (\vec{x}\times \vec{E}\times\vec{B}) =\int d^3x (\vec{E}\times\vec{A} + \sum_{i=1}^3 E_j \vec{x} \times \vec{\nabla} A_j )$ The first term on the right hand side can be interpreted as the spin and the second as the orbital angular momentum as it is proportional to the position. Now, Neither the spin nor the orbital angular momentum densities are gauge invariant (only their sum is). But, one can argue that the total orbital angular momentum is zero because the position averages to zero, thus the total spin: $ \vec{S} =\int d^3x (\vec{E}\times\vec{A})$ is gauge invariant: Now, we can obseve that in canonical quantization: $[A_j, E_k] = i \delta_{jk}$, we get $[S_j, S_k] = 2i \epsilon_{jkl} S_l$. Which are the angular momentum commutation relations apart from the factor 2. Now, by substituting the plane wave solution: $A_k = \sum_{k,m=1,2} a_{km} \vec{\epsilon_m}(k) exp(i(\vec{k}.\vec{x}-|k|t)) +h.c.$ (The condition $\vec{\epsilon_m}(k).\vec{k} = 0$, is just a consequence of the vanishing of the sources). 
We obtain: $\vec{S} = \sum_{k,m=1,2}(-1)^{m+1} a^\dagger_{km}a_{km} \hat{k} = \sum_{k}(n_1-n_2)\hat{k}$ (where $n_1$, $n_2$ are the numbers of right and left circularly polarized photons). Thus for a single free photon the total spin, and hence the total angular momentum, is aligned along or opposite to the momentum, which is the same result stated in the first part of the answer. Secondly, the photon total spin operators exist and transform (up to a factor of two) as spin 1/2 angular momentum operators.
As mentioned in NotAstronaut's answer, objects smaller than 25 meters will typically burn up in the atmosphere. One can very easily see why this should be the case using Newton's impact depth formula. This is based on approximating the problem by assuming that the matter in the path of the object is being pushed at the same velocity as the object, so as soon as the object has swept out a path containing the same mass as its own mass, it will have lost all of its initial momentum. All its kinetic energy will then have dissipated there, so if this happens in the atmosphere it will have burned up before reaching the ground. This is, of course, a gross oversimplification, but it will yield correct order-of-magnitude estimates. We can then calculate the critical diameter as follows. The mass of the atmosphere per unit area equals the atmospheric pressure at sea level divided by the gravitational acceleration, so this is about $10^4\text{ kg/m}^2$. If an asteroid of diameter $D$ and density $\rho$ is to penetrate the atmosphere, its mass of $\frac{\pi}{6}\rho D^3$ should be larger than the mass of the atmosphere it will encounter on its way to the ground, which is $\frac{\pi}{4}D^2\times 10^4\text{ kg/m}^2 = \frac{5}{2}\pi\times 10^3\, D^2\text{ kg}$ (with $D$ in meters). Therefore: $$ D > \frac{1.5\times 10^4 \text{ kg/m}^2}{\rho}$$ If we take the density $\rho$ to be that of a typical rock, $3\times 10^3 \text{ kg}/\text{m}^3$, then we see that $D>5\text{ m}$, which is an order-of-magnitude estimate reasonably close to the correct answer.
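The same estimate is easy to tabulate for a few illustrative densities (the values below are round numbers, not measured data):

```python
# Rough numeric version of the impact-depth estimate above.
atm_column = 1.0e4          # kg/m^2, mass of atmosphere per unit area
for name, rho in [("ice", 0.9e3), ("rock", 3.0e3), ("iron", 7.9e3)]:
    # from (pi/6) rho D^3 > (pi/4) D^2 * atm_column  =>  D > 1.5 * atm_column / rho
    D_crit = 6.0 * atm_column / (4.0 * rho)
    print(f"{name:5s}: D > {D_crit:.1f} m")
```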
Find all real numbers $a_1, a_2, a_3, b_1, b_2, b_3$ such that for every $i\in \lbrace 1, 2, 3 \rbrace$ numbers $a_{i+1}, b_{i+1}$ are distinct roots of equation $x^2+a_ix+b_i=0$ (suppose $a_4=a_1$ and $b_4=b_1$). There are many ways to do it but I've really wanted to finish the following idea: From Vieta's formulas we get: \begin{align} \begin{cases} a_1+b_1=-a_3 \ \ \ \ \ \ \ \ (a) \\a_2+b_2=-a_1\ \ \ \ \ \ \ \ (b)\\a_3+b_3=-a_2\ \ \ \ \ \ \ \ (c)\\a_1b_1=b_3\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (d)\\a_2b_2=b_1\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (e)\\a_3b_3=b_2\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (f)\end{cases} \end{align} First we notice that each $b_i$ is nonzero. Indeed, suppose $b_1=0$. Then from (d) and (f) we deduce that $b_3=0$ and $b_2=0$, so from (a), (b), (c) we get $a_1=-a_3=-(-a_2)=-(-(-a_1))=-a_1$, hence $a_1=0$ which is impossible. Now, from (a), (b), (c), (d), (e), (f) we obtain: \begin{align} \begin{cases} a_1+b_1-a_1b_1=-a_3-b_3 \ \ \ \ \ \ \ \ \\a_2+b_2-a_2b_2=-a_1-b_1\ \ \ \ \ \ \ \ \\a_3+b_3-a_3b_3=-a_2-b_2\end{cases}, \end{align} so: \begin{align} \begin{cases} (b_1-1)(a_1-1)=1-a_2 \ \ \ \ \ \ \ \ \\(b_2-1)(a_2-1)=1-a_3\ \ \ \ \ \ \ \ \\(b_3-1)(a_3-1)=1-a_1\end{cases}. \end{align} Therefore: \begin{align*} (b_1-1)(b_2-1)(b_3-1)(a_1-1)(a_2-1)(a_3-1)=(1-a_1)(1-a_2)(1-a_3), \end{align*} which implies: \begin{align*} \bigl((a_1-1)(a_2-1)(a_3-1)\bigr)\bigl((b_1-1)(b_2-1)(b_3-1)+1\bigr)=0. \end{align*} I got stuck here. Is it possible to prove that in this case $b_i=0$ is the only solution to equation $(b_1-1)(b_2-1)(b_3-1)=-1$ or maybe get contradiction in some other way? If so, we can assume that $a_1=1$ and from here we can easily show that also $a_2=a_3=1$, so $b_1=b_2=b_3=-2$. Since $(b_1-1)(b_2-1)(b_3-1)>-1$ for every $b_i>0$ and $(b_1-1)(b_2-1)(b_3-1)<-1$ for every $b_i<0$, it suffices to prove that the signs of $b_1, b_2, b_3$ can't be different but I don't know how to do it. I also found out that $(b_1+1)^2+(b_2+1)^2+(b_3+1)^2=3$, so $b_i\in [-\sqrt{3}-1, \sqrt{3}-1]$ but I don't know if we can use it somehow.
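The conjectured answer at the end ($a_1=a_2=a_3=1$, $b_1=b_2=b_3=-2$) can at least be sanity-checked numerically; the short sketch below (an illustration, not part of the proof) verifies that $a_{i+1}$ and $b_{i+1}$ are indeed the distinct roots of $x^2+a_ix+b_i$ for that candidate.

```python
# Check that a_i = 1, b_i = -2 satisfies all three cyclic root conditions.
import numpy as np

a = [1.0, 1.0, 1.0]
b = [-2.0, -2.0, -2.0]
for i in range(3):
    roots = sorted(np.roots([1.0, a[i], b[i]]).real)   # roots of x^2 + a_i x + b_i
    expected = sorted([a[(i + 1) % 3], b[(i + 1) % 3]])
    assert np.allclose(roots, expected), (i, roots)
print("a_i = 1, b_i = -2 passes all three conditions")
```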
I am reading the classical article of A. Salomaa where he gives two axiom systems for regular sets and proves their consistency and completeness. As I have understood it, an axiomatic system in some logic (let us suppose first-order predicate logic) consists of axioms formulated in the language of the logic, i.e. well-formed formulas, together with primitive notions (constant, predicate or function symbols). And a (set-theoretical) model is an interpretation for this. For example, consider the theory of groups. The primitive notions are the group, multiplication, inversion and identity, mostly written as $(G, \cdot, ^{-1}, 1)$, and the axioms would be \begin{align*} & \forall x,y,z : (xy)z = x(yz) \\ & \forall x \exists y : xy = 1 \\ & \forall x : x1 = x \land 1x = x. \end{align*} The existence of certain groups shows its consistency, and as there are models in which, for example, the sentence $\forall x \forall y : xy = yx$ is true and others in which it is false, it is not complete. But essential here is that when talking about the theory we have just the axioms in mind, without reference to any actual realisation/model. Now, to come back to Salomaa's paper: in his system $F_1$ he lists $11$ axioms. It is easy to see that regular expressions (defined as terms over some alphabet) are a model for these axioms, but besides that there might be other models. When dealing with questions about this axiom system in general we cannot argue with one specific model, can we? To be more specific, in Lemma 4 of his paper he shows that every regular expression has an equational characterisation (i.e. a set of equations this expression fulfills), and this is essential for the completeness proof. The proof goes by induction over the construction of regular expressions, so it works just for this specific model. But in fact he must show that everything (not just regular expressions) obeying the axioms has such an equational characterisation, so he must argue more generally than using the specific model of regular expressions. Am I right? Or why does this work out... or am I confusing something here: in what sense do regular expressions go into the axiom system such that we can use this model in proving statements about the axiom system (I guess this is not the only model, or is it?).
Interpolation and optimal hitting for complete minimal surfaces with finite total curvature Abstract We prove that, given a compact Riemann surface \(\Sigma \) and disjoint finite sets \(\varnothing \ne E\subset \Sigma \) and \(\Lambda \subset \Sigma \), every map \(\Lambda \rightarrow \mathbb {R}^3\) extends to a complete conformal minimal immersion \(\Sigma \setminus E\rightarrow \mathbb {R}^3\) with finite total curvature. This result opens the door to study optimal hitting problems in the framework of complete minimal surfaces in \(\mathbb {R}^3\) with finite total curvature. To this respect we provide, for each integer \(r\ge 1\), a set \(A\subset \mathbb {R}^3\) consisting of \(12r+3\) points in an affine plane such that if A is contained in a complete nonflat orientable immersed minimal surface \(X:M\rightarrow \mathbb {R}^3\), then the absolute value of the total curvature of X is greater than \(4\pi r\). In order to prove this result we obtain an upper bound for the number of intersections of a complete immersed minimal surface of finite total curvature in \(\mathbb {R}^3\) with a straight line not contained in it, in terms of the total curvature and the Euler characteristic of the surface. Mathematics Subject Classification 53A10 52C42 30D30 32E30 Acknowledgements The authors were partially supported by the State Research Agency (SRA) and European Regional Development Fund (ERDF) via the Grants Nos. MTM2014-52368-P and MTM2017-89677-P, MICINN, Spain. They wish to thank an anonymous referee for valuable suggestions which led to an improvement of the exposition.
I have to prove whether this function is differentiable. $$f(x,y)= \begin{cases} \frac{\cos x-\cos y}{x-y} & x \neq y \\ -\sin x & x=y \end{cases}$$ If $x \neq y$ it is continuous, but I want to see if it is continuous at $x=y$ too. With $g=\cos$ I can rewrite $f$ as $$ f(x,y)= \begin{cases} \frac{g(x)-g(y)}{x-y} & x \neq y \\ g'(x) & x=y \end{cases}$$ and see that $\lim_{(x,y) \to (x_0,x_0)} f(x,y)=g'(x_0)$. Thus, it is continuous. Also, the partial derivatives exist: $$f_x(x,y)=\begin{cases} \frac{-\sin x\,(x-y)-\cos x+\cos y}{(x-y)^2} & x \neq y \\ -\cos(x) & x=y \end{cases}$$ $$f_y(x,y)=\begin{cases} \frac{\sin y\,(x-y)+\cos x-\cos y}{(x-y)^2} & x \neq y \\ 0 & x=y \end{cases}$$ If I proved that they are continuous too, then by the theorem of the total differential the function would be differentiable. Still, I'm not sure this is the right way of reasoning.
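As a small check of the continuity step (not of differentiability), SymPy confirms the diagonal limit: approaching $(x,x)$ along $y=x+h$, the divided difference tends to $-\sin x$, matching the value prescribed on $x=y$.

```python
# SymPy check that f is continuous on the diagonal.
import sympy as sp

x, h = sp.symbols('x h', real=True)
divided_difference = (sp.cos(x) - sp.cos(x + h)) / (x - (x + h))
print(sp.limit(divided_difference, h, 0))   # -sin(x)
```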
This is the first entry in what will become an ongoing series on regression analysis and modeling. In this tutorial, we will start with the general definition or topology of a regression model, and then use NumXL program to construct a preliminary model. Next, we will closely examine the different output elements in an attempt to develop a solid understanding of regression, which will pave the way to a more advanced treatment in future issues. In this tutorial, we will use a sample data set gathered from 20 different sales persons. The regression model attempts to explain and predict a sales person’s weekly sales (dependent variable) using two explanatory variables: Intelligence (IQ) and extroversion. Data Preparation First, let’s organize our input data. Although not necessary, it is customary to place all independent variables (X’s) on the left, where each column represents a single variable. In the right-most column, we place the response or the dependent variable values. In this example, we have 20 observations and two independent (explanatory) variables. The amount of weekly sales is the response or dependent variable. Process Now we are ready to conduct our regression analysis. First, select an empty cell in your worksheet where you wish the output to be generated, then locate and click on the regression icon in the NumXL tab (or toolbar). Now the Regression Wizard will appear. Select the cells range for the response/dependent variable values (i.e. weekly sales). Select the cells range for the explanatory (independent) variables values. Notes: The cells range includes (optional) the heading (Label) cell, which would be used in the output tables where it references those variables. The explanatory variables (i.e. X) are already grouped by columns (each column represents a variable), so we don’t need to change that. Leave the “Variable Mask” field blank for now. We will revisit this field in later entries. By default, the output cells range is set to the current selected cell in your worksheet. Finally, once we select the X and Y cells range, the “options,” “Forecast” and “Missing Values” tabs will become available (enabled). Next, select the “Options” tab. Initially, the tab is set to the following values: The regression intercept/constant is left blank. This indicates that the regression intercept will be estimated by the regression. To set the regression to a fixed value (e.g. zero (0)), enter it there. The significance level (aka. ) is set to 5%. In the output section, the most common regression analysis is selected. For auto-modeling, let’s leave it unchecked. We will discuss this functionality in a later issue. Now, click on the “Missing Values” tab. In this tab, you can select the approach to handle missing values in the data set (X and Y). By default, any missing value found in X or in Y in any observation would exclude the observation from the analysis. This treatment is a good approach for our analysis, so let’s leave it unchanged. Now, click “Ok” to generate the output tables. Analysis Let’s now examine the different output tables more closely. 1. Regression Statistics In this table, a number of summary statistics for the goodness-of-fit of the regression model, given the sample, is displayed. The coefficient of determination (R square) describes the ratio of variation in Y described by the regression. The adjusted R-square is an alteration of R square to take into account the number of explanatory variables. The standard error ($\sigma$) is the regression error. 
In other words, the error in the forecast has a standard deviation around \$332. Log-likelihood function (LLF), Akaike information criterion (AIC), and Schwarz/Bayesian information criterion (SBIC) are different probabilistic measures for the goodness of fit. Finally, “Observations” is the number of non-missing observations used in the analysis. 2. ANOVA Before we can seriously consider the regression model, we must answer the following question: “Is the regression model statistically significant or a statistical data anomaly?” The regression model we have hypothesized is:$$Y_i=\hat Y_i+e_i =\alpha + \beta_1\times X_{1,i}+\beta_2\times X_{2,i}+e_i$$ $$e_i\sim \textrm{i.i.d}\sim N(0,\sigma^2)$$ Where $\hat Y_i$ is the estimated value for the i-th observation, $e_i$ is the error term for the i-th observation, $e_i$ is assumed to be independent and identically distributed (Gaussian), $\sigma^2$ is the regression variance (standard error squared), $\beta_1 ,\beta_2$ are the regression coefficients, and $\alpha$ is the intercept or the constant of the regression. Alternatively, the question can be stated as the following hypothesis test: $$H_o:\beta_1=\beta_2=0$$ $$H_1:\exists\beta_k\neq 0$$ $$1\leqslant k\leqslant 2$$ The analysis of variance (ANOVA) table answers this question. In the first row of the table (i.e. “Regression”), we compute the test score (F-Stat) and P-Value, then compare them against the significance level ($\alpha$). In our case, the regression model is statistically valid, and it does explain some of the variation in values of the dependent variable (weekly sales). The remaining calculations in the table are simply to help us get to this point. To be complete, we describe their computation, but you can skip ahead to the next table. $\textrm{df}$ is the degrees of freedom. (For regression, it is the number of explanatory variables ($p$). For the total, it is the number of non-missing observations minus one ($N-1$), and for residuals, it is the difference between the two ($N-p-1$).) Sum of squares (SS): $$SSR=\sum_{i=1}^N(\hat Y_i - \bar Y)^2$$ $$SST=\sum_{i=1}^N(Y_i - \bar Y)^2$$ $$SSE=\sum_{i=1}^N(Y_i - \hat Y_i)^2$$ Mean Square (MS): $$MSR=\frac{SSR}{p}$$ $$MSE=\frac{SSE}{N-p-1}$$ Test statistics: $$F=\frac{MSR}{MSE}\sim F_{p,N-p-1}(.)$$ 3. Residuals Diagnosis Table Once we confirm that the regression model explains some of the variation in the values of the response variable (weekly sales), we can examine the residuals to make sure that the underlying model’s assumptions are met.$$Y_i=\hat Y_i+e_i =\alpha + \beta_1\times X_{1,i}+\beta_2\times X_{2,i}+e_i$$ $$e_i\sim \textrm{i.i.d}\sim N(0,\sigma^2)$$ Using the standardized residuals ($\frac{e_i}{\sigma_i}$), we perform a series of statistical tests on the mean, variance, skew, excess kurtosis and, finally, the normality assumption. In this example, the standardized residuals pass the tests with 95% confidence. Note: the standardized (aka “studentized”) residuals are computed using the prediction error ($S_{\textrm{pred}}$) for each observation. $S_{\textrm{pred}}$ takes into account the errors in the values of the regression coefficients, in addition to the general regression error (RMSE or $\sigma$). 4. Regression Coefficients Table Once we establish that the regression model is significant, we can look closer at the regression coefficients. Each coefficient (including the intercept) is shown on a separate row, and we compute the following statistics: Value (i.e. $\alpha,\beta_1,\cdots$) Standard error in the coefficient value.
Test score (T-stat) for the following hypothesis: $$H_o:\beta_k=0$$ $$H_1:\beta_k \neq 0$$ The P-Values of the test statistics (using Student’s t-distribution). Upper and lower limits of the confidence interval for the coefficient value. A reject/accept decision for the significance of the coefficient value. In our example, only the “extroversion” variable is found significant, while the intercept and “Intelligence” are not. Conclusion In this example, we found that the regression model is statistically significant in explaining the variation in the values of the weekly sales variable and that it satisfies the model’s assumptions, but the value of one or more regression coefficients is not significantly different from zero. What do we do now? There may be a number of reasons why this is the case, including possible multicollinearity between the variables, or simply that one variable should not be included in the model. As the number of explanatory variables increases, answering such questions gets more involved and requires further analysis. We will cover this particular issue in a separate entry of our series.
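NumXL runs inside Excel, so as a cross-reference only, here is a minimal Python sketch of the same kind of output (R², adjusted R², ANOVA F test, per-coefficient t-statistics) using statsmodels; the sales, IQ and extroversion numbers are fabricated for illustration and are not the tutorial's data set.

```python
# Analogous OLS computation in Python; the data below are made up.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 20
iq    = rng.normal(100, 15, n)
extro = rng.normal(50, 10, n)
sales = 200 + 30 * extro + rng.normal(0, 300, n)   # weekly sales, fabricated

X = sm.add_constant(np.column_stack([iq, extro]))  # intercept + two regressors
model = sm.OLS(sales, X).fit()

print(model.rsquared, model.rsquared_adj)  # R^2 and adjusted R^2
print(model.fvalue, model.f_pvalue)        # ANOVA F statistic and its p-value
print(model.params)                        # intercept, beta_IQ, beta_extroversion
print(model.tvalues, model.pvalues)        # per-coefficient t-stats and p-values
```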
You can at least sketch an answer to the "multiply $t,t^2,\dots,t^{m-1}$ where $m$ is the multiplicity" question by considering a family of IVPs where two roots are approaching one another. Consider the IVPs $$y''-(1+a)y'+ay=0,\qquad y(0)=0,\ y'(0)=1$$ where $a$ is approaching $1$. Let us solve this whenever $a$ is not $1$. The roots of the characteristic polynomial are $1$ and $a$, so the general solution is $c_1 e^{t} + c_2 e^{at}$. To solve the IVP we have to solve the system of equations $$\begin{bmatrix} 1 & 1 \\ 1 & a \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}.$$ The solution to this system is $\begin{bmatrix} \frac{-1}{a-1} \\ \frac{1}{a-1} \end{bmatrix}$. Thus the solution to this IVP is $\frac{e^{at} - e^{ t}}{a-1}$. Now compute $\lim_{a \to 1}$ of that. You'll find the familiar $te^t$. Regular perturbation theory then tells us that $te^t$ solves the IVP $y''-2y'+y=0$, $y(0)=0$, $y'(0)=1$, as you can of course check directly. Of course you would not want to do this in the general situation. But this hints at trying it. Once you see that, to see why it actually works in the general situation, you can note that when you apply a linear differential operator with constant coefficients to a polynomial of degree $d$ times an exponential, you get a polynomial times that same exponential. If the exponential's rate is a root of the characteristic polynomial of multiplicity $m$, the degree of the resulting polynomial turns out to be $d-m$. So if you go up to polynomials of degree $d+m$ you can make the resulting polynomial be an arbitrary polynomial of degree $d$. Another perspective is offered by linear algebra. Here the factors of $t$ arise when you take the matrix exponential of a Jordan block. For example, one can explicitly calculate $e^{At}$ where $A=\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$ using the series expansion. You get $\begin{bmatrix} e^t & t e^t \\ 0 & e^t \end{bmatrix}$. Now the matrix associated to $y''-2y'+y=0$ is $\begin{bmatrix} 0 & 1 \\ -1 & 2 \end{bmatrix}$, which has the Jordan form above. You can do the same for higher order Jordan blocks. I'm not so sure how to give a sketch of why variation of parameters should work, though. One thing that might help a little bit is to realize that the only ad hoc part of variation of parameters is the idea of multiplying the homogeneous solutions by variable coefficients. Everything else follows by just following through the algebra/calculus. Typically in textbooks this "everything else" is taught as part of the method itself because this algebra is actually a little messy. That's great for ease of just making the method work, not so great for understanding where it came from.
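Both observations are easy to reproduce symbolically; here is a small SymPy sketch (it adds nothing new, it just re-derives the two displayed results):

```python
# SymPy re-derivation of the a -> 1 limit and the Jordan-block exponential.
import sympy as sp

a, t = sp.symbols('a t')
print(sp.limit((sp.exp(a * t) - sp.exp(t)) / (a - 1), a, 1))   # t*exp(t)

J = sp.Matrix([[1, 1], [0, 1]])
print((J * t).exp())   # Matrix([[exp(t), t*exp(t)], [0, exp(t)]])
```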
Given $a$, $b$ and $c$ are positive real numbers. Prove that:$$\sum \limits_{cyc}\frac {a}{(b+c)^2} \geq \frac {9}{4(a+b+c)}$$ Additional info: We can't use induction. We should mostly use Cauchy inequality. Other inequalities can be used rarely. Things I have done so far: The inequality look is similar to Nesbitt's inequality. We could re-write it as: $$\sum \limits_{cyc}\frac {a}{(b+c)^2}(2(a+b+c)) \geq \frac{9}{2}$$ Re-write it again:$$\sum \limits_{cyc}\frac {a}{(b+c)^2}\sum \limits_{cyc}(b+c) \geq \frac{9}{2}$$ Cauchy appears: $$\sum \limits_{cyc}\frac {a}{(b+c)^2}\sum \limits_{cyc}(b+c) \geq \left(\sum \limits_{cyc}\sqrt\frac{a}{b+c}\right)^2$$ So, if I prove $\left(\sum \limits_{cyc}\sqrt\frac{a}{b+c}\right)^2 \geq \frac {9}{2}$ then problem is solved. Re-write in semi expanded form:$$2\left(\sum \limits_{cyc}\frac{a}{b+c}+2\sum \limits_{cyc}\sqrt\frac{ab}{(b+c)(c+a)}\right) \geq 9$$ We know that $\sum \limits_{cyc}\frac{a}{b+c} \geq \frac {3}{2}$.So$$4\sum \limits_{cyc}\sqrt\frac{ab}{(b+c)(c+a)} \geq 6$$ So the problem simplifies to proving this $$\sum \limits_{cyc}\sqrt\frac{ab}{(b+c)(c+a)} \geq \frac{3}{2}$$ And I'm stuck here.
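Not a proof, but a cheap numerical sanity check of the target inequality over random positive triples (purely to build confidence before hunting for the Cauchy argument):

```python
# Random numeric check of sum a/(b+c)^2 >= 9/(4(a+b+c)) for positive a, b, c.
import numpy as np

rng = np.random.default_rng(0)
for _ in range(50000):
    a, b, c = rng.uniform(1e-3, 10.0, size=3)
    lhs = a / (b + c)**2 + b / (c + a)**2 + c / (a + b)**2
    rhs = 9.0 / (4.0 * (a + b + c))
    assert lhs >= rhs - 1e-12, (a, b, c)
print("no counterexample in 50000 random trials; equality holds at a = b = c")
```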
When a symmetry is anomalous, the path integral $Z=\int\mathcal{D}\phi e^{iS[\phi]}$ is not invariant under that group of symmetry transformations $G$. This is because though the classical action $S[\phi]$ is invariant the measure may not be invariant. Since 1PI effective action $\Gamma[\phi_{c}]$ takes quantum corrections into account, I expect that it is not invariant under the symmetry. Is there a way to see/prove whether $\Gamma[\phi_{c}]$ is invariant under the anomalous symmetry? That requires one to know how $\phi_c$ changes under the symmetry. The effective action $\Gamma[\phi_c]$ is the Legendre transform of the generator of connected correlation functions $W[J]$ which for most QFTs is defined as $$W[J]=-i\log Z[J]$$ $Z[J]$ is the generating functional/partition function of the theory. So you can write $$\Gamma[\phi_c]=W[J]-\int d^4x \ \phi_c(x) J(x)$$ where $\phi_c=\langle \phi \rangle_{J}=\frac{\delta W}{\delta J}$ is the vacuum expectation value in the presence of the external source $J$. Ok so now, let's derive the Slavnov-Tylor identities in the case of an anomalous local gauge symmetry or a global symmetry: $$\phi'(x)\rightarrow \phi(x)+\delta \phi(x), \qquad \delta \phi(x) = \epsilon(x) F(x,\phi(x))$$ such that the classical action is left invariant $$S[\phi+\epsilon F]=S[\phi]$$ however if the symmetry is anomalous the functional measure will change: $$\mathcal{D}\phi\rightarrow \mathcal{D}(\phi+\epsilon F)=\mathcal{D}\phi\ e^{i\int d^4x\ \epsilon(x) \mathcal{A}(x)}$$ where $\mathcal{A}$ is the anomaly function in the parametrization we have given. Therefore the connected generating functional $W$ will transform as \begin{align} e^{i W[J]}&=\int \mathcal{D}\phi \exp\left[i S[\phi]+i\int d^4x\ \phi(x)J(x)\right]\\ &=\int \mathcal{D}\phi' \exp\left[i S[\phi']+i\int d^4x\ \phi'(x)J(x)\right]\\ &=\int \mathcal{D}\phi \exp\left[i S[\phi]+i\int d^4x \phi(x)J(x)+i\int d^4x \ \epsilon(x) F(x)J(x)+i\int d^4x\ \epsilon(x) \mathcal{A}(x)\right] \end{align} So, expanding to $O(\epsilon)$ we obtain the Ward identity $$\int d^4x \left(\mathcal{A}(x)+J(x)\langle F(x,\phi)\rangle _J\right)=0$$ now we can perform the Legendre tranformation and get the Slavnov-Tylor identity which is for the effective action: $$\int d^4x \left(\mathcal{A}(x)-\langle F(x,\phi)\rangle _{J_{\phi_c}}\frac{\delta \Gamma[\phi_c]}{\delta \phi_c(x)}\right)=0$$ the reason for this formula is that $J=-\frac{\delta \Gamma[\phi_c]}{\delta \phi_c(x)}$ from the Legendre tranform, while ${J_{\phi_c}}$ means that $J$ is fixed by solving the equation $\phi_c=\langle \phi \rangle_{J}=\frac{\delta W}{\delta J}$ and so it correspond to a fixed value of $\phi_c$. So, the symmetry under which the effective action transforms is $$\phi_c\rightarrow \phi_c+\epsilon\langle F(x,\phi(x))\rangle_{J_{\phi_c}}$$ Up until now we have done all in general, however if we choose a linear transformation to be anomalous, i.e. $F(x,\phi)=f(x)\phi(x)$ depends linearly on the fields (which are most symmetry transformations), we obtain $$\int d^4x \left(\mathcal{A}(x)-F(x,\phi_c)\frac{\delta \Gamma[\phi_c]}{\delta \phi_c(x)}\right)=0$$ where the expectation value is replaced simply by $F$ calculated in $\phi_c$, since $\langle f(x)\phi(x)\rangle=f(x)\phi_c$. 
Therefore, remembering that $\epsilon F\equiv \delta \phi$, for a linear symmetry we can write the variation of the effective action as $$\delta_\epsilon \Gamma[\phi_c]\equiv\int d^4x\ \delta\phi_c(x)\frac{\delta \Gamma[\phi_c]}{\delta \phi_c(x)}=\int d^4x\ \epsilon(x)\mathcal{A}(x)$$ so that if the symmetry were not anomalous the effective action would be invariant. Therefore, for a general non-linear symmetry transformation the relation between the field transformation and the effective action is complicated and has no simple classical analogue, since one has to take the expectation value of the transformation; on the other hand, if the symmetry is linear then the effective action transforms as the classical action with $\phi_c$ instead of $\phi$. In both cases the presence of the anomaly makes the variation of the effective action nonzero, as we have shown. EDIT: The derivation is perfectly valid for global symmetries too, just pull $\epsilon$ out of the integrals and ignore its $x$ dependence. In the case of gauge symmetries it is not that anomalies have to be removed, it is that for a gauge theory to be consistent the anomalies have to cancel. Suppose you have a gauge theory with massless fermions (only massless fermions contribute to the anomaly) $$\mathcal{L}=-\frac{1}{4} F^{\mu\nu}F_{\mu\nu}+\bar{\psi}\left(i\gamma_\mu D^{\mu}\right)\psi$$ where $F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu+i g \left[A_\mu,A_\nu\right]$ and $D_\mu=\partial_\mu+i g A_\mu$. The fermionic effective action can be calculated exactly $$S_{eff} = -\frac{1}{4} \int d^4x\ F^{\mu\nu}F_{\mu\nu}+tr \log\left(i\gamma_\mu D^\mu[A]\right) = -\frac{1}{4} \int d^4x\ F^{\mu\nu}F_{\mu\nu}+S_F[A]$$ Now, if $S_F[A]$ is gauge invariant you can quantize the theory as usual with ghosts and all that and you get a unitary, renormalizable theory. However, if $S_F[A]$ is not gauge invariant, the theory becomes inconsistent. In the presence of a gauge anomaly that's precisely the case. Let's write the effective action before the path integration over the matter fields is done, and let's do a gauge transformation of the fields, with $A'= A+\delta A$ being a gauge transformation of the gauge field \begin{align} e^{i S_F[A']}=&\int \mathcal{D}\psi\mathcal{D}\bar{\psi}\ e^{i\int d^4x\ \bar{\psi}\left(i\gamma_\mu D^{\mu}[A']\right)\psi} = \int \mathcal{D}\psi'\mathcal{D}\bar{\psi}'\ e^{i\int d^4x\ \bar{\psi}'\left(i\gamma_\mu D^{\mu}[A']\right)\psi'} \\ &=\int \mathcal{D}\psi\mathcal{D}\bar{\psi}\ e^{i\int d^4x\ \bar{\psi}\left(i\gamma_\mu D^{\mu}[A]\right)\psi} e^{i\int d^4x \ \epsilon(x)\mathcal{A}(x)} = e^{i\int d^4x \ \epsilon(x)\mathcal{A}(x)} e^{i S_F[A]} \end{align} where we have followed the same steps as before, and the gauge anomaly pops out of the transformation of the measure. Therefore the variation of the effective action under a gauge transformation is $$\delta_\epsilon S_F[A] \equiv S_F[A+\delta A]-S_F[A]=\int d^4 x\ \epsilon(x) \mathcal{A}(x)$$ and so you see that the anomaly breaks the gauge invariance of the matter fields' effective action. This in turn breaks unitarity, etc. OK, so now that we have a more precise definition of what inconsistent means, we can see that the anomaly just can't be removed; it has to vanish. This is a property only a few gauge groups have, including the Standard Model's $SU(3)\times SU(2)\times U(1)$.
The reason for this is in the explicit form of gauge anomalies (I will not repeat the derivation here since it can be easily found in textbooks and is quite long) : $$\mathcal{A}_a(x)=-\frac{1}{32 \pi^2}D_{abc}\epsilon^{\mu\nu\rho\sigma}F^b_{\mu\nu}F^c_{\rho\sigma}$$ where $a,b,c$ are indices belonging to the gauge group. The anomaly cancels if the group theory factors $D_{abc} = 0$.
Noether's theorem states that, for every continuous symmetry of an action, there exists a conserved quantity, e.g. energy conservation for time invariance, charge conservation for $U(1)$. Is there any similar statement for discrete symmetries? For continuous global symmetries, Noether's theorem gives you a locally conserved charge density (and an associated current), whose integral over all of space is conserved (i.e. time independent). For global discrete symmetries, you have to distinguish between the cases where the conserved charge is continuous or discrete. For infinite symmetries like lattice translations the conserved quantity is continuous, albeit a periodic one. So in such a case momentum is conserved modulo vectors in the reciprocal lattice. The conservation is local just as in the case of continuous symmetries. In the case of a finite group of symmetries the conserved quantity is itself discrete. You then don't have local conservation laws because the conserved quantity cannot vary continuously in space. Nevertheless, for such symmetries you still have a conserved charge which gives constraints (selection rules) on allowed processes. For example, for parity invariant theories you can give each state of a particle a "parity charge" which is simply a sign, and the total charge has to be conserved for any process, otherwise the amplitude for it is zero. Put into one sentence, Noether's first theorem states that a continuous, global, off-shell symmetry of an action $S$ implies a local on-shell conservation law. By the words on-shell and off-shell is meant whether the Euler-Lagrange equations of motion are satisfied or not. Now the question asks whether continuous can be replaced by discrete. It should immediately be stressed that Noether's theorem is a machine that for each input in the form of an appropriate symmetry produces an output in the form of a conservation law. To claim that a Noether theorem is behind, it is not enough to just list a couple of pairs (symmetry, conservation law). Now, where could a discrete version of Noether's theorem live? A good bet is in a discrete lattice world, if one uses finite differences instead of differentiation. Let us investigate the situation. Our intuitive idea is that finite symmetries, e.g., time reversal symmetry, etc., cannot be used in a Noether theorem in a lattice world because they don't work in a continuous world. Instead we pin our hopes on discrete infinite symmetries that become continuous symmetries when the lattice spacing goes to zero. Imagine for simplicity a 1D point particle that can only be at discrete positions $q_t\in\mathbb{Z}a$ on a 1D lattice $\mathbb{Z}a$ with lattice spacing $a$, and that time $t\in\mathbb{Z}$ is discrete as well. (This was, e.g., studied in J.C. Baez and J.M. Gilliam, Lett. Math. Phys. 31 (1994) 205; hat tip: Edward.) The velocity is the finite difference $$v_{t+\frac{1}{2}}:=q_{t+1}-q_t\in\mathbb{Z}a,$$ and is discrete as well. The action $S$ is $$S[q]=\sum_t L_t$$ with Lagrangian $L_t$ of the form $$L_t=L_t(q_t,v_{t+\frac{1}{2}}).$$ Define momentum $p_{t+\frac{1}{2}}$ as $$ p_{t+\frac{1}{2}} := \frac{\partial L_t}{\partial v_{t+\frac{1}{2}}}. $$ Naively, the action $S$ should be extremized wrt. neighboring virtual discrete paths $q:\mathbb{Z} \to\mathbb{Z}a$ to find the equation of motion.
However, it does not seem feasible to extract a discrete Euler-Lagrange equation in this way, basically because it is not enough to Taylor expand to the first order in the variation $\Delta q$ when the variation $\Delta q\in\mathbb{Z}a$ is not infinitesimal. At this point, we throw our hands in the air, and declare that the virtual path $q+\Delta q$ (as opposed to the stationary path $q$) does not have to lie in the lattice, but that it is free to take continuous values in $\mathbb{R}$. We can now perform an infinitesimal variation without worrying about higher order contributions, $$0 =\delta S := S[q+\delta q] - S[q] = \sum_t \left[\frac{\partial L_t}{\partial q_t} \delta q_t + p_{t+\frac{1}{2}}\delta v_{t+\frac{1}{2}} \right] $$ $$ =\sum_t \left[\frac{\partial L_t}{\partial q_t} \delta q_{t} + p_{t+\frac{1}{2}}(\delta q_{t+1}- \delta q_t)\right] $$ $$=\sum_t \left[\frac{\partial L_t}{\partial q_t} - p_{t+\frac{1}{2}} + p_{t-\frac{1}{2}}\right]\delta q_t + \sum_t \left[p_{t+\frac{1}{2}}\delta q_{t+1}-p_{t-\frac{1}{2}}\delta q_t \right].$$ Note that the last sum is telescopic. This implies (with suitable boundary conditions) the discrete Euler-Lagrange equation $$\frac{\partial L_t}{\partial q_t} = p_{t+\frac{1}{2}}-p_{t-\frac{1}{2}}.$$ This is the evolution equation. At this point it is not clear whether a solution for $q:\mathbb{Z}\to\mathbb{R}$ will remain on the lattice $\mathbb{Z}a$ if we specify two initial values on the lattice. We shall from now on restrict our considerations to such systems for consistency. As an example, one may imagine that $q_t$ is a cyclic variable, i.e., that $L_t$ does not depend on $q_t$. We therefore have a discrete global translation symmetry $\Delta q_t=a$. The Noether current is the momentum $p_{t+\frac{1}{2}}$, and the Noether conservation law is that momentum $p_{t+\frac{1}{2}}$ is conserved. This is certainly a nice observation. But this does not necessarily mean that a Noether Theorem is behind. Imagine that the enemy has given us a global vertical symmetry $\Delta q_t = Y(q_t)\in\mathbb{Z}a$, where $Y$ is an arbitrary function. (The words vertical and horizontal refer to translation in the $q$ direction and the $t$ direction, respectively. We will for simplicity not discuss symmetries with horizontal components.) The obvious candidate for the bare Noether current is $$j_t = p_{t-\frac{1}{2}}Y(q_t).$$ But it is unlikely that we would be able to prove that $j_t$ is conserved merely from the symmetry $0=S[q+\Delta q] - S[q]$, which would now unavoidably involve higher order contributions. So while we stop short of declaring a no-go theorem, it certainly does not look promising. Perhaps, we would be more successful if we only discretize time, and leave the coordinate space continuous? I might return with an update about this in the future. An example from the continuous world that may be good to keep in mind: Consider a simple gravity pendulum with Lagrangian $$L(\varphi,\dot{\varphi}) = \frac{m}{2}\ell^2 \dot{\varphi}^2 + mg\ell\cos(\varphi).$$ It has a global discrete periodic symmetry $\varphi\to\varphi+2\pi$, but the (angular) momentum $p_{\varphi}:=\frac{\partial L}{\partial\dot{\varphi}}= m\ell^2\dot{\varphi}$ is not conserved if $g\neq 0$. You mentioned crystal symmetries. Crystals have a discrete translation invariance: It is not invariant under an infinitesimal translation, but invariant under translation by a lattice vector. The result of this is conservation of momentum up to a reciprocal lattice vector. 
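To make the cyclic-variable example above concrete, here is a tiny numeric illustration (arbitrary initial data, not taken from the cited Baez-Gilliam paper): when $L_t$ has no explicit $q_t$ dependence, the discrete Euler-Lagrange equation forces $p_{t+\frac{1}{2}}$ to be the same at every step.

```python
# Discrete Euler-Lagrange evolution for a cyclic lattice Lagrangian
# L_t = (1/2) v_{t+1/2}^2, so dL/dq_t = 0 and the discrete EL equation reads
# p_{t+1/2} = p_{t-1/2}, i.e. q_{t+1} = 2 q_t - q_{t-1}.
a = 1.0                       # lattice spacing (arbitrary units)
q = [0.0, 3.0 * a]            # two initial lattice positions

for _ in range(10):
    q.append(2.0 * q[-1] - q[-2])     # discrete EL update

momenta = [q[i + 1] - q[i] for i in range(len(q) - 1)]   # p_{t+1/2} = v_{t+1/2}
print(momenta)   # all equal -> the Noether charge (momentum) is conserved
```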
There is an additional result: Suppose the Hamiltonian itself is time independent, and suppose the symmetry is related to an operator $\hat S$. An example would be the parity operator $\hat P|x\rangle = |-x\rangle$. If this operator is a symmetry, then $[H,P] = 0$. But since the commutator of an operator with the Hamiltonian also gives you the derivative, you have $\dot P = 0$. Actually there are analogies or generalisations of results which reduce to Noether's theorems under usual cases and which do hold for discrete (and not necesarily discretised) symmetries (including CPT-like symmetries) AbstractWe introduce a method to construct conservation laws for a large class of linear partial differential equations. In contrast to the classical result of Noether, the conserved currents are generated by any symmetry of the operator, including those of the non-Lie type. An explicit example is made of the Dirac equation were we use our construction to find a class of conservation laws associated with a 64 dimensional Lie algebra of discrete symmetries that includes CPT. The way followed is a succesive relaxation of the conditions of Noether's theorem on continuous (Lie) symmetries, which generalise the result in other cases. For example (from above), emphasis, additions mine: The connection between symmetry and conservation laws has been inherent in all of mathematical physics since Emmy Noether published, in 1918, her hugely influential work linking the two. ..[M]any have put forward approaches to study conservation laws, through a variety of different means. In each case, a conservation law is defined as follows. Definition 1.Let $\Delta[u] = 0$ be a system of equations depending on the independent variables $x = (x_1, \dots , x_n)$, the dependent variables $u = (u_1, \dots , u_m)$ and derivatives thereof. Then a conservation law for $\Delta$ is defined by some $P = P[u]$ such that: $${\operatorname{Div} P \; \Big|}_{\Delta=0} = 0 \tag{1.1}$$ where $[u]$ denotes the coordinates on the $N$-th jet of $u$, with $N$ arbitrary. Noether’s [original] theorem is applicable in the [special] case where $\Delta[u] = 0$ arises as the Euler-Lagrange equation to an associated variational problem. It is well known that a PDE has a variational formulation if and only if it has self-adjoint Frechet derivative. That is to say: if the system of equations $\Delta[u] = 0$ is such that $D_{\Delta} = {D_{\Delta}}^*$ then the following result is applicable. Theorem (Noether).For a non-degenerate variational problem with $L[u] = \int_{\Omega} \mathfrak{L} dx$, the correspondence between nontrivial equivalence classes of variational symmetries of $L[u]$ and nontrivial equivalence classes of conservation laws is one-to-one. [..]Given that [the general set of symmetries] is far larger than those considered in the classical work of Noether, there is potentially an even stronger correspondence between symmetry and conservation laws for PDEs[..] Definition 2.We say the operator $\Gamma$ is a symmetry of the linear PDE $\Delta[u] \equiv L[u] = 0$ if there exists an operator $\alpha_{\Gamma}$ such that: $$[L, \Gamma] = \alpha_{\Gamma} L$$ where $[\cdot, \cdot]$ denotes the commutator by composition of operators so $L \Gamma = L \circ \Gamma$. We denote the set of all such symmetries by $sym(\Delta)$. Corollary 1.If $L$ is self-adjoint or skew-adjoint, then each $\Gamma \in sym(L)$ generates a conservation law. 
Specificaly, for the Dirac Equation and CPT symmetry the following conservation law is derived ( ibid.): No, because discrete symmetries have no infinitesimal form which would give rise to the (characteristic of) conservation law. See also this article for a more detailed discussion. As was said before, this depends on what kind of 'discrete' symmetry you have: if you have a bona fide discrete symmetry, as e.g. $\mathbb{Z}_n$, then the answer is in the negative in the context of Nöther's theorem(s) — even though there are conclusions that you can draw, as Moshe R. explained. However, if you're talking about a discretized symmetry, i.e. a continuous symmetry (global or local) that has been somehow discretized, then you do have an analogue to Nöther's theorem(s) à la Regge calculus. A good talk introducing some of these concepts is Discrete Differential Forms, Gauge Theory, and Regge Calculus (PDF): the bottom line is that you have to find a Finite Difference Scheme that preserves your differential (and/or gauge) structure. There's a big literature on Finite Difference Schemes for Differential Equations (ordinary and partial). Sobering thoughts: Conservation laws are not related to any symmetry, to tell the truth. For a mechanical system with N degrees of freedom there always are N conserved quantities. They are complicated combinations of the dynamical variables. Their existence is provided with existence of the problem solutions. When there is a symmetry, the conserved quantities get just a simpler look. EDIT: I do not know how they teach you but the conservation laws are not related to Noether theorem. The latter just shows how to construct some of conserved quantities from the problem Lagrangian and the problem solutions. Any combination of conserved quantities is also a conserved quantity. So what Noether gives is not unique at all. Maybe, I am by no means an expert, but I read this a few weeks ago. In that paper they consider a 2d lattice and construct an energy analogue. They show it behaves as energy should, and then conclude that for this energy to be conserved space-time would need to be invariant. Electric charge conservation is a "discrete" symmetry. Quarks and anti-quarks have discrete fractional electric charges (±1/3, ±2/3) electrons, positrons and protons have integer charges.
Let's consider $0<\alpha<1/2$ and denote by $W_T^{1-\alpha,\infty}(0,T)$ the space of measurable functions $g:[0,T]\to\Bbb R$ such that $$ ||g||_{1-\alpha,\infty,T}:=\sup_{0<s<t<T}\left[\frac{|g(t)-g(s)|}{(t-s)^{1-\alpha}}+\int_s^t\frac{|g(y)-g(s)|}{(y-s)^{2-\alpha}}\,dy\right]<+\infty\;\;\;. $$ Moreover, we define the right sided Riemann-Liouville integral of order $1-\alpha$ of a function $f\in L^p(0,t)$, with $1\le p\le\infty$, as$$I_{t-}^{1-\alpha}f(x):=\frac{(-1)^{\alpha-1}}{\Gamma(1-\alpha)}\int_x^t(y-x)^{-\alpha}f(y)\,dy\;\;\;$$for a.a. $x\in[0,t]$. My problem is the following: in a paper by Nulart and Rascanu it is stated that, if $g\in W_T^{1-\alpha,\infty}(0,T)$ then its restriction to $[0,t]$ stays in $I_{t-}^{1-\alpha}(L^{\infty}(0,t))$ for all $0<t<T$; so in other words, given such $g$, there exists $h\in L^{\infty}(0,t)$ such that $$ g(x)|_{[0,t]}=I_{t-}^{1-\alpha}h(x)=\frac{(-1)^{\alpha-1}}{\Gamma(1-\alpha)}\int_x^t(y-x)^{-\alpha}h(y)\,dy\;\;\;.$$ It seems to me, that I DON'T HAVE to search explicitly the $h$ depending on the $g$ but rather I should use a theoretical argument, which proves the existence of such $h$; but I don't know how to do it. I'm quite lost, can someone shade a light please? EDIT: Obviously $\Gamma$ is the Euler Gamma function and $(-1)^{\alpha-1}=e^{i\pi(\alpha-1)}$, but these terms are constant, thus this doesn't play any relevant role here. SECOND EDIT: We can state the "symmetric" claim; the underlying duality could help. We denote by $W_0^{\alpha,1}(0,T)$ the space of measurable functions $f:[0,T]\to\Bbb R$ such that $$ ||f||_{\alpha,1}:=\int_0^T\frac{|f(s)|}{s^{\alpha}}\,ds+\int_0^T\int_0^s\frac{|f(s)-f(y)|}{(s-y)^{\alpha+1}}\,dyds<+\infty $$ As before we define the left sided Riemann-Liouville integral of order $\alpha$ of a function $f\in L^p(0,t)$, with $1\le p\le\infty$, as$$I_{0+}^{\alpha}f(x):=\frac{1}{\Gamma(\alpha)}\int_0^x(x-y)^{\alpha-1}f(y)\,dy\;\;\;$$for a.a. $x\in[0,t]$. Then if $g\in W_0^{\alpha,1}(0,T)$ then its restriction to $[0,t]$ stays in $I_{0+}^{\alpha}(L^1(0,t))$ for all $0<t<T$.
I have an ellipse centered at $(h,k)$, with semi-major axis $r_x$, semi-minor axis $r_y$, both aligned with the Cartesian plane. How do I determine if a circle with center $(x,y)$ and radius $r$ is within the area bounded by the ellipse? Idea: doing a translation we can suppose that the circle is centered at $(0,0)$. Parametrize the equation of the ellipse: $$x(t) = h + r_x\cos t,$$ $$y(t) = k + r_y\sin t.$$ And find the maximum and minimum of $t\mapsto x(t)^2 + y(t)^2$. Solve for $x$ or $y$ from $$\frac{(x-h)^2}{r_x^2}+\frac{(y-k)^2}{r_y^2}=1$$ and $$\frac{(x-h)^2}{r^2}+\frac{(y-k)^2}{r^2}=1 . $$ If the point of intersection is complex due to the quantity under the radical sign (discriminant) being negative, then the circle is inside the ellipse. If it is zero, the circle touches the ellipse, and if positive there is an intersection between them at 2 or 4 points.
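A numeric version of the parametrization idea (a sketch with example numbers, not a robust geometric routine): translate so the circle's centre sits at the origin, require the centre to lie inside the ellipse, and compare the minimum squared distance from the centre to the ellipse boundary against $r^2$.

```python
# Sampled check that a circle lies inside an axis-aligned ellipse.
import numpy as np

def circle_in_ellipse(h, k, rx, ry, cx, cy, r, n=100000):
    # the centre must be inside the ellipse to begin with
    if ((cx - h) / rx) ** 2 + ((cy - k) / ry) ** 2 >= 1.0:
        return False
    t = np.linspace(0.0, 2.0 * np.pi, n)
    ex = h + rx * np.cos(t) - cx          # ellipse boundary, shifted so the
    ey = k + ry * np.sin(t) - cy          # circle centre sits at the origin
    return np.min(ex**2 + ey**2) >= r**2  # min distance^2 to boundary vs r^2

print(circle_in_ellipse(0, 0, 5, 3, 1, 0, 1.5))   # True
print(circle_in_ellipse(0, 0, 5, 3, 4, 0, 1.5))   # False
```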
So, for the past few years it's been my goal to create an equation that would give me the position of an object in a gravitational field at time $t$, given it's initial position and velocity. At first the problem was that I didn't know enough to do the math. Now that I can do multivariable calculus I thought that problem would be solved, but I've just ended up running into a new problem. Please don't tell me how to solve it, but if you can give me a hint that would be great. Here's the set up for the problem: A planet of mass $M$ (and radius = 0) is situated at the origin. I know that the magnitude of acceleration due to gravity is $$\frac{GM}{r^2}$$ so an object at $(x,y)$ will have acceleration $$a(x,y)= \frac{GM}{x^2+y^2},$$ or, as a vector, $$\overrightarrow{a}(x,y)= \left\langle \frac{GM}{x^2+y^2}\cos\theta, \frac{GM}{x^2+y^2}\sin\theta\right\rangle$$ $$= \left\langle \frac{GM}{x^2+y^2}\frac{x}{\sqrt{x^2+y^2}}, \frac{GM}{x^2+y^2}\frac{y}{\sqrt{x^2+y^2}}\right\rangle$$ $$= \left\langle \frac{GMx}{(x^2+y^2)^{3/2}}, \frac{GMy}{(x^2+y^2)^{3/2}}\right\rangle$$ So, here's where I'm stuck. I can integrate with respect to distance and get $$ W(x,y) = \left\langle -\frac{GM}{\sqrt{x^2+y^2}}, -\frac{GM}{\sqrt {x^2+y^2}}\right\rangle$$ which I think is a vector who's magnitude is the work done, but that doesn't tell me anything about time. I can integrate with respect to time, but that would give $$f(x,y)= \left\langle \frac{GMx}{(x^2+y^2)^{3/2}}t, \frac{GMy}{(x^2+y^2)^{3/2}}t\right\rangle$$ which... I mean is naïve at best. It doesn't take into account the change in position that happens over time. The only thing that I can think of to do is somehow find parametric equations where $x$ and $y$ are functions of $t$, but that's basically what I'm trying to do anyway. Any ideas? I want to find an equation such that I can put in a location and velocity and the equation will tell me what path the object will take. Is that even possible?
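Not the closed-form $x(t),y(t)$ the post is after, and deliberately not a derivation: just a hedged numerical integrator (step size, time span and units are arbitrary choices) that can later be used to sanity-check any candidate formula for position versus time. Note the minus sign: the acceleration points toward the origin.

```python
# Semi-implicit Euler integration of planar motion in an inverse-square field.
import numpy as np

def trajectory(pos, vel, GM=1.0, dt=1e-3, steps=20000):
    pos, vel = np.array(pos, float), np.array(vel, float)
    out = [pos.copy()]
    for _ in range(steps):
        r3 = np.dot(pos, pos) ** 1.5
        acc = -GM * pos / r3          # a = -GM (x, y) / (x^2 + y^2)^(3/2)
        vel += acc * dt               # update velocity, then position
        pos += vel * dt
        out.append(pos.copy())
    return np.array(out)

orbit = trajectory(pos=[1.0, 0.0], vel=[0.0, 1.0])   # circular orbit for GM = 1
print(orbit[0], orbit[len(orbit) // 2], orbit[-1])
```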
I sought-for the equations of motion of an unrestrained rigid body. The equations of motion are readily available in the literature, but my concern is to derive them by Hamilton's principle. Expressing the position of an infinitesimal particle within the body as: $$ \vec{R} = \vec{R}_0 + \vec{r} $$ where $\vec{R}_0$ and $\vec{r}$ are expressed in terms of body coordinates, origin of which is located at the center of mass. Additionally, $\vec{R}_0$ is the position of the center of mass measured from inertial frame, $\vec{r}$ is the position of the point measured from body frame. The velocity of this point with respect to the inertial frame can be found as: $$ \vec{V} = \dot{\vec{R}}_0 + \vec{\omega} \times (\vec{R}_0+\vec{r}) $$ where $ (\dot{}) $ represents the time derivative with respect to the body frame. To find the acceleration, differentiate with respect to inertial frame once more yields: $$ \vec{a} = \ddot{\vec{R}}_0 + \dot{\vec{\omega}}\times(\vec{R}_0+\vec{r}) + \vec{\omega} \times \dot{\vec{R}}_0 + \vec{\omega} \times \vec{V} $$ One can find the variation of velocity by replacing the time derivatives by variational $\delta$ operator and using $\vec{\delta\theta}$ infinitesimal rotation vector: $$ \delta \vec{V} = \delta \dot{\vec{R}}_0 + \delta \vec{\omega} \times(\vec{R}_0+\vec{r}) + \vec{\omega} \times \delta \vec{R}_0 + \vec{\delta\theta} \times \vec{V} $$ Now, I can use variation of kinetic energy. For the simplicity, I do not consider potential energy and work done by external forces: $$ \int_{t_1}^{t_2} \delta T dt = 0 $$ where $$ \delta T = \int_D \rho \vec{V} \cdot \delta\vec{V} dD $$ By first, calculating the $ \delta \vec{R}_0 $ , I obtain the following: $$ \int_{t_1}^{t_2} \int_D \rho \left[ \left( \dot{\vec{R}}_0 + \vec{\omega} \times (\vec{R}_0+\vec{r}) \right) \cdot \delta \dot{\vec{R}}_0 + \left( \dot{\vec{R}}_0 + \vec{\omega} \times (\vec{R}_0+\vec{r}) \right) \cdot \left( \vec{\omega} \times \delta \vec{R}_0 \right) \right] dD dt $$ The first part of this integral can be integrated by parts, and hence one can obtain the translational equation of motion. The rotational equations of motion can be obtained from the second part: $$ \int_{t_1}^{t_2} \int_D \rho \left[ \left( \dot{\vec{R}}_0 + \vec{\omega} \times (\vec{R}_0+\vec{r}) \right) \cdot \left( \delta \vec{\omega} \times(\vec{R}_0+\vec{r}) \right) + \left( \dot{\vec{R}}_0 + \vec{\omega} \times (\vec{R}_0+\vec{r}) \right) \cdot \left( \vec{\delta\theta} \times \vec{V} \right) \right] dD dt $$ The second part of the integral is zero, hence we end up with the following form: $$ \int_{t_1}^{t_2} \int_D \rho \left[ \left( \vec{R}_0+\vec{r} \right) \times \left( \dot{\vec{R}}_0 + \vec{\omega} \times (\vec{R}_0+\vec{r}) \right) \right] \cdot \delta \vec{\omega} dD dt $$ My question is here. Since angular velocity is non-holonomic, it cannot be expressed as a derivative of a vector, i.e., one cannot obtain an expression for rotation. I need to evaluate this integral by parts to obtain the rotational equations of motion. In other words, how can I find the relation between $ \delta \vec{\omega} $ and $ \vec{\delta \theta} $? Please note that the usage of the following expresion does not yield the correct result: $$ \delta \vec{\omega} = \frac{d (\vec{\delta \theta}) }{dt} $$
Chips Packaging Machine Manufacturer, Factory Supplierchips packing machine animal food packing machine rice husk powder packing PRODUCTS Detail Our company has a long history in China to produce Chips Packaging Machine Manufacturer, Factory Supplierchips packing machine animal food packing machine rice husk powder packing machine high speed vertical packing machine multihead weigher manufacturers, we are a professional and trustworthy manufacturer.In order to provide trustworthy products, we are stricted to control our technological process.Due to our matured craft and professional mechanic, we can providing chips packing machine products in reasonable price.As you know, business is only the first step. We hope that can build a relationship of mutual trust and long-term cooperation with you.Good service is as important as product quality.Thank you for making us acquainted. the load of potato chips in a medium-dimension bag is mentioned to be 10 oz.. The quantity that the packaging computing device places in these bags is believed to have a normal mannequin with an average of oz. and a typical deviation of oz. (circular to 4 decimal areas as necessary.) a) What fraction of all luggage bought are underweight? b) one of the crucial chips are sold in "bargain packs" of 33 baggage. what's the chance that not one of the 33 is underweight? c) what is the chance that the suggest weight of the 33 bags is under the mentioned volume? d) what's the likelihood that the mean weight of a 20-bag case of potato chips is under 10 oz.?ordinary Distribution: A random variable X is said to be following common distribution with meaneq\mu /eq and variance eq\sigma^2 /eq if its distribution is given as eqf(x|\mu,\sigma^2)=\frac1\sqrt2\pi \sigma^2e^-\frac(x-\mu)^22\sigma^2 \qquad , -\infty \leq X \leq \infty \\ \bar x \area \sim \house N(\mu,\frac\sigma^2n). /eq general distribution is given by way of gauss and at the beginning it's used for modeling the error . It is assumed that all the natural phenomenon follows standard distribution.answer and rationalization: it's given that medium size chips is of 10 ounces The volume that packaging computer put observe standard distribution with suggest and standard deviation eq\mu= \\ \sigma= /eq a) what fraction of baggage are underweight under 10 oz So we need to locate zscore and then the usage of NORMSDIST (z) feature of MS Excel we can get the likelihood eqP(X <10)=P(Z<\ /eq b) P(None is underweight)=? None is underweight skill all 33 aren't under 10 oz. Let y denotes variety of underweight y ~Binom(33, P(None is underweight)=P(Y=0)=? eqP(Y=0)=\ /eq c) right here n=33 So we comprehend that eq\bar x \sim N(\mu ,\frac\sigma^2n) /eq for that reason eq\bar x /eq comply with commonplace distribution with mean and average deviation eqP(\bar x <10)=P(Z<\ /eq d) in a similar fashion here n=20 for this reason eq\bar x /eq observe general distribution with suggest and commonplace deviation eqP(\bar x <10)=P(Z<\ /eq
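The problem statement above has lost its numbers (the mean and standard deviation are blank) and its formula markup, so the following SciPy sketch only illustrates how each of parts (a)-(d) would be computed; $\mu = 10.2$ oz and $\sigma = 0.12$ oz are purely hypothetical placeholders.

```python
# Normal-model probabilities for the bag-weight problem, with placeholder values.
from scipy.stats import norm

mu, sigma, n_bags = 10.2, 0.12, 33        # hypothetical mean, sd, pack size

p_under  = norm.cdf(10.0, loc=mu, scale=sigma)                  # a) P(bag < 10 oz)
p_none   = (1.0 - p_under) ** n_bags                            # b) P(none of 33 underweight)
p_mean33 = norm.cdf(10.0, loc=mu, scale=sigma / n_bags**0.5)    # c) P(mean of 33 bags < 10 oz)
p_mean20 = norm.cdf(10.0, loc=mu, scale=sigma / 20**0.5)        # d) P(mean of 20 bags < 10 oz)

print(p_under, p_none, p_mean33, p_mean20)
```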
Let $X_n(\Bbb{Z})$ be the simplicial complex whose vertex set is $\Bbb{Z}$ and such that the vertices $v_0,...,v_k$ span a $k$-simplex if and only if $|v_i-v_j| \le n$ for every $i,j$. Prove that $X_n(\Bbb{Z})$ is $n$-dimensional...

no kidding, my maths is foundations (basic logic but not pedantic), calc 1 which I'm pretty used to working with, analytic geometry and basic linear algebra (by basic I mean matrices and systems of equations only)

Anyway, I would assume if removing $ fixes it, then you probably have an open math expression somewhere before it, meaning you didn't close it with $ earlier. What's the full expression you're trying to get? If it's just the frac, then your code should be fine.

This is my first time chatting here on Math Stack Exchange, so I am not sure if this is frowned upon, but just a quick question: I am trying to prove that a proper subgroup of $\mathbb{Z}^n$ is isomorphic to $\mathbb{Z}^k$, where $k \le n$. So we must have $rank(A) = rank(\mathbb{Z}^k)$, right?

For four proper fractions $a, b, c, d$, X writes $a + b + c > 3(abc)^{1/3}$. Y also added that $a + b + c > 3(abcd)^{1/3}$. Z says that the above inequalities hold only if $a, b, c$ are positive. (a) Both X and Y are right but not Z. (b) Only Z is right. (c) Only X is right. (d) Neither of them is absolutely right.

Yes, @TedShifrin the order of $GL(2,p)$ is $p(p+1)(p-1)^2$. But I found this in a classification of groups of order $p^2qr$. There the order of $H$ should be $qr$ and the group is presented as $G = C_{p}^2 \rtimes H$. I want to know whether we can determine the structure of $H$. Like, can we take $H=C_q \times C_r$ or something like that from the given data? When we say it embeds into $GL(2,p)$, does that mean we can say $H=C_q \times C_r$? Or $H=C_q \rtimes C_r$? Or should we consider all possibilities?

When considering finite groups $G$ of order $|G|=p^2qr$, where $p,q,r$ are distinct primes, let $F$ be the Fitting subgroup of $G$. Then $F$ and $G/F$ are both non-trivial and $G/F$ acts faithfully on $\bar{F}:=F/\phi(F)$ so that no non-trivial normal subgroup of $G/F$ stabilizes a series through $\bar{F}$. Now consider the case $|F|=pr$. In this case $\phi(F)=1$ and $Aut(F)=C_{p-1} \times C_{r-1}$. Thus $G/F$ is abelian and $G/F \cong C_{p} \times C_{q}$. In this case how can I write $G$ using notation/symbols? Is it like $G \cong (C_{p} \times C_{r}) \rtimes (C_{p} \times C_{q})$? First question: then $G = F \rtimes (C_p \times C_q)$, but how do we write $F$? Do we have to think of all the possibilities for $F$ of order $pr$ and write $G = (C_p \times C_r) \rtimes (C_p \times C_q)$ or $G = (C_p \rtimes C_r) \rtimes (C_p \times C_q)$, etc.? As a second case we can consider the case where $C_q$ acts trivially on $C_p$; then how do we write $G$ using notation? There it is also mentioned that we can distinguish between 2 cases. First, suppose that the Sylow $q$-subgroup of $G/F$ acts non-trivially on the Sylow $p$-subgroup of $F$. Then $q \mid (p-1)$ and $G$ splits over $F$. Thus the group has the form $F \rtimes G/F$.
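As an aside, the order formula $|GL(2,p)| = p(p+1)(p-1)^2$ quoted above is easy to confirm by brute force for small primes; the sketch below simply counts invertible $2\times 2$ matrices over $\mathbb{Z}/p\mathbb{Z}$ and is meant only as an illustrative check.

```python
from itertools import product

def gl2_order(p):
    """Count invertible 2x2 matrices over Z/pZ by brute force."""
    return sum(1 for a, b, c, d in product(range(p), repeat=4)
               if (a * d - b * c) % p != 0)

for p in (2, 3, 5, 7):
    assert gl2_order(p) == p * (p + 1) * (p - 1) ** 2
    print(p, gl2_order(p))
```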
A presentation $\langle S\mid R\rangle$ is a Dehn presentation if for some $n\in\Bbb N$ there are words $u_1,\cdots,u_n$ and $v_1,\cdots, v_n$ such that $R=\{u_iv_i^{-1}\}$, $|u_i|>|v_i|$, and every non-empty freely reduced word $w$ in $(S\cup S^{-1})^\ast$ representing the trivial element of the group contains one of the $u_i$ as a subword.

If you have such a presentation there's a trivial algorithm to solve the word problem: take a word $w$, check whether it has some $u_i$ as a subword, in that case replace it by $v_i$, and keep doing so until you hit the trivial word or find no $u_i$ as a subword.

There is good motivation for such a definition here.

So I don't know how to do it precisely for hyperbolic groups, but if $S$ is a surface of genus $g \geq 2$, to get a geodesic representative for a class $[\alpha] \in \pi_1(S)$ where $\alpha$ is an embedded loop, one lifts it to $\widetilde{\alpha}$ in $\Bbb H^2$ by the locally isometric universal covering, and then the deck transformation corresponding to $[\alpha]$ is an isometry of $\Bbb H^2$ which preserves the embedded arc $\widetilde{\alpha}$.

It has to be an isometry fixing a geodesic $\gamma$ whose endpoints on the boundary are the same as the endpoints of $\widetilde{\alpha}$. Consider the homotopy of $\widetilde{\alpha}$ to $\gamma$ by the straight-line homotopy, with the straight lines being hyperbolic geodesics. This is $\pi_1(S)$-equivariant, so it projects to a homotopy of $\alpha$ to the image of $\gamma$ (which is a geodesic in $S$) downstairs, and you have your desired representative.

I don't know how to interpret this coarsely in $\pi_1(S)$.

@anakhro Well, they print in bulk, and on really cheap paper, almost transparent and very thin, and an offset machine is really cheaper per page than a printer, you know, but you should be printing in bulk; it's all economy of scale.

@ParasKhosla Yes, I am Indian, and trying to get into some good master's program in math.

Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants. The first branch of algebraic graph theory involves the study of graphs in connection with linear algebra. Especially, it studies the spectrum of the adjacency matrix, or the Lap...

I can probably guess that they are using symmetries and permutation groups on graphs in this course. For example, orbits and studying the automorphism groups of graphs.

@anakhro I have heard really good things about Palka. Also, if you do not mind a little sacrifice of rigor (e.g. counterclockwise orientation based on your intuition rather than on winding numbers, etc.), Howie's Complex analysis is good. It is teeming with typos here and there, but you will be fine, I think. Also, this book contains all the solutions in an appendix!
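The greedy rewriting procedure described above (scan for some $u_i$ occurring as a subword, replace it by $v_i$, repeat) is easy to sketch in code. The snippet below is an illustrative implementation under the assumed convention that a lowercase letter is a generator and the corresponding uppercase letter is its inverse, with the relation list given as pairs `(u, v)` satisfying `|u| > |v|`.

```python
def free_reduce(word):
    """Cancel adjacent inverse pairs, e.g. 'a' followed by 'A' (its inverse)."""
    out = []
    for x in word:
        if out and out[-1] == x.swapcase():
            out.pop()
        else:
            out.append(x)
    return "".join(out)

def dehn_algorithm(word, relations):
    """Greedy Dehn rewriting.

    relations is a list of pairs (u, v) with |u| > |v|, meaning u = v in the
    group.  Returns the final freely reduced word; for a genuine Dehn
    presentation the result is '' exactly when the input represents the identity.
    """
    w = free_reduce(word)
    changed = True
    while changed:
        changed = False
        for u, v in relations:
            i = w.find(u)
            if i != -1:
                w = free_reduce(w[:i] + v + w[i + len(u):])
                changed = True
                break
    return w
```

Each replacement strictly shortens the word, so the loop terminates; producing the relation list itself, for instance from the cyclic permutations of a hyperbolic surface group relator and their inverses, is the nontrivial part.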
Got a simple question: I have to find the kernel of the linear transformation $F(P)=xP''(x) + (x+1)P'''(x)$, where $F: \mathbb{R}_3[x] \to \mathbb{R}_3[x]$. I think it would be just $\ker (F) = \{ ax+b : a,b \in \mathbb{R} \}$, since only polynomials of degree at most 1 give the zero polynomial in this case.

@chandx you're looking for all the $G = P''$ such that $xG + (x+1)G' = 0$; if $G \neq 0$ you can solve the DE to get $G'/G = -x/(x+1) = -1 + 1/(x+1) \implies \ln G = -x + \ln(1+x) + C \implies G = C(1+x)e^{-x}$, which is obviously not a nonzero polynomial, so $G = 0$ and thus $P = ax + b$.

Could you suppose that $\operatorname{deg} P \geq 2$ and show that you wouldn't have nonzero polynomials? Sure.
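To double-check the conclusion $\ker F = \{ax + b\}$, here is a small SymPy sketch: it expands $F$ on a general cubic and requires every coefficient of the result to vanish.

```python
import sympy as sp

x, a, b, c, d = sp.symbols('x a b c d')

# General element of R_3[x], i.e. a real polynomial of degree at most 3.
P = a*x**3 + b*x**2 + c*x + d

# F(P) = x P''(x) + (x + 1) P'''(x); for the cubic above this expands to
# 6*a*x**2 + (6*a + 2*b)*x + 6*a.
FP = sp.expand(x * sp.diff(P, x, 2) + (x + 1) * sp.diff(P, x, 3))

# F(P) is the zero polynomial iff every coefficient in x vanishes.
sol = sp.solve(sp.Poly(FP, x).all_coeffs(), [a, b, c, d], dict=True)
print(sol)  # expected: [{a: 0, b: 0}], with c and d free, i.e. ker F = {c*x + d}
```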
I'm trying to understand the BRST complex in its Lagrangian incarnation, i.e., in the form closest to the original Faddeev-Popov formulation. It seems that the most important part of that construction (the proof of vanishing of the higher cohomology groups) is very hard to find in the literature; at least I was not able to find it. Let me formulate a couple of questions on BRST, but in the form of exercises on Lie algebra cohomology.

Let $X$ be a smooth affine variety, and let $g$ be a (reductive?) Lie algebra acting on $X$. I think we assume $g$ to be at least unimodular, otherwise the BRST construction won't work, and we also assume that the map $g \to T_X$ is injective. In physics language this is a closed and irreducible action of the Lie algebra of a gauge group on the space of fields $X$. The structure sheaf $\mathcal{O}_X$ is a module over $g$, and I can form the Chevalley-Eilenberg complex with coefficients in this module:
$$C=\wedge g^* \otimes \mathcal{O}_X.$$
The ultimate goal of the BRST construction is to provide a "free model" of the algebra of invariants $\mathcal{O}_X^g$. It is not entirely clear what a "free model" is, but I think the BRST construction is just Tate's procedure of killing cycles applied to the Chevalley-Eilenberg complex above (Tate's construction works for any dg algebra, and $C$ is a dg algebra).

My first question is: what exactly is the cohomology of the complex $C$? In other words, before killing cohomology I'd like to understand what exactly has to be killed. To me this looks like a classical question on Lie algebra cohomology, and perhaps it was discussed in the literature 60 years ago. It is not necessary to calculate these cohomology groups and then follow Tate's approach to construct the complete BRST complex (complete means I have added anti-ghosts and Lagrange multipliers to $C$ and modified the differential), but even if I start with the BRST complex
$$C_{BRST}=(\mathcal{O}_X \otimes \wedge (g \oplus g^*) \otimes S(g),\ d_{BRST}=d_{CE}+d_1),$$
where could I find a proof that all higher cohomology vanishes?

This post imported from StackExchange MathOverflow at 2014-08-24 09:17 (UCT), posted by SE-user Sasha Pavlov
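For reference, the differential on $C=\wedge g^*\otimes\mathcal{O}_X$ is the standard Chevalley-Eilenberg differential for Lie algebra cohomology with coefficients in the $g$-module $\mathcal{O}_X$, where $\xi\cdot f$ denotes the action of $\xi\in g$ on $f\in\mathcal{O}_X$ through the map $g\to T_X$: for $\omega\in\wedge^k g^*\otimes\mathcal{O}_X$ and $\xi_0,\dots,\xi_k\in g$,
$$ (d_{CE}\,\omega)(\xi_0,\dots,\xi_k)=\sum_{i=0}^{k}(-1)^{i}\,\xi_i\cdot\omega(\xi_0,\dots,\hat{\xi}_i,\dots,\xi_k)+\sum_{0\le i<j\le k}(-1)^{i+j}\,\omega([\xi_i,\xi_j],\xi_0,\dots,\hat{\xi}_i,\dots,\hat{\xi}_j,\dots,\xi_k). $$
In particular $H^0(C)=\mathcal{O}_X^g$ is the algebra of invariants, so the first question above concerns precisely the higher groups $H^{k}(C)$, $k>0$, that Tate's procedure is meant to kill.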
ISSN: 1078-0947, eISSN: 1553-5231. Discrete & Continuous Dynamical Systems - A, January 2014, Volume 34, Issue 1. Special issue on Infinite Dimensional Dynamics and Applications.

Abstract: The theory of infinite dimensional and stochastic dynamical systems is a rapidly expanding and vibrant field of mathematics. In the recent three decades it has been highlighted as a core body of knowledge and an advancing thrust in the qualitative study of complex systems and processes described by evolutionary partial differential equations in many different settings, stochastic differential equations, functional differential equations and lattice differential equations. The central research topics include the invariant and attracting sets, stability and bifurcation of patterns and waves, asymptotic theory of dissipative systems and reduction of dimensions, and more and more problems of nonlocal systems, ill-posed systems, multicomponent and network dynamics, random dynamics and chaotic dynamics.

Abstract: We establish the existence of a global invariant manifold of bubble states for the mass-conserving Allen-Cahn equation in two space dimensions and give the dynamics for the center of the bubble.

Abstract: In this paper statistical solutions of the 3D Navier-Stokes-$\alpha$ model with periodic boundary conditions are considered. It is proved that under certain natural conditions statistical solutions of the 3D Navier-Stokes-$\alpha$ model converge to statistical solutions of the exact 3D Navier-Stokes equations as $\alpha$ goes to zero. The statistical solutions considered here arise as families of time-projections of measures on suitable trajectory spaces.

Abstract: In this paper we first prove a rather general theorem about the existence of solutions for an abstract differential equation in a Banach space by assuming that the nonlinear term is in some sense weakly continuous. We then apply this result to a lattice dynamical system with delay, proving also the existence of a global compact attractor for such a system.

Abstract: This article is devoted to the existence and uniqueness of pathwise solutions to stochastic evolution equations, driven by a Hölder continuous function with Hölder exponent in $(1/2,1)$, and with nontrivial multiplicative noise. As a particular situation, we shall consider the case where the equation is driven by a fractional Brownian motion $B^H$ with Hurst parameter $H>1/2$. In contrast to the article by Maslowski and Nualart [17], we present here an existence and uniqueness result in the space of Hölder continuous functions with values in a Hilbert space $V$. If the initial condition is in the latter space this forces us to consider solutions in a different space, which is a generalization of the Hölder continuous functions. That space of functions is appropriate to introduce a non-autonomous dynamical system generated by the corresponding solution to the equation. In fact, when choosing $B^H$ as the driving process, we shall prove that the dynamical system will turn out to be a random dynamical system, defined over the ergodic metric dynamical system generated by the infinite dimensional fractional Brownian motion.

Abstract: We introduce and analyze a prototype model for chemotactic effects in biofilm formation. The model is a system of quasilinear parabolic equations into which two thresholds are built.
One occurs at the zero cell density level, the second one is related to the maximal density which the cells cannot exceed. Accordingly, both diffusion and taxis terms have degenerate or singular parts. This model extends a previously introduced degenerate biofilm model by combining it with a chemotaxis equation. We give conditions for existence and uniqueness of weak solutions and illustrate the model behavior in numerical simulations.

Abstract: We consider the compressible Navier-Stokes system coupled with the Maxwell equations governing the time evolution of the magnetic field. We introduce a relative entropy functional along with the related concept of dissipative solution. As an application of the theory, we show that for small values of the Mach number and large Reynolds number, the global in time weak (dissipative) solutions converge to the ideal MHD system describing the motion of an incompressible, inviscid, and electrically conducting fluid. The proof is based on frequency localized Strichartz estimates for the Neumann Laplacian on unbounded domains.

Abstract: Here we consider the nonlocal Cahn-Hilliard equation with constant mobility in a bounded domain. We prove that the associated dynamical system has an exponential attractor, provided that the potential is regular. In order to do that, a crucial step is showing the eventual boundedness of the order parameter uniformly with respect to the initial datum. This is obtained through an Alikakos-Moser type argument. We establish a similar result for the viscous nonlocal Cahn-Hilliard equation with singular (e.g., logarithmic) potential. In this case the validity of the so-called separation property is crucial. We also discuss the convergence of a solution to a single stationary state. The separation property in the nonviscous case is known to hold when the mobility degenerates at the pure phases in a proper way and the potential is of logarithmic type. Thus, the existence of an exponential attractor can be proven in this case as well.

Abstract: In this paper we strengthen some results on the existence and properties of pullback attractors for a non-autonomous 2D Navier-Stokes model with infinite delay. Actually we prove that under suitable assumptions, and thanks to regularity results, the attraction also happens in the $H^1$ norm for arbitrarily large finite intervals of time. Indeed, from comparison results of attractors we establish that all these families of attractors are in fact the same object. The tempered character of these families in $H^1$ is also analyzed.

Abstract: This paper treats the existence of pullback attractors for the non-autonomous 2D Navier-Stokes equations in two different spaces, namely $L^2$ and $H^1$. The non-autonomous forcing term is taken in $L^2_{\rm loc}(\mathbb R;H^{-1})$ and $L^2_{\rm loc}(\mathbb R;L^2)$ respectively for these two results: even in the autonomous case it is not straightforward to show the required asymptotic compactness of the flow with this regularity of the forcing term. Here we prove the asymptotic compactness of the corresponding processes by verifying the flattening property -- also known as "Condition (C)". We also show, using the semigroup method, that a little additional regularity -- $f\in L^p_{\rm loc}(\mathbb R;H^{-1})$ or $f\in L^p_{\rm loc}(\mathbb R;L^2)$ for some $p>2$ -- is enough to ensure the existence of a compact pullback absorbing family (not only asymptotic compactness).
Even in the autonomous case the existence of a compact absorbing set for this model is new when $f$ has such limited regularity.

Abstract: This paper studies the asymptotic behavior of solutions for the non-autonomous lattice Selkov model. We prove the existence of a uniform attractor for the generated family of processes and obtain an upper bound of the Kolmogorov $\varepsilon$-entropy for it. Also we establish the upper semicontinuity of the uniform attractor when the infinite lattice systems are approximated by finite lattice systems.

Abstract: Frequency domain conditions for the existence of finite-dimensional projectors and determining observations for the set of amenable solutions of semi-dynamical systems in Hilbert spaces are derived. Evolutionary variational equations are considered as control systems in a rigged Hilbert space structure. As an example we investigate a coupled system of Maxwell's equations and the heat equation in one-space dimension. We show the controllability of the linear part and the frequency domain conditions for this example.

Abstract: This paper is concerned with the asymptotic behavior of solutions of the damped non-autonomous stochastic wave equations driven by multiplicative white noise. We prove the existence of pullback random attractors in $H^1(\mathbb{R}^n) \times L^2(\mathbb{R}^n)$ when the intensity of noise is sufficiently small. We demonstrate that these random attractors are periodic in time if so are the deterministic non-autonomous external terms. We also establish the upper semicontinuity of random attractors when the intensity of noise approaches zero. In addition, we prove the measurability of random attractors even if the underlying probability space is not complete.

Abstract: For a typical stochastic reversible reaction-diffusion system with multiplicative white noise, the trimolecular autocatalytic Gray-Scott system on a three-dimensional bounded domain with random noise perturbation proportional to the state of the system, the existence of a random attractor and its robustness with respect to the reverse reaction rates are proved through sharp and uniform estimates showing the pullback uniform dissipation and the pullback asymptotic compactness.

RedStone

Based on the paper "RedStone: Curating General, Code, Math, and QA Data for Large Language Models" and the official GitHub repository, I have replicated the processing pipeline for the RedStone-Math subset of RedStone.

I followed the processing steps outlined in the official repository with minimal modifications.

The final processed dataset is comparable in scale to what is reported in the paper, but I have not yet used this data for training, so its quality remains unverified.
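
A minimal loading sketch is below; the repository ID `zjsd/RedStone-Math` and the column name `text` are placeholders used for illustration, so substitute the actual values shown on this page.

```python
from datasets import load_dataset

# Placeholder repository ID and column name; adjust to the actual values for this dataset.
ds = load_dataset("zjsd/RedStone-Math", split="train", streaming=True)

# Peek at a few documents without downloading the full dataset.
for i, row in enumerate(ds):
    print(row["text"][:200].replace("\n", " "), "...")
    if i == 2:
        break
```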

This release is distributed under RedStone's license. If any data within it infringes on your copyright, please contact me for removal.
